13:21:02 Triggered by Gerrit: https://git.opendaylight.org/gerrit/c/transportpce/+/120829
13:21:02 Running as SYSTEM
13:21:02 [EnvInject] - Loading node environment variables.
13:21:02 Building remotely on prd-ubuntu2204-docker-4c-16g-23837 (ubuntu2204-docker-4c-16g) in workspace /w/workspace/transportpce-tox-verify-transportpce-master
13:21:02 [ssh-agent] Looking for ssh-agent implementation...
13:21:03 [ssh-agent] Exec ssh-agent (binary ssh-agent on a remote machine)
13:21:03 $ ssh-agent
13:21:03 SSH_AUTH_SOCK=/tmp/ssh-XXXXXXGdWTCn/agent.1580
13:21:03 SSH_AGENT_PID=1582
13:21:03 [ssh-agent] Started.
13:21:03 Running ssh-add (command line suppressed)
13:21:03 Identity added: /w/workspace/transportpce-tox-verify-transportpce-master@tmp/private_key_9487107058746623191.key (/w/workspace/transportpce-tox-verify-transportpce-master@tmp/private_key_9487107058746623191.key)
13:21:03 [ssh-agent] Using credentials jenkins (jenkins-ssh)
13:21:03 The recommended git tool is: NONE
13:21:04 using credential jenkins-ssh
13:21:04 Wiping out workspace first.
13:21:04 Cloning the remote Git repository
13:21:04 Cloning repository git://devvexx.opendaylight.org/mirror/transportpce
13:21:04 > git init /w/workspace/transportpce-tox-verify-transportpce-master # timeout=10
13:21:05 Fetching upstream changes from git://devvexx.opendaylight.org/mirror/transportpce
13:21:05 > git --version # timeout=10
13:21:05 > git --version # 'git version 2.34.1'
13:21:05 using GIT_SSH to set credentials jenkins-ssh
13:21:05 Verifying host key using known hosts file
13:21:05 You're using 'Known hosts file' strategy to verify ssh host keys, but your known_hosts file does not exist, please go to 'Manage Jenkins' -> 'Security' -> 'Git Host Key Verification Configuration' and configure host key verification.
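The checkout below fetches the patchset under test via a Gerrit change ref, which follows the convention refs/changes/&lt;NN&gt;/&lt;change&gt;/&lt;patchset&gt;, where NN is the last two digits of the change number (zero-padded). A minimal sketch of that convention; the helper name is hypothetical:

```python
def gerrit_change_ref(change: int, patchset: int) -> str:
    """Build the ref Gerrit exposes for a given change/patchset.

    Gerrit shards change refs by the last two digits of the change
    number, so change 120829 patchset 4 lives under .../29/120829/4.
    """
    return f"refs/changes/{change % 100:02d}/{change}/{patchset}"
```

This explains why the fetch in the log targets refs/changes/29/120829/4 for change 120829, patchset 4.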
13:21:05 > git fetch --tags --force --progress -- git://devvexx.opendaylight.org/mirror/transportpce +refs/heads/*:refs/remotes/origin/* # timeout=10
13:21:08 > git config remote.origin.url git://devvexx.opendaylight.org/mirror/transportpce # timeout=10
13:21:08 > git config --add remote.origin.fetch +refs/heads/*:refs/remotes/origin/* # timeout=10
13:21:09 > git config remote.origin.url git://devvexx.opendaylight.org/mirror/transportpce # timeout=10
13:21:09 Fetching upstream changes from git://devvexx.opendaylight.org/mirror/transportpce
13:21:09 using GIT_SSH to set credentials jenkins-ssh
13:21:09 Verifying host key using known hosts file
13:21:09 You're using 'Known hosts file' strategy to verify ssh host keys, but your known_hosts file does not exist, please go to 'Manage Jenkins' -> 'Security' -> 'Git Host Key Verification Configuration' and configure host key verification.
13:21:09 > git fetch --tags --force --progress -- git://devvexx.opendaylight.org/mirror/transportpce refs/changes/29/120829/4 # timeout=10
13:21:09 > git rev-parse 06062df60fc4714174367099c18c1d5b73ff15fc^{commit} # timeout=10
13:21:09 JENKINS-19022: warning: possible memory leak due to Git plugin usage; see: https://plugins.jenkins.io/git/#remove-git-plugin-buildsbybranch-builddata-script
13:21:09 Checking out Revision 06062df60fc4714174367099c18c1d5b73ff15fc (refs/changes/29/120829/4)
13:21:09 > git config core.sparsecheckout # timeout=10
13:21:09 > git checkout -f 06062df60fc4714174367099c18c1d5b73ff15fc # timeout=10
13:21:09 Commit message: "Support for openconfig 2.0"
13:21:09 > git rev-parse FETCH_HEAD^{commit} # timeout=10
13:21:09 > git rev-list --no-walk 509d781065379100eb9da8d0414bc0043a05ebc0 # timeout=10
13:21:09 > git remote # timeout=10
13:21:09 > git submodule init # timeout=10
13:21:09 > git submodule sync # timeout=10
13:21:09 > git config --get remote.origin.url # timeout=10
13:21:09 > git submodule init # timeout=10
13:21:09 > git config -f .gitmodules --get-regexp ^submodule\.(.+)\.url # timeout=10
13:21:09 ERROR: No submodules found.
13:21:13 provisioning config files...
13:21:13 copy managed file [npmrc] to file:/home/jenkins/.npmrc
13:21:13 copy managed file [pipconf] to file:/home/jenkins/.config/pip/pip.conf
13:21:13 [transportpce-tox-verify-transportpce-master] $ /bin/bash /tmp/jenkins13580988455138970272.sh
13:21:13 ---> python-tools-install.sh
13:21:13 Setup pyenv:
13:21:13 * system (set by /opt/pyenv/version)
13:21:13 * 3.8.20 (set by /opt/pyenv/version)
13:21:13 * 3.9.20 (set by /opt/pyenv/version)
13:21:13 3.10.15
13:21:13 3.11.10
13:21:17 lf-activate-venv(): INFO: Creating python3 venv at /tmp/venv-laWX
13:21:17 lf-activate-venv(): INFO: Save venv in file: /tmp/.os_lf_venv
13:21:17 lf-activate-venv(): INFO: Installing base packages (pip, setuptools, virtualenv)
13:21:17 lf-activate-venv(): INFO: Attempting to install with network-safe options...
13:21:22 lf-activate-venv(): INFO: Base packages installed successfully
13:21:22 lf-activate-venv(): INFO: Installing additional packages: lftools
13:21:48 lf-activate-venv(): INFO: Adding /tmp/venv-laWX/bin to PATH
13:21:48 Generating Requirements File
13:22:07 ERROR: pip's dependency resolver does not currently take into account all the packages that are installed. This behaviour is the source of the following dependency conflicts.
13:22:07 httplib2 0.30.2 requires pyparsing<4,>=3.0.4, but you have pyparsing 2.4.7 which is incompatible.
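The pip warning above means the installed pyparsing 2.4.7 falls outside the range httplib2 0.30.2 declares (>=3.0.4, <4). A minimal sketch of that range check, using plain numeric dotted versions rather than full PEP 440 ordering; the function name is hypothetical:

```python
def satisfies(version: str, minimum: str, upper: str) -> bool:
    """Check minimum <= version < upper for plain X.Y.Z version strings.

    A simplification of PEP 440 comparison: components are compared
    numerically as tuples, which is enough for the versions in this log.
    """
    def key(v: str):
        return tuple(int(part) for part in v.split("."))
    return key(minimum) <= key(version) < key(upper)
```

With these inputs, satisfies("2.4.7", "3.0.4", "4") is False, which is exactly the conflict pip reports; the build continues because pip only warns about pre-existing conflicts.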
13:22:07 Python 3.11.10
13:22:07 pip 26.0.1 from /tmp/venv-laWX/lib/python3.11/site-packages/pip (python 3.11)
13:22:08 appdirs==1.4.4
13:22:08 argcomplete==3.6.3
13:22:08 aspy.yaml==1.3.0
13:22:08 attrs==25.4.0
13:22:08 autopage==0.6.0
13:22:08 beautifulsoup4==4.14.3
13:22:08 boto3==1.42.58
13:22:08 botocore==1.42.58
13:22:08 bs4==0.0.2
13:22:08 certifi==2026.2.25
13:22:08 cffi==2.0.0
13:22:08 cfgv==3.5.0
13:22:08 chardet==6.0.0.post1
13:22:08 charset-normalizer==3.4.4
13:22:08 click==8.3.1
13:22:08 cliff==4.13.2
13:22:08 cmd2==3.2.2
13:22:08 cryptography==3.3.2
13:22:08 debtcollector==3.0.0
13:22:08 decorator==5.2.1
13:22:08 defusedxml==0.7.1
13:22:08 Deprecated==1.3.1
13:22:08 distlib==0.4.0
13:22:08 dnspython==2.8.0
13:22:08 docker==7.1.0
13:22:08 dogpile.cache==1.5.0
13:22:08 durationpy==0.10
13:22:08 email-validator==2.3.0
13:22:08 filelock==3.24.3
13:22:08 future==1.0.0
13:22:08 gitdb==4.0.12
13:22:08 GitPython==3.1.46
13:22:08 httplib2==0.30.2
13:22:08 identify==2.6.16
13:22:08 idna==3.11
13:22:08 importlib-resources==1.5.0
13:22:08 iso8601==2.1.0
13:22:08 Jinja2==3.1.6
13:22:08 jmespath==1.1.0
13:22:08 jsonpatch==1.33
13:22:08 jsonpointer==3.0.0
13:22:08 jsonschema==4.26.0
13:22:08 jsonschema-specifications==2025.9.1
13:22:08 keystoneauth1==5.13.1
13:22:08 kubernetes==35.0.0
13:22:08 lftools==0.37.21
13:22:08 lxml==6.0.2
13:22:08 markdown-it-py==4.0.0
13:22:08 MarkupSafe==3.0.3
13:22:08 mdurl==0.1.2
13:22:08 msgpack==1.1.2
13:22:08 multi_key_dict==2.0.3
13:22:08 munch==4.0.0
13:22:08 netaddr==1.3.0
13:22:08 niet==1.4.2
13:22:08 nodeenv==1.10.0
13:22:08 oauth2client==4.1.3
13:22:08 oauthlib==3.3.1
13:22:08 openstacksdk==4.10.0
13:22:08 os-service-types==1.8.2
13:22:08 osc-lib==4.4.0
13:22:08 oslo.config==10.3.0
13:22:08 oslo.context==6.3.0
13:22:08 oslo.i18n==6.7.2
13:22:08 oslo.log==8.1.0
13:22:08 oslo.serialization==5.9.1
13:22:08 oslo.utils==10.0.0
13:22:08 packaging==26.0
13:22:08 pbr==7.0.3
13:22:08 platformdirs==4.9.2
13:22:08 prettytable==3.17.0
13:22:08 psutil==7.2.2
13:22:08 pyasn1==0.6.2
13:22:08 pyasn1_modules==0.4.2
13:22:08 pycparser==3.0
13:22:08 pygerrit2==2.0.15
13:22:08 PyGithub==2.8.1
13:22:08 Pygments==2.19.2
13:22:08 PyJWT==2.11.0
13:22:08 PyNaCl==1.6.2
13:22:08 pyparsing==2.4.7
13:22:08 pyperclip==1.11.0
13:22:08 pyrsistent==0.20.0
13:22:08 python-cinderclient==9.8.0
13:22:08 python-dateutil==2.9.0.post0
13:22:08 python-discovery==1.1.0
13:22:08 python-heatclient==5.1.0
13:22:08 python-jenkins==1.8.3
13:22:08 python-keystoneclient==5.7.0
13:22:08 python-magnumclient==4.9.0
13:22:08 python-openstackclient==9.0.0
13:22:08 python-swiftclient==4.10.0
13:22:08 PyYAML==6.0.3
13:22:08 referencing==0.37.0
13:22:08 requests==2.32.5
13:22:08 requests-oauthlib==2.0.0
13:22:08 requestsexceptions==1.4.0
13:22:08 rfc3986==2.0.0
13:22:08 rich==14.3.3
13:22:08 rich-argparse==1.7.2
13:22:08 rpds-py==0.30.0
13:22:08 rsa==4.9.1
13:22:08 ruamel.yaml==0.19.1
13:22:08 ruamel.yaml.clib==0.2.15
13:22:08 s3transfer==0.16.0
13:22:08 simplejson==3.20.2
13:22:08 six==1.17.0
13:22:08 smmap==5.0.2
13:22:08 soupsieve==2.8.3
13:22:08 stevedore==5.7.0
13:22:08 tabulate==0.9.0
13:22:08 toml==0.10.2
13:22:08 tomlkit==0.14.0
13:22:08 tqdm==4.67.3
13:22:08 typing_extensions==4.15.0
13:22:08 urllib3==1.26.20
13:22:08 virtualenv==21.1.0
13:22:08 wcwidth==0.6.0
13:22:08 websocket-client==1.9.0
13:22:08 wrapt==2.1.1
13:22:08 xdg==6.0.0
13:22:08 xmltodict==1.0.4
13:22:08 yq==3.4.3
13:22:08 [EnvInject] - Injecting environment variables from a build step.
13:22:08 [EnvInject] - Injecting as environment variables the properties content
13:22:08 PYTHON=python3
13:22:08
13:22:08 [EnvInject] - Variables injected successfully.
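The lf-activate-venv helper traced throughout this log records the created venv path in a file (/tmp/.os_lf_venv or /tmp/.toxenv) so later build steps can reuse the same environment instead of rebuilding it. A minimal sketch of that create-once, record, reuse pattern; the function name is hypothetical and the real helper additionally installs packages and edits PATH:

```python
import os
import tempfile

def resolve_venv(venv_file: str) -> str:
    """Return the venv directory recorded in venv_file, creating and
    recording a fresh one (like `mktemp -d /tmp/venv-XXXX`) only when
    no record exists yet."""
    if os.path.isfile(venv_file):
        with open(venv_file) as f:
            return f.read().strip()          # reuse the recorded venv
    path = tempfile.mkdtemp(prefix="venv-")  # fresh directory, first run only
    with open(venv_file, "w") as f:
        f.write(path)                        # record it for later steps
    return path
```

This is why the later tox-run.sh step logs "Reuse venv:/tmp/venv-U8pL from file:/tmp/.toxenv" rather than creating a second environment.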
13:22:08 [transportpce-tox-verify-transportpce-master] $ /bin/bash -l /tmp/jenkins9521891575017309866.sh
13:22:08 ---> tox-install.sh
13:22:08 + source /home/jenkins/lf-env.sh
13:22:08 + lf-activate-venv --venv-file /tmp/.toxenv tox virtualenv urllib3~=1.26.15
13:22:08 ++ mktemp -d /tmp/venv-XXXX
13:22:08 + lf_venv=/tmp/venv-U8pL
13:22:08 + local venv_file=/tmp/.os_lf_venv
13:22:08 + local python=python3
13:22:08 + local options
13:22:08 + local set_path=true
13:22:08 + local install_args=
13:22:08 ++ getopt -o np:v: -l no-path,system-site-packages,python:,venv-file: -n lf-activate-venv -- --venv-file /tmp/.toxenv tox virtualenv urllib3~=1.26.15
13:22:08 + options=' --venv-file '\''/tmp/.toxenv'\'' -- '\''tox'\'' '\''virtualenv'\'' '\''urllib3~=1.26.15'\'''
13:22:08 + eval set -- ' --venv-file '\''/tmp/.toxenv'\'' -- '\''tox'\'' '\''virtualenv'\'' '\''urllib3~=1.26.15'\'''
13:22:08 ++ set -- --venv-file /tmp/.toxenv -- tox virtualenv urllib3~=1.26.15
13:22:08 + true
13:22:08 + case $1 in
13:22:08 + venv_file=/tmp/.toxenv
13:22:08 + shift 2
13:22:08 + true
13:22:08 + case $1 in
13:22:08 + shift
13:22:08 + break
13:22:08 + case $python in
13:22:08 + local pkg_list=
13:22:08 + [[ -d /opt/pyenv ]]
13:22:08 + echo 'Setup pyenv:'
13:22:08 Setup pyenv:
13:22:08 + export PYENV_ROOT=/opt/pyenv
13:22:08 + PYENV_ROOT=/opt/pyenv
13:22:08 + export PATH=/opt/pyenv/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/opt/puppetlabs/bin
13:22:08 + PATH=/opt/pyenv/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/opt/puppetlabs/bin
13:22:08 + pyenv versions
13:22:08 system
13:22:08 3.8.20
13:22:08 3.9.20
13:22:08 3.10.15
13:22:08 * 3.11.10 (set by /w/workspace/transportpce-tox-verify-transportpce-master/.python-version)
13:22:08 + command -v pyenv
13:22:08 ++ pyenv init - --no-rehash
13:22:08 + eval 'PATH="$(bash --norc -ec '\''IFS=:; paths=($PATH);
13:22:08 for i in ${!paths[@]}; do
13:22:08 if [[ ${paths[i]} == "'\'''\''/opt/pyenv/shims'\'''\''" ]]; then unset '\''\'\'''\''paths[i]'\''\'\'''\'';
13:22:08 fi; done;
13:22:08 echo "${paths[*]}"'\'')"
13:22:08 export PATH="/opt/pyenv/shims:${PATH}"
13:22:08 export PYENV_SHELL=bash
13:22:08 source '\''/opt/pyenv/libexec/../completions/pyenv.bash'\''
13:22:08 pyenv() {
13:22:08 local command
13:22:08 command="${1:-}"
13:22:08 if [ "$#" -gt 0 ]; then
13:22:08 shift
13:22:08 fi
13:22:08
13:22:08 case "$command" in
13:22:08 rehash|shell)
13:22:08 eval "$(pyenv "sh-$command" "$@")"
13:22:08 ;;
13:22:08 *)
13:22:08 command pyenv "$command" "$@"
13:22:08 ;;
13:22:08 esac
13:22:08 }'
13:22:08 +++ bash --norc -ec 'IFS=:; paths=($PATH);
13:22:08 for i in ${!paths[@]}; do
13:22:08 if [[ ${paths[i]} == "/opt/pyenv/shims" ]]; then unset '\''paths[i]'\'';
13:22:08 fi; done;
13:22:08 echo "${paths[*]}"'
13:22:08 ++ PATH=/opt/pyenv/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/opt/puppetlabs/bin
13:22:08 ++ export PATH=/opt/pyenv/shims:/opt/pyenv/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/opt/puppetlabs/bin
13:22:08 ++ PATH=/opt/pyenv/shims:/opt/pyenv/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/opt/puppetlabs/bin
13:22:08 ++ export PYENV_SHELL=bash
13:22:08 ++ PYENV_SHELL=bash
13:22:08 ++ source /opt/pyenv/libexec/../completions/pyenv.bash
13:22:08 +++ complete -F _pyenv pyenv
13:22:08 ++ lf-pyver python3
13:22:08 ++ local py_version_xy=python3
13:22:08 ++ local py_version_xyz=
13:22:08 ++ pyenv versions
13:22:08 ++ local command
13:22:08 ++ command=versions
13:22:08 ++ '[' 1 -gt 0 ']'
13:22:08 ++ shift
13:22:08 ++ case "$command" in
13:22:08 ++ command pyenv versions
13:22:08 ++ sed 's/^[ *]* //'
13:22:08 ++ grep -E '^[0-9.]*[0-9]$'
13:22:08 ++ awk '{ print $1 }'
13:22:08 ++ [[ ! -s /tmp/.pyenv_versions ]]
13:22:08 +++ grep '^3' /tmp/.pyenv_versions
13:22:08 +++ sort -V
13:22:08 +++ tail -n 1
13:22:08 ++ py_version_xyz=3.11.10
13:22:08 ++ [[ -z 3.11.10 ]]
13:22:08 ++ echo 3.11.10
13:22:08 ++ return 0
13:22:08 + pyenv local 3.11.10
13:22:08 + local command
13:22:08 + command=local
13:22:08 + '[' 2 -gt 0 ']'
13:22:08 + shift
13:22:08 + case "$command" in
13:22:08 + command pyenv local 3.11.10
13:22:08 + for arg in "$@"
13:22:08 + case $arg in
13:22:08 + pkg_list+='tox '
13:22:08 + for arg in "$@"
13:22:08 + case $arg in
13:22:08 + pkg_list+='virtualenv '
13:22:08 + for arg in "$@"
13:22:08 + case $arg in
13:22:08 + pkg_list+='urllib3~=1.26.15 '
13:22:08 + [[ -f /tmp/.toxenv ]]
13:22:08 + [[ ! -f /tmp/.toxenv ]]
13:22:08 + [[ -n '' ]]
13:22:08 + python3 -m venv /tmp/venv-U8pL
13:22:12 + echo 'lf-activate-venv(): INFO: Creating python3 venv at /tmp/venv-U8pL'
13:22:12 lf-activate-venv(): INFO: Creating python3 venv at /tmp/venv-U8pL
13:22:12 + echo /tmp/venv-U8pL
13:22:12 + echo 'lf-activate-venv(): INFO: Save venv in file: /tmp/.toxenv'
13:22:12 lf-activate-venv(): INFO: Save venv in file: /tmp/.toxenv
13:22:12 + echo 'lf-activate-venv(): INFO: Installing base packages (pip, setuptools, virtualenv)'
13:22:12 lf-activate-venv(): INFO: Installing base packages (pip, setuptools, virtualenv)
13:22:12 + local 'pip_opts=--upgrade --quiet'
13:22:12 + pip_opts='--upgrade --quiet --trusted-host pypi.org'
13:22:12 + pip_opts='--upgrade --quiet --trusted-host pypi.org --trusted-host files.pythonhosted.org'
13:22:12 + pip_opts='--upgrade --quiet --trusted-host pypi.org --trusted-host files.pythonhosted.org --trusted-host pypi.python.org'
13:22:12 + [[ -n '' ]]
13:22:12 + [[ -n '' ]]
13:22:12 + echo 'lf-activate-venv(): INFO: Attempting to install with network-safe options...'
13:22:12 lf-activate-venv(): INFO: Attempting to install with network-safe options...
13:22:12 + /tmp/venv-U8pL/bin/python3 -m pip install --upgrade --quiet --trusted-host pypi.org --trusted-host files.pythonhosted.org --trusted-host pypi.python.org pip 'setuptools<66' virtualenv
13:22:16 + echo 'lf-activate-venv(): INFO: Base packages installed successfully'
13:22:16 lf-activate-venv(): INFO: Base packages installed successfully
13:22:16 + [[ -z tox virtualenv urllib3~=1.26.15 ]]
13:22:16 + echo 'lf-activate-venv(): INFO: Installing additional packages: tox virtualenv urllib3~=1.26.15 '
13:22:16 lf-activate-venv(): INFO: Installing additional packages: tox virtualenv urllib3~=1.26.15
13:22:16 + /tmp/venv-U8pL/bin/python3 -m pip install --upgrade --quiet --trusted-host pypi.org --trusted-host files.pythonhosted.org --trusted-host pypi.python.org --upgrade-strategy eager tox virtualenv urllib3~=1.26.15
13:22:18 + type python3
13:22:18 + true
13:22:18 + echo 'lf-activate-venv(): INFO: Adding /tmp/venv-U8pL/bin to PATH'
13:22:18 lf-activate-venv(): INFO: Adding /tmp/venv-U8pL/bin to PATH
13:22:18 + PATH=/tmp/venv-U8pL/bin:/opt/pyenv/shims:/opt/pyenv/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/opt/puppetlabs/bin
13:22:18 + return 0
13:22:18 + python3 --version
13:22:18 Python 3.11.10
13:22:18 + python3 -m pip --version
13:22:18 pip 26.0.1 from /tmp/venv-U8pL/lib/python3.11/site-packages/pip (python 3.11)
13:22:18 + python3 -m pip freeze
13:22:18 cachetools==7.0.1
13:22:18 colorama==0.4.6
13:22:18 distlib==0.4.0
13:22:18 filelock==3.24.3
13:22:18 packaging==26.0
13:22:18 platformdirs==4.9.2
13:22:18 pluggy==1.6.0
13:22:18 pyproject-api==1.10.0
13:22:18 python-discovery==1.1.0
13:22:18 tox==4.46.3
13:22:18 urllib3==1.26.20
13:22:18 virtualenv==21.1.0
13:22:18 [transportpce-tox-verify-transportpce-master] $ /bin/sh -xe /tmp/jenkins6507738507594179722.sh
13:22:18 [EnvInject] - Injecting environment variables from a build step.
13:22:18 [EnvInject] - Injecting as environment variables the properties content
13:22:18 PARALLEL=True
13:22:18
13:22:18 [EnvInject] - Variables injected successfully.
13:22:18 [transportpce-tox-verify-transportpce-master] $ /bin/bash -l /tmp/jenkins10455297071939120800.sh
13:22:18 ---> tox-run.sh
13:22:18 + PATH=/home/jenkins/.local/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/opt/puppetlabs/bin
13:22:18 + ARCHIVE_TOX_DIR=/w/workspace/transportpce-tox-verify-transportpce-master/archives/tox
13:22:18 + ARCHIVE_DOC_DIR=/w/workspace/transportpce-tox-verify-transportpce-master/archives/docs
13:22:18 + mkdir -p /w/workspace/transportpce-tox-verify-transportpce-master/archives/tox
13:22:18 + cd /w/workspace/transportpce-tox-verify-transportpce-master/.
13:22:18 + source /home/jenkins/lf-env.sh
13:22:18 + lf-activate-venv --venv-file /tmp/.toxenv tox virtualenv urllib3~=1.26.15
13:22:18 ++ mktemp -d /tmp/venv-XXXX
13:22:18 + lf_venv=/tmp/venv-qGQ5
13:22:18 + local venv_file=/tmp/.os_lf_venv
13:22:18 + local python=python3
13:22:18 + local options
13:22:18 + local set_path=true
13:22:18 + local install_args=
13:22:18 ++ getopt -o np:v: -l no-path,system-site-packages,python:,venv-file: -n lf-activate-venv -- --venv-file /tmp/.toxenv tox virtualenv urllib3~=1.26.15
13:22:18 + options=' --venv-file '\''/tmp/.toxenv'\'' -- '\''tox'\'' '\''virtualenv'\'' '\''urllib3~=1.26.15'\'''
13:22:18 + eval set -- ' --venv-file '\''/tmp/.toxenv'\'' -- '\''tox'\'' '\''virtualenv'\'' '\''urllib3~=1.26.15'\'''
13:22:18 ++ set -- --venv-file /tmp/.toxenv -- tox virtualenv urllib3~=1.26.15
13:22:18 + true
13:22:18 + case $1 in
13:22:18 + venv_file=/tmp/.toxenv
13:22:18 + shift 2
13:22:18 + true
13:22:18 + case $1 in
13:22:18 + shift
13:22:18 + break
13:22:18 + case $python in
13:22:18 + local pkg_list=
13:22:18 + [[ -d /opt/pyenv ]]
13:22:18 + echo 'Setup pyenv:'
13:22:18 Setup pyenv:
13:22:18 + export PYENV_ROOT=/opt/pyenv
13:22:18 + PYENV_ROOT=/opt/pyenv
13:22:18 + export PATH=/opt/pyenv/bin:/home/jenkins/.local/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/opt/puppetlabs/bin
13:22:18 + PATH=/opt/pyenv/bin:/home/jenkins/.local/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/opt/puppetlabs/bin
13:22:18 + pyenv versions
13:22:18 system
13:22:18 3.8.20
13:22:18 3.9.20
13:22:18 3.10.15
13:22:18 * 3.11.10 (set by /w/workspace/transportpce-tox-verify-transportpce-master/.python-version)
13:22:18 + command -v pyenv
13:22:18 ++ pyenv init - --no-rehash
13:22:18 + eval 'PATH="$(bash --norc -ec '\''IFS=:; paths=($PATH);
13:22:18 for i in ${!paths[@]}; do
13:22:18 if [[ ${paths[i]} == "'\'''\''/opt/pyenv/shims'\'''\''" ]]; then unset '\''\'\'''\''paths[i]'\''\'\'''\'';
13:22:18 fi; done;
13:22:18 echo "${paths[*]}"'\'')"
13:22:18 export PATH="/opt/pyenv/shims:${PATH}"
13:22:18 export PYENV_SHELL=bash
13:22:18 source '\''/opt/pyenv/libexec/../completions/pyenv.bash'\''
13:22:18 pyenv() {
13:22:18 local command
13:22:18 command="${1:-}"
13:22:18 if [ "$#" -gt 0 ]; then
13:22:18 shift
13:22:18 fi
13:22:18
13:22:18 case "$command" in
13:22:18 rehash|shell)
13:22:18 eval "$(pyenv "sh-$command" "$@")"
13:22:18 ;;
13:22:18 *)
13:22:18 command pyenv "$command" "$@"
13:22:18 ;;
13:22:18 esac
13:22:18 }'
13:22:18 +++ bash --norc -ec 'IFS=:; paths=($PATH);
13:22:18 for i in ${!paths[@]}; do
13:22:18 if [[ ${paths[i]} == "/opt/pyenv/shims" ]]; then unset '\''paths[i]'\'';
13:22:18 fi; done;
13:22:18 echo "${paths[*]}"'
13:22:18 ++ PATH=/opt/pyenv/bin:/home/jenkins/.local/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/opt/puppetlabs/bin
13:22:18 ++ export PATH=/opt/pyenv/shims:/opt/pyenv/bin:/home/jenkins/.local/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/opt/puppetlabs/bin
13:22:18 ++ PATH=/opt/pyenv/shims:/opt/pyenv/bin:/home/jenkins/.local/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/opt/puppetlabs/bin
13:22:18 ++ export PYENV_SHELL=bash
13:22:18 ++ PYENV_SHELL=bash
13:22:18 ++ source /opt/pyenv/libexec/../completions/pyenv.bash
13:22:18 +++ complete -F _pyenv pyenv
13:22:18 ++ lf-pyver python3
13:22:18 ++ local py_version_xy=python3
13:22:18 ++ local py_version_xyz=
13:22:18 ++ pyenv versions
13:22:18 ++ sed 's/^[ *]* //'
13:22:18 ++ local command
13:22:18 ++ grep -E '^[0-9.]*[0-9]$'
13:22:18 ++ command=versions
13:22:18 ++ '[' 1 -gt 0 ']'
13:22:18 ++ shift
13:22:18 ++ case "$command" in
13:22:18 ++ command pyenv versions
13:22:18 ++ awk '{ print $1 }'
13:22:18 ++ [[ ! -s /tmp/.pyenv_versions ]]
13:22:18 +++ sort -V
13:22:18 +++ grep '^3' /tmp/.pyenv_versions
13:22:18 +++ tail -n 1
13:22:18 ++ py_version_xyz=3.11.10
13:22:18 ++ [[ -z 3.11.10 ]]
13:22:18 ++ echo 3.11.10
13:22:18 ++ return 0
13:22:18 + pyenv local 3.11.10
13:22:18 + local command
13:22:18 + command=local
13:22:18 + '[' 2 -gt 0 ']'
13:22:18 + shift
13:22:18 + case "$command" in
13:22:18 + command pyenv local 3.11.10
13:22:18 + for arg in "$@"
13:22:18 + case $arg in
13:22:18 + pkg_list+='tox '
13:22:18 + for arg in "$@"
13:22:18 + case $arg in
13:22:18 + pkg_list+='virtualenv '
13:22:18 + for arg in "$@"
13:22:18 + case $arg in
13:22:18 + pkg_list+='urllib3~=1.26.15 '
13:22:18 + [[ -f /tmp/.toxenv ]]
13:22:18 ++ cat /tmp/.toxenv
13:22:18 + lf_venv=/tmp/venv-U8pL
13:22:18 + echo 'lf-activate-venv(): INFO: Reuse venv:/tmp/venv-U8pL from' file:/tmp/.toxenv
13:22:18 lf-activate-venv(): INFO: Reuse venv:/tmp/venv-U8pL from file:/tmp/.toxenv
13:22:18 + echo 'lf-activate-venv(): INFO: Installing base packages (pip, setuptools, virtualenv)'
13:22:18 lf-activate-venv(): INFO: Installing base packages (pip, setuptools, virtualenv)
13:22:18 + local 'pip_opts=--upgrade --quiet'
13:22:18 + pip_opts='--upgrade --quiet --trusted-host pypi.org'
13:22:18 + pip_opts='--upgrade --quiet --trusted-host pypi.org --trusted-host files.pythonhosted.org'
13:22:18 + pip_opts='--upgrade --quiet --trusted-host pypi.org --trusted-host files.pythonhosted.org --trusted-host pypi.python.org'
13:22:18 + [[ -n '' ]]
13:22:18 + [[ -n '' ]]
13:22:18 + echo 'lf-activate-venv(): INFO: Attempting to install with network-safe options...'
13:22:18 lf-activate-venv(): INFO: Attempting to install with network-safe options...
13:22:18 + /tmp/venv-U8pL/bin/python3 -m pip install --upgrade --quiet --trusted-host pypi.org --trusted-host files.pythonhosted.org --trusted-host pypi.python.org pip 'setuptools<66' virtualenv
13:22:19 + echo 'lf-activate-venv(): INFO: Base packages installed successfully'
13:22:19 lf-activate-venv(): INFO: Base packages installed successfully
13:22:19 + [[ -z tox virtualenv urllib3~=1.26.15 ]]
13:22:19 + echo 'lf-activate-venv(): INFO: Installing additional packages: tox virtualenv urllib3~=1.26.15 '
13:22:19 lf-activate-venv(): INFO: Installing additional packages: tox virtualenv urllib3~=1.26.15
13:22:19 + /tmp/venv-U8pL/bin/python3 -m pip install --upgrade --quiet --trusted-host pypi.org --trusted-host files.pythonhosted.org --trusted-host pypi.python.org --upgrade-strategy eager tox virtualenv urllib3~=1.26.15
13:22:20 + type python3
13:22:20 + true
13:22:20 + echo 'lf-activate-venv(): INFO: Adding /tmp/venv-U8pL/bin to PATH'
13:22:20 lf-activate-venv(): INFO: Adding /tmp/venv-U8pL/bin to PATH
13:22:20 + PATH=/tmp/venv-U8pL/bin:/opt/pyenv/shims:/opt/pyenv/bin:/home/jenkins/.local/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/opt/puppetlabs/bin
13:22:20 + return 0
13:22:20 + [[ -d /opt/pyenv ]]
13:22:20 + echo '---> Setting up pyenv'
13:22:20 ---> Setting up pyenv
13:22:20 + export PYENV_ROOT=/opt/pyenv
13:22:20 + PYENV_ROOT=/opt/pyenv
13:22:20 + export PATH=/opt/pyenv/bin:/tmp/venv-U8pL/bin:/opt/pyenv/shims:/opt/pyenv/bin:/home/jenkins/.local/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/opt/puppetlabs/bin
13:22:20 + PATH=/opt/pyenv/bin:/tmp/venv-U8pL/bin:/opt/pyenv/shims:/opt/pyenv/bin:/home/jenkins/.local/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/opt/puppetlabs/bin
13:22:20 ++ pwd
13:22:20 + PYTHONPATH=/w/workspace/transportpce-tox-verify-transportpce-master
13:22:20 + export PYTHONPATH
13:22:20 + export TOX_TESTENV_PASSENV=PYTHONPATH
13:22:20 + TOX_TESTENV_PASSENV=PYTHONPATH
13:22:20 + tox --version
13:22:21 4.46.3 from /tmp/venv-U8pL/lib/python3.11/site-packages/tox/__init__.py
13:22:21 + PARALLEL=True
13:22:21 + TOX_OPTIONS_LIST=
13:22:21 + [[ -n '' ]]
13:22:21 + case ${PARALLEL,,} in
13:22:21 + TOX_OPTIONS_LIST=' --parallel auto --parallel-live'
13:22:21 + tox --parallel auto --parallel-live
13:22:21 + tee -a /w/workspace/transportpce-tox-verify-transportpce-master/archives/tox/tox.log
13:22:22 docs: install_deps> python -I -m pip install -r docs/requirements.txt
13:22:22 checkbashisms: freeze> python -m pip freeze --all
13:22:22 docs-linkcheck: install_deps> python -I -m pip install -r docs/requirements.txt
13:22:22 buildcontroller: install_deps> python -I -m pip install 'setuptools>=7.0' -r /w/workspace/transportpce-tox-verify-transportpce-master/tests/requirements.txt -r /w/workspace/transportpce-tox-verify-transportpce-master/tests/test-requirements.txt
13:22:23 checkbashisms: pip==26.0.1,setuptools==82.0.0
13:22:23 checkbashisms: commands[0] /w/workspace/transportpce-tox-verify-transportpce-master/tests> ./fixCIcentOS8reposMirrors.sh
13:22:23 checkbashisms: commands[1] /w/workspace/transportpce-tox-verify-transportpce-master/tests> sh -c 'command checkbashisms>/dev/null || sudo yum install -y devscripts-checkbashisms || sudo yum install -y devscripts-minimal || sudo yum install -y devscripts || sudo yum install -y https://archives.fedoraproject.org/pub/archive/fedora/linux/releases/31/Everything/x86_64/os/Packages/d/devscripts-checkbashisms-2.19.6-2.fc31.x86_64.rpm || (echo "checkbashisms command not found - please install it (e.g. sudo apt-get install devscripts | yum install devscripts-minimal )" >&2 && exit 1)'
13:22:23 checkbashisms: commands[2] /w/workspace/transportpce-tox-verify-transportpce-master/tests> find . -not -path '*/\.*' -name '*.sh' -exec checkbashisms -f '{}' +
13:22:24 checkbashisms: OK ✔ in 2.99 seconds
13:22:24 pre-commit: install_deps> python -I -m pip install pre-commit
13:22:26 pre-commit: freeze> python -m pip freeze --all
13:22:27 pre-commit: cfgv==3.5.0,distlib==0.4.0,filelock==3.24.3,identify==2.6.16,nodeenv==1.10.0,pip==26.0.1,platformdirs==4.9.2,pre_commit==4.5.1,python-discovery==1.1.0,PyYAML==6.0.3,setuptools==82.0.0,virtualenv==21.1.0
13:22:27 pre-commit: commands[0] /w/workspace/transportpce-tox-verify-transportpce-master/tests> ./fixCIcentOS8reposMirrors.sh
13:22:27 pre-commit: commands[1] /w/workspace/transportpce-tox-verify-transportpce-master/tests> sh -c 'which cpan || sudo yum install -y perl-CPAN || (echo "cpan command not found - please install it (e.g. sudo apt-get install perl-modules | yum install perl-CPAN )" >&2 && exit 1)'
13:22:27 /usr/bin/cpan
13:22:27 pre-commit: commands[2] /w/workspace/transportpce-tox-verify-transportpce-master/tests> pre-commit run --all-files --show-diff-on-failure
13:22:27 [WARNING] hook id `remove-tabs` uses deprecated stage names (commit) which will be removed in a future version. run: `pre-commit migrate-config` to automatically fix this.
13:22:27 [WARNING] hook id `perltidy` uses deprecated stage names (commit) which will be removed in a future version. run: `pre-commit migrate-config` to automatically fix this.
13:22:27 [INFO] Initializing environment for https://github.com/pre-commit/pre-commit-hooks.
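The tox-run.sh trace above turns the injected PARALLEL=True variable into the flags `--parallel auto --parallel-live` via a case-insensitive `case ${PARALLEL,,}` match. A minimal sketch of that mapping; the function name and the exact set of accepted truthy values are assumptions beyond what the log shows:

```python
def tox_options(parallel: str) -> list[str]:
    """Map a PARALLEL env-var value to tox CLI flags, mirroring the
    case-insensitive match in the traced tox-run.sh step.

    --parallel auto lets tox pick the worker count; --parallel-live
    streams each environment's output as it runs.
    """
    if parallel.lower() in ("true", "yes"):
        return ["--parallel", "auto", "--parallel-live"]
    return []
```

This is why the docs, docs-linkcheck, checkbashisms, buildcontroller, and pre-commit environments appear interleaved in the log: tox runs them concurrently and streams their output live.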
13:22:27 [WARNING] repo `https://github.com/pre-commit/pre-commit-hooks` uses deprecated stage names (commit, push) which will be removed in a future version. Hint: often `pre-commit autoupdate --repo https://github.com/pre-commit/pre-commit-hooks` will fix this. if it does not -- consider reporting an issue to that repo.
13:22:27 [INFO] Initializing environment for https://github.com/jorisroovers/gitlint.
13:22:28 [INFO] Initializing environment for https://github.com/jorisroovers/gitlint:./gitlint-core[trusted-deps].
13:22:28 [INFO] Initializing environment for https://github.com/Lucas-C/pre-commit-hooks.
13:22:28 buildcontroller: freeze> python -m pip freeze --all
13:22:28 [INFO] Initializing environment for https://github.com/pre-commit/mirrors-autopep8.
13:22:28 buildcontroller: bcrypt==5.0.0,certifi==2026.2.25,cffi==2.0.0,charset-normalizer==3.4.4,cryptography==46.0.5,dict2xml==1.7.8,idna==3.11,iniconfig==2.3.0,invoke==2.2.1,lxml==6.0.2,netconf-client==3.5.0,packaging==26.0,paramiko==4.0.0,pip==26.0.1,pluggy==1.6.0,psutil==7.2.2,pycparser==3.0,Pygments==2.19.2,PyNaCl==1.6.2,pytest==9.0.2,requests==2.32.5,setuptools==82.0.0,urllib3==2.6.3
13:22:28 buildcontroller: commands[0] /w/workspace/transportpce-tox-verify-transportpce-master/tests> ./build_controller.sh
13:22:28 + update-java-alternatives -l
13:22:28 java-1.11.0-openjdk-amd64 1111 /usr/lib/jvm/java-1.11.0-openjdk-amd64
13:22:28 java-1.17.0-openjdk-amd64 1711 /usr/lib/jvm/java-1.17.0-openjdk-amd64
13:22:28 java-1.21.0-openjdk-amd64 2111 /usr/lib/jvm/java-1.21.0-openjdk-amd64
13:22:28 + sudo update-java-alternatives -s java-1.21.0-openjdk-amd64
13:22:28 [INFO] Initializing environment for https://github.com/perltidy/perltidy.
13:22:29 update-alternatives: error: no alternatives for jaotc
13:22:29 update-alternatives: error: no alternatives for rmic
13:22:29 + java -version
13:22:29 + sed -n ;s/.* version "\(.*\)\.\(.*\)\..*".*$/\1/p;
13:22:29 + JAVA_VER=21
13:22:29 + echo 21
13:22:29 21
13:22:29 + javac -version
13:22:29 + sed -n ;s/javac \(.*\)\.\(.*\)\..*.*$/\1/p;
13:22:29 + JAVAC_VER=21
13:22:29 + echo 21
13:22:29 21
13:22:29 ok, java is 21 or newer
13:22:29 + [ 21 -ge 21 ]
13:22:29 + [ 21 -ge 21 ]
13:22:29 + echo ok, java is 21 or newer
13:22:29 + wget -nv https://dlcdn.apache.org/maven/maven-3/3.9.12/binaries/apache-maven-3.9.12-bin.tar.gz -P /tmp
13:22:29 2026-02-27 13:22:29 URL:https://dlcdn.apache.org/maven/maven-3/3.9.12/binaries/apache-maven-3.9.12-bin.tar.gz [9233336/9233336] -> "/tmp/apache-maven-3.9.12-bin.tar.gz" [1]
13:22:29 + sudo mkdir -p /opt
13:22:29 + sudo tar xf /tmp/apache-maven-3.9.12-bin.tar.gz -C /opt
13:22:29 [INFO] Installing environment for https://github.com/pre-commit/pre-commit-hooks.
13:22:29 [INFO] Once installed this environment will be reused.
13:22:29 [INFO] This may take a few minutes...
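The build_controller.sh trace above derives JAVA_VER=21 by piping `java -version` through a sed expression that captures the first dotted component of the quoted version string, then gates the build on `[ 21 -ge 21 ]`. A sketch of the same extraction in Python; the function name is hypothetical, and like the sed it assumes the modern `"21.0.9"`-style format (legacy `"1.8.0_x"` strings would yield 1):

```python
import re

def java_major(version_line: str) -> int:
    """Extract the major version from a `java -version` output line,
    e.g. 'openjdk version "21.0.9" 2025-10-21' -> 21."""
    m = re.search(r'version "(\d+)\.(\d+)\.', version_line)
    if not m:
        raise ValueError("unrecognised java version line")
    return int(m.group(1))
```

The script only proceeds to the Maven download once both java_major and the equivalent javac check report 21 or newer.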
13:22:29 + sudo ln -s /opt/apache-maven-3.9.12 /opt/maven
13:22:29 + sudo ln -s /opt/maven/bin/mvn /usr/bin/mvn
13:22:29 + mvn --version
13:22:30 Apache Maven 3.9.12 (848fbb4bf2d427b72bdb2471c22fced7ebd9a7a1)
13:22:30 Maven home: /opt/maven
13:22:30 Java version: 21.0.9, vendor: Ubuntu, runtime: /usr/lib/jvm/java-21-openjdk-amd64
13:22:30 Default locale: en, platform encoding: UTF-8
13:22:30 OS name: "linux", version: "5.15.0-168-generic", arch: "amd64", family: "unix"
13:22:30 NOTE: Picked up JDK_JAVA_OPTIONS:
13:22:30 --add-opens=java.base/java.io=ALL-UNNAMED
13:22:30 --add-opens=java.base/java.lang=ALL-UNNAMED
13:22:30 --add-opens=java.base/java.lang.invoke=ALL-UNNAMED
13:22:30 --add-opens=java.base/java.lang.reflect=ALL-UNNAMED
13:22:30 --add-opens=java.base/java.net=ALL-UNNAMED
13:22:30 --add-opens=java.base/java.nio=ALL-UNNAMED
13:22:30 --add-opens=java.base/java.nio.charset=ALL-UNNAMED
13:22:30 --add-opens=java.base/java.nio.file=ALL-UNNAMED
13:22:30 --add-opens=java.base/java.util=ALL-UNNAMED
13:22:30 --add-opens=java.base/java.util.jar=ALL-UNNAMED
13:22:30 --add-opens=java.base/java.util.stream=ALL-UNNAMED
13:22:30 --add-opens=java.base/java.util.zip=ALL-UNNAMED
13:22:30 --add-opens java.base/sun.nio.ch=ALL-UNNAMED
13:22:30 --add-opens java.base/sun.nio.fs=ALL-UNNAMED
13:22:30 -Xlog:disable
13:22:34 [INFO] Installing environment for https://github.com/Lucas-C/pre-commit-hooks.
13:22:34 [INFO] Once installed this environment will be reused.
13:22:34 [INFO] This may take a few minutes...
13:22:42 [INFO] Installing environment for https://github.com/pre-commit/mirrors-autopep8.
13:22:42 [INFO] Once installed this environment will be reused.
13:22:42 [INFO] This may take a few minutes...
13:22:47 [INFO] Installing environment for https://github.com/perltidy/perltidy.
13:22:47 [INFO] Once installed this environment will be reused.
13:22:47 [INFO] This may take a few minutes...
13:22:48 docs-linkcheck: freeze> python -m pip freeze --all
13:22:48 docs: freeze> python -m pip freeze --all
13:22:48 docs-linkcheck: alabaster==1.0.0,attrs==25.4.0,babel==2.18.0,blockdiag==3.0.0,certifi==2026.2.25,charset-normalizer==3.4.4,contourpy==1.3.3,cycler==0.12.1,docutils==0.21.2,fonttools==4.61.1,funcparserlib==2.0.0a0,future==1.0.0,idna==3.11,imagesize==1.4.1,Jinja2==3.1.6,jsonschema==3.2.0,kiwisolver==1.4.9,lfdocs_conf==0.10.0,MarkupSafe==3.0.3,matplotlib==3.10.8,numpy==2.4.2,nwdiag==3.0.0,packaging==26.0,pillow==12.1.1,pip==26.0.1,Pygments==2.19.2,pyparsing==3.3.2,pyrsistent==0.20.0,python-dateutil==2.9.0.post0,PyYAML==6.0.3,requests==2.32.5,requests-file==1.5.1,roman-numerals==4.1.0,roman-numerals-py==4.1.0,seqdiag==3.0.0,setuptools==82.0.0,six==1.17.0,snowballstemmer==3.0.1,Sphinx==8.2.3,sphinx-bootstrap-theme==0.8.1,sphinx-data-viewer==0.1.5,sphinx-tabs==3.4.7,sphinx_rtd_theme==3.1.0,sphinxcontrib-applehelp==2.0.0,sphinxcontrib-blockdiag==3.0.0,sphinxcontrib-devhelp==2.0.0,sphinxcontrib-htmlhelp==2.1.0,sphinxcontrib-jquery==4.1,sphinxcontrib-jsmath==1.0.1,sphinxcontrib-needs==0.7.9,sphinxcontrib-nwdiag==2.0.0,sphinxcontrib-plantuml==0.31,sphinxcontrib-qthelp==2.0.0,sphinxcontrib-seqdiag==3.0.0,sphinxcontrib-serializinghtml==2.0.0,sphinxcontrib-swaggerdoc==0.1.7,urllib3==2.6.3,webcolors==25.10.0
13:22:48 docs-linkcheck: commands[0] /w/workspace/transportpce-tox-verify-transportpce-master/tests> sphinx-build -q -b linkcheck -d /w/workspace/transportpce-tox-verify-transportpce-master/.tox/docs-linkcheck/tmp/doctrees ../docs/ /w/workspace/transportpce-tox-verify-transportpce-master/docs/_build/linkcheck
13:22:49 docs: alabaster==1.0.0,attrs==25.4.0,babel==2.18.0,blockdiag==3.0.0,certifi==2026.2.25,charset-normalizer==3.4.4,contourpy==1.3.3,cycler==0.12.1,docutils==0.21.2,fonttools==4.61.1,funcparserlib==2.0.0a0,future==1.0.0,idna==3.11,imagesize==1.4.1,Jinja2==3.1.6,jsonschema==3.2.0,kiwisolver==1.4.9,lfdocs_conf==0.10.0,MarkupSafe==3.0.3,matplotlib==3.10.8,numpy==2.4.2,nwdiag==3.0.0,packaging==26.0,pillow==12.1.1,pip==26.0.1,Pygments==2.19.2,pyparsing==3.3.2,pyrsistent==0.20.0,python-dateutil==2.9.0.post0,PyYAML==6.0.3,requests==2.32.5,requests-file==1.5.1,roman-numerals==4.1.0,roman-numerals-py==4.1.0,seqdiag==3.0.0,setuptools==82.0.0,six==1.17.0,snowballstemmer==3.0.1,Sphinx==8.2.3,sphinx-bootstrap-theme==0.8.1,sphinx-data-viewer==0.1.5,sphinx-tabs==3.4.7,sphinx_rtd_theme==3.1.0,sphinxcontrib-applehelp==2.0.0,sphinxcontrib-blockdiag==3.0.0,sphinxcontrib-devhelp==2.0.0,sphinxcontrib-htmlhelp==2.1.0,sphinxcontrib-jquery==4.1,sphinxcontrib-jsmath==1.0.1,sphinxcontrib-needs==0.7.9,sphinxcontrib-nwdiag==2.0.0,sphinxcontrib-plantuml==0.31,sphinxcontrib-qthelp==2.0.0,sphinxcontrib-seqdiag==3.0.0,sphinxcontrib-serializinghtml==2.0.0,sphinxcontrib-swaggerdoc==0.1.7,urllib3==2.6.3,webcolors==25.10.0
13:22:49 docs: commands[0] /w/workspace/transportpce-tox-verify-transportpce-master/tests> sphinx-build -q -W --keep-going -b html -n -d /w/workspace/transportpce-tox-verify-transportpce-master/.tox/docs/tmp/doctrees ../docs/ /w/workspace/transportpce-tox-verify-transportpce-master/docs/_build/html
13:22:52 docs: OK ✔ in 30.97 seconds
13:22:52 pylint: install_deps> python -I -m pip install 'pylint>=2.6.0'
13:22:56 docs-linkcheck: OK ✔ in 32.72 seconds
13:22:56 pylint: freeze> python -m pip freeze --all
13:22:56 pylint: astroid==4.0.4,dill==0.4.1,isort==8.0.0,mccabe==0.7.0,pip==26.0.1,platformdirs==4.9.2,pylint==4.0.5,setuptools==82.0.0,tomlkit==0.14.0
13:22:56 pylint: commands[0] /w/workspace/transportpce-tox-verify-transportpce-master/tests> find transportpce_tests/ -name '*.py' -exec pylint --fail-under=10 --max-line-length=120 --disable=missing-docstring,import-error --disable=fixme --disable=duplicate-code '--module-rgx=([a-z0-9_]+$)|([0-9.]{1,30}$)' '--method-rgx=(([a-z_][a-zA-Z0-9_]{2,})|(_[a-z0-9_]*)|(__[a-zA-Z][a-zA-Z0-9_]+__))$' '--variable-rgx=[a-zA-Z_][a-zA-Z0-9_]{1,30}$' '{}' +
13:22:59 trim trailing whitespace.................................................Passed
13:22:59 Tabs remover.............................................................Passed
13:23:00 autopep8.................................................................Passed
13:23:06 perltidy.................................................................Passed
13:23:06 pre-commit: commands[3] /w/workspace/transportpce-tox-verify-transportpce-master/tests> pre-commit run gitlint-ci --hook-stage manual
13:23:07 [WARNING] hook id `remove-tabs` uses deprecated stage names (commit) which will be removed in a future version. run: `pre-commit migrate-config` to automatically fix this.
13:23:07 [WARNING] hook id `perltidy` uses deprecated stage names (commit) which will be removed in a future version. run: `pre-commit migrate-config` to automatically fix this.
13:23:07 [INFO] Installing environment for https://github.com/jorisroovers/gitlint.
13:23:07 [INFO] Once installed this environment will be reused.
13:23:07 [INFO] This may take a few minutes...
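The pylint invocation above relaxes the default module-name rule with `--module-rgx` so that version-numbered test directories such as `1.2.1` or `7.1` pass alongside ordinary snake_case names. A small sketch checking names against that exact regex (the helper name is illustrative):

```python
import re

# The job's --module-rgx value: either a snake_case name or a dotted
# version-number-style name of up to 30 characters.
MODULE_RGX = re.compile(r"([a-z0-9_]+$)|([0-9.]{1,30}$)")

def module_name_ok(name: str) -> bool:
    """Return True if `name` would satisfy the job's pylint module-rgx."""
    return MODULE_RGX.match(name) is not None
```

This is why modules like `transportpce_tests/1.2.1/test01_portmapping.py` are linted without naming complaints, while a CamelCase module name would still be flagged.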
13:23:14 gitlint..................................................................Passed
13:23:21
13:23:21 ------------------------------------
13:23:21 Your code has been rated at 10.00/10
13:23:21
13:24:05 pre-commit: OK ✔ in 49.99 seconds
13:24:05 pylint: OK ✔ in 30.73 seconds
13:24:05 buildcontroller: OK ✔ in 1 minute 43.18 seconds
13:24:05 build_karaf_tests200: install_deps> python -I -m pip install 'setuptools>=7.0' -r /w/workspace/transportpce-tox-verify-transportpce-master/tests/requirements.txt -r /w/workspace/transportpce-tox-verify-transportpce-master/tests/test-requirements.txt
13:24:05 build_karaf_tests121: install_deps> python -I -m pip install 'setuptools>=7.0' -r /w/workspace/transportpce-tox-verify-transportpce-master/tests/requirements.txt -r /w/workspace/transportpce-tox-verify-transportpce-master/tests/test-requirements.txt
13:24:05 build_karaf_tests71: install_deps> python -I -m pip install 'setuptools>=7.0' -r /w/workspace/transportpce-tox-verify-transportpce-master/tests/requirements.txt -r /w/workspace/transportpce-tox-verify-transportpce-master/tests/test-requirements.txt
13:24:05 build_karaf_tests221: install_deps> python -I -m pip install 'setuptools>=7.0' -r /w/workspace/transportpce-tox-verify-transportpce-master/tests/requirements.txt -r /w/workspace/transportpce-tox-verify-transportpce-master/tests/test-requirements.txt
13:24:12 build_karaf_tests121: freeze> python -m pip freeze --all
13:24:12 build_karaf_tests71: freeze> python -m pip freeze --all
13:24:12 build_karaf_tests221: freeze> python -m pip freeze --all
13:24:12 build_karaf_tests200: freeze> python -m pip freeze --all
13:24:12 build_karaf_tests121: bcrypt==5.0.0,certifi==2026.2.25,cffi==2.0.0,charset-normalizer==3.4.4,cryptography==46.0.5,dict2xml==1.7.8,idna==3.11,iniconfig==2.3.0,invoke==2.2.1,lxml==6.0.2,netconf-client==3.5.0,packaging==26.0,paramiko==4.0.0,pip==26.0.1,pluggy==1.6.0,psutil==7.2.2,pycparser==3.0,Pygments==2.19.2,PyNaCl==1.6.2,pytest==9.0.2,requests==2.32.5,setuptools==82.0.0,urllib3==2.6.3
13:24:12 build_karaf_tests121: commands[0] /w/workspace/transportpce-tox-verify-transportpce-master/tests> ./build_karaf_for_tests.sh
13:24:12 build karaf in karaf121 with ./karaf121.env
13:24:12 build_karaf_tests71: bcrypt==5.0.0,certifi==2026.2.25,cffi==2.0.0,charset-normalizer==3.4.4,cryptography==46.0.5,dict2xml==1.7.8,idna==3.11,iniconfig==2.3.0,invoke==2.2.1,lxml==6.0.2,netconf-client==3.5.0,packaging==26.0,paramiko==4.0.0,pip==26.0.1,pluggy==1.6.0,psutil==7.2.2,pycparser==3.0,Pygments==2.19.2,PyNaCl==1.6.2,pytest==9.0.2,requests==2.32.5,setuptools==82.0.0,urllib3==2.6.3
13:24:12 build_karaf_tests71: commands[0] /w/workspace/transportpce-tox-verify-transportpce-master/tests> ./build_karaf_for_tests.sh
13:24:12 build_karaf_tests221: bcrypt==5.0.0,certifi==2026.2.25,cffi==2.0.0,charset-normalizer==3.4.4,cryptography==46.0.5,dict2xml==1.7.8,idna==3.11,iniconfig==2.3.0,invoke==2.2.1,lxml==6.0.2,netconf-client==3.5.0,packaging==26.0,paramiko==4.0.0,pip==26.0.1,pluggy==1.6.0,psutil==7.2.2,pycparser==3.0,Pygments==2.19.2,PyNaCl==1.6.2,pytest==9.0.2,requests==2.32.5,setuptools==82.0.0,urllib3==2.6.3
13:24:12 build_karaf_tests221: commands[0] /w/workspace/transportpce-tox-verify-transportpce-master/tests> ./build_karaf_for_tests.sh
13:24:12 build karaf in karaf71 with ./karaf71.env
13:24:12 build karaf in karaf221 with ./karaf221.env
13:24:12 NOTE: Picked up JDK_JAVA_OPTIONS:
13:24:12 --add-opens=java.base/java.io=ALL-UNNAMED
13:24:12 --add-opens=java.base/java.lang=ALL-UNNAMED
13:24:12 --add-opens=java.base/java.lang.invoke=ALL-UNNAMED
13:24:12 --add-opens=java.base/java.lang.reflect=ALL-UNNAMED
13:24:12 --add-opens=java.base/java.net=ALL-UNNAMED
13:24:12 --add-opens=java.base/java.nio=ALL-UNNAMED
13:24:12 --add-opens=java.base/java.nio.charset=ALL-UNNAMED
13:24:12 --add-opens=java.base/java.nio.file=ALL-UNNAMED
13:24:12 --add-opens=java.base/java.util=ALL-UNNAMED
13:24:12 --add-opens=java.base/java.util.jar=ALL-UNNAMED
13:24:12 --add-opens=java.base/java.util.stream=ALL-UNNAMED
13:24:12 --add-opens=java.base/java.util.zip=ALL-UNNAMED
13:24:12 --add-opens java.base/sun.nio.ch=ALL-UNNAMED
13:24:12 --add-opens java.base/sun.nio.fs=ALL-UNNAMED
13:24:12 -Xlog:disable
13:24:12 build_karaf_tests200: bcrypt==5.0.0,certifi==2026.2.25,cffi==2.0.0,charset-normalizer==3.4.4,cryptography==46.0.5,dict2xml==1.7.8,idna==3.11,iniconfig==2.3.0,invoke==2.2.1,lxml==6.0.2,netconf-client==3.5.0,packaging==26.0,paramiko==4.0.0,pip==26.0.1,pluggy==1.6.0,psutil==7.2.2,pycparser==3.0,Pygments==2.19.2,PyNaCl==1.6.2,pytest==9.0.2,requests==2.32.5,setuptools==82.0.0,urllib3==2.6.3
13:24:12 build_karaf_tests200: commands[0] /w/workspace/transportpce-tox-verify-transportpce-master/tests> ./build_karaf_for_tests.sh
13:24:12 build karaf in karafoc200 with ./karafoc200.env
13:24:12 NOTE: Picked up JDK_JAVA_OPTIONS:
13:24:12 --add-opens=java.base/java.io=ALL-UNNAMED
13:24:12 --add-opens=java.base/java.lang=ALL-UNNAMED
13:24:12 --add-opens=java.base/java.lang.invoke=ALL-UNNAMED
13:24:12 --add-opens=java.base/java.lang.reflect=ALL-UNNAMED
13:24:12 --add-opens=java.base/java.net=ALL-UNNAMED
13:24:12 --add-opens=java.base/java.nio=ALL-UNNAMED
13:24:12 --add-opens=java.base/java.nio.charset=ALL-UNNAMED
13:24:12 --add-opens=java.base/java.nio.file=ALL-UNNAMED
13:24:12 --add-opens=java.base/java.util=ALL-UNNAMED
13:24:12 --add-opens=java.base/java.util.jar=ALL-UNNAMED
13:24:12 --add-opens=java.base/java.util.stream=ALL-UNNAMED
13:24:12 --add-opens=java.base/java.util.zip=ALL-UNNAMED
13:24:12 --add-opens java.base/sun.nio.ch=ALL-UNNAMED
13:24:12 --add-opens java.base/sun.nio.fs=ALL-UNNAMED
13:24:12 -Xlog:disable
13:24:12 NOTE: Picked up JDK_JAVA_OPTIONS:
13:24:12 --add-opens=java.base/java.io=ALL-UNNAMED
13:24:12 --add-opens=java.base/java.lang=ALL-UNNAMED
13:24:12 --add-opens=java.base/java.lang.invoke=ALL-UNNAMED
13:24:12 --add-opens=java.base/java.lang.reflect=ALL-UNNAMED
13:24:12 --add-opens=java.base/java.net=ALL-UNNAMED
13:24:12 --add-opens=java.base/java.nio=ALL-UNNAMED
13:24:12 --add-opens=java.base/java.nio.charset=ALL-UNNAMED
13:24:12 --add-opens=java.base/java.nio.file=ALL-UNNAMED
13:24:12 --add-opens=java.base/java.util=ALL-UNNAMED
13:24:12 --add-opens=java.base/java.util.jar=ALL-UNNAMED
13:24:12 --add-opens=java.base/java.util.stream=ALL-UNNAMED
13:24:12 --add-opens=java.base/java.util.zip=ALL-UNNAMED
13:24:12 --add-opens java.base/sun.nio.ch=ALL-UNNAMED
13:24:12 --add-opens java.base/sun.nio.fs=ALL-UNNAMED
13:24:12 -Xlog:disable
13:24:12 NOTE: Picked up JDK_JAVA_OPTIONS:
13:24:12 --add-opens=java.base/java.io=ALL-UNNAMED
13:24:12 --add-opens=java.base/java.lang=ALL-UNNAMED
13:24:12 --add-opens=java.base/java.lang.invoke=ALL-UNNAMED
13:24:12 --add-opens=java.base/java.lang.reflect=ALL-UNNAMED
13:24:12 --add-opens=java.base/java.net=ALL-UNNAMED
13:24:12 --add-opens=java.base/java.nio=ALL-UNNAMED
13:24:12 --add-opens=java.base/java.nio.charset=ALL-UNNAMED
13:24:12 --add-opens=java.base/java.nio.file=ALL-UNNAMED
13:24:12 --add-opens=java.base/java.util=ALL-UNNAMED
13:24:12 --add-opens=java.base/java.util.jar=ALL-UNNAMED
13:24:12 --add-opens=java.base/java.util.stream=ALL-UNNAMED
13:24:12 --add-opens=java.base/java.util.zip=ALL-UNNAMED
13:24:12 --add-opens java.base/sun.nio.ch=ALL-UNNAMED
13:24:12 --add-opens java.base/sun.nio.fs=ALL-UNNAMED
13:24:12 -Xlog:disable
13:25:07 build_karaf_tests221: OK ✔ in 1 minute 2.69 seconds
13:25:07 build_karaf_tests200: OK ✔ in 1 minute 2.71 seconds
13:25:07 buildlighty: install_deps> python -I -m pip install 'setuptools>=7.0' -r /w/workspace/transportpce-tox-verify-transportpce-master/tests/requirements.txt -r /w/workspace/transportpce-tox-verify-transportpce-master/tests/test-requirements.txt
13:25:07 sims: install_deps> python -I -m pip install 'setuptools>=7.0' -r /w/workspace/transportpce-tox-verify-transportpce-master/tests/requirements.txt -r /w/workspace/transportpce-tox-verify-transportpce-master/tests/test-requirements.txt
13:25:08 build_karaf_tests71: OK ✔ in 1 minute 3.72 seconds
13:25:08 build_karaf_tests121: OK ✔ in 1 minute 3.73 seconds
13:25:08 testsPCE: install_deps> python -I -m pip install gnpy4tpce==2.4.7 'setuptools>=7.0' -r /w/workspace/transportpce-tox-verify-transportpce-master/tests/requirements.txt -r /w/workspace/transportpce-tox-verify-transportpce-master/tests/test-requirements.txt
13:25:14 sims: freeze> python -m pip freeze --all
13:25:14 buildlighty: freeze> python -m pip freeze --all
13:25:14 sims: bcrypt==5.0.0,certifi==2026.2.25,cffi==2.0.0,charset-normalizer==3.4.4,cryptography==46.0.5,dict2xml==1.7.8,idna==3.11,iniconfig==2.3.0,invoke==2.2.1,lxml==6.0.2,netconf-client==3.5.0,packaging==26.0,paramiko==4.0.0,pip==26.0.1,pluggy==1.6.0,psutil==7.2.2,pycparser==3.0,Pygments==2.19.2,PyNaCl==1.6.2,pytest==9.0.2,requests==2.32.5,setuptools==82.0.0,urllib3==2.6.3
13:25:14 sims: commands[0] /w/workspace/transportpce-tox-verify-transportpce-master/tests> ./install_lightynode.sh
13:25:14 Using lighynode version 22.1.0.7
13:25:14 Installing lightynode device to ./lightynode/lightynode-openroadm-device directory
13:25:14 buildlighty: bcrypt==5.0.0,certifi==2026.2.25,cffi==2.0.0,charset-normalizer==3.4.4,cryptography==46.0.5,dict2xml==1.7.8,idna==3.11,iniconfig==2.3.0,invoke==2.2.1,lxml==6.0.2,netconf-client==3.5.0,packaging==26.0,paramiko==4.0.0,pip==26.0.1,pluggy==1.6.0,psutil==7.2.2,pycparser==3.0,Pygments==2.19.2,PyNaCl==1.6.2,pytest==9.0.2,requests==2.32.5,setuptools==82.0.0,urllib3==2.6.3
13:25:14 buildlighty: commands[0] /w/workspace/transportpce-tox-verify-transportpce-master/lighty> ./build.sh
13:25:14 NOTE: Picked up JDK_JAVA_OPTIONS:
--add-opens=java.base/java.lang=ALL-UNNAMED --add-opens=java.base/java.nio=ALL-UNNAMED
13:25:25 sims: OK ✔ in 18.05 seconds
13:25:25 tests71: install_deps> python -I -m pip install 'setuptools>=7.0' -r /w/workspace/transportpce-tox-verify-transportpce-master/tests/requirements.txt -r /w/workspace/transportpce-tox-verify-transportpce-master/tests/test-requirements.txt
13:25:32 tests71: freeze> python -m pip freeze --all
13:25:32 tests71: bcrypt==5.0.0,certifi==2026.2.25,cffi==2.0.0,charset-normalizer==3.4.4,cryptography==46.0.5,dict2xml==1.7.8,idna==3.11,iniconfig==2.3.0,invoke==2.2.1,lxml==6.0.2,netconf-client==3.5.0,packaging==26.0,paramiko==4.0.0,pip==26.0.1,pluggy==1.6.0,psutil==7.2.2,pycparser==3.0,Pygments==2.19.2,PyNaCl==1.6.2,pytest==9.0.2,requests==2.32.5,setuptools==82.0.0,urllib3==2.6.3
13:25:32 tests71: commands[0] /w/workspace/transportpce-tox-verify-transportpce-master/tests> ./launch_tests.sh 7.1
13:25:32 using environment variables from ./karaf71.env
13:25:32 pytest -q transportpce_tests/7.1/test01_portmapping.py
13:26:05 buildlighty: OK ✔ in 40.06 seconds
13:26:05 testsPCE: freeze> python -m pip freeze --all
13:26:05 testsPCE: bcrypt==5.0.0,certifi==2026.2.25,cffi==2.0.0,charset-normalizer==3.4.4,click==8.3.1,contourpy==1.3.3,cryptography==3.3.2,cycler==0.12.1,dict2xml==1.7.8,Flask==2.1.3,Flask-Injector==0.14.0,fonttools==4.61.1,gnpy4tpce==2.4.7,idna==3.11,iniconfig==2.3.0,injector==0.24.0,invoke==2.2.1,itsdangerous==2.2.0,Jinja2==3.1.6,kiwisolver==1.4.9,lxml==6.0.2,MarkupSafe==3.0.3,matplotlib==3.10.8,netconf-client==3.5.0,networkx==2.8.8,numpy==1.26.4,packaging==26.0,pandas==1.5.3,paramiko==4.0.0,pbr==5.11.1,pillow==12.1.1,pip==26.0.1,pluggy==1.6.0,psutil==7.2.2,pycparser==3.0,Pygments==2.19.2,PyNaCl==1.6.2,pyparsing==3.3.2,pytest==9.0.2,python-dateutil==2.9.0.post0,pytz==2025.2,requests==2.32.5,scipy==1.17.1,setuptools==50.3.2,six==1.17.0,urllib3==2.6.3,Werkzeug==2.0.3,xlrd==1.2.0
13:26:05 testsPCE: commands[0] /w/workspace/transportpce-tox-verify-transportpce-master/tests> ./launch_tests.sh pce
13:26:05 pytest -q transportpce_tests/pce/test01_pce.py
13:26:13 ............ [100%]
13:26:26 12 passed in 53.61s
13:26:26 pytest -q transportpce_tests/7.1/test02_otn_renderer.py
13:27:03 .................................................. [100%]
13:28:07 20 passed in 121.57s (0:02:01)
13:28:07 pytest -q transportpce_tests/pce/test02_pce_400G.py
13:28:08 .................................. [100%]
13:28:53 12 passed in 46.26s
13:28:53 pytest -q transportpce_tests/pce/test03_gnpy.py
13:28:54 ............. [100%]
13:29:14 62 passed in 167.68s (0:02:47)
13:29:14 pytest -q transportpce_tests/7.1/test03_renderer_or_modes.py
13:29:16 ..... [100%]
13:29:33 8 passed in 38.80s
13:29:33 pytest -q transportpce_tests/pce/test04_pce_bug_fix.py
13:29:50 ............. [100%]
13:30:13 3 passed in 39.67s
13:30:13 testsPCE: OK ✔ in 5 minutes 4.72 seconds
13:30:13 tests_tapi: install_deps> python -I -m pip install 'setuptools>=7.0' -r /w/workspace/transportpce-tox-verify-transportpce-master/tests/requirements.txt -r /w/workspace/transportpce-tox-verify-transportpce-master/tests/test-requirements.txt
13:30:13 tests121: install_deps> python -I -m pip install 'setuptools>=7.0' -r /w/workspace/transportpce-tox-verify-transportpce-master/tests/requirements.txt -r /w/workspace/transportpce-tox-verify-transportpce-master/tests/test-requirements.txt
13:30:13 tests200: install_deps> python -I -m pip install 'setuptools>=7.0' -r /w/workspace/transportpce-tox-verify-transportpce-master/tests/requirements.txt -r /w/workspace/transportpce-tox-verify-transportpce-master/tests/test-requirements.txt
13:30:14 ...tests_tapi: freeze> python -m pip freeze --all
13:30:20 tests200: freeze> python -m pip freeze --all
13:30:20 .tests121: freeze> python -m pip freeze --all
13:30:21 tests_tapi: bcrypt==5.0.0,certifi==2026.2.25,cffi==2.0.0,charset-normalizer==3.4.4,cryptography==46.0.5,dict2xml==1.7.8,idna==3.11,iniconfig==2.3.0,invoke==2.2.1,lxml==6.0.2,netconf-client==3.5.0,packaging==26.0,paramiko==4.0.0,pip==26.0.1,pluggy==1.6.0,psutil==7.2.2,pycparser==3.0,Pygments==2.19.2,PyNaCl==1.6.2,pytest==9.0.2,requests==2.32.5,setuptools==82.0.0,urllib3==2.6.3
13:30:21 tests_tapi: commands[0] /w/workspace/transportpce-tox-verify-transportpce-master/tests> ./launch_tests.sh tapi
13:30:21 using environment variables from ./karaf221.env
13:30:21 pytest -q transportpce_tests/tapi/test01_abstracted_topology.py
13:30:21 tests200: bcrypt==5.0.0,certifi==2026.2.25,cffi==2.0.0,charset-normalizer==3.4.4,cryptography==46.0.5,dict2xml==1.7.8,idna==3.11,iniconfig==2.3.0,invoke==2.2.1,lxml==6.0.2,netconf-client==3.5.0,packaging==26.0,paramiko==4.0.0,pip==26.0.1,pluggy==1.6.0,psutil==7.2.2,pycparser==3.0,Pygments==2.19.2,PyNaCl==1.6.2,pytest==9.0.2,requests==2.32.5,setuptools==82.0.0,urllib3==2.6.3
13:30:21 tests200: commands[0] /w/workspace/transportpce-tox-verify-transportpce-master/tests> ./launch_tests.sh oc200
13:30:21 using environment variables from ./karafoc200.env
13:30:21 pytest -q transportpce_tests/oc200/test01_portmapping.py
13:30:21 tests121: bcrypt==5.0.0,certifi==2026.2.25,cffi==2.0.0,charset-normalizer==3.4.4,cryptography==46.0.5,dict2xml==1.7.8,idna==3.11,iniconfig==2.3.0,invoke==2.2.1,lxml==6.0.2,netconf-client==3.5.0,packaging==26.0,paramiko==4.0.0,pip==26.0.1,pluggy==1.6.0,psutil==7.2.2,pycparser==3.0,Pygments==2.19.2,PyNaCl==1.6.2,pytest==9.0.2,requests==2.32.5,setuptools==82.0.0,urllib3==2.6.3
13:30:21 tests121: commands[0] /w/workspace/transportpce-tox-verify-transportpce-master/tests> ./launch_tests.sh 1.2.1
13:30:21 using environment variables from ./karaf121.env
13:30:21 pytest -q transportpce_tests/1.2.1/test01_portmapping.py
13:30:22 .......................................
[100%] 13:31:37 48 passed in 142.96s (0:02:22) 13:31:38 pytest -q transportpce_tests/7.1/test04_renderer_regen_mode.py 13:31:38 ..... [100%] 13:31:44 10 passed in 83.47s (0:01:23) 13:31:45 pytest -q transportpce_tests/oc200/test02_topology.py 13:32:25 ................................................................ [100%] 13:33:09 14 passed in 84.07s (0:01:24) 13:33:09 pytest -q transportpce_tests/oc200/test03_renderer.py 13:33:09 .. [100%] 13:33:13 22 passed in 95.37s (0:01:35) 13:33:23 ....................... [100%] 13:33:58 16 passed in 48.09s 13:34:07 ...........F.FFFFFFFFFFFFFFFFFFF [100%] 13:34:55 =================================== FAILURES =================================== 13:34:55 ___________ TestTransportPCEPortmapping.test_02_rdm_device_connected ___________ 13:34:55 13:34:55 self = 13:34:55 13:34:55 def _new_conn(self) -> socket.socket: 13:34:55 """Establish a socket connection and set nodelay settings on it. 13:34:55 13:34:55 :return: New socket connection. 13:34:55 """ 13:34:55 try: 13:34:55 > sock = connection.create_connection( 13:34:55 (self._dns_host, self.port), 13:34:55 self.timeout, 13:34:55 source_address=self.source_address, 13:34:55 socket_options=self.socket_options, 13:34:55 ) 13:34:55 13:34:55 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:204: 13:34:55 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 13:34:55 ../.tox/tests121/lib/python3.11/site-packages/urllib3/util/connection.py:85: in create_connection 13:34:55 raise err 13:34:55 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 13:34:55 13:34:55 address = ('localhost', 8191), timeout = 30, source_address = None 13:34:55 socket_options = [(6, 1, 1)] 13:34:55 13:34:55 def create_connection( 13:34:55 address: tuple[str, int], 13:34:55 timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 13:34:55 source_address: tuple[str, int] | None = None, 13:34:55 socket_options: _TYPE_SOCKET_OPTIONS | None = None, 13:34:55 ) 
-> socket.socket: 13:34:55 """Connect to *address* and return the socket object. 13:34:55 13:34:55 Convenience function. Connect to *address* (a 2-tuple ``(host, 13:34:55 port)``) and return the socket object. Passing the optional 13:34:55 *timeout* parameter will set the timeout on the socket instance 13:34:55 before attempting to connect. If no *timeout* is supplied, the 13:34:55 global default timeout setting returned by :func:`socket.getdefaulttimeout` 13:34:55 is used. If *source_address* is set it must be a tuple of (host, port) 13:34:55 for the socket to bind as a source address before making the connection. 13:34:55 An host of '' or port 0 tells the OS to use the default. 13:34:55 """ 13:34:55 13:34:55 host, port = address 13:34:55 if host.startswith("["): 13:34:55 host = host.strip("[]") 13:34:55 err = None 13:34:55 13:34:55 # Using the value from allowed_gai_family() in the context of getaddrinfo lets 13:34:55 # us select whether to work with IPv4 DNS records, IPv6 records, or both. 13:34:55 # The original create_connection function always returns all records. 13:34:55 family = allowed_gai_family() 13:34:55 13:34:55 try: 13:34:55 host.encode("idna") 13:34:55 except UnicodeError: 13:34:55 raise LocationParseError(f"'{host}', label empty or too long") from None 13:34:55 13:34:55 for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM): 13:34:55 af, socktype, proto, canonname, sa = res 13:34:55 sock = None 13:34:55 try: 13:34:55 sock = socket.socket(af, socktype, proto) 13:34:55 13:34:55 # If provided, set socket level options before connecting. 
13:34:55 _set_socket_options(sock, socket_options) 13:34:55 13:34:55 if timeout is not _DEFAULT_TIMEOUT: 13:34:55 sock.settimeout(timeout) 13:34:55 if source_address: 13:34:55 sock.bind(source_address) 13:34:55 > sock.connect(sa) 13:34:55 E ConnectionRefusedError: [Errno 111] Connection refused 13:34:55 13:34:55 ../.tox/tests121/lib/python3.11/site-packages/urllib3/util/connection.py:73: ConnectionRefusedError 13:34:55 13:34:55 The above exception was the direct cause of the following exception: 13:34:55 13:34:55 self = 13:34:55 method = 'GET' 13:34:55 url = '/rests/data/network-topology:network-topology/topology=topology-netconf/node=ROADMA01?content=nonconfig' 13:34:55 body = None 13:34:55 headers = {'User-Agent': 'python-requests/2.32.5', 'Accept-Encoding': 'gzip, deflate', 'Accept': 'application/json', 'Connection': 'keep-alive', 'Content-Type': 'application/json', 'Authorization': 'Basic YWRtaW46YWRtaW4='} 13:34:55 retries = Retry(total=0, connect=None, read=False, redirect=None, status=None) 13:34:55 redirect = False, assert_same_host = False 13:34:55 timeout = Timeout(connect=30, read=30, total=None), pool_timeout = None 13:34:55 release_conn = False, chunked = False, body_pos = None, preload_content = False 13:34:55 decode_content = False, response_kw = {} 13:34:55 parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/rests/data/network-topology:network-topology/topology=topology-netconf/node=ROADMA01', query='content=nonconfig', fragment=None) 13:34:55 destination_scheme = None, conn = None, release_this_conn = True 13:34:55 http_tunnel_required = False, err = None, clean_exit = False 13:34:55 13:34:55 def urlopen( # type: ignore[override] 13:34:55 self, 13:34:55 method: str, 13:34:55 url: str, 13:34:55 body: _TYPE_BODY | None = None, 13:34:55 headers: typing.Mapping[str, str] | None = None, 13:34:55 retries: Retry | bool | int | None = None, 13:34:55 redirect: bool = True, 13:34:55 assert_same_host: bool = True, 13:34:55 timeout: 
_TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 13:34:55 pool_timeout: int | None = None, 13:34:55 release_conn: bool | None = None, 13:34:55 chunked: bool = False, 13:34:55 body_pos: _TYPE_BODY_POSITION | None = None, 13:34:55 preload_content: bool = True, 13:34:55 decode_content: bool = True, 13:34:55 **response_kw: typing.Any, 13:34:55 ) -> BaseHTTPResponse: 13:34:55 """ 13:34:55 Get a connection from the pool and perform an HTTP request. This is the 13:34:55 lowest level call for making a request, so you'll need to specify all 13:34:55 the raw details. 13:34:55 13:34:55 .. note:: 13:34:55 13:34:55 More commonly, it's appropriate to use a convenience method 13:34:55 such as :meth:`request`. 13:34:55 13:34:55 .. note:: 13:34:55 13:34:55 `release_conn` will only behave as expected if 13:34:55 `preload_content=False` because we want to make 13:34:55 `preload_content=False` the default behaviour someday soon without 13:34:55 breaking backwards compatibility. 13:34:55 13:34:55 :param method: 13:34:55 HTTP request method (such as GET, POST, PUT, etc.) 13:34:55 13:34:55 :param url: 13:34:55 The URL to perform the request on. 13:34:55 13:34:55 :param body: 13:34:55 Data to send in the request body, either :class:`str`, :class:`bytes`, 13:34:55 an iterable of :class:`str`/:class:`bytes`, or a file-like object. 13:34:55 13:34:55 :param headers: 13:34:55 Dictionary of custom headers to send, such as User-Agent, 13:34:55 If-None-Match, etc. If None, pool headers are used. If provided, 13:34:55 these headers completely replace any pool-specific headers. 13:34:55 13:34:55 :param retries: 13:34:55 Configure the number of retries to allow before raising a 13:34:55 :class:`~urllib3.exceptions.MaxRetryError` exception. 13:34:55 13:34:55 If ``None`` (default) will retry 3 times, see ``Retry.DEFAULT``. Pass a 13:34:55 :class:`~urllib3.util.retry.Retry` object for fine-grained control 13:34:55 over different types of retries. 
13:34:55 Pass an integer number to retry connection errors that many times, 13:34:55 but no other types of errors. Pass zero to never retry. 13:34:55 13:34:55 If ``False``, then retries are disabled and any exception is raised 13:34:55 immediately. Also, instead of raising a MaxRetryError on redirects, 13:34:55 the redirect response will be returned. 13:34:55 13:34:55 :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. 13:34:55 13:34:55 :param redirect: 13:34:55 If True, automatically handle redirects (status codes 301, 302, 13:34:55 303, 307, 308). Each redirect counts as a retry. Disabling retries 13:34:55 will disable redirect, too. 13:34:55 13:34:55 :param assert_same_host: 13:34:55 If ``True``, will make sure that the host of the pool requests is 13:34:55 consistent else will raise HostChangedError. When ``False``, you can 13:34:55 use the pool on an HTTP proxy and request foreign hosts. 13:34:55 13:34:55 :param timeout: 13:34:55 If specified, overrides the default timeout for this one 13:34:55 request. It may be a float (in seconds) or an instance of 13:34:55 :class:`urllib3.util.Timeout`. 13:34:55 13:34:55 :param pool_timeout: 13:34:55 If set and the pool is set to block=True, then this method will 13:34:55 block for ``pool_timeout`` seconds and raise EmptyPoolError if no 13:34:55 connection is available within the time period. 13:34:55 13:34:55 :param bool preload_content: 13:34:55 If True, the response's body will be preloaded into memory. 13:34:55 13:34:55 :param bool decode_content: 13:34:55 If True, will attempt to decode the body based on the 13:34:55 'content-encoding' header. 13:34:55 13:34:55 :param release_conn: 13:34:55 If False, then the urlopen call will not release the connection 13:34:55 back into the pool once a response is received (but will release if 13:34:55 you read the entire contents of the response such as when 13:34:55 `preload_content=True`). 
This is useful if you're not preloading 13:34:55 the response's content immediately. You will need to call 13:34:55 ``r.release_conn()`` on the response ``r`` to return the connection 13:34:55 back into the pool. If None, it takes the value of ``preload_content`` 13:34:55 which defaults to ``True``. 13:34:55 13:34:55 :param bool chunked: 13:34:55 If True, urllib3 will send the body using chunked transfer 13:34:55 encoding. Otherwise, urllib3 will send the body using the standard 13:34:55 content-length form. Defaults to False. 13:34:55 13:34:55 :param int body_pos: 13:34:55 Position to seek to in file-like body in the event of a retry or 13:34:55 redirect. Typically this won't need to be set because urllib3 will 13:34:55 auto-populate the value when needed. 13:34:55 """ 13:34:55 parsed_url = parse_url(url) 13:34:55 destination_scheme = parsed_url.scheme 13:34:55 13:34:55 if headers is None: 13:34:55 headers = self.headers 13:34:55 13:34:55 if not isinstance(retries, Retry): 13:34:55 retries = Retry.from_int(retries, redirect=redirect, default=self.retries) 13:34:55 13:34:55 if release_conn is None: 13:34:55 release_conn = preload_content 13:34:55 13:34:55 # Check host 13:34:55 if assert_same_host and not self.is_same_host(url): 13:34:55 raise HostChangedError(self, url, retries) 13:34:55 13:34:55 # Ensure that the URL we're connecting to is properly encoded 13:34:55 if url.startswith("/"): 13:34:55 url = to_str(_encode_target(url)) 13:34:55 else: 13:34:55 url = to_str(parsed_url.url) 13:34:55 13:34:55 conn = None 13:34:55 13:34:55 # Track whether `conn` needs to be released before 13:34:55 # returning/raising/recursing. Update this variable if necessary, and 13:34:55 # leave `release_conn` constant throughout the function. That way, if 13:34:55 # the function recurses, the original value of `release_conn` will be 13:34:55 # passed down into the recursive call, and its value will be respected. 13:34:55 # 13:34:55 # See issue #651 [1] for details. 
13:34:55         #
13:34:55         # [1]
13:34:55         release_this_conn = release_conn
13:34:55 
13:34:55         http_tunnel_required = connection_requires_http_tunnel(
13:34:55             self.proxy, self.proxy_config, destination_scheme
13:34:55         )
13:34:55 
13:34:55         # Merge the proxy headers. Only done when not using HTTP CONNECT. We
13:34:55         # have to copy the headers dict so we can safely change it without those
13:34:55         # changes being reflected in anyone else's copy.
13:34:55         if not http_tunnel_required:
13:34:55             headers = headers.copy()  # type: ignore[attr-defined]
13:34:55             headers.update(self.proxy_headers)  # type: ignore[union-attr]
13:34:55 
13:34:55         # Must keep the exception bound to a separate variable or else Python 3
13:34:55         # complains about UnboundLocalError.
13:34:55         err = None
13:34:55 
13:34:55         # Keep track of whether we cleanly exited the except block. This
13:34:55         # ensures we do proper cleanup in finally.
13:34:55         clean_exit = False
13:34:55 
13:34:55         # Rewind body position, if needed. Record current position
13:34:55         # for future rewinds in the event of a redirect/retry.
13:34:55         body_pos = set_file_position(body, body_pos)
13:34:55 
13:34:55         try:
13:34:55             # Request a connection from the queue.
13:34:55             timeout_obj = self._get_timeout(timeout)
13:34:55             conn = self._get_conn(timeout=pool_timeout)
13:34:55 
13:34:55             conn.timeout = timeout_obj.connect_timeout  # type: ignore[assignment]
13:34:55 
13:34:55             # Is this a closed/new connection that requires CONNECT tunnelling?
13:34:55             if self.proxy is not None and http_tunnel_required and conn.is_closed:
13:34:55                 try:
13:34:55                     self._prepare_proxy(conn)
13:34:55                 except (BaseSSLError, OSError, SocketTimeout) as e:
13:34:55                     self._raise_timeout(
13:34:55                         err=e, url=self.proxy.url, timeout_value=conn.timeout
13:34:55                     )
13:34:55                     raise
13:34:55 
13:34:55             # If we're going to release the connection in ``finally:``, then
13:34:55             # the response doesn't need to know about the connection. Otherwise
13:34:55             # it will also try to release it and we'll have a double-release
13:34:55             # mess.
13:34:55             response_conn = conn if not release_conn else None
13:34:55 
13:34:55             # Make the request on the HTTPConnection object
13:34:55 >           response = self._make_request(
13:34:55                 conn,
13:34:55                 method,
13:34:55                 url,
13:34:55                 timeout=timeout_obj,
13:34:55                 body=body,
13:34:55                 headers=headers,
13:34:55                 chunked=chunked,
13:34:55                 retries=retries,
13:34:55                 response_conn=response_conn,
13:34:55                 preload_content=preload_content,
13:34:55                 decode_content=decode_content,
13:34:55                 **response_kw,
13:34:55             )
13:34:55 
13:34:55 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connectionpool.py:787: 
13:34:55 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
13:34:55 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connectionpool.py:493: in _make_request
13:34:55     conn.request(
13:34:55 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:500: in request
13:34:55     self.endheaders()
13:34:55 /opt/pyenv/versions/3.11.10/lib/python3.11/http/client.py:1298: in endheaders
13:34:55     self._send_output(message_body, encode_chunked=encode_chunked)
13:34:55 /opt/pyenv/versions/3.11.10/lib/python3.11/http/client.py:1058: in _send_output
13:34:55     self.send(msg)
13:34:55 /opt/pyenv/versions/3.11.10/lib/python3.11/http/client.py:996: in send
13:34:55     self.connect()
13:34:55 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:331: in connect
13:34:55     self.sock = self._new_conn()
13:34:55                 ^^^^^^^^^^^^^^^^
13:34:55 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
13:34:55 
13:34:55 self = 
13:34:55 
13:34:55     def _new_conn(self) -> socket.socket:
13:34:55         """Establish a socket connection and set nodelay settings on it.
13:34:55 
13:34:55         :return: New socket connection.
13:34:55 """ 13:34:55 try: 13:34:55 sock = connection.create_connection( 13:34:55 (self._dns_host, self.port), 13:34:55 self.timeout, 13:34:55 source_address=self.source_address, 13:34:55 socket_options=self.socket_options, 13:34:55 ) 13:34:55 except socket.gaierror as e: 13:34:55 raise NameResolutionError(self.host, self, e) from e 13:34:55 except SocketTimeout as e: 13:34:55 raise ConnectTimeoutError( 13:34:55 self, 13:34:55 f"Connection to {self.host} timed out. (connect timeout={self.timeout})", 13:34:55 ) from e 13:34:55 13:34:55 except OSError as e: 13:34:55 > raise NewConnectionError( 13:34:55 self, f"Failed to establish a new connection: {e}" 13:34:55 ) from e 13:34:55 E urllib3.exceptions.NewConnectionError: HTTPConnection(host='localhost', port=8191): Failed to establish a new connection: [Errno 111] Connection refused 13:34:55 13:34:55 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:219: NewConnectionError 13:34:55 13:34:55 The above exception was the direct cause of the following exception: 13:34:55 13:34:55 self = 13:34:55 request = , stream = False 13:34:55 timeout = Timeout(connect=30, read=30, total=None), verify = True, cert = None 13:34:55 proxies = OrderedDict() 13:34:55 13:34:55 def send( 13:34:55 self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 13:34:55 ): 13:34:55 """Sends PreparedRequest object. Returns Response object. 13:34:55 13:34:55 :param request: The :class:`PreparedRequest ` being sent. 13:34:55 :param stream: (optional) Whether to stream the request content. 13:34:55 :param timeout: (optional) How long to wait for the server to send 13:34:55 data before giving up, as a float, or a :ref:`(connect timeout, 13:34:55 read timeout) ` tuple. 
13:34:55         :type timeout: float or tuple or urllib3 Timeout object
13:34:55         :param verify: (optional) Either a boolean, in which case it controls whether
13:34:55             we verify the server's TLS certificate, or a string, in which case it
13:34:55             must be a path to a CA bundle to use
13:34:55         :param cert: (optional) Any user-provided SSL certificate to be trusted.
13:34:55         :param proxies: (optional) The proxies dictionary to apply to the request.
13:34:55         :rtype: requests.Response
13:34:55         """
13:34:55 
13:34:55         try:
13:34:55             conn = self.get_connection_with_tls_context(
13:34:55                 request, verify, proxies=proxies, cert=cert
13:34:55             )
13:34:55         except LocationValueError as e:
13:34:55             raise InvalidURL(e, request=request)
13:34:55 
13:34:55         self.cert_verify(conn, request.url, verify, cert)
13:34:55         url = self.request_url(request, proxies)
13:34:55         self.add_headers(
13:34:55             request,
13:34:55             stream=stream,
13:34:55             timeout=timeout,
13:34:55             verify=verify,
13:34:55             cert=cert,
13:34:55             proxies=proxies,
13:34:55         )
13:34:55 
13:34:55         chunked = not (request.body is None or "Content-Length" in request.headers)
13:34:55 
13:34:55         if isinstance(timeout, tuple):
13:34:55             try:
13:34:55                 connect, read = timeout
13:34:55                 timeout = TimeoutSauce(connect=connect, read=read)
13:34:55             except ValueError:
13:34:55                 raise ValueError(
13:34:55                     f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, "
13:34:55                     f"or a single float to set both timeouts to the same value."
13:34:55                 )
13:34:55         elif isinstance(timeout, TimeoutSauce):
13:34:55             pass
13:34:55         else:
13:34:55             timeout = TimeoutSauce(connect=timeout, read=timeout)
13:34:55 
13:34:55         try:
13:34:55 >           resp = conn.urlopen(
13:34:55                 method=request.method,
13:34:55                 url=url,
13:34:55                 body=request.body,
13:34:55                 headers=request.headers,
13:34:55                 redirect=False,
13:34:55                 assert_same_host=False,
13:34:55                 preload_content=False,
13:34:55                 decode_content=False,
13:34:55                 retries=self.max_retries,
13:34:55                 timeout=timeout,
13:34:55                 chunked=chunked,
13:34:55             )
13:34:55 
13:34:55 ../.tox/tests121/lib/python3.11/site-packages/requests/adapters.py:644: 
13:34:55 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
13:34:55 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connectionpool.py:841: in urlopen
13:34:55     retries = retries.increment(
13:34:55 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
13:34:55 
13:34:55 self = Retry(total=0, connect=None, read=False, redirect=None, status=None)
13:34:55 method = 'GET'
13:34:55 url = '/rests/data/network-topology:network-topology/topology=topology-netconf/node=ROADMA01?content=nonconfig'
13:34:55 response = None
13:34:55 error = NewConnectionError("HTTPConnection(host='localhost', port=8191): Failed to establish a new connection: [Errno 111] Connection refused")
13:34:55 _pool = 
13:34:55 _stacktrace = 
13:34:55 
13:34:55     def increment(
13:34:55         self,
13:34:55         method: str | None = None,
13:34:55         url: str | None = None,
13:34:55         response: BaseHTTPResponse | None = None,
13:34:55         error: Exception | None = None,
13:34:55         _pool: ConnectionPool | None = None,
13:34:55         _stacktrace: TracebackType | None = None,
13:34:55     ) -> Self:
13:34:55         """Return a new Retry object with incremented retry counters.
13:34:55 
13:34:55         :param response: A response object, or None, if the server did not
13:34:55             return a response.
13:34:55         :type response: :class:`~urllib3.response.BaseHTTPResponse`
13:34:55         :param Exception error: An error encountered during the request, or
13:34:55             None if the response was received successfully.
13:34:55 
13:34:55         :return: A new ``Retry`` object.
13:34:55         """
13:34:55         if self.total is False and error:
13:34:55             # Disabled, indicate to re-raise the error.
13:34:55             raise reraise(type(error), error, _stacktrace)
13:34:55 
13:34:55         total = self.total
13:34:55         if total is not None:
13:34:55             total -= 1
13:34:55 
13:34:55         connect = self.connect
13:34:55         read = self.read
13:34:55         redirect = self.redirect
13:34:55         status_count = self.status
13:34:55         other = self.other
13:34:55         cause = "unknown"
13:34:55         status = None
13:34:55         redirect_location = None
13:34:55 
13:34:55         if error and self._is_connection_error(error):
13:34:55             # Connect retry?
13:34:55             if connect is False:
13:34:55                 raise reraise(type(error), error, _stacktrace)
13:34:55             elif connect is not None:
13:34:55                 connect -= 1
13:34:55 
13:34:55         elif error and self._is_read_error(error):
13:34:55             # Read retry?
13:34:55             if read is False or method is None or not self._is_method_retryable(method):
13:34:55                 raise reraise(type(error), error, _stacktrace)
13:34:55             elif read is not None:
13:34:55                 read -= 1
13:34:55 
13:34:55         elif error:
13:34:55             # Other retry?
13:34:55             if other is not None:
13:34:55                 other -= 1
13:34:55 
13:34:55         elif response and response.get_redirect_location():
13:34:55             # Redirect retry?
13:34:55             if redirect is not None:
13:34:55                 redirect -= 1
13:34:55                 cause = "too many redirects"
13:34:55             response_redirect_location = response.get_redirect_location()
13:34:55             if response_redirect_location:
13:34:55                 redirect_location = response_redirect_location
13:34:55             status = response.status
13:34:55 
13:34:55         else:
13:34:55             # Incrementing because of a server error like a 500 in
13:34:55             # status_forcelist and the given method is in the allowed_methods
13:34:55             cause = ResponseError.GENERIC_ERROR
13:34:55             if response and response.status:
13:34:55                 if status_count is not None:
13:34:55                     status_count -= 1
13:34:55                 cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status)
13:34:55                 status = response.status
13:34:55 
13:34:55         history = self.history + (
13:34:55             RequestHistory(method, url, error, status, redirect_location),
13:34:55         )
13:34:55 
13:34:55         new_retry = self.new(
13:34:55             total=total,
13:34:55             connect=connect,
13:34:55             read=read,
13:34:55             redirect=redirect,
13:34:55             status=status_count,
13:34:55             other=other,
13:34:55             history=history,
13:34:55         )
13:34:55 
13:34:55         if new_retry.is_exhausted():
13:34:55             reason = error or ResponseError(cause)
13:34:55 >           raise MaxRetryError(_pool, url, reason) from reason  # type: ignore[arg-type]
13:34:55             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
13:34:55 E           urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=8191): Max retries exceeded with url: /rests/data/network-topology:network-topology/topology=topology-netconf/node=ROADMA01?content=nonconfig (Caused by NewConnectionError("HTTPConnection(host='localhost', port=8191): Failed to establish a new connection: [Errno 111] Connection refused"))
13:34:55 
13:34:55 ../.tox/tests121/lib/python3.11/site-packages/urllib3/util/retry.py:535: MaxRetryError
13:34:55 
13:34:55 During handling of the above exception, another exception occurred:
13:34:55 
13:34:55 self = 
13:34:55 
13:34:55     def test_02_rdm_device_connected(self):
13:34:55 >       response = test_utils.check_device_connection("ROADMA01")
13:34:55                    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
13:34:55 
13:34:55 transportpce_tests/1.2.1/test01_portmapping.py:54: 
13:34:55 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
13:34:55 transportpce_tests/common/test_utils.py:409: in check_device_connection
13:34:55     response = get_request(url[RESTCONF_VERSION].format('{}', node))
13:34:55                ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
13:34:55 transportpce_tests/common/test_utils.py:117: in get_request
13:34:55     return requests.request(
13:34:55 ../.tox/tests121/lib/python3.11/site-packages/requests/api.py:59: in request
13:34:55     return session.request(method=method, url=url, **kwargs)
13:34:55            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
13:34:55 ../.tox/tests121/lib/python3.11/site-packages/requests/sessions.py:589: in request
13:34:55     resp = self.send(prep, **send_kwargs)
13:34:55            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
13:34:55 ../.tox/tests121/lib/python3.11/site-packages/requests/sessions.py:703: in send
13:34:55     r = adapter.send(request, **kwargs)
13:34:55         ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
13:34:55 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
13:34:55 
13:34:55 self = 
13:34:55 request = , stream = False
13:34:55 timeout = Timeout(connect=30, read=30, total=None), verify = True, cert = None
13:34:55 proxies = OrderedDict()
13:34:55 
13:34:55     def send(
13:34:55         self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None
13:34:55     ):
13:34:55         """Sends PreparedRequest object. Returns Response object.
13:34:55 
13:34:55         :param request: The :class:`PreparedRequest ` being sent.
13:34:55         :param stream: (optional) Whether to stream the request content.
13:34:55         :param timeout: (optional) How long to wait for the server to send
13:34:55             data before giving up, as a float, or a :ref:`(connect timeout,
13:34:55             read timeout) ` tuple.
13:34:55         :type timeout: float or tuple or urllib3 Timeout object
13:34:55         :param verify: (optional) Either a boolean, in which case it controls whether
13:34:55             we verify the server's TLS certificate, or a string, in which case it
13:34:55             must be a path to a CA bundle to use
13:34:55         :param cert: (optional) Any user-provided SSL certificate to be trusted.
13:34:55         :param proxies: (optional) The proxies dictionary to apply to the request.
13:34:55         :rtype: requests.Response
13:34:55         """
13:34:55 
13:34:55         try:
13:34:55             conn = self.get_connection_with_tls_context(
13:34:55                 request, verify, proxies=proxies, cert=cert
13:34:55             )
13:34:55         except LocationValueError as e:
13:34:55             raise InvalidURL(e, request=request)
13:34:55 
13:34:55         self.cert_verify(conn, request.url, verify, cert)
13:34:55         url = self.request_url(request, proxies)
13:34:55         self.add_headers(
13:34:55             request,
13:34:55             stream=stream,
13:34:55             timeout=timeout,
13:34:55             verify=verify,
13:34:55             cert=cert,
13:34:55             proxies=proxies,
13:34:55         )
13:34:55 
13:34:55         chunked = not (request.body is None or "Content-Length" in request.headers)
13:34:55 
13:34:55         if isinstance(timeout, tuple):
13:34:55             try:
13:34:55                 connect, read = timeout
13:34:55                 timeout = TimeoutSauce(connect=connect, read=read)
13:34:55             except ValueError:
13:34:55                 raise ValueError(
13:34:55                     f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, "
13:34:55                     f"or a single float to set both timeouts to the same value."
13:34:55                 )
13:34:55         elif isinstance(timeout, TimeoutSauce):
13:34:55             pass
13:34:55         else:
13:34:55             timeout = TimeoutSauce(connect=timeout, read=timeout)
13:34:55 
13:34:55         try:
13:34:55             resp = conn.urlopen(
13:34:55                 method=request.method,
13:34:55                 url=url,
13:34:55                 body=request.body,
13:34:55                 headers=request.headers,
13:34:55                 redirect=False,
13:34:55                 assert_same_host=False,
13:34:55                 preload_content=False,
13:34:55                 decode_content=False,
13:34:55                 retries=self.max_retries,
13:34:55                 timeout=timeout,
13:34:55                 chunked=chunked,
13:34:55             )
13:34:55 
13:34:55         except (ProtocolError, OSError) as err:
13:34:55             raise ConnectionError(err, request=request)
13:34:55 
13:34:55         except MaxRetryError as e:
13:34:55             if isinstance(e.reason, ConnectTimeoutError):
13:34:55                 # TODO: Remove this in 3.0.0: see #2811
13:34:55                 if not isinstance(e.reason, NewConnectionError):
13:34:55                     raise ConnectTimeout(e, request=request)
13:34:55 
13:34:55             if isinstance(e.reason, ResponseError):
13:34:55                 raise RetryError(e, request=request)
13:34:55 
13:34:55             if isinstance(e.reason, _ProxyError):
13:34:55                 raise ProxyError(e, request=request)
13:34:55 
13:34:55             if isinstance(e.reason, _SSLError):
13:34:55                 # This branch is for urllib3 v1.22 and later.
13:34:55                 raise SSLError(e, request=request)
13:34:55 
13:34:55 >           raise ConnectionError(e, request=request)
13:34:55 E           requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=8191): Max retries exceeded with url: /rests/data/network-topology:network-topology/topology=topology-netconf/node=ROADMA01?content=nonconfig (Caused by NewConnectionError("HTTPConnection(host='localhost', port=8191): Failed to establish a new connection: [Errno 111] Connection refused"))
13:34:55 
13:34:55 ../.tox/tests121/lib/python3.11/site-packages/requests/adapters.py:677: ConnectionError
13:34:55 ----------------------------- Captured stdout call -----------------------------
13:34:55 execution of test_02_rdm_device_connected
13:34:55 ___________ TestTransportPCEPortmapping.test_03_rdm_portmapping_info ___________
13:34:55 
13:34:55 self = 
13:34:55 
13:34:55     def _new_conn(self) -> socket.socket:
13:34:55         """Establish a socket connection and set nodelay settings on it.
13:34:55 
13:34:55         :return: New socket connection.
13:34:55 """ 13:34:55 try: 13:34:55 > sock = connection.create_connection( 13:34:55 (self._dns_host, self.port), 13:34:55 self.timeout, 13:34:55 source_address=self.source_address, 13:34:55 socket_options=self.socket_options, 13:34:55 ) 13:34:55 13:34:55 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:204: 13:34:55 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 13:34:55 ../.tox/tests121/lib/python3.11/site-packages/urllib3/util/connection.py:85: in create_connection 13:34:55 raise err 13:34:55 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 13:34:55 13:34:55 address = ('localhost', 8191), timeout = 30, source_address = None 13:34:55 socket_options = [(6, 1, 1)] 13:34:55 13:34:55 def create_connection( 13:34:55 address: tuple[str, int], 13:34:55 timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 13:34:55 source_address: tuple[str, int] | None = None, 13:34:55 socket_options: _TYPE_SOCKET_OPTIONS | None = None, 13:34:55 ) -> socket.socket: 13:34:55 """Connect to *address* and return the socket object. 13:34:55 13:34:55 Convenience function. Connect to *address* (a 2-tuple ``(host, 13:34:55 port)``) and return the socket object. Passing the optional 13:34:55 *timeout* parameter will set the timeout on the socket instance 13:34:55 before attempting to connect. If no *timeout* is supplied, the 13:34:55 global default timeout setting returned by :func:`socket.getdefaulttimeout` 13:34:55 is used. If *source_address* is set it must be a tuple of (host, port) 13:34:55 for the socket to bind as a source address before making the connection. 13:34:55 An host of '' or port 0 tells the OS to use the default. 
13:34:55 """ 13:34:55 13:34:55 host, port = address 13:34:55 if host.startswith("["): 13:34:55 host = host.strip("[]") 13:34:55 err = None 13:34:55 13:34:55 # Using the value from allowed_gai_family() in the context of getaddrinfo lets 13:34:55 # us select whether to work with IPv4 DNS records, IPv6 records, or both. 13:34:55 # The original create_connection function always returns all records. 13:34:55 family = allowed_gai_family() 13:34:55 13:34:55 try: 13:34:55 host.encode("idna") 13:34:55 except UnicodeError: 13:34:55 raise LocationParseError(f"'{host}', label empty or too long") from None 13:34:55 13:34:55 for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM): 13:34:55 af, socktype, proto, canonname, sa = res 13:34:55 sock = None 13:34:55 try: 13:34:55 sock = socket.socket(af, socktype, proto) 13:34:55 13:34:55 # If provided, set socket level options before connecting. 13:34:55 _set_socket_options(sock, socket_options) 13:34:55 13:34:55 if timeout is not _DEFAULT_TIMEOUT: 13:34:55 sock.settimeout(timeout) 13:34:55 if source_address: 13:34:55 sock.bind(source_address) 13:34:55 > sock.connect(sa) 13:34:55 E ConnectionRefusedError: [Errno 111] Connection refused 13:34:55 13:34:55 ../.tox/tests121/lib/python3.11/site-packages/urllib3/util/connection.py:73: ConnectionRefusedError 13:34:55 13:34:55 The above exception was the direct cause of the following exception: 13:34:55 13:34:55 self = 13:34:55 method = 'GET' 13:34:55 url = '/rests/data/transportpce-portmapping:network/nodes=ROADMA01/node-info' 13:34:55 body = None 13:34:55 headers = {'User-Agent': 'python-requests/2.32.5', 'Accept-Encoding': 'gzip, deflate', 'Accept': 'application/json', 'Connection': 'keep-alive', 'Content-Type': 'application/json', 'Authorization': 'Basic YWRtaW46YWRtaW4='} 13:34:55 retries = Retry(total=0, connect=None, read=False, redirect=None, status=None) 13:34:55 redirect = False, assert_same_host = False 13:34:55 timeout = Timeout(connect=30, read=30, total=None), 
pool_timeout = None 13:34:55 release_conn = False, chunked = False, body_pos = None, preload_content = False 13:34:55 decode_content = False, response_kw = {} 13:34:55 parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/rests/data/transportpce-portmapping:network/nodes=ROADMA01/node-info', query=None, fragment=None) 13:34:55 destination_scheme = None, conn = None, release_this_conn = True 13:34:55 http_tunnel_required = False, err = None, clean_exit = False 13:34:55 13:34:55 def urlopen( # type: ignore[override] 13:34:55 self, 13:34:55 method: str, 13:34:55 url: str, 13:34:55 body: _TYPE_BODY | None = None, 13:34:55 headers: typing.Mapping[str, str] | None = None, 13:34:55 retries: Retry | bool | int | None = None, 13:34:55 redirect: bool = True, 13:34:55 assert_same_host: bool = True, 13:34:55 timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 13:34:55 pool_timeout: int | None = None, 13:34:55 release_conn: bool | None = None, 13:34:55 chunked: bool = False, 13:34:55 body_pos: _TYPE_BODY_POSITION | None = None, 13:34:55 preload_content: bool = True, 13:34:55 decode_content: bool = True, 13:34:55 **response_kw: typing.Any, 13:34:55 ) -> BaseHTTPResponse: 13:34:55 """ 13:34:55 Get a connection from the pool and perform an HTTP request. This is the 13:34:55 lowest level call for making a request, so you'll need to specify all 13:34:55 the raw details. 13:34:55 13:34:55 .. note:: 13:34:55 13:34:55 More commonly, it's appropriate to use a convenience method 13:34:55 such as :meth:`request`. 13:34:55 13:34:55 .. note:: 13:34:55 13:34:55 `release_conn` will only behave as expected if 13:34:55 `preload_content=False` because we want to make 13:34:55 `preload_content=False` the default behaviour someday soon without 13:34:55 breaking backwards compatibility. 13:34:55 13:34:55 :param method: 13:34:55 HTTP request method (such as GET, POST, PUT, etc.) 13:34:55 13:34:55 :param url: 13:34:55 The URL to perform the request on. 
13:34:55 
13:34:55         :param body:
13:34:55             Data to send in the request body, either :class:`str`, :class:`bytes`,
13:34:55             an iterable of :class:`str`/:class:`bytes`, or a file-like object.
13:34:55 
13:34:55         :param headers:
13:34:55             Dictionary of custom headers to send, such as User-Agent,
13:34:55             If-None-Match, etc. If None, pool headers are used. If provided,
13:34:55             these headers completely replace any pool-specific headers.
13:34:55 
13:34:55         :param retries:
13:34:55             Configure the number of retries to allow before raising a
13:34:55             :class:`~urllib3.exceptions.MaxRetryError` exception.
13:34:55 
13:34:55             If ``None`` (default) will retry 3 times, see ``Retry.DEFAULT``. Pass a
13:34:55             :class:`~urllib3.util.retry.Retry` object for fine-grained control
13:34:55             over different types of retries.
13:34:55             Pass an integer number to retry connection errors that many times,
13:34:55             but no other types of errors. Pass zero to never retry.
13:34:55 
13:34:55             If ``False``, then retries are disabled and any exception is raised
13:34:55             immediately. Also, instead of raising a MaxRetryError on redirects,
13:34:55             the redirect response will be returned.
13:34:55 
13:34:55         :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int.
13:34:55 
13:34:55         :param redirect:
13:34:55             If True, automatically handle redirects (status codes 301, 302,
13:34:55             303, 307, 308). Each redirect counts as a retry. Disabling retries
13:34:55             will disable redirect, too.
13:34:55 
13:34:55         :param assert_same_host:
13:34:55             If ``True``, will make sure that the host of the pool requests is
13:34:55             consistent else will raise HostChangedError. When ``False``, you can
13:34:55             use the pool on an HTTP proxy and request foreign hosts.
13:34:55 
13:34:55         :param timeout:
13:34:55             If specified, overrides the default timeout for this one
13:34:55             request. It may be a float (in seconds) or an instance of
13:34:55             :class:`urllib3.util.Timeout`.
13:34:55 
13:34:55         :param pool_timeout:
13:34:55             If set and the pool is set to block=True, then this method will
13:34:55             block for ``pool_timeout`` seconds and raise EmptyPoolError if no
13:34:55             connection is available within the time period.
13:34:55 
13:34:55         :param bool preload_content:
13:34:55             If True, the response's body will be preloaded into memory.
13:34:55 
13:34:55         :param bool decode_content:
13:34:55             If True, will attempt to decode the body based on the
13:34:55             'content-encoding' header.
13:34:55 
13:34:55         :param release_conn:
13:34:55             If False, then the urlopen call will not release the connection
13:34:55             back into the pool once a response is received (but will release if
13:34:55             you read the entire contents of the response such as when
13:34:55             `preload_content=True`). This is useful if you're not preloading
13:34:55             the response's content immediately. You will need to call
13:34:55             ``r.release_conn()`` on the response ``r`` to return the connection
13:34:55             back into the pool. If None, it takes the value of ``preload_content``
13:34:55             which defaults to ``True``.
13:34:55 
13:34:55         :param bool chunked:
13:34:55             If True, urllib3 will send the body using chunked transfer
13:34:55             encoding. Otherwise, urllib3 will send the body using the standard
13:34:55             content-length form. Defaults to False.
13:34:55 
13:34:55         :param int body_pos:
13:34:55             Position to seek to in file-like body in the event of a retry or
13:34:55             redirect. Typically this won't need to be set because urllib3 will
13:34:55             auto-populate the value when needed.
13:34:55 """ 13:34:55 parsed_url = parse_url(url) 13:34:55 destination_scheme = parsed_url.scheme 13:34:55 13:34:55 if headers is None: 13:34:55 headers = self.headers 13:34:55 13:34:55 if not isinstance(retries, Retry): 13:34:55 retries = Retry.from_int(retries, redirect=redirect, default=self.retries) 13:34:55 13:34:55 if release_conn is None: 13:34:55 release_conn = preload_content 13:34:55 13:34:55 # Check host 13:34:55 if assert_same_host and not self.is_same_host(url): 13:34:55 raise HostChangedError(self, url, retries) 13:34:55 13:34:55 # Ensure that the URL we're connecting to is properly encoded 13:34:55 if url.startswith("/"): 13:34:55 url = to_str(_encode_target(url)) 13:34:55 else: 13:34:55 url = to_str(parsed_url.url) 13:34:55 13:34:55 conn = None 13:34:55 13:34:55 # Track whether `conn` needs to be released before 13:34:55 # returning/raising/recursing. Update this variable if necessary, and 13:34:55 # leave `release_conn` constant throughout the function. That way, if 13:34:55 # the function recurses, the original value of `release_conn` will be 13:34:55 # passed down into the recursive call, and its value will be respected. 13:34:55 # 13:34:55 # See issue #651 [1] for details. 13:34:55 # 13:34:55 # [1] 13:34:55 release_this_conn = release_conn 13:34:55 13:34:55 http_tunnel_required = connection_requires_http_tunnel( 13:34:55 self.proxy, self.proxy_config, destination_scheme 13:34:55 ) 13:34:55 13:34:55 # Merge the proxy headers. Only done when not using HTTP CONNECT. We 13:34:55 # have to copy the headers dict so we can safely change it without those 13:34:55 # changes being reflected in anyone else's copy. 13:34:55 if not http_tunnel_required: 13:34:55 headers = headers.copy() # type: ignore[attr-defined] 13:34:55 headers.update(self.proxy_headers) # type: ignore[union-attr] 13:34:55 13:34:55 # Must keep the exception bound to a separate variable or else Python 3 13:34:55 # complains about UnboundLocalError. 
13:34:55         err = None
13:34:55 
13:34:55         # Keep track of whether we cleanly exited the except block. This
13:34:55         # ensures we do proper cleanup in finally.
13:34:55         clean_exit = False
13:34:55 
13:34:55         # Rewind body position, if needed. Record current position
13:34:55         # for future rewinds in the event of a redirect/retry.
13:34:55         body_pos = set_file_position(body, body_pos)
13:34:55 
13:34:55         try:
13:34:55             # Request a connection from the queue.
13:34:55             timeout_obj = self._get_timeout(timeout)
13:34:55             conn = self._get_conn(timeout=pool_timeout)
13:34:55 
13:34:55             conn.timeout = timeout_obj.connect_timeout  # type: ignore[assignment]
13:34:55 
13:34:55             # Is this a closed/new connection that requires CONNECT tunnelling?
13:34:55             if self.proxy is not None and http_tunnel_required and conn.is_closed:
13:34:55                 try:
13:34:55                     self._prepare_proxy(conn)
13:34:55                 except (BaseSSLError, OSError, SocketTimeout) as e:
13:34:55                     self._raise_timeout(
13:34:55                         err=e, url=self.proxy.url, timeout_value=conn.timeout
13:34:55                     )
13:34:55                     raise
13:34:55 
13:34:55             # If we're going to release the connection in ``finally:``, then
13:34:55             # the response doesn't need to know about the connection. Otherwise
13:34:55             # it will also try to release it and we'll have a double-release
13:34:55             # mess.
13:34:55             response_conn = conn if not release_conn else None
13:34:55 
13:34:55             # Make the request on the HTTPConnection object
13:34:55 >           response = self._make_request(
13:34:55                 conn,
13:34:55                 method,
13:34:55                 url,
13:34:55                 timeout=timeout_obj,
13:34:55                 body=body,
13:34:55                 headers=headers,
13:34:55                 chunked=chunked,
13:34:55                 retries=retries,
13:34:55                 response_conn=response_conn,
13:34:55                 preload_content=preload_content,
13:34:55                 decode_content=decode_content,
13:34:55                 **response_kw,
13:34:55             )
13:34:55 
13:34:55 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connectionpool.py:787: 
13:34:55 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
13:34:55 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connectionpool.py:493: in _make_request
13:34:55     conn.request(
13:34:55 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:500: in request
13:34:55     self.endheaders()
13:34:55 /opt/pyenv/versions/3.11.10/lib/python3.11/http/client.py:1298: in endheaders
13:34:55     self._send_output(message_body, encode_chunked=encode_chunked)
13:34:55 /opt/pyenv/versions/3.11.10/lib/python3.11/http/client.py:1058: in _send_output
13:34:55     self.send(msg)
13:34:55 /opt/pyenv/versions/3.11.10/lib/python3.11/http/client.py:996: in send
13:34:55     self.connect()
13:34:55 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:331: in connect
13:34:55     self.sock = self._new_conn()
13:34:55                 ^^^^^^^^^^^^^^^^
13:34:55 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
13:34:55 
13:34:55 self = 
13:34:55 
13:34:55     def _new_conn(self) -> socket.socket:
13:34:55         """Establish a socket connection and set nodelay settings on it.
13:34:55 
13:34:55         :return: New socket connection.
13:34:55 """ 13:34:55 try: 13:34:55 sock = connection.create_connection( 13:34:55 (self._dns_host, self.port), 13:34:55 self.timeout, 13:34:55 source_address=self.source_address, 13:34:55 socket_options=self.socket_options, 13:34:55 ) 13:34:55 except socket.gaierror as e: 13:34:55 raise NameResolutionError(self.host, self, e) from e 13:34:55 except SocketTimeout as e: 13:34:55 raise ConnectTimeoutError( 13:34:55 self, 13:34:55 f"Connection to {self.host} timed out. (connect timeout={self.timeout})", 13:34:55 ) from e 13:34:55 13:34:55 except OSError as e: 13:34:55 > raise NewConnectionError( 13:34:55 self, f"Failed to establish a new connection: {e}" 13:34:55 ) from e 13:34:55 E urllib3.exceptions.NewConnectionError: HTTPConnection(host='localhost', port=8191): Failed to establish a new connection: [Errno 111] Connection refused 13:34:55 13:34:55 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:219: NewConnectionError 13:34:55 13:34:55 The above exception was the direct cause of the following exception: 13:34:55 13:34:55 self = 13:34:55 request = , stream = False 13:34:55 timeout = Timeout(connect=30, read=30, total=None), verify = True, cert = None 13:34:55 proxies = OrderedDict() 13:34:55 13:34:55 def send( 13:34:55 self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 13:34:55 ): 13:34:55 """Sends PreparedRequest object. Returns Response object. 13:34:55 13:34:55 :param request: The :class:`PreparedRequest ` being sent. 13:34:55 :param stream: (optional) Whether to stream the request content. 13:34:55 :param timeout: (optional) How long to wait for the server to send 13:34:55 data before giving up, as a float, or a :ref:`(connect timeout, 13:34:55 read timeout) ` tuple. 
13:34:55 :type timeout: float or tuple or urllib3 Timeout object 13:34:55 :param verify: (optional) Either a boolean, in which case it controls whether 13:34:55 we verify the server's TLS certificate, or a string, in which case it 13:34:55 must be a path to a CA bundle to use 13:34:55 :param cert: (optional) Any user-provided SSL certificate to be trusted. 13:34:55 :param proxies: (optional) The proxies dictionary to apply to the request. 13:34:55 :rtype: requests.Response 13:34:55 """ 13:34:55 13:34:55 try: 13:34:55 conn = self.get_connection_with_tls_context( 13:34:55 request, verify, proxies=proxies, cert=cert 13:34:55 ) 13:34:55 except LocationValueError as e: 13:34:55 raise InvalidURL(e, request=request) 13:34:55 13:34:55 self.cert_verify(conn, request.url, verify, cert) 13:34:55 url = self.request_url(request, proxies) 13:34:55 self.add_headers( 13:34:55 request, 13:34:55 stream=stream, 13:34:55 timeout=timeout, 13:34:55 verify=verify, 13:34:55 cert=cert, 13:34:55 proxies=proxies, 13:34:55 ) 13:34:55 13:34:55 chunked = not (request.body is None or "Content-Length" in request.headers) 13:34:55 13:34:55 if isinstance(timeout, tuple): 13:34:55 try: 13:34:55 connect, read = timeout 13:34:55 timeout = TimeoutSauce(connect=connect, read=read) 13:34:55 except ValueError: 13:34:55 raise ValueError( 13:34:55 f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 13:34:55 f"or a single float to set both timeouts to the same value." 
13:34:55 ) 13:34:55 elif isinstance(timeout, TimeoutSauce): 13:34:55 pass 13:34:55 else: 13:34:55 timeout = TimeoutSauce(connect=timeout, read=timeout) 13:34:55 13:34:55 try: 13:34:55 > resp = conn.urlopen( 13:34:55 method=request.method, 13:34:55 url=url, 13:34:55 body=request.body, 13:34:55 headers=request.headers, 13:34:55 redirect=False, 13:34:55 assert_same_host=False, 13:34:55 preload_content=False, 13:34:55 decode_content=False, 13:34:55 retries=self.max_retries, 13:34:55 timeout=timeout, 13:34:55 chunked=chunked, 13:34:55 ) 13:34:55 13:34:55 ../.tox/tests121/lib/python3.11/site-packages/requests/adapters.py:644: 13:34:55 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 13:34:55 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connectionpool.py:841: in urlopen 13:34:55 retries = retries.increment( 13:34:55 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 13:34:55 13:34:55 self = Retry(total=0, connect=None, read=False, redirect=None, status=None) 13:34:55 method = 'GET' 13:34:55 url = '/rests/data/transportpce-portmapping:network/nodes=ROADMA01/node-info' 13:34:55 response = None 13:34:55 error = NewConnectionError("HTTPConnection(host='localhost', port=8191): Failed to establish a new connection: [Errno 111] Connection refused") 13:34:55 _pool = 13:34:55 _stacktrace = 13:34:55 13:34:55 def increment( 13:34:55 self, 13:34:55 method: str | None = None, 13:34:55 url: str | None = None, 13:34:55 response: BaseHTTPResponse | None = None, 13:34:55 error: Exception | None = None, 13:34:55 _pool: ConnectionPool | None = None, 13:34:55 _stacktrace: TracebackType | None = None, 13:34:55 ) -> Self: 13:34:55 """Return a new Retry object with incremented retry counters. 13:34:55 13:34:55 :param response: A response object, or None, if the server did not 13:34:55 return a response. 
13:34:55 :type response: :class:`~urllib3.response.BaseHTTPResponse` 13:34:55 :param Exception error: An error encountered during the request, or 13:34:55 None if the response was received successfully. 13:34:55 13:34:55 :return: A new ``Retry`` object. 13:34:55 """ 13:34:55 if self.total is False and error: 13:34:55 # Disabled, indicate to re-raise the error. 13:34:55 raise reraise(type(error), error, _stacktrace) 13:34:55 13:34:55 total = self.total 13:34:55 if total is not None: 13:34:55 total -= 1 13:34:55 13:34:55 connect = self.connect 13:34:55 read = self.read 13:34:55 redirect = self.redirect 13:34:55 status_count = self.status 13:34:55 other = self.other 13:34:55 cause = "unknown" 13:34:55 status = None 13:34:55 redirect_location = None 13:34:55 13:34:55 if error and self._is_connection_error(error): 13:34:55 # Connect retry? 13:34:55 if connect is False: 13:34:55 raise reraise(type(error), error, _stacktrace) 13:34:55 elif connect is not None: 13:34:55 connect -= 1 13:34:55 13:34:55 elif error and self._is_read_error(error): 13:34:55 # Read retry? 13:34:55 if read is False or method is None or not self._is_method_retryable(method): 13:34:55 raise reraise(type(error), error, _stacktrace) 13:34:55 elif read is not None: 13:34:55 read -= 1 13:34:55 13:34:55 elif error: 13:34:55 # Other retry? 13:34:55 if other is not None: 13:34:55 other -= 1 13:34:55 13:34:55 elif response and response.get_redirect_location(): 13:34:55 # Redirect retry? 
13:34:55 if redirect is not None: 13:34:55 redirect -= 1 13:34:55 cause = "too many redirects" 13:34:55 response_redirect_location = response.get_redirect_location() 13:34:55 if response_redirect_location: 13:34:55 redirect_location = response_redirect_location 13:34:55 status = response.status 13:34:55 13:34:55 else: 13:34:55 # Incrementing because of a server error like a 500 in 13:34:55 # status_forcelist and the given method is in the allowed_methods 13:34:55 cause = ResponseError.GENERIC_ERROR 13:34:55 if response and response.status: 13:34:55 if status_count is not None: 13:34:55 status_count -= 1 13:34:55 cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status) 13:34:55 status = response.status 13:34:55 13:34:55 history = self.history + ( 13:34:55 RequestHistory(method, url, error, status, redirect_location), 13:34:55 ) 13:34:55 13:34:55 new_retry = self.new( 13:34:55 total=total, 13:34:55 connect=connect, 13:34:55 read=read, 13:34:55 redirect=redirect, 13:34:55 status=status_count, 13:34:55 other=other, 13:34:55 history=history, 13:34:55 ) 13:34:55 13:34:55 if new_retry.is_exhausted(): 13:34:55 reason = error or ResponseError(cause) 13:34:55 > raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type] 13:34:55 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 13:34:55 E urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=8191): Max retries exceeded with url: /rests/data/transportpce-portmapping:network/nodes=ROADMA01/node-info (Caused by NewConnectionError("HTTPConnection(host='localhost', port=8191): Failed to establish a new connection: [Errno 111] Connection refused")) 13:34:55 13:34:55 ../.tox/tests121/lib/python3.11/site-packages/urllib3/util/retry.py:535: MaxRetryError 13:34:55 13:34:55 During handling of the above exception, another exception occurred: 13:34:55 13:34:55 self = 13:34:55 13:34:55 def test_03_rdm_portmapping_info(self): 13:34:55 > response = 
test_utils.get_portmapping_node_attr("ROADMA01", "node-info", None) 13:34:55 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 13:34:55 13:34:55 transportpce_tests/1.2.1/test01_portmapping.py:60: 13:34:55 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 13:34:55 transportpce_tests/common/test_utils.py:519: in get_portmapping_node_attr 13:34:55 response = get_request(target_url) 13:34:55 ^^^^^^^^^^^^^^^^^^^^^^^ 13:34:55 transportpce_tests/common/test_utils.py:117: in get_request 13:34:55 return requests.request( 13:34:55 ../.tox/tests121/lib/python3.11/site-packages/requests/api.py:59: in request 13:34:55 return session.request(method=method, url=url, **kwargs) 13:34:55 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 13:34:55 ../.tox/tests121/lib/python3.11/site-packages/requests/sessions.py:589: in request 13:34:55 resp = self.send(prep, **send_kwargs) 13:34:55 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 13:34:55 ../.tox/tests121/lib/python3.11/site-packages/requests/sessions.py:703: in send 13:34:55 r = adapter.send(request, **kwargs) 13:34:55 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 13:34:55 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 13:34:55 13:34:55 self = 13:34:55 request = , stream = False 13:34:55 timeout = Timeout(connect=30, read=30, total=None), verify = True, cert = None 13:34:55 proxies = OrderedDict() 13:34:55 13:34:55 def send( 13:34:55 self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 13:34:55 ): 13:34:55 """Sends PreparedRequest object. Returns Response object. 13:34:55 13:34:55 :param request: The :class:`PreparedRequest ` being sent. 13:34:55 :param stream: (optional) Whether to stream the request content. 13:34:55 :param timeout: (optional) How long to wait for the server to send 13:34:55 data before giving up, as a float, or a :ref:`(connect timeout, 13:34:55 read timeout) ` tuple. 
13:34:55         :type timeout: float or tuple or urllib3 Timeout object
13:34:55         :param verify: (optional) Either a boolean, in which case it controls whether
13:34:55             we verify the server's TLS certificate, or a string, in which case it
13:34:55             must be a path to a CA bundle to use
13:34:55         :param cert: (optional) Any user-provided SSL certificate to be trusted.
13:34:55         :param proxies: (optional) The proxies dictionary to apply to the request.
13:34:55         :rtype: requests.Response
13:34:55         """
13:34:55 
13:34:55         try:
13:34:55             conn = self.get_connection_with_tls_context(
13:34:55                 request, verify, proxies=proxies, cert=cert
13:34:55             )
13:34:55         except LocationValueError as e:
13:34:55             raise InvalidURL(e, request=request)
13:34:55 
13:34:55         self.cert_verify(conn, request.url, verify, cert)
13:34:55         url = self.request_url(request, proxies)
13:34:55         self.add_headers(
13:34:55             request,
13:34:55             stream=stream,
13:34:55             timeout=timeout,
13:34:55             verify=verify,
13:34:55             cert=cert,
13:34:55             proxies=proxies,
13:34:55         )
13:34:55 
13:34:55         chunked = not (request.body is None or "Content-Length" in request.headers)
13:34:55 
13:34:55         if isinstance(timeout, tuple):
13:34:55             try:
13:34:55                 connect, read = timeout
13:34:55                 timeout = TimeoutSauce(connect=connect, read=read)
13:34:55             except ValueError:
13:34:55                 raise ValueError(
13:34:55                     f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, "
13:34:55                     f"or a single float to set both timeouts to the same value."
13:34:55                 )
13:34:55         elif isinstance(timeout, TimeoutSauce):
13:34:55             pass
13:34:55         else:
13:34:55             timeout = TimeoutSauce(connect=timeout, read=timeout)
13:34:55 
13:34:55         try:
13:34:55             resp = conn.urlopen(
13:34:55                 method=request.method,
13:34:55                 url=url,
13:34:55                 body=request.body,
13:34:55                 headers=request.headers,
13:34:55                 redirect=False,
13:34:55                 assert_same_host=False,
13:34:55                 preload_content=False,
13:34:55                 decode_content=False,
13:34:55                 retries=self.max_retries,
13:34:55                 timeout=timeout,
13:34:55                 chunked=chunked,
13:34:55             )
13:34:55 
13:34:55         except (ProtocolError, OSError) as err:
13:34:55             raise ConnectionError(err, request=request)
13:34:55 
13:34:55         except MaxRetryError as e:
13:34:55             if isinstance(e.reason, ConnectTimeoutError):
13:34:55                 # TODO: Remove this in 3.0.0: see #2811
13:34:55                 if not isinstance(e.reason, NewConnectionError):
13:34:55                     raise ConnectTimeout(e, request=request)
13:34:55 
13:34:55             if isinstance(e.reason, ResponseError):
13:34:55                 raise RetryError(e, request=request)
13:34:55 
13:34:55             if isinstance(e.reason, _ProxyError):
13:34:55                 raise ProxyError(e, request=request)
13:34:55 
13:34:55             if isinstance(e.reason, _SSLError):
13:34:55                 # This branch is for urllib3 v1.22 and later.
13:34:55                 raise SSLError(e, request=request)
13:34:55 
13:34:55 >           raise ConnectionError(e, request=request)
13:34:55 E           requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=8191): Max retries exceeded with url: /rests/data/transportpce-portmapping:network/nodes=ROADMA01/node-info (Caused by NewConnectionError("HTTPConnection(host='localhost', port=8191): Failed to establish a new connection: [Errno 111] Connection refused"))
13:34:55 
13:34:55 ../.tox/tests121/lib/python3.11/site-packages/requests/adapters.py:677: ConnectionError
13:34:55 ----------------------------- Captured stdout call -----------------------------
13:34:55 execution of test_03_rdm_portmapping_info
13:34:55 ______ TestTransportPCEPortmapping.test_04_rdm_portmapping_DEG1_TTP_TXRX _______
13:34:55 
13:34:55 self = 
13:34:55 
13:34:55     def _new_conn(self) -> socket.socket:
13:34:55         """Establish a socket connection and set nodelay settings on it.
13:34:55 
13:34:55         :return: New socket connection.
13:34:55         """
13:34:55         try:
13:34:55 >           sock = connection.create_connection(
13:34:55                 (self._dns_host, self.port),
13:34:55                 self.timeout,
13:34:55                 source_address=self.source_address,
13:34:55                 socket_options=self.socket_options,
13:34:55             )
13:34:55 
13:34:55 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:204: 
13:34:55 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
13:34:55 ../.tox/tests121/lib/python3.11/site-packages/urllib3/util/connection.py:85: in create_connection
13:34:55     raise err
13:34:55 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
13:34:55 
13:34:55 address = ('localhost', 8191), timeout = 30, source_address = None
13:34:55 socket_options = [(6, 1, 1)]
13:34:55 
13:34:55 def create_connection(
13:34:55     address: tuple[str, int],
13:34:55     timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT,
13:34:55     source_address: tuple[str, int] | None = None,
13:34:55     socket_options: _TYPE_SOCKET_OPTIONS | None = None,
13:34:55 ) -> socket.socket:
13:34:55     """Connect to *address* and return the socket object.
13:34:55 
13:34:55     Convenience function. Connect to *address* (a 2-tuple ``(host,
13:34:55     port)``) and return the socket object. Passing the optional
13:34:55     *timeout* parameter will set the timeout on the socket instance
13:34:55     before attempting to connect. If no *timeout* is supplied, the
13:34:55     global default timeout setting returned by :func:`socket.getdefaulttimeout`
13:34:55     is used. If *source_address* is set it must be a tuple of (host, port)
13:34:55     for the socket to bind as a source address before making the connection.
13:34:55     An host of '' or port 0 tells the OS to use the default.
13:34:55     """
13:34:55 
13:34:55     host, port = address
13:34:55     if host.startswith("["):
13:34:55         host = host.strip("[]")
13:34:55     err = None
13:34:55 
13:34:55     # Using the value from allowed_gai_family() in the context of getaddrinfo lets
13:34:55     # us select whether to work with IPv4 DNS records, IPv6 records, or both.
13:34:55     # The original create_connection function always returns all records.
13:34:55     family = allowed_gai_family()
13:34:55 
13:34:55     try:
13:34:55         host.encode("idna")
13:34:55     except UnicodeError:
13:34:55         raise LocationParseError(f"'{host}', label empty or too long") from None
13:34:55 
13:34:55     for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM):
13:34:55         af, socktype, proto, canonname, sa = res
13:34:55         sock = None
13:34:55         try:
13:34:55             sock = socket.socket(af, socktype, proto)
13:34:55 
13:34:55             # If provided, set socket level options before connecting.
13:34:55             _set_socket_options(sock, socket_options)
13:34:55 
13:34:55             if timeout is not _DEFAULT_TIMEOUT:
13:34:55                 sock.settimeout(timeout)
13:34:55             if source_address:
13:34:55                 sock.bind(source_address)
13:34:55 >           sock.connect(sa)
13:34:55 E           ConnectionRefusedError: [Errno 111] Connection refused
13:34:55 
13:34:55 ../.tox/tests121/lib/python3.11/site-packages/urllib3/util/connection.py:73: ConnectionRefusedError
13:34:55 
13:34:55 The above exception was the direct cause of the following exception:
13:34:55 
13:34:55 self = 
13:34:55 method = 'GET'
13:34:55 url = '/rests/data/transportpce-portmapping:network/nodes=ROADMA01/mapping=DEG1-TTP-TXRX'
13:34:55 body = None
13:34:55 headers = {'User-Agent': 'python-requests/2.32.5', 'Accept-Encoding': 'gzip, deflate', 'Accept': 'application/json', 'Connection': 'keep-alive', 'Content-Type': 'application/json', 'Authorization': 'Basic YWRtaW46YWRtaW4='}
13:34:55 retries = Retry(total=0, connect=None, read=False, redirect=None, status=None)
13:34:55 redirect = False, assert_same_host = False
13:34:55 timeout = Timeout(connect=30, read=30, total=None), pool_timeout = None
13:34:55 release_conn = False, chunked = False, body_pos = None, preload_content = False
13:34:55 decode_content = False, response_kw = {}
13:34:55 parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/rests/data/transportpce-portmapping:network/nodes=ROADMA01/mapping=DEG1-TTP-TXRX', query=None, fragment=None)
13:34:55 destination_scheme = None, conn = None, release_this_conn = True
13:34:55 http_tunnel_required = False, err = None, clean_exit = False
13:34:55 
13:34:55     def urlopen(  # type: ignore[override]
13:34:55         self,
13:34:55         method: str,
13:34:55         url: str,
13:34:55         body: _TYPE_BODY | None = None,
13:34:55         headers: typing.Mapping[str, str] | None = None,
13:34:55         retries: Retry | bool | int | None = None,
13:34:55         redirect: bool = True,
13:34:55         assert_same_host: bool = True,
13:34:55         timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT,
13:34:55         pool_timeout: int | None = None,
13:34:55         release_conn: bool | None = None,
13:34:55         chunked: bool = False,
13:34:55         body_pos: _TYPE_BODY_POSITION | None = None,
13:34:55         preload_content: bool = True,
13:34:55         decode_content: bool = True,
13:34:55         **response_kw: typing.Any,
13:34:55     ) -> BaseHTTPResponse:
13:34:55         """
13:34:55         Get a connection from the pool and perform an HTTP request. This is the
13:34:55         lowest level call for making a request, so you'll need to specify all
13:34:55         the raw details.
13:34:55 
13:34:55         .. note::
13:34:55 
13:34:55            More commonly, it's appropriate to use a convenience method
13:34:55            such as :meth:`request`.
13:34:55 
13:34:55         .. note::
13:34:55 
13:34:55            `release_conn` will only behave as expected if
13:34:55            `preload_content=False` because we want to make
13:34:55            `preload_content=False` the default behaviour someday soon without
13:34:55            breaking backwards compatibility.
13:34:55 
13:34:55         :param method:
13:34:55             HTTP request method (such as GET, POST, PUT, etc.)
13:34:55 
13:34:55         :param url:
13:34:55             The URL to perform the request on.
13:34:55 
13:34:55         :param body:
13:34:55             Data to send in the request body, either :class:`str`, :class:`bytes`,
13:34:55             an iterable of :class:`str`/:class:`bytes`, or a file-like object.
13:34:55 
13:34:55         :param headers:
13:34:55             Dictionary of custom headers to send, such as User-Agent,
13:34:55             If-None-Match, etc. If None, pool headers are used. If provided,
13:34:55             these headers completely replace any pool-specific headers.
13:34:55 
13:34:55         :param retries:
13:34:55             Configure the number of retries to allow before raising a
13:34:55             :class:`~urllib3.exceptions.MaxRetryError` exception.
13:34:55 
13:34:55             If ``None`` (default) will retry 3 times, see ``Retry.DEFAULT``. Pass a
13:34:55             :class:`~urllib3.util.retry.Retry` object for fine-grained control
13:34:55             over different types of retries.
13:34:55             Pass an integer number to retry connection errors that many times,
13:34:55             but no other types of errors. Pass zero to never retry.
13:34:55 
13:34:55             If ``False``, then retries are disabled and any exception is raised
13:34:55             immediately. Also, instead of raising a MaxRetryError on redirects,
13:34:55             the redirect response will be returned.
13:34:55 
13:34:55         :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int.
13:34:55 
13:34:55         :param redirect:
13:34:55             If True, automatically handle redirects (status codes 301, 302,
13:34:55             303, 307, 308). Each redirect counts as a retry. Disabling retries
13:34:55             will disable redirect, too.
13:34:55 
13:34:55         :param assert_same_host:
13:34:55             If ``True``, will make sure that the host of the pool requests is
13:34:55             consistent else will raise HostChangedError. When ``False``, you can
13:34:55             use the pool on an HTTP proxy and request foreign hosts.
13:34:55 
13:34:55         :param timeout:
13:34:55             If specified, overrides the default timeout for this one
13:34:55             request. It may be a float (in seconds) or an instance of
13:34:55             :class:`urllib3.util.Timeout`.
13:34:55 
13:34:55         :param pool_timeout:
13:34:55             If set and the pool is set to block=True, then this method will
13:34:55             block for ``pool_timeout`` seconds and raise EmptyPoolError if no
13:34:55             connection is available within the time period.
13:34:55 
13:34:55         :param bool preload_content:
13:34:55             If True, the response's body will be preloaded into memory.
13:34:55 
13:34:55         :param bool decode_content:
13:34:55             If True, will attempt to decode the body based on the
13:34:55             'content-encoding' header.
13:34:55 
13:34:55         :param release_conn:
13:34:55             If False, then the urlopen call will not release the connection
13:34:55             back into the pool once a response is received (but will release if
13:34:55             you read the entire contents of the response such as when
13:34:55             `preload_content=True`). This is useful if you're not preloading
13:34:55             the response's content immediately. You will need to call
13:34:55             ``r.release_conn()`` on the response ``r`` to return the connection
13:34:55             back into the pool. If None, it takes the value of ``preload_content``
13:34:55             which defaults to ``True``.
13:34:55 
13:34:55         :param bool chunked:
13:34:55             If True, urllib3 will send the body using chunked transfer
13:34:55             encoding. Otherwise, urllib3 will send the body using the standard
13:34:55             content-length form. Defaults to False.
13:34:55 
13:34:55         :param int body_pos:
13:34:55             Position to seek to in file-like body in the event of a retry or
13:34:55             redirect. Typically this won't need to be set because urllib3 will
13:34:55             auto-populate the value when needed.
13:34:55         """
13:34:55         parsed_url = parse_url(url)
13:34:55         destination_scheme = parsed_url.scheme
13:34:55 
13:34:55         if headers is None:
13:34:55             headers = self.headers
13:34:55 
13:34:55         if not isinstance(retries, Retry):
13:34:55             retries = Retry.from_int(retries, redirect=redirect, default=self.retries)
13:34:55 
13:34:55         if release_conn is None:
13:34:55             release_conn = preload_content
13:34:55 
13:34:55         # Check host
13:34:55         if assert_same_host and not self.is_same_host(url):
13:34:55             raise HostChangedError(self, url, retries)
13:34:55 
13:34:55         # Ensure that the URL we're connecting to is properly encoded
13:34:55         if url.startswith("/"):
13:34:55             url = to_str(_encode_target(url))
13:34:55         else:
13:34:55             url = to_str(parsed_url.url)
13:34:55 
13:34:55         conn = None
13:34:55 
13:34:55         # Track whether `conn` needs to be released before
13:34:55         # returning/raising/recursing. Update this variable if necessary, and
13:34:55         # leave `release_conn` constant throughout the function. That way, if
13:34:55         # the function recurses, the original value of `release_conn` will be
13:34:55         # passed down into the recursive call, and its value will be respected.
13:34:55         #
13:34:55         # See issue #651 [1] for details.
13:34:55         #
13:34:55         # [1] 
13:34:55         release_this_conn = release_conn
13:34:55 
13:34:55         http_tunnel_required = connection_requires_http_tunnel(
13:34:55             self.proxy, self.proxy_config, destination_scheme
13:34:55         )
13:34:55 
13:34:55         # Merge the proxy headers. Only done when not using HTTP CONNECT. We
13:34:55         # have to copy the headers dict so we can safely change it without those
13:34:55         # changes being reflected in anyone else's copy.
13:34:55         if not http_tunnel_required:
13:34:55             headers = headers.copy()  # type: ignore[attr-defined]
13:34:55             headers.update(self.proxy_headers)  # type: ignore[union-attr]
13:34:55 
13:34:55         # Must keep the exception bound to a separate variable or else Python 3
13:34:55         # complains about UnboundLocalError.
13:34:55         err = None
13:34:55 
13:34:55         # Keep track of whether we cleanly exited the except block. This
13:34:55         # ensures we do proper cleanup in finally.
13:34:55         clean_exit = False
13:34:55 
13:34:55         # Rewind body position, if needed. Record current position
13:34:55         # for future rewinds in the event of a redirect/retry.
13:34:55         body_pos = set_file_position(body, body_pos)
13:34:55 
13:34:55         try:
13:34:55             # Request a connection from the queue.
13:34:55             timeout_obj = self._get_timeout(timeout)
13:34:55             conn = self._get_conn(timeout=pool_timeout)
13:34:55 
13:34:55             conn.timeout = timeout_obj.connect_timeout  # type: ignore[assignment]
13:34:55 
13:34:55             # Is this a closed/new connection that requires CONNECT tunnelling?
13:34:55             if self.proxy is not None and http_tunnel_required and conn.is_closed:
13:34:55                 try:
13:34:55                     self._prepare_proxy(conn)
13:34:55                 except (BaseSSLError, OSError, SocketTimeout) as e:
13:34:55                     self._raise_timeout(
13:34:55                         err=e, url=self.proxy.url, timeout_value=conn.timeout
13:34:55                     )
13:34:55                     raise
13:34:55 
13:34:55             # If we're going to release the connection in ``finally:``, then
13:34:55             # the response doesn't need to know about the connection. Otherwise
13:34:55             # it will also try to release it and we'll have a double-release
13:34:55             # mess.
13:34:55             response_conn = conn if not release_conn else None
13:34:55 
13:34:55             # Make the request on the HTTPConnection object
13:34:55 >           response = self._make_request(
13:34:55                 conn,
13:34:55                 method,
13:34:55                 url,
13:34:55                 timeout=timeout_obj,
13:34:55                 body=body,
13:34:55                 headers=headers,
13:34:55                 chunked=chunked,
13:34:55                 retries=retries,
13:34:55                 response_conn=response_conn,
13:34:55                 preload_content=preload_content,
13:34:55                 decode_content=decode_content,
13:34:55                 **response_kw,
13:34:55             )
13:34:55 
13:34:55 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connectionpool.py:787: 
13:34:55 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
13:34:55 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connectionpool.py:493: in _make_request
13:34:55     conn.request(
13:34:55 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:500: in request
13:34:55     self.endheaders()
13:34:55 /opt/pyenv/versions/3.11.10/lib/python3.11/http/client.py:1298: in endheaders
13:34:55     self._send_output(message_body, encode_chunked=encode_chunked)
13:34:55 /opt/pyenv/versions/3.11.10/lib/python3.11/http/client.py:1058: in _send_output
13:34:55     self.send(msg)
13:34:55 /opt/pyenv/versions/3.11.10/lib/python3.11/http/client.py:996: in send
13:34:55     self.connect()
13:34:55 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:331: in connect
13:34:55     self.sock = self._new_conn()
13:34:55     ^^^^^^^^^^^^^^^^
13:34:55 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
13:34:55 
13:34:55 self = 
13:34:55 
13:34:55     def _new_conn(self) -> socket.socket:
13:34:55         """Establish a socket connection and set nodelay settings on it.
13:34:55 
13:34:55         :return: New socket connection.
13:34:55 """ 13:34:55 try: 13:34:55 sock = connection.create_connection( 13:34:55 (self._dns_host, self.port), 13:34:55 self.timeout, 13:34:55 source_address=self.source_address, 13:34:55 socket_options=self.socket_options, 13:34:55 ) 13:34:55 except socket.gaierror as e: 13:34:55 raise NameResolutionError(self.host, self, e) from e 13:34:55 except SocketTimeout as e: 13:34:55 raise ConnectTimeoutError( 13:34:55 self, 13:34:55 f"Connection to {self.host} timed out. (connect timeout={self.timeout})", 13:34:55 ) from e 13:34:55 13:34:55 except OSError as e: 13:34:55 > raise NewConnectionError( 13:34:55 self, f"Failed to establish a new connection: {e}" 13:34:55 ) from e 13:34:55 E urllib3.exceptions.NewConnectionError: HTTPConnection(host='localhost', port=8191): Failed to establish a new connection: [Errno 111] Connection refused 13:34:55 13:34:55 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:219: NewConnectionError 13:34:55 13:34:55 The above exception was the direct cause of the following exception: 13:34:55 13:34:55 self = 13:34:55 request = , stream = False 13:34:55 timeout = Timeout(connect=30, read=30, total=None), verify = True, cert = None 13:34:55 proxies = OrderedDict() 13:34:55 13:34:55 def send( 13:34:55 self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 13:34:55 ): 13:34:55 """Sends PreparedRequest object. Returns Response object. 13:34:55 13:34:55 :param request: The :class:`PreparedRequest ` being sent. 13:34:55 :param stream: (optional) Whether to stream the request content. 13:34:55 :param timeout: (optional) How long to wait for the server to send 13:34:55 data before giving up, as a float, or a :ref:`(connect timeout, 13:34:55 read timeout) ` tuple. 
13:34:55 :type timeout: float or tuple or urllib3 Timeout object 13:34:55 :param verify: (optional) Either a boolean, in which case it controls whether 13:34:55 we verify the server's TLS certificate, or a string, in which case it 13:34:55 must be a path to a CA bundle to use 13:34:55 :param cert: (optional) Any user-provided SSL certificate to be trusted. 13:34:55 :param proxies: (optional) The proxies dictionary to apply to the request. 13:34:55 :rtype: requests.Response 13:34:55 """ 13:34:55 13:34:55 try: 13:34:55 conn = self.get_connection_with_tls_context( 13:34:55 request, verify, proxies=proxies, cert=cert 13:34:55 ) 13:34:55 except LocationValueError as e: 13:34:55 raise InvalidURL(e, request=request) 13:34:55 13:34:55 self.cert_verify(conn, request.url, verify, cert) 13:34:55 url = self.request_url(request, proxies) 13:34:55 self.add_headers( 13:34:55 request, 13:34:55 stream=stream, 13:34:55 timeout=timeout, 13:34:55 verify=verify, 13:34:55 cert=cert, 13:34:55 proxies=proxies, 13:34:55 ) 13:34:55 13:34:55 chunked = not (request.body is None or "Content-Length" in request.headers) 13:34:55 13:34:55 if isinstance(timeout, tuple): 13:34:55 try: 13:34:55 connect, read = timeout 13:34:55 timeout = TimeoutSauce(connect=connect, read=read) 13:34:55 except ValueError: 13:34:55 raise ValueError( 13:34:55 f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 13:34:55 f"or a single float to set both timeouts to the same value." 
13:34:55 ) 13:34:55 elif isinstance(timeout, TimeoutSauce): 13:34:55 pass 13:34:55 else: 13:34:55 timeout = TimeoutSauce(connect=timeout, read=timeout) 13:34:55 13:34:55 try: 13:34:55 > resp = conn.urlopen( 13:34:55 method=request.method, 13:34:55 url=url, 13:34:55 body=request.body, 13:34:55 headers=request.headers, 13:34:55 redirect=False, 13:34:55 assert_same_host=False, 13:34:55 preload_content=False, 13:34:55 decode_content=False, 13:34:55 retries=self.max_retries, 13:34:55 timeout=timeout, 13:34:55 chunked=chunked, 13:34:55 ) 13:34:55 13:34:55 ../.tox/tests121/lib/python3.11/site-packages/requests/adapters.py:644: 13:34:55 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 13:34:55 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connectionpool.py:841: in urlopen 13:34:55 retries = retries.increment( 13:34:55 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 13:34:55 13:34:55 self = Retry(total=0, connect=None, read=False, redirect=None, status=None) 13:34:55 method = 'GET' 13:34:55 url = '/rests/data/transportpce-portmapping:network/nodes=ROADMA01/mapping=DEG1-TTP-TXRX' 13:34:55 response = None 13:34:55 error = NewConnectionError("HTTPConnection(host='localhost', port=8191): Failed to establish a new connection: [Errno 111] Connection refused") 13:34:55 _pool = 13:34:55 _stacktrace = 13:34:55 13:34:55 def increment( 13:34:55 self, 13:34:55 method: str | None = None, 13:34:55 url: str | None = None, 13:34:55 response: BaseHTTPResponse | None = None, 13:34:55 error: Exception | None = None, 13:34:55 _pool: ConnectionPool | None = None, 13:34:55 _stacktrace: TracebackType | None = None, 13:34:55 ) -> Self: 13:34:55 """Return a new Retry object with incremented retry counters. 13:34:55 13:34:55 :param response: A response object, or None, if the server did not 13:34:55 return a response. 
13:34:55 :type response: :class:`~urllib3.response.BaseHTTPResponse` 13:34:55 :param Exception error: An error encountered during the request, or 13:34:55 None if the response was received successfully. 13:34:55 13:34:55 :return: A new ``Retry`` object. 13:34:55 """ 13:34:55 if self.total is False and error: 13:34:55 # Disabled, indicate to re-raise the error. 13:34:55 raise reraise(type(error), error, _stacktrace) 13:34:55 13:34:55 total = self.total 13:34:55 if total is not None: 13:34:55 total -= 1 13:34:55 13:34:55 connect = self.connect 13:34:55 read = self.read 13:34:55 redirect = self.redirect 13:34:55 status_count = self.status 13:34:55 other = self.other 13:34:55 cause = "unknown" 13:34:55 status = None 13:34:55 redirect_location = None 13:34:55 13:34:55 if error and self._is_connection_error(error): 13:34:55 # Connect retry? 13:34:55 if connect is False: 13:34:55 raise reraise(type(error), error, _stacktrace) 13:34:55 elif connect is not None: 13:34:55 connect -= 1 13:34:55 13:34:55 elif error and self._is_read_error(error): 13:34:55 # Read retry? 13:34:55 if read is False or method is None or not self._is_method_retryable(method): 13:34:55 raise reraise(type(error), error, _stacktrace) 13:34:55 elif read is not None: 13:34:55 read -= 1 13:34:55 13:34:55 elif error: 13:34:55 # Other retry? 13:34:55 if other is not None: 13:34:55 other -= 1 13:34:55 13:34:55 elif response and response.get_redirect_location(): 13:34:55 # Redirect retry? 
13:34:55                 if redirect is not None:
13:34:55                     redirect -= 1
13:34:55                 cause = "too many redirects"
13:34:55                 response_redirect_location = response.get_redirect_location()
13:34:55                 if response_redirect_location:
13:34:55                     redirect_location = response_redirect_location
13:34:55                 status = response.status
13:34:55 
13:34:55             else:
13:34:55                 # Incrementing because of a server error like a 500 in
13:34:55                 # status_forcelist and the given method is in the allowed_methods
13:34:55                 cause = ResponseError.GENERIC_ERROR
13:34:55                 if response and response.status:
13:34:55                     if status_count is not None:
13:34:55                         status_count -= 1
13:34:55                     cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status)
13:34:55                     status = response.status
13:34:55 
13:34:55             history = self.history + (
13:34:55                 RequestHistory(method, url, error, status, redirect_location),
13:34:55             )
13:34:55 
13:34:55             new_retry = self.new(
13:34:55                 total=total,
13:34:55                 connect=connect,
13:34:55                 read=read,
13:34:55                 redirect=redirect,
13:34:55                 status=status_count,
13:34:55                 other=other,
13:34:55                 history=history,
13:34:55             )
13:34:55 
13:34:55             if new_retry.is_exhausted():
13:34:55                 reason = error or ResponseError(cause)
13:34:55 >               raise MaxRetryError(_pool, url, reason) from reason  # type: ignore[arg-type]
13:34:55                 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
13:34:55 E               urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=8191): Max retries exceeded with url: /rests/data/transportpce-portmapping:network/nodes=ROADMA01/mapping=DEG1-TTP-TXRX (Caused by NewConnectionError("HTTPConnection(host='localhost', port=8191): Failed to establish a new connection: [Errno 111] Connection refused"))
13:34:55 
13:34:55 ../.tox/tests121/lib/python3.11/site-packages/urllib3/util/retry.py:535: MaxRetryError
13:34:55 
13:34:55 During handling of the above exception, another exception occurred:
13:34:55 
13:34:55 self = 
13:34:55 
13:34:55     def test_04_rdm_portmapping_DEG1_TTP_TXRX(self):
13:34:55 >       response = test_utils.get_portmapping_node_attr("ROADMA01", "mapping", "DEG1-TTP-TXRX")
13:34:55         ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
13:34:55 
13:34:55 transportpce_tests/1.2.1/test01_portmapping.py:73: 
13:34:55 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
13:34:55 transportpce_tests/common/test_utils.py:519: in get_portmapping_node_attr
13:34:55     response = get_request(target_url)
13:34:55                ^^^^^^^^^^^^^^^^^^^^^^^
13:34:55 transportpce_tests/common/test_utils.py:117: in get_request
13:34:55     return requests.request(
13:34:55 ../.tox/tests121/lib/python3.11/site-packages/requests/api.py:59: in request
13:34:55     return session.request(method=method, url=url, **kwargs)
13:34:55            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
13:34:55 ../.tox/tests121/lib/python3.11/site-packages/requests/sessions.py:589: in request
13:34:55     resp = self.send(prep, **send_kwargs)
13:34:55            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
13:34:55 ../.tox/tests121/lib/python3.11/site-packages/requests/sessions.py:703: in send
13:34:55     r = adapter.send(request, **kwargs)
13:34:55         ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
13:34:55 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
13:34:55 
13:34:55 self = 
13:34:55 request = , stream = False
13:34:55 timeout = Timeout(connect=30, read=30, total=None), verify = True, cert = None
13:34:55 proxies = OrderedDict()
13:34:55 
13:34:55     def send(
13:34:55         self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None
13:34:55     ):
13:34:55         """Sends PreparedRequest object. Returns Response object.
13:34:55 
13:34:55         :param request: The :class:`PreparedRequest ` being sent.
13:34:55         :param stream: (optional) Whether to stream the request content.
13:34:55         :param timeout: (optional) How long to wait for the server to send
13:34:55             data before giving up, as a float, or a :ref:`(connect timeout,
13:34:55             read timeout) ` tuple.
13:34:55 :type timeout: float or tuple or urllib3 Timeout object 13:34:55 :param verify: (optional) Either a boolean, in which case it controls whether 13:34:55 we verify the server's TLS certificate, or a string, in which case it 13:34:55 must be a path to a CA bundle to use 13:34:55 :param cert: (optional) Any user-provided SSL certificate to be trusted. 13:34:55 :param proxies: (optional) The proxies dictionary to apply to the request. 13:34:55 :rtype: requests.Response 13:34:55 """ 13:34:55 13:34:55 try: 13:34:55 conn = self.get_connection_with_tls_context( 13:34:55 request, verify, proxies=proxies, cert=cert 13:34:55 ) 13:34:55 except LocationValueError as e: 13:34:55 raise InvalidURL(e, request=request) 13:34:55 13:34:55 self.cert_verify(conn, request.url, verify, cert) 13:34:55 url = self.request_url(request, proxies) 13:34:55 self.add_headers( 13:34:55 request, 13:34:55 stream=stream, 13:34:55 timeout=timeout, 13:34:55 verify=verify, 13:34:55 cert=cert, 13:34:55 proxies=proxies, 13:34:55 ) 13:34:55 13:34:55 chunked = not (request.body is None or "Content-Length" in request.headers) 13:34:55 13:34:55 if isinstance(timeout, tuple): 13:34:55 try: 13:34:55 connect, read = timeout 13:34:55 timeout = TimeoutSauce(connect=connect, read=read) 13:34:55 except ValueError: 13:34:55 raise ValueError( 13:34:55 f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 13:34:55 f"or a single float to set both timeouts to the same value." 
13:34:55 ) 13:34:55 elif isinstance(timeout, TimeoutSauce): 13:34:55 pass 13:34:55 else: 13:34:55 timeout = TimeoutSauce(connect=timeout, read=timeout) 13:34:55 13:34:55 try: 13:34:55 resp = conn.urlopen( 13:34:55 method=request.method, 13:34:55 url=url, 13:34:55 body=request.body, 13:34:55 headers=request.headers, 13:34:55 redirect=False, 13:34:55 assert_same_host=False, 13:34:55 preload_content=False, 13:34:55 decode_content=False, 13:34:55 retries=self.max_retries, 13:34:55 timeout=timeout, 13:34:55 chunked=chunked, 13:34:55 ) 13:34:55 13:34:55 except (ProtocolError, OSError) as err: 13:34:55 raise ConnectionError(err, request=request) 13:34:55 13:34:55 except MaxRetryError as e: 13:34:55 if isinstance(e.reason, ConnectTimeoutError): 13:34:55 # TODO: Remove this in 3.0.0: see #2811 13:34:55 if not isinstance(e.reason, NewConnectionError): 13:34:55 raise ConnectTimeout(e, request=request) 13:34:55 13:34:55 if isinstance(e.reason, ResponseError): 13:34:55 raise RetryError(e, request=request) 13:34:55 13:34:55 if isinstance(e.reason, _ProxyError): 13:34:55 raise ProxyError(e, request=request) 13:34:55 13:34:55 if isinstance(e.reason, _SSLError): 13:34:55 # This branch is for urllib3 v1.22 and later. 
13:34:55                 raise SSLError(e, request=request)
13:34:55 
13:34:55 >           raise ConnectionError(e, request=request)
13:34:55 E           requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=8191): Max retries exceeded with url: /rests/data/transportpce-portmapping:network/nodes=ROADMA01/mapping=DEG1-TTP-TXRX (Caused by NewConnectionError("HTTPConnection(host='localhost', port=8191): Failed to establish a new connection: [Errno 111] Connection refused"))
13:34:55 
13:34:55 ../.tox/tests121/lib/python3.11/site-packages/requests/adapters.py:677: ConnectionError
13:34:55 ----------------------------- Captured stdout call -----------------------------
13:34:55 execution of test_04_rdm_portmapping_DEG1_TTP_TXRX
13:34:55 ______ TestTransportPCEPortmapping.test_05_rdm_portmapping_SRG1_PP7_TXRX _______
13:34:55 
13:34:55 self = 
13:34:55 
13:34:55     def _new_conn(self) -> socket.socket:
13:34:55         """Establish a socket connection and set nodelay settings on it.
13:34:55 
13:34:55         :return: New socket connection.
13:34:55 """ 13:34:55 try: 13:34:55 > sock = connection.create_connection( 13:34:55 (self._dns_host, self.port), 13:34:55 self.timeout, 13:34:55 source_address=self.source_address, 13:34:55 socket_options=self.socket_options, 13:34:55 ) 13:34:55 13:34:55 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:204: 13:34:55 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 13:34:55 ../.tox/tests121/lib/python3.11/site-packages/urllib3/util/connection.py:85: in create_connection 13:34:55 raise err 13:34:55 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 13:34:55 13:34:55 address = ('localhost', 8191), timeout = 30, source_address = None 13:34:55 socket_options = [(6, 1, 1)] 13:34:55 13:34:55 def create_connection( 13:34:55 address: tuple[str, int], 13:34:55 timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 13:34:55 source_address: tuple[str, int] | None = None, 13:34:55 socket_options: _TYPE_SOCKET_OPTIONS | None = None, 13:34:55 ) -> socket.socket: 13:34:55 """Connect to *address* and return the socket object. 13:34:55 13:34:55 Convenience function. Connect to *address* (a 2-tuple ``(host, 13:34:55 port)``) and return the socket object. Passing the optional 13:34:55 *timeout* parameter will set the timeout on the socket instance 13:34:55 before attempting to connect. If no *timeout* is supplied, the 13:34:55 global default timeout setting returned by :func:`socket.getdefaulttimeout` 13:34:55 is used. If *source_address* is set it must be a tuple of (host, port) 13:34:55 for the socket to bind as a source address before making the connection. 13:34:55 An host of '' or port 0 tells the OS to use the default. 
13:34:55 """ 13:34:55 13:34:55 host, port = address 13:34:55 if host.startswith("["): 13:34:55 host = host.strip("[]") 13:34:55 err = None 13:34:55 13:34:55 # Using the value from allowed_gai_family() in the context of getaddrinfo lets 13:34:55 # us select whether to work with IPv4 DNS records, IPv6 records, or both. 13:34:55 # The original create_connection function always returns all records. 13:34:55 family = allowed_gai_family() 13:34:55 13:34:55 try: 13:34:55 host.encode("idna") 13:34:55 except UnicodeError: 13:34:55 raise LocationParseError(f"'{host}', label empty or too long") from None 13:34:55 13:34:55 for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM): 13:34:55 af, socktype, proto, canonname, sa = res 13:34:55 sock = None 13:34:55 try: 13:34:55 sock = socket.socket(af, socktype, proto) 13:34:55 13:34:55 # If provided, set socket level options before connecting. 13:34:55 _set_socket_options(sock, socket_options) 13:34:55 13:34:55 if timeout is not _DEFAULT_TIMEOUT: 13:34:55 sock.settimeout(timeout) 13:34:55 if source_address: 13:34:55 sock.bind(source_address) 13:34:55 > sock.connect(sa) 13:34:55 E ConnectionRefusedError: [Errno 111] Connection refused 13:34:55 13:34:55 ../.tox/tests121/lib/python3.11/site-packages/urllib3/util/connection.py:73: ConnectionRefusedError 13:34:55 13:34:55 The above exception was the direct cause of the following exception: 13:34:55 13:34:55 self = 13:34:55 method = 'GET' 13:34:55 url = '/rests/data/transportpce-portmapping:network/nodes=ROADMA01/mapping=SRG1-PP7-TXRX' 13:34:55 body = None 13:34:55 headers = {'User-Agent': 'python-requests/2.32.5', 'Accept-Encoding': 'gzip, deflate', 'Accept': 'application/json', 'Connection': 'keep-alive', 'Content-Type': 'application/json', 'Authorization': 'Basic YWRtaW46YWRtaW4='} 13:34:55 retries = Retry(total=0, connect=None, read=False, redirect=None, status=None) 13:34:55 redirect = False, assert_same_host = False 13:34:55 timeout = Timeout(connect=30, read=30, 
total=None), pool_timeout = None 13:34:55 release_conn = False, chunked = False, body_pos = None, preload_content = False 13:34:55 decode_content = False, response_kw = {} 13:34:55 parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/rests/data/transportpce-portmapping:network/nodes=ROADMA01/mapping=SRG1-PP7-TXRX', query=None, fragment=None) 13:34:55 destination_scheme = None, conn = None, release_this_conn = True 13:34:55 http_tunnel_required = False, err = None, clean_exit = False 13:34:55 13:34:55 def urlopen( # type: ignore[override] 13:34:55 self, 13:34:55 method: str, 13:34:55 url: str, 13:34:55 body: _TYPE_BODY | None = None, 13:34:55 headers: typing.Mapping[str, str] | None = None, 13:34:55 retries: Retry | bool | int | None = None, 13:34:55 redirect: bool = True, 13:34:55 assert_same_host: bool = True, 13:34:55 timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 13:34:55 pool_timeout: int | None = None, 13:34:55 release_conn: bool | None = None, 13:34:55 chunked: bool = False, 13:34:55 body_pos: _TYPE_BODY_POSITION | None = None, 13:34:55 preload_content: bool = True, 13:34:55 decode_content: bool = True, 13:34:55 **response_kw: typing.Any, 13:34:55 ) -> BaseHTTPResponse: 13:34:55 """ 13:34:55 Get a connection from the pool and perform an HTTP request. This is the 13:34:55 lowest level call for making a request, so you'll need to specify all 13:34:55 the raw details. 13:34:55 13:34:55 .. note:: 13:34:55 13:34:55 More commonly, it's appropriate to use a convenience method 13:34:55 such as :meth:`request`. 13:34:55 13:34:55 .. note:: 13:34:55 13:34:55 `release_conn` will only behave as expected if 13:34:55 `preload_content=False` because we want to make 13:34:55 `preload_content=False` the default behaviour someday soon without 13:34:55 breaking backwards compatibility. 13:34:55 13:34:55 :param method: 13:34:55 HTTP request method (such as GET, POST, PUT, etc.) 13:34:55 13:34:55 :param url: 13:34:55 The URL to perform the request on. 
13:34:55 13:34:55 :param body: 13:34:55 Data to send in the request body, either :class:`str`, :class:`bytes`, 13:34:55 an iterable of :class:`str`/:class:`bytes`, or a file-like object. 13:34:55 13:34:55 :param headers: 13:34:55 Dictionary of custom headers to send, such as User-Agent, 13:34:55 If-None-Match, etc. If None, pool headers are used. If provided, 13:34:55 these headers completely replace any pool-specific headers. 13:34:55 13:34:55 :param retries: 13:34:55 Configure the number of retries to allow before raising a 13:34:55 :class:`~urllib3.exceptions.MaxRetryError` exception. 13:34:55 13:34:55 If ``None`` (default) will retry 3 times, see ``Retry.DEFAULT``. Pass a 13:34:55 :class:`~urllib3.util.retry.Retry` object for fine-grained control 13:34:55 over different types of retries. 13:34:55 Pass an integer number to retry connection errors that many times, 13:34:55 but no other types of errors. Pass zero to never retry. 13:34:55 13:34:55 If ``False``, then retries are disabled and any exception is raised 13:34:55 immediately. Also, instead of raising a MaxRetryError on redirects, 13:34:55 the redirect response will be returned. 13:34:55 13:34:55 :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. 13:34:55 13:34:55 :param redirect: 13:34:55 If True, automatically handle redirects (status codes 301, 302, 13:34:55 303, 307, 308). Each redirect counts as a retry. Disabling retries 13:34:55 will disable redirect, too. 13:34:55 13:34:55 :param assert_same_host: 13:34:55 If ``True``, will make sure that the host of the pool requests is 13:34:55 consistent else will raise HostChangedError. When ``False``, you can 13:34:55 use the pool on an HTTP proxy and request foreign hosts. 13:34:55 13:34:55 :param timeout: 13:34:55 If specified, overrides the default timeout for this one 13:34:55 request. It may be a float (in seconds) or an instance of 13:34:55 :class:`urllib3.util.Timeout`. 
13:34:55 13:34:55 :param pool_timeout: 13:34:55 If set and the pool is set to block=True, then this method will 13:34:55 block for ``pool_timeout`` seconds and raise EmptyPoolError if no 13:34:55 connection is available within the time period. 13:34:55 13:34:55 :param bool preload_content: 13:34:55 If True, the response's body will be preloaded into memory. 13:34:55 13:34:55 :param bool decode_content: 13:34:55 If True, will attempt to decode the body based on the 13:34:55 'content-encoding' header. 13:34:55 13:34:55 :param release_conn: 13:34:55 If False, then the urlopen call will not release the connection 13:34:55 back into the pool once a response is received (but will release if 13:34:55 you read the entire contents of the response such as when 13:34:55 `preload_content=True`). This is useful if you're not preloading 13:34:55 the response's content immediately. You will need to call 13:34:55 ``r.release_conn()`` on the response ``r`` to return the connection 13:34:55 back into the pool. If None, it takes the value of ``preload_content`` 13:34:55 which defaults to ``True``. 13:34:55 13:34:55 :param bool chunked: 13:34:55 If True, urllib3 will send the body using chunked transfer 13:34:55 encoding. Otherwise, urllib3 will send the body using the standard 13:34:55 content-length form. Defaults to False. 13:34:55 13:34:55 :param int body_pos: 13:34:55 Position to seek to in file-like body in the event of a retry or 13:34:55 redirect. Typically this won't need to be set because urllib3 will 13:34:55 auto-populate the value when needed. 
13:34:55 """ 13:34:55 parsed_url = parse_url(url) 13:34:55 destination_scheme = parsed_url.scheme 13:34:55 13:34:55 if headers is None: 13:34:55 headers = self.headers 13:34:55 13:34:55 if not isinstance(retries, Retry): 13:34:55 retries = Retry.from_int(retries, redirect=redirect, default=self.retries) 13:34:55 13:34:55 if release_conn is None: 13:34:55 release_conn = preload_content 13:34:55 13:34:55 # Check host 13:34:55 if assert_same_host and not self.is_same_host(url): 13:34:55 raise HostChangedError(self, url, retries) 13:34:55 13:34:55 # Ensure that the URL we're connecting to is properly encoded 13:34:55 if url.startswith("/"): 13:34:55 url = to_str(_encode_target(url)) 13:34:55 else: 13:34:55 url = to_str(parsed_url.url) 13:34:55 13:34:55 conn = None 13:34:55 13:34:55 # Track whether `conn` needs to be released before 13:34:55 # returning/raising/recursing. Update this variable if necessary, and 13:34:55 # leave `release_conn` constant throughout the function. That way, if 13:34:55 # the function recurses, the original value of `release_conn` will be 13:34:55 # passed down into the recursive call, and its value will be respected. 13:34:55 # 13:34:55 # See issue #651 [1] for details. 13:34:55 # 13:34:55 # [1] 13:34:55 release_this_conn = release_conn 13:34:55 13:34:55 http_tunnel_required = connection_requires_http_tunnel( 13:34:55 self.proxy, self.proxy_config, destination_scheme 13:34:55 ) 13:34:55 13:34:55 # Merge the proxy headers. Only done when not using HTTP CONNECT. We 13:34:55 # have to copy the headers dict so we can safely change it without those 13:34:55 # changes being reflected in anyone else's copy. 13:34:55 if not http_tunnel_required: 13:34:55 headers = headers.copy() # type: ignore[attr-defined] 13:34:55 headers.update(self.proxy_headers) # type: ignore[union-attr] 13:34:55 13:34:55 # Must keep the exception bound to a separate variable or else Python 3 13:34:55 # complains about UnboundLocalError. 
13:34:55 err = None 13:34:55 13:34:55 # Keep track of whether we cleanly exited the except block. This 13:34:55 # ensures we do proper cleanup in finally. 13:34:55 clean_exit = False 13:34:55 13:34:55 # Rewind body position, if needed. Record current position 13:34:55 # for future rewinds in the event of a redirect/retry. 13:34:55 body_pos = set_file_position(body, body_pos) 13:34:55 13:34:55 try: 13:34:55 # Request a connection from the queue. 13:34:55 timeout_obj = self._get_timeout(timeout) 13:34:55 conn = self._get_conn(timeout=pool_timeout) 13:34:55 13:34:55 conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] 13:34:55 13:34:55 # Is this a closed/new connection that requires CONNECT tunnelling? 13:34:55 if self.proxy is not None and http_tunnel_required and conn.is_closed: 13:34:55 try: 13:34:55 self._prepare_proxy(conn) 13:34:55 except (BaseSSLError, OSError, SocketTimeout) as e: 13:34:55 self._raise_timeout( 13:34:55 err=e, url=self.proxy.url, timeout_value=conn.timeout 13:34:55 ) 13:34:55 raise 13:34:55 13:34:55 # If we're going to release the connection in ``finally:``, then 13:34:55 # the response doesn't need to know about the connection. Otherwise 13:34:55 # it will also try to release it and we'll have a double-release 13:34:55 # mess. 
13:34:55 response_conn = conn if not release_conn else None 13:34:55 13:34:55 # Make the request on the HTTPConnection object 13:34:55 > response = self._make_request( 13:34:55 conn, 13:34:55 method, 13:34:55 url, 13:34:55 timeout=timeout_obj, 13:34:55 body=body, 13:34:55 headers=headers, 13:34:55 chunked=chunked, 13:34:55 retries=retries, 13:34:55 response_conn=response_conn, 13:34:55 preload_content=preload_content, 13:34:55 decode_content=decode_content, 13:34:55 **response_kw, 13:34:55 ) 13:34:55 13:34:55 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connectionpool.py:787: 13:34:55 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 13:34:55 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connectionpool.py:493: in _make_request 13:34:55 conn.request( 13:34:55 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:500: in request 13:34:55 self.endheaders() 13:34:55 /opt/pyenv/versions/3.11.10/lib/python3.11/http/client.py:1298: in endheaders 13:34:55 self._send_output(message_body, encode_chunked=encode_chunked) 13:34:55 /opt/pyenv/versions/3.11.10/lib/python3.11/http/client.py:1058: in _send_output 13:34:55 self.send(msg) 13:34:55 /opt/pyenv/versions/3.11.10/lib/python3.11/http/client.py:996: in send 13:34:55 self.connect() 13:34:55 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:331: in connect 13:34:55 self.sock = self._new_conn() 13:34:55 ^^^^^^^^^^^^^^^^ 13:34:55 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 13:34:55 13:34:55 self = 13:34:55 13:34:55 def _new_conn(self) -> socket.socket: 13:34:55 """Establish a socket connection and set nodelay settings on it. 13:34:55 13:34:55 :return: New socket connection. 
13:34:55 """ 13:34:55 try: 13:34:55 sock = connection.create_connection( 13:34:55 (self._dns_host, self.port), 13:34:55 self.timeout, 13:34:55 source_address=self.source_address, 13:34:55 socket_options=self.socket_options, 13:34:55 ) 13:34:55 except socket.gaierror as e: 13:34:55 raise NameResolutionError(self.host, self, e) from e 13:34:55 except SocketTimeout as e: 13:34:55 raise ConnectTimeoutError( 13:34:55 self, 13:34:55 f"Connection to {self.host} timed out. (connect timeout={self.timeout})", 13:34:55 ) from e 13:34:55 13:34:55 except OSError as e: 13:34:55 > raise NewConnectionError( 13:34:55 self, f"Failed to establish a new connection: {e}" 13:34:55 ) from e 13:34:55 E urllib3.exceptions.NewConnectionError: HTTPConnection(host='localhost', port=8191): Failed to establish a new connection: [Errno 111] Connection refused 13:34:55 13:34:55 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:219: NewConnectionError 13:34:55 13:34:55 The above exception was the direct cause of the following exception: 13:34:55 13:34:55 self = 13:34:55 request = , stream = False 13:34:55 timeout = Timeout(connect=30, read=30, total=None), verify = True, cert = None 13:34:55 proxies = OrderedDict() 13:34:55 13:34:55 def send( 13:34:55 self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 13:34:55 ): 13:34:55 """Sends PreparedRequest object. Returns Response object. 13:34:55 13:34:55 :param request: The :class:`PreparedRequest ` being sent. 13:34:55 :param stream: (optional) Whether to stream the request content. 13:34:55 :param timeout: (optional) How long to wait for the server to send 13:34:55 data before giving up, as a float, or a :ref:`(connect timeout, 13:34:55 read timeout) ` tuple. 
13:34:55 :type timeout: float or tuple or urllib3 Timeout object 13:34:55 :param verify: (optional) Either a boolean, in which case it controls whether 13:34:55 we verify the server's TLS certificate, or a string, in which case it 13:34:55 must be a path to a CA bundle to use 13:34:55 :param cert: (optional) Any user-provided SSL certificate to be trusted. 13:34:55 :param proxies: (optional) The proxies dictionary to apply to the request. 13:34:55 :rtype: requests.Response 13:34:55 """ 13:34:55 13:34:55 try: 13:34:55 conn = self.get_connection_with_tls_context( 13:34:55 request, verify, proxies=proxies, cert=cert 13:34:55 ) 13:34:55 except LocationValueError as e: 13:34:55 raise InvalidURL(e, request=request) 13:34:55 13:34:55 self.cert_verify(conn, request.url, verify, cert) 13:34:55 url = self.request_url(request, proxies) 13:34:55 self.add_headers( 13:34:55 request, 13:34:55 stream=stream, 13:34:55 timeout=timeout, 13:34:55 verify=verify, 13:34:55 cert=cert, 13:34:55 proxies=proxies, 13:34:55 ) 13:34:55 13:34:55 chunked = not (request.body is None or "Content-Length" in request.headers) 13:34:55 13:34:55 if isinstance(timeout, tuple): 13:34:55 try: 13:34:55 connect, read = timeout 13:34:55 timeout = TimeoutSauce(connect=connect, read=read) 13:34:55 except ValueError: 13:34:55 raise ValueError( 13:34:55 f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 13:34:55 f"or a single float to set both timeouts to the same value." 
13:34:55 ) 13:34:55 elif isinstance(timeout, TimeoutSauce): 13:34:55 pass 13:34:55 else: 13:34:55 timeout = TimeoutSauce(connect=timeout, read=timeout) 13:34:55 13:34:55 try: 13:34:55 > resp = conn.urlopen( 13:34:55 method=request.method, 13:34:55 url=url, 13:34:55 body=request.body, 13:34:55 headers=request.headers, 13:34:55 redirect=False, 13:34:55 assert_same_host=False, 13:34:55 preload_content=False, 13:34:55 decode_content=False, 13:34:55 retries=self.max_retries, 13:34:55 timeout=timeout, 13:34:55 chunked=chunked, 13:34:55 ) 13:34:55 13:34:55 ../.tox/tests121/lib/python3.11/site-packages/requests/adapters.py:644: 13:34:55 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 13:34:55 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connectionpool.py:841: in urlopen 13:34:55 retries = retries.increment( 13:34:55 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 13:34:55 13:34:55 self = Retry(total=0, connect=None, read=False, redirect=None, status=None) 13:34:55 method = 'GET' 13:34:55 url = '/rests/data/transportpce-portmapping:network/nodes=ROADMA01/mapping=SRG1-PP7-TXRX' 13:34:55 response = None 13:34:55 error = NewConnectionError("HTTPConnection(host='localhost', port=8191): Failed to establish a new connection: [Errno 111] Connection refused") 13:34:55 _pool = 13:34:55 _stacktrace = 13:34:55 13:34:55 def increment( 13:34:55 self, 13:34:55 method: str | None = None, 13:34:55 url: str | None = None, 13:34:55 response: BaseHTTPResponse | None = None, 13:34:55 error: Exception | None = None, 13:34:55 _pool: ConnectionPool | None = None, 13:34:55 _stacktrace: TracebackType | None = None, 13:34:55 ) -> Self: 13:34:55 """Return a new Retry object with incremented retry counters. 13:34:55 13:34:55 :param response: A response object, or None, if the server did not 13:34:55 return a response. 
13:34:55 :type response: :class:`~urllib3.response.BaseHTTPResponse` 13:34:55 :param Exception error: An error encountered during the request, or 13:34:55 None if the response was received successfully. 13:34:55 13:34:55 :return: A new ``Retry`` object. 13:34:55 """ 13:34:55 if self.total is False and error: 13:34:55 # Disabled, indicate to re-raise the error. 13:34:55 raise reraise(type(error), error, _stacktrace) 13:34:55 13:34:55 total = self.total 13:34:55 if total is not None: 13:34:55 total -= 1 13:34:55 13:34:55 connect = self.connect 13:34:55 read = self.read 13:34:55 redirect = self.redirect 13:34:55 status_count = self.status 13:34:55 other = self.other 13:34:55 cause = "unknown" 13:34:55 status = None 13:34:55 redirect_location = None 13:34:55 13:34:55 if error and self._is_connection_error(error): 13:34:55 # Connect retry? 13:34:55 if connect is False: 13:34:55 raise reraise(type(error), error, _stacktrace) 13:34:55 elif connect is not None: 13:34:55 connect -= 1 13:34:55 13:34:55 elif error and self._is_read_error(error): 13:34:55 # Read retry? 13:34:55 if read is False or method is None or not self._is_method_retryable(method): 13:34:55 raise reraise(type(error), error, _stacktrace) 13:34:55 elif read is not None: 13:34:55 read -= 1 13:34:55 13:34:55 elif error: 13:34:55 # Other retry? 13:34:55 if other is not None: 13:34:55 other -= 1 13:34:55 13:34:55 elif response and response.get_redirect_location(): 13:34:55 # Redirect retry? 
13:34:55                 if redirect is not None:
13:34:55                     redirect -= 1
13:34:55                 cause = "too many redirects"
13:34:55                 response_redirect_location = response.get_redirect_location()
13:34:55                 if response_redirect_location:
13:34:55                     redirect_location = response_redirect_location
13:34:55                 status = response.status
13:34:55 
13:34:55             else:
13:34:55                 # Incrementing because of a server error like a 500 in
13:34:55                 # status_forcelist and the given method is in the allowed_methods
13:34:55                 cause = ResponseError.GENERIC_ERROR
13:34:55                 if response and response.status:
13:34:55                     if status_count is not None:
13:34:55                         status_count -= 1
13:34:55                     cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status)
13:34:55                     status = response.status
13:34:55 
13:34:55             history = self.history + (
13:34:55                 RequestHistory(method, url, error, status, redirect_location),
13:34:55             )
13:34:55 
13:34:55             new_retry = self.new(
13:34:55                 total=total,
13:34:55                 connect=connect,
13:34:55                 read=read,
13:34:55                 redirect=redirect,
13:34:55                 status=status_count,
13:34:55                 other=other,
13:34:55                 history=history,
13:34:55             )
13:34:55 
13:34:55             if new_retry.is_exhausted():
13:34:55                 reason = error or ResponseError(cause)
13:34:55 >               raise MaxRetryError(_pool, url, reason) from reason  # type: ignore[arg-type]
13:34:55                 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
13:34:55 E               urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=8191): Max retries exceeded with url: /rests/data/transportpce-portmapping:network/nodes=ROADMA01/mapping=SRG1-PP7-TXRX (Caused by NewConnectionError("HTTPConnection(host='localhost', port=8191): Failed to establish a new connection: [Errno 111] Connection refused"))
13:34:55 
13:34:55 ../.tox/tests121/lib/python3.11/site-packages/urllib3/util/retry.py:535: MaxRetryError
13:34:55 
13:34:55 During handling of the above exception, another exception occurred:
13:34:55 
13:34:55 self = 
13:34:55 
13:34:55     def test_05_rdm_portmapping_SRG1_PP7_TXRX(self):
13:34:55 >       response = test_utils.get_portmapping_node_attr("ROADMA01", "mapping", "SRG1-PP7-TXRX")
13:34:55         ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
13:34:55 
13:34:55 transportpce_tests/1.2.1/test01_portmapping.py:82: 
13:34:55 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
13:34:55 transportpce_tests/common/test_utils.py:519: in get_portmapping_node_attr
13:34:55     response = get_request(target_url)
13:34:55                ^^^^^^^^^^^^^^^^^^^^^^^
13:34:55 transportpce_tests/common/test_utils.py:117: in get_request
13:34:55     return requests.request(
13:34:55 ../.tox/tests121/lib/python3.11/site-packages/requests/api.py:59: in request
13:34:55     return session.request(method=method, url=url, **kwargs)
13:34:55            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
13:34:55 ../.tox/tests121/lib/python3.11/site-packages/requests/sessions.py:589: in request
13:34:55     resp = self.send(prep, **send_kwargs)
13:34:55            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
13:34:55 ../.tox/tests121/lib/python3.11/site-packages/requests/sessions.py:703: in send
13:34:55     r = adapter.send(request, **kwargs)
13:34:55         ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
13:34:55 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
13:34:55 
13:34:55 self = 
13:34:55 request = , stream = False
13:34:55 timeout = Timeout(connect=30, read=30, total=None), verify = True, cert = None
13:34:55 proxies = OrderedDict()
13:34:55 
13:34:55     def send(
13:34:55         self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None
13:34:55     ):
13:34:55         """Sends PreparedRequest object. Returns Response object.
13:34:55 
13:34:55         :param request: The :class:`PreparedRequest ` being sent.
13:34:55         :param stream: (optional) Whether to stream the request content.
13:34:55         :param timeout: (optional) How long to wait for the server to send
13:34:55             data before giving up, as a float, or a :ref:`(connect timeout,
13:34:55             read timeout) ` tuple.
13:34:55         :type timeout: float or tuple or urllib3 Timeout object
13:34:55         :param verify: (optional) Either a boolean, in which case it controls whether
13:34:55             we verify the server's TLS certificate, or a string, in which case it
13:34:55             must be a path to a CA bundle to use
13:34:55         :param cert: (optional) Any user-provided SSL certificate to be trusted.
13:34:55         :param proxies: (optional) The proxies dictionary to apply to the request.
13:34:55         :rtype: requests.Response
13:34:55         """
13:34:55
13:34:55         try:
13:34:55             conn = self.get_connection_with_tls_context(
13:34:55                 request, verify, proxies=proxies, cert=cert
13:34:55             )
13:34:55         except LocationValueError as e:
13:34:55             raise InvalidURL(e, request=request)
13:34:55
13:34:55         self.cert_verify(conn, request.url, verify, cert)
13:34:55         url = self.request_url(request, proxies)
13:34:55         self.add_headers(
13:34:55             request,
13:34:55             stream=stream,
13:34:55             timeout=timeout,
13:34:55             verify=verify,
13:34:55             cert=cert,
13:34:55             proxies=proxies,
13:34:55         )
13:34:55
13:34:55         chunked = not (request.body is None or "Content-Length" in request.headers)
13:34:55
13:34:55         if isinstance(timeout, tuple):
13:34:55             try:
13:34:55                 connect, read = timeout
13:34:55                 timeout = TimeoutSauce(connect=connect, read=read)
13:34:55             except ValueError:
13:34:55                 raise ValueError(
13:34:55                     f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, "
13:34:55                     f"or a single float to set both timeouts to the same value."
13:34:55                 )
13:34:55         elif isinstance(timeout, TimeoutSauce):
13:34:55             pass
13:34:55         else:
13:34:55             timeout = TimeoutSauce(connect=timeout, read=timeout)
13:34:55
13:34:55         try:
13:34:55             resp = conn.urlopen(
13:34:55                 method=request.method,
13:34:55                 url=url,
13:34:55                 body=request.body,
13:34:55                 headers=request.headers,
13:34:55                 redirect=False,
13:34:55                 assert_same_host=False,
13:34:55                 preload_content=False,
13:34:55                 decode_content=False,
13:34:55                 retries=self.max_retries,
13:34:55                 timeout=timeout,
13:34:55                 chunked=chunked,
13:34:55             )
13:34:55
13:34:55         except (ProtocolError, OSError) as err:
13:34:55             raise ConnectionError(err, request=request)
13:34:55
13:34:55         except MaxRetryError as e:
13:34:55             if isinstance(e.reason, ConnectTimeoutError):
13:34:55                 # TODO: Remove this in 3.0.0: see #2811
13:34:55                 if not isinstance(e.reason, NewConnectionError):
13:34:55                     raise ConnectTimeout(e, request=request)
13:34:55
13:34:55             if isinstance(e.reason, ResponseError):
13:34:55                 raise RetryError(e, request=request)
13:34:55
13:34:55             if isinstance(e.reason, _ProxyError):
13:34:55                 raise ProxyError(e, request=request)
13:34:55
13:34:55             if isinstance(e.reason, _SSLError):
13:34:55                 # This branch is for urllib3 v1.22 and later.
13:34:55                 raise SSLError(e, request=request)
13:34:55
13:34:55 >           raise ConnectionError(e, request=request)
13:34:55 E           requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=8191): Max retries exceeded with url: /rests/data/transportpce-portmapping:network/nodes=ROADMA01/mapping=SRG1-PP7-TXRX (Caused by NewConnectionError("HTTPConnection(host='localhost', port=8191): Failed to establish a new connection: [Errno 111] Connection refused"))
13:34:55
13:34:55 ../.tox/tests121/lib/python3.11/site-packages/requests/adapters.py:677: ConnectionError
13:34:55 ----------------------------- Captured stdout call -----------------------------
13:34:55 execution of test_05_rdm_portmapping_SRG1_PP7_TXRX
13:34:55 ______ TestTransportPCEPortmapping.test_06_rdm_portmapping_SRG3_PP1_TXRX _______
13:34:55
13:34:55 self =
13:34:55
13:34:55     def _new_conn(self) -> socket.socket:
13:34:55         """Establish a socket connection and set nodelay settings on it.
13:34:55
13:34:55         :return: New socket connection.
13:34:55         """
13:34:55         try:
13:34:55 >           sock = connection.create_connection(
13:34:55                 (self._dns_host, self.port),
13:34:55                 self.timeout,
13:34:55                 source_address=self.source_address,
13:34:55                 socket_options=self.socket_options,
13:34:55             )
13:34:55
13:34:55 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:204:
13:34:55 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
13:34:55 ../.tox/tests121/lib/python3.11/site-packages/urllib3/util/connection.py:85: in create_connection
13:34:55     raise err
13:34:55 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
13:34:55
13:34:55 address = ('localhost', 8191), timeout = 30, source_address = None
13:34:55 socket_options = [(6, 1, 1)]
13:34:55
13:34:55 def create_connection(
13:34:55     address: tuple[str, int],
13:34:55     timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT,
13:34:55     source_address: tuple[str, int] | None = None,
13:34:55     socket_options: _TYPE_SOCKET_OPTIONS | None = None,
13:34:55 ) -> socket.socket:
13:34:55     """Connect to *address* and return the socket object.
13:34:55
13:34:55     Convenience function.  Connect to *address* (a 2-tuple ``(host,
13:34:55     port)``) and return the socket object.  Passing the optional
13:34:55     *timeout* parameter will set the timeout on the socket instance
13:34:55     before attempting to connect.  If no *timeout* is supplied, the
13:34:55     global default timeout setting returned by :func:`socket.getdefaulttimeout`
13:34:55     is used.  If *source_address* is set it must be a tuple of (host, port)
13:34:55     for the socket to bind as a source address before making the connection.
13:34:55     An host of '' or port 0 tells the OS to use the default.
13:34:55     """
13:34:55
13:34:55     host, port = address
13:34:55     if host.startswith("["):
13:34:55         host = host.strip("[]")
13:34:55     err = None
13:34:55
13:34:55     # Using the value from allowed_gai_family() in the context of getaddrinfo lets
13:34:55     # us select whether to work with IPv4 DNS records, IPv6 records, or both.
13:34:55     # The original create_connection function always returns all records.
13:34:55     family = allowed_gai_family()
13:34:55
13:34:55     try:
13:34:55         host.encode("idna")
13:34:55     except UnicodeError:
13:34:55         raise LocationParseError(f"'{host}', label empty or too long") from None
13:34:55
13:34:55     for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM):
13:34:55         af, socktype, proto, canonname, sa = res
13:34:55         sock = None
13:34:55         try:
13:34:55             sock = socket.socket(af, socktype, proto)
13:34:55
13:34:55             # If provided, set socket level options before connecting.
13:34:55             _set_socket_options(sock, socket_options)
13:34:55
13:34:55             if timeout is not _DEFAULT_TIMEOUT:
13:34:55                 sock.settimeout(timeout)
13:34:55             if source_address:
13:34:55                 sock.bind(source_address)
13:34:55 >           sock.connect(sa)
13:34:55 E           ConnectionRefusedError: [Errno 111] Connection refused
13:34:55
13:34:55 ../.tox/tests121/lib/python3.11/site-packages/urllib3/util/connection.py:73: ConnectionRefusedError
13:34:55
13:34:55 The above exception was the direct cause of the following exception:
13:34:55
13:34:55 self =
13:34:55 method = 'GET'
13:34:55 url = '/rests/data/transportpce-portmapping:network/nodes=ROADMA01/mapping=SRG3-PP1-TXRX'
13:34:55 body = None
13:34:55 headers = {'User-Agent': 'python-requests/2.32.5', 'Accept-Encoding': 'gzip, deflate', 'Accept': 'application/json', 'Connection': 'keep-alive', 'Content-Type': 'application/json', 'Authorization': 'Basic YWRtaW46YWRtaW4='}
13:34:55 retries = Retry(total=0, connect=None, read=False, redirect=None, status=None)
13:34:55 redirect = False, assert_same_host = False
13:34:55 timeout = Timeout(connect=30, read=30, total=None), pool_timeout = None
13:34:55 release_conn = False, chunked = False, body_pos = None, preload_content = False
13:34:55 decode_content = False, response_kw = {}
13:34:55 parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/rests/data/transportpce-portmapping:network/nodes=ROADMA01/mapping=SRG3-PP1-TXRX', query=None, fragment=None)
13:34:55 destination_scheme = None, conn = None, release_this_conn = True
13:34:55 http_tunnel_required = False, err = None, clean_exit = False
13:34:55
13:34:55     def urlopen(  # type: ignore[override]
13:34:55         self,
13:34:55         method: str,
13:34:55         url: str,
13:34:55         body: _TYPE_BODY | None = None,
13:34:55         headers: typing.Mapping[str, str] | None = None,
13:34:55         retries: Retry | bool | int | None = None,
13:34:55         redirect: bool = True,
13:34:55         assert_same_host: bool = True,
13:34:55         timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT,
13:34:55         pool_timeout: int | None = None,
13:34:55         release_conn: bool | None = None,
13:34:55         chunked: bool = False,
13:34:55         body_pos: _TYPE_BODY_POSITION | None = None,
13:34:55         preload_content: bool = True,
13:34:55         decode_content: bool = True,
13:34:55         **response_kw: typing.Any,
13:34:55     ) -> BaseHTTPResponse:
13:34:55         """
13:34:55         Get a connection from the pool and perform an HTTP request. This is the
13:34:55         lowest level call for making a request, so you'll need to specify all
13:34:55         the raw details.
13:34:55
13:34:55         .. note::
13:34:55
13:34:55            More commonly, it's appropriate to use a convenience method
13:34:55            such as :meth:`request`.
13:34:55
13:34:55         .. note::
13:34:55
13:34:55            `release_conn` will only behave as expected if
13:34:55            `preload_content=False` because we want to make
13:34:55            `preload_content=False` the default behaviour someday soon without
13:34:55            breaking backwards compatibility.
13:34:55
13:34:55         :param method:
13:34:55             HTTP request method (such as GET, POST, PUT, etc.)
13:34:55
13:34:55         :param url:
13:34:55             The URL to perform the request on.
13:34:55
13:34:55         :param body:
13:34:55             Data to send in the request body, either :class:`str`, :class:`bytes`,
13:34:55             an iterable of :class:`str`/:class:`bytes`, or a file-like object.
13:34:55
13:34:55         :param headers:
13:34:55             Dictionary of custom headers to send, such as User-Agent,
13:34:55             If-None-Match, etc. If None, pool headers are used. If provided,
13:34:55             these headers completely replace any pool-specific headers.
13:34:55
13:34:55         :param retries:
13:34:55             Configure the number of retries to allow before raising a
13:34:55             :class:`~urllib3.exceptions.MaxRetryError` exception.
13:34:55
13:34:55             If ``None`` (default) will retry 3 times, see ``Retry.DEFAULT``. Pass a
13:34:55             :class:`~urllib3.util.retry.Retry` object for fine-grained control
13:34:55             over different types of retries.
13:34:55             Pass an integer number to retry connection errors that many times,
13:34:55             but no other types of errors. Pass zero to never retry.
13:34:55
13:34:55             If ``False``, then retries are disabled and any exception is raised
13:34:55             immediately. Also, instead of raising a MaxRetryError on redirects,
13:34:55             the redirect response will be returned.
13:34:55
13:34:55         :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int.
13:34:55
13:34:55         :param redirect:
13:34:55             If True, automatically handle redirects (status codes 301, 302,
13:34:55             303, 307, 308). Each redirect counts as a retry. Disabling retries
13:34:55             will disable redirect, too.
13:34:55
13:34:55         :param assert_same_host:
13:34:55             If ``True``, will make sure that the host of the pool requests is
13:34:55             consistent else will raise HostChangedError. When ``False``, you can
13:34:55             use the pool on an HTTP proxy and request foreign hosts.
13:34:55
13:34:55         :param timeout:
13:34:55             If specified, overrides the default timeout for this one
13:34:55             request. It may be a float (in seconds) or an instance of
13:34:55             :class:`urllib3.util.Timeout`.
13:34:55
13:34:55         :param pool_timeout:
13:34:55             If set and the pool is set to block=True, then this method will
13:34:55             block for ``pool_timeout`` seconds and raise EmptyPoolError if no
13:34:55             connection is available within the time period.
13:34:55
13:34:55         :param bool preload_content:
13:34:55             If True, the response's body will be preloaded into memory.
13:34:55
13:34:55         :param bool decode_content:
13:34:55             If True, will attempt to decode the body based on the
13:34:55             'content-encoding' header.
13:34:55
13:34:55         :param release_conn:
13:34:55             If False, then the urlopen call will not release the connection
13:34:55             back into the pool once a response is received (but will release if
13:34:55             you read the entire contents of the response such as when
13:34:55             `preload_content=True`). This is useful if you're not preloading
13:34:55             the response's content immediately. You will need to call
13:34:55             ``r.release_conn()`` on the response ``r`` to return the connection
13:34:55             back into the pool. If None, it takes the value of ``preload_content``
13:34:55             which defaults to ``True``.
13:34:55
13:34:55         :param bool chunked:
13:34:55             If True, urllib3 will send the body using chunked transfer
13:34:55             encoding. Otherwise, urllib3 will send the body using the standard
13:34:55             content-length form. Defaults to False.
13:34:55
13:34:55         :param int body_pos:
13:34:55             Position to seek to in file-like body in the event of a retry or
13:34:55             redirect. Typically this won't need to be set because urllib3 will
13:34:55             auto-populate the value when needed.
13:34:55         """
13:34:55         parsed_url = parse_url(url)
13:34:55         destination_scheme = parsed_url.scheme
13:34:55
13:34:55         if headers is None:
13:34:55             headers = self.headers
13:34:55
13:34:55         if not isinstance(retries, Retry):
13:34:55             retries = Retry.from_int(retries, redirect=redirect, default=self.retries)
13:34:55
13:34:55         if release_conn is None:
13:34:55             release_conn = preload_content
13:34:55
13:34:55         # Check host
13:34:55         if assert_same_host and not self.is_same_host(url):
13:34:55             raise HostChangedError(self, url, retries)
13:34:55
13:34:55         # Ensure that the URL we're connecting to is properly encoded
13:34:55         if url.startswith("/"):
13:34:55             url = to_str(_encode_target(url))
13:34:55         else:
13:34:55             url = to_str(parsed_url.url)
13:34:55
13:34:55         conn = None
13:34:55
13:34:55         # Track whether `conn` needs to be released before
13:34:55         # returning/raising/recursing. Update this variable if necessary, and
13:34:55         # leave `release_conn` constant throughout the function. That way, if
13:34:55         # the function recurses, the original value of `release_conn` will be
13:34:55         # passed down into the recursive call, and its value will be respected.
13:34:55         #
13:34:55         # See issue #651 [1] for details.
13:34:55         #
13:34:55         # [1]
13:34:55         release_this_conn = release_conn
13:34:55
13:34:55         http_tunnel_required = connection_requires_http_tunnel(
13:34:55             self.proxy, self.proxy_config, destination_scheme
13:34:55         )
13:34:55
13:34:55         # Merge the proxy headers. Only done when not using HTTP CONNECT. We
13:34:55         # have to copy the headers dict so we can safely change it without those
13:34:55         # changes being reflected in anyone else's copy.
13:34:55         if not http_tunnel_required:
13:34:55             headers = headers.copy()  # type: ignore[attr-defined]
13:34:55             headers.update(self.proxy_headers)  # type: ignore[union-attr]
13:34:55
13:34:55         # Must keep the exception bound to a separate variable or else Python 3
13:34:55         # complains about UnboundLocalError.
13:34:55         err = None
13:34:55
13:34:55         # Keep track of whether we cleanly exited the except block. This
13:34:55         # ensures we do proper cleanup in finally.
13:34:55         clean_exit = False
13:34:55
13:34:55         # Rewind body position, if needed. Record current position
13:34:55         # for future rewinds in the event of a redirect/retry.
13:34:55         body_pos = set_file_position(body, body_pos)
13:34:55
13:34:55         try:
13:34:55             # Request a connection from the queue.
13:34:55             timeout_obj = self._get_timeout(timeout)
13:34:55             conn = self._get_conn(timeout=pool_timeout)
13:34:55
13:34:55             conn.timeout = timeout_obj.connect_timeout  # type: ignore[assignment]
13:34:55
13:34:55             # Is this a closed/new connection that requires CONNECT tunnelling?
13:34:55             if self.proxy is not None and http_tunnel_required and conn.is_closed:
13:34:55                 try:
13:34:55                     self._prepare_proxy(conn)
13:34:55                 except (BaseSSLError, OSError, SocketTimeout) as e:
13:34:55                     self._raise_timeout(
13:34:55                         err=e, url=self.proxy.url, timeout_value=conn.timeout
13:34:55                     )
13:34:55                     raise
13:34:55
13:34:55             # If we're going to release the connection in ``finally:``, then
13:34:55             # the response doesn't need to know about the connection. Otherwise
13:34:55             # it will also try to release it and we'll have a double-release
13:34:55             # mess.
13:34:55             response_conn = conn if not release_conn else None
13:34:55
13:34:55             # Make the request on the HTTPConnection object
13:34:55 >           response = self._make_request(
13:34:55                 conn,
13:34:55                 method,
13:34:55                 url,
13:34:55                 timeout=timeout_obj,
13:34:55                 body=body,
13:34:55                 headers=headers,
13:34:55                 chunked=chunked,
13:34:55                 retries=retries,
13:34:55                 response_conn=response_conn,
13:34:55                 preload_content=preload_content,
13:34:55                 decode_content=decode_content,
13:34:55                 **response_kw,
13:34:55             )
13:34:55
13:34:55 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connectionpool.py:787:
13:34:55 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
13:34:55 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connectionpool.py:493: in _make_request
13:34:55     conn.request(
13:34:55 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:500: in request
13:34:55     self.endheaders()
13:34:55 /opt/pyenv/versions/3.11.10/lib/python3.11/http/client.py:1298: in endheaders
13:34:55     self._send_output(message_body, encode_chunked=encode_chunked)
13:34:55 /opt/pyenv/versions/3.11.10/lib/python3.11/http/client.py:1058: in _send_output
13:34:55     self.send(msg)
13:34:55 /opt/pyenv/versions/3.11.10/lib/python3.11/http/client.py:996: in send
13:34:55     self.connect()
13:34:55 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:331: in connect
13:34:55     self.sock = self._new_conn()
13:34:55     ^^^^^^^^^^^^^^^^
13:34:55 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
13:34:55
13:34:55 self =
13:34:55
13:34:55     def _new_conn(self) -> socket.socket:
13:34:55         """Establish a socket connection and set nodelay settings on it.
13:34:55
13:34:55         :return: New socket connection.
13:34:55         """
13:34:55         try:
13:34:55             sock = connection.create_connection(
13:34:55                 (self._dns_host, self.port),
13:34:55                 self.timeout,
13:34:55                 source_address=self.source_address,
13:34:55                 socket_options=self.socket_options,
13:34:55             )
13:34:55         except socket.gaierror as e:
13:34:55             raise NameResolutionError(self.host, self, e) from e
13:34:55         except SocketTimeout as e:
13:34:55             raise ConnectTimeoutError(
13:34:55                 self,
13:34:55                 f"Connection to {self.host} timed out. (connect timeout={self.timeout})",
13:34:55             ) from e
13:34:55
13:34:55         except OSError as e:
13:34:55 >           raise NewConnectionError(
13:34:55                 self, f"Failed to establish a new connection: {e}"
13:34:55             ) from e
13:34:55 E           urllib3.exceptions.NewConnectionError: HTTPConnection(host='localhost', port=8191): Failed to establish a new connection: [Errno 111] Connection refused
13:34:55
13:34:55 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:219: NewConnectionError
13:34:55
13:34:55 The above exception was the direct cause of the following exception:
13:34:55
13:34:55 self =
13:34:55 request = , stream = False
13:34:55 timeout = Timeout(connect=30, read=30, total=None), verify = True, cert = None
13:34:55 proxies = OrderedDict()
13:34:55
13:34:55     def send(
13:34:55         self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None
13:34:55     ):
13:34:55         """Sends PreparedRequest object. Returns Response object.
13:34:55
13:34:55         :param request: The :class:`PreparedRequest ` being sent.
13:34:55         :param stream: (optional) Whether to stream the request content.
13:34:55         :param timeout: (optional) How long to wait for the server to send
13:34:55             data before giving up, as a float, or a :ref:`(connect timeout,
13:34:55             read timeout) ` tuple.
13:34:55         :type timeout: float or tuple or urllib3 Timeout object
13:34:55         :param verify: (optional) Either a boolean, in which case it controls whether
13:34:55             we verify the server's TLS certificate, or a string, in which case it
13:34:55             must be a path to a CA bundle to use
13:34:55         :param cert: (optional) Any user-provided SSL certificate to be trusted.
13:34:55         :param proxies: (optional) The proxies dictionary to apply to the request.
13:34:55         :rtype: requests.Response
13:34:55         """
13:34:55
13:34:55         try:
13:34:55             conn = self.get_connection_with_tls_context(
13:34:55                 request, verify, proxies=proxies, cert=cert
13:34:55             )
13:34:55         except LocationValueError as e:
13:34:55             raise InvalidURL(e, request=request)
13:34:55
13:34:55         self.cert_verify(conn, request.url, verify, cert)
13:34:55         url = self.request_url(request, proxies)
13:34:55         self.add_headers(
13:34:55             request,
13:34:55             stream=stream,
13:34:55             timeout=timeout,
13:34:55             verify=verify,
13:34:55             cert=cert,
13:34:55             proxies=proxies,
13:34:55         )
13:34:55
13:34:55         chunked = not (request.body is None or "Content-Length" in request.headers)
13:34:55
13:34:55         if isinstance(timeout, tuple):
13:34:55             try:
13:34:55                 connect, read = timeout
13:34:55                 timeout = TimeoutSauce(connect=connect, read=read)
13:34:55             except ValueError:
13:34:55                 raise ValueError(
13:34:55                     f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, "
13:34:55                     f"or a single float to set both timeouts to the same value."
13:34:55                 )
13:34:55         elif isinstance(timeout, TimeoutSauce):
13:34:55             pass
13:34:55         else:
13:34:55             timeout = TimeoutSauce(connect=timeout, read=timeout)
13:34:55
13:34:55         try:
13:34:55 >           resp = conn.urlopen(
13:34:55                 method=request.method,
13:34:55                 url=url,
13:34:55                 body=request.body,
13:34:55                 headers=request.headers,
13:34:55                 redirect=False,
13:34:55                 assert_same_host=False,
13:34:55                 preload_content=False,
13:34:55                 decode_content=False,
13:34:55                 retries=self.max_retries,
13:34:55                 timeout=timeout,
13:34:55                 chunked=chunked,
13:34:55             )
13:34:55
13:34:55 ../.tox/tests121/lib/python3.11/site-packages/requests/adapters.py:644:
13:34:55 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
13:34:55 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connectionpool.py:841: in urlopen
13:34:55     retries = retries.increment(
13:34:55 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
13:34:55
13:34:55 self = Retry(total=0, connect=None, read=False, redirect=None, status=None)
13:34:55 method = 'GET'
13:34:55 url = '/rests/data/transportpce-portmapping:network/nodes=ROADMA01/mapping=SRG3-PP1-TXRX'
13:34:55 response = None
13:34:55 error = NewConnectionError("HTTPConnection(host='localhost', port=8191): Failed to establish a new connection: [Errno 111] Connection refused")
13:34:55 _pool =
13:34:55 _stacktrace =
13:34:55
13:34:55     def increment(
13:34:55         self,
13:34:55         method: str | None = None,
13:34:55         url: str | None = None,
13:34:55         response: BaseHTTPResponse | None = None,
13:34:55         error: Exception | None = None,
13:34:55         _pool: ConnectionPool | None = None,
13:34:55         _stacktrace: TracebackType | None = None,
13:34:55     ) -> Self:
13:34:55         """Return a new Retry object with incremented retry counters.
13:34:55
13:34:55         :param response: A response object, or None, if the server did not
13:34:55             return a response.
13:34:55         :type response: :class:`~urllib3.response.BaseHTTPResponse`
13:34:55         :param Exception error: An error encountered during the request, or
13:34:55             None if the response was received successfully.
13:34:55
13:34:55         :return: A new ``Retry`` object.
13:34:55         """
13:34:55         if self.total is False and error:
13:34:55             # Disabled, indicate to re-raise the error.
13:34:55             raise reraise(type(error), error, _stacktrace)
13:34:55
13:34:55         total = self.total
13:34:55         if total is not None:
13:34:55             total -= 1
13:34:55
13:34:55         connect = self.connect
13:34:55         read = self.read
13:34:55         redirect = self.redirect
13:34:55         status_count = self.status
13:34:55         other = self.other
13:34:55         cause = "unknown"
13:34:55         status = None
13:34:55         redirect_location = None
13:34:55
13:34:55         if error and self._is_connection_error(error):
13:34:55             # Connect retry?
13:34:55             if connect is False:
13:34:55                 raise reraise(type(error), error, _stacktrace)
13:34:55             elif connect is not None:
13:34:55                 connect -= 1
13:34:55
13:34:55         elif error and self._is_read_error(error):
13:34:55             # Read retry?
13:34:55             if read is False or method is None or not self._is_method_retryable(method):
13:34:55                 raise reraise(type(error), error, _stacktrace)
13:34:55             elif read is not None:
13:34:55                 read -= 1
13:34:55
13:34:55         elif error:
13:34:55             # Other retry?
13:34:55             if other is not None:
13:34:55                 other -= 1
13:34:55
13:34:55         elif response and response.get_redirect_location():
13:34:55             # Redirect retry?
13:34:55             if redirect is not None:
13:34:55                 redirect -= 1
13:34:55             cause = "too many redirects"
13:34:55             response_redirect_location = response.get_redirect_location()
13:34:55             if response_redirect_location:
13:34:55                 redirect_location = response_redirect_location
13:34:55             status = response.status
13:34:55
13:34:55         else:
13:34:55             # Incrementing because of a server error like a 500 in
13:34:55             # status_forcelist and the given method is in the allowed_methods
13:34:55             cause = ResponseError.GENERIC_ERROR
13:34:55             if response and response.status:
13:34:55                 if status_count is not None:
13:34:55                     status_count -= 1
13:34:55                 cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status)
13:34:55             status = response.status
13:34:55
13:34:55         history = self.history + (
13:34:55             RequestHistory(method, url, error, status, redirect_location),
13:34:55         )
13:34:55
13:34:55         new_retry = self.new(
13:34:55             total=total,
13:34:55             connect=connect,
13:34:55             read=read,
13:34:55             redirect=redirect,
13:34:55             status=status_count,
13:34:55             other=other,
13:34:55             history=history,
13:34:55         )
13:34:55
13:34:55         if new_retry.is_exhausted():
13:34:55             reason = error or ResponseError(cause)
13:34:55 >           raise MaxRetryError(_pool, url, reason) from reason  # type: ignore[arg-type]
13:34:55             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
13:34:55 E           urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=8191): Max retries exceeded with url: /rests/data/transportpce-portmapping:network/nodes=ROADMA01/mapping=SRG3-PP1-TXRX (Caused by NewConnectionError("HTTPConnection(host='localhost', port=8191): Failed to establish a new connection: [Errno 111] Connection refused"))
13:34:55
13:34:55 ../.tox/tests121/lib/python3.11/site-packages/urllib3/util/retry.py:535: MaxRetryError
13:34:55
13:34:55 During handling of the above exception, another exception occurred:
13:34:55
13:34:55 self =
13:34:55
13:34:55     def test_06_rdm_portmapping_SRG3_PP1_TXRX(self):
13:34:55 >       response = test_utils.get_portmapping_node_attr("ROADMA01", "mapping", "SRG3-PP1-TXRX")
13:34:55         ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
13:34:55
13:34:55 transportpce_tests/1.2.1/test01_portmapping.py:91:
13:34:55 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
13:34:55 transportpce_tests/common/test_utils.py:519: in get_portmapping_node_attr
13:34:55     response = get_request(target_url)
13:34:55     ^^^^^^^^^^^^^^^^^^^^^^^
13:34:55 transportpce_tests/common/test_utils.py:117: in get_request
13:34:55     return requests.request(
13:34:55 ../.tox/tests121/lib/python3.11/site-packages/requests/api.py:59: in request
13:34:55     return session.request(method=method, url=url, **kwargs)
13:34:55     ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
13:34:55 ../.tox/tests121/lib/python3.11/site-packages/requests/sessions.py:589: in request
13:34:55     resp = self.send(prep, **send_kwargs)
13:34:55     ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
13:34:55 ../.tox/tests121/lib/python3.11/site-packages/requests/sessions.py:703: in send
13:34:55     r = adapter.send(request, **kwargs)
13:34:55     ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
13:34:55 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
13:34:55
13:34:55 self =
13:34:55 request = , stream = False
13:34:55 timeout = Timeout(connect=30, read=30, total=None), verify = True, cert = None
13:34:55 proxies = OrderedDict()
13:34:55
13:34:55     def send(
13:34:55         self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None
13:34:55     ):
13:34:55         """Sends PreparedRequest object. Returns Response object.
13:34:55
13:34:55         :param request: The :class:`PreparedRequest ` being sent.
13:34:55         :param stream: (optional) Whether to stream the request content.
13:34:55         :param timeout: (optional) How long to wait for the server to send
13:34:55             data before giving up, as a float, or a :ref:`(connect timeout,
13:34:55             read timeout) ` tuple.
13:34:55         :type timeout: float or tuple or urllib3 Timeout object
13:34:55         :param verify: (optional) Either a boolean, in which case it controls whether
13:34:55             we verify the server's TLS certificate, or a string, in which case it
13:34:55             must be a path to a CA bundle to use
13:34:55         :param cert: (optional) Any user-provided SSL certificate to be trusted.
13:34:55         :param proxies: (optional) The proxies dictionary to apply to the request.
13:34:55         :rtype: requests.Response
13:34:55         """
13:34:55
13:34:55         try:
13:34:55             conn = self.get_connection_with_tls_context(
13:34:55                 request, verify, proxies=proxies, cert=cert
13:34:55             )
13:34:55         except LocationValueError as e:
13:34:55             raise InvalidURL(e, request=request)
13:34:55
13:34:55         self.cert_verify(conn, request.url, verify, cert)
13:34:55         url = self.request_url(request, proxies)
13:34:55         self.add_headers(
13:34:55             request,
13:34:55             stream=stream,
13:34:55             timeout=timeout,
13:34:55             verify=verify,
13:34:55             cert=cert,
13:34:55             proxies=proxies,
13:34:55         )
13:34:55
13:34:55         chunked = not (request.body is None or "Content-Length" in request.headers)
13:34:55
13:34:55         if isinstance(timeout, tuple):
13:34:55             try:
13:34:55                 connect, read = timeout
13:34:55                 timeout = TimeoutSauce(connect=connect, read=read)
13:34:55             except ValueError:
13:34:55                 raise ValueError(
13:34:55                     f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, "
13:34:55                     f"or a single float to set both timeouts to the same value."
13:34:55                 )
13:34:55         elif isinstance(timeout, TimeoutSauce):
13:34:55             pass
13:34:55         else:
13:34:55             timeout = TimeoutSauce(connect=timeout, read=timeout)
13:34:55
13:34:55         try:
13:34:55             resp = conn.urlopen(
13:34:55                 method=request.method,
13:34:55                 url=url,
13:34:55                 body=request.body,
13:34:55                 headers=request.headers,
13:34:55                 redirect=False,
13:34:55                 assert_same_host=False,
13:34:55                 preload_content=False,
13:34:55                 decode_content=False,
13:34:55                 retries=self.max_retries,
13:34:55                 timeout=timeout,
13:34:55                 chunked=chunked,
13:34:55             )
13:34:55
13:34:55         except (ProtocolError, OSError) as err:
13:34:55             raise ConnectionError(err, request=request)
13:34:55
13:34:55         except MaxRetryError as e:
13:34:55             if isinstance(e.reason, ConnectTimeoutError):
13:34:55                 # TODO: Remove this in 3.0.0: see #2811
13:34:55                 if not isinstance(e.reason, NewConnectionError):
13:34:55                     raise ConnectTimeout(e, request=request)
13:34:55
13:34:55             if isinstance(e.reason, ResponseError):
13:34:55                 raise RetryError(e, request=request)
13:34:55
13:34:55             if isinstance(e.reason, _ProxyError):
13:34:55                 raise ProxyError(e, request=request)
13:34:55
13:34:55             if isinstance(e.reason, _SSLError):
13:34:55                 # This branch is for urllib3 v1.22 and later.
13:34:55                 raise SSLError(e, request=request)
13:34:55
13:34:55 >           raise ConnectionError(e, request=request)
13:34:55 E           requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=8191): Max retries exceeded with url: /rests/data/transportpce-portmapping:network/nodes=ROADMA01/mapping=SRG3-PP1-TXRX (Caused by NewConnectionError("HTTPConnection(host='localhost', port=8191): Failed to establish a new connection: [Errno 111] Connection refused"))
13:34:55
13:34:55 ../.tox/tests121/lib/python3.11/site-packages/requests/adapters.py:677: ConnectionError
13:34:55 ----------------------------- Captured stdout call -----------------------------
13:34:55 execution of test_06_rdm_portmapping_SRG3_PP1_TXRX
13:34:55 __________ TestTransportPCEPortmapping.test_07_xpdr_device_connection __________
13:34:55
13:34:55 self =
13:34:55
13:34:55     def _new_conn(self) -> socket.socket:
13:34:55         """Establish a socket connection and set nodelay settings on it.
13:34:55
13:34:55         :return: New socket connection.
13:34:55         """
13:34:55         try:
13:34:55 >           sock = connection.create_connection(
13:34:55                 (self._dns_host, self.port),
13:34:55                 self.timeout,
13:34:55                 source_address=self.source_address,
13:34:55                 socket_options=self.socket_options,
13:34:55             )
13:34:55
13:34:55 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:204:
13:34:55 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
13:34:55 ../.tox/tests121/lib/python3.11/site-packages/urllib3/util/connection.py:85: in create_connection
13:34:55     raise err
13:34:55 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
13:34:55
13:34:55 address = ('localhost', 8191), timeout = 30, source_address = None
13:34:55 socket_options = [(6, 1, 1)]
13:34:55
13:34:55     def create_connection(
13:34:55         address: tuple[str, int],
13:34:55         timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT,
13:34:55         source_address: tuple[str, int] | None = None,
13:34:55         socket_options: _TYPE_SOCKET_OPTIONS | None = None,
13:34:55     ) -> socket.socket:
13:34:55         """Connect to *address* and return the socket object.
13:34:55
13:34:55         Convenience function. Connect to *address* (a 2-tuple ``(host,
13:34:55         port)``) and return the socket object. Passing the optional
13:34:55         *timeout* parameter will set the timeout on the socket instance
13:34:55         before attempting to connect. If no *timeout* is supplied, the
13:34:55         global default timeout setting returned by :func:`socket.getdefaulttimeout`
13:34:55         is used. If *source_address* is set it must be a tuple of (host, port)
13:34:55         for the socket to bind as a source address before making the connection.
13:34:55         An host of '' or port 0 tells the OS to use the default.
13:34:55         """
13:34:55
13:34:55         host, port = address
13:34:55         if host.startswith("["):
13:34:55             host = host.strip("[]")
13:34:55         err = None
13:34:55
13:34:55         # Using the value from allowed_gai_family() in the context of getaddrinfo lets
13:34:55         # us select whether to work with IPv4 DNS records, IPv6 records, or both.
13:34:55         # The original create_connection function always returns all records.
13:34:55         family = allowed_gai_family()
13:34:55
13:34:55         try:
13:34:55             host.encode("idna")
13:34:55         except UnicodeError:
13:34:55             raise LocationParseError(f"'{host}', label empty or too long") from None
13:34:55
13:34:55         for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM):
13:34:55             af, socktype, proto, canonname, sa = res
13:34:55             sock = None
13:34:55             try:
13:34:55                 sock = socket.socket(af, socktype, proto)
13:34:55
13:34:55                 # If provided, set socket level options before connecting.
13:34:55                 _set_socket_options(sock, socket_options)
13:34:55
13:34:55                 if timeout is not _DEFAULT_TIMEOUT:
13:34:55                     sock.settimeout(timeout)
13:34:55                 if source_address:
13:34:55                     sock.bind(source_address)
13:34:55 >               sock.connect(sa)
13:34:55 E               ConnectionRefusedError: [Errno 111] Connection refused
13:34:55
13:34:55 ../.tox/tests121/lib/python3.11/site-packages/urllib3/util/connection.py:73: ConnectionRefusedError
13:34:55
13:34:55 The above exception was the direct cause of the following exception:
13:34:55
13:34:55 self =
13:34:55 method = 'PUT'
13:34:55 url = '/rests/data/network-topology:network-topology/topology=topology-netconf/node=XPDRA01'
13:34:55 body = '{"node": [{"node-id": "XPDRA01", "netconf-node-topology:netconf-node": {"netconf-node-topology:host": "127.0.0.1", "n...ff-millis": 1800000, "netconf-node-topology:backoff-multiplier": 1.5, "netconf-node-topology:keepalive-delay": 120}}]}'
13:34:55 headers = {'User-Agent': 'python-requests/2.32.5', 'Accept-Encoding': 'gzip, deflate', 'Accept': 'application/json', 'Connection': 'keep-alive', 'Content-Type': 'application/json', 'Content-Length': '709', 'Authorization': 'Basic YWRtaW46YWRtaW4='}
13:34:55 retries = Retry(total=0, connect=None, read=False, redirect=None, status=None)
13:34:55 redirect = False, assert_same_host = False
13:34:55 timeout = Timeout(connect=30, read=30, total=None), pool_timeout = None
13:34:55 release_conn = False, chunked = False, body_pos = None, preload_content = False
13:34:55 decode_content = False, response_kw = {}
13:34:55 parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/rests/data/network-topology:network-topology/topology=topology-netconf/node=XPDRA01', query=None, fragment=None)
13:34:55 destination_scheme = None, conn = None, release_this_conn = True
13:34:55 http_tunnel_required = False, err = None, clean_exit = False
13:34:55
13:34:55     def urlopen(  # type: ignore[override]
13:34:55         self,
13:34:55         method: str,
13:34:55         url: str,
13:34:55         body: _TYPE_BODY | None = None,
13:34:55         headers: typing.Mapping[str, str] | None = None,
13:34:55         retries: Retry | bool | int | None = None,
13:34:55         redirect: bool = True,
13:34:55         assert_same_host: bool = True,
13:34:55         timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT,
13:34:55         pool_timeout: int | None = None,
13:34:55         release_conn: bool | None = None,
13:34:55         chunked: bool = False,
13:34:55         body_pos: _TYPE_BODY_POSITION | None = None,
13:34:55         preload_content: bool = True,
13:34:55         decode_content: bool = True,
13:34:55         **response_kw: typing.Any,
13:34:55     ) -> BaseHTTPResponse:
13:34:55         """
13:34:55         Get a connection from the pool and perform an HTTP request. This is the
13:34:55         lowest level call for making a request, so you'll need to specify all
13:34:55         the raw details.
13:34:55
13:34:55         .. note::
13:34:55
13:34:55             More commonly, it's appropriate to use a convenience method
13:34:55             such as :meth:`request`.
13:34:55
13:34:55         .. note::
13:34:55
13:34:55             `release_conn` will only behave as expected if
13:34:55             `preload_content=False` because we want to make
13:34:55             `preload_content=False` the default behaviour someday soon without
13:34:55             breaking backwards compatibility.
13:34:55
13:34:55         :param method:
13:34:55             HTTP request method (such as GET, POST, PUT, etc.)
13:34:55
13:34:55         :param url:
13:34:55             The URL to perform the request on.
13:34:55
13:34:55         :param body:
13:34:55             Data to send in the request body, either :class:`str`, :class:`bytes`,
13:34:55             an iterable of :class:`str`/:class:`bytes`, or a file-like object.
13:34:55
13:34:55         :param headers:
13:34:55             Dictionary of custom headers to send, such as User-Agent,
13:34:55             If-None-Match, etc. If None, pool headers are used. If provided,
13:34:55             these headers completely replace any pool-specific headers.
13:34:55
13:34:55         :param retries:
13:34:55             Configure the number of retries to allow before raising a
13:34:55             :class:`~urllib3.exceptions.MaxRetryError` exception.
13:34:55
13:34:55             If ``None`` (default) will retry 3 times, see ``Retry.DEFAULT``. Pass a
13:34:55             :class:`~urllib3.util.retry.Retry` object for fine-grained control
13:34:55             over different types of retries.
13:34:55             Pass an integer number to retry connection errors that many times,
13:34:55             but no other types of errors. Pass zero to never retry.
13:34:55
13:34:55             If ``False``, then retries are disabled and any exception is raised
13:34:55             immediately. Also, instead of raising a MaxRetryError on redirects,
13:34:55             the redirect response will be returned.
13:34:55
13:34:55         :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int.
13:34:55
13:34:55         :param redirect:
13:34:55             If True, automatically handle redirects (status codes 301, 302,
13:34:55             303, 307, 308). Each redirect counts as a retry. Disabling retries
13:34:55             will disable redirect, too.
13:34:55
13:34:55         :param assert_same_host:
13:34:55             If ``True``, will make sure that the host of the pool requests is
13:34:55             consistent else will raise HostChangedError. When ``False``, you can
13:34:55             use the pool on an HTTP proxy and request foreign hosts.
13:34:55
13:34:55         :param timeout:
13:34:55             If specified, overrides the default timeout for this one
13:34:55             request. It may be a float (in seconds) or an instance of
13:34:55             :class:`urllib3.util.Timeout`.
13:34:55
13:34:55         :param pool_timeout:
13:34:55             If set and the pool is set to block=True, then this method will
13:34:55             block for ``pool_timeout`` seconds and raise EmptyPoolError if no
13:34:55             connection is available within the time period.
13:34:55
13:34:55         :param bool preload_content:
13:34:55             If True, the response's body will be preloaded into memory.
13:34:55
13:34:55         :param bool decode_content:
13:34:55             If True, will attempt to decode the body based on the
13:34:55             'content-encoding' header.
13:34:55
13:34:55         :param release_conn:
13:34:55             If False, then the urlopen call will not release the connection
13:34:55             back into the pool once a response is received (but will release if
13:34:55             you read the entire contents of the response such as when
13:34:55             `preload_content=True`). This is useful if you're not preloading
13:34:55             the response's content immediately. You will need to call
13:34:55             ``r.release_conn()`` on the response ``r`` to return the connection
13:34:55             back into the pool. If None, it takes the value of ``preload_content``
13:34:55             which defaults to ``True``.
13:34:55
13:34:55         :param bool chunked:
13:34:55             If True, urllib3 will send the body using chunked transfer
13:34:55             encoding. Otherwise, urllib3 will send the body using the standard
13:34:55             content-length form. Defaults to False.
13:34:55
13:34:55         :param int body_pos:
13:34:55             Position to seek to in file-like body in the event of a retry or
13:34:55             redirect. Typically this won't need to be set because urllib3 will
13:34:55             auto-populate the value when needed.
13:34:55         """
13:34:55         parsed_url = parse_url(url)
13:34:55         destination_scheme = parsed_url.scheme
13:34:55
13:34:55         if headers is None:
13:34:55             headers = self.headers
13:34:55
13:34:55         if not isinstance(retries, Retry):
13:34:55             retries = Retry.from_int(retries, redirect=redirect, default=self.retries)
13:34:55
13:34:55         if release_conn is None:
13:34:55             release_conn = preload_content
13:34:55
13:34:55         # Check host
13:34:55         if assert_same_host and not self.is_same_host(url):
13:34:55             raise HostChangedError(self, url, retries)
13:34:55
13:34:55         # Ensure that the URL we're connecting to is properly encoded
13:34:55         if url.startswith("/"):
13:34:55             url = to_str(_encode_target(url))
13:34:55         else:
13:34:55             url = to_str(parsed_url.url)
13:34:55
13:34:55         conn = None
13:34:55
13:34:55         # Track whether `conn` needs to be released before
13:34:55         # returning/raising/recursing. Update this variable if necessary, and
13:34:55         # leave `release_conn` constant throughout the function. That way, if
13:34:55         # the function recurses, the original value of `release_conn` will be
13:34:55         # passed down into the recursive call, and its value will be respected.
13:34:55         #
13:34:55         # See issue #651 [1] for details.
13:34:55         #
13:34:55         # [1]
13:34:55         release_this_conn = release_conn
13:34:55
13:34:55         http_tunnel_required = connection_requires_http_tunnel(
13:34:55             self.proxy, self.proxy_config, destination_scheme
13:34:55         )
13:34:55
13:34:55         # Merge the proxy headers. Only done when not using HTTP CONNECT. We
13:34:55         # have to copy the headers dict so we can safely change it without those
13:34:55         # changes being reflected in anyone else's copy.
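The locals captured above show `retries = Retry(total=0, ...)`: the counter is already exhausted, so the very first refused connection becomes a `MaxRetryError` with no retry at all. A stdlib-only sketch of that countdown semantics (the `SimpleRetry` class and `MaxRetriesExceeded` exception are illustrative stand-ins, not urllib3's API):

```python
class MaxRetriesExceeded(Exception):
    """Raised when the retry budget is spent, wrapping the last error."""


class SimpleRetry:
    """Toy model of urllib3's Retry.increment(): each error yields a new
    object with the counter decremented; an exhausted counter raises."""

    def __init__(self, total: int):
        self.total = total

    def increment(self, error: BaseException) -> "SimpleRetry":
        total = self.total - 1
        if total < 0:  # exhausted: surface the original error as the cause
            raise MaxRetriesExceeded(error)
        return SimpleRetry(total)


retry = SimpleRetry(total=0)  # mirrors Retry(total=0, ...) in this log
try:
    retry.increment(ConnectionRefusedError(111, "Connection refused"))
except MaxRetriesExceeded as exc:
    print("gave up on the first error:", exc)
```

With `total=0` the first `increment()` already fails, which is exactly why each test here dies on its first `PUT`/`GET`.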
13:34:55         if not http_tunnel_required:
13:34:55             headers = headers.copy()  # type: ignore[attr-defined]
13:34:55             headers.update(self.proxy_headers)  # type: ignore[union-attr]
13:34:55
13:34:55         # Must keep the exception bound to a separate variable or else Python 3
13:34:55         # complains about UnboundLocalError.
13:34:55         err = None
13:34:55
13:34:55         # Keep track of whether we cleanly exited the except block. This
13:34:55         # ensures we do proper cleanup in finally.
13:34:55         clean_exit = False
13:34:55
13:34:55         # Rewind body position, if needed. Record current position
13:34:55         # for future rewinds in the event of a redirect/retry.
13:34:55         body_pos = set_file_position(body, body_pos)
13:34:55
13:34:55         try:
13:34:55             # Request a connection from the queue.
13:34:55             timeout_obj = self._get_timeout(timeout)
13:34:55             conn = self._get_conn(timeout=pool_timeout)
13:34:55
13:34:55             conn.timeout = timeout_obj.connect_timeout  # type: ignore[assignment]
13:34:55
13:34:55             # Is this a closed/new connection that requires CONNECT tunnelling?
13:34:55             if self.proxy is not None and http_tunnel_required and conn.is_closed:
13:34:55                 try:
13:34:55                     self._prepare_proxy(conn)
13:34:55                 except (BaseSSLError, OSError, SocketTimeout) as e:
13:34:55                     self._raise_timeout(
13:34:55                         err=e, url=self.proxy.url, timeout_value=conn.timeout
13:34:55                     )
13:34:55                     raise
13:34:55
13:34:55             # If we're going to release the connection in ``finally:``, then
13:34:55             # the response doesn't need to know about the connection. Otherwise
13:34:55             # it will also try to release it and we'll have a double-release
13:34:55             # mess.
13:34:55             response_conn = conn if not release_conn else None
13:34:55
13:34:55             # Make the request on the HTTPConnection object
13:34:55 >           response = self._make_request(
13:34:55                 conn,
13:34:55                 method,
13:34:55                 url,
13:34:55                 timeout=timeout_obj,
13:34:55                 body=body,
13:34:55                 headers=headers,
13:34:55                 chunked=chunked,
13:34:55                 retries=retries,
13:34:55                 response_conn=response_conn,
13:34:55                 preload_content=preload_content,
13:34:55                 decode_content=decode_content,
13:34:55                 **response_kw,
13:34:55             )
13:34:55
13:34:55 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connectionpool.py:787:
13:34:55 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
13:34:55 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connectionpool.py:493: in _make_request
13:34:55     conn.request(
13:34:55 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:500: in request
13:34:55     self.endheaders()
13:34:55 /opt/pyenv/versions/3.11.10/lib/python3.11/http/client.py:1298: in endheaders
13:34:55     self._send_output(message_body, encode_chunked=encode_chunked)
13:34:55 /opt/pyenv/versions/3.11.10/lib/python3.11/http/client.py:1058: in _send_output
13:34:55     self.send(msg)
13:34:55 /opt/pyenv/versions/3.11.10/lib/python3.11/http/client.py:996: in send
13:34:55     self.connect()
13:34:55 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:331: in connect
13:34:55     self.sock = self._new_conn()
13:34:55                 ^^^^^^^^^^^^^^^^
13:34:55 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
13:34:55
13:34:55 self =
13:34:55
13:34:55     def _new_conn(self) -> socket.socket:
13:34:55         """Establish a socket connection and set nodelay settings on it.
13:34:55
13:34:55         :return: New socket connection.
13:34:55         """
13:34:55         try:
13:34:55             sock = connection.create_connection(
13:34:55                 (self._dns_host, self.port),
13:34:55                 self.timeout,
13:34:55                 source_address=self.source_address,
13:34:55                 socket_options=self.socket_options,
13:34:55             )
13:34:55         except socket.gaierror as e:
13:34:55             raise NameResolutionError(self.host, self, e) from e
13:34:55         except SocketTimeout as e:
13:34:55             raise ConnectTimeoutError(
13:34:55                 self,
13:34:55                 f"Connection to {self.host} timed out. (connect timeout={self.timeout})",
13:34:55             ) from e
13:34:55
13:34:55         except OSError as e:
13:34:55 >           raise NewConnectionError(
13:34:55                 self, f"Failed to establish a new connection: {e}"
13:34:55             ) from e
13:34:55 E           urllib3.exceptions.NewConnectionError: HTTPConnection(host='localhost', port=8191): Failed to establish a new connection: [Errno 111] Connection refused
13:34:55
13:34:55 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:219: NewConnectionError
13:34:55
13:34:55 The above exception was the direct cause of the following exception:
13:34:55
13:34:55 self =
13:34:55 request = , stream = False
13:34:55 timeout = Timeout(connect=30, read=30, total=None), verify = True, cert = None
13:34:55 proxies = OrderedDict()
13:34:55
13:34:55     def send(
13:34:55         self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None
13:34:55     ):
13:34:55         """Sends PreparedRequest object. Returns Response object.
13:34:55
13:34:55         :param request: The :class:`PreparedRequest ` being sent.
13:34:55         :param stream: (optional) Whether to stream the request content.
13:34:55         :param timeout: (optional) How long to wait for the server to send
13:34:55             data before giving up, as a float, or a :ref:`(connect timeout,
13:34:55             read timeout) ` tuple.
13:34:55         :type timeout: float or tuple or urllib3 Timeout object
13:34:55         :param verify: (optional) Either a boolean, in which case it controls whether
13:34:55             we verify the server's TLS certificate, or a string, in which case it
13:34:55             must be a path to a CA bundle to use
13:34:55         :param cert: (optional) Any user-provided SSL certificate to be trusted.
13:34:55         :param proxies: (optional) The proxies dictionary to apply to the request.
13:34:55         :rtype: requests.Response
13:34:55         """
13:34:55
13:34:55         try:
13:34:55             conn = self.get_connection_with_tls_context(
13:34:55                 request, verify, proxies=proxies, cert=cert
13:34:55             )
13:34:55         except LocationValueError as e:
13:34:55             raise InvalidURL(e, request=request)
13:34:55
13:34:55         self.cert_verify(conn, request.url, verify, cert)
13:34:55         url = self.request_url(request, proxies)
13:34:55         self.add_headers(
13:34:55             request,
13:34:55             stream=stream,
13:34:55             timeout=timeout,
13:34:55             verify=verify,
13:34:55             cert=cert,
13:34:55             proxies=proxies,
13:34:55         )
13:34:55
13:34:55         chunked = not (request.body is None or "Content-Length" in request.headers)
13:34:55
13:34:55         if isinstance(timeout, tuple):
13:34:55             try:
13:34:55                 connect, read = timeout
13:34:55                 timeout = TimeoutSauce(connect=connect, read=read)
13:34:55             except ValueError:
13:34:55                 raise ValueError(
13:34:55                     f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, "
13:34:55                     f"or a single float to set both timeouts to the same value."
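The timeout handling in the adapter code above accepts either a single number or a `(connect, read)` tuple and normalizes both into one object, which is how the log's `timeout = Timeout(connect=30, read=30, total=None)` came to be. A stdlib-only sketch of the same normalization rule (the `Timeout` namedtuple here is illustrative; requests uses its own `TimeoutSauce` class):

```python
from collections import namedtuple

Timeout = namedtuple("Timeout", ["connect", "read"])


def normalize_timeout(timeout):
    """Mirror the rule above: Timeout passes through, a 2-tuple becomes
    (connect, read), a bare number sets both, anything else is rejected."""
    if isinstance(timeout, Timeout):
        return timeout
    if isinstance(timeout, tuple):  # checked after Timeout: namedtuples are tuples
        try:
            connect, read = timeout
        except ValueError:
            raise ValueError(
                f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, "
                f"or a single float to set both timeouts to the same value."
            )
        return Timeout(connect=connect, read=read)
    return Timeout(connect=timeout, read=timeout)


print(normalize_timeout(30))       # Timeout(connect=30, read=30), as in this log
print(normalize_timeout((5, 25)))  # Timeout(connect=5, read=25)
```

Note the ordering of the `isinstance` checks: since a namedtuple is a tuple, the already-normalized case must be tested first.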
13:34:55                 )
13:34:55         elif isinstance(timeout, TimeoutSauce):
13:34:55             pass
13:34:55         else:
13:34:55             timeout = TimeoutSauce(connect=timeout, read=timeout)
13:34:55
13:34:55         try:
13:34:55 >           resp = conn.urlopen(
13:34:55                 method=request.method,
13:34:55                 url=url,
13:34:55                 body=request.body,
13:34:55                 headers=request.headers,
13:34:55                 redirect=False,
13:34:55                 assert_same_host=False,
13:34:55                 preload_content=False,
13:34:55                 decode_content=False,
13:34:55                 retries=self.max_retries,
13:34:55                 timeout=timeout,
13:34:55                 chunked=chunked,
13:34:55             )
13:34:55
13:34:55 ../.tox/tests121/lib/python3.11/site-packages/requests/adapters.py:644:
13:34:55 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
13:34:55 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connectionpool.py:841: in urlopen
13:34:55     retries = retries.increment(
13:34:55 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
13:34:55
13:34:55 self = Retry(total=0, connect=None, read=False, redirect=None, status=None)
13:34:55 method = 'PUT'
13:34:55 url = '/rests/data/network-topology:network-topology/topology=topology-netconf/node=XPDRA01'
13:34:55 response = None
13:34:55 error = NewConnectionError("HTTPConnection(host='localhost', port=8191): Failed to establish a new connection: [Errno 111] Connection refused")
13:34:55 _pool =
13:34:55 _stacktrace =
13:34:55
13:34:55     def increment(
13:34:55         self,
13:34:55         method: str | None = None,
13:34:55         url: str | None = None,
13:34:55         response: BaseHTTPResponse | None = None,
13:34:55         error: Exception | None = None,
13:34:55         _pool: ConnectionPool | None = None,
13:34:55         _stacktrace: TracebackType | None = None,
13:34:55     ) -> Self:
13:34:55         """Return a new Retry object with incremented retry counters.
13:34:55
13:34:55         :param response: A response object, or None, if the server did not
13:34:55             return a response.
13:34:55         :type response: :class:`~urllib3.response.BaseHTTPResponse`
13:34:55         :param Exception error: An error encountered during the request, or
13:34:55             None if the response was received successfully.
13:34:55
13:34:55         :return: A new ``Retry`` object.
13:34:55         """
13:34:55         if self.total is False and error:
13:34:55             # Disabled, indicate to re-raise the error.
13:34:55             raise reraise(type(error), error, _stacktrace)
13:34:55
13:34:55         total = self.total
13:34:55         if total is not None:
13:34:55             total -= 1
13:34:55
13:34:55         connect = self.connect
13:34:55         read = self.read
13:34:55         redirect = self.redirect
13:34:55         status_count = self.status
13:34:55         other = self.other
13:34:55         cause = "unknown"
13:34:55         status = None
13:34:55         redirect_location = None
13:34:55
13:34:55         if error and self._is_connection_error(error):
13:34:55             # Connect retry?
13:34:55             if connect is False:
13:34:55                 raise reraise(type(error), error, _stacktrace)
13:34:55             elif connect is not None:
13:34:55                 connect -= 1
13:34:55
13:34:55         elif error and self._is_read_error(error):
13:34:55             # Read retry?
13:34:55             if read is False or method is None or not self._is_method_retryable(method):
13:34:55                 raise reraise(type(error), error, _stacktrace)
13:34:55             elif read is not None:
13:34:55                 read -= 1
13:34:55
13:34:55         elif error:
13:34:55             # Other retry?
13:34:55             if other is not None:
13:34:55                 other -= 1
13:34:55
13:34:55         elif response and response.get_redirect_location():
13:34:55             # Redirect retry?
13:34:55             if redirect is not None:
13:34:55                 redirect -= 1
13:34:55             cause = "too many redirects"
13:34:55             response_redirect_location = response.get_redirect_location()
13:34:55             if response_redirect_location:
13:34:55                 redirect_location = response_redirect_location
13:34:55             status = response.status
13:34:55
13:34:55         else:
13:34:55             # Incrementing because of a server error like a 500 in
13:34:55             # status_forcelist and the given method is in the allowed_methods
13:34:55             cause = ResponseError.GENERIC_ERROR
13:34:55             if response and response.status:
13:34:55                 if status_count is not None:
13:34:55                     status_count -= 1
13:34:55                 cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status)
13:34:55                 status = response.status
13:34:55
13:34:55         history = self.history + (
13:34:55             RequestHistory(method, url, error, status, redirect_location),
13:34:55         )
13:34:55
13:34:55         new_retry = self.new(
13:34:55             total=total,
13:34:55             connect=connect,
13:34:55             read=read,
13:34:55             redirect=redirect,
13:34:55             status=status_count,
13:34:55             other=other,
13:34:55             history=history,
13:34:55         )
13:34:55
13:34:55         if new_retry.is_exhausted():
13:34:55             reason = error or ResponseError(cause)
13:34:55 >           raise MaxRetryError(_pool, url, reason) from reason  # type: ignore[arg-type]
13:34:55             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
13:34:55 E           urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=8191): Max retries exceeded with url: /rests/data/network-topology:network-topology/topology=topology-netconf/node=XPDRA01 (Caused by NewConnectionError("HTTPConnection(host='localhost', port=8191): Failed to establish a new connection: [Errno 111] Connection refused"))
13:34:55
13:34:55 ../.tox/tests121/lib/python3.11/site-packages/urllib3/util/retry.py:535: MaxRetryError
13:34:55
13:34:55 During handling of the above exception, another exception occurred:
13:34:55
13:34:55 self =
13:34:55
13:34:55     def test_07_xpdr_device_connection(self):
13:34:55 >       response = test_utils.mount_device("XPDRA01", ('xpdra', self.NODE_VERSION))
13:34:55                    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
13:34:55
13:34:55 transportpce_tests/1.2.1/test01_portmapping.py:100:
13:34:55 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
13:34:55 transportpce_tests/common/test_utils.py:381: in mount_device
13:34:55     response = put_request(url[RESTCONF_VERSION].format('{}', node), body)
13:34:55                ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
13:34:55 transportpce_tests/common/test_utils.py:125: in put_request
13:34:55     return requests.request(
13:34:55 ../.tox/tests121/lib/python3.11/site-packages/requests/api.py:59: in request
13:34:55     return session.request(method=method, url=url, **kwargs)
13:34:55            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
13:34:55 ../.tox/tests121/lib/python3.11/site-packages/requests/sessions.py:589: in request
13:34:55     resp = self.send(prep, **send_kwargs)
13:34:55            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
13:34:55 ../.tox/tests121/lib/python3.11/site-packages/requests/sessions.py:703: in send
13:34:55     r = adapter.send(request, **kwargs)
13:34:55         ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
13:34:55 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
13:34:55
13:34:55 self =
13:34:55 request = , stream = False
13:34:55 timeout = Timeout(connect=30, read=30, total=None), verify = True, cert = None
13:34:55 proxies = OrderedDict()
13:34:55
13:34:55     def send(
13:34:55         self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None
13:34:55     ):
13:34:55         """Sends PreparedRequest object. Returns Response object.
13:34:55
13:34:55         :param request: The :class:`PreparedRequest ` being sent.
13:34:55         :param stream: (optional) Whether to stream the request content.
13:34:55         :param timeout: (optional) How long to wait for the server to send
13:34:55             data before giving up, as a float, or a :ref:`(connect timeout,
13:34:55             read timeout) ` tuple.
13:34:55         :type timeout: float or tuple or urllib3 Timeout object
13:34:55         :param verify: (optional) Either a boolean, in which case it controls whether
13:34:55             we verify the server's TLS certificate, or a string, in which case it
13:34:55             must be a path to a CA bundle to use
13:34:55         :param cert: (optional) Any user-provided SSL certificate to be trusted.
13:34:55         :param proxies: (optional) The proxies dictionary to apply to the request.
13:34:55         :rtype: requests.Response
13:34:55         """
13:34:55
13:34:55         try:
13:34:55             conn = self.get_connection_with_tls_context(
13:34:55                 request, verify, proxies=proxies, cert=cert
13:34:55             )
13:34:55         except LocationValueError as e:
13:34:55             raise InvalidURL(e, request=request)
13:34:55
13:34:55         self.cert_verify(conn, request.url, verify, cert)
13:34:55         url = self.request_url(request, proxies)
13:34:55         self.add_headers(
13:34:55             request,
13:34:55             stream=stream,
13:34:55             timeout=timeout,
13:34:55             verify=verify,
13:34:55             cert=cert,
13:34:55             proxies=proxies,
13:34:55         )
13:34:55
13:34:55         chunked = not (request.body is None or "Content-Length" in request.headers)
13:34:55
13:34:55         if isinstance(timeout, tuple):
13:34:55             try:
13:34:55                 connect, read = timeout
13:34:55                 timeout = TimeoutSauce(connect=connect, read=read)
13:34:55             except ValueError:
13:34:55                 raise ValueError(
13:34:55                     f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, "
13:34:55                     f"or a single float to set both timeouts to the same value."
13:34:55                 )
13:34:55         elif isinstance(timeout, TimeoutSauce):
13:34:55             pass
13:34:55         else:
13:34:55             timeout = TimeoutSauce(connect=timeout, read=timeout)
13:34:55
13:34:55         try:
13:34:55             resp = conn.urlopen(
13:34:55                 method=request.method,
13:34:55                 url=url,
13:34:55                 body=request.body,
13:34:55                 headers=request.headers,
13:34:55                 redirect=False,
13:34:55                 assert_same_host=False,
13:34:55                 preload_content=False,
13:34:55                 decode_content=False,
13:34:55                 retries=self.max_retries,
13:34:55                 timeout=timeout,
13:34:55                 chunked=chunked,
13:34:55             )
13:34:55
13:34:55         except (ProtocolError, OSError) as err:
13:34:55             raise ConnectionError(err, request=request)
13:34:55
13:34:55         except MaxRetryError as e:
13:34:55             if isinstance(e.reason, ConnectTimeoutError):
13:34:55                 # TODO: Remove this in 3.0.0: see #2811
13:34:55                 if not isinstance(e.reason, NewConnectionError):
13:34:55                     raise ConnectTimeout(e, request=request)
13:34:55
13:34:55             if isinstance(e.reason, ResponseError):
13:34:55                 raise RetryError(e, request=request)
13:34:55
13:34:55             if isinstance(e.reason, _ProxyError):
13:34:55                 raise ProxyError(e, request=request)
13:34:55
13:34:55             if isinstance(e.reason, _SSLError):
13:34:55                 # This branch is for urllib3 v1.22 and later.
13:34:55                 raise SSLError(e, request=request)
13:34:55
13:34:55 >       raise ConnectionError(e, request=request)
13:34:55 E       requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=8191): Max retries exceeded with url: /rests/data/network-topology:network-topology/topology=topology-netconf/node=XPDRA01 (Caused by NewConnectionError("HTTPConnection(host='localhost', port=8191): Failed to establish a new connection: [Errno 111] Connection refused"))
13:34:55
13:34:55 ../.tox/tests121/lib/python3.11/site-packages/requests/adapters.py:677: ConnectionError
13:34:55 ----------------------------- Captured stdout call -----------------------------
13:34:55 execution of test_07_xpdr_device_connection
13:34:55 __________ TestTransportPCEPortmapping.test_08_xpdr_device_connected ___________
13:34:55
13:34:55 self =
13:34:55
13:34:55     def _new_conn(self) -> socket.socket:
13:34:55         """Establish a socket connection and set nodelay settings on it.
13:34:55
13:34:55         :return: New socket connection.
13:34:55         """
13:34:55         try:
13:34:55 >           sock = connection.create_connection(
13:34:55                 (self._dns_host, self.port),
13:34:55                 self.timeout,
13:34:55                 source_address=self.source_address,
13:34:55                 socket_options=self.socket_options,
13:34:55             )
13:34:55
13:34:55 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:204:
13:34:55 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
13:34:55 ../.tox/tests121/lib/python3.11/site-packages/urllib3/util/connection.py:85: in create_connection
13:34:55     raise err
13:34:55 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
13:34:55
13:34:55 address = ('localhost', 8191), timeout = 30, source_address = None
13:34:55 socket_options = [(6, 1, 1)]
13:34:55
13:34:55     def create_connection(
13:34:55         address: tuple[str, int],
13:34:55         timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT,
13:34:55         source_address: tuple[str, int] | None = None,
13:34:55         socket_options: _TYPE_SOCKET_OPTIONS | None = None,
13:34:55     ) -> socket.socket:
13:34:55         """Connect to *address* and return the socket object.
13:34:55
13:34:55         Convenience function. Connect to *address* (a 2-tuple ``(host,
13:34:55         port)``) and return the socket object. Passing the optional
13:34:55         *timeout* parameter will set the timeout on the socket instance
13:34:55         before attempting to connect. If no *timeout* is supplied, the
13:34:55         global default timeout setting returned by :func:`socket.getdefaulttimeout`
13:34:55         is used. If *source_address* is set it must be a tuple of (host, port)
13:34:55         for the socket to bind as a source address before making the connection.
13:34:55         An host of '' or port 0 tells the OS to use the default.
13:34:55         """
13:34:55
13:34:55         host, port = address
13:34:55         if host.startswith("["):
13:34:55             host = host.strip("[]")
13:34:55         err = None
13:34:55
13:34:55         # Using the value from allowed_gai_family() in the context of getaddrinfo lets
13:34:55         # us select whether to work with IPv4 DNS records, IPv6 records, or both.
13:34:55         # The original create_connection function always returns all records.
13:34:55         family = allowed_gai_family()
13:34:55
13:34:55         try:
13:34:55             host.encode("idna")
13:34:55         except UnicodeError:
13:34:55             raise LocationParseError(f"'{host}', label empty or too long") from None
13:34:55
13:34:55         for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM):
13:34:55             af, socktype, proto, canonname, sa = res
13:34:55             sock = None
13:34:55             try:
13:34:55                 sock = socket.socket(af, socktype, proto)
13:34:55
13:34:55                 # If provided, set socket level options before connecting.
13:34:55                 _set_socket_options(sock, socket_options)
13:34:55
13:34:55                 if timeout is not _DEFAULT_TIMEOUT:
13:34:55                     sock.settimeout(timeout)
13:34:55                 if source_address:
13:34:55                     sock.bind(source_address)
13:34:55 >               sock.connect(sa)
13:34:55 E               ConnectionRefusedError: [Errno 111] Connection refused
13:34:55
13:34:55 ../.tox/tests121/lib/python3.11/site-packages/urllib3/util/connection.py:73: ConnectionRefusedError
13:34:55
13:34:55 The above exception was the direct cause of the following exception:
13:34:55
13:34:55 self =
13:34:55 method = 'GET'
13:34:55 url = '/rests/data/network-topology:network-topology/topology=topology-netconf/node=XPDRA01?content=nonconfig'
13:34:55 body = None
13:34:55 headers = {'User-Agent': 'python-requests/2.32.5', 'Accept-Encoding': 'gzip, deflate', 'Accept': 'application/json', 'Connection': 'keep-alive', 'Content-Type': 'application/json', 'Authorization': 'Basic YWRtaW46YWRtaW4='}
13:34:55 retries = Retry(total=0, connect=None, read=False, redirect=None, status=None)
13:34:55 redirect = False, assert_same_host = False
13:34:55 timeout = Timeout(connect=30, read=30, total=None), pool_timeout = None
13:34:55 release_conn = False, chunked = False, body_pos = None, preload_content = False
13:34:55 decode_content = False, response_kw = {}
13:34:55 parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/rests/data/network-topology:network-topology/topology=topology-netconf/node=XPDRA01', query='content=nonconfig', fragment=None)
13:34:55 destination_scheme = None, conn = None, release_this_conn = True
13:34:55 http_tunnel_required = False, err = None, clean_exit = False
13:34:55
13:34:55     def urlopen(  # type: ignore[override]
13:34:55         self,
13:34:55         method: str,
13:34:55         url: str,
13:34:55         body: _TYPE_BODY | None = None,
13:34:55         headers: typing.Mapping[str, str] | None = None,
13:34:55         retries: Retry | bool | int | None = None,
13:34:55         redirect: bool = True,
13:34:55         assert_same_host: bool = True,
13:34:55         timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT,
13:34:55         pool_timeout: int | None = None,
13:34:55         release_conn: bool | None = None,
13:34:55         chunked: bool = False,
13:34:55         body_pos: _TYPE_BODY_POSITION | None = None,
13:34:55         preload_content: bool = True,
13:34:55         decode_content: bool = True,
13:34:55         **response_kw: typing.Any,
13:34:55     ) -> BaseHTTPResponse:
13:34:55         """
13:34:55         Get a connection from the pool and perform an HTTP request. This is the
13:34:55         lowest level call for making a request, so you'll need to specify all
13:34:55         the raw details.
13:34:55
13:34:55         .. note::
13:34:55
13:34:55             More commonly, it's appropriate to use a convenience method
13:34:55             such as :meth:`request`.
13:34:55
13:34:55         .. note::
13:34:55
13:34:55             `release_conn` will only behave as expected if
13:34:55             `preload_content=False` because we want to make
13:34:55             `preload_content=False` the default behaviour someday soon without
13:34:55             breaking backwards compatibility.
13:34:55
13:34:55         :param method:
13:34:55             HTTP request method (such as GET, POST, PUT, etc.)
13:34:55 
13:34:55         :param url:
13:34:55             The URL to perform the request on.
13:34:55 
13:34:55         :param body:
13:34:55             Data to send in the request body, either :class:`str`, :class:`bytes`,
13:34:55             an iterable of :class:`str`/:class:`bytes`, or a file-like object.
13:34:55 
13:34:55         :param headers:
13:34:55             Dictionary of custom headers to send, such as User-Agent,
13:34:55             If-None-Match, etc. If None, pool headers are used. If provided,
13:34:55             these headers completely replace any pool-specific headers.
13:34:55 
13:34:55         :param retries:
13:34:55             Configure the number of retries to allow before raising a
13:34:55             :class:`~urllib3.exceptions.MaxRetryError` exception.
13:34:55 
13:34:55             If ``None`` (default) will retry 3 times, see ``Retry.DEFAULT``. Pass a
13:34:55             :class:`~urllib3.util.retry.Retry` object for fine-grained control
13:34:55             over different types of retries.
13:34:55             Pass an integer number to retry connection errors that many times,
13:34:55             but no other types of errors. Pass zero to never retry.
13:34:55 
13:34:55             If ``False``, then retries are disabled and any exception is raised
13:34:55             immediately. Also, instead of raising a MaxRetryError on redirects,
13:34:55             the redirect response will be returned.
13:34:55 
13:34:55         :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int.
13:34:55 
13:34:55         :param redirect:
13:34:55             If True, automatically handle redirects (status codes 301, 302,
13:34:55             303, 307, 308). Each redirect counts as a retry. Disabling retries
13:34:55             will disable redirect, too.
13:34:55 
13:34:55         :param assert_same_host:
13:34:55             If ``True``, will make sure that the host of the pool requests is
13:34:55             consistent else will raise HostChangedError. When ``False``, you can
13:34:55             use the pool on an HTTP proxy and request foreign hosts.
13:34:55 
13:34:55         :param timeout:
13:34:55             If specified, overrides the default timeout for this one
13:34:55             request. It may be a float (in seconds) or an instance of
13:34:55             :class:`urllib3.util.Timeout`.
13:34:55 
13:34:55         :param pool_timeout:
13:34:55             If set and the pool is set to block=True, then this method will
13:34:55             block for ``pool_timeout`` seconds and raise EmptyPoolError if no
13:34:55             connection is available within the time period.
13:34:55 
13:34:55         :param bool preload_content:
13:34:55             If True, the response's body will be preloaded into memory.
13:34:55 
13:34:55         :param bool decode_content:
13:34:55             If True, will attempt to decode the body based on the
13:34:55             'content-encoding' header.
13:34:55 
13:34:55         :param release_conn:
13:34:55             If False, then the urlopen call will not release the connection
13:34:55             back into the pool once a response is received (but will release if
13:34:55             you read the entire contents of the response such as when
13:34:55             `preload_content=True`). This is useful if you're not preloading
13:34:55             the response's content immediately. You will need to call
13:34:55             ``r.release_conn()`` on the response ``r`` to return the connection
13:34:55             back into the pool. If None, it takes the value of ``preload_content``
13:34:55             which defaults to ``True``.
13:34:55 
13:34:55         :param bool chunked:
13:34:55             If True, urllib3 will send the body using chunked transfer
13:34:55             encoding. Otherwise, urllib3 will send the body using the standard
13:34:55             content-length form. Defaults to False.
13:34:55 
13:34:55         :param int body_pos:
13:34:55             Position to seek to in file-like body in the event of a retry or
13:34:55             redirect. Typically this won't need to be set because urllib3 will
13:34:55             auto-populate the value when needed.
13:34:55         """
13:34:55         parsed_url = parse_url(url)
13:34:55         destination_scheme = parsed_url.scheme
13:34:55 
13:34:55         if headers is None:
13:34:55             headers = self.headers
13:34:55 
13:34:55         if not isinstance(retries, Retry):
13:34:55             retries = Retry.from_int(retries, redirect=redirect, default=self.retries)
13:34:55 
13:34:55         if release_conn is None:
13:34:55             release_conn = preload_content
13:34:55 
13:34:55         # Check host
13:34:55         if assert_same_host and not self.is_same_host(url):
13:34:55             raise HostChangedError(self, url, retries)
13:34:55 
13:34:55         # Ensure that the URL we're connecting to is properly encoded
13:34:55         if url.startswith("/"):
13:34:55             url = to_str(_encode_target(url))
13:34:55         else:
13:34:55             url = to_str(parsed_url.url)
13:34:55 
13:34:55         conn = None
13:34:55 
13:34:55         # Track whether `conn` needs to be released before
13:34:55         # returning/raising/recursing. Update this variable if necessary, and
13:34:55         # leave `release_conn` constant throughout the function. That way, if
13:34:55         # the function recurses, the original value of `release_conn` will be
13:34:55         # passed down into the recursive call, and its value will be respected.
13:34:55         #
13:34:55         # See issue #651 [1] for details.
13:34:55         #
13:34:55         # [1] 
13:34:55         release_this_conn = release_conn
13:34:55 
13:34:55         http_tunnel_required = connection_requires_http_tunnel(
13:34:55             self.proxy, self.proxy_config, destination_scheme
13:34:55         )
13:34:55 
13:34:55         # Merge the proxy headers. Only done when not using HTTP CONNECT. We
13:34:55         # have to copy the headers dict so we can safely change it without those
13:34:55         # changes being reflected in anyone else's copy.
13:34:55         if not http_tunnel_required:
13:34:55             headers = headers.copy()  # type: ignore[attr-defined]
13:34:55             headers.update(self.proxy_headers)  # type: ignore[union-attr]
13:34:55 
13:34:55         # Must keep the exception bound to a separate variable or else Python 3
13:34:55         # complains about UnboundLocalError.
13:34:55         err = None
13:34:55 
13:34:55         # Keep track of whether we cleanly exited the except block. This
13:34:55         # ensures we do proper cleanup in finally.
13:34:55         clean_exit = False
13:34:55 
13:34:55         # Rewind body position, if needed. Record current position
13:34:55         # for future rewinds in the event of a redirect/retry.
13:34:55         body_pos = set_file_position(body, body_pos)
13:34:55 
13:34:55         try:
13:34:55             # Request a connection from the queue.
13:34:55             timeout_obj = self._get_timeout(timeout)
13:34:55             conn = self._get_conn(timeout=pool_timeout)
13:34:55 
13:34:55             conn.timeout = timeout_obj.connect_timeout  # type: ignore[assignment]
13:34:55 
13:34:55             # Is this a closed/new connection that requires CONNECT tunnelling?
13:34:55             if self.proxy is not None and http_tunnel_required and conn.is_closed:
13:34:55                 try:
13:34:55                     self._prepare_proxy(conn)
13:34:55                 except (BaseSSLError, OSError, SocketTimeout) as e:
13:34:55                     self._raise_timeout(
13:34:55                         err=e, url=self.proxy.url, timeout_value=conn.timeout
13:34:55                     )
13:34:55                     raise
13:34:55 
13:34:55             # If we're going to release the connection in ``finally:``, then
13:34:55             # the response doesn't need to know about the connection. Otherwise
13:34:55             # it will also try to release it and we'll have a double-release
13:34:55             # mess.
13:34:55             response_conn = conn if not release_conn else None
13:34:55 
13:34:55             # Make the request on the HTTPConnection object
13:34:55 >           response = self._make_request(
13:34:55                 conn,
13:34:55                 method,
13:34:55                 url,
13:34:55                 timeout=timeout_obj,
13:34:55                 body=body,
13:34:55                 headers=headers,
13:34:55                 chunked=chunked,
13:34:55                 retries=retries,
13:34:55                 response_conn=response_conn,
13:34:55                 preload_content=preload_content,
13:34:55                 decode_content=decode_content,
13:34:55                 **response_kw,
13:34:55             )
13:34:55 
13:34:55 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connectionpool.py:787: 
13:34:55 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
13:34:55 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connectionpool.py:493: in _make_request
13:34:55     conn.request(
13:34:55 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:500: in request
13:34:55     self.endheaders()
13:34:55 /opt/pyenv/versions/3.11.10/lib/python3.11/http/client.py:1298: in endheaders
13:34:55     self._send_output(message_body, encode_chunked=encode_chunked)
13:34:55 /opt/pyenv/versions/3.11.10/lib/python3.11/http/client.py:1058: in _send_output
13:34:55     self.send(msg)
13:34:55 /opt/pyenv/versions/3.11.10/lib/python3.11/http/client.py:996: in send
13:34:55     self.connect()
13:34:55 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:331: in connect
13:34:55     self.sock = self._new_conn()
13:34:55                 ^^^^^^^^^^^^^^^^
13:34:55 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
13:34:55 
13:34:55 self = 
13:34:55 
13:34:55     def _new_conn(self) -> socket.socket:
13:34:55         """Establish a socket connection and set nodelay settings on it.
13:34:55 
13:34:55         :return: New socket connection.
13:34:55         """
13:34:55         try:
13:34:55             sock = connection.create_connection(
13:34:55                 (self._dns_host, self.port),
13:34:55                 self.timeout,
13:34:55                 source_address=self.source_address,
13:34:55                 socket_options=self.socket_options,
13:34:55             )
13:34:55         except socket.gaierror as e:
13:34:55             raise NameResolutionError(self.host, self, e) from e
13:34:55         except SocketTimeout as e:
13:34:55             raise ConnectTimeoutError(
13:34:55                 self,
13:34:55                 f"Connection to {self.host} timed out. (connect timeout={self.timeout})",
13:34:55             ) from e
13:34:55 
13:34:55         except OSError as e:
13:34:55 >           raise NewConnectionError(
13:34:55                 self, f"Failed to establish a new connection: {e}"
13:34:55             ) from e
13:34:55 E           urllib3.exceptions.NewConnectionError: HTTPConnection(host='localhost', port=8191): Failed to establish a new connection: [Errno 111] Connection refused
13:34:55 
13:34:55 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:219: NewConnectionError
13:34:55 
13:34:55 The above exception was the direct cause of the following exception:
13:34:55 
13:34:55 self = 
13:34:55 request = , stream = False
13:34:55 timeout = Timeout(connect=30, read=30, total=None), verify = True, cert = None
13:34:55 proxies = OrderedDict()
13:34:55 
13:34:55     def send(
13:34:55         self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None
13:34:55     ):
13:34:55         """Sends PreparedRequest object. Returns Response object.
13:34:55 
13:34:55         :param request: The :class:`PreparedRequest ` being sent.
13:34:55         :param stream: (optional) Whether to stream the request content.
13:34:55         :param timeout: (optional) How long to wait for the server to send
13:34:55             data before giving up, as a float, or a :ref:`(connect timeout,
13:34:55             read timeout) ` tuple.
13:34:55         :type timeout: float or tuple or urllib3 Timeout object
13:34:55         :param verify: (optional) Either a boolean, in which case it controls whether
13:34:55             we verify the server's TLS certificate, or a string, in which case it
13:34:55             must be a path to a CA bundle to use
13:34:55         :param cert: (optional) Any user-provided SSL certificate to be trusted.
13:34:55         :param proxies: (optional) The proxies dictionary to apply to the request.
13:34:55         :rtype: requests.Response
13:34:55         """
13:34:55 
13:34:55         try:
13:34:55             conn = self.get_connection_with_tls_context(
13:34:55                 request, verify, proxies=proxies, cert=cert
13:34:55             )
13:34:55         except LocationValueError as e:
13:34:55             raise InvalidURL(e, request=request)
13:34:55 
13:34:55         self.cert_verify(conn, request.url, verify, cert)
13:34:55         url = self.request_url(request, proxies)
13:34:55         self.add_headers(
13:34:55             request,
13:34:55             stream=stream,
13:34:55             timeout=timeout,
13:34:55             verify=verify,
13:34:55             cert=cert,
13:34:55             proxies=proxies,
13:34:55         )
13:34:55 
13:34:55         chunked = not (request.body is None or "Content-Length" in request.headers)
13:34:55 
13:34:55         if isinstance(timeout, tuple):
13:34:55             try:
13:34:55                 connect, read = timeout
13:34:55                 timeout = TimeoutSauce(connect=connect, read=read)
13:34:55             except ValueError:
13:34:55                 raise ValueError(
13:34:55                     f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, "
13:34:55                     f"or a single float to set both timeouts to the same value."
13:34:55                 )
13:34:55         elif isinstance(timeout, TimeoutSauce):
13:34:55             pass
13:34:55         else:
13:34:55             timeout = TimeoutSauce(connect=timeout, read=timeout)
13:34:55 
13:34:55         try:
13:34:55 >           resp = conn.urlopen(
13:34:55                 method=request.method,
13:34:55                 url=url,
13:34:55                 body=request.body,
13:34:55                 headers=request.headers,
13:34:55                 redirect=False,
13:34:55                 assert_same_host=False,
13:34:55                 preload_content=False,
13:34:55                 decode_content=False,
13:34:55                 retries=self.max_retries,
13:34:55                 timeout=timeout,
13:34:55                 chunked=chunked,
13:34:55             )
13:34:55 
13:34:55 ../.tox/tests121/lib/python3.11/site-packages/requests/adapters.py:644: 
13:34:55 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
13:34:55 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connectionpool.py:841: in urlopen
13:34:55     retries = retries.increment(
13:34:55 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
13:34:55 
13:34:55 self = Retry(total=0, connect=None, read=False, redirect=None, status=None)
13:34:55 method = 'GET'
13:34:55 url = '/rests/data/network-topology:network-topology/topology=topology-netconf/node=XPDRA01?content=nonconfig'
13:34:55 response = None
13:34:55 error = NewConnectionError("HTTPConnection(host='localhost', port=8191): Failed to establish a new connection: [Errno 111] Connection refused")
13:34:55 _pool = 
13:34:55 _stacktrace = 
13:34:55 
13:34:55     def increment(
13:34:55         self,
13:34:55         method: str | None = None,
13:34:55         url: str | None = None,
13:34:55         response: BaseHTTPResponse | None = None,
13:34:55         error: Exception | None = None,
13:34:55         _pool: ConnectionPool | None = None,
13:34:55         _stacktrace: TracebackType | None = None,
13:34:55     ) -> Self:
13:34:55         """Return a new Retry object with incremented retry counters.
13:34:55 
13:34:55         :param response: A response object, or None, if the server did not
13:34:55             return a response.
13:34:55         :type response: :class:`~urllib3.response.BaseHTTPResponse`
13:34:55         :param Exception error: An error encountered during the request, or
13:34:55             None if the response was received successfully.
13:34:55 
13:34:55         :return: A new ``Retry`` object.
13:34:55         """
13:34:55         if self.total is False and error:
13:34:55             # Disabled, indicate to re-raise the error.
13:34:55             raise reraise(type(error), error, _stacktrace)
13:34:55 
13:34:55         total = self.total
13:34:55         if total is not None:
13:34:55             total -= 1
13:34:55 
13:34:55         connect = self.connect
13:34:55         read = self.read
13:34:55         redirect = self.redirect
13:34:55         status_count = self.status
13:34:55         other = self.other
13:34:55         cause = "unknown"
13:34:55         status = None
13:34:55         redirect_location = None
13:34:55 
13:34:55         if error and self._is_connection_error(error):
13:34:55             # Connect retry?
13:34:55             if connect is False:
13:34:55                 raise reraise(type(error), error, _stacktrace)
13:34:55             elif connect is not None:
13:34:55                 connect -= 1
13:34:55 
13:34:55         elif error and self._is_read_error(error):
13:34:55             # Read retry?
13:34:55             if read is False or method is None or not self._is_method_retryable(method):
13:34:55                 raise reraise(type(error), error, _stacktrace)
13:34:55             elif read is not None:
13:34:55                 read -= 1
13:34:55 
13:34:55         elif error:
13:34:55             # Other retry?
13:34:55             if other is not None:
13:34:55                 other -= 1
13:34:55 
13:34:55         elif response and response.get_redirect_location():
13:34:55             # Redirect retry?
13:34:55             if redirect is not None:
13:34:55                 redirect -= 1
13:34:55             cause = "too many redirects"
13:34:55             response_redirect_location = response.get_redirect_location()
13:34:55             if response_redirect_location:
13:34:55                 redirect_location = response_redirect_location
13:34:55             status = response.status
13:34:55 
13:34:55         else:
13:34:55             # Incrementing because of a server error like a 500 in
13:34:55             # status_forcelist and the given method is in the allowed_methods
13:34:55             cause = ResponseError.GENERIC_ERROR
13:34:55             if response and response.status:
13:34:55                 if status_count is not None:
13:34:55                     status_count -= 1
13:34:55                 cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status)
13:34:55                 status = response.status
13:34:55 
13:34:55         history = self.history + (
13:34:55             RequestHistory(method, url, error, status, redirect_location),
13:34:55         )
13:34:55 
13:34:55         new_retry = self.new(
13:34:55             total=total,
13:34:55             connect=connect,
13:34:55             read=read,
13:34:55             redirect=redirect,
13:34:55             status=status_count,
13:34:55             other=other,
13:34:55             history=history,
13:34:55         )
13:34:55 
13:34:55         if new_retry.is_exhausted():
13:34:55             reason = error or ResponseError(cause)
13:34:55 >           raise MaxRetryError(_pool, url, reason) from reason  # type: ignore[arg-type]
13:34:55             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
13:34:55 E           urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=8191): Max retries exceeded with url: /rests/data/network-topology:network-topology/topology=topology-netconf/node=XPDRA01?content=nonconfig (Caused by NewConnectionError("HTTPConnection(host='localhost', port=8191): Failed to establish a new connection: [Errno 111] Connection refused"))
13:34:55 
13:34:55 ../.tox/tests121/lib/python3.11/site-packages/urllib3/util/retry.py:535: MaxRetryError
13:34:55 
13:34:55 During handling of the above exception, another exception occurred:
13:34:55 
13:34:55 self = 
13:34:55 
13:34:55     def test_08_xpdr_device_connected(self):
13:34:55 >       response = test_utils.check_device_connection("XPDRA01")
13:34:55                    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
13:34:55 
13:34:55 transportpce_tests/1.2.1/test01_portmapping.py:104: 
13:34:55 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
13:34:55 transportpce_tests/common/test_utils.py:409: in check_device_connection
13:34:55     response = get_request(url[RESTCONF_VERSION].format('{}', node))
13:34:55                ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
13:34:55 transportpce_tests/common/test_utils.py:117: in get_request
13:34:55     return requests.request(
13:34:55 ../.tox/tests121/lib/python3.11/site-packages/requests/api.py:59: in request
13:34:55     return session.request(method=method, url=url, **kwargs)
13:34:55            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
13:34:55 ../.tox/tests121/lib/python3.11/site-packages/requests/sessions.py:589: in request
13:34:55     resp = self.send(prep, **send_kwargs)
13:34:55            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
13:34:55 ../.tox/tests121/lib/python3.11/site-packages/requests/sessions.py:703: in send
13:34:55     r = adapter.send(request, **kwargs)
13:34:55         ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
13:34:55 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
13:34:55 
13:34:55 self = 
13:34:55 request = , stream = False
13:34:55 timeout = Timeout(connect=30, read=30, total=None), verify = True, cert = None
13:34:55 proxies = OrderedDict()
13:34:55 
13:34:55     def send(
13:34:55         self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None
13:34:55     ):
13:34:55         """Sends PreparedRequest object. Returns Response object.
13:34:55 
13:34:55         :param request: The :class:`PreparedRequest ` being sent.
13:34:55         :param stream: (optional) Whether to stream the request content.
13:34:55         :param timeout: (optional) How long to wait for the server to send
13:34:55             data before giving up, as a float, or a :ref:`(connect timeout,
13:34:55             read timeout) ` tuple.
13:34:55         :type timeout: float or tuple or urllib3 Timeout object
13:34:55         :param verify: (optional) Either a boolean, in which case it controls whether
13:34:55             we verify the server's TLS certificate, or a string, in which case it
13:34:55             must be a path to a CA bundle to use
13:34:55         :param cert: (optional) Any user-provided SSL certificate to be trusted.
13:34:55         :param proxies: (optional) The proxies dictionary to apply to the request.
13:34:55         :rtype: requests.Response
13:34:55         """
13:34:55 
13:34:55         try:
13:34:55             conn = self.get_connection_with_tls_context(
13:34:55                 request, verify, proxies=proxies, cert=cert
13:34:55             )
13:34:55         except LocationValueError as e:
13:34:55             raise InvalidURL(e, request=request)
13:34:55 
13:34:55         self.cert_verify(conn, request.url, verify, cert)
13:34:55         url = self.request_url(request, proxies)
13:34:55         self.add_headers(
13:34:55             request,
13:34:55             stream=stream,
13:34:55             timeout=timeout,
13:34:55             verify=verify,
13:34:55             cert=cert,
13:34:55             proxies=proxies,
13:34:55         )
13:34:55 
13:34:55         chunked = not (request.body is None or "Content-Length" in request.headers)
13:34:55 
13:34:55         if isinstance(timeout, tuple):
13:34:55             try:
13:34:55                 connect, read = timeout
13:34:55                 timeout = TimeoutSauce(connect=connect, read=read)
13:34:55             except ValueError:
13:34:55                 raise ValueError(
13:34:55                     f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, "
13:34:55                     f"or a single float to set both timeouts to the same value."
13:34:55                 )
13:34:55         elif isinstance(timeout, TimeoutSauce):
13:34:55             pass
13:34:55         else:
13:34:55             timeout = TimeoutSauce(connect=timeout, read=timeout)
13:34:55 
13:34:55         try:
13:34:55             resp = conn.urlopen(
13:34:55                 method=request.method,
13:34:55                 url=url,
13:34:55                 body=request.body,
13:34:55                 headers=request.headers,
13:34:55                 redirect=False,
13:34:55                 assert_same_host=False,
13:34:55                 preload_content=False,
13:34:55                 decode_content=False,
13:34:55                 retries=self.max_retries,
13:34:55                 timeout=timeout,
13:34:55                 chunked=chunked,
13:34:55             )
13:34:55 
13:34:55         except (ProtocolError, OSError) as err:
13:34:55             raise ConnectionError(err, request=request)
13:34:55 
13:34:55         except MaxRetryError as e:
13:34:55             if isinstance(e.reason, ConnectTimeoutError):
13:34:55                 # TODO: Remove this in 3.0.0: see #2811
13:34:55                 if not isinstance(e.reason, NewConnectionError):
13:34:55                     raise ConnectTimeout(e, request=request)
13:34:55 
13:34:55             if isinstance(e.reason, ResponseError):
13:34:55                 raise RetryError(e, request=request)
13:34:55 
13:34:55             if isinstance(e.reason, _ProxyError):
13:34:55                 raise ProxyError(e, request=request)
13:34:55 
13:34:55             if isinstance(e.reason, _SSLError):
13:34:55                 # This branch is for urllib3 v1.22 and later.
13:34:55                 raise SSLError(e, request=request)
13:34:55 
13:34:55 >           raise ConnectionError(e, request=request)
13:34:55 E           requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=8191): Max retries exceeded with url: /rests/data/network-topology:network-topology/topology=topology-netconf/node=XPDRA01?content=nonconfig (Caused by NewConnectionError("HTTPConnection(host='localhost', port=8191): Failed to establish a new connection: [Errno 111] Connection refused"))
13:34:55 
13:34:55 ../.tox/tests121/lib/python3.11/site-packages/requests/adapters.py:677: ConnectionError
13:34:55 ----------------------------- Captured stdout call -----------------------------
13:34:55 execution of test_08_xpdr_device_connected
13:34:55 __________ TestTransportPCEPortmapping.test_09_xpdr_portmapping_info ___________
13:34:55 
13:34:55 self = 
13:34:55 
13:34:55     def _new_conn(self) -> socket.socket:
13:34:55         """Establish a socket connection and set nodelay settings on it.
13:34:55 
13:34:55         :return: New socket connection.
13:34:55         """
13:34:55         try:
13:34:55 >           sock = connection.create_connection(
13:34:55                 (self._dns_host, self.port),
13:34:55                 self.timeout,
13:34:55                 source_address=self.source_address,
13:34:55                 socket_options=self.socket_options,
13:34:55             )
13:34:55 
13:34:55 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:204: 
13:34:55 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
13:34:55 ../.tox/tests121/lib/python3.11/site-packages/urllib3/util/connection.py:85: in create_connection
13:34:55     raise err
13:34:55 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
13:34:55 
13:34:55 address = ('localhost', 8191), timeout = 30, source_address = None
13:34:55 socket_options = [(6, 1, 1)]
13:34:55 
13:34:55     def create_connection(
13:34:55         address: tuple[str, int],
13:34:55         timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT,
13:34:55         source_address: tuple[str, int] | None = None,
13:34:55         socket_options: _TYPE_SOCKET_OPTIONS | None = None,
13:34:55     ) -> socket.socket:
13:34:55         """Connect to *address* and return the socket object.
13:34:55 
13:34:55         Convenience function. Connect to *address* (a 2-tuple ``(host,
13:34:55         port)``) and return the socket object. Passing the optional
13:34:55         *timeout* parameter will set the timeout on the socket instance
13:34:55         before attempting to connect. If no *timeout* is supplied, the
13:34:55         global default timeout setting returned by :func:`socket.getdefaulttimeout`
13:34:55         is used. If *source_address* is set it must be a tuple of (host, port)
13:34:55         for the socket to bind as a source address before making the connection.
13:34:55         An host of '' or port 0 tells the OS to use the default.
13:34:55         """
13:34:55 
13:34:55         host, port = address
13:34:55         if host.startswith("["):
13:34:55             host = host.strip("[]")
13:34:55         err = None
13:34:55 
13:34:55         # Using the value from allowed_gai_family() in the context of getaddrinfo lets
13:34:55         # us select whether to work with IPv4 DNS records, IPv6 records, or both.
13:34:55         # The original create_connection function always returns all records.
13:34:55         family = allowed_gai_family()
13:34:55 
13:34:55         try:
13:34:55             host.encode("idna")
13:34:55         except UnicodeError:
13:34:55             raise LocationParseError(f"'{host}', label empty or too long") from None
13:34:55 
13:34:55         for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM):
13:34:55             af, socktype, proto, canonname, sa = res
13:34:55             sock = None
13:34:55             try:
13:34:55                 sock = socket.socket(af, socktype, proto)
13:34:55 
13:34:55                 # If provided, set socket level options before connecting.
13:34:55                 _set_socket_options(sock, socket_options)
13:34:55 
13:34:55                 if timeout is not _DEFAULT_TIMEOUT:
13:34:55                     sock.settimeout(timeout)
13:34:55                 if source_address:
13:34:55                     sock.bind(source_address)
13:34:55 >               sock.connect(sa)
13:34:55 E               ConnectionRefusedError: [Errno 111] Connection refused
13:34:55 
13:34:55 ../.tox/tests121/lib/python3.11/site-packages/urllib3/util/connection.py:73: ConnectionRefusedError
13:34:55 
13:34:55 The above exception was the direct cause of the following exception:
13:34:55 
13:34:55 self = 
13:34:55 method = 'GET'
13:34:55 url = '/rests/data/transportpce-portmapping:network/nodes=XPDRA01/node-info'
13:34:55 body = None
13:34:55 headers = {'User-Agent': 'python-requests/2.32.5', 'Accept-Encoding': 'gzip, deflate', 'Accept': 'application/json', 'Connection': 'keep-alive', 'Content-Type': 'application/json', 'Authorization': 'Basic YWRtaW46YWRtaW4='}
13:34:55 retries = Retry(total=0, connect=None, read=False, redirect=None, status=None)
13:34:55 redirect = False, assert_same_host = False
13:34:55 timeout = Timeout(connect=30, read=30, total=None), pool_timeout = None
13:34:55 release_conn = False, chunked = False, body_pos = None, preload_content = False
13:34:55 decode_content = False, response_kw = {}
13:34:55 parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/rests/data/transportpce-portmapping:network/nodes=XPDRA01/node-info', query=None, fragment=None)
13:34:55 destination_scheme = None, conn = None, release_this_conn = True
13:34:55 http_tunnel_required = False, err = None, clean_exit = False
13:34:55 
13:34:55     def urlopen(  # type: ignore[override]
13:34:55         self,
13:34:55         method: str,
13:34:55         url: str,
13:34:55         body: _TYPE_BODY | None = None,
13:34:55         headers: typing.Mapping[str, str] | None = None,
13:34:55         retries: Retry | bool | int | None = None,
13:34:55         redirect: bool = True,
13:34:55         assert_same_host: bool = True,
13:34:55         timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT,
13:34:55         pool_timeout: int | None = None,
13:34:55         release_conn: bool | None = None,
13:34:55         chunked: bool = False,
13:34:55         body_pos: _TYPE_BODY_POSITION | None = None,
13:34:55         preload_content: bool = True,
13:34:55         decode_content: bool = True,
13:34:55         **response_kw: typing.Any,
13:34:55     ) -> BaseHTTPResponse:
13:34:55         """
13:34:55         Get a connection from the pool and perform an HTTP request. This is the
13:34:55         lowest level call for making a request, so you'll need to specify all
13:34:55         the raw details.
13:34:55 
13:34:55         .. note::
13:34:55 
13:34:55            More commonly, it's appropriate to use a convenience method
13:34:55            such as :meth:`request`.
13:34:55 
13:34:55         .. note::
13:34:55 
13:34:55            `release_conn` will only behave as expected if
13:34:55            `preload_content=False` because we want to make
13:34:55            `preload_content=False` the default behaviour someday soon without
13:34:55            breaking backwards compatibility.
13:34:55 
13:34:55         :param method:
13:34:55             HTTP request method (such as GET, POST, PUT, etc.)
13:34:55 
13:34:55         :param url:
13:34:55             The URL to perform the request on.
13:34:55 
13:34:55         :param body:
13:34:55             Data to send in the request body, either :class:`str`, :class:`bytes`,
13:34:55             an iterable of :class:`str`/:class:`bytes`, or a file-like object.
13:34:55 
13:34:55         :param headers:
13:34:55             Dictionary of custom headers to send, such as User-Agent,
13:34:55             If-None-Match, etc. If None, pool headers are used. If provided,
13:34:55             these headers completely replace any pool-specific headers.
13:34:55 
13:34:55         :param retries:
13:34:55             Configure the number of retries to allow before raising a
13:34:55             :class:`~urllib3.exceptions.MaxRetryError` exception.
13:34:55 
13:34:55             If ``None`` (default) will retry 3 times, see ``Retry.DEFAULT``. Pass a
13:34:55             :class:`~urllib3.util.retry.Retry` object for fine-grained control
13:34:55             over different types of retries.
13:34:55             Pass an integer number to retry connection errors that many times,
13:34:55             but no other types of errors. Pass zero to never retry.
13:34:55 
13:34:55             If ``False``, then retries are disabled and any exception is raised
13:34:55             immediately. Also, instead of raising a MaxRetryError on redirects,
13:34:55             the redirect response will be returned.
13:34:55 
13:34:55         :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int.
13:34:55 
13:34:55         :param redirect:
13:34:55             If True, automatically handle redirects (status codes 301, 302,
13:34:55             303, 307, 308). Each redirect counts as a retry. Disabling retries
13:34:55             will disable redirect, too.
13:34:55 
13:34:55         :param assert_same_host:
13:34:55             If ``True``, will make sure that the host of the pool requests is
13:34:55             consistent else will raise HostChangedError. When ``False``, you can
13:34:55             use the pool on an HTTP proxy and request foreign hosts.
13:34:55 
13:34:55         :param timeout:
13:34:55             If specified, overrides the default timeout for this one
13:34:55             request. It may be a float (in seconds) or an instance of
13:34:55             :class:`urllib3.util.Timeout`.
13:34:55 
13:34:55         :param pool_timeout:
13:34:55             If set and the pool is set to block=True, then this method will
13:34:55             block for ``pool_timeout`` seconds and raise EmptyPoolError if no
13:34:55             connection is available within the time period.
13:34:55 
13:34:55         :param bool preload_content:
13:34:55             If True, the response's body will be preloaded into memory.
13:34:55 
13:34:55         :param bool decode_content:
13:34:55             If True, will attempt to decode the body based on the
13:34:55             'content-encoding' header.
13:34:55 
13:34:55         :param release_conn:
13:34:55             If False, then the urlopen call will not release the connection
13:34:55             back into the pool once a response is received (but will release if
13:34:55             you read the entire contents of the response such as when
13:34:55             `preload_content=True`). This is useful if you're not preloading
13:34:55             the response's content immediately. You will need to call
13:34:55             ``r.release_conn()`` on the response ``r`` to return the connection
13:34:55             back into the pool. If None, it takes the value of ``preload_content``
13:34:55             which defaults to ``True``.
13:34:55 
13:34:55         :param bool chunked:
13:34:55             If True, urllib3 will send the body using chunked transfer
13:34:55             encoding. Otherwise, urllib3 will send the body using the standard
13:34:55             content-length form. Defaults to False.
13:34:55 
13:34:55         :param int body_pos:
13:34:55             Position to seek to in file-like body in the event of a retry or
13:34:55             redirect. Typically this won't need to be set because urllib3 will
13:34:55             auto-populate the value when needed.
13:34:55         """
13:34:55         parsed_url = parse_url(url)
13:34:55         destination_scheme = parsed_url.scheme
13:34:55 
13:34:55         if headers is None:
13:34:55             headers = self.headers
13:34:55 
13:34:55         if not isinstance(retries, Retry):
13:34:55             retries = Retry.from_int(retries, redirect=redirect, default=self.retries)
13:34:55 
13:34:55         if release_conn is None:
13:34:55             release_conn = preload_content
13:34:55 
13:34:55         # Check host
13:34:55         if assert_same_host and not self.is_same_host(url):
13:34:55             raise HostChangedError(self, url, retries)
13:34:55 
13:34:55         # Ensure that the URL we're connecting to is properly encoded
13:34:55         if url.startswith("/"):
13:34:55             url = to_str(_encode_target(url))
13:34:55         else:
13:34:55             url = to_str(parsed_url.url)
13:34:55 
13:34:55         conn = None
13:34:55 
13:34:55         # Track whether `conn` needs to be released before
13:34:55         # returning/raising/recursing. Update this variable if necessary, and
13:34:55         # leave `release_conn` constant throughout the function. That way, if
13:34:55         # the function recurses, the original value of `release_conn` will be
13:34:55         # passed down into the recursive call, and its value will be respected.
13:34:55         #
13:34:55         # See issue #651 [1] for details.
13:34:55         #
13:34:55         # [1]
13:34:55         release_this_conn = release_conn
13:34:55 
13:34:55         http_tunnel_required = connection_requires_http_tunnel(
13:34:55             self.proxy, self.proxy_config, destination_scheme
13:34:55         )
13:34:55 
13:34:55         # Merge the proxy headers. Only done when not using HTTP CONNECT. We
13:34:55         # have to copy the headers dict so we can safely change it without those
13:34:55         # changes being reflected in anyone else's copy.
13:34:55         if not http_tunnel_required:
13:34:55             headers = headers.copy()  # type: ignore[attr-defined]
13:34:55             headers.update(self.proxy_headers)  # type: ignore[union-attr]
13:34:55 
13:34:55         # Must keep the exception bound to a separate variable or else Python 3
13:34:55         # complains about UnboundLocalError.
13:34:55         err = None
13:34:55 
13:34:55         # Keep track of whether we cleanly exited the except block. This
13:34:55         # ensures we do proper cleanup in finally.
13:34:55         clean_exit = False
13:34:55 
13:34:55         # Rewind body position, if needed. Record current position
13:34:55         # for future rewinds in the event of a redirect/retry.
13:34:55         body_pos = set_file_position(body, body_pos)
13:34:55 
13:34:55         try:
13:34:55             # Request a connection from the queue.
13:34:55             timeout_obj = self._get_timeout(timeout)
13:34:55             conn = self._get_conn(timeout=pool_timeout)
13:34:55 
13:34:55             conn.timeout = timeout_obj.connect_timeout  # type: ignore[assignment]
13:34:55 
13:34:55             # Is this a closed/new connection that requires CONNECT tunnelling?
13:34:55             if self.proxy is not None and http_tunnel_required and conn.is_closed:
13:34:55                 try:
13:34:55                     self._prepare_proxy(conn)
13:34:55                 except (BaseSSLError, OSError, SocketTimeout) as e:
13:34:55                     self._raise_timeout(
13:34:55                         err=e, url=self.proxy.url, timeout_value=conn.timeout
13:34:55                     )
13:34:55                     raise
13:34:55 
13:34:55             # If we're going to release the connection in ``finally:``, then
13:34:55             # the response doesn't need to know about the connection. Otherwise
13:34:55             # it will also try to release it and we'll have a double-release
13:34:55             # mess.
13:34:55             response_conn = conn if not release_conn else None
13:34:55 
13:34:55             # Make the request on the HTTPConnection object
13:34:55 >           response = self._make_request(
13:34:55                 conn,
13:34:55                 method,
13:34:55                 url,
13:34:55                 timeout=timeout_obj,
13:34:55                 body=body,
13:34:55                 headers=headers,
13:34:55                 chunked=chunked,
13:34:55                 retries=retries,
13:34:55                 response_conn=response_conn,
13:34:55                 preload_content=preload_content,
13:34:55                 decode_content=decode_content,
13:34:55                 **response_kw,
13:34:55             )
13:34:55 
13:34:55 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connectionpool.py:787: 
13:34:55 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
13:34:55 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connectionpool.py:493: in _make_request
13:34:55     conn.request(
13:34:55 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:500: in request
13:34:55     self.endheaders()
13:34:55 /opt/pyenv/versions/3.11.10/lib/python3.11/http/client.py:1298: in endheaders
13:34:55     self._send_output(message_body, encode_chunked=encode_chunked)
13:34:55 /opt/pyenv/versions/3.11.10/lib/python3.11/http/client.py:1058: in _send_output
13:34:55     self.send(msg)
13:34:55 /opt/pyenv/versions/3.11.10/lib/python3.11/http/client.py:996: in send
13:34:55     self.connect()
13:34:55 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:331: in connect
13:34:55     self.sock = self._new_conn()
13:34:55                 ^^^^^^^^^^^^^^^^
13:34:55 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
13:34:55 
13:34:55 self = 
13:34:55 
13:34:55     def _new_conn(self) -> socket.socket:
13:34:55         """Establish a socket connection and set nodelay settings on it.
13:34:55 
13:34:55         :return: New socket connection.
13:34:55         """
13:34:55         try:
13:34:55             sock = connection.create_connection(
13:34:55                 (self._dns_host, self.port),
13:34:55                 self.timeout,
13:34:55                 source_address=self.source_address,
13:34:55                 socket_options=self.socket_options,
13:34:55             )
13:34:55         except socket.gaierror as e:
13:34:55             raise NameResolutionError(self.host, self, e) from e
13:34:55         except SocketTimeout as e:
13:34:55             raise ConnectTimeoutError(
13:34:55                 self,
13:34:55                 f"Connection to {self.host} timed out. (connect timeout={self.timeout})",
13:34:55             ) from e
13:34:55 
13:34:55         except OSError as e:
13:34:55 >           raise NewConnectionError(
13:34:55                 self, f"Failed to establish a new connection: {e}"
13:34:55             ) from e
13:34:55 E           urllib3.exceptions.NewConnectionError: HTTPConnection(host='localhost', port=8191): Failed to establish a new connection: [Errno 111] Connection refused
13:34:55 
13:34:55 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:219: NewConnectionError
13:34:55 
13:34:55 The above exception was the direct cause of the following exception:
13:34:55 
13:34:55 self = 
13:34:55 request = , stream = False
13:34:55 timeout = Timeout(connect=30, read=30, total=None), verify = True, cert = None
13:34:55 proxies = OrderedDict()
13:34:55 
13:34:55     def send(
13:34:55         self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None
13:34:55     ):
13:34:55         """Sends PreparedRequest object. Returns Response object.
13:34:55 
13:34:55         :param request: The :class:`PreparedRequest ` being sent.
13:34:55         :param stream: (optional) Whether to stream the request content.
13:34:55         :param timeout: (optional) How long to wait for the server to send
13:34:55             data before giving up, as a float, or a :ref:`(connect timeout,
13:34:55             read timeout) ` tuple.
13:34:55         :type timeout: float or tuple or urllib3 Timeout object
13:34:55         :param verify: (optional) Either a boolean, in which case it controls whether
13:34:55             we verify the server's TLS certificate, or a string, in which case it
13:34:55             must be a path to a CA bundle to use
13:34:55         :param cert: (optional) Any user-provided SSL certificate to be trusted.
13:34:55         :param proxies: (optional) The proxies dictionary to apply to the request.
13:34:55         :rtype: requests.Response
13:34:55         """
13:34:55 
13:34:55         try:
13:34:55             conn = self.get_connection_with_tls_context(
13:34:55                 request, verify, proxies=proxies, cert=cert
13:34:55             )
13:34:55         except LocationValueError as e:
13:34:55             raise InvalidURL(e, request=request)
13:34:55 
13:34:55         self.cert_verify(conn, request.url, verify, cert)
13:34:55         url = self.request_url(request, proxies)
13:34:55         self.add_headers(
13:34:55             request,
13:34:55             stream=stream,
13:34:55             timeout=timeout,
13:34:55             verify=verify,
13:34:55             cert=cert,
13:34:55             proxies=proxies,
13:34:55         )
13:34:55 
13:34:55         chunked = not (request.body is None or "Content-Length" in request.headers)
13:34:55 
13:34:55         if isinstance(timeout, tuple):
13:34:55             try:
13:34:55                 connect, read = timeout
13:34:55                 timeout = TimeoutSauce(connect=connect, read=read)
13:34:55             except ValueError:
13:34:55                 raise ValueError(
13:34:55                     f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, "
13:34:55                     f"or a single float to set both timeouts to the same value."
13:34:55                 )
13:34:55         elif isinstance(timeout, TimeoutSauce):
13:34:55             pass
13:34:55         else:
13:34:55             timeout = TimeoutSauce(connect=timeout, read=timeout)
13:34:55 
13:34:55         try:
13:34:55 >           resp = conn.urlopen(
13:34:55                 method=request.method,
13:34:55                 url=url,
13:34:55                 body=request.body,
13:34:55                 headers=request.headers,
13:34:55                 redirect=False,
13:34:55                 assert_same_host=False,
13:34:55                 preload_content=False,
13:34:55                 decode_content=False,
13:34:55                 retries=self.max_retries,
13:34:55                 timeout=timeout,
13:34:55                 chunked=chunked,
13:34:55             )
13:34:55 
13:34:55 ../.tox/tests121/lib/python3.11/site-packages/requests/adapters.py:644: 
13:34:55 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
13:34:55 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connectionpool.py:841: in urlopen
13:34:55     retries = retries.increment(
13:34:55 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
13:34:55 
13:34:55 self = Retry(total=0, connect=None, read=False, redirect=None, status=None)
13:34:55 method = 'GET'
13:34:55 url = '/rests/data/transportpce-portmapping:network/nodes=XPDRA01/node-info'
13:34:55 response = None
13:34:55 error = NewConnectionError("HTTPConnection(host='localhost', port=8191): Failed to establish a new connection: [Errno 111] Connection refused")
13:34:55 _pool = 
13:34:55 _stacktrace = 
13:34:55 
13:34:55     def increment(
13:34:55         self,
13:34:55         method: str | None = None,
13:34:55         url: str | None = None,
13:34:55         response: BaseHTTPResponse | None = None,
13:34:55         error: Exception | None = None,
13:34:55         _pool: ConnectionPool | None = None,
13:34:55         _stacktrace: TracebackType | None = None,
13:34:55     ) -> Self:
13:34:55         """Return a new Retry object with incremented retry counters.
13:34:55 
13:34:55         :param response: A response object, or None, if the server did not
13:34:55             return a response.
13:34:55         :type response: :class:`~urllib3.response.BaseHTTPResponse`
13:34:55         :param Exception error: An error encountered during the request, or
13:34:55             None if the response was received successfully.
13:34:55 
13:34:55         :return: A new ``Retry`` object.
13:34:55         """
13:34:55         if self.total is False and error:
13:34:55             # Disabled, indicate to re-raise the error.
13:34:55             raise reraise(type(error), error, _stacktrace)
13:34:55 
13:34:55         total = self.total
13:34:55         if total is not None:
13:34:55             total -= 1
13:34:55 
13:34:55         connect = self.connect
13:34:55         read = self.read
13:34:55         redirect = self.redirect
13:34:55         status_count = self.status
13:34:55         other = self.other
13:34:55         cause = "unknown"
13:34:55         status = None
13:34:55         redirect_location = None
13:34:55 
13:34:55         if error and self._is_connection_error(error):
13:34:55             # Connect retry?
13:34:55             if connect is False:
13:34:55                 raise reraise(type(error), error, _stacktrace)
13:34:55             elif connect is not None:
13:34:55                 connect -= 1
13:34:55 
13:34:55         elif error and self._is_read_error(error):
13:34:55             # Read retry?
13:34:55             if read is False or method is None or not self._is_method_retryable(method):
13:34:55                 raise reraise(type(error), error, _stacktrace)
13:34:55             elif read is not None:
13:34:55                 read -= 1
13:34:55 
13:34:55         elif error:
13:34:55             # Other retry?
13:34:55             if other is not None:
13:34:55                 other -= 1
13:34:55 
13:34:55         elif response and response.get_redirect_location():
13:34:55             # Redirect retry?
13:34:55             if redirect is not None:
13:34:55                 redirect -= 1
13:34:55             cause = "too many redirects"
13:34:55             response_redirect_location = response.get_redirect_location()
13:34:55             if response_redirect_location:
13:34:55                 redirect_location = response_redirect_location
13:34:55             status = response.status
13:34:55 
13:34:55         else:
13:34:55             # Incrementing because of a server error like a 500 in
13:34:55             # status_forcelist and the given method is in the allowed_methods
13:34:55             cause = ResponseError.GENERIC_ERROR
13:34:55             if response and response.status:
13:34:55                 if status_count is not None:
13:34:55                     status_count -= 1
13:34:55                 cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status)
13:34:55                 status = response.status
13:34:55 
13:34:55         history = self.history + (
13:34:55             RequestHistory(method, url, error, status, redirect_location),
13:34:55         )
13:34:55 
13:34:55         new_retry = self.new(
13:34:55             total=total,
13:34:55             connect=connect,
13:34:55             read=read,
13:34:55             redirect=redirect,
13:34:55             status=status_count,
13:34:55             other=other,
13:34:55             history=history,
13:34:55         )
13:34:55 
13:34:55         if new_retry.is_exhausted():
13:34:55             reason = error or ResponseError(cause)
13:34:55 >           raise MaxRetryError(_pool, url, reason) from reason  # type: ignore[arg-type]
13:34:55             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
13:34:55 E           urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=8191): Max retries exceeded with url: /rests/data/transportpce-portmapping:network/nodes=XPDRA01/node-info (Caused by NewConnectionError("HTTPConnection(host='localhost', port=8191): Failed to establish a new connection: [Errno 111] Connection refused"))
13:34:55 
13:34:55 ../.tox/tests121/lib/python3.11/site-packages/urllib3/util/retry.py:535: MaxRetryError
13:34:55 
13:34:55 During handling of the above exception, another exception occurred:
13:34:55 
13:34:55 self = 
13:34:55 
13:34:55     def test_09_xpdr_portmapping_info(self):
13:34:55 >       response = test_utils.get_portmapping_node_attr("XPDRA01", "node-info", None)
13:34:55                    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
13:34:55 
13:34:55 transportpce_tests/1.2.1/test01_portmapping.py:110: 
13:34:55 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
13:34:55 transportpce_tests/common/test_utils.py:519: in get_portmapping_node_attr
13:34:55     response = get_request(target_url)
13:34:55                ^^^^^^^^^^^^^^^^^^^^^^^
13:34:55 transportpce_tests/common/test_utils.py:117: in get_request
13:34:55     return requests.request(
13:34:55 ../.tox/tests121/lib/python3.11/site-packages/requests/api.py:59: in request
13:34:55     return session.request(method=method, url=url, **kwargs)
13:34:55            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
13:34:55 ../.tox/tests121/lib/python3.11/site-packages/requests/sessions.py:589: in request
13:34:55     resp = self.send(prep, **send_kwargs)
13:34:55            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
13:34:55 ../.tox/tests121/lib/python3.11/site-packages/requests/sessions.py:703: in send
13:34:55     r = adapter.send(request, **kwargs)
13:34:55         ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
13:34:55 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
13:34:55 
13:34:55 self = 
13:34:55 request = , stream = False
13:34:55 timeout = Timeout(connect=30, read=30, total=None), verify = True, cert = None
13:34:55 proxies = OrderedDict()
13:34:55 
13:34:55     def send(
13:34:55         self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None
13:34:55     ):
13:34:55         """Sends PreparedRequest object. Returns Response object.
13:34:55 
13:34:55         :param request: The :class:`PreparedRequest ` being sent.
13:34:55         :param stream: (optional) Whether to stream the request content.
13:34:55         :param timeout: (optional) How long to wait for the server to send
13:34:55             data before giving up, as a float, or a :ref:`(connect timeout,
13:34:55             read timeout) ` tuple.
13:34:55         :type timeout: float or tuple or urllib3 Timeout object
13:34:55         :param verify: (optional) Either a boolean, in which case it controls whether
13:34:55             we verify the server's TLS certificate, or a string, in which case it
13:34:55             must be a path to a CA bundle to use
13:34:55         :param cert: (optional) Any user-provided SSL certificate to be trusted.
13:34:55         :param proxies: (optional) The proxies dictionary to apply to the request.
13:34:55         :rtype: requests.Response
13:34:55         """
13:34:55 
13:34:55         try:
13:34:55             conn = self.get_connection_with_tls_context(
13:34:55                 request, verify, proxies=proxies, cert=cert
13:34:55             )
13:34:55         except LocationValueError as e:
13:34:55             raise InvalidURL(e, request=request)
13:34:55 
13:34:55         self.cert_verify(conn, request.url, verify, cert)
13:34:55         url = self.request_url(request, proxies)
13:34:55         self.add_headers(
13:34:55             request,
13:34:55             stream=stream,
13:34:55             timeout=timeout,
13:34:55             verify=verify,
13:34:55             cert=cert,
13:34:55             proxies=proxies,
13:34:55         )
13:34:55 
13:34:55         chunked = not (request.body is None or "Content-Length" in request.headers)
13:34:55 
13:34:55         if isinstance(timeout, tuple):
13:34:55             try:
13:34:55                 connect, read = timeout
13:34:55                 timeout = TimeoutSauce(connect=connect, read=read)
13:34:55             except ValueError:
13:34:55                 raise ValueError(
13:34:55                     f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, "
13:34:55                     f"or a single float to set both timeouts to the same value."
13:34:55                 )
13:34:55         elif isinstance(timeout, TimeoutSauce):
13:34:55             pass
13:34:55         else:
13:34:55             timeout = TimeoutSauce(connect=timeout, read=timeout)
13:34:55 
13:34:55         try:
13:34:55             resp = conn.urlopen(
13:34:55                 method=request.method,
13:34:55                 url=url,
13:34:55                 body=request.body,
13:34:55                 headers=request.headers,
13:34:55                 redirect=False,
13:34:55                 assert_same_host=False,
13:34:55                 preload_content=False,
13:34:55                 decode_content=False,
13:34:55                 retries=self.max_retries,
13:34:55                 timeout=timeout,
13:34:55                 chunked=chunked,
13:34:55             )
13:34:55 
13:34:55         except (ProtocolError, OSError) as err:
13:34:55             raise ConnectionError(err, request=request)
13:34:55 
13:34:55         except MaxRetryError as e:
13:34:55             if isinstance(e.reason, ConnectTimeoutError):
13:34:55                 # TODO: Remove this in 3.0.0: see #2811
13:34:55                 if not isinstance(e.reason, NewConnectionError):
13:34:55                     raise ConnectTimeout(e, request=request)
13:34:55 
13:34:55             if isinstance(e.reason, ResponseError):
13:34:55                 raise RetryError(e, request=request)
13:34:55 
13:34:55             if isinstance(e.reason, _ProxyError):
13:34:55                 raise ProxyError(e, request=request)
13:34:55 
13:34:55             if isinstance(e.reason, _SSLError):
13:34:55                 # This branch is for urllib3 v1.22 and later.
13:34:55                 raise SSLError(e, request=request)
13:34:55 
13:34:55 >           raise ConnectionError(e, request=request)
13:34:55 E           requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=8191): Max retries exceeded with url: /rests/data/transportpce-portmapping:network/nodes=XPDRA01/node-info (Caused by NewConnectionError("HTTPConnection(host='localhost', port=8191): Failed to establish a new connection: [Errno 111] Connection refused"))
13:34:55 
13:34:55 ../.tox/tests121/lib/python3.11/site-packages/requests/adapters.py:677: ConnectionError
13:34:55 ----------------------------- Captured stdout call -----------------------------
13:34:55 execution of test_09_xpdr_portmapping_info
13:34:55 ________ TestTransportPCEPortmapping.test_10_xpdr_portmapping_NETWORK1 _________
13:34:55 
13:34:55 self = 
13:34:55 
13:34:55     def _new_conn(self) -> socket.socket:
13:34:55         """Establish a socket connection and set nodelay settings on it.
13:34:55 
13:34:55         :return: New socket connection.
13:34:55         """
13:34:55         try:
13:34:55 >           sock = connection.create_connection(
13:34:55                 (self._dns_host, self.port),
13:34:55                 self.timeout,
13:34:55                 source_address=self.source_address,
13:34:55                 socket_options=self.socket_options,
13:34:55             )
13:34:55 
13:34:55 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:204: 
13:34:55 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
13:34:55 ../.tox/tests121/lib/python3.11/site-packages/urllib3/util/connection.py:85: in create_connection
13:34:55     raise err
13:34:55 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
13:34:55 
13:34:55 address = ('localhost', 8191), timeout = 30, source_address = None
13:34:55 socket_options = [(6, 1, 1)]
13:34:55 
13:34:55     def create_connection(
13:34:55         address: tuple[str, int],
13:34:55         timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT,
13:34:55         source_address: tuple[str, int] | None = None,
13:34:55         socket_options: _TYPE_SOCKET_OPTIONS | None = None,
13:34:55     ) -> socket.socket:
13:34:55         """Connect to *address* and return the socket object.
13:34:55 
13:34:55         Convenience function. Connect to *address* (a 2-tuple ``(host,
13:34:55         port)``) and return the socket object. Passing the optional
13:34:55         *timeout* parameter will set the timeout on the socket instance
13:34:55         before attempting to connect. If no *timeout* is supplied, the
13:34:55         global default timeout setting returned by :func:`socket.getdefaulttimeout`
13:34:55         is used. If *source_address* is set it must be a tuple of (host, port)
13:34:55         for the socket to bind as a source address before making the connection.
13:34:55         An host of '' or port 0 tells the OS to use the default.
13:34:55         """
13:34:55 
13:34:55         host, port = address
13:34:55         if host.startswith("["):
13:34:55             host = host.strip("[]")
13:34:55         err = None
13:34:55 
13:34:55         # Using the value from allowed_gai_family() in the context of getaddrinfo lets
13:34:55         # us select whether to work with IPv4 DNS records, IPv6 records, or both.
13:34:55         # The original create_connection function always returns all records.
13:34:55         family = allowed_gai_family()
13:34:55 
13:34:55         try:
13:34:55             host.encode("idna")
13:34:55         except UnicodeError:
13:34:55             raise LocationParseError(f"'{host}', label empty or too long") from None
13:34:55 
13:34:55         for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM):
13:34:55             af, socktype, proto, canonname, sa = res
13:34:55             sock = None
13:34:55             try:
13:34:55                 sock = socket.socket(af, socktype, proto)
13:34:55 
13:34:55                 # If provided, set socket level options before connecting.
13:34:55                 _set_socket_options(sock, socket_options)
13:34:55 
13:34:55                 if timeout is not _DEFAULT_TIMEOUT:
13:34:55                     sock.settimeout(timeout)
13:34:55                 if source_address:
13:34:55                     sock.bind(source_address)
13:34:55 >               sock.connect(sa)
13:34:55 E               ConnectionRefusedError: [Errno 111] Connection refused
13:34:55 
13:34:55 ../.tox/tests121/lib/python3.11/site-packages/urllib3/util/connection.py:73: ConnectionRefusedError
13:34:55 
13:34:55 The above exception was the direct cause of the following exception:
13:34:55 
13:34:55 self = 
13:34:55 method = 'GET'
13:34:55 url = '/rests/data/transportpce-portmapping:network/nodes=XPDRA01/mapping=XPDR1-NETWORK1'
13:34:55 body = None
13:34:55 headers = {'User-Agent': 'python-requests/2.32.5', 'Accept-Encoding': 'gzip, deflate', 'Accept': 'application/json', 'Connection': 'keep-alive', 'Content-Type': 'application/json', 'Authorization': 'Basic YWRtaW46YWRtaW4='}
13:34:55 retries = Retry(total=0, connect=None, read=False, redirect=None, status=None)
13:34:55 redirect = False, assert_same_host = False
13:34:55 timeout = Timeout(connect=30, read=30, total=None), pool_timeout = None
13:34:55 release_conn = False, chunked = False, body_pos = None, preload_content = False
13:34:55 decode_content = False, response_kw = {}
13:34:55 parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/rests/data/transportpce-portmapping:network/nodes=XPDRA01/mapping=XPDR1-NETWORK1', query=None, fragment=None)
13:34:55 destination_scheme = None, conn = None, release_this_conn = True
13:34:55 http_tunnel_required = False, err = None, clean_exit = False
13:34:55 
13:34:55     def urlopen(  # type: ignore[override]
13:34:55         self,
13:34:55         method: str,
13:34:55         url: str,
13:34:55         body: _TYPE_BODY | None = None,
13:34:55         headers: typing.Mapping[str, str] | None = None,
13:34:55         retries: Retry | bool | int | None = None,
13:34:55         redirect: bool = True,
13:34:55         assert_same_host: bool = True,
13:34:55         timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT,
13:34:55         pool_timeout: int | None = None,
13:34:55         release_conn: bool | None = None,
13:34:55         chunked: bool = False,
13:34:55         body_pos: _TYPE_BODY_POSITION | None = None,
13:34:55         preload_content: bool = True,
13:34:55         decode_content: bool = True,
13:34:55         **response_kw: typing.Any,
13:34:55     ) -> BaseHTTPResponse:
13:34:55         """
13:34:55         Get a connection from the pool and perform an HTTP request. This is the
13:34:55         lowest level call for making a request, so you'll need to specify all
13:34:55         the raw details.
13:34:55 
13:34:55         .. note::
13:34:55 
13:34:55            More commonly, it's appropriate to use a convenience method
13:34:55            such as :meth:`request`.
13:34:55 
13:34:55         .. note::
13:34:55 
13:34:55            `release_conn` will only behave as expected if
13:34:55            `preload_content=False` because we want to make
13:34:55            `preload_content=False` the default behaviour someday soon without
13:34:55            breaking backwards compatibility.
13:34:55 
13:34:55         :param method:
13:34:55             HTTP request method (such as GET, POST, PUT, etc.)
13:34:55 
13:34:55         :param url:
13:34:55             The URL to perform the request on.
13:34:55 
13:34:55         :param body:
13:34:55             Data to send in the request body, either :class:`str`, :class:`bytes`,
13:34:55             an iterable of :class:`str`/:class:`bytes`, or a file-like object.
13:34:55 
13:34:55         :param headers:
13:34:55             Dictionary of custom headers to send, such as User-Agent,
13:34:55             If-None-Match, etc. If None, pool headers are used. If provided,
13:34:55             these headers completely replace any pool-specific headers.
13:34:55 
13:34:55         :param retries:
13:34:55             Configure the number of retries to allow before raising a
13:34:55             :class:`~urllib3.exceptions.MaxRetryError` exception.
13:34:55 
13:34:55             If ``None`` (default) will retry 3 times, see ``Retry.DEFAULT``. Pass a
13:34:55             :class:`~urllib3.util.retry.Retry` object for fine-grained control
13:34:55             over different types of retries.
13:34:55             Pass an integer number to retry connection errors that many times,
13:34:55             but no other types of errors. Pass zero to never retry.
13:34:55 
13:34:55             If ``False``, then retries are disabled and any exception is raised
13:34:55             immediately. Also, instead of raising a MaxRetryError on redirects,
13:34:55             the redirect response will be returned.
13:34:55 
13:34:55         :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int.
13:34:55 
13:34:55         :param redirect:
13:34:55             If True, automatically handle redirects (status codes 301, 302,
13:34:55             303, 307, 308). Each redirect counts as a retry. Disabling retries
13:34:55             will disable redirect, too.
13:34:55 
13:34:55         :param assert_same_host:
13:34:55             If ``True``, will make sure that the host of the pool requests is
13:34:55             consistent else will raise HostChangedError. When ``False``, you can
13:34:55             use the pool on an HTTP proxy and request foreign hosts.
13:34:55 
13:34:55         :param timeout:
13:34:55             If specified, overrides the default timeout for this one
13:34:55             request. It may be a float (in seconds) or an instance of
13:34:55             :class:`urllib3.util.Timeout`.
13:34:55 
13:34:55         :param pool_timeout:
13:34:55             If set and the pool is set to block=True, then this method will
13:34:55             block for ``pool_timeout`` seconds and raise EmptyPoolError if no
13:34:55             connection is available within the time period.
13:34:55 
13:34:55         :param bool preload_content:
13:34:55             If True, the response's body will be preloaded into memory.
13:34:55 
13:34:55         :param bool decode_content:
13:34:55             If True, will attempt to decode the body based on the
13:34:55             'content-encoding' header.
13:34:55 
13:34:55         :param release_conn:
13:34:55             If False, then the urlopen call will not release the connection
13:34:55             back into the pool once a response is received (but will release if
13:34:55             you read the entire contents of the response such as when
13:34:55             `preload_content=True`). This is useful if you're not preloading
13:34:55             the response's content immediately. You will need to call
13:34:55             ``r.release_conn()`` on the response ``r`` to return the connection
13:34:55             back into the pool. If None, it takes the value of ``preload_content``
13:34:55             which defaults to ``True``.
13:34:55 
13:34:55         :param bool chunked:
13:34:55             If True, urllib3 will send the body using chunked transfer
13:34:55             encoding. Otherwise, urllib3 will send the body using the standard
13:34:55             content-length form. Defaults to False.
13:34:55 
13:34:55         :param int body_pos:
13:34:55             Position to seek to in file-like body in the event of a retry or
13:34:55             redirect. Typically this won't need to be set because urllib3 will
13:34:55             auto-populate the value when needed.
13:34:55         """
13:34:55         parsed_url = parse_url(url)
13:34:55         destination_scheme = parsed_url.scheme
13:34:55 
13:34:55         if headers is None:
13:34:55             headers = self.headers
13:34:55 
13:34:55         if not isinstance(retries, Retry):
13:34:55             retries = Retry.from_int(retries, redirect=redirect, default=self.retries)
13:34:55 
13:34:55         if release_conn is None:
13:34:55             release_conn = preload_content
13:34:55 
13:34:55         # Check host
13:34:55         if assert_same_host and not self.is_same_host(url):
13:34:55             raise HostChangedError(self, url, retries)
13:34:55 
13:34:55         # Ensure that the URL we're connecting to is properly encoded
13:34:55         if url.startswith("/"):
13:34:55             url = to_str(_encode_target(url))
13:34:55         else:
13:34:55             url = to_str(parsed_url.url)
13:34:55 
13:34:55         conn = None
13:34:55 
13:34:55         # Track whether `conn` needs to be released before
13:34:55         # returning/raising/recursing. Update this variable if necessary, and
13:34:55         # leave `release_conn` constant throughout the function. That way, if
13:34:55         # the function recurses, the original value of `release_conn` will be
13:34:55         # passed down into the recursive call, and its value will be respected.
13:34:55         #
13:34:55         # See issue #651 [1] for details.
13:34:55         #
13:34:55         # [1]
13:34:55         release_this_conn = release_conn
13:34:55 
13:34:55         http_tunnel_required = connection_requires_http_tunnel(
13:34:55             self.proxy, self.proxy_config, destination_scheme
13:34:55         )
13:34:55 
13:34:55         # Merge the proxy headers. Only done when not using HTTP CONNECT. We
13:34:55         # have to copy the headers dict so we can safely change it without those
13:34:55         # changes being reflected in anyone else's copy.
13:34:55         if not http_tunnel_required:
13:34:55             headers = headers.copy()  # type: ignore[attr-defined]
13:34:55             headers.update(self.proxy_headers)  # type: ignore[union-attr]
13:34:55 
13:34:55         # Must keep the exception bound to a separate variable or else Python 3
13:34:55         # complains about UnboundLocalError.
13:34:55         err = None
13:34:55 
13:34:55         # Keep track of whether we cleanly exited the except block. This
13:34:55         # ensures we do proper cleanup in finally.
13:34:55         clean_exit = False
13:34:55 
13:34:55         # Rewind body position, if needed. Record current position
13:34:55         # for future rewinds in the event of a redirect/retry.
13:34:55         body_pos = set_file_position(body, body_pos)
13:34:55 
13:34:55         try:
13:34:55             # Request a connection from the queue.
13:34:55             timeout_obj = self._get_timeout(timeout)
13:34:55             conn = self._get_conn(timeout=pool_timeout)
13:34:55 
13:34:55             conn.timeout = timeout_obj.connect_timeout  # type: ignore[assignment]
13:34:55 
13:34:55             # Is this a closed/new connection that requires CONNECT tunnelling?
13:34:55             if self.proxy is not None and http_tunnel_required and conn.is_closed:
13:34:55                 try:
13:34:55                     self._prepare_proxy(conn)
13:34:55                 except (BaseSSLError, OSError, SocketTimeout) as e:
13:34:55                     self._raise_timeout(
13:34:55                         err=e, url=self.proxy.url, timeout_value=conn.timeout
13:34:55                     )
13:34:55                     raise
13:34:55 
13:34:55             # If we're going to release the connection in ``finally:``, then
13:34:55             # the response doesn't need to know about the connection. Otherwise
13:34:55             # it will also try to release it and we'll have a double-release
13:34:55             # mess.
13:34:55             response_conn = conn if not release_conn else None
13:34:55 
13:34:55             # Make the request on the HTTPConnection object
13:34:55 >           response = self._make_request(
13:34:55                 conn,
13:34:55                 method,
13:34:55                 url,
13:34:55                 timeout=timeout_obj,
13:34:55                 body=body,
13:34:55                 headers=headers,
13:34:55                 chunked=chunked,
13:34:55                 retries=retries,
13:34:55                 response_conn=response_conn,
13:34:55                 preload_content=preload_content,
13:34:55                 decode_content=decode_content,
13:34:55                 **response_kw,
13:34:55             )
13:34:55 
13:34:55 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connectionpool.py:787: 
13:34:55 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
13:34:55 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connectionpool.py:493: in _make_request
13:34:55     conn.request(
13:34:55 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:500: in request
13:34:55     self.endheaders()
13:34:55 /opt/pyenv/versions/3.11.10/lib/python3.11/http/client.py:1298: in endheaders
13:34:55     self._send_output(message_body, encode_chunked=encode_chunked)
13:34:55 /opt/pyenv/versions/3.11.10/lib/python3.11/http/client.py:1058: in _send_output
13:34:55     self.send(msg)
13:34:55 /opt/pyenv/versions/3.11.10/lib/python3.11/http/client.py:996: in send
13:34:55     self.connect()
13:34:55 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:331: in connect
13:34:55     self.sock = self._new_conn()
13:34:55                 ^^^^^^^^^^^^^^^^
13:34:55 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
13:34:55 
13:34:55 self = 
13:34:55 
13:34:55     def _new_conn(self) -> socket.socket:
13:34:55         """Establish a socket connection and set nodelay settings on it.
13:34:55 
13:34:55         :return: New socket connection.
13:34:55         """
13:34:55         try:
13:34:55             sock = connection.create_connection(
13:34:55                 (self._dns_host, self.port),
13:34:55                 self.timeout,
13:34:55                 source_address=self.source_address,
13:34:55                 socket_options=self.socket_options,
13:34:55             )
13:34:55         except socket.gaierror as e:
13:34:55             raise NameResolutionError(self.host, self, e) from e
13:34:55         except SocketTimeout as e:
13:34:55             raise ConnectTimeoutError(
13:34:55                 self,
13:34:55                 f"Connection to {self.host} timed out. (connect timeout={self.timeout})",
13:34:55             ) from e
13:34:55 
13:34:55         except OSError as e:
13:34:55 >           raise NewConnectionError(
13:34:55                 self, f"Failed to establish a new connection: {e}"
13:34:55             ) from e
13:34:55 E           urllib3.exceptions.NewConnectionError: HTTPConnection(host='localhost', port=8191): Failed to establish a new connection: [Errno 111] Connection refused
13:34:55 
13:34:55 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:219: NewConnectionError
13:34:55 
13:34:55 The above exception was the direct cause of the following exception:
13:34:55 
13:34:55 self = 
13:34:55 request = , stream = False
13:34:55 timeout = Timeout(connect=30, read=30, total=None), verify = True, cert = None
13:34:55 proxies = OrderedDict()
13:34:55 
13:34:55     def send(
13:34:55         self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None
13:34:55     ):
13:34:55         """Sends PreparedRequest object. Returns Response object.
13:34:55 
13:34:55         :param request: The :class:`PreparedRequest ` being sent.
13:34:55         :param stream: (optional) Whether to stream the request content.
13:34:55         :param timeout: (optional) How long to wait for the server to send
13:34:55             data before giving up, as a float, or a :ref:`(connect timeout,
13:34:55             read timeout) ` tuple.
13:34:55         :type timeout: float or tuple or urllib3 Timeout object
13:34:55         :param verify: (optional) Either a boolean, in which case it controls whether
13:34:55             we verify the server's TLS certificate, or a string, in which case it
13:34:55             must be a path to a CA bundle to use
13:34:55         :param cert: (optional) Any user-provided SSL certificate to be trusted.
13:34:55         :param proxies: (optional) The proxies dictionary to apply to the request.
13:34:55         :rtype: requests.Response
13:34:55         """
13:34:55 
13:34:55         try:
13:34:55             conn = self.get_connection_with_tls_context(
13:34:55                 request, verify, proxies=proxies, cert=cert
13:34:55             )
13:34:55         except LocationValueError as e:
13:34:55             raise InvalidURL(e, request=request)
13:34:55 
13:34:55         self.cert_verify(conn, request.url, verify, cert)
13:34:55         url = self.request_url(request, proxies)
13:34:55         self.add_headers(
13:34:55             request,
13:34:55             stream=stream,
13:34:55             timeout=timeout,
13:34:55             verify=verify,
13:34:55             cert=cert,
13:34:55             proxies=proxies,
13:34:55         )
13:34:55 
13:34:55         chunked = not (request.body is None or "Content-Length" in request.headers)
13:34:55 
13:34:55         if isinstance(timeout, tuple):
13:34:55             try:
13:34:55                 connect, read = timeout
13:34:55                 timeout = TimeoutSauce(connect=connect, read=read)
13:34:55             except ValueError:
13:34:55                 raise ValueError(
13:34:55                     f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, "
13:34:55                     f"or a single float to set both timeouts to the same value."
13:34:55                 )
13:34:55         elif isinstance(timeout, TimeoutSauce):
13:34:55             pass
13:34:55         else:
13:34:55             timeout = TimeoutSauce(connect=timeout, read=timeout)
13:34:55 
13:34:55         try:
13:34:55 >           resp = conn.urlopen(
13:34:55                 method=request.method,
13:34:55                 url=url,
13:34:55                 body=request.body,
13:34:55                 headers=request.headers,
13:34:55                 redirect=False,
13:34:55                 assert_same_host=False,
13:34:55                 preload_content=False,
13:34:55                 decode_content=False,
13:34:55                 retries=self.max_retries,
13:34:55                 timeout=timeout,
13:34:55                 chunked=chunked,
13:34:55             )
13:34:55 
13:34:55 ../.tox/tests121/lib/python3.11/site-packages/requests/adapters.py:644: 
13:34:55 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
13:34:55 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connectionpool.py:841: in urlopen
13:34:55     retries = retries.increment(
13:34:55 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
13:34:55 
13:34:55 self = Retry(total=0, connect=None, read=False, redirect=None, status=None)
13:34:55 method = 'GET'
13:34:55 url = '/rests/data/transportpce-portmapping:network/nodes=XPDRA01/mapping=XPDR1-NETWORK1'
13:34:55 response = None
13:34:55 error = NewConnectionError("HTTPConnection(host='localhost', port=8191): Failed to establish a new connection: [Errno 111] Connection refused")
13:34:55 _pool = 
13:34:55 _stacktrace = 
13:34:55 
13:34:55     def increment(
13:34:55         self,
13:34:55         method: str | None = None,
13:34:55         url: str | None = None,
13:34:55         response: BaseHTTPResponse | None = None,
13:34:55         error: Exception | None = None,
13:34:55         _pool: ConnectionPool | None = None,
13:34:55         _stacktrace: TracebackType | None = None,
13:34:55     ) -> Self:
13:34:55         """Return a new Retry object with incremented retry counters.
13:34:55 
13:34:55         :param response: A response object, or None, if the server did not
13:34:55             return a response.
13:34:55         :type response: :class:`~urllib3.response.BaseHTTPResponse`
13:34:55         :param Exception error: An error encountered during the request, or
13:34:55             None if the response was received successfully.
13:34:55 
13:34:55         :return: A new ``Retry`` object.
13:34:55         """
13:34:55         if self.total is False and error:
13:34:55             # Disabled, indicate to re-raise the error.
13:34:55             raise reraise(type(error), error, _stacktrace)
13:34:55 
13:34:55         total = self.total
13:34:55         if total is not None:
13:34:55             total -= 1
13:34:55 
13:34:55         connect = self.connect
13:34:55         read = self.read
13:34:55         redirect = self.redirect
13:34:55         status_count = self.status
13:34:55         other = self.other
13:34:55         cause = "unknown"
13:34:55         status = None
13:34:55         redirect_location = None
13:34:55 
13:34:55         if error and self._is_connection_error(error):
13:34:55             # Connect retry?
13:34:55             if connect is False:
13:34:55                 raise reraise(type(error), error, _stacktrace)
13:34:55             elif connect is not None:
13:34:55                 connect -= 1
13:34:55 
13:34:55         elif error and self._is_read_error(error):
13:34:55             # Read retry?
13:34:55             if read is False or method is None or not self._is_method_retryable(method):
13:34:55                 raise reraise(type(error), error, _stacktrace)
13:34:55             elif read is not None:
13:34:55                 read -= 1
13:34:55 
13:34:55         elif error:
13:34:55             # Other retry?
13:34:55             if other is not None:
13:34:55                 other -= 1
13:34:55 
13:34:55         elif response and response.get_redirect_location():
13:34:55             # Redirect retry?
13:34:55             if redirect is not None:
13:34:55                 redirect -= 1
13:34:55             cause = "too many redirects"
13:34:55             response_redirect_location = response.get_redirect_location()
13:34:55             if response_redirect_location:
13:34:55                 redirect_location = response_redirect_location
13:34:55             status = response.status
13:34:55 
13:34:55         else:
13:34:55             # Incrementing because of a server error like a 500 in
13:34:55             # status_forcelist and the given method is in the allowed_methods
13:34:55             cause = ResponseError.GENERIC_ERROR
13:34:55             if response and response.status:
13:34:55                 if status_count is not None:
13:34:55                     status_count -= 1
13:34:55                 cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status)
13:34:55                 status = response.status
13:34:55 
13:34:55         history = self.history + (
13:34:55             RequestHistory(method, url, error, status, redirect_location),
13:34:55         )
13:34:55 
13:34:55         new_retry = self.new(
13:34:55             total=total,
13:34:55             connect=connect,
13:34:55             read=read,
13:34:55             redirect=redirect,
13:34:55             status=status_count,
13:34:55             other=other,
13:34:55             history=history,
13:34:55         )
13:34:55 
13:34:55         if new_retry.is_exhausted():
13:34:55             reason = error or ResponseError(cause)
13:34:55 >           raise MaxRetryError(_pool, url, reason) from reason  # type: ignore[arg-type]
13:34:55             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
13:34:55 E           urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=8191): Max retries exceeded with url: /rests/data/transportpce-portmapping:network/nodes=XPDRA01/mapping=XPDR1-NETWORK1 (Caused by NewConnectionError("HTTPConnection(host='localhost', port=8191): Failed to establish a new connection: [Errno 111] Connection refused"))
13:34:55 
13:34:55 ../.tox/tests121/lib/python3.11/site-packages/urllib3/util/retry.py:535: MaxRetryError
13:34:55 
13:34:55 During handling of the above exception, another exception occurred:
13:34:55 
13:34:55 self = 
13:34:55 
13:34:55     def test_10_xpdr_portmapping_NETWORK1(self):
13:34:55 >       response = test_utils.get_portmapping_node_attr("XPDRA01", "mapping", "XPDR1-NETWORK1")
13:34:55                    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
13:34:55 
13:34:55 transportpce_tests/1.2.1/test01_portmapping.py:123: 
13:34:55 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
13:34:55 transportpce_tests/common/test_utils.py:519: in get_portmapping_node_attr
13:34:55     response = get_request(target_url)
13:34:55                ^^^^^^^^^^^^^^^^^^^^^^^
13:34:55 transportpce_tests/common/test_utils.py:117: in get_request
13:34:55     return requests.request(
13:34:55 ../.tox/tests121/lib/python3.11/site-packages/requests/api.py:59: in request
13:34:55     return session.request(method=method, url=url, **kwargs)
13:34:55            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
13:34:55 ../.tox/tests121/lib/python3.11/site-packages/requests/sessions.py:589: in request
13:34:55     resp = self.send(prep, **send_kwargs)
13:34:55            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
13:34:55 ../.tox/tests121/lib/python3.11/site-packages/requests/sessions.py:703: in send
13:34:55     r = adapter.send(request, **kwargs)
13:34:55         ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
13:34:55 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
13:34:55 
13:34:55 self = 
13:34:55 request = , stream = False
13:34:55 timeout = Timeout(connect=30, read=30, total=None), verify = True, cert = None
13:34:55 proxies = OrderedDict()
13:34:55 
13:34:55     def send(
13:34:55         self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None
13:34:55     ):
13:34:55         """Sends PreparedRequest object. Returns Response object.
13:34:55 
13:34:55         :param request: The :class:`PreparedRequest ` being sent.
13:34:55         :param stream: (optional) Whether to stream the request content.
13:34:55         :param timeout: (optional) How long to wait for the server to send
13:34:55             data before giving up, as a float, or a :ref:`(connect timeout,
13:34:55             read timeout) ` tuple.
13:34:55         :type timeout: float or tuple or urllib3 Timeout object
13:34:55         :param verify: (optional) Either a boolean, in which case it controls whether
13:34:55             we verify the server's TLS certificate, or a string, in which case it
13:34:55             must be a path to a CA bundle to use
13:34:55         :param cert: (optional) Any user-provided SSL certificate to be trusted.
13:34:55         :param proxies: (optional) The proxies dictionary to apply to the request.
13:34:55         :rtype: requests.Response
13:34:55         """
13:34:55 
13:34:55         try:
13:34:55             conn = self.get_connection_with_tls_context(
13:34:55                 request, verify, proxies=proxies, cert=cert
13:34:55             )
13:34:55         except LocationValueError as e:
13:34:55             raise InvalidURL(e, request=request)
13:34:55 
13:34:55         self.cert_verify(conn, request.url, verify, cert)
13:34:55         url = self.request_url(request, proxies)
13:34:55         self.add_headers(
13:34:55             request,
13:34:55             stream=stream,
13:34:55             timeout=timeout,
13:34:55             verify=verify,
13:34:55             cert=cert,
13:34:55             proxies=proxies,
13:34:55         )
13:34:55 
13:34:55         chunked = not (request.body is None or "Content-Length" in request.headers)
13:34:55 
13:34:55         if isinstance(timeout, tuple):
13:34:55             try:
13:34:55                 connect, read = timeout
13:34:55                 timeout = TimeoutSauce(connect=connect, read=read)
13:34:55             except ValueError:
13:34:55                 raise ValueError(
13:34:55                     f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, "
13:34:55                     f"or a single float to set both timeouts to the same value."
13:34:55                 )
13:34:55         elif isinstance(timeout, TimeoutSauce):
13:34:55             pass
13:34:55         else:
13:34:55             timeout = TimeoutSauce(connect=timeout, read=timeout)
13:34:55 
13:34:55         try:
13:34:55             resp = conn.urlopen(
13:34:55                 method=request.method,
13:34:55                 url=url,
13:34:55                 body=request.body,
13:34:55                 headers=request.headers,
13:34:55                 redirect=False,
13:34:55                 assert_same_host=False,
13:34:55                 preload_content=False,
13:34:55                 decode_content=False,
13:34:55                 retries=self.max_retries,
13:34:55                 timeout=timeout,
13:34:55                 chunked=chunked,
13:34:55             )
13:34:55 
13:34:55         except (ProtocolError, OSError) as err:
13:34:55             raise ConnectionError(err, request=request)
13:34:55 
13:34:55         except MaxRetryError as e:
13:34:55             if isinstance(e.reason, ConnectTimeoutError):
13:34:55                 # TODO: Remove this in 3.0.0: see #2811
13:34:55                 if not isinstance(e.reason, NewConnectionError):
13:34:55                     raise ConnectTimeout(e, request=request)
13:34:55 
13:34:55             if isinstance(e.reason, ResponseError):
13:34:55                 raise RetryError(e, request=request)
13:34:55 
13:34:55             if isinstance(e.reason, _ProxyError):
13:34:55                 raise ProxyError(e, request=request)
13:34:55 
13:34:55             if isinstance(e.reason, _SSLError):
13:34:55                 # This branch is for urllib3 v1.22 and later.
13:34:55                 raise SSLError(e, request=request)
13:34:55 
13:34:55 >           raise ConnectionError(e, request=request)
13:34:55 E           requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=8191): Max retries exceeded with url: /rests/data/transportpce-portmapping:network/nodes=XPDRA01/mapping=XPDR1-NETWORK1 (Caused by NewConnectionError("HTTPConnection(host='localhost', port=8191): Failed to establish a new connection: [Errno 111] Connection refused"))
13:34:55 
13:34:55 ../.tox/tests121/lib/python3.11/site-packages/requests/adapters.py:677: ConnectionError
13:34:55 ----------------------------- Captured stdout call -----------------------------
13:34:55 execution of test_10_xpdr_portmapping_NETWORK1
13:34:55 ________ TestTransportPCEPortmapping.test_11_xpdr_portmapping_NETWORK2 _________
13:34:55 
13:34:55 self = 
13:34:55 
13:34:55     def _new_conn(self) -> socket.socket:
13:34:55         """Establish a socket connection and set nodelay settings on it.
13:34:55 
13:34:55         :return: New socket connection.
13:34:55         """
13:34:55         try:
13:34:55 >           sock = connection.create_connection(
13:34:55                 (self._dns_host, self.port),
13:34:55                 self.timeout,
13:34:55                 source_address=self.source_address,
13:34:55                 socket_options=self.socket_options,
13:34:55             )
13:34:55 
13:34:55 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:204: 
13:34:55 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
13:34:55 ../.tox/tests121/lib/python3.11/site-packages/urllib3/util/connection.py:85: in create_connection
13:34:55     raise err
13:34:55 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
13:34:55 
13:34:55 address = ('localhost', 8191), timeout = 30, source_address = None
13:34:55 socket_options = [(6, 1, 1)]
13:34:55 
13:34:55     def create_connection(
13:34:55         address: tuple[str, int],
13:34:55         timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT,
13:34:55         source_address: tuple[str, int] | None = None,
13:34:55         socket_options: _TYPE_SOCKET_OPTIONS | None = None,
13:34:55     ) -> socket.socket:
13:34:55         """Connect to *address* and return the socket object.
13:34:55 
13:34:55         Convenience function. Connect to *address* (a 2-tuple ``(host,
13:34:55         port)``) and return the socket object. Passing the optional
13:34:55         *timeout* parameter will set the timeout on the socket instance
13:34:55         before attempting to connect. If no *timeout* is supplied, the
13:34:55         global default timeout setting returned by :func:`socket.getdefaulttimeout`
13:34:55         is used. If *source_address* is set it must be a tuple of (host, port)
13:34:55         for the socket to bind as a source address before making the connection.
13:34:55         An host of '' or port 0 tells the OS to use the default.
13:34:55         """
13:34:55 
13:34:55         host, port = address
13:34:55         if host.startswith("["):
13:34:55             host = host.strip("[]")
13:34:55         err = None
13:34:55 
13:34:55         # Using the value from allowed_gai_family() in the context of getaddrinfo lets
13:34:55         # us select whether to work with IPv4 DNS records, IPv6 records, or both.
13:34:55         # The original create_connection function always returns all records.
13:34:55         family = allowed_gai_family()
13:34:55 
13:34:55         try:
13:34:55             host.encode("idna")
13:34:55         except UnicodeError:
13:34:55             raise LocationParseError(f"'{host}', label empty or too long") from None
13:34:55 
13:34:55         for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM):
13:34:55             af, socktype, proto, canonname, sa = res
13:34:55             sock = None
13:34:55             try:
13:34:55                 sock = socket.socket(af, socktype, proto)
13:34:55 
13:34:55                 # If provided, set socket level options before connecting.
13:34:55                 _set_socket_options(sock, socket_options)
13:34:55 
13:34:55                 if timeout is not _DEFAULT_TIMEOUT:
13:34:55                     sock.settimeout(timeout)
13:34:55                 if source_address:
13:34:55                     sock.bind(source_address)
13:34:55 >               sock.connect(sa)
13:34:55 E               ConnectionRefusedError: [Errno 111] Connection refused
13:34:55 
13:34:55 ../.tox/tests121/lib/python3.11/site-packages/urllib3/util/connection.py:73: ConnectionRefusedError
13:34:55 
13:34:55 The above exception was the direct cause of the following exception:
13:34:55 
13:34:55 self = 
13:34:55 method = 'GET'
13:34:55 url = '/rests/data/transportpce-portmapping:network/nodes=XPDRA01/mapping=XPDR1-NETWORK2'
13:34:55 body = None
13:34:55 headers = {'User-Agent': 'python-requests/2.32.5', 'Accept-Encoding': 'gzip, deflate', 'Accept': 'application/json', 'Connection': 'keep-alive', 'Content-Type': 'application/json', 'Authorization': 'Basic YWRtaW46YWRtaW4='}
13:34:55 retries = Retry(total=0, connect=None, read=False, redirect=None, status=None)
13:34:55 redirect = False, assert_same_host = False
13:34:55 timeout = Timeout(connect=30, read=30, total=None), pool_timeout = None
13:34:55 release_conn = False, chunked = False, body_pos = None, preload_content = False
13:34:55 decode_content = False, response_kw = {}
13:34:55 parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/rests/data/transportpce-portmapping:network/nodes=XPDRA01/mapping=XPDR1-NETWORK2', query=None, fragment=None)
13:34:55 destination_scheme = None, conn = None, release_this_conn = True
13:34:55 http_tunnel_required = False, err = None, clean_exit = False
13:34:55 
13:34:55     def urlopen(  # type: ignore[override]
13:34:55         self,
13:34:55         method: str,
13:34:55         url: str,
13:34:55         body: _TYPE_BODY | None = None,
13:34:55         headers: typing.Mapping[str, str] | None = None,
13:34:55         retries: Retry | bool | int | None = None,
13:34:55         redirect: bool = True,
13:34:55         assert_same_host: bool = True,
13:34:55         timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT,
13:34:55         pool_timeout: int | None = None,
13:34:55         release_conn: bool | None = None,
13:34:55         chunked: bool = False,
13:34:55         body_pos: _TYPE_BODY_POSITION | None = None,
13:34:55         preload_content: bool = True,
13:34:55         decode_content: bool = True,
13:34:55         **response_kw: typing.Any,
13:34:55     ) -> BaseHTTPResponse:
13:34:55         """
13:34:55         Get a connection from the pool and perform an HTTP request. This is the
13:34:55         lowest level call for making a request, so you'll need to specify all
13:34:55         the raw details.
13:34:55 
13:34:55         .. note::
13:34:55 
13:34:55             More commonly, it's appropriate to use a convenience method
13:34:55             such as :meth:`request`.
13:34:55 
13:34:55         .. note::
13:34:55 
13:34:55             `release_conn` will only behave as expected if
13:34:55             `preload_content=False` because we want to make
13:34:55             `preload_content=False` the default behaviour someday soon without
13:34:55             breaking backwards compatibility.
13:34:55 
13:34:55         :param method:
13:34:55             HTTP request method (such as GET, POST, PUT, etc.)
13:34:55 
13:34:55         :param url:
13:34:55             The URL to perform the request on.
13:34:55 
13:34:55         :param body:
13:34:55             Data to send in the request body, either :class:`str`, :class:`bytes`,
13:34:55             an iterable of :class:`str`/:class:`bytes`, or a file-like object.
13:34:55 
13:34:55         :param headers:
13:34:55             Dictionary of custom headers to send, such as User-Agent,
13:34:55             If-None-Match, etc. If None, pool headers are used. If provided,
13:34:55             these headers completely replace any pool-specific headers.
13:34:55 
13:34:55         :param retries:
13:34:55             Configure the number of retries to allow before raising a
13:34:55             :class:`~urllib3.exceptions.MaxRetryError` exception.
13:34:55 
13:34:55             If ``None`` (default) will retry 3 times, see ``Retry.DEFAULT``. Pass a
13:34:55             :class:`~urllib3.util.retry.Retry` object for fine-grained control
13:34:55             over different types of retries.
13:34:55             Pass an integer number to retry connection errors that many times,
13:34:55             but no other types of errors. Pass zero to never retry.
13:34:55 
13:34:55             If ``False``, then retries are disabled and any exception is raised
13:34:55             immediately. Also, instead of raising a MaxRetryError on redirects,
13:34:55             the redirect response will be returned.
13:34:55 
13:34:55         :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int.
13:34:55 
13:34:55         :param redirect:
13:34:55             If True, automatically handle redirects (status codes 301, 302,
13:34:55             303, 307, 308). Each redirect counts as a retry. Disabling retries
13:34:55             will disable redirect, too.
13:34:55 
13:34:55         :param assert_same_host:
13:34:55             If ``True``, will make sure that the host of the pool requests is
13:34:55             consistent else will raise HostChangedError. When ``False``, you can
13:34:55             use the pool on an HTTP proxy and request foreign hosts.
13:34:55 
13:34:55         :param timeout:
13:34:55             If specified, overrides the default timeout for this one
13:34:55             request. It may be a float (in seconds) or an instance of
13:34:55             :class:`urllib3.util.Timeout`.
13:34:55 
13:34:55         :param pool_timeout:
13:34:55             If set and the pool is set to block=True, then this method will
13:34:55             block for ``pool_timeout`` seconds and raise EmptyPoolError if no
13:34:55             connection is available within the time period.
13:34:55 
13:34:55         :param bool preload_content:
13:34:55             If True, the response's body will be preloaded into memory.
13:34:55 
13:34:55         :param bool decode_content:
13:34:55             If True, will attempt to decode the body based on the
13:34:55             'content-encoding' header.
13:34:55 
13:34:55         :param release_conn:
13:34:55             If False, then the urlopen call will not release the connection
13:34:55             back into the pool once a response is received (but will release if
13:34:55             you read the entire contents of the response such as when
13:34:55             `preload_content=True`). This is useful if you're not preloading
13:34:55             the response's content immediately. You will need to call
13:34:55             ``r.release_conn()`` on the response ``r`` to return the connection
13:34:55             back into the pool. If None, it takes the value of ``preload_content``
13:34:55             which defaults to ``True``.
13:34:55 
13:34:55         :param bool chunked:
13:34:55             If True, urllib3 will send the body using chunked transfer
13:34:55             encoding. Otherwise, urllib3 will send the body using the standard
13:34:55             content-length form. Defaults to False.
13:34:55 
13:34:55         :param int body_pos:
13:34:55             Position to seek to in file-like body in the event of a retry or
13:34:55             redirect. Typically this won't need to be set because urllib3 will
13:34:55             auto-populate the value when needed.
13:34:55         """
13:34:55         parsed_url = parse_url(url)
13:34:55         destination_scheme = parsed_url.scheme
13:34:55 
13:34:55         if headers is None:
13:34:55             headers = self.headers
13:34:55 
13:34:55         if not isinstance(retries, Retry):
13:34:55             retries = Retry.from_int(retries, redirect=redirect, default=self.retries)
13:34:55 
13:34:55         if release_conn is None:
13:34:55             release_conn = preload_content
13:34:55 
13:34:55         # Check host
13:34:55         if assert_same_host and not self.is_same_host(url):
13:34:55             raise HostChangedError(self, url, retries)
13:34:55 
13:34:55         # Ensure that the URL we're connecting to is properly encoded
13:34:55         if url.startswith("/"):
13:34:55             url = to_str(_encode_target(url))
13:34:55         else:
13:34:55             url = to_str(parsed_url.url)
13:34:55 
13:34:55         conn = None
13:34:55 
13:34:55         # Track whether `conn` needs to be released before
13:34:55         # returning/raising/recursing. Update this variable if necessary, and
13:34:55         # leave `release_conn` constant throughout the function. That way, if
13:34:55         # the function recurses, the original value of `release_conn` will be
13:34:55         # passed down into the recursive call, and its value will be respected.
13:34:55         #
13:34:55         # See issue #651 [1] for details.
13:34:55         #
13:34:55         # [1] 
13:34:55         release_this_conn = release_conn
13:34:55 
13:34:55         http_tunnel_required = connection_requires_http_tunnel(
13:34:55             self.proxy, self.proxy_config, destination_scheme
13:34:55         )
13:34:55 
13:34:55         # Merge the proxy headers. Only done when not using HTTP CONNECT. We
13:34:55         # have to copy the headers dict so we can safely change it without those
13:34:55         # changes being reflected in anyone else's copy.
13:34:55         if not http_tunnel_required:
13:34:55             headers = headers.copy()  # type: ignore[attr-defined]
13:34:55             headers.update(self.proxy_headers)  # type: ignore[union-attr]
13:34:55 
13:34:55         # Must keep the exception bound to a separate variable or else Python 3
13:34:55         # complains about UnboundLocalError.
13:34:55         err = None
13:34:55 
13:34:55         # Keep track of whether we cleanly exited the except block. This
13:34:55         # ensures we do proper cleanup in finally.
13:34:55         clean_exit = False
13:34:55 
13:34:55         # Rewind body position, if needed. Record current position
13:34:55         # for future rewinds in the event of a redirect/retry.
13:34:55         body_pos = set_file_position(body, body_pos)
13:34:55 
13:34:55         try:
13:34:55             # Request a connection from the queue.
13:34:55             timeout_obj = self._get_timeout(timeout)
13:34:55             conn = self._get_conn(timeout=pool_timeout)
13:34:55 
13:34:55             conn.timeout = timeout_obj.connect_timeout  # type: ignore[assignment]
13:34:55 
13:34:55             # Is this a closed/new connection that requires CONNECT tunnelling?
13:34:55             if self.proxy is not None and http_tunnel_required and conn.is_closed:
13:34:55                 try:
13:34:55                     self._prepare_proxy(conn)
13:34:55                 except (BaseSSLError, OSError, SocketTimeout) as e:
13:34:55                     self._raise_timeout(
13:34:55                         err=e, url=self.proxy.url, timeout_value=conn.timeout
13:34:55                     )
13:34:55                     raise
13:34:55 
13:34:55             # If we're going to release the connection in ``finally:``, then
13:34:55             # the response doesn't need to know about the connection. Otherwise
13:34:55             # it will also try to release it and we'll have a double-release
13:34:55             # mess.
13:34:55             response_conn = conn if not release_conn else None
13:34:55 
13:34:55             # Make the request on the HTTPConnection object
13:34:55 >           response = self._make_request(
13:34:55                 conn,
13:34:55                 method,
13:34:55                 url,
13:34:55                 timeout=timeout_obj,
13:34:55                 body=body,
13:34:55                 headers=headers,
13:34:55                 chunked=chunked,
13:34:55                 retries=retries,
13:34:55                 response_conn=response_conn,
13:34:55                 preload_content=preload_content,
13:34:55                 decode_content=decode_content,
13:34:55                 **response_kw,
13:34:55             )
13:34:55 
13:34:55 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connectionpool.py:787: 
13:34:55 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
13:34:55 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connectionpool.py:493: in _make_request
13:34:55     conn.request(
13:34:55 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:500: in request
13:34:55     self.endheaders()
13:34:55 /opt/pyenv/versions/3.11.10/lib/python3.11/http/client.py:1298: in endheaders
13:34:55     self._send_output(message_body, encode_chunked=encode_chunked)
13:34:55 /opt/pyenv/versions/3.11.10/lib/python3.11/http/client.py:1058: in _send_output
13:34:55     self.send(msg)
13:34:55 /opt/pyenv/versions/3.11.10/lib/python3.11/http/client.py:996: in send
13:34:55     self.connect()
13:34:55 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:331: in connect
13:34:55     self.sock = self._new_conn()
13:34:55                 ^^^^^^^^^^^^^^^^
13:34:55 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
13:34:55 
13:34:55 self = 
13:34:55 
13:34:55     def _new_conn(self) -> socket.socket:
13:34:55         """Establish a socket connection and set nodelay settings on it.
13:34:55 
13:34:55         :return: New socket connection.
13:34:55         """
13:34:55         try:
13:34:55             sock = connection.create_connection(
13:34:55                 (self._dns_host, self.port),
13:34:55                 self.timeout,
13:34:55                 source_address=self.source_address,
13:34:55                 socket_options=self.socket_options,
13:34:55             )
13:34:55         except socket.gaierror as e:
13:34:55             raise NameResolutionError(self.host, self, e) from e
13:34:55         except SocketTimeout as e:
13:34:55             raise ConnectTimeoutError(
13:34:55                 self,
13:34:55                 f"Connection to {self.host} timed out. (connect timeout={self.timeout})",
13:34:55             ) from e
13:34:55 
13:34:55         except OSError as e:
13:34:55 >           raise NewConnectionError(
13:34:55                 self, f"Failed to establish a new connection: {e}"
13:34:55             ) from e
13:34:55 E           urllib3.exceptions.NewConnectionError: HTTPConnection(host='localhost', port=8191): Failed to establish a new connection: [Errno 111] Connection refused
13:34:55 
13:34:55 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:219: NewConnectionError
13:34:55 
13:34:55 The above exception was the direct cause of the following exception:
13:34:55 
13:34:55 self = 
13:34:55 request = , stream = False
13:34:55 timeout = Timeout(connect=30, read=30, total=None), verify = True, cert = None
13:34:55 proxies = OrderedDict()
13:34:55 
13:34:55     def send(
13:34:55         self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None
13:34:55     ):
13:34:55         """Sends PreparedRequest object. Returns Response object.
13:34:55 
13:34:55         :param request: The :class:`PreparedRequest ` being sent.
13:34:55         :param stream: (optional) Whether to stream the request content.
13:34:55         :param timeout: (optional) How long to wait for the server to send
13:34:55             data before giving up, as a float, or a :ref:`(connect timeout,
13:34:55             read timeout) ` tuple.
13:34:55         :type timeout: float or tuple or urllib3 Timeout object
13:34:55         :param verify: (optional) Either a boolean, in which case it controls whether
13:34:55             we verify the server's TLS certificate, or a string, in which case it
13:34:55             must be a path to a CA bundle to use
13:34:55         :param cert: (optional) Any user-provided SSL certificate to be trusted.
13:34:55         :param proxies: (optional) The proxies dictionary to apply to the request.
13:34:55         :rtype: requests.Response
13:34:55         """
13:34:55 
13:34:55         try:
13:34:55             conn = self.get_connection_with_tls_context(
13:34:55                 request, verify, proxies=proxies, cert=cert
13:34:55             )
13:34:55         except LocationValueError as e:
13:34:55             raise InvalidURL(e, request=request)
13:34:55 
13:34:55         self.cert_verify(conn, request.url, verify, cert)
13:34:55         url = self.request_url(request, proxies)
13:34:55         self.add_headers(
13:34:55             request,
13:34:55             stream=stream,
13:34:55             timeout=timeout,
13:34:55             verify=verify,
13:34:55             cert=cert,
13:34:55             proxies=proxies,
13:34:55         )
13:34:55 
13:34:55         chunked = not (request.body is None or "Content-Length" in request.headers)
13:34:55 
13:34:55         if isinstance(timeout, tuple):
13:34:55             try:
13:34:55                 connect, read = timeout
13:34:55                 timeout = TimeoutSauce(connect=connect, read=read)
13:34:55             except ValueError:
13:34:55                 raise ValueError(
13:34:55                     f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, "
13:34:55                     f"or a single float to set both timeouts to the same value."
13:34:55                 )
13:34:55         elif isinstance(timeout, TimeoutSauce):
13:34:55             pass
13:34:55         else:
13:34:55             timeout = TimeoutSauce(connect=timeout, read=timeout)
13:34:55 
13:34:55         try:
13:34:55 >           resp = conn.urlopen(
13:34:55                 method=request.method,
13:34:55                 url=url,
13:34:55                 body=request.body,
13:34:55                 headers=request.headers,
13:34:55                 redirect=False,
13:34:55                 assert_same_host=False,
13:34:55                 preload_content=False,
13:34:55                 decode_content=False,
13:34:55                 retries=self.max_retries,
13:34:55                 timeout=timeout,
13:34:55                 chunked=chunked,
13:34:55             )
13:34:55 
13:34:55 ../.tox/tests121/lib/python3.11/site-packages/requests/adapters.py:644: 
13:34:55 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
13:34:55 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connectionpool.py:841: in urlopen
13:34:55     retries = retries.increment(
13:34:55 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
13:34:55 
13:34:55 self = Retry(total=0, connect=None, read=False, redirect=None, status=None)
13:34:55 method = 'GET'
13:34:55 url = '/rests/data/transportpce-portmapping:network/nodes=XPDRA01/mapping=XPDR1-NETWORK2'
13:34:55 response = None
13:34:55 error = NewConnectionError("HTTPConnection(host='localhost', port=8191): Failed to establish a new connection: [Errno 111] Connection refused")
13:34:55 _pool = 
13:34:55 _stacktrace = 
13:34:55 
13:34:55     def increment(
13:34:55         self,
13:34:55         method: str | None = None,
13:34:55         url: str | None = None,
13:34:55         response: BaseHTTPResponse | None = None,
13:34:55         error: Exception | None = None,
13:34:55         _pool: ConnectionPool | None = None,
13:34:55         _stacktrace: TracebackType | None = None,
13:34:55     ) -> Self:
13:34:55         """Return a new Retry object with incremented retry counters.
13:34:55 
13:34:55         :param response: A response object, or None, if the server did not
13:34:55             return a response.
13:34:55         :type response: :class:`~urllib3.response.BaseHTTPResponse`
13:34:55         :param Exception error: An error encountered during the request, or
13:34:55             None if the response was received successfully.
13:34:55 
13:34:55         :return: A new ``Retry`` object.
13:34:55         """
13:34:55         if self.total is False and error:
13:34:55             # Disabled, indicate to re-raise the error.
13:34:55             raise reraise(type(error), error, _stacktrace)
13:34:55 
13:34:55         total = self.total
13:34:55         if total is not None:
13:34:55             total -= 1
13:34:55 
13:34:55         connect = self.connect
13:34:55         read = self.read
13:34:55         redirect = self.redirect
13:34:55         status_count = self.status
13:34:55         other = self.other
13:34:55         cause = "unknown"
13:34:55         status = None
13:34:55         redirect_location = None
13:34:55 
13:34:55         if error and self._is_connection_error(error):
13:34:55             # Connect retry?
13:34:55             if connect is False:
13:34:55                 raise reraise(type(error), error, _stacktrace)
13:34:55             elif connect is not None:
13:34:55                 connect -= 1
13:34:55 
13:34:55         elif error and self._is_read_error(error):
13:34:55             # Read retry?
13:34:55             if read is False or method is None or not self._is_method_retryable(method):
13:34:55                 raise reraise(type(error), error, _stacktrace)
13:34:55             elif read is not None:
13:34:55                 read -= 1
13:34:55 
13:34:55         elif error:
13:34:55             # Other retry?
13:34:55             if other is not None:
13:34:55                 other -= 1
13:34:55 
13:34:55         elif response and response.get_redirect_location():
13:34:55             # Redirect retry?
13:34:55 if redirect is not None: 13:34:55 redirect -= 1 13:34:55 cause = "too many redirects" 13:34:55 response_redirect_location = response.get_redirect_location() 13:34:55 if response_redirect_location: 13:34:55 redirect_location = response_redirect_location 13:34:55 status = response.status 13:34:55 13:34:55 else: 13:34:55 # Incrementing because of a server error like a 500 in 13:34:55 # status_forcelist and the given method is in the allowed_methods 13:34:55 cause = ResponseError.GENERIC_ERROR 13:34:55 if response and response.status: 13:34:55 if status_count is not None: 13:34:55 status_count -= 1 13:34:55 cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status) 13:34:55 status = response.status 13:34:55 13:34:55 history = self.history + ( 13:34:55 RequestHistory(method, url, error, status, redirect_location), 13:34:55 ) 13:34:55 13:34:55 new_retry = self.new( 13:34:55 total=total, 13:34:55 connect=connect, 13:34:55 read=read, 13:34:55 redirect=redirect, 13:34:55 status=status_count, 13:34:55 other=other, 13:34:55 history=history, 13:34:55 ) 13:34:55 13:34:55 if new_retry.is_exhausted(): 13:34:55 reason = error or ResponseError(cause) 13:34:55 > raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type] 13:34:55 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 13:34:55 E urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=8191): Max retries exceeded with url: /rests/data/transportpce-portmapping:network/nodes=XPDRA01/mapping=XPDR1-NETWORK2 (Caused by NewConnectionError("HTTPConnection(host='localhost', port=8191): Failed to establish a new connection: [Errno 111] Connection refused")) 13:34:55 13:34:55 ../.tox/tests121/lib/python3.11/site-packages/urllib3/util/retry.py:535: MaxRetryError 13:34:55 13:34:55 During handling of the above exception, another exception occurred: 13:34:55 13:34:55 self = 13:34:55 13:34:55 def test_11_xpdr_portmapping_NETWORK2(self): 13:34:55 > response = 
test_utils.get_portmapping_node_attr("XPDRA01", "mapping", "XPDR1-NETWORK2") 13:34:55 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 13:34:55 13:34:55 transportpce_tests/1.2.1/test01_portmapping.py:135: 13:34:55 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 13:34:55 transportpce_tests/common/test_utils.py:519: in get_portmapping_node_attr 13:34:55 response = get_request(target_url) 13:34:55 ^^^^^^^^^^^^^^^^^^^^^^^ 13:34:55 transportpce_tests/common/test_utils.py:117: in get_request 13:34:55 return requests.request( 13:34:55 ../.tox/tests121/lib/python3.11/site-packages/requests/api.py:59: in request 13:34:55 return session.request(method=method, url=url, **kwargs) 13:34:55 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 13:34:55 ../.tox/tests121/lib/python3.11/site-packages/requests/sessions.py:589: in request 13:34:55 resp = self.send(prep, **send_kwargs) 13:34:55 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 13:34:55 ../.tox/tests121/lib/python3.11/site-packages/requests/sessions.py:703: in send 13:34:55 r = adapter.send(request, **kwargs) 13:34:55 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 13:34:55 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 13:34:55 13:34:55 self = 13:34:55 request = , stream = False 13:34:55 timeout = Timeout(connect=30, read=30, total=None), verify = True, cert = None 13:34:55 proxies = OrderedDict() 13:34:55 13:34:55 def send( 13:34:55 self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 13:34:55 ): 13:34:55 """Sends PreparedRequest object. Returns Response object. 13:34:55 13:34:55 :param request: The :class:`PreparedRequest ` being sent. 13:34:55 :param stream: (optional) Whether to stream the request content. 13:34:55 :param timeout: (optional) How long to wait for the server to send 13:34:55 data before giving up, as a float, or a :ref:`(connect timeout, 13:34:55 read timeout) ` tuple. 
13:34:55 :type timeout: float or tuple or urllib3 Timeout object 13:34:55 :param verify: (optional) Either a boolean, in which case it controls whether 13:34:55 we verify the server's TLS certificate, or a string, in which case it 13:34:55 must be a path to a CA bundle to use 13:34:55 :param cert: (optional) Any user-provided SSL certificate to be trusted. 13:34:55 :param proxies: (optional) The proxies dictionary to apply to the request. 13:34:55 :rtype: requests.Response 13:34:55 """ 13:34:55 13:34:55 try: 13:34:55 conn = self.get_connection_with_tls_context( 13:34:55 request, verify, proxies=proxies, cert=cert 13:34:55 ) 13:34:55 except LocationValueError as e: 13:34:55 raise InvalidURL(e, request=request) 13:34:55 13:34:55 self.cert_verify(conn, request.url, verify, cert) 13:34:55 url = self.request_url(request, proxies) 13:34:55 self.add_headers( 13:34:55 request, 13:34:55 stream=stream, 13:34:55 timeout=timeout, 13:34:55 verify=verify, 13:34:55 cert=cert, 13:34:55 proxies=proxies, 13:34:55 ) 13:34:55 13:34:55 chunked = not (request.body is None or "Content-Length" in request.headers) 13:34:55 13:34:55 if isinstance(timeout, tuple): 13:34:55 try: 13:34:55 connect, read = timeout 13:34:55 timeout = TimeoutSauce(connect=connect, read=read) 13:34:55 except ValueError: 13:34:55 raise ValueError( 13:34:55 f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 13:34:55 f"or a single float to set both timeouts to the same value." 
13:34:55 ) 13:34:55 elif isinstance(timeout, TimeoutSauce): 13:34:55 pass 13:34:55 else: 13:34:55 timeout = TimeoutSauce(connect=timeout, read=timeout) 13:34:55 13:34:55 try: 13:34:55 resp = conn.urlopen( 13:34:55 method=request.method, 13:34:55 url=url, 13:34:55 body=request.body, 13:34:55 headers=request.headers, 13:34:55 redirect=False, 13:34:55 assert_same_host=False, 13:34:55 preload_content=False, 13:34:55 decode_content=False, 13:34:55 retries=self.max_retries, 13:34:55 timeout=timeout, 13:34:55 chunked=chunked, 13:34:55 ) 13:34:55 13:34:55 except (ProtocolError, OSError) as err: 13:34:55 raise ConnectionError(err, request=request) 13:34:55 13:34:55 except MaxRetryError as e: 13:34:55 if isinstance(e.reason, ConnectTimeoutError): 13:34:55 # TODO: Remove this in 3.0.0: see #2811 13:34:55 if not isinstance(e.reason, NewConnectionError): 13:34:55 raise ConnectTimeout(e, request=request) 13:34:55 13:34:55 if isinstance(e.reason, ResponseError): 13:34:55 raise RetryError(e, request=request) 13:34:55 13:34:55 if isinstance(e.reason, _ProxyError): 13:34:55 raise ProxyError(e, request=request) 13:34:55 13:34:55 if isinstance(e.reason, _SSLError): 13:34:55 # This branch is for urllib3 v1.22 and later. 
13:34:55 raise SSLError(e, request=request) 13:34:55 13:34:55 > raise ConnectionError(e, request=request) 13:34:55 E requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=8191): Max retries exceeded with url: /rests/data/transportpce-portmapping:network/nodes=XPDRA01/mapping=XPDR1-NETWORK2 (Caused by NewConnectionError("HTTPConnection(host='localhost', port=8191): Failed to establish a new connection: [Errno 111] Connection refused")) 13:34:55 13:34:55 ../.tox/tests121/lib/python3.11/site-packages/requests/adapters.py:677: ConnectionError 13:34:55 ----------------------------- Captured stdout call ----------------------------- 13:34:55 execution of test_11_xpdr_portmapping_NETWORK2 13:34:55 _________ TestTransportPCEPortmapping.test_12_xpdr_portmapping_CLIENT1 _________ 13:34:55 13:34:55 self = 13:34:55 13:34:55 def _new_conn(self) -> socket.socket: 13:34:55 """Establish a socket connection and set nodelay settings on it. 13:34:55 13:34:55 :return: New socket connection. 
13:34:55 """ 13:34:55 try: 13:34:55 > sock = connection.create_connection( 13:34:55 (self._dns_host, self.port), 13:34:55 self.timeout, 13:34:55 source_address=self.source_address, 13:34:55 socket_options=self.socket_options, 13:34:55 ) 13:34:55 13:34:55 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:204: 13:34:55 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 13:34:55 ../.tox/tests121/lib/python3.11/site-packages/urllib3/util/connection.py:85: in create_connection 13:34:55 raise err 13:34:55 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 13:34:55 13:34:55 address = ('localhost', 8191), timeout = 30, source_address = None 13:34:55 socket_options = [(6, 1, 1)] 13:34:55 13:34:55 def create_connection( 13:34:55 address: tuple[str, int], 13:34:55 timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 13:34:55 source_address: tuple[str, int] | None = None, 13:34:55 socket_options: _TYPE_SOCKET_OPTIONS | None = None, 13:34:55 ) -> socket.socket: 13:34:55 """Connect to *address* and return the socket object. 13:34:55 13:34:55 Convenience function. Connect to *address* (a 2-tuple ``(host, 13:34:55 port)``) and return the socket object. Passing the optional 13:34:55 *timeout* parameter will set the timeout on the socket instance 13:34:55 before attempting to connect. If no *timeout* is supplied, the 13:34:55 global default timeout setting returned by :func:`socket.getdefaulttimeout` 13:34:55 is used. If *source_address* is set it must be a tuple of (host, port) 13:34:55 for the socket to bind as a source address before making the connection. 13:34:55 An host of '' or port 0 tells the OS to use the default. 
13:34:55 """ 13:34:55 13:34:55 host, port = address 13:34:55 if host.startswith("["): 13:34:55 host = host.strip("[]") 13:34:55 err = None 13:34:55 13:34:55 # Using the value from allowed_gai_family() in the context of getaddrinfo lets 13:34:55 # us select whether to work with IPv4 DNS records, IPv6 records, or both. 13:34:55 # The original create_connection function always returns all records. 13:34:55 family = allowed_gai_family() 13:34:55 13:34:55 try: 13:34:55 host.encode("idna") 13:34:55 except UnicodeError: 13:34:55 raise LocationParseError(f"'{host}', label empty or too long") from None 13:34:55 13:34:55 for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM): 13:34:55 af, socktype, proto, canonname, sa = res 13:34:55 sock = None 13:34:55 try: 13:34:55 sock = socket.socket(af, socktype, proto) 13:34:55 13:34:55 # If provided, set socket level options before connecting. 13:34:55 _set_socket_options(sock, socket_options) 13:34:55 13:34:55 if timeout is not _DEFAULT_TIMEOUT: 13:34:55 sock.settimeout(timeout) 13:34:55 if source_address: 13:34:55 sock.bind(source_address) 13:34:55 > sock.connect(sa) 13:34:55 E ConnectionRefusedError: [Errno 111] Connection refused 13:34:55 13:34:55 ../.tox/tests121/lib/python3.11/site-packages/urllib3/util/connection.py:73: ConnectionRefusedError 13:34:55 13:34:55 The above exception was the direct cause of the following exception: 13:34:55 13:34:55 self = 13:34:55 method = 'GET' 13:34:55 url = '/rests/data/transportpce-portmapping:network/nodes=XPDRA01/mapping=XPDR1-CLIENT1' 13:34:55 body = None 13:34:55 headers = {'User-Agent': 'python-requests/2.32.5', 'Accept-Encoding': 'gzip, deflate', 'Accept': 'application/json', 'Connection': 'keep-alive', 'Content-Type': 'application/json', 'Authorization': 'Basic YWRtaW46YWRtaW4='} 13:34:55 retries = Retry(total=0, connect=None, read=False, redirect=None, status=None) 13:34:55 redirect = False, assert_same_host = False 13:34:55 timeout = Timeout(connect=30, read=30, 
total=None), pool_timeout = None 13:34:55 release_conn = False, chunked = False, body_pos = None, preload_content = False 13:34:55 decode_content = False, response_kw = {} 13:34:55 parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/rests/data/transportpce-portmapping:network/nodes=XPDRA01/mapping=XPDR1-CLIENT1', query=None, fragment=None) 13:34:55 destination_scheme = None, conn = None, release_this_conn = True 13:34:55 http_tunnel_required = False, err = None, clean_exit = False 13:34:55 13:34:55 def urlopen( # type: ignore[override] 13:34:55 self, 13:34:55 method: str, 13:34:55 url: str, 13:34:55 body: _TYPE_BODY | None = None, 13:34:55 headers: typing.Mapping[str, str] | None = None, 13:34:55 retries: Retry | bool | int | None = None, 13:34:55 redirect: bool = True, 13:34:55 assert_same_host: bool = True, 13:34:55 timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 13:34:55 pool_timeout: int | None = None, 13:34:55 release_conn: bool | None = None, 13:34:55 chunked: bool = False, 13:34:55 body_pos: _TYPE_BODY_POSITION | None = None, 13:34:55 preload_content: bool = True, 13:34:55 decode_content: bool = True, 13:34:55 **response_kw: typing.Any, 13:34:55 ) -> BaseHTTPResponse: 13:34:55 """ 13:34:55 Get a connection from the pool and perform an HTTP request. This is the 13:34:55 lowest level call for making a request, so you'll need to specify all 13:34:55 the raw details. 13:34:55 13:34:55 .. note:: 13:34:55 13:34:55 More commonly, it's appropriate to use a convenience method 13:34:55 such as :meth:`request`. 13:34:55 13:34:55 .. note:: 13:34:55 13:34:55 `release_conn` will only behave as expected if 13:34:55 `preload_content=False` because we want to make 13:34:55 `preload_content=False` the default behaviour someday soon without 13:34:55 breaking backwards compatibility. 13:34:55 13:34:55 :param method: 13:34:55 HTTP request method (such as GET, POST, PUT, etc.) 13:34:55 13:34:55 :param url: 13:34:55 The URL to perform the request on. 
13:34:55 13:34:55 :param body: 13:34:55 Data to send in the request body, either :class:`str`, :class:`bytes`, 13:34:55 an iterable of :class:`str`/:class:`bytes`, or a file-like object. 13:34:55 13:34:55 :param headers: 13:34:55 Dictionary of custom headers to send, such as User-Agent, 13:34:55 If-None-Match, etc. If None, pool headers are used. If provided, 13:34:55 these headers completely replace any pool-specific headers. 13:34:55 13:34:55 :param retries: 13:34:55 Configure the number of retries to allow before raising a 13:34:55 :class:`~urllib3.exceptions.MaxRetryError` exception. 13:34:55 13:34:55 If ``None`` (default) will retry 3 times, see ``Retry.DEFAULT``. Pass a 13:34:55 :class:`~urllib3.util.retry.Retry` object for fine-grained control 13:34:55 over different types of retries. 13:34:55 Pass an integer number to retry connection errors that many times, 13:34:55 but no other types of errors. Pass zero to never retry. 13:34:55 13:34:55 If ``False``, then retries are disabled and any exception is raised 13:34:55 immediately. Also, instead of raising a MaxRetryError on redirects, 13:34:55 the redirect response will be returned. 13:34:55 13:34:55 :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. 13:34:55 13:34:55 :param redirect: 13:34:55 If True, automatically handle redirects (status codes 301, 302, 13:34:55 303, 307, 308). Each redirect counts as a retry. Disabling retries 13:34:55 will disable redirect, too. 13:34:55 13:34:55 :param assert_same_host: 13:34:55 If ``True``, will make sure that the host of the pool requests is 13:34:55 consistent else will raise HostChangedError. When ``False``, you can 13:34:55 use the pool on an HTTP proxy and request foreign hosts. 13:34:55 13:34:55 :param timeout: 13:34:55 If specified, overrides the default timeout for this one 13:34:55 request. It may be a float (in seconds) or an instance of 13:34:55 :class:`urllib3.util.Timeout`. 
13:34:55 13:34:55 :param pool_timeout: 13:34:55 If set and the pool is set to block=True, then this method will 13:34:55 block for ``pool_timeout`` seconds and raise EmptyPoolError if no 13:34:55 connection is available within the time period. 13:34:55 13:34:55 :param bool preload_content: 13:34:55 If True, the response's body will be preloaded into memory. 13:34:55 13:34:55 :param bool decode_content: 13:34:55 If True, will attempt to decode the body based on the 13:34:55 'content-encoding' header. 13:34:55 13:34:55 :param release_conn: 13:34:55 If False, then the urlopen call will not release the connection 13:34:55 back into the pool once a response is received (but will release if 13:34:55 you read the entire contents of the response such as when 13:34:55 `preload_content=True`). This is useful if you're not preloading 13:34:55 the response's content immediately. You will need to call 13:34:55 ``r.release_conn()`` on the response ``r`` to return the connection 13:34:55 back into the pool. If None, it takes the value of ``preload_content`` 13:34:55 which defaults to ``True``. 13:34:55 13:34:55 :param bool chunked: 13:34:55 If True, urllib3 will send the body using chunked transfer 13:34:55 encoding. Otherwise, urllib3 will send the body using the standard 13:34:55 content-length form. Defaults to False. 13:34:55 13:34:55 :param int body_pos: 13:34:55 Position to seek to in file-like body in the event of a retry or 13:34:55 redirect. Typically this won't need to be set because urllib3 will 13:34:55 auto-populate the value when needed. 
13:34:55 """ 13:34:55 parsed_url = parse_url(url) 13:34:55 destination_scheme = parsed_url.scheme 13:34:55 13:34:55 if headers is None: 13:34:55 headers = self.headers 13:34:55 13:34:55 if not isinstance(retries, Retry): 13:34:55 retries = Retry.from_int(retries, redirect=redirect, default=self.retries) 13:34:55 13:34:55 if release_conn is None: 13:34:55 release_conn = preload_content 13:34:55 13:34:55 # Check host 13:34:55 if assert_same_host and not self.is_same_host(url): 13:34:55 raise HostChangedError(self, url, retries) 13:34:55 13:34:55 # Ensure that the URL we're connecting to is properly encoded 13:34:55 if url.startswith("/"): 13:34:55 url = to_str(_encode_target(url)) 13:34:55 else: 13:34:55 url = to_str(parsed_url.url) 13:34:55 13:34:55 conn = None 13:34:55 13:34:55 # Track whether `conn` needs to be released before 13:34:55 # returning/raising/recursing. Update this variable if necessary, and 13:34:55 # leave `release_conn` constant throughout the function. That way, if 13:34:55 # the function recurses, the original value of `release_conn` will be 13:34:55 # passed down into the recursive call, and its value will be respected. 13:34:55 # 13:34:55 # See issue #651 [1] for details. 13:34:55 # 13:34:55 # [1] 13:34:55 release_this_conn = release_conn 13:34:55 13:34:55 http_tunnel_required = connection_requires_http_tunnel( 13:34:55 self.proxy, self.proxy_config, destination_scheme 13:34:55 ) 13:34:55 13:34:55 # Merge the proxy headers. Only done when not using HTTP CONNECT. We 13:34:55 # have to copy the headers dict so we can safely change it without those 13:34:55 # changes being reflected in anyone else's copy. 13:34:55 if not http_tunnel_required: 13:34:55 headers = headers.copy() # type: ignore[attr-defined] 13:34:55 headers.update(self.proxy_headers) # type: ignore[union-attr] 13:34:55 13:34:55 # Must keep the exception bound to a separate variable or else Python 3 13:34:55 # complains about UnboundLocalError. 
13:34:55 err = None 13:34:55 13:34:55 # Keep track of whether we cleanly exited the except block. This 13:34:55 # ensures we do proper cleanup in finally. 13:34:55 clean_exit = False 13:34:55 13:34:55 # Rewind body position, if needed. Record current position 13:34:55 # for future rewinds in the event of a redirect/retry. 13:34:55 body_pos = set_file_position(body, body_pos) 13:34:55 13:34:55 try: 13:34:55 # Request a connection from the queue. 13:34:55 timeout_obj = self._get_timeout(timeout) 13:34:55 conn = self._get_conn(timeout=pool_timeout) 13:34:55 13:34:55 conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] 13:34:55 13:34:55 # Is this a closed/new connection that requires CONNECT tunnelling? 13:34:55 if self.proxy is not None and http_tunnel_required and conn.is_closed: 13:34:55 try: 13:34:55 self._prepare_proxy(conn) 13:34:55 except (BaseSSLError, OSError, SocketTimeout) as e: 13:34:55 self._raise_timeout( 13:34:55 err=e, url=self.proxy.url, timeout_value=conn.timeout 13:34:55 ) 13:34:55 raise 13:34:55 13:34:55 # If we're going to release the connection in ``finally:``, then 13:34:55 # the response doesn't need to know about the connection. Otherwise 13:34:55 # it will also try to release it and we'll have a double-release 13:34:55 # mess. 
13:34:55 response_conn = conn if not release_conn else None 13:34:55 13:34:55 # Make the request on the HTTPConnection object 13:34:55 > response = self._make_request( 13:34:55 conn, 13:34:55 method, 13:34:55 url, 13:34:55 timeout=timeout_obj, 13:34:55 body=body, 13:34:55 headers=headers, 13:34:55 chunked=chunked, 13:34:55 retries=retries, 13:34:55 response_conn=response_conn, 13:34:55 preload_content=preload_content, 13:34:55 decode_content=decode_content, 13:34:55 **response_kw, 13:34:55 ) 13:34:55 13:34:55 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connectionpool.py:787: 13:34:55 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 13:34:55 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connectionpool.py:493: in _make_request 13:34:55 conn.request( 13:34:55 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:500: in request 13:34:55 self.endheaders() 13:34:55 /opt/pyenv/versions/3.11.10/lib/python3.11/http/client.py:1298: in endheaders 13:34:55 self._send_output(message_body, encode_chunked=encode_chunked) 13:34:55 /opt/pyenv/versions/3.11.10/lib/python3.11/http/client.py:1058: in _send_output 13:34:55 self.send(msg) 13:34:55 /opt/pyenv/versions/3.11.10/lib/python3.11/http/client.py:996: in send 13:34:55 self.connect() 13:34:55 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:331: in connect 13:34:55 self.sock = self._new_conn() 13:34:55 ^^^^^^^^^^^^^^^^ 13:34:55 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 13:34:55 13:34:55 self = 13:34:55 13:34:55 def _new_conn(self) -> socket.socket: 13:34:55 """Establish a socket connection and set nodelay settings on it. 13:34:55 13:34:55 :return: New socket connection. 
13:34:55 """ 13:34:55 try: 13:34:55 sock = connection.create_connection( 13:34:55 (self._dns_host, self.port), 13:34:55 self.timeout, 13:34:55 source_address=self.source_address, 13:34:55 socket_options=self.socket_options, 13:34:55 ) 13:34:55 except socket.gaierror as e: 13:34:55 raise NameResolutionError(self.host, self, e) from e 13:34:55 except SocketTimeout as e: 13:34:55 raise ConnectTimeoutError( 13:34:55 self, 13:34:55 f"Connection to {self.host} timed out. (connect timeout={self.timeout})", 13:34:55 ) from e 13:34:55 13:34:55 except OSError as e: 13:34:55 > raise NewConnectionError( 13:34:55 self, f"Failed to establish a new connection: {e}" 13:34:55 ) from e 13:34:55 E urllib3.exceptions.NewConnectionError: HTTPConnection(host='localhost', port=8191): Failed to establish a new connection: [Errno 111] Connection refused 13:34:55 13:34:55 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:219: NewConnectionError 13:34:55 13:34:55 The above exception was the direct cause of the following exception: 13:34:55 13:34:55 self = 13:34:55 request = , stream = False 13:34:55 timeout = Timeout(connect=30, read=30, total=None), verify = True, cert = None 13:34:55 proxies = OrderedDict() 13:34:55 13:34:55 def send( 13:34:55 self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 13:34:55 ): 13:34:55 """Sends PreparedRequest object. Returns Response object. 13:34:55 13:34:55 :param request: The :class:`PreparedRequest ` being sent. 13:34:55 :param stream: (optional) Whether to stream the request content. 13:34:55 :param timeout: (optional) How long to wait for the server to send 13:34:55 data before giving up, as a float, or a :ref:`(connect timeout, 13:34:55 read timeout) ` tuple. 
13:34:55 :type timeout: float or tuple or urllib3 Timeout object 13:34:55 :param verify: (optional) Either a boolean, in which case it controls whether 13:34:55 we verify the server's TLS certificate, or a string, in which case it 13:34:55 must be a path to a CA bundle to use 13:34:55 :param cert: (optional) Any user-provided SSL certificate to be trusted. 13:34:55 :param proxies: (optional) The proxies dictionary to apply to the request. 13:34:55 :rtype: requests.Response 13:34:55 """ 13:34:55 13:34:55 try: 13:34:55 conn = self.get_connection_with_tls_context( 13:34:55 request, verify, proxies=proxies, cert=cert 13:34:55 ) 13:34:55 except LocationValueError as e: 13:34:55 raise InvalidURL(e, request=request) 13:34:55 13:34:55 self.cert_verify(conn, request.url, verify, cert) 13:34:55 url = self.request_url(request, proxies) 13:34:55 self.add_headers( 13:34:55 request, 13:34:55 stream=stream, 13:34:55 timeout=timeout, 13:34:55 verify=verify, 13:34:55 cert=cert, 13:34:55 proxies=proxies, 13:34:55 ) 13:34:55 13:34:55 chunked = not (request.body is None or "Content-Length" in request.headers) 13:34:55 13:34:55 if isinstance(timeout, tuple): 13:34:55 try: 13:34:55 connect, read = timeout 13:34:55 timeout = TimeoutSauce(connect=connect, read=read) 13:34:55 except ValueError: 13:34:55 raise ValueError( 13:34:55 f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 13:34:55 f"or a single float to set both timeouts to the same value." 
13:34:55 ) 13:34:55 elif isinstance(timeout, TimeoutSauce): 13:34:55 pass 13:34:55 else: 13:34:55 timeout = TimeoutSauce(connect=timeout, read=timeout) 13:34:55 13:34:55 try: 13:34:55 > resp = conn.urlopen( 13:34:55 method=request.method, 13:34:55 url=url, 13:34:55 body=request.body, 13:34:55 headers=request.headers, 13:34:55 redirect=False, 13:34:55 assert_same_host=False, 13:34:55 preload_content=False, 13:34:55 decode_content=False, 13:34:55 retries=self.max_retries, 13:34:55 timeout=timeout, 13:34:55 chunked=chunked, 13:34:55 ) 13:34:55 13:34:55 ../.tox/tests121/lib/python3.11/site-packages/requests/adapters.py:644: 13:34:55 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 13:34:55 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connectionpool.py:841: in urlopen 13:34:55 retries = retries.increment( 13:34:55 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 13:34:55 13:34:55 self = Retry(total=0, connect=None, read=False, redirect=None, status=None) 13:34:55 method = 'GET' 13:34:55 url = '/rests/data/transportpce-portmapping:network/nodes=XPDRA01/mapping=XPDR1-CLIENT1' 13:34:55 response = None 13:34:55 error = NewConnectionError("HTTPConnection(host='localhost', port=8191): Failed to establish a new connection: [Errno 111] Connection refused") 13:34:55 _pool = 13:34:55 _stacktrace = 13:34:55 13:34:55 def increment( 13:34:55 self, 13:34:55 method: str | None = None, 13:34:55 url: str | None = None, 13:34:55 response: BaseHTTPResponse | None = None, 13:34:55 error: Exception | None = None, 13:34:55 _pool: ConnectionPool | None = None, 13:34:55 _stacktrace: TracebackType | None = None, 13:34:55 ) -> Self: 13:34:55 """Return a new Retry object with incremented retry counters. 13:34:55 13:34:55 :param response: A response object, or None, if the server did not 13:34:55 return a response. 
13:34:55         :type response: :class:`~urllib3.response.BaseHTTPResponse`
13:34:55         :param Exception error: An error encountered during the request, or
13:34:55             None if the response was received successfully.
13:34:55 
13:34:55         :return: A new ``Retry`` object.
13:34:55         """
13:34:55         if self.total is False and error:
13:34:55             # Disabled, indicate to re-raise the error.
13:34:55             raise reraise(type(error), error, _stacktrace)
13:34:55 
13:34:55         total = self.total
13:34:55         if total is not None:
13:34:55             total -= 1
13:34:55 
13:34:55         connect = self.connect
13:34:55         read = self.read
13:34:55         redirect = self.redirect
13:34:55         status_count = self.status
13:34:55         other = self.other
13:34:55         cause = "unknown"
13:34:55         status = None
13:34:55         redirect_location = None
13:34:55 
13:34:55         if error and self._is_connection_error(error):
13:34:55             # Connect retry?
13:34:55             if connect is False:
13:34:55                 raise reraise(type(error), error, _stacktrace)
13:34:55             elif connect is not None:
13:34:55                 connect -= 1
13:34:55 
13:34:55         elif error and self._is_read_error(error):
13:34:55             # Read retry?
13:34:55             if read is False or method is None or not self._is_method_retryable(method):
13:34:55                 raise reraise(type(error), error, _stacktrace)
13:34:55             elif read is not None:
13:34:55                 read -= 1
13:34:55 
13:34:55         elif error:
13:34:55             # Other retry?
13:34:55             if other is not None:
13:34:55                 other -= 1
13:34:55 
13:34:55         elif response and response.get_redirect_location():
13:34:55             # Redirect retry?
13:34:55             if redirect is not None:
13:34:55                 redirect -= 1
13:34:55             cause = "too many redirects"
13:34:55             response_redirect_location = response.get_redirect_location()
13:34:55             if response_redirect_location:
13:34:55                 redirect_location = response_redirect_location
13:34:55             status = response.status
13:34:55 
13:34:55         else:
13:34:55             # Incrementing because of a server error like a 500 in
13:34:55             # status_forcelist and the given method is in the allowed_methods
13:34:55             cause = ResponseError.GENERIC_ERROR
13:34:55             if response and response.status:
13:34:55                 if status_count is not None:
13:34:55                     status_count -= 1
13:34:55                 cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status)
13:34:55                 status = response.status
13:34:55 
13:34:55         history = self.history + (
13:34:55             RequestHistory(method, url, error, status, redirect_location),
13:34:55         )
13:34:55 
13:34:55         new_retry = self.new(
13:34:55             total=total,
13:34:55             connect=connect,
13:34:55             read=read,
13:34:55             redirect=redirect,
13:34:55             status=status_count,
13:34:55             other=other,
13:34:55             history=history,
13:34:55         )
13:34:55 
13:34:55         if new_retry.is_exhausted():
13:34:55             reason = error or ResponseError(cause)
13:34:55 >           raise MaxRetryError(_pool, url, reason) from reason  # type: ignore[arg-type]
13:34:55             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
13:34:55 E           urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=8191): Max retries exceeded with url: /rests/data/transportpce-portmapping:network/nodes=XPDRA01/mapping=XPDR1-CLIENT1 (Caused by NewConnectionError("HTTPConnection(host='localhost', port=8191): Failed to establish a new connection: [Errno 111] Connection refused"))
13:34:55 
13:34:55 ../.tox/tests121/lib/python3.11/site-packages/urllib3/util/retry.py:535: MaxRetryError
13:34:55 
13:34:55 During handling of the above exception, another exception occurred:
13:34:55 
13:34:55 self = 
13:34:55 
13:34:55     def test_12_xpdr_portmapping_CLIENT1(self):
13:34:55 >       response = test_utils.get_portmapping_node_attr("XPDRA01", "mapping", "XPDR1-CLIENT1")
13:34:55                  ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
13:34:55 
13:34:55 transportpce_tests/1.2.1/test01_portmapping.py:147: 
13:34:55 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
13:34:55 transportpce_tests/common/test_utils.py:519: in get_portmapping_node_attr
13:34:55     response = get_request(target_url)
13:34:55                ^^^^^^^^^^^^^^^^^^^^^^^
13:34:55 transportpce_tests/common/test_utils.py:117: in get_request
13:34:55     return requests.request(
13:34:55 ../.tox/tests121/lib/python3.11/site-packages/requests/api.py:59: in request
13:34:55     return session.request(method=method, url=url, **kwargs)
13:34:55            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
13:34:55 ../.tox/tests121/lib/python3.11/site-packages/requests/sessions.py:589: in request
13:34:55     resp = self.send(prep, **send_kwargs)
13:34:55            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
13:34:55 ../.tox/tests121/lib/python3.11/site-packages/requests/sessions.py:703: in send
13:34:55     r = adapter.send(request, **kwargs)
13:34:55         ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
13:34:55 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
13:34:55 
13:34:55 self = 
13:34:55 request = , stream = False
13:34:55 timeout = Timeout(connect=30, read=30, total=None), verify = True, cert = None
13:34:55 proxies = OrderedDict()
13:34:55 
13:34:55     def send(
13:34:55         self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None
13:34:55     ):
13:34:55         """Sends PreparedRequest object. Returns Response object.
13:34:55 
13:34:55         :param request: The :class:`PreparedRequest ` being sent.
13:34:55         :param stream: (optional) Whether to stream the request content.
13:34:55         :param timeout: (optional) How long to wait for the server to send
13:34:55             data before giving up, as a float, or a :ref:`(connect timeout,
13:34:55             read timeout) ` tuple.
13:34:55         :type timeout: float or tuple or urllib3 Timeout object
13:34:55         :param verify: (optional) Either a boolean, in which case it controls whether
13:34:55             we verify the server's TLS certificate, or a string, in which case it
13:34:55             must be a path to a CA bundle to use
13:34:55         :param cert: (optional) Any user-provided SSL certificate to be trusted.
13:34:55         :param proxies: (optional) The proxies dictionary to apply to the request.
13:34:55         :rtype: requests.Response
13:34:55         """
13:34:55 
13:34:55         try:
13:34:55             conn = self.get_connection_with_tls_context(
13:34:55                 request, verify, proxies=proxies, cert=cert
13:34:55             )
13:34:55         except LocationValueError as e:
13:34:55             raise InvalidURL(e, request=request)
13:34:55 
13:34:55         self.cert_verify(conn, request.url, verify, cert)
13:34:55         url = self.request_url(request, proxies)
13:34:55         self.add_headers(
13:34:55             request,
13:34:55             stream=stream,
13:34:55             timeout=timeout,
13:34:55             verify=verify,
13:34:55             cert=cert,
13:34:55             proxies=proxies,
13:34:55         )
13:34:55 
13:34:55         chunked = not (request.body is None or "Content-Length" in request.headers)
13:34:55 
13:34:55         if isinstance(timeout, tuple):
13:34:55             try:
13:34:55                 connect, read = timeout
13:34:55                 timeout = TimeoutSauce(connect=connect, read=read)
13:34:55             except ValueError:
13:34:55                 raise ValueError(
13:34:55                     f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, "
13:34:55                     f"or a single float to set both timeouts to the same value."
13:34:55                 )
13:34:55         elif isinstance(timeout, TimeoutSauce):
13:34:55             pass
13:34:55         else:
13:34:55             timeout = TimeoutSauce(connect=timeout, read=timeout)
13:34:55 
13:34:55         try:
13:34:55             resp = conn.urlopen(
13:34:55                 method=request.method,
13:34:55                 url=url,
13:34:55                 body=request.body,
13:34:55                 headers=request.headers,
13:34:55                 redirect=False,
13:34:55                 assert_same_host=False,
13:34:55                 preload_content=False,
13:34:55                 decode_content=False,
13:34:55                 retries=self.max_retries,
13:34:55                 timeout=timeout,
13:34:55                 chunked=chunked,
13:34:55             )
13:34:55 
13:34:55         except (ProtocolError, OSError) as err:
13:34:55             raise ConnectionError(err, request=request)
13:34:55 
13:34:55         except MaxRetryError as e:
13:34:55             if isinstance(e.reason, ConnectTimeoutError):
13:34:55                 # TODO: Remove this in 3.0.0: see #2811
13:34:55                 if not isinstance(e.reason, NewConnectionError):
13:34:55                     raise ConnectTimeout(e, request=request)
13:34:55 
13:34:55             if isinstance(e.reason, ResponseError):
13:34:55                 raise RetryError(e, request=request)
13:34:55 
13:34:55             if isinstance(e.reason, _ProxyError):
13:34:55                 raise ProxyError(e, request=request)
13:34:55 
13:34:55             if isinstance(e.reason, _SSLError):
13:34:55                 # This branch is for urllib3 v1.22 and later.
13:34:55                 raise SSLError(e, request=request)
13:34:55 
13:34:55 >       raise ConnectionError(e, request=request)
13:34:55 E       requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=8191): Max retries exceeded with url: /rests/data/transportpce-portmapping:network/nodes=XPDRA01/mapping=XPDR1-CLIENT1 (Caused by NewConnectionError("HTTPConnection(host='localhost', port=8191): Failed to establish a new connection: [Errno 111] Connection refused"))
13:34:55 
13:34:55 ../.tox/tests121/lib/python3.11/site-packages/requests/adapters.py:677: ConnectionError
13:34:55 ----------------------------- Captured stdout call -----------------------------
13:34:55 execution of test_12_xpdr_portmapping_CLIENT1
13:34:55 _________ TestTransportPCEPortmapping.test_13_xpdr_portmapping_CLIENT2 _________
13:34:55 
13:34:55 self = 
13:34:55 
13:34:55     def _new_conn(self) -> socket.socket:
13:34:55         """Establish a socket connection and set nodelay settings on it.
13:34:55 
13:34:55         :return: New socket connection.
13:34:55         """
13:34:55         try:
13:34:55 >           sock = connection.create_connection(
13:34:55                 (self._dns_host, self.port),
13:34:55                 self.timeout,
13:34:55                 source_address=self.source_address,
13:34:55                 socket_options=self.socket_options,
13:34:55             )
13:34:55 
13:34:55 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:204: 
13:34:55 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
13:34:55 ../.tox/tests121/lib/python3.11/site-packages/urllib3/util/connection.py:85: in create_connection
13:34:55     raise err
13:34:55 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
13:34:55 
13:34:55 address = ('localhost', 8191), timeout = 30, source_address = None
13:34:55 socket_options = [(6, 1, 1)]
13:34:55 
13:34:55     def create_connection(
13:34:55         address: tuple[str, int],
13:34:55         timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT,
13:34:55         source_address: tuple[str, int] | None = None,
13:34:55         socket_options: _TYPE_SOCKET_OPTIONS | None = None,
13:34:55     ) -> socket.socket:
13:34:55         """Connect to *address* and return the socket object.
13:34:55 
13:34:55         Convenience function. Connect to *address* (a 2-tuple ``(host,
13:34:55         port)``) and return the socket object. Passing the optional
13:34:55         *timeout* parameter will set the timeout on the socket instance
13:34:55         before attempting to connect. If no *timeout* is supplied, the
13:34:55         global default timeout setting returned by :func:`socket.getdefaulttimeout`
13:34:55         is used. If *source_address* is set it must be a tuple of (host, port)
13:34:55         for the socket to bind as a source address before making the connection.
13:34:55         An host of '' or port 0 tells the OS to use the default.
13:34:55         """
13:34:55 
13:34:55         host, port = address
13:34:55         if host.startswith("["):
13:34:55             host = host.strip("[]")
13:34:55         err = None
13:34:55 
13:34:55         # Using the value from allowed_gai_family() in the context of getaddrinfo lets
13:34:55         # us select whether to work with IPv4 DNS records, IPv6 records, or both.
13:34:55         # The original create_connection function always returns all records.
13:34:55         family = allowed_gai_family()
13:34:55 
13:34:55         try:
13:34:55             host.encode("idna")
13:34:55         except UnicodeError:
13:34:55             raise LocationParseError(f"'{host}', label empty or too long") from None
13:34:55 
13:34:55         for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM):
13:34:55             af, socktype, proto, canonname, sa = res
13:34:55             sock = None
13:34:55             try:
13:34:55                 sock = socket.socket(af, socktype, proto)
13:34:55 
13:34:55                 # If provided, set socket level options before connecting.
13:34:55                 _set_socket_options(sock, socket_options)
13:34:55 
13:34:55                 if timeout is not _DEFAULT_TIMEOUT:
13:34:55                     sock.settimeout(timeout)
13:34:55                 if source_address:
13:34:55                     sock.bind(source_address)
13:34:55 >               sock.connect(sa)
13:34:55 E               ConnectionRefusedError: [Errno 111] Connection refused
13:34:55 
13:34:55 ../.tox/tests121/lib/python3.11/site-packages/urllib3/util/connection.py:73: ConnectionRefusedError
13:34:55 
13:34:55 The above exception was the direct cause of the following exception:
13:34:55 
13:34:55 self = 
13:34:55 method = 'GET'
13:34:55 url = '/rests/data/transportpce-portmapping:network/nodes=XPDRA01/mapping=XPDR1-CLIENT2'
13:34:55 body = None
13:34:55 headers = {'User-Agent': 'python-requests/2.32.5', 'Accept-Encoding': 'gzip, deflate', 'Accept': 'application/json', 'Connection': 'keep-alive', 'Content-Type': 'application/json', 'Authorization': 'Basic YWRtaW46YWRtaW4='}
13:34:55 retries = Retry(total=0, connect=None, read=False, redirect=None, status=None)
13:34:55 redirect = False, assert_same_host = False
13:34:55 timeout = Timeout(connect=30, read=30, total=None), pool_timeout = None
13:34:55 release_conn = False, chunked = False, body_pos = None, preload_content = False
13:34:55 decode_content = False, response_kw = {}
13:34:55 parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/rests/data/transportpce-portmapping:network/nodes=XPDRA01/mapping=XPDR1-CLIENT2', query=None, fragment=None)
13:34:55 destination_scheme = None, conn = None, release_this_conn = True
13:34:55 http_tunnel_required = False, err = None, clean_exit = False
13:34:55 
13:34:55     def urlopen(  # type: ignore[override]
13:34:55         self,
13:34:55         method: str,
13:34:55         url: str,
13:34:55         body: _TYPE_BODY | None = None,
13:34:55         headers: typing.Mapping[str, str] | None = None,
13:34:55         retries: Retry | bool | int | None = None,
13:34:55         redirect: bool = True,
13:34:55         assert_same_host: bool = True,
13:34:55         timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT,
13:34:55         pool_timeout: int | None = None,
13:34:55         release_conn: bool | None = None,
13:34:55         chunked: bool = False,
13:34:55         body_pos: _TYPE_BODY_POSITION | None = None,
13:34:55         preload_content: bool = True,
13:34:55         decode_content: bool = True,
13:34:55         **response_kw: typing.Any,
13:34:55     ) -> BaseHTTPResponse:
13:34:55         """
13:34:55         Get a connection from the pool and perform an HTTP request. This is the
13:34:55         lowest level call for making a request, so you'll need to specify all
13:34:55         the raw details.
13:34:55 
13:34:55         .. note::
13:34:55 
13:34:55             More commonly, it's appropriate to use a convenience method
13:34:55             such as :meth:`request`.
13:34:55 
13:34:55         .. note::
13:34:55 
13:34:55             `release_conn` will only behave as expected if
13:34:55             `preload_content=False` because we want to make
13:34:55             `preload_content=False` the default behaviour someday soon without
13:34:55             breaking backwards compatibility.
13:34:55 
13:34:55         :param method:
13:34:55             HTTP request method (such as GET, POST, PUT, etc.)
13:34:55 
13:34:55         :param url:
13:34:55             The URL to perform the request on.
13:34:55 
13:34:55         :param body:
13:34:55             Data to send in the request body, either :class:`str`, :class:`bytes`,
13:34:55             an iterable of :class:`str`/:class:`bytes`, or a file-like object.
13:34:55 
13:34:55         :param headers:
13:34:55             Dictionary of custom headers to send, such as User-Agent,
13:34:55             If-None-Match, etc. If None, pool headers are used. If provided,
13:34:55             these headers completely replace any pool-specific headers.
13:34:55 
13:34:55         :param retries:
13:34:55             Configure the number of retries to allow before raising a
13:34:55             :class:`~urllib3.exceptions.MaxRetryError` exception.
13:34:55 
13:34:55             If ``None`` (default) will retry 3 times, see ``Retry.DEFAULT``. Pass a
13:34:55             :class:`~urllib3.util.retry.Retry` object for fine-grained control
13:34:55             over different types of retries.
13:34:55             Pass an integer number to retry connection errors that many times,
13:34:55             but no other types of errors. Pass zero to never retry.
13:34:55 
13:34:55             If ``False``, then retries are disabled and any exception is raised
13:34:55             immediately. Also, instead of raising a MaxRetryError on redirects,
13:34:55             the redirect response will be returned.
13:34:55 
13:34:55         :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int.
13:34:55 
13:34:55         :param redirect:
13:34:55             If True, automatically handle redirects (status codes 301, 302,
13:34:55             303, 307, 308). Each redirect counts as a retry. Disabling retries
13:34:55             will disable redirect, too.
13:34:55 
13:34:55         :param assert_same_host:
13:34:55             If ``True``, will make sure that the host of the pool requests is
13:34:55             consistent else will raise HostChangedError. When ``False``, you can
13:34:55             use the pool on an HTTP proxy and request foreign hosts.
13:34:55 
13:34:55         :param timeout:
13:34:55             If specified, overrides the default timeout for this one
13:34:55             request. It may be a float (in seconds) or an instance of
13:34:55             :class:`urllib3.util.Timeout`.
13:34:55 
13:34:55         :param pool_timeout:
13:34:55             If set and the pool is set to block=True, then this method will
13:34:55             block for ``pool_timeout`` seconds and raise EmptyPoolError if no
13:34:55             connection is available within the time period.
13:34:55 
13:34:55         :param bool preload_content:
13:34:55             If True, the response's body will be preloaded into memory.
13:34:55 
13:34:55         :param bool decode_content:
13:34:55             If True, will attempt to decode the body based on the
13:34:55             'content-encoding' header.
13:34:55 
13:34:55         :param release_conn:
13:34:55             If False, then the urlopen call will not release the connection
13:34:55             back into the pool once a response is received (but will release if
13:34:55             you read the entire contents of the response such as when
13:34:55             `preload_content=True`). This is useful if you're not preloading
13:34:55             the response's content immediately. You will need to call
13:34:55             ``r.release_conn()`` on the response ``r`` to return the connection
13:34:55             back into the pool. If None, it takes the value of ``preload_content``
13:34:55             which defaults to ``True``.
13:34:55 
13:34:55         :param bool chunked:
13:34:55             If True, urllib3 will send the body using chunked transfer
13:34:55             encoding. Otherwise, urllib3 will send the body using the standard
13:34:55             content-length form. Defaults to False.
13:34:55 
13:34:55         :param int body_pos:
13:34:55             Position to seek to in file-like body in the event of a retry or
13:34:55             redirect. Typically this won't need to be set because urllib3 will
13:34:55             auto-populate the value when needed.
13:34:55         """
13:34:55         parsed_url = parse_url(url)
13:34:55         destination_scheme = parsed_url.scheme
13:34:55 
13:34:55         if headers is None:
13:34:55             headers = self.headers
13:34:55 
13:34:55         if not isinstance(retries, Retry):
13:34:55             retries = Retry.from_int(retries, redirect=redirect, default=self.retries)
13:34:55 
13:34:55         if release_conn is None:
13:34:55             release_conn = preload_content
13:34:55 
13:34:55         # Check host
13:34:55         if assert_same_host and not self.is_same_host(url):
13:34:55             raise HostChangedError(self, url, retries)
13:34:55 
13:34:55         # Ensure that the URL we're connecting to is properly encoded
13:34:55         if url.startswith("/"):
13:34:55             url = to_str(_encode_target(url))
13:34:55         else:
13:34:55             url = to_str(parsed_url.url)
13:34:55 
13:34:55         conn = None
13:34:55 
13:34:55         # Track whether `conn` needs to be released before
13:34:55         # returning/raising/recursing. Update this variable if necessary, and
13:34:55         # leave `release_conn` constant throughout the function. That way, if
13:34:55         # the function recurses, the original value of `release_conn` will be
13:34:55         # passed down into the recursive call, and its value will be respected.
13:34:55         #
13:34:55         # See issue #651 [1] for details.
13:34:55         #
13:34:55         # [1] 
13:34:55         release_this_conn = release_conn
13:34:55 
13:34:55         http_tunnel_required = connection_requires_http_tunnel(
13:34:55             self.proxy, self.proxy_config, destination_scheme
13:34:55         )
13:34:55 
13:34:55         # Merge the proxy headers. Only done when not using HTTP CONNECT. We
13:34:55         # have to copy the headers dict so we can safely change it without those
13:34:55         # changes being reflected in anyone else's copy.
13:34:55         if not http_tunnel_required:
13:34:55             headers = headers.copy()  # type: ignore[attr-defined]
13:34:55             headers.update(self.proxy_headers)  # type: ignore[union-attr]
13:34:55 
13:34:55         # Must keep the exception bound to a separate variable or else Python 3
13:34:55         # complains about UnboundLocalError.
13:34:55         err = None
13:34:55 
13:34:55         # Keep track of whether we cleanly exited the except block. This
13:34:55         # ensures we do proper cleanup in finally.
13:34:55         clean_exit = False
13:34:55 
13:34:55         # Rewind body position, if needed. Record current position
13:34:55         # for future rewinds in the event of a redirect/retry.
13:34:55         body_pos = set_file_position(body, body_pos)
13:34:55 
13:34:55         try:
13:34:55             # Request a connection from the queue.
13:34:55             timeout_obj = self._get_timeout(timeout)
13:34:55             conn = self._get_conn(timeout=pool_timeout)
13:34:55 
13:34:55             conn.timeout = timeout_obj.connect_timeout  # type: ignore[assignment]
13:34:55 
13:34:55             # Is this a closed/new connection that requires CONNECT tunnelling?
13:34:55             if self.proxy is not None and http_tunnel_required and conn.is_closed:
13:34:55                 try:
13:34:55                     self._prepare_proxy(conn)
13:34:55                 except (BaseSSLError, OSError, SocketTimeout) as e:
13:34:55                     self._raise_timeout(
13:34:55                         err=e, url=self.proxy.url, timeout_value=conn.timeout
13:34:55                     )
13:34:55                     raise
13:34:55 
13:34:55             # If we're going to release the connection in ``finally:``, then
13:34:55             # the response doesn't need to know about the connection. Otherwise
13:34:55             # it will also try to release it and we'll have a double-release
13:34:55             # mess.
13:34:55             response_conn = conn if not release_conn else None
13:34:55 
13:34:55             # Make the request on the HTTPConnection object
13:34:55 >           response = self._make_request(
13:34:55                 conn,
13:34:55                 method,
13:34:55                 url,
13:34:55                 timeout=timeout_obj,
13:34:55                 body=body,
13:34:55                 headers=headers,
13:34:55                 chunked=chunked,
13:34:55                 retries=retries,
13:34:55                 response_conn=response_conn,
13:34:55                 preload_content=preload_content,
13:34:55                 decode_content=decode_content,
13:34:55                 **response_kw,
13:34:55             )
13:34:55 
13:34:55 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connectionpool.py:787: 
13:34:55 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
13:34:55 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connectionpool.py:493: in _make_request
13:34:55     conn.request(
13:34:55 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:500: in request
13:34:55     self.endheaders()
13:34:55 /opt/pyenv/versions/3.11.10/lib/python3.11/http/client.py:1298: in endheaders
13:34:55     self._send_output(message_body, encode_chunked=encode_chunked)
13:34:55 /opt/pyenv/versions/3.11.10/lib/python3.11/http/client.py:1058: in _send_output
13:34:55     self.send(msg)
13:34:55 /opt/pyenv/versions/3.11.10/lib/python3.11/http/client.py:996: in send
13:34:55     self.connect()
13:34:55 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:331: in connect
13:34:55     self.sock = self._new_conn()
13:34:55                 ^^^^^^^^^^^^^^^^
13:34:55 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
13:34:55 
13:34:55 self = 
13:34:55 
13:34:55     def _new_conn(self) -> socket.socket:
13:34:55         """Establish a socket connection and set nodelay settings on it.
13:34:55 
13:34:55         :return: New socket connection.
13:34:55         """
13:34:55         try:
13:34:55             sock = connection.create_connection(
13:34:55                 (self._dns_host, self.port),
13:34:55                 self.timeout,
13:34:55                 source_address=self.source_address,
13:34:55                 socket_options=self.socket_options,
13:34:55             )
13:34:55         except socket.gaierror as e:
13:34:55             raise NameResolutionError(self.host, self, e) from e
13:34:55         except SocketTimeout as e:
13:34:55             raise ConnectTimeoutError(
13:34:55                 self,
13:34:55                 f"Connection to {self.host} timed out. (connect timeout={self.timeout})",
13:34:55             ) from e
13:34:55 
13:34:55         except OSError as e:
13:34:55 >           raise NewConnectionError(
13:34:55                 self, f"Failed to establish a new connection: {e}"
13:34:55             ) from e
13:34:55 E           urllib3.exceptions.NewConnectionError: HTTPConnection(host='localhost', port=8191): Failed to establish a new connection: [Errno 111] Connection refused
13:34:55 
13:34:55 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:219: NewConnectionError
13:34:55 
13:34:55 The above exception was the direct cause of the following exception:
13:34:55 
13:34:55 self = 
13:34:55 request = , stream = False
13:34:55 timeout = Timeout(connect=30, read=30, total=None), verify = True, cert = None
13:34:55 proxies = OrderedDict()
13:34:55 
13:34:55     def send(
13:34:55         self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None
13:34:55     ):
13:34:55         """Sends PreparedRequest object. Returns Response object.
13:34:55 
13:34:55         :param request: The :class:`PreparedRequest ` being sent.
13:34:55         :param stream: (optional) Whether to stream the request content.
13:34:55         :param timeout: (optional) How long to wait for the server to send
13:34:55             data before giving up, as a float, or a :ref:`(connect timeout,
13:34:55             read timeout) ` tuple.
13:34:55         :type timeout: float or tuple or urllib3 Timeout object
13:34:55         :param verify: (optional) Either a boolean, in which case it controls whether
13:34:55             we verify the server's TLS certificate, or a string, in which case it
13:34:55             must be a path to a CA bundle to use
13:34:55         :param cert: (optional) Any user-provided SSL certificate to be trusted.
13:34:55         :param proxies: (optional) The proxies dictionary to apply to the request.
13:34:55         :rtype: requests.Response
13:34:55         """
13:34:55 
13:34:55         try:
13:34:55             conn = self.get_connection_with_tls_context(
13:34:55                 request, verify, proxies=proxies, cert=cert
13:34:55             )
13:34:55         except LocationValueError as e:
13:34:55             raise InvalidURL(e, request=request)
13:34:55 
13:34:55         self.cert_verify(conn, request.url, verify, cert)
13:34:55         url = self.request_url(request, proxies)
13:34:55         self.add_headers(
13:34:55             request,
13:34:55             stream=stream,
13:34:55             timeout=timeout,
13:34:55             verify=verify,
13:34:55             cert=cert,
13:34:55             proxies=proxies,
13:34:55         )
13:34:55 
13:34:55         chunked = not (request.body is None or "Content-Length" in request.headers)
13:34:55 
13:34:55         if isinstance(timeout, tuple):
13:34:55             try:
13:34:55                 connect, read = timeout
13:34:55                 timeout = TimeoutSauce(connect=connect, read=read)
13:34:55             except ValueError:
13:34:55                 raise ValueError(
13:34:55                     f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, "
13:34:55                     f"or a single float to set both timeouts to the same value."
13:34:55                 )
13:34:55         elif isinstance(timeout, TimeoutSauce):
13:34:55             pass
13:34:55         else:
13:34:55             timeout = TimeoutSauce(connect=timeout, read=timeout)
13:34:55 
13:34:55         try:
13:34:55 >           resp = conn.urlopen(
13:34:55                 method=request.method,
13:34:55                 url=url,
13:34:55                 body=request.body,
13:34:55                 headers=request.headers,
13:34:55                 redirect=False,
13:34:55                 assert_same_host=False,
13:34:55                 preload_content=False,
13:34:55                 decode_content=False,
13:34:55                 retries=self.max_retries,
13:34:55                 timeout=timeout,
13:34:55                 chunked=chunked,
13:34:55             )
13:34:55 
13:34:55 ../.tox/tests121/lib/python3.11/site-packages/requests/adapters.py:644: 
13:34:55 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
13:34:55 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connectionpool.py:841: in urlopen
13:34:55     retries = retries.increment(
13:34:55 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
13:34:55 
13:34:55 self = Retry(total=0, connect=None, read=False, redirect=None, status=None)
13:34:55 method = 'GET'
13:34:55 url = '/rests/data/transportpce-portmapping:network/nodes=XPDRA01/mapping=XPDR1-CLIENT2'
13:34:55 response = None
13:34:55 error = NewConnectionError("HTTPConnection(host='localhost', port=8191): Failed to establish a new connection: [Errno 111] Connection refused")
13:34:55 _pool = 
13:34:55 _stacktrace = 
13:34:55 
13:34:55     def increment(
13:34:55         self,
13:34:55         method: str | None = None,
13:34:55         url: str | None = None,
13:34:55         response: BaseHTTPResponse | None = None,
13:34:55         error: Exception | None = None,
13:34:55         _pool: ConnectionPool | None = None,
13:34:55         _stacktrace: TracebackType | None = None,
13:34:55     ) -> Self:
13:34:55         """Return a new Retry object with incremented retry counters.
13:34:55 
13:34:55         :param response: A response object, or None, if the server did not
13:34:55             return a response.
13:34:55         :type response: :class:`~urllib3.response.BaseHTTPResponse`
13:34:55         :param Exception error: An error encountered during the request, or
13:34:55             None if the response was received successfully.
13:34:55 
13:34:55         :return: A new ``Retry`` object.
13:34:55         """
13:34:55         if self.total is False and error:
13:34:55             # Disabled, indicate to re-raise the error.
13:34:55             raise reraise(type(error), error, _stacktrace)
13:34:55 
13:34:55         total = self.total
13:34:55         if total is not None:
13:34:55             total -= 1
13:34:55 
13:34:55         connect = self.connect
13:34:55         read = self.read
13:34:55         redirect = self.redirect
13:34:55         status_count = self.status
13:34:55         other = self.other
13:34:55         cause = "unknown"
13:34:55         status = None
13:34:55         redirect_location = None
13:34:55 
13:34:55         if error and self._is_connection_error(error):
13:34:55             # Connect retry?
13:34:55             if connect is False:
13:34:55                 raise reraise(type(error), error, _stacktrace)
13:34:55             elif connect is not None:
13:34:55                 connect -= 1
13:34:55 
13:34:55         elif error and self._is_read_error(error):
13:34:55             # Read retry?
13:34:55             if read is False or method is None or not self._is_method_retryable(method):
13:34:55                 raise reraise(type(error), error, _stacktrace)
13:34:55             elif read is not None:
13:34:55                 read -= 1
13:34:55 
13:34:55         elif error:
13:34:55             # Other retry?
13:34:55             if other is not None:
13:34:55                 other -= 1
13:34:55 
13:34:55         elif response and response.get_redirect_location():
13:34:55             # Redirect retry?
13:34:55             if redirect is not None:
13:34:55                 redirect -= 1
13:34:55             cause = "too many redirects"
13:34:55             response_redirect_location = response.get_redirect_location()
13:34:55             if response_redirect_location:
13:34:55                 redirect_location = response_redirect_location
13:34:55             status = response.status
13:34:55 
13:34:55         else:
13:34:55             # Incrementing because of a server error like a 500 in
13:34:55             # status_forcelist and the given method is in the allowed_methods
13:34:55             cause = ResponseError.GENERIC_ERROR
13:34:55             if response and response.status:
13:34:55                 if status_count is not None:
13:34:55                     status_count -= 1
13:34:55                 cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status)
13:34:55                 status = response.status
13:34:55 
13:34:55         history = self.history + (
13:34:55             RequestHistory(method, url, error, status, redirect_location),
13:34:55         )
13:34:55 
13:34:55         new_retry = self.new(
13:34:55             total=total,
13:34:55             connect=connect,
13:34:55             read=read,
13:34:55             redirect=redirect,
13:34:55             status=status_count,
13:34:55             other=other,
13:34:55             history=history,
13:34:55         )
13:34:55 
13:34:55         if new_retry.is_exhausted():
13:34:55             reason = error or ResponseError(cause)
13:34:55 >           raise MaxRetryError(_pool, url, reason) from reason  # type: ignore[arg-type]
13:34:55             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
13:34:55 E           urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=8191): Max retries exceeded with url: /rests/data/transportpce-portmapping:network/nodes=XPDRA01/mapping=XPDR1-CLIENT2 (Caused by NewConnectionError("HTTPConnection(host='localhost', port=8191): Failed to establish a new connection: [Errno 111] Connection refused"))
13:34:55 
13:34:55 ../.tox/tests121/lib/python3.11/site-packages/urllib3/util/retry.py:535: MaxRetryError
13:34:55 
13:34:55 During handling of the above exception, another exception occurred:
13:34:55 
13:34:55 self = 
13:34:55 
13:34:55     def test_13_xpdr_portmapping_CLIENT2(self):
13:34:55 >       response = test_utils.get_portmapping_node_attr("XPDRA01", "mapping", "XPDR1-CLIENT2")
13:34:55                  ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
13:34:55 
13:34:55 transportpce_tests/1.2.1/test01_portmapping.py:159: 
13:34:55 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
13:34:55 transportpce_tests/common/test_utils.py:519: in get_portmapping_node_attr
13:34:55     response = get_request(target_url)
13:34:55                ^^^^^^^^^^^^^^^^^^^^^^^
13:34:55 transportpce_tests/common/test_utils.py:117: in get_request
13:34:55     return requests.request(
13:34:55 ../.tox/tests121/lib/python3.11/site-packages/requests/api.py:59: in request
13:34:55     return session.request(method=method, url=url, **kwargs)
13:34:55            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
13:34:55 ../.tox/tests121/lib/python3.11/site-packages/requests/sessions.py:589: in request
13:34:55     resp = self.send(prep, **send_kwargs)
13:34:55            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
13:34:55 ../.tox/tests121/lib/python3.11/site-packages/requests/sessions.py:703: in send
13:34:55     r = adapter.send(request, **kwargs)
13:34:55         ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
13:34:55 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
13:34:55 
13:34:55 self = 
13:34:55 request = , stream = False
13:34:55 timeout = Timeout(connect=30, read=30, total=None), verify = True, cert = None
13:34:55 proxies = OrderedDict()
13:34:55 
13:34:55     def send(
13:34:55         self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None
13:34:55     ):
13:34:55         """Sends PreparedRequest object. Returns Response object.
13:34:55 
13:34:55         :param request: The :class:`PreparedRequest ` being sent.
13:34:55         :param stream: (optional) Whether to stream the request content.
13:34:55         :param timeout: (optional) How long to wait for the server to send
13:34:55             data before giving up, as a float, or a :ref:`(connect timeout,
13:34:55             read timeout) ` tuple.
13:34:55         :type timeout: float or tuple or urllib3 Timeout object
13:34:55         :param verify: (optional) Either a boolean, in which case it controls whether
13:34:55             we verify the server's TLS certificate, or a string, in which case it
13:34:55             must be a path to a CA bundle to use
13:34:55         :param cert: (optional) Any user-provided SSL certificate to be trusted.
13:34:55         :param proxies: (optional) The proxies dictionary to apply to the request.
13:34:55         :rtype: requests.Response
13:34:55         """
13:34:55 
13:34:55         try:
13:34:55             conn = self.get_connection_with_tls_context(
13:34:55                 request, verify, proxies=proxies, cert=cert
13:34:55             )
13:34:55         except LocationValueError as e:
13:34:55             raise InvalidURL(e, request=request)
13:34:55 
13:34:55         self.cert_verify(conn, request.url, verify, cert)
13:34:55         url = self.request_url(request, proxies)
13:34:55         self.add_headers(
13:34:55             request,
13:34:55             stream=stream,
13:34:55             timeout=timeout,
13:34:55             verify=verify,
13:34:55             cert=cert,
13:34:55             proxies=proxies,
13:34:55         )
13:34:55 
13:34:55         chunked = not (request.body is None or "Content-Length" in request.headers)
13:34:55 
13:34:55         if isinstance(timeout, tuple):
13:34:55             try:
13:34:55                 connect, read = timeout
13:34:55                 timeout = TimeoutSauce(connect=connect, read=read)
13:34:55             except ValueError:
13:34:55                 raise ValueError(
13:34:55                     f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, "
13:34:55                     f"or a single float to set both timeouts to the same value."
13:34:55                 )
13:34:55         elif isinstance(timeout, TimeoutSauce):
13:34:55             pass
13:34:55         else:
13:34:55             timeout = TimeoutSauce(connect=timeout, read=timeout)
13:34:55 
13:34:55         try:
13:34:55             resp = conn.urlopen(
13:34:55                 method=request.method,
13:34:55                 url=url,
13:34:55                 body=request.body,
13:34:55                 headers=request.headers,
13:34:55                 redirect=False,
13:34:55                 assert_same_host=False,
13:34:55                 preload_content=False,
13:34:55                 decode_content=False,
13:34:55                 retries=self.max_retries,
13:34:55                 timeout=timeout,
13:34:55                 chunked=chunked,
13:34:55             )
13:34:55 
13:34:55         except (ProtocolError, OSError) as err:
13:34:55             raise ConnectionError(err, request=request)
13:34:55 
13:34:55         except MaxRetryError as e:
13:34:55             if isinstance(e.reason, ConnectTimeoutError):
13:34:55                 # TODO: Remove this in 3.0.0: see #2811
13:34:55                 if not isinstance(e.reason, NewConnectionError):
13:34:55                     raise ConnectTimeout(e, request=request)
13:34:55 
13:34:55             if isinstance(e.reason, ResponseError):
13:34:55                 raise RetryError(e, request=request)
13:34:55 
13:34:55             if isinstance(e.reason, _ProxyError):
13:34:55                 raise ProxyError(e, request=request)
13:34:55 
13:34:55             if isinstance(e.reason, _SSLError):
13:34:55                 # This branch is for urllib3 v1.22 and later.
13:34:55                 raise SSLError(e, request=request)
13:34:55 
13:34:55 >       raise ConnectionError(e, request=request)
13:34:55 E       requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=8191): Max retries exceeded with url: /rests/data/transportpce-portmapping:network/nodes=XPDRA01/mapping=XPDR1-CLIENT2 (Caused by NewConnectionError("HTTPConnection(host='localhost', port=8191): Failed to establish a new connection: [Errno 111] Connection refused"))
13:34:55 
13:34:55 ../.tox/tests121/lib/python3.11/site-packages/requests/adapters.py:677: ConnectionError
13:34:55 ----------------------------- Captured stdout call -----------------------------
13:34:55 execution of test_13_xpdr_portmapping_CLIENT2
13:34:55 _________ TestTransportPCEPortmapping.test_14_xpdr_portmapping_CLIENT3 _________
13:34:55 
13:34:55 self = 
13:34:55 
13:34:55     def _new_conn(self) -> socket.socket:
13:34:55         """Establish a socket connection and set nodelay settings on it.
13:34:55 
13:34:55         :return: New socket connection.
13:34:55         """
13:34:55         try:
13:34:55 >           sock = connection.create_connection(
13:34:55                 (self._dns_host, self.port),
13:34:55                 self.timeout,
13:34:55                 source_address=self.source_address,
13:34:55                 socket_options=self.socket_options,
13:34:55             )
13:34:55 
13:34:55 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:204: 
13:34:55 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
13:34:55 ../.tox/tests121/lib/python3.11/site-packages/urllib3/util/connection.py:85: in create_connection
13:34:55     raise err
13:34:55 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
13:34:55 
13:34:55 address = ('localhost', 8191), timeout = 30, source_address = None
13:34:55 socket_options = [(6, 1, 1)]
13:34:55 
13:34:55     def create_connection(
13:34:55         address: tuple[str, int],
13:34:55         timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT,
13:34:55         source_address: tuple[str, int] | None = None,
13:34:55         socket_options: _TYPE_SOCKET_OPTIONS | None = None,
13:34:55     ) -> socket.socket:
13:34:55         """Connect to *address* and return the socket object.
13:34:55 
13:34:55         Convenience function. Connect to *address* (a 2-tuple ``(host,
13:34:55         port)``) and return the socket object. Passing the optional
13:34:55         *timeout* parameter will set the timeout on the socket instance
13:34:55         before attempting to connect. If no *timeout* is supplied, the
13:34:55         global default timeout setting returned by :func:`socket.getdefaulttimeout`
13:34:55         is used. If *source_address* is set it must be a tuple of (host, port)
13:34:55         for the socket to bind as a source address before making the connection.
13:34:55         An host of '' or port 0 tells the OS to use the default.
13:34:55 """ 13:34:55 13:34:55 host, port = address 13:34:55 if host.startswith("["): 13:34:55 host = host.strip("[]") 13:34:55 err = None 13:34:55 13:34:55 # Using the value from allowed_gai_family() in the context of getaddrinfo lets 13:34:55 # us select whether to work with IPv4 DNS records, IPv6 records, or both. 13:34:55 # The original create_connection function always returns all records. 13:34:55 family = allowed_gai_family() 13:34:55 13:34:55 try: 13:34:55 host.encode("idna") 13:34:55 except UnicodeError: 13:34:55 raise LocationParseError(f"'{host}', label empty or too long") from None 13:34:55 13:34:55 for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM): 13:34:55 af, socktype, proto, canonname, sa = res 13:34:55 sock = None 13:34:55 try: 13:34:55 sock = socket.socket(af, socktype, proto) 13:34:55 13:34:55 # If provided, set socket level options before connecting. 13:34:55 _set_socket_options(sock, socket_options) 13:34:55 13:34:55 if timeout is not _DEFAULT_TIMEOUT: 13:34:55 sock.settimeout(timeout) 13:34:55 if source_address: 13:34:55 sock.bind(source_address) 13:34:55 > sock.connect(sa) 13:34:55 E ConnectionRefusedError: [Errno 111] Connection refused 13:34:55 13:34:55 ../.tox/tests121/lib/python3.11/site-packages/urllib3/util/connection.py:73: ConnectionRefusedError 13:34:55 13:34:55 The above exception was the direct cause of the following exception: 13:34:55 13:34:55 self = 13:34:55 method = 'GET' 13:34:55 url = '/rests/data/transportpce-portmapping:network/nodes=XPDRA01/mapping=XPDR1-CLIENT3' 13:34:55 body = None 13:34:55 headers = {'User-Agent': 'python-requests/2.32.5', 'Accept-Encoding': 'gzip, deflate', 'Accept': 'application/json', 'Connection': 'keep-alive', 'Content-Type': 'application/json', 'Authorization': 'Basic YWRtaW46YWRtaW4='} 13:34:55 retries = Retry(total=0, connect=None, read=False, redirect=None, status=None) 13:34:55 redirect = False, assert_same_host = False 13:34:55 timeout = Timeout(connect=30, read=30, 
total=None), pool_timeout = None 13:34:55 release_conn = False, chunked = False, body_pos = None, preload_content = False 13:34:55 decode_content = False, response_kw = {} 13:34:55 parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/rests/data/transportpce-portmapping:network/nodes=XPDRA01/mapping=XPDR1-CLIENT3', query=None, fragment=None) 13:34:55 destination_scheme = None, conn = None, release_this_conn = True 13:34:55 http_tunnel_required = False, err = None, clean_exit = False 13:34:55 13:34:55 def urlopen( # type: ignore[override] 13:34:55 self, 13:34:55 method: str, 13:34:55 url: str, 13:34:55 body: _TYPE_BODY | None = None, 13:34:55 headers: typing.Mapping[str, str] | None = None, 13:34:55 retries: Retry | bool | int | None = None, 13:34:55 redirect: bool = True, 13:34:55 assert_same_host: bool = True, 13:34:55 timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 13:34:55 pool_timeout: int | None = None, 13:34:55 release_conn: bool | None = None, 13:34:55 chunked: bool = False, 13:34:55 body_pos: _TYPE_BODY_POSITION | None = None, 13:34:55 preload_content: bool = True, 13:34:55 decode_content: bool = True, 13:34:55 **response_kw: typing.Any, 13:34:55 ) -> BaseHTTPResponse: 13:34:55 """ 13:34:55 Get a connection from the pool and perform an HTTP request. This is the 13:34:55 lowest level call for making a request, so you'll need to specify all 13:34:55 the raw details. 13:34:55 13:34:55 .. note:: 13:34:55 13:34:55 More commonly, it's appropriate to use a convenience method 13:34:55 such as :meth:`request`. 13:34:55 13:34:55 .. note:: 13:34:55 13:34:55 `release_conn` will only behave as expected if 13:34:55 `preload_content=False` because we want to make 13:34:55 `preload_content=False` the default behaviour someday soon without 13:34:55 breaking backwards compatibility. 13:34:55 13:34:55 :param method: 13:34:55 HTTP request method (such as GET, POST, PUT, etc.) 13:34:55 13:34:55 :param url: 13:34:55 The URL to perform the request on. 
13:34:55
13:34:55 :param body:
13:34:55 Data to send in the request body, either :class:`str`, :class:`bytes`,
13:34:55 an iterable of :class:`str`/:class:`bytes`, or a file-like object.
13:34:55
13:34:55 :param headers:
13:34:55 Dictionary of custom headers to send, such as User-Agent,
13:34:55 If-None-Match, etc. If None, pool headers are used. If provided,
13:34:55 these headers completely replace any pool-specific headers.
13:34:55
13:34:55 :param retries:
13:34:55 Configure the number of retries to allow before raising a
13:34:55 :class:`~urllib3.exceptions.MaxRetryError` exception.
13:34:55
13:34:55 If ``None`` (default) will retry 3 times, see ``Retry.DEFAULT``. Pass a
13:34:55 :class:`~urllib3.util.retry.Retry` object for fine-grained control
13:34:55 over different types of retries.
13:34:55 Pass an integer number to retry connection errors that many times,
13:34:55 but no other types of errors. Pass zero to never retry.
13:34:55
13:34:55 If ``False``, then retries are disabled and any exception is raised
13:34:55 immediately. Also, instead of raising a MaxRetryError on redirects,
13:34:55 the redirect response will be returned.
13:34:55
13:34:55 :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int.
13:34:55
13:34:55 :param redirect:
13:34:55 If True, automatically handle redirects (status codes 301, 302,
13:34:55 303, 307, 308). Each redirect counts as a retry. Disabling retries
13:34:55 will disable redirect, too.
13:34:55
13:34:55 :param assert_same_host:
13:34:55 If ``True``, will make sure that the host of the pool requests is
13:34:55 consistent else will raise HostChangedError. When ``False``, you can
13:34:55 use the pool on an HTTP proxy and request foreign hosts.
13:34:55
13:34:55 :param timeout:
13:34:55 If specified, overrides the default timeout for this one
13:34:55 request. It may be a float (in seconds) or an instance of
13:34:55 :class:`urllib3.util.Timeout`.
13:34:55
13:34:55 :param pool_timeout:
13:34:55 If set and the pool is set to block=True, then this method will
13:34:55 block for ``pool_timeout`` seconds and raise EmptyPoolError if no
13:34:55 connection is available within the time period.
13:34:55
13:34:55 :param bool preload_content:
13:34:55 If True, the response's body will be preloaded into memory.
13:34:55
13:34:55 :param bool decode_content:
13:34:55 If True, will attempt to decode the body based on the
13:34:55 'content-encoding' header.
13:34:55
13:34:55 :param release_conn:
13:34:55 If False, then the urlopen call will not release the connection
13:34:55 back into the pool once a response is received (but will release if
13:34:55 you read the entire contents of the response such as when
13:34:55 `preload_content=True`). This is useful if you're not preloading
13:34:55 the response's content immediately. You will need to call
13:34:55 ``r.release_conn()`` on the response ``r`` to return the connection
13:34:55 back into the pool. If None, it takes the value of ``preload_content``
13:34:55 which defaults to ``True``.
13:34:55
13:34:55 :param bool chunked:
13:34:55 If True, urllib3 will send the body using chunked transfer
13:34:55 encoding. Otherwise, urllib3 will send the body using the standard
13:34:55 content-length form. Defaults to False.
13:34:55
13:34:55 :param int body_pos:
13:34:55 Position to seek to in file-like body in the event of a retry or
13:34:55 redirect. Typically this won't need to be set because urllib3 will
13:34:55 auto-populate the value when needed.
13:34:55 """
13:34:55 parsed_url = parse_url(url)
13:34:55 destination_scheme = parsed_url.scheme
13:34:55
13:34:55 if headers is None:
13:34:55 headers = self.headers
13:34:55
13:34:55 if not isinstance(retries, Retry):
13:34:55 retries = Retry.from_int(retries, redirect=redirect, default=self.retries)
13:34:55
13:34:55 if release_conn is None:
13:34:55 release_conn = preload_content
13:34:55
13:34:55 # Check host
13:34:55 if assert_same_host and not self.is_same_host(url):
13:34:55 raise HostChangedError(self, url, retries)
13:34:55
13:34:55 # Ensure that the URL we're connecting to is properly encoded
13:34:55 if url.startswith("/"):
13:34:55 url = to_str(_encode_target(url))
13:34:55 else:
13:34:55 url = to_str(parsed_url.url)
13:34:55
13:34:55 conn = None
13:34:55
13:34:55 # Track whether `conn` needs to be released before
13:34:55 # returning/raising/recursing. Update this variable if necessary, and
13:34:55 # leave `release_conn` constant throughout the function. That way, if
13:34:55 # the function recurses, the original value of `release_conn` will be
13:34:55 # passed down into the recursive call, and its value will be respected.
13:34:55 #
13:34:55 # See issue #651 [1] for details.
13:34:55 #
13:34:55 # [1]
13:34:55 release_this_conn = release_conn
13:34:55
13:34:55 http_tunnel_required = connection_requires_http_tunnel(
13:34:55 self.proxy, self.proxy_config, destination_scheme
13:34:55 )
13:34:55
13:34:55 # Merge the proxy headers. Only done when not using HTTP CONNECT. We
13:34:55 # have to copy the headers dict so we can safely change it without those
13:34:55 # changes being reflected in anyone else's copy.
13:34:55 if not http_tunnel_required:
13:34:55 headers = headers.copy() # type: ignore[attr-defined]
13:34:55 headers.update(self.proxy_headers) # type: ignore[union-attr]
13:34:55
13:34:55 # Must keep the exception bound to a separate variable or else Python 3
13:34:55 # complains about UnboundLocalError.
13:34:55 err = None
13:34:55
13:34:55 # Keep track of whether we cleanly exited the except block. This
13:34:55 # ensures we do proper cleanup in finally.
13:34:55 clean_exit = False
13:34:55
13:34:55 # Rewind body position, if needed. Record current position
13:34:55 # for future rewinds in the event of a redirect/retry.
13:34:55 body_pos = set_file_position(body, body_pos)
13:34:55
13:34:55 try:
13:34:55 # Request a connection from the queue.
13:34:55 timeout_obj = self._get_timeout(timeout)
13:34:55 conn = self._get_conn(timeout=pool_timeout)
13:34:55
13:34:55 conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment]
13:34:55
13:34:55 # Is this a closed/new connection that requires CONNECT tunnelling?
13:34:55 if self.proxy is not None and http_tunnel_required and conn.is_closed:
13:34:55 try:
13:34:55 self._prepare_proxy(conn)
13:34:55 except (BaseSSLError, OSError, SocketTimeout) as e:
13:34:55 self._raise_timeout(
13:34:55 err=e, url=self.proxy.url, timeout_value=conn.timeout
13:34:55 )
13:34:55 raise
13:34:55
13:34:55 # If we're going to release the connection in ``finally:``, then
13:34:55 # the response doesn't need to know about the connection. Otherwise
13:34:55 # it will also try to release it and we'll have a double-release
13:34:55 # mess.
13:34:55 response_conn = conn if not release_conn else None
13:34:55
13:34:55 # Make the request on the HTTPConnection object
13:34:55 > response = self._make_request(
13:34:55 conn,
13:34:55 method,
13:34:55 url,
13:34:55 timeout=timeout_obj,
13:34:55 body=body,
13:34:55 headers=headers,
13:34:55 chunked=chunked,
13:34:55 retries=retries,
13:34:55 response_conn=response_conn,
13:34:55 preload_content=preload_content,
13:34:55 decode_content=decode_content,
13:34:55 **response_kw,
13:34:55 )
13:34:55
13:34:55 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connectionpool.py:787:
13:34:55 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
13:34:55 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connectionpool.py:493: in _make_request
13:34:55 conn.request(
13:34:55 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:500: in request
13:34:55 self.endheaders()
13:34:55 /opt/pyenv/versions/3.11.10/lib/python3.11/http/client.py:1298: in endheaders
13:34:55 self._send_output(message_body, encode_chunked=encode_chunked)
13:34:55 /opt/pyenv/versions/3.11.10/lib/python3.11/http/client.py:1058: in _send_output
13:34:55 self.send(msg)
13:34:55 /opt/pyenv/versions/3.11.10/lib/python3.11/http/client.py:996: in send
13:34:55 self.connect()
13:34:55 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:331: in connect
13:34:55 self.sock = self._new_conn()
13:34:55 ^^^^^^^^^^^^^^^^
13:34:55 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
13:34:55
13:34:55 self =
13:34:55
13:34:55 def _new_conn(self) -> socket.socket:
13:34:55 """Establish a socket connection and set nodelay settings on it.
13:34:55
13:34:55 :return: New socket connection.
13:34:55 """
13:34:55 try:
13:34:55 sock = connection.create_connection(
13:34:55 (self._dns_host, self.port),
13:34:55 self.timeout,
13:34:55 source_address=self.source_address,
13:34:55 socket_options=self.socket_options,
13:34:55 )
13:34:55 except socket.gaierror as e:
13:34:55 raise NameResolutionError(self.host, self, e) from e
13:34:55 except SocketTimeout as e:
13:34:55 raise ConnectTimeoutError(
13:34:55 self,
13:34:55 f"Connection to {self.host} timed out. (connect timeout={self.timeout})",
13:34:55 ) from e
13:34:55
13:34:55 except OSError as e:
13:34:55 > raise NewConnectionError(
13:34:55 self, f"Failed to establish a new connection: {e}"
13:34:55 ) from e
13:34:55 E urllib3.exceptions.NewConnectionError: HTTPConnection(host='localhost', port=8191): Failed to establish a new connection: [Errno 111] Connection refused
13:34:55
13:34:55 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:219: NewConnectionError
13:34:55
13:34:55 The above exception was the direct cause of the following exception:
13:34:55
13:34:55 self =
13:34:55 request = , stream = False
13:34:55 timeout = Timeout(connect=30, read=30, total=None), verify = True, cert = None
13:34:55 proxies = OrderedDict()
13:34:55
13:34:55 def send(
13:34:55 self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None
13:34:55 ):
13:34:55 """Sends PreparedRequest object. Returns Response object.
13:34:55
13:34:55 :param request: The :class:`PreparedRequest ` being sent.
13:34:55 :param stream: (optional) Whether to stream the request content.
13:34:55 :param timeout: (optional) How long to wait for the server to send
13:34:55 data before giving up, as a float, or a :ref:`(connect timeout,
13:34:55 read timeout) ` tuple.
13:34:55 :type timeout: float or tuple or urllib3 Timeout object
13:34:55 :param verify: (optional) Either a boolean, in which case it controls whether
13:34:55 we verify the server's TLS certificate, or a string, in which case it
13:34:55 must be a path to a CA bundle to use
13:34:55 :param cert: (optional) Any user-provided SSL certificate to be trusted.
13:34:55 :param proxies: (optional) The proxies dictionary to apply to the request.
13:34:55 :rtype: requests.Response
13:34:55 """
13:34:55
13:34:55 try:
13:34:55 conn = self.get_connection_with_tls_context(
13:34:55 request, verify, proxies=proxies, cert=cert
13:34:55 )
13:34:55 except LocationValueError as e:
13:34:55 raise InvalidURL(e, request=request)
13:34:55
13:34:55 self.cert_verify(conn, request.url, verify, cert)
13:34:55 url = self.request_url(request, proxies)
13:34:55 self.add_headers(
13:34:55 request,
13:34:55 stream=stream,
13:34:55 timeout=timeout,
13:34:55 verify=verify,
13:34:55 cert=cert,
13:34:55 proxies=proxies,
13:34:55 )
13:34:55
13:34:55 chunked = not (request.body is None or "Content-Length" in request.headers)
13:34:55
13:34:55 if isinstance(timeout, tuple):
13:34:55 try:
13:34:55 connect, read = timeout
13:34:55 timeout = TimeoutSauce(connect=connect, read=read)
13:34:55 except ValueError:
13:34:55 raise ValueError(
13:34:55 f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, "
13:34:55 f"or a single float to set both timeouts to the same value."
13:34:55 )
13:34:55 elif isinstance(timeout, TimeoutSauce):
13:34:55 pass
13:34:55 else:
13:34:55 timeout = TimeoutSauce(connect=timeout, read=timeout)
13:34:55
13:34:55 try:
13:34:55 > resp = conn.urlopen(
13:34:55 method=request.method,
13:34:55 url=url,
13:34:55 body=request.body,
13:34:55 headers=request.headers,
13:34:55 redirect=False,
13:34:55 assert_same_host=False,
13:34:55 preload_content=False,
13:34:55 decode_content=False,
13:34:55 retries=self.max_retries,
13:34:55 timeout=timeout,
13:34:55 chunked=chunked,
13:34:55 )
13:34:55
13:34:55 ../.tox/tests121/lib/python3.11/site-packages/requests/adapters.py:644:
13:34:55 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
13:34:55 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connectionpool.py:841: in urlopen
13:34:55 retries = retries.increment(
13:34:55 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
13:34:55
13:34:55 self = Retry(total=0, connect=None, read=False, redirect=None, status=None)
13:34:55 method = 'GET'
13:34:55 url = '/rests/data/transportpce-portmapping:network/nodes=XPDRA01/mapping=XPDR1-CLIENT3'
13:34:55 response = None
13:34:55 error = NewConnectionError("HTTPConnection(host='localhost', port=8191): Failed to establish a new connection: [Errno 111] Connection refused")
13:34:55 _pool =
13:34:55 _stacktrace =
13:34:55
13:34:55 def increment(
13:34:55 self,
13:34:55 method: str | None = None,
13:34:55 url: str | None = None,
13:34:55 response: BaseHTTPResponse | None = None,
13:34:55 error: Exception | None = None,
13:34:55 _pool: ConnectionPool | None = None,
13:34:55 _stacktrace: TracebackType | None = None,
13:34:55 ) -> Self:
13:34:55 """Return a new Retry object with incremented retry counters.
13:34:55
13:34:55 :param response: A response object, or None, if the server did not
13:34:55 return a response.
13:34:55 :type response: :class:`~urllib3.response.BaseHTTPResponse`
13:34:55 :param Exception error: An error encountered during the request, or
13:34:55 None if the response was received successfully.
13:34:55
13:34:55 :return: A new ``Retry`` object.
13:34:55 """
13:34:55 if self.total is False and error:
13:34:55 # Disabled, indicate to re-raise the error.
13:34:55 raise reraise(type(error), error, _stacktrace)
13:34:55
13:34:55 total = self.total
13:34:55 if total is not None:
13:34:55 total -= 1
13:34:55
13:34:55 connect = self.connect
13:34:55 read = self.read
13:34:55 redirect = self.redirect
13:34:55 status_count = self.status
13:34:55 other = self.other
13:34:55 cause = "unknown"
13:34:55 status = None
13:34:55 redirect_location = None
13:34:55
13:34:55 if error and self._is_connection_error(error):
13:34:55 # Connect retry?
13:34:55 if connect is False:
13:34:55 raise reraise(type(error), error, _stacktrace)
13:34:55 elif connect is not None:
13:34:55 connect -= 1
13:34:55
13:34:55 elif error and self._is_read_error(error):
13:34:55 # Read retry?
13:34:55 if read is False or method is None or not self._is_method_retryable(method):
13:34:55 raise reraise(type(error), error, _stacktrace)
13:34:55 elif read is not None:
13:34:55 read -= 1
13:34:55
13:34:55 elif error:
13:34:55 # Other retry?
13:34:55 if other is not None:
13:34:55 other -= 1
13:34:55
13:34:55 elif response and response.get_redirect_location():
13:34:55 # Redirect retry?
13:34:55 if redirect is not None:
13:34:55 redirect -= 1
13:34:55 cause = "too many redirects"
13:34:55 response_redirect_location = response.get_redirect_location()
13:34:55 if response_redirect_location:
13:34:55 redirect_location = response_redirect_location
13:34:55 status = response.status
13:34:55
13:34:55 else:
13:34:55 # Incrementing because of a server error like a 500 in
13:34:55 # status_forcelist and the given method is in the allowed_methods
13:34:55 cause = ResponseError.GENERIC_ERROR
13:34:55 if response and response.status:
13:34:55 if status_count is not None:
13:34:55 status_count -= 1
13:34:55 cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status)
13:34:55 status = response.status
13:34:55
13:34:55 history = self.history + (
13:34:55 RequestHistory(method, url, error, status, redirect_location),
13:34:55 )
13:34:55
13:34:55 new_retry = self.new(
13:34:55 total=total,
13:34:55 connect=connect,
13:34:55 read=read,
13:34:55 redirect=redirect,
13:34:55 status=status_count,
13:34:55 other=other,
13:34:55 history=history,
13:34:55 )
13:34:55
13:34:55 if new_retry.is_exhausted():
13:34:55 reason = error or ResponseError(cause)
13:34:55 > raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type]
13:34:55 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
13:34:55 E urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=8191): Max retries exceeded with url: /rests/data/transportpce-portmapping:network/nodes=XPDRA01/mapping=XPDR1-CLIENT3 (Caused by NewConnectionError("HTTPConnection(host='localhost', port=8191): Failed to establish a new connection: [Errno 111] Connection refused"))
13:34:55
13:34:55 ../.tox/tests121/lib/python3.11/site-packages/urllib3/util/retry.py:535: MaxRetryError
13:34:55
13:34:55 During handling of the above exception, another exception occurred:
13:34:55
13:34:55 self =
13:34:55
13:34:55 def test_14_xpdr_portmapping_CLIENT3(self):
13:34:55 > response = test_utils.get_portmapping_node_attr("XPDRA01", "mapping", "XPDR1-CLIENT3")
13:34:55 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
13:34:55
13:34:55 transportpce_tests/1.2.1/test01_portmapping.py:170:
13:34:55 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
13:34:55 transportpce_tests/common/test_utils.py:519: in get_portmapping_node_attr
13:34:55 response = get_request(target_url)
13:34:55 ^^^^^^^^^^^^^^^^^^^^^^^
13:34:55 transportpce_tests/common/test_utils.py:117: in get_request
13:34:55 return requests.request(
13:34:55 ../.tox/tests121/lib/python3.11/site-packages/requests/api.py:59: in request
13:34:55 return session.request(method=method, url=url, **kwargs)
13:34:55 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
13:34:55 ../.tox/tests121/lib/python3.11/site-packages/requests/sessions.py:589: in request
13:34:55 resp = self.send(prep, **send_kwargs)
13:34:55 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
13:34:55 ../.tox/tests121/lib/python3.11/site-packages/requests/sessions.py:703: in send
13:34:55 r = adapter.send(request, **kwargs)
13:34:55 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
13:34:55 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
13:34:55
13:34:55 self =
13:34:55 request = , stream = False
13:34:55 timeout = Timeout(connect=30, read=30, total=None), verify = True, cert = None
13:34:55 proxies = OrderedDict()
13:34:55
13:34:55 def send(
13:34:55 self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None
13:34:55 ):
13:34:55 """Sends PreparedRequest object. Returns Response object.
13:34:55
13:34:55 :param request: The :class:`PreparedRequest ` being sent.
13:34:55 :param stream: (optional) Whether to stream the request content.
13:34:55 :param timeout: (optional) How long to wait for the server to send
13:34:55 data before giving up, as a float, or a :ref:`(connect timeout,
13:34:55 read timeout) ` tuple.
13:34:55 )
13:34:55 elif isinstance(timeout, TimeoutSauce):
13:34:55 pass
13:34:55 else:
13:34:55 timeout = TimeoutSauce(connect=timeout, read=timeout)
13:34:55
13:34:55 try:
13:34:55 resp = conn.urlopen(
13:34:55 method=request.method,
13:34:55 url=url,
13:34:55 body=request.body,
13:34:55 headers=request.headers,
13:34:55 redirect=False,
13:34:55 assert_same_host=False,
13:34:55 preload_content=False,
13:34:55 decode_content=False,
13:34:55 retries=self.max_retries,
13:34:55 timeout=timeout,
13:34:55 chunked=chunked,
13:34:55 )
13:34:55
13:34:55 except (ProtocolError, OSError) as err:
13:34:55 raise ConnectionError(err, request=request)
13:34:55
13:34:55 except MaxRetryError as e:
13:34:55 if isinstance(e.reason, ConnectTimeoutError):
13:34:55 # TODO: Remove this in 3.0.0: see #2811
13:34:55 if not isinstance(e.reason, NewConnectionError):
13:34:55 raise ConnectTimeout(e, request=request)
13:34:55
13:34:55 if isinstance(e.reason, ResponseError):
13:34:55 raise RetryError(e, request=request)
13:34:55
13:34:55 if isinstance(e.reason, _ProxyError):
13:34:55 raise ProxyError(e, request=request)
13:34:55
13:34:55 if isinstance(e.reason, _SSLError):
13:34:55 # This branch is for urllib3 v1.22 and later.
13:34:55 raise SSLError(e, request=request)
13:34:55
13:34:55 > raise ConnectionError(e, request=request)
13:34:55 E requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=8191): Max retries exceeded with url: /rests/data/transportpce-portmapping:network/nodes=XPDRA01/mapping=XPDR1-CLIENT3 (Caused by NewConnectionError("HTTPConnection(host='localhost', port=8191): Failed to establish a new connection: [Errno 111] Connection refused"))
13:34:55
13:34:55 ../.tox/tests121/lib/python3.11/site-packages/requests/adapters.py:677: ConnectionError
13:34:55 ----------------------------- Captured stdout call -----------------------------
13:34:55 execution of test_14_xpdr_portmapping_CLIENT3
13:34:55 _________ TestTransportPCEPortmapping.test_15_xpdr_portmapping_CLIENT4 _________
13:34:55
13:34:55 self =
13:34:55
13:34:55 def _new_conn(self) -> socket.socket:
13:34:55 """Establish a socket connection and set nodelay settings on it.
13:34:55
13:34:55 :return: New socket connection.
13:34:55 """
13:34:55 try:
13:34:55 > sock = connection.create_connection(
13:34:55 (self._dns_host, self.port),
13:34:55 self.timeout,
13:34:55 source_address=self.source_address,
13:34:55 socket_options=self.socket_options,
13:34:55 )
13:34:55
13:34:55 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:204:
13:34:55 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
13:34:55 ../.tox/tests121/lib/python3.11/site-packages/urllib3/util/connection.py:85: in create_connection
13:34:55 raise err
13:34:55 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
13:34:55
13:34:55 address = ('localhost', 8191), timeout = 30, source_address = None
13:34:55 socket_options = [(6, 1, 1)]
13:34:55
13:34:55 def create_connection(
13:34:55 address: tuple[str, int],
13:34:55 timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT,
13:34:55 source_address: tuple[str, int] | None = None,
13:34:55 socket_options: _TYPE_SOCKET_OPTIONS | None = None,
13:34:55 ) -> socket.socket:
13:34:55 """Connect to *address* and return the socket object.
13:34:55
13:34:55 Convenience function. Connect to *address* (a 2-tuple ``(host,
13:34:55 port)``) and return the socket object. Passing the optional
13:34:55 *timeout* parameter will set the timeout on the socket instance
13:34:55 before attempting to connect. If no *timeout* is supplied, the
13:34:55 global default timeout setting returned by :func:`socket.getdefaulttimeout`
13:34:55 is used. If *source_address* is set it must be a tuple of (host, port)
13:34:55 for the socket to bind as a source address before making the connection.
13:34:55 An host of '' or port 0 tells the OS to use the default.
13:34:55 """
13:34:55
13:34:55 host, port = address
13:34:55 if host.startswith("["):
13:34:55 host = host.strip("[]")
13:34:55 err = None
13:34:55
13:34:55 # Using the value from allowed_gai_family() in the context of getaddrinfo lets
13:34:55 # us select whether to work with IPv4 DNS records, IPv6 records, or both.
13:34:55 # The original create_connection function always returns all records.
13:34:55 family = allowed_gai_family()
13:34:55
13:34:55 try:
13:34:55 host.encode("idna")
13:34:55 except UnicodeError:
13:34:55 raise LocationParseError(f"'{host}', label empty or too long") from None
13:34:55
13:34:55 for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM):
13:34:55 af, socktype, proto, canonname, sa = res
13:34:55 sock = None
13:34:55 try:
13:34:55 sock = socket.socket(af, socktype, proto)
13:34:55
13:34:55 # If provided, set socket level options before connecting.
13:34:55 _set_socket_options(sock, socket_options)
13:34:55
13:34:55 if timeout is not _DEFAULT_TIMEOUT:
13:34:55 sock.settimeout(timeout)
13:34:55 if source_address:
13:34:55 sock.bind(source_address)
13:34:55 > sock.connect(sa)
13:34:55 E ConnectionRefusedError: [Errno 111] Connection refused
13:34:55
13:34:55 ../.tox/tests121/lib/python3.11/site-packages/urllib3/util/connection.py:73: ConnectionRefusedError
13:34:55
13:34:55 The above exception was the direct cause of the following exception:
13:34:55
13:34:55 self =
13:34:55 method = 'GET'
13:34:55 url = '/rests/data/transportpce-portmapping:network/nodes=XPDRA01/mapping=XPDR1-CLIENT4'
13:34:55 body = None
13:34:55 headers = {'User-Agent': 'python-requests/2.32.5', 'Accept-Encoding': 'gzip, deflate', 'Accept': 'application/json', 'Connection': 'keep-alive', 'Content-Type': 'application/json', 'Authorization': 'Basic YWRtaW46YWRtaW4='}
13:34:55 retries = Retry(total=0, connect=None, read=False, redirect=None, status=None)
13:34:55 redirect = False, assert_same_host = False
13:34:55 timeout = Timeout(connect=30, read=30, total=None), pool_timeout = None
13:34:55 release_conn = False, chunked = False, body_pos = None, preload_content = False
13:34:55 decode_content = False, response_kw = {}
13:34:55 parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/rests/data/transportpce-portmapping:network/nodes=XPDRA01/mapping=XPDR1-CLIENT4', query=None, fragment=None)
13:34:55 destination_scheme = None, conn = None, release_this_conn = True
13:34:55 http_tunnel_required = False, err = None, clean_exit = False
13:34:55
13:34:55 def urlopen( # type: ignore[override]
13:34:55 self,
13:34:55 method: str,
13:34:55 url: str,
13:34:55 body: _TYPE_BODY | None = None,
13:34:55 headers: typing.Mapping[str, str] | None = None,
13:34:55 retries: Retry | bool | int | None = None,
13:34:55 redirect: bool = True,
13:34:55 assert_same_host: bool = True,
13:34:55 timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT,
13:34:55 pool_timeout: int | None = None,
13:34:55 release_conn: bool | None = None,
13:34:55 chunked: bool = False,
13:34:55 body_pos: _TYPE_BODY_POSITION | None = None,
13:34:55 preload_content: bool = True,
13:34:55 decode_content: bool = True,
13:34:55 **response_kw: typing.Any,
13:34:55 ) -> BaseHTTPResponse:
13:34:55 """
13:34:55 Get a connection from the pool and perform an HTTP request. This is the
13:34:55 lowest level call for making a request, so you'll need to specify all
13:34:55 the raw details.
13:34:55
13:34:55 .. note::
13:34:55
13:34:55 More commonly, it's appropriate to use a convenience method
13:34:55 such as :meth:`request`.
13:34:55
13:34:55 .. note::
13:34:55
13:34:55 `release_conn` will only behave as expected if
13:34:55 `preload_content=False` because we want to make
13:34:55 `preload_content=False` the default behaviour someday soon without
13:34:55 breaking backwards compatibility.
13:34:55
13:34:55 :param method:
13:34:55 HTTP request method (such as GET, POST, PUT, etc.)
13:34:55
13:34:55 :param url:
13:34:55 The URL to perform the request on.
13:34:55
13:34:55 :param body:
13:34:55 Data to send in the request body, either :class:`str`, :class:`bytes`,
13:34:55 an iterable of :class:`str`/:class:`bytes`, or a file-like object.
13:34:55
13:34:55 :param headers:
13:34:55 Dictionary of custom headers to send, such as User-Agent,
13:34:55 If-None-Match, etc. If None, pool headers are used. If provided,
13:34:55 these headers completely replace any pool-specific headers.
13:34:55
13:34:55 :param retries:
13:34:55 Configure the number of retries to allow before raising a
13:34:55 :class:`~urllib3.exceptions.MaxRetryError` exception.
13:34:55
13:34:55 If ``None`` (default) will retry 3 times, see ``Retry.DEFAULT``. Pass a
13:34:55 :class:`~urllib3.util.retry.Retry` object for fine-grained control
13:34:55 over different types of retries.
13:34:55 Pass an integer number to retry connection errors that many times,
13:34:55 but no other types of errors. Pass zero to never retry.
13:34:55
13:34:55 If ``False``, then retries are disabled and any exception is raised
13:34:55 immediately. Also, instead of raising a MaxRetryError on redirects,
13:34:55 the redirect response will be returned.
13:34:55
13:34:55 :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int.
13:34:55
13:34:55 :param redirect:
13:34:55 If True, automatically handle redirects (status codes 301, 302,
13:34:55 303, 307, 308). Each redirect counts as a retry. Disabling retries
13:34:55 will disable redirect, too.
13:34:55
13:34:55 :param assert_same_host:
13:34:55 If ``True``, will make sure that the host of the pool requests is
13:34:55 consistent else will raise HostChangedError. When ``False``, you can
13:34:55 use the pool on an HTTP proxy and request foreign hosts.
13:34:55
13:34:55 :param timeout:
13:34:55 If specified, overrides the default timeout for this one
13:34:55 request. It may be a float (in seconds) or an instance of
13:34:55 :class:`urllib3.util.Timeout`.
13:34:55 13:34:55 :param pool_timeout: 13:34:55 If set and the pool is set to block=True, then this method will 13:34:55 block for ``pool_timeout`` seconds and raise EmptyPoolError if no 13:34:55 connection is available within the time period. 13:34:55 13:34:55 :param bool preload_content: 13:34:55 If True, the response's body will be preloaded into memory. 13:34:55 13:34:55 :param bool decode_content: 13:34:55 If True, will attempt to decode the body based on the 13:34:55 'content-encoding' header. 13:34:55 13:34:55 :param release_conn: 13:34:55 If False, then the urlopen call will not release the connection 13:34:55 back into the pool once a response is received (but will release if 13:34:55 you read the entire contents of the response such as when 13:34:55 `preload_content=True`). This is useful if you're not preloading 13:34:55 the response's content immediately. You will need to call 13:34:55 ``r.release_conn()`` on the response ``r`` to return the connection 13:34:55 back into the pool. If None, it takes the value of ``preload_content`` 13:34:55 which defaults to ``True``. 13:34:55 13:34:55 :param bool chunked: 13:34:55 If True, urllib3 will send the body using chunked transfer 13:34:55 encoding. Otherwise, urllib3 will send the body using the standard 13:34:55 content-length form. Defaults to False. 13:34:55 13:34:55 :param int body_pos: 13:34:55 Position to seek to in file-like body in the event of a retry or 13:34:55 redirect. Typically this won't need to be set because urllib3 will 13:34:55 auto-populate the value when needed. 
13:34:55 """ 13:34:55 parsed_url = parse_url(url) 13:34:55 destination_scheme = parsed_url.scheme 13:34:55 13:34:55 if headers is None: 13:34:55 headers = self.headers 13:34:55 13:34:55 if not isinstance(retries, Retry): 13:34:55 retries = Retry.from_int(retries, redirect=redirect, default=self.retries) 13:34:55 13:34:55 if release_conn is None: 13:34:55 release_conn = preload_content 13:34:55 13:34:55 # Check host 13:34:55 if assert_same_host and not self.is_same_host(url): 13:34:55 raise HostChangedError(self, url, retries) 13:34:55 13:34:55 # Ensure that the URL we're connecting to is properly encoded 13:34:55 if url.startswith("/"): 13:34:55 url = to_str(_encode_target(url)) 13:34:55 else: 13:34:55 url = to_str(parsed_url.url) 13:34:55 13:34:55 conn = None 13:34:55 13:34:55 # Track whether `conn` needs to be released before 13:34:55 # returning/raising/recursing. Update this variable if necessary, and 13:34:55 # leave `release_conn` constant throughout the function. That way, if 13:34:55 # the function recurses, the original value of `release_conn` will be 13:34:55 # passed down into the recursive call, and its value will be respected. 13:34:55 # 13:34:55 # See issue #651 [1] for details. 13:34:55 # 13:34:55 # [1] 13:34:55 release_this_conn = release_conn 13:34:55 13:34:55 http_tunnel_required = connection_requires_http_tunnel( 13:34:55 self.proxy, self.proxy_config, destination_scheme 13:34:55 ) 13:34:55 13:34:55 # Merge the proxy headers. Only done when not using HTTP CONNECT. We 13:34:55 # have to copy the headers dict so we can safely change it without those 13:34:55 # changes being reflected in anyone else's copy. 13:34:55 if not http_tunnel_required: 13:34:55 headers = headers.copy() # type: ignore[attr-defined] 13:34:55 headers.update(self.proxy_headers) # type: ignore[union-attr] 13:34:55 13:34:55 # Must keep the exception bound to a separate variable or else Python 3 13:34:55 # complains about UnboundLocalError. 
13:34:55 err = None 13:34:55 13:34:55 # Keep track of whether we cleanly exited the except block. This 13:34:55 # ensures we do proper cleanup in finally. 13:34:55 clean_exit = False 13:34:55 13:34:55 # Rewind body position, if needed. Record current position 13:34:55 # for future rewinds in the event of a redirect/retry. 13:34:55 body_pos = set_file_position(body, body_pos) 13:34:55 13:34:55 try: 13:34:55 # Request a connection from the queue. 13:34:55 timeout_obj = self._get_timeout(timeout) 13:34:55 conn = self._get_conn(timeout=pool_timeout) 13:34:55 13:34:55 conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] 13:34:55 13:34:55 # Is this a closed/new connection that requires CONNECT tunnelling? 13:34:55 if self.proxy is not None and http_tunnel_required and conn.is_closed: 13:34:55 try: 13:34:55 self._prepare_proxy(conn) 13:34:55 except (BaseSSLError, OSError, SocketTimeout) as e: 13:34:55 self._raise_timeout( 13:34:55 err=e, url=self.proxy.url, timeout_value=conn.timeout 13:34:55 ) 13:34:55 raise 13:34:55 13:34:55 # If we're going to release the connection in ``finally:``, then 13:34:55 # the response doesn't need to know about the connection. Otherwise 13:34:55 # it will also try to release it and we'll have a double-release 13:34:55 # mess. 
13:34:55 response_conn = conn if not release_conn else None 13:34:55 13:34:55 # Make the request on the HTTPConnection object 13:34:55 > response = self._make_request( 13:34:55 conn, 13:34:55 method, 13:34:55 url, 13:34:55 timeout=timeout_obj, 13:34:55 body=body, 13:34:55 headers=headers, 13:34:55 chunked=chunked, 13:34:55 retries=retries, 13:34:55 response_conn=response_conn, 13:34:55 preload_content=preload_content, 13:34:55 decode_content=decode_content, 13:34:55 **response_kw, 13:34:55 ) 13:34:55 13:34:55 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connectionpool.py:787: 13:34:55 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 13:34:55 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connectionpool.py:493: in _make_request 13:34:55 conn.request( 13:34:55 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:500: in request 13:34:55 self.endheaders() 13:34:55 /opt/pyenv/versions/3.11.10/lib/python3.11/http/client.py:1298: in endheaders 13:34:55 self._send_output(message_body, encode_chunked=encode_chunked) 13:34:55 /opt/pyenv/versions/3.11.10/lib/python3.11/http/client.py:1058: in _send_output 13:34:55 self.send(msg) 13:34:55 /opt/pyenv/versions/3.11.10/lib/python3.11/http/client.py:996: in send 13:34:55 self.connect() 13:34:55 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:331: in connect 13:34:55 self.sock = self._new_conn() 13:34:55 ^^^^^^^^^^^^^^^^ 13:34:55 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 13:34:55 13:34:55 self = 13:34:55 13:34:55 def _new_conn(self) -> socket.socket: 13:34:55 """Establish a socket connection and set nodelay settings on it. 13:34:55 13:34:55 :return: New socket connection. 
13:34:55         """
13:34:55         try:
13:34:55             sock = connection.create_connection(
13:34:55                 (self._dns_host, self.port),
13:34:55                 self.timeout,
13:34:55                 source_address=self.source_address,
13:34:55                 socket_options=self.socket_options,
13:34:55             )
13:34:55         except socket.gaierror as e:
13:34:55             raise NameResolutionError(self.host, self, e) from e
13:34:55         except SocketTimeout as e:
13:34:55             raise ConnectTimeoutError(
13:34:55                 self,
13:34:55                 f"Connection to {self.host} timed out. (connect timeout={self.timeout})",
13:34:55             ) from e
13:34:55 
13:34:55         except OSError as e:
13:34:55 >           raise NewConnectionError(
13:34:55                 self, f"Failed to establish a new connection: {e}"
13:34:55             ) from e
13:34:55 E           urllib3.exceptions.NewConnectionError: HTTPConnection(host='localhost', port=8191): Failed to establish a new connection: [Errno 111] Connection refused
13:34:55 
13:34:55 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:219: NewConnectionError
13:34:55 
13:34:55 The above exception was the direct cause of the following exception:
13:34:55 
13:34:55 self = 
13:34:55 request = , stream = False
13:34:55 timeout = Timeout(connect=30, read=30, total=None), verify = True, cert = None
13:34:55 proxies = OrderedDict()
13:34:55 
13:34:55     def send(
13:34:55         self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None
13:34:55     ):
13:34:55         """Sends PreparedRequest object. Returns Response object.
13:34:55 
13:34:55         :param request: The :class:`PreparedRequest ` being sent.
13:34:55         :param stream: (optional) Whether to stream the request content.
13:34:55         :param timeout: (optional) How long to wait for the server to send
13:34:55             data before giving up, as a float, or a :ref:`(connect timeout,
13:34:55             read timeout) ` tuple.
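In the `_new_conn` frame above, urllib3 catches the low-level `OSError` and re-raises it as `NewConnectionError` using `raise ... from e`; the chained `__cause__` is what pytest renders as "The above exception was the direct cause of the following exception". A self-contained sketch of that same chaining pattern, with `NewConnectionSketchError` as a made-up stand-in for urllib3's class:

```python
import socket

class NewConnectionSketchError(OSError):
    """Stand-in for urllib3's NewConnectionError; the name is illustrative."""

def connect_or_wrap(host: str, port: int) -> socket.socket:
    try:
        return socket.create_connection((host, port), timeout=5)
    except OSError as e:
        # `from e` sets __cause__ on the new exception, preserving the
        # original ConnectionRefusedError in the chained traceback.
        raise NewConnectionSketchError(
            f"Failed to establish a new connection: {e}"
        ) from e
```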
13:34:55 :type timeout: float or tuple or urllib3 Timeout object 13:34:55 :param verify: (optional) Either a boolean, in which case it controls whether 13:34:55 we verify the server's TLS certificate, or a string, in which case it 13:34:55 must be a path to a CA bundle to use 13:34:55 :param cert: (optional) Any user-provided SSL certificate to be trusted. 13:34:55 :param proxies: (optional) The proxies dictionary to apply to the request. 13:34:55 :rtype: requests.Response 13:34:55 """ 13:34:55 13:34:55 try: 13:34:55 conn = self.get_connection_with_tls_context( 13:34:55 request, verify, proxies=proxies, cert=cert 13:34:55 ) 13:34:55 except LocationValueError as e: 13:34:55 raise InvalidURL(e, request=request) 13:34:55 13:34:55 self.cert_verify(conn, request.url, verify, cert) 13:34:55 url = self.request_url(request, proxies) 13:34:55 self.add_headers( 13:34:55 request, 13:34:55 stream=stream, 13:34:55 timeout=timeout, 13:34:55 verify=verify, 13:34:55 cert=cert, 13:34:55 proxies=proxies, 13:34:55 ) 13:34:55 13:34:55 chunked = not (request.body is None or "Content-Length" in request.headers) 13:34:55 13:34:55 if isinstance(timeout, tuple): 13:34:55 try: 13:34:55 connect, read = timeout 13:34:55 timeout = TimeoutSauce(connect=connect, read=read) 13:34:55 except ValueError: 13:34:55 raise ValueError( 13:34:55 f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 13:34:55 f"or a single float to set both timeouts to the same value." 
13:34:55 ) 13:34:55 elif isinstance(timeout, TimeoutSauce): 13:34:55 pass 13:34:55 else: 13:34:55 timeout = TimeoutSauce(connect=timeout, read=timeout) 13:34:55 13:34:55 try: 13:34:55 > resp = conn.urlopen( 13:34:55 method=request.method, 13:34:55 url=url, 13:34:55 body=request.body, 13:34:55 headers=request.headers, 13:34:55 redirect=False, 13:34:55 assert_same_host=False, 13:34:55 preload_content=False, 13:34:55 decode_content=False, 13:34:55 retries=self.max_retries, 13:34:55 timeout=timeout, 13:34:55 chunked=chunked, 13:34:55 ) 13:34:55 13:34:55 ../.tox/tests121/lib/python3.11/site-packages/requests/adapters.py:644: 13:34:55 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 13:34:55 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connectionpool.py:841: in urlopen 13:34:55 retries = retries.increment( 13:34:55 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 13:34:55 13:34:55 self = Retry(total=0, connect=None, read=False, redirect=None, status=None) 13:34:55 method = 'GET' 13:34:55 url = '/rests/data/transportpce-portmapping:network/nodes=XPDRA01/mapping=XPDR1-CLIENT4' 13:34:55 response = None 13:34:55 error = NewConnectionError("HTTPConnection(host='localhost', port=8191): Failed to establish a new connection: [Errno 111] Connection refused") 13:34:55 _pool = 13:34:55 _stacktrace = 13:34:55 13:34:55 def increment( 13:34:55 self, 13:34:55 method: str | None = None, 13:34:55 url: str | None = None, 13:34:55 response: BaseHTTPResponse | None = None, 13:34:55 error: Exception | None = None, 13:34:55 _pool: ConnectionPool | None = None, 13:34:55 _stacktrace: TracebackType | None = None, 13:34:55 ) -> Self: 13:34:55 """Return a new Retry object with incremented retry counters. 13:34:55 13:34:55 :param response: A response object, or None, if the server did not 13:34:55 return a response. 
13:34:55 :type response: :class:`~urllib3.response.BaseHTTPResponse` 13:34:55 :param Exception error: An error encountered during the request, or 13:34:55 None if the response was received successfully. 13:34:55 13:34:55 :return: A new ``Retry`` object. 13:34:55 """ 13:34:55 if self.total is False and error: 13:34:55 # Disabled, indicate to re-raise the error. 13:34:55 raise reraise(type(error), error, _stacktrace) 13:34:55 13:34:55 total = self.total 13:34:55 if total is not None: 13:34:55 total -= 1 13:34:55 13:34:55 connect = self.connect 13:34:55 read = self.read 13:34:55 redirect = self.redirect 13:34:55 status_count = self.status 13:34:55 other = self.other 13:34:55 cause = "unknown" 13:34:55 status = None 13:34:55 redirect_location = None 13:34:55 13:34:55 if error and self._is_connection_error(error): 13:34:55 # Connect retry? 13:34:55 if connect is False: 13:34:55 raise reraise(type(error), error, _stacktrace) 13:34:55 elif connect is not None: 13:34:55 connect -= 1 13:34:55 13:34:55 elif error and self._is_read_error(error): 13:34:55 # Read retry? 13:34:55 if read is False or method is None or not self._is_method_retryable(method): 13:34:55 raise reraise(type(error), error, _stacktrace) 13:34:55 elif read is not None: 13:34:55 read -= 1 13:34:55 13:34:55 elif error: 13:34:55 # Other retry? 13:34:55 if other is not None: 13:34:55 other -= 1 13:34:55 13:34:55 elif response and response.get_redirect_location(): 13:34:55 # Redirect retry? 
13:34:55             if redirect is not None:
13:34:55                 redirect -= 1
13:34:55             cause = "too many redirects"
13:34:55             response_redirect_location = response.get_redirect_location()
13:34:55             if response_redirect_location:
13:34:55                 redirect_location = response_redirect_location
13:34:55             status = response.status
13:34:55 
13:34:55         else:
13:34:55             # Incrementing because of a server error like a 500 in
13:34:55             # status_forcelist and the given method is in the allowed_methods
13:34:55             cause = ResponseError.GENERIC_ERROR
13:34:55             if response and response.status:
13:34:55                 if status_count is not None:
13:34:55                     status_count -= 1
13:34:55                 cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status)
13:34:55             status = response.status
13:34:55 
13:34:55         history = self.history + (
13:34:55             RequestHistory(method, url, error, status, redirect_location),
13:34:55         )
13:34:55 
13:34:55         new_retry = self.new(
13:34:55             total=total,
13:34:55             connect=connect,
13:34:55             read=read,
13:34:55             redirect=redirect,
13:34:55             status=status_count,
13:34:55             other=other,
13:34:55             history=history,
13:34:55         )
13:34:55 
13:34:55         if new_retry.is_exhausted():
13:34:55             reason = error or ResponseError(cause)
13:34:55 >           raise MaxRetryError(_pool, url, reason) from reason  # type: ignore[arg-type]
13:34:55             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
13:34:55 E           urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=8191): Max retries exceeded with url: /rests/data/transportpce-portmapping:network/nodes=XPDRA01/mapping=XPDR1-CLIENT4 (Caused by NewConnectionError("HTTPConnection(host='localhost', port=8191): Failed to establish a new connection: [Errno 111] Connection refused"))
13:34:55 
13:34:55 ../.tox/tests121/lib/python3.11/site-packages/urllib3/util/retry.py:535: MaxRetryError
13:34:55 
13:34:55 During handling of the above exception, another exception occurred:
13:34:55 
13:34:55 self = 
13:34:55 
13:34:55     def test_15_xpdr_portmapping_CLIENT4(self):
13:34:55 >       response = test_utils.get_portmapping_node_attr("XPDRA01", "mapping", "XPDR1-CLIENT4")
13:34:55         ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
13:34:55 
13:34:55 transportpce_tests/1.2.1/test01_portmapping.py:182: 
13:34:55 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
13:34:55 transportpce_tests/common/test_utils.py:519: in get_portmapping_node_attr
13:34:55     response = get_request(target_url)
13:34:55                ^^^^^^^^^^^^^^^^^^^^^^^
13:34:55 transportpce_tests/common/test_utils.py:117: in get_request
13:34:55     return requests.request(
13:34:55 ../.tox/tests121/lib/python3.11/site-packages/requests/api.py:59: in request
13:34:55     return session.request(method=method, url=url, **kwargs)
13:34:55            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
13:34:55 ../.tox/tests121/lib/python3.11/site-packages/requests/sessions.py:589: in request
13:34:55     resp = self.send(prep, **send_kwargs)
13:34:55            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
13:34:55 ../.tox/tests121/lib/python3.11/site-packages/requests/sessions.py:703: in send
13:34:55     r = adapter.send(request, **kwargs)
13:34:55         ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
13:34:55 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
13:34:55 
13:34:55 self = 
13:34:55 request = , stream = False
13:34:55 timeout = Timeout(connect=30, read=30, total=None), verify = True, cert = None
13:34:55 proxies = OrderedDict()
13:34:55 
13:34:55     def send(
13:34:55         self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None
13:34:55     ):
13:34:55         """Sends PreparedRequest object. Returns Response object.
13:34:55 
13:34:55         :param request: The :class:`PreparedRequest ` being sent.
13:34:55         :param stream: (optional) Whether to stream the request content.
13:34:55         :param timeout: (optional) How long to wait for the server to send
13:34:55             data before giving up, as a float, or a :ref:`(connect timeout,
13:34:55             read timeout) ` tuple.
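The `Retry(total=0, connect=None, read=False, ...)` object in the frames above has its budget exhausted by a single connection error: `increment()` produces a copy with `total` decremented to -1, `is_exhausted()` then trips, and `MaxRetryError` is raised `from` the underlying `NewConnectionError`. A simplified model of that bookkeeping, illustrative only and not urllib3's actual class:

```python
class RetrySketch:
    """Toy model of urllib3's Retry counter logic (names are illustrative)."""

    def __init__(self, total):
        self.total = total

    def is_exhausted(self) -> bool:
        return self.total < 0

    def increment(self, error: Exception) -> "RetrySketch":
        # Mirrors the log above: with a total budget of 0, one connection
        # error drops total to -1 and escalates to "max retries exceeded",
        # chained to the original error via `from`.
        if self.total is False and error:
            raise error                      # retries disabled: re-raise as-is
        new = RetrySketch(self.total - 1)
        if new.is_exhausted():
            raise RuntimeError(f"Max retries exceeded: {error}") from error
        return new
```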
13:34:55 :type timeout: float or tuple or urllib3 Timeout object 13:34:55 :param verify: (optional) Either a boolean, in which case it controls whether 13:34:55 we verify the server's TLS certificate, or a string, in which case it 13:34:55 must be a path to a CA bundle to use 13:34:55 :param cert: (optional) Any user-provided SSL certificate to be trusted. 13:34:55 :param proxies: (optional) The proxies dictionary to apply to the request. 13:34:55 :rtype: requests.Response 13:34:55 """ 13:34:55 13:34:55 try: 13:34:55 conn = self.get_connection_with_tls_context( 13:34:55 request, verify, proxies=proxies, cert=cert 13:34:55 ) 13:34:55 except LocationValueError as e: 13:34:55 raise InvalidURL(e, request=request) 13:34:55 13:34:55 self.cert_verify(conn, request.url, verify, cert) 13:34:55 url = self.request_url(request, proxies) 13:34:55 self.add_headers( 13:34:55 request, 13:34:55 stream=stream, 13:34:55 timeout=timeout, 13:34:55 verify=verify, 13:34:55 cert=cert, 13:34:55 proxies=proxies, 13:34:55 ) 13:34:55 13:34:55 chunked = not (request.body is None or "Content-Length" in request.headers) 13:34:55 13:34:55 if isinstance(timeout, tuple): 13:34:55 try: 13:34:55 connect, read = timeout 13:34:55 timeout = TimeoutSauce(connect=connect, read=read) 13:34:55 except ValueError: 13:34:55 raise ValueError( 13:34:55 f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 13:34:55 f"or a single float to set both timeouts to the same value." 
13:34:55 ) 13:34:55 elif isinstance(timeout, TimeoutSauce): 13:34:55 pass 13:34:55 else: 13:34:55 timeout = TimeoutSauce(connect=timeout, read=timeout) 13:34:55 13:34:55 try: 13:34:55 resp = conn.urlopen( 13:34:55 method=request.method, 13:34:55 url=url, 13:34:55 body=request.body, 13:34:55 headers=request.headers, 13:34:55 redirect=False, 13:34:55 assert_same_host=False, 13:34:55 preload_content=False, 13:34:55 decode_content=False, 13:34:55 retries=self.max_retries, 13:34:55 timeout=timeout, 13:34:55 chunked=chunked, 13:34:55 ) 13:34:55 13:34:55 except (ProtocolError, OSError) as err: 13:34:55 raise ConnectionError(err, request=request) 13:34:55 13:34:55 except MaxRetryError as e: 13:34:55 if isinstance(e.reason, ConnectTimeoutError): 13:34:55 # TODO: Remove this in 3.0.0: see #2811 13:34:55 if not isinstance(e.reason, NewConnectionError): 13:34:55 raise ConnectTimeout(e, request=request) 13:34:55 13:34:55 if isinstance(e.reason, ResponseError): 13:34:55 raise RetryError(e, request=request) 13:34:55 13:34:55 if isinstance(e.reason, _ProxyError): 13:34:55 raise ProxyError(e, request=request) 13:34:55 13:34:55 if isinstance(e.reason, _SSLError): 13:34:55 # This branch is for urllib3 v1.22 and later. 
13:34:55                     raise SSLError(e, request=request)
13:34:55 
13:34:55 >           raise ConnectionError(e, request=request)
13:34:55 E           requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=8191): Max retries exceeded with url: /rests/data/transportpce-portmapping:network/nodes=XPDRA01/mapping=XPDR1-CLIENT4 (Caused by NewConnectionError("HTTPConnection(host='localhost', port=8191): Failed to establish a new connection: [Errno 111] Connection refused"))
13:34:55 
13:34:55 ../.tox/tests121/lib/python3.11/site-packages/requests/adapters.py:677: ConnectionError
13:34:55 ----------------------------- Captured stdout call -----------------------------
13:34:55 execution of test_15_xpdr_portmapping_CLIENT4
13:34:55 ________ TestTransportPCEPortmapping.test_16_xpdr_device_disconnection _________
13:34:55 
13:34:55 self = 
13:34:55 
13:34:55     def _new_conn(self) -> socket.socket:
13:34:55         """Establish a socket connection and set nodelay settings on it.
13:34:55 
13:34:55         :return: New socket connection.
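Both test_15 and test_16 trace back to the same root cause: the RESTCONF port on localhost:8191 was already unreachable when the requests were made, so every subsequent request in the module fails identically after requests translates urllib3's `MaxRetryError` into its own `ConnectionError`. A hedged sketch of a readiness probe a harness could run before issuing requests; `wait_for_port` is a hypothetical helper, not part of transportpce_tests/common/test_utils.py:

```python
import socket
import time

def wait_for_port(host: str, port: int, deadline_s: float = 60.0) -> bool:
    """Poll until host:port accepts TCP connections or the deadline passes."""
    end = time.monotonic() + deadline_s
    while time.monotonic() < end:
        try:
            # A successful connect means something is listening; close and
            # report readiness rather than leaving the socket open.
            with socket.create_connection((host, port), timeout=1):
                return True
        except OSError:
            time.sleep(0.5)    # endpoint not up yet; retry until deadline
    return False
```

Running such a probe once at module setup, and skipping the remaining tests when it returns False, would turn a wall of identical connection-refused tracebacks into a single clear "endpoint never came up" failure.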
13:34:55 """ 13:34:55 try: 13:34:55 > sock = connection.create_connection( 13:34:55 (self._dns_host, self.port), 13:34:55 self.timeout, 13:34:55 source_address=self.source_address, 13:34:55 socket_options=self.socket_options, 13:34:55 ) 13:34:55 13:34:55 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:204: 13:34:55 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 13:34:55 ../.tox/tests121/lib/python3.11/site-packages/urllib3/util/connection.py:85: in create_connection 13:34:55 raise err 13:34:55 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 13:34:55 13:34:55 address = ('localhost', 8191), timeout = 30, source_address = None 13:34:55 socket_options = [(6, 1, 1)] 13:34:55 13:34:55 def create_connection( 13:34:55 address: tuple[str, int], 13:34:55 timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 13:34:55 source_address: tuple[str, int] | None = None, 13:34:55 socket_options: _TYPE_SOCKET_OPTIONS | None = None, 13:34:55 ) -> socket.socket: 13:34:55 """Connect to *address* and return the socket object. 13:34:55 13:34:55 Convenience function. Connect to *address* (a 2-tuple ``(host, 13:34:55 port)``) and return the socket object. Passing the optional 13:34:55 *timeout* parameter will set the timeout on the socket instance 13:34:55 before attempting to connect. If no *timeout* is supplied, the 13:34:55 global default timeout setting returned by :func:`socket.getdefaulttimeout` 13:34:55 is used. If *source_address* is set it must be a tuple of (host, port) 13:34:55 for the socket to bind as a source address before making the connection. 13:34:55 An host of '' or port 0 tells the OS to use the default. 
13:34:55 """ 13:34:55 13:34:55 host, port = address 13:34:55 if host.startswith("["): 13:34:55 host = host.strip("[]") 13:34:55 err = None 13:34:55 13:34:55 # Using the value from allowed_gai_family() in the context of getaddrinfo lets 13:34:55 # us select whether to work with IPv4 DNS records, IPv6 records, or both. 13:34:55 # The original create_connection function always returns all records. 13:34:55 family = allowed_gai_family() 13:34:55 13:34:55 try: 13:34:55 host.encode("idna") 13:34:55 except UnicodeError: 13:34:55 raise LocationParseError(f"'{host}', label empty or too long") from None 13:34:55 13:34:55 for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM): 13:34:55 af, socktype, proto, canonname, sa = res 13:34:55 sock = None 13:34:55 try: 13:34:55 sock = socket.socket(af, socktype, proto) 13:34:55 13:34:55 # If provided, set socket level options before connecting. 13:34:55 _set_socket_options(sock, socket_options) 13:34:55 13:34:55 if timeout is not _DEFAULT_TIMEOUT: 13:34:55 sock.settimeout(timeout) 13:34:55 if source_address: 13:34:55 sock.bind(source_address) 13:34:55 > sock.connect(sa) 13:34:55 E ConnectionRefusedError: [Errno 111] Connection refused 13:34:55 13:34:55 ../.tox/tests121/lib/python3.11/site-packages/urllib3/util/connection.py:73: ConnectionRefusedError 13:34:55 13:34:55 The above exception was the direct cause of the following exception: 13:34:55 13:34:55 self = 13:34:55 method = 'DELETE' 13:34:55 url = '/rests/data/network-topology:network-topology/topology=topology-netconf/node=XPDRA01' 13:34:55 body = None 13:34:55 headers = {'User-Agent': 'python-requests/2.32.5', 'Accept-Encoding': 'gzip, deflate', 'Accept': 'application/json', 'Connection': 'keep-alive', 'Content-Type': 'application/json', 'Content-Length': '0', 'Authorization': 'Basic YWRtaW46YWRtaW4='} 13:34:55 retries = Retry(total=0, connect=None, read=False, redirect=None, status=None) 13:34:55 redirect = False, assert_same_host = False 13:34:55 timeout = 
Timeout(connect=30, read=30, total=None), pool_timeout = None
13:34:55 release_conn = False, chunked = False, body_pos = None, preload_content = False
13:34:55 decode_content = False, response_kw = {}
13:34:55 parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/rests/data/network-topology:network-topology/topology=topology-netconf/node=XPDRA01', query=None, fragment=None)
13:34:55 destination_scheme = None, conn = None, release_this_conn = True
13:34:55 http_tunnel_required = False, err = None, clean_exit = False
13:34:55
13:34:55 def urlopen(  # type: ignore[override]
13:34:55     self,
13:34:55     method: str,
13:34:55     url: str,
13:34:55     body: _TYPE_BODY | None = None,
13:34:55     headers: typing.Mapping[str, str] | None = None,
13:34:55     retries: Retry | bool | int | None = None,
13:34:55     redirect: bool = True,
13:34:55     assert_same_host: bool = True,
13:34:55     timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT,
13:34:55     pool_timeout: int | None = None,
13:34:55     release_conn: bool | None = None,
13:34:55     chunked: bool = False,
13:34:55     body_pos: _TYPE_BODY_POSITION | None = None,
13:34:55     preload_content: bool = True,
13:34:55     decode_content: bool = True,
13:34:55     **response_kw: typing.Any,
13:34:55 ) -> BaseHTTPResponse:
13:34:55     """
13:34:55     Get a connection from the pool and perform an HTTP request. This is the
13:34:55     lowest level call for making a request, so you'll need to specify all
13:34:55     the raw details.
13:34:55
13:34:55     .. note::
13:34:55
13:34:55        More commonly, it's appropriate to use a convenience method
13:34:55        such as :meth:`request`.
13:34:55
13:34:55     .. note::
13:34:55
13:34:55        `release_conn` will only behave as expected if
13:34:55        `preload_content=False` because we want to make
13:34:55        `preload_content=False` the default behaviour someday soon without
13:34:55        breaking backwards compatibility.
13:34:55
13:34:55     :param method:
13:34:55         HTTP request method (such as GET, POST, PUT, etc.)
13:34:55
13:34:55     :param url:
13:34:55         The URL to perform the request on.
13:34:55
13:34:55     :param body:
13:34:55         Data to send in the request body, either :class:`str`, :class:`bytes`,
13:34:55         an iterable of :class:`str`/:class:`bytes`, or a file-like object.
13:34:55
13:34:55     :param headers:
13:34:55         Dictionary of custom headers to send, such as User-Agent,
13:34:55         If-None-Match, etc. If None, pool headers are used. If provided,
13:34:55         these headers completely replace any pool-specific headers.
13:34:55
13:34:55     :param retries:
13:34:55         Configure the number of retries to allow before raising a
13:34:55         :class:`~urllib3.exceptions.MaxRetryError` exception.
13:34:55
13:34:55         If ``None`` (default) will retry 3 times, see ``Retry.DEFAULT``. Pass a
13:34:55         :class:`~urllib3.util.retry.Retry` object for fine-grained control
13:34:55         over different types of retries.
13:34:55         Pass an integer number to retry connection errors that many times,
13:34:55         but no other types of errors. Pass zero to never retry.
13:34:55
13:34:55         If ``False``, then retries are disabled and any exception is raised
13:34:55         immediately. Also, instead of raising a MaxRetryError on redirects,
13:34:55         the redirect response will be returned.
13:34:55
13:34:55     :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int.
13:34:55
13:34:55     :param redirect:
13:34:55         If True, automatically handle redirects (status codes 301, 302,
13:34:55         303, 307, 308). Each redirect counts as a retry. Disabling retries
13:34:55         will disable redirect, too.
13:34:55
13:34:55     :param assert_same_host:
13:34:55         If ``True``, will make sure that the host of the pool requests is
13:34:55         consistent else will raise HostChangedError. When ``False``, you can
13:34:55         use the pool on an HTTP proxy and request foreign hosts.
13:34:55
13:34:55     :param timeout:
13:34:55         If specified, overrides the default timeout for this one
13:34:55         request. It may be a float (in seconds) or an instance of
13:34:55         :class:`urllib3.util.Timeout`.
13:34:55
13:34:55     :param pool_timeout:
13:34:55         If set and the pool is set to block=True, then this method will
13:34:55         block for ``pool_timeout`` seconds and raise EmptyPoolError if no
13:34:55         connection is available within the time period.
13:34:55
13:34:55     :param bool preload_content:
13:34:55         If True, the response's body will be preloaded into memory.
13:34:55
13:34:55     :param bool decode_content:
13:34:55         If True, will attempt to decode the body based on the
13:34:55         'content-encoding' header.
13:34:55
13:34:55     :param release_conn:
13:34:55         If False, then the urlopen call will not release the connection
13:34:55         back into the pool once a response is received (but will release if
13:34:55         you read the entire contents of the response such as when
13:34:55         `preload_content=True`). This is useful if you're not preloading
13:34:55         the response's content immediately. You will need to call
13:34:55         ``r.release_conn()`` on the response ``r`` to return the connection
13:34:55         back into the pool. If None, it takes the value of ``preload_content``
13:34:55         which defaults to ``True``.
13:34:55
13:34:55     :param bool chunked:
13:34:55         If True, urllib3 will send the body using chunked transfer
13:34:55         encoding. Otherwise, urllib3 will send the body using the standard
13:34:55         content-length form. Defaults to False.
13:34:55
13:34:55     :param int body_pos:
13:34:55         Position to seek to in file-like body in the event of a retry or
13:34:55         redirect. Typically this won't need to be set because urllib3 will
13:34:55         auto-populate the value when needed.
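The retries semantics spelled out in the docstring above (``None`` falls back to the library default of 3 retries, an int retries connection errors that many times, ``False`` disables retrying entirely) can be modeled with a tiny stand-in class. ``MiniRetry`` and ``from_value`` are hypothetical names used only for illustration; they are not the urllib3 API, which does this coercion via ``Retry.from_int``.

```python
from dataclasses import dataclass

# Minimal stand-in for urllib3's Retry, illustrating only the coercion
# rules from the docstring above. Hypothetical names, not the real API.
@dataclass
class MiniRetry:
    total: object  # int, or False when retries are disabled

    @classmethod
    def from_value(cls, retries, default=None):
        # None -> library default (urllib3's Retry.DEFAULT allows 3 retries)
        if retries is None:
            return default if default is not None else cls(total=3)
        # An existing policy object passes through unchanged
        if isinstance(retries, cls):
            return retries
        # False -> disabled, exceptions re-raised immediately
        # int   -> retry connection errors that many times; 0 = never retry
        return cls(total=retries)
```

With this model, `MiniRetry.from_value(None).total` is 3, `MiniRetry.from_value(0).total` is 0 (fail on the first error, as in the run captured here), and `MiniRetry.from_value(False).total` is `False`.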
13:34:55 """ 13:34:55 parsed_url = parse_url(url) 13:34:55 destination_scheme = parsed_url.scheme 13:34:55 13:34:55 if headers is None: 13:34:55 headers = self.headers 13:34:55 13:34:55 if not isinstance(retries, Retry): 13:34:55 retries = Retry.from_int(retries, redirect=redirect, default=self.retries) 13:34:55 13:34:55 if release_conn is None: 13:34:55 release_conn = preload_content 13:34:55 13:34:55 # Check host 13:34:55 if assert_same_host and not self.is_same_host(url): 13:34:55 raise HostChangedError(self, url, retries) 13:34:55 13:34:55 # Ensure that the URL we're connecting to is properly encoded 13:34:55 if url.startswith("/"): 13:34:55 url = to_str(_encode_target(url)) 13:34:55 else: 13:34:55 url = to_str(parsed_url.url) 13:34:55 13:34:55 conn = None 13:34:55 13:34:55 # Track whether `conn` needs to be released before 13:34:55 # returning/raising/recursing. Update this variable if necessary, and 13:34:55 # leave `release_conn` constant throughout the function. That way, if 13:34:55 # the function recurses, the original value of `release_conn` will be 13:34:55 # passed down into the recursive call, and its value will be respected. 13:34:55 # 13:34:55 # See issue #651 [1] for details. 13:34:55 # 13:34:55 # [1] 13:34:55 release_this_conn = release_conn 13:34:55 13:34:55 http_tunnel_required = connection_requires_http_tunnel( 13:34:55 self.proxy, self.proxy_config, destination_scheme 13:34:55 ) 13:34:55 13:34:55 # Merge the proxy headers. Only done when not using HTTP CONNECT. We 13:34:55 # have to copy the headers dict so we can safely change it without those 13:34:55 # changes being reflected in anyone else's copy. 13:34:55 if not http_tunnel_required: 13:34:55 headers = headers.copy() # type: ignore[attr-defined] 13:34:55 headers.update(self.proxy_headers) # type: ignore[union-attr] 13:34:55 13:34:55 # Must keep the exception bound to a separate variable or else Python 3 13:34:55 # complains about UnboundLocalError. 
13:34:55 err = None 13:34:55 13:34:55 # Keep track of whether we cleanly exited the except block. This 13:34:55 # ensures we do proper cleanup in finally. 13:34:55 clean_exit = False 13:34:55 13:34:55 # Rewind body position, if needed. Record current position 13:34:55 # for future rewinds in the event of a redirect/retry. 13:34:55 body_pos = set_file_position(body, body_pos) 13:34:55 13:34:55 try: 13:34:55 # Request a connection from the queue. 13:34:55 timeout_obj = self._get_timeout(timeout) 13:34:55 conn = self._get_conn(timeout=pool_timeout) 13:34:55 13:34:55 conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] 13:34:55 13:34:55 # Is this a closed/new connection that requires CONNECT tunnelling? 13:34:55 if self.proxy is not None and http_tunnel_required and conn.is_closed: 13:34:55 try: 13:34:55 self._prepare_proxy(conn) 13:34:55 except (BaseSSLError, OSError, SocketTimeout) as e: 13:34:55 self._raise_timeout( 13:34:55 err=e, url=self.proxy.url, timeout_value=conn.timeout 13:34:55 ) 13:34:55 raise 13:34:55 13:34:55 # If we're going to release the connection in ``finally:``, then 13:34:55 # the response doesn't need to know about the connection. Otherwise 13:34:55 # it will also try to release it and we'll have a double-release 13:34:55 # mess. 
13:34:55 response_conn = conn if not release_conn else None 13:34:55 13:34:55 # Make the request on the HTTPConnection object 13:34:55 > response = self._make_request( 13:34:55 conn, 13:34:55 method, 13:34:55 url, 13:34:55 timeout=timeout_obj, 13:34:55 body=body, 13:34:55 headers=headers, 13:34:55 chunked=chunked, 13:34:55 retries=retries, 13:34:55 response_conn=response_conn, 13:34:55 preload_content=preload_content, 13:34:55 decode_content=decode_content, 13:34:55 **response_kw, 13:34:55 ) 13:34:55 13:34:55 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connectionpool.py:787: 13:34:55 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 13:34:55 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connectionpool.py:493: in _make_request 13:34:55 conn.request( 13:34:55 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:500: in request 13:34:55 self.endheaders() 13:34:55 /opt/pyenv/versions/3.11.10/lib/python3.11/http/client.py:1298: in endheaders 13:34:55 self._send_output(message_body, encode_chunked=encode_chunked) 13:34:55 /opt/pyenv/versions/3.11.10/lib/python3.11/http/client.py:1058: in _send_output 13:34:55 self.send(msg) 13:34:55 /opt/pyenv/versions/3.11.10/lib/python3.11/http/client.py:996: in send 13:34:55 self.connect() 13:34:55 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:331: in connect 13:34:55 self.sock = self._new_conn() 13:34:55 ^^^^^^^^^^^^^^^^ 13:34:55 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 13:34:55 13:34:55 self = 13:34:55 13:34:55 def _new_conn(self) -> socket.socket: 13:34:55 """Establish a socket connection and set nodelay settings on it. 13:34:55 13:34:55 :return: New socket connection. 
13:34:55 """ 13:34:55 try: 13:34:55 sock = connection.create_connection( 13:34:55 (self._dns_host, self.port), 13:34:55 self.timeout, 13:34:55 source_address=self.source_address, 13:34:55 socket_options=self.socket_options, 13:34:55 ) 13:34:55 except socket.gaierror as e: 13:34:55 raise NameResolutionError(self.host, self, e) from e 13:34:55 except SocketTimeout as e: 13:34:55 raise ConnectTimeoutError( 13:34:55 self, 13:34:55 f"Connection to {self.host} timed out. (connect timeout={self.timeout})", 13:34:55 ) from e 13:34:55 13:34:55 except OSError as e: 13:34:55 > raise NewConnectionError( 13:34:55 self, f"Failed to establish a new connection: {e}" 13:34:55 ) from e 13:34:55 E urllib3.exceptions.NewConnectionError: HTTPConnection(host='localhost', port=8191): Failed to establish a new connection: [Errno 111] Connection refused 13:34:55 13:34:55 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:219: NewConnectionError 13:34:55 13:34:55 The above exception was the direct cause of the following exception: 13:34:55 13:34:55 self = 13:34:55 request = , stream = False 13:34:55 timeout = Timeout(connect=30, read=30, total=None), verify = True, cert = None 13:34:55 proxies = OrderedDict() 13:34:55 13:34:55 def send( 13:34:55 self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 13:34:55 ): 13:34:55 """Sends PreparedRequest object. Returns Response object. 13:34:55 13:34:55 :param request: The :class:`PreparedRequest ` being sent. 13:34:55 :param stream: (optional) Whether to stream the request content. 13:34:55 :param timeout: (optional) How long to wait for the server to send 13:34:55 data before giving up, as a float, or a :ref:`(connect timeout, 13:34:55 read timeout) ` tuple. 
13:34:55 :type timeout: float or tuple or urllib3 Timeout object 13:34:55 :param verify: (optional) Either a boolean, in which case it controls whether 13:34:55 we verify the server's TLS certificate, or a string, in which case it 13:34:55 must be a path to a CA bundle to use 13:34:55 :param cert: (optional) Any user-provided SSL certificate to be trusted. 13:34:55 :param proxies: (optional) The proxies dictionary to apply to the request. 13:34:55 :rtype: requests.Response 13:34:55 """ 13:34:55 13:34:55 try: 13:34:55 conn = self.get_connection_with_tls_context( 13:34:55 request, verify, proxies=proxies, cert=cert 13:34:55 ) 13:34:55 except LocationValueError as e: 13:34:55 raise InvalidURL(e, request=request) 13:34:55 13:34:55 self.cert_verify(conn, request.url, verify, cert) 13:34:55 url = self.request_url(request, proxies) 13:34:55 self.add_headers( 13:34:55 request, 13:34:55 stream=stream, 13:34:55 timeout=timeout, 13:34:55 verify=verify, 13:34:55 cert=cert, 13:34:55 proxies=proxies, 13:34:55 ) 13:34:55 13:34:55 chunked = not (request.body is None or "Content-Length" in request.headers) 13:34:55 13:34:55 if isinstance(timeout, tuple): 13:34:55 try: 13:34:55 connect, read = timeout 13:34:55 timeout = TimeoutSauce(connect=connect, read=read) 13:34:55 except ValueError: 13:34:55 raise ValueError( 13:34:55 f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 13:34:55 f"or a single float to set both timeouts to the same value." 
13:34:55             )
13:34:55     elif isinstance(timeout, TimeoutSauce):
13:34:55         pass
13:34:55     else:
13:34:55         timeout = TimeoutSauce(connect=timeout, read=timeout)
13:34:55
13:34:55     try:
13:34:55 >       resp = conn.urlopen(
13:34:55             method=request.method,
13:34:55             url=url,
13:34:55             body=request.body,
13:34:55             headers=request.headers,
13:34:55             redirect=False,
13:34:55             assert_same_host=False,
13:34:55             preload_content=False,
13:34:55             decode_content=False,
13:34:55             retries=self.max_retries,
13:34:55             timeout=timeout,
13:34:55             chunked=chunked,
13:34:55         )
13:34:55
13:34:55 ../.tox/tests121/lib/python3.11/site-packages/requests/adapters.py:644:
13:34:55 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
13:34:55 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connectionpool.py:841: in urlopen
13:34:55     retries = retries.increment(
13:34:55 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
13:34:55
13:34:55 self = Retry(total=0, connect=None, read=False, redirect=None, status=None)
13:34:55 method = 'DELETE'
13:34:55 url = '/rests/data/network-topology:network-topology/topology=topology-netconf/node=XPDRA01'
13:34:55 response = None
13:34:55 error = NewConnectionError("HTTPConnection(host='localhost', port=8191): Failed to establish a new connection: [Errno 111] Connection refused")
13:34:55 _pool =
13:34:55 _stacktrace =
13:34:55
13:34:55 def increment(
13:34:55     self,
13:34:55     method: str | None = None,
13:34:55     url: str | None = None,
13:34:55     response: BaseHTTPResponse | None = None,
13:34:55     error: Exception | None = None,
13:34:55     _pool: ConnectionPool | None = None,
13:34:55     _stacktrace: TracebackType | None = None,
13:34:55 ) -> Self:
13:34:55     """Return a new Retry object with incremented retry counters.
13:34:55
13:34:55     :param response: A response object, or None, if the server did not
13:34:55         return a response.
13:34:55     :type response: :class:`~urllib3.response.BaseHTTPResponse`
13:34:55     :param Exception error: An error encountered during the request, or
13:34:55         None if the response was received successfully.
13:34:55
13:34:55     :return: A new ``Retry`` object.
13:34:55     """
13:34:55     if self.total is False and error:
13:34:55         # Disabled, indicate to re-raise the error.
13:34:55         raise reraise(type(error), error, _stacktrace)
13:34:55
13:34:55     total = self.total
13:34:55     if total is not None:
13:34:55         total -= 1
13:34:55
13:34:55     connect = self.connect
13:34:55     read = self.read
13:34:55     redirect = self.redirect
13:34:55     status_count = self.status
13:34:55     other = self.other
13:34:55     cause = "unknown"
13:34:55     status = None
13:34:55     redirect_location = None
13:34:55
13:34:55     if error and self._is_connection_error(error):
13:34:55         # Connect retry?
13:34:55         if connect is False:
13:34:55             raise reraise(type(error), error, _stacktrace)
13:34:55         elif connect is not None:
13:34:55             connect -= 1
13:34:55
13:34:55     elif error and self._is_read_error(error):
13:34:55         # Read retry?
13:34:55         if read is False or method is None or not self._is_method_retryable(method):
13:34:55             raise reraise(type(error), error, _stacktrace)
13:34:55         elif read is not None:
13:34:55             read -= 1
13:34:55
13:34:55     elif error:
13:34:55         # Other retry?
13:34:55         if other is not None:
13:34:55             other -= 1
13:34:55
13:34:55     elif response and response.get_redirect_location():
13:34:55         # Redirect retry?
13:34:55         if redirect is not None:
13:34:55             redirect -= 1
13:34:55         cause = "too many redirects"
13:34:55         response_redirect_location = response.get_redirect_location()
13:34:55         if response_redirect_location:
13:34:55             redirect_location = response_redirect_location
13:34:55         status = response.status
13:34:55
13:34:55     else:
13:34:55         # Incrementing because of a server error like a 500 in
13:34:55         # status_forcelist and the given method is in the allowed_methods
13:34:55         cause = ResponseError.GENERIC_ERROR
13:34:55         if response and response.status:
13:34:55             if status_count is not None:
13:34:55                 status_count -= 1
13:34:55             cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status)
13:34:55             status = response.status
13:34:55
13:34:55     history = self.history + (
13:34:55         RequestHistory(method, url, error, status, redirect_location),
13:34:55     )
13:34:55
13:34:55     new_retry = self.new(
13:34:55         total=total,
13:34:55         connect=connect,
13:34:55         read=read,
13:34:55         redirect=redirect,
13:34:55         status=status_count,
13:34:55         other=other,
13:34:55         history=history,
13:34:55     )
13:34:55
13:34:55     if new_retry.is_exhausted():
13:34:55         reason = error or ResponseError(cause)
13:34:55 >       raise MaxRetryError(_pool, url, reason) from reason  # type: ignore[arg-type]
13:34:55         ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
13:34:55 E       urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=8191): Max retries exceeded with url: /rests/data/network-topology:network-topology/topology=topology-netconf/node=XPDRA01 (Caused by NewConnectionError("HTTPConnection(host='localhost', port=8191): Failed to establish a new connection: [Errno 111] Connection refused"))
13:34:55
13:34:55 ../.tox/tests121/lib/python3.11/site-packages/urllib3/util/retry.py:535: MaxRetryError
13:34:55
13:34:55 During handling of the above exception, another exception occurred:
13:34:55
13:34:55 self =
13:34:55
13:34:55 def test_16_xpdr_device_disconnection(self):
13:34:55 >   response = test_utils.unmount_device("XPDRA01")
13:34:55              ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
13:34:55
13:34:55 transportpce_tests/1.2.1/test01_portmapping.py:193:
13:34:55 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
13:34:55 transportpce_tests/common/test_utils.py:398: in unmount_device
13:34:55     response = delete_request(url[RESTCONF_VERSION].format('{}', node))
13:34:55                ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
13:34:55 transportpce_tests/common/test_utils.py:134: in delete_request
13:34:55     return requests.request(
13:34:55 ../.tox/tests121/lib/python3.11/site-packages/requests/api.py:59: in request
13:34:55     return session.request(method=method, url=url, **kwargs)
13:34:55            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
13:34:55 ../.tox/tests121/lib/python3.11/site-packages/requests/sessions.py:589: in request
13:34:55     resp = self.send(prep, **send_kwargs)
13:34:55            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
13:34:55 ../.tox/tests121/lib/python3.11/site-packages/requests/sessions.py:703: in send
13:34:55     r = adapter.send(request, **kwargs)
13:34:55         ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
13:34:55 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
13:34:55
13:34:55 self =
13:34:55 request = , stream = False
13:34:55 timeout = Timeout(connect=30, read=30, total=None), verify = True, cert = None
13:34:55 proxies = OrderedDict()
13:34:55
13:34:55 def send(
13:34:55     self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None
13:34:55 ):
13:34:55     """Sends PreparedRequest object. Returns Response object.
13:34:55
13:34:55     :param request: The :class:`PreparedRequest <PreparedRequest>` being sent.
13:34:55     :param stream: (optional) Whether to stream the request content.
13:34:55     :param timeout: (optional) How long to wait for the server to send
13:34:55         data before giving up, as a float, or a :ref:`(connect timeout,
13:34:55         read timeout) <timeouts>` tuple.
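The `Retry.increment` frame above shows why the DELETE is never retried: the policy in the locals is `Retry(total=0, connect=None, ...)`, so the very first connection error drives `total` to -1 and `is_exhausted()` trips, producing the MaxRetryError. A pure-Python sketch of just that counter arithmetic; `MiniMaxRetryError` and the function name are hypothetical stand-ins, not urllib3 classes.

```python
class MiniMaxRetryError(Exception):
    """Stand-in for urllib3.exceptions.MaxRetryError (hypothetical)."""

def increment_on_connect_error(total, connect):
    """Model one pass through the connection-error branch of increment().

    Every tracked counter (one that is not None) is decremented, and the
    policy is exhausted as soon as any tracked counter goes below zero.
    """
    if total is not None:
        total -= 1
    if connect is not None:
        connect -= 1
    # is_exhausted(): the retry loop ends once any tracked counter < 0
    if any(c is not None and c < 0 for c in (total, connect)):
        raise MiniMaxRetryError("Max retries exceeded")
    return total, connect
```

With the log's policy (`total=0`, `connect=None`), `increment_on_connect_error(0, None)` raises on the first call, matching the single refused attempt recorded above; a policy of `total=2` would survive two such errors before raising.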
13:34:55     :type timeout: float or tuple or urllib3 Timeout object
13:34:55     :param verify: (optional) Either a boolean, in which case it controls whether
13:34:55         we verify the server's TLS certificate, or a string, in which case it
13:34:55         must be a path to a CA bundle to use
13:34:55     :param cert: (optional) Any user-provided SSL certificate to be trusted.
13:34:55     :param proxies: (optional) The proxies dictionary to apply to the request.
13:34:55     :rtype: requests.Response
13:34:55     """
13:34:55
13:34:55     try:
13:34:55         conn = self.get_connection_with_tls_context(
13:34:55             request, verify, proxies=proxies, cert=cert
13:34:55         )
13:34:55     except LocationValueError as e:
13:34:55         raise InvalidURL(e, request=request)
13:34:55
13:34:55     self.cert_verify(conn, request.url, verify, cert)
13:34:55     url = self.request_url(request, proxies)
13:34:55     self.add_headers(
13:34:55         request,
13:34:55         stream=stream,
13:34:55         timeout=timeout,
13:34:55         verify=verify,
13:34:55         cert=cert,
13:34:55         proxies=proxies,
13:34:55     )
13:34:55
13:34:55     chunked = not (request.body is None or "Content-Length" in request.headers)
13:34:55
13:34:55     if isinstance(timeout, tuple):
13:34:55         try:
13:34:55             connect, read = timeout
13:34:55             timeout = TimeoutSauce(connect=connect, read=read)
13:34:55         except ValueError:
13:34:55             raise ValueError(
13:34:55                 f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, "
13:34:55                 f"or a single float to set both timeouts to the same value."
13:34:55             )
13:34:55     elif isinstance(timeout, TimeoutSauce):
13:34:55         pass
13:34:55     else:
13:34:55         timeout = TimeoutSauce(connect=timeout, read=timeout)
13:34:55
13:34:55     try:
13:34:55         resp = conn.urlopen(
13:34:55             method=request.method,
13:34:55             url=url,
13:34:55             body=request.body,
13:34:55             headers=request.headers,
13:34:55             redirect=False,
13:34:55             assert_same_host=False,
13:34:55             preload_content=False,
13:34:55             decode_content=False,
13:34:55             retries=self.max_retries,
13:34:55             timeout=timeout,
13:34:55             chunked=chunked,
13:34:55         )
13:34:55
13:34:55     except (ProtocolError, OSError) as err:
13:34:55         raise ConnectionError(err, request=request)
13:34:55
13:34:55     except MaxRetryError as e:
13:34:55         if isinstance(e.reason, ConnectTimeoutError):
13:34:55             # TODO: Remove this in 3.0.0: see #2811
13:34:55             if not isinstance(e.reason, NewConnectionError):
13:34:55                 raise ConnectTimeout(e, request=request)
13:34:55
13:34:55         if isinstance(e.reason, ResponseError):
13:34:55             raise RetryError(e, request=request)
13:34:55
13:34:55         if isinstance(e.reason, _ProxyError):
13:34:55             raise ProxyError(e, request=request)
13:34:55
13:34:55         if isinstance(e.reason, _SSLError):
13:34:55             # This branch is for urllib3 v1.22 and later.
13:34:55             raise SSLError(e, request=request)
13:34:55
13:34:55 >       raise ConnectionError(e, request=request)
13:34:55 E       requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=8191): Max retries exceeded with url: /rests/data/network-topology:network-topology/topology=topology-netconf/node=XPDRA01 (Caused by NewConnectionError("HTTPConnection(host='localhost', port=8191): Failed to establish a new connection: [Errno 111] Connection refused"))
13:34:55
13:34:55 ../.tox/tests121/lib/python3.11/site-packages/requests/adapters.py:677: ConnectionError
13:34:55 ----------------------------- Captured stdout call -----------------------------
13:34:55 execution of test_16_xpdr_device_disconnection
13:34:55 _________ TestTransportPCEPortmapping.test_17_xpdr_device_disconnected _________
13:34:55
13:34:55 self =
13:34:55
13:34:55 def _new_conn(self) -> socket.socket:
13:34:55     """Establish a socket connection and set nodelay settings on it.
13:34:55
13:34:55     :return: New socket connection.
13:34:55 """ 13:34:55 try: 13:34:55 > sock = connection.create_connection( 13:34:55 (self._dns_host, self.port), 13:34:55 self.timeout, 13:34:55 source_address=self.source_address, 13:34:55 socket_options=self.socket_options, 13:34:55 ) 13:34:55 13:34:55 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:204: 13:34:55 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 13:34:55 ../.tox/tests121/lib/python3.11/site-packages/urllib3/util/connection.py:85: in create_connection 13:34:55 raise err 13:34:55 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 13:34:55 13:34:55 address = ('localhost', 8191), timeout = 30, source_address = None 13:34:55 socket_options = [(6, 1, 1)] 13:34:55 13:34:55 def create_connection( 13:34:55 address: tuple[str, int], 13:34:55 timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 13:34:55 source_address: tuple[str, int] | None = None, 13:34:55 socket_options: _TYPE_SOCKET_OPTIONS | None = None, 13:34:55 ) -> socket.socket: 13:34:55 """Connect to *address* and return the socket object. 13:34:55 13:34:55 Convenience function. Connect to *address* (a 2-tuple ``(host, 13:34:55 port)``) and return the socket object. Passing the optional 13:34:55 *timeout* parameter will set the timeout on the socket instance 13:34:55 before attempting to connect. If no *timeout* is supplied, the 13:34:55 global default timeout setting returned by :func:`socket.getdefaulttimeout` 13:34:55 is used. If *source_address* is set it must be a tuple of (host, port) 13:34:55 for the socket to bind as a source address before making the connection. 13:34:55 An host of '' or port 0 tells the OS to use the default. 
13:34:55 """ 13:34:55 13:34:55 host, port = address 13:34:55 if host.startswith("["): 13:34:55 host = host.strip("[]") 13:34:55 err = None 13:34:55 13:34:55 # Using the value from allowed_gai_family() in the context of getaddrinfo lets 13:34:55 # us select whether to work with IPv4 DNS records, IPv6 records, or both. 13:34:55 # The original create_connection function always returns all records. 13:34:55 family = allowed_gai_family() 13:34:55 13:34:55 try: 13:34:55 host.encode("idna") 13:34:55 except UnicodeError: 13:34:55 raise LocationParseError(f"'{host}', label empty or too long") from None 13:34:55 13:34:55 for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM): 13:34:55 af, socktype, proto, canonname, sa = res 13:34:55 sock = None 13:34:55 try: 13:34:55 sock = socket.socket(af, socktype, proto) 13:34:55 13:34:55 # If provided, set socket level options before connecting. 13:34:55 _set_socket_options(sock, socket_options) 13:34:55 13:34:55 if timeout is not _DEFAULT_TIMEOUT: 13:34:55 sock.settimeout(timeout) 13:34:55 if source_address: 13:34:55 sock.bind(source_address) 13:34:55 > sock.connect(sa) 13:34:55 E ConnectionRefusedError: [Errno 111] Connection refused 13:34:55 13:34:55 ../.tox/tests121/lib/python3.11/site-packages/urllib3/util/connection.py:73: ConnectionRefusedError 13:34:55 13:34:55 The above exception was the direct cause of the following exception: 13:34:55 13:34:55 self = 13:34:55 method = 'GET' 13:34:55 url = '/rests/data/network-topology:network-topology/topology=topology-netconf/node=XPDRA01?content=nonconfig' 13:34:55 body = None 13:34:55 headers = {'User-Agent': 'python-requests/2.32.5', 'Accept-Encoding': 'gzip, deflate', 'Accept': 'application/json', 'Connection': 'keep-alive', 'Content-Type': 'application/json', 'Authorization': 'Basic YWRtaW46YWRtaW4='} 13:34:55 retries = Retry(total=0, connect=None, read=False, redirect=None, status=None) 13:34:55 redirect = False, assert_same_host = False 13:34:55 timeout = 
Timeout(connect=30, read=30, total=None), pool_timeout = None 13:34:55 release_conn = False, chunked = False, body_pos = None, preload_content = False 13:34:55 decode_content = False, response_kw = {} 13:34:55 parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/rests/data/network-topology:network-topology/topology=topology-netconf/node=XPDRA01', query='content=nonconfig', fragment=None) 13:34:55 destination_scheme = None, conn = None, release_this_conn = True 13:34:55 http_tunnel_required = False, err = None, clean_exit = False 13:34:55 13:34:55 def urlopen( # type: ignore[override] 13:34:55 self, 13:34:55 method: str, 13:34:55 url: str, 13:34:55 body: _TYPE_BODY | None = None, 13:34:55 headers: typing.Mapping[str, str] | None = None, 13:34:55 retries: Retry | bool | int | None = None, 13:34:55 redirect: bool = True, 13:34:55 assert_same_host: bool = True, 13:34:55 timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 13:34:55 pool_timeout: int | None = None, 13:34:55 release_conn: bool | None = None, 13:34:55 chunked: bool = False, 13:34:55 body_pos: _TYPE_BODY_POSITION | None = None, 13:34:55 preload_content: bool = True, 13:34:55 decode_content: bool = True, 13:34:55 **response_kw: typing.Any, 13:34:55 ) -> BaseHTTPResponse: 13:34:55 """ 13:34:55 Get a connection from the pool and perform an HTTP request. This is the 13:34:55 lowest level call for making a request, so you'll need to specify all 13:34:55 the raw details. 13:34:55 13:34:55 .. note:: 13:34:55 13:34:55 More commonly, it's appropriate to use a convenience method 13:34:55 such as :meth:`request`. 13:34:55 13:34:55 .. note:: 13:34:55 13:34:55 `release_conn` will only behave as expected if 13:34:55 `preload_content=False` because we want to make 13:34:55 `preload_content=False` the default behaviour someday soon without 13:34:55 breaking backwards compatibility. 13:34:55 13:34:55 :param method: 13:34:55 HTTP request method (such as GET, POST, PUT, etc.) 
13:34:55 13:34:55 :param url: 13:34:55 The URL to perform the request on. 13:34:55 13:34:55 :param body: 13:34:55 Data to send in the request body, either :class:`str`, :class:`bytes`, 13:34:55 an iterable of :class:`str`/:class:`bytes`, or a file-like object. 13:34:55 13:34:55 :param headers: 13:34:55 Dictionary of custom headers to send, such as User-Agent, 13:34:55 If-None-Match, etc. If None, pool headers are used. If provided, 13:34:55 these headers completely replace any pool-specific headers. 13:34:55 13:34:55 :param retries: 13:34:55 Configure the number of retries to allow before raising a 13:34:55 :class:`~urllib3.exceptions.MaxRetryError` exception. 13:34:55 13:34:55 If ``None`` (default) will retry 3 times, see ``Retry.DEFAULT``. Pass a 13:34:55 :class:`~urllib3.util.retry.Retry` object for fine-grained control 13:34:55 over different types of retries. 13:34:55 Pass an integer number to retry connection errors that many times, 13:34:55 but no other types of errors. Pass zero to never retry. 13:34:55 13:34:55 If ``False``, then retries are disabled and any exception is raised 13:34:55 immediately. Also, instead of raising a MaxRetryError on redirects, 13:34:55 the redirect response will be returned. 13:34:55 13:34:55 :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. 13:34:55 13:34:55 :param redirect: 13:34:55 If True, automatically handle redirects (status codes 301, 302, 13:34:55 303, 307, 308). Each redirect counts as a retry. Disabling retries 13:34:55 will disable redirect, too. 13:34:55 13:34:55 :param assert_same_host: 13:34:55 If ``True``, will make sure that the host of the pool requests is 13:34:55 consistent else will raise HostChangedError. When ``False``, you can 13:34:55 use the pool on an HTTP proxy and request foreign hosts. 13:34:55 13:34:55 :param timeout: 13:34:55 If specified, overrides the default timeout for this one 13:34:55 request. 
It may be a float (in seconds) or an instance of 13:34:55 :class:`urllib3.util.Timeout`. 13:34:55 13:34:55 :param pool_timeout: 13:34:55 If set and the pool is set to block=True, then this method will 13:34:55 block for ``pool_timeout`` seconds and raise EmptyPoolError if no 13:34:55 connection is available within the time period. 13:34:55 13:34:55 :param bool preload_content: 13:34:55 If True, the response's body will be preloaded into memory. 13:34:55 13:34:55 :param bool decode_content: 13:34:55 If True, will attempt to decode the body based on the 13:34:55 'content-encoding' header. 13:34:55 13:34:55 :param release_conn: 13:34:55 If False, then the urlopen call will not release the connection 13:34:55 back into the pool once a response is received (but will release if 13:34:55 you read the entire contents of the response such as when 13:34:55 `preload_content=True`). This is useful if you're not preloading 13:34:55 the response's content immediately. You will need to call 13:34:55 ``r.release_conn()`` on the response ``r`` to return the connection 13:34:55 back into the pool. If None, it takes the value of ``preload_content`` 13:34:55 which defaults to ``True``. 13:34:55 13:34:55 :param bool chunked: 13:34:55 If True, urllib3 will send the body using chunked transfer 13:34:55 encoding. Otherwise, urllib3 will send the body using the standard 13:34:55 content-length form. Defaults to False. 13:34:55 13:34:55 :param int body_pos: 13:34:55 Position to seek to in file-like body in the event of a retry or 13:34:55 redirect. Typically this won't need to be set because urllib3 will 13:34:55 auto-populate the value when needed. 
13:34:55 """ 13:34:55 parsed_url = parse_url(url) 13:34:55 destination_scheme = parsed_url.scheme 13:34:55 13:34:55 if headers is None: 13:34:55 headers = self.headers 13:34:55 13:34:55 if not isinstance(retries, Retry): 13:34:55 retries = Retry.from_int(retries, redirect=redirect, default=self.retries) 13:34:55 13:34:55 if release_conn is None: 13:34:55 release_conn = preload_content 13:34:55 13:34:55 # Check host 13:34:55 if assert_same_host and not self.is_same_host(url): 13:34:55 raise HostChangedError(self, url, retries) 13:34:55 13:34:55 # Ensure that the URL we're connecting to is properly encoded 13:34:55 if url.startswith("/"): 13:34:55 url = to_str(_encode_target(url)) 13:34:55 else: 13:34:55 url = to_str(parsed_url.url) 13:34:55 13:34:55 conn = None 13:34:55 13:34:55 # Track whether `conn` needs to be released before 13:34:55 # returning/raising/recursing. Update this variable if necessary, and 13:34:55 # leave `release_conn` constant throughout the function. That way, if 13:34:55 # the function recurses, the original value of `release_conn` will be 13:34:55 # passed down into the recursive call, and its value will be respected. 13:34:55 # 13:34:55 # See issue #651 [1] for details. 13:34:55 # 13:34:55 # [1] 13:34:55 release_this_conn = release_conn 13:34:55 13:34:55 http_tunnel_required = connection_requires_http_tunnel( 13:34:55 self.proxy, self.proxy_config, destination_scheme 13:34:55 ) 13:34:55 13:34:55 # Merge the proxy headers. Only done when not using HTTP CONNECT. We 13:34:55 # have to copy the headers dict so we can safely change it without those 13:34:55 # changes being reflected in anyone else's copy. 13:34:55 if not http_tunnel_required: 13:34:55 headers = headers.copy() # type: ignore[attr-defined] 13:34:55 headers.update(self.proxy_headers) # type: ignore[union-attr] 13:34:55 13:34:55 # Must keep the exception bound to a separate variable or else Python 3 13:34:55 # complains about UnboundLocalError. 
13:34:55         err = None
13:34:55
13:34:55         # Keep track of whether we cleanly exited the except block. This
13:34:55         # ensures we do proper cleanup in finally.
13:34:55         clean_exit = False
13:34:55
13:34:55         # Rewind body position, if needed. Record current position
13:34:55         # for future rewinds in the event of a redirect/retry.
13:34:55         body_pos = set_file_position(body, body_pos)
13:34:55
13:34:55         try:
13:34:55             # Request a connection from the queue.
13:34:55             timeout_obj = self._get_timeout(timeout)
13:34:55             conn = self._get_conn(timeout=pool_timeout)
13:34:55
13:34:55             conn.timeout = timeout_obj.connect_timeout  # type: ignore[assignment]
13:34:55
13:34:55             # Is this a closed/new connection that requires CONNECT tunnelling?
13:34:55             if self.proxy is not None and http_tunnel_required and conn.is_closed:
13:34:55                 try:
13:34:55                     self._prepare_proxy(conn)
13:34:55                 except (BaseSSLError, OSError, SocketTimeout) as e:
13:34:55                     self._raise_timeout(
13:34:55                         err=e, url=self.proxy.url, timeout_value=conn.timeout
13:34:55                     )
13:34:55                     raise
13:34:55
13:34:55             # If we're going to release the connection in ``finally:``, then
13:34:55             # the response doesn't need to know about the connection. Otherwise
13:34:55             # it will also try to release it and we'll have a double-release
13:34:55             # mess.
13:34:55             response_conn = conn if not release_conn else None
13:34:55
13:34:55             # Make the request on the HTTPConnection object
13:34:55 >           response = self._make_request(
13:34:55                 conn,
13:34:55                 method,
13:34:55                 url,
13:34:55                 timeout=timeout_obj,
13:34:55                 body=body,
13:34:55                 headers=headers,
13:34:55                 chunked=chunked,
13:34:55                 retries=retries,
13:34:55                 response_conn=response_conn,
13:34:55                 preload_content=preload_content,
13:34:55                 decode_content=decode_content,
13:34:55                 **response_kw,
13:34:55             )
13:34:55
13:34:55 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connectionpool.py:787:
13:34:55 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
13:34:55 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connectionpool.py:493: in _make_request
13:34:55     conn.request(
13:34:55 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:500: in request
13:34:55     self.endheaders()
13:34:55 /opt/pyenv/versions/3.11.10/lib/python3.11/http/client.py:1298: in endheaders
13:34:55     self._send_output(message_body, encode_chunked=encode_chunked)
13:34:55 /opt/pyenv/versions/3.11.10/lib/python3.11/http/client.py:1058: in _send_output
13:34:55     self.send(msg)
13:34:55 /opt/pyenv/versions/3.11.10/lib/python3.11/http/client.py:996: in send
13:34:55     self.connect()
13:34:55 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:331: in connect
13:34:55     self.sock = self._new_conn()
13:34:55     ^^^^^^^^^^^^^^^^
13:34:55 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
13:34:55
13:34:55 self = 
13:34:55
13:34:55     def _new_conn(self) -> socket.socket:
13:34:55         """Establish a socket connection and set nodelay settings on it.
13:34:55
13:34:55         :return: New socket connection.
13:34:55         """
13:34:55         try:
13:34:55             sock = connection.create_connection(
13:34:55                 (self._dns_host, self.port),
13:34:55                 self.timeout,
13:34:55                 source_address=self.source_address,
13:34:55                 socket_options=self.socket_options,
13:34:55             )
13:34:55         except socket.gaierror as e:
13:34:55             raise NameResolutionError(self.host, self, e) from e
13:34:55         except SocketTimeout as e:
13:34:55             raise ConnectTimeoutError(
13:34:55                 self,
13:34:55                 f"Connection to {self.host} timed out. (connect timeout={self.timeout})",
13:34:55             ) from e
13:34:55
13:34:55         except OSError as e:
13:34:55 >           raise NewConnectionError(
13:34:55                 self, f"Failed to establish a new connection: {e}"
13:34:55             ) from e
13:34:55 E           urllib3.exceptions.NewConnectionError: HTTPConnection(host='localhost', port=8191): Failed to establish a new connection: [Errno 111] Connection refused
13:34:55
13:34:55 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:219: NewConnectionError
13:34:55
13:34:55 The above exception was the direct cause of the following exception:
13:34:55
13:34:55 self = 
13:34:55 request = , stream = False
13:34:55 timeout = Timeout(connect=30, read=30, total=None), verify = True, cert = None
13:34:55 proxies = OrderedDict()
13:34:55
13:34:55     def send(
13:34:55         self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None
13:34:55     ):
13:34:55         """Sends PreparedRequest object. Returns Response object.
13:34:55
13:34:55         :param request: The :class:`PreparedRequest ` being sent.
13:34:55         :param stream: (optional) Whether to stream the request content.
13:34:55         :param timeout: (optional) How long to wait for the server to send
13:34:55             data before giving up, as a float, or a :ref:`(connect timeout,
13:34:55             read timeout) ` tuple.
13:34:55         :type timeout: float or tuple or urllib3 Timeout object
13:34:55         :param verify: (optional) Either a boolean, in which case it controls whether
13:34:55             we verify the server's TLS certificate, or a string, in which case it
13:34:55             must be a path to a CA bundle to use
13:34:55         :param cert: (optional) Any user-provided SSL certificate to be trusted.
13:34:55         :param proxies: (optional) The proxies dictionary to apply to the request.
13:34:55         :rtype: requests.Response
13:34:55         """
13:34:55
13:34:55         try:
13:34:55             conn = self.get_connection_with_tls_context(
13:34:55                 request, verify, proxies=proxies, cert=cert
13:34:55             )
13:34:55         except LocationValueError as e:
13:34:55             raise InvalidURL(e, request=request)
13:34:55
13:34:55         self.cert_verify(conn, request.url, verify, cert)
13:34:55         url = self.request_url(request, proxies)
13:34:55         self.add_headers(
13:34:55             request,
13:34:55             stream=stream,
13:34:55             timeout=timeout,
13:34:55             verify=verify,
13:34:55             cert=cert,
13:34:55             proxies=proxies,
13:34:55         )
13:34:55
13:34:55         chunked = not (request.body is None or "Content-Length" in request.headers)
13:34:55
13:34:55         if isinstance(timeout, tuple):
13:34:55             try:
13:34:55                 connect, read = timeout
13:34:55                 timeout = TimeoutSauce(connect=connect, read=read)
13:34:55             except ValueError:
13:34:55                 raise ValueError(
13:34:55                     f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, "
13:34:55                     f"or a single float to set both timeouts to the same value."
13:34:55                 )
13:34:55         elif isinstance(timeout, TimeoutSauce):
13:34:55             pass
13:34:55         else:
13:34:55             timeout = TimeoutSauce(connect=timeout, read=timeout)
13:34:55
13:34:55         try:
13:34:55 >           resp = conn.urlopen(
13:34:55                 method=request.method,
13:34:55                 url=url,
13:34:55                 body=request.body,
13:34:55                 headers=request.headers,
13:34:55                 redirect=False,
13:34:55                 assert_same_host=False,
13:34:55                 preload_content=False,
13:34:55                 decode_content=False,
13:34:55                 retries=self.max_retries,
13:34:55                 timeout=timeout,
13:34:55                 chunked=chunked,
13:34:55             )
13:34:55
13:34:55 ../.tox/tests121/lib/python3.11/site-packages/requests/adapters.py:644:
13:34:55 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
13:34:55 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connectionpool.py:841: in urlopen
13:34:55     retries = retries.increment(
13:34:55 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
13:34:55
13:34:55 self = Retry(total=0, connect=None, read=False, redirect=None, status=None)
13:34:55 method = 'GET'
13:34:55 url = '/rests/data/network-topology:network-topology/topology=topology-netconf/node=XPDRA01?content=nonconfig'
13:34:55 response = None
13:34:55 error = NewConnectionError("HTTPConnection(host='localhost', port=8191): Failed to establish a new connection: [Errno 111] Connection refused")
13:34:55 _pool = 
13:34:55 _stacktrace = 
13:34:55
13:34:55     def increment(
13:34:55         self,
13:34:55         method: str | None = None,
13:34:55         url: str | None = None,
13:34:55         response: BaseHTTPResponse | None = None,
13:34:55         error: Exception | None = None,
13:34:55         _pool: ConnectionPool | None = None,
13:34:55         _stacktrace: TracebackType | None = None,
13:34:55     ) -> Self:
13:34:55         """Return a new Retry object with incremented retry counters.
13:34:55
13:34:55         :param response: A response object, or None, if the server did not
13:34:55             return a response.
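The adapter code shown in this traceback normalizes `timeout` into a `TimeoutSauce` (requests' wrapper around `urllib3.util.Timeout`): a `(connect, read)` tuple is unpacked, an existing timeout object passes through, and a scalar is applied to both phases. A minimal stdlib-only sketch of that branching, with a hypothetical `Timeout` namedtuple standing in for `TimeoutSauce` (not the real requests class):

```python
from collections import namedtuple

# Stand-in for requests' TimeoutSauce / urllib3.util.Timeout (hypothetical).
Timeout = namedtuple("Timeout", ["connect", "read"])

def normalize_timeout(timeout):
    """Mirror HTTPAdapter.send's timeout handling: pass through an existing
    Timeout, unpack a (connect, read) tuple, or apply a scalar to both."""
    # Checked first because a namedtuple is itself a tuple, unlike the
    # real TimeoutSauce class in requests.
    if isinstance(timeout, Timeout):
        return timeout
    if isinstance(timeout, tuple):
        try:
            connect, read = timeout
            return Timeout(connect=connect, read=read)
        except ValueError:
            raise ValueError(
                f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, "
                f"or a single float to set both timeouts to the same value."
            )
    return Timeout(connect=timeout, read=timeout)
```

This is why the traceback shows `Timeout(connect=30, read=30, total=None)`: the tests pass a single scalar and it is applied to both the connect and the read phase.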
13:34:55         :type response: :class:`~urllib3.response.BaseHTTPResponse`
13:34:55         :param Exception error: An error encountered during the request, or
13:34:55             None if the response was received successfully.
13:34:55
13:34:55         :return: A new ``Retry`` object.
13:34:55         """
13:34:55         if self.total is False and error:
13:34:55             # Disabled, indicate to re-raise the error.
13:34:55             raise reraise(type(error), error, _stacktrace)
13:34:55
13:34:55         total = self.total
13:34:55         if total is not None:
13:34:55             total -= 1
13:34:55
13:34:55         connect = self.connect
13:34:55         read = self.read
13:34:55         redirect = self.redirect
13:34:55         status_count = self.status
13:34:55         other = self.other
13:34:55         cause = "unknown"
13:34:55         status = None
13:34:55         redirect_location = None
13:34:55
13:34:55         if error and self._is_connection_error(error):
13:34:55             # Connect retry?
13:34:55             if connect is False:
13:34:55                 raise reraise(type(error), error, _stacktrace)
13:34:55             elif connect is not None:
13:34:55                 connect -= 1
13:34:55
13:34:55         elif error and self._is_read_error(error):
13:34:55             # Read retry?
13:34:55             if read is False or method is None or not self._is_method_retryable(method):
13:34:55                 raise reraise(type(error), error, _stacktrace)
13:34:55             elif read is not None:
13:34:55                 read -= 1
13:34:55
13:34:55         elif error:
13:34:55             # Other retry?
13:34:55             if other is not None:
13:34:55                 other -= 1
13:34:55
13:34:55         elif response and response.get_redirect_location():
13:34:55             # Redirect retry?
13:34:55             if redirect is not None:
13:34:55                 redirect -= 1
13:34:55             cause = "too many redirects"
13:34:55             response_redirect_location = response.get_redirect_location()
13:34:55             if response_redirect_location:
13:34:55                 redirect_location = response_redirect_location
13:34:55             status = response.status
13:34:55
13:34:55         else:
13:34:55             # Incrementing because of a server error like a 500 in
13:34:55             # status_forcelist and the given method is in the allowed_methods
13:34:55             cause = ResponseError.GENERIC_ERROR
13:34:55             if response and response.status:
13:34:55                 if status_count is not None:
13:34:55                     status_count -= 1
13:34:55                 cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status)
13:34:55                 status = response.status
13:34:55
13:34:55         history = self.history + (
13:34:55             RequestHistory(method, url, error, status, redirect_location),
13:34:55         )
13:34:55
13:34:55         new_retry = self.new(
13:34:55             total=total,
13:34:55             connect=connect,
13:34:55             read=read,
13:34:55             redirect=redirect,
13:34:55             status=status_count,
13:34:55             other=other,
13:34:55             history=history,
13:34:55         )
13:34:55
13:34:55         if new_retry.is_exhausted():
13:34:55             reason = error or ResponseError(cause)
13:34:55 >           raise MaxRetryError(_pool, url, reason) from reason  # type: ignore[arg-type]
13:34:55             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
13:34:55 E           urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=8191): Max retries exceeded with url: /rests/data/network-topology:network-topology/topology=topology-netconf/node=XPDRA01?content=nonconfig (Caused by NewConnectionError("HTTPConnection(host='localhost', port=8191): Failed to establish a new connection: [Errno 111] Connection refused"))
13:34:55
13:34:55 ../.tox/tests121/lib/python3.11/site-packages/urllib3/util/retry.py:535: MaxRetryError
13:34:55
13:34:55 During handling of the above exception, another exception occurred:
13:34:55
13:34:55 self = 
13:34:55
13:34:55     def test_17_xpdr_device_disconnected(self):
13:34:55 >       response = test_utils.check_device_connection("XPDRA01")
13:34:55         ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
13:34:55
13:34:55 transportpce_tests/1.2.1/test01_portmapping.py:197:
13:34:55 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
13:34:55 transportpce_tests/common/test_utils.py:409: in check_device_connection
13:34:55     response = get_request(url[RESTCONF_VERSION].format('{}', node))
13:34:55     ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
13:34:55 transportpce_tests/common/test_utils.py:117: in get_request
13:34:55     return requests.request(
13:34:55 ../.tox/tests121/lib/python3.11/site-packages/requests/api.py:59: in request
13:34:55     return session.request(method=method, url=url, **kwargs)
13:34:55     ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
13:34:55 ../.tox/tests121/lib/python3.11/site-packages/requests/sessions.py:589: in request
13:34:55     resp = self.send(prep, **send_kwargs)
13:34:55     ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
13:34:55 ../.tox/tests121/lib/python3.11/site-packages/requests/sessions.py:703: in send
13:34:55     r = adapter.send(request, **kwargs)
13:34:55     ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
13:34:55 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
13:34:55
13:34:55 self = 
13:34:55 request = , stream = False
13:34:55 timeout = Timeout(connect=30, read=30, total=None), verify = True, cert = None
13:34:55 proxies = OrderedDict()
13:34:55
13:34:55     def send(
13:34:55         self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None
13:34:55     ):
13:34:55         """Sends PreparedRequest object. Returns Response object.
13:34:55
13:34:55         :param request: The :class:`PreparedRequest ` being sent.
13:34:55         :param stream: (optional) Whether to stream the request content.
13:34:55         :param timeout: (optional) How long to wait for the server to send
13:34:55             data before giving up, as a float, or a :ref:`(connect timeout,
13:34:55             read timeout) ` tuple.
13:34:55         :type timeout: float or tuple or urllib3 Timeout object
13:34:55         :param verify: (optional) Either a boolean, in which case it controls whether
13:34:55             we verify the server's TLS certificate, or a string, in which case it
13:34:55             must be a path to a CA bundle to use
13:34:55         :param cert: (optional) Any user-provided SSL certificate to be trusted.
13:34:55         :param proxies: (optional) The proxies dictionary to apply to the request.
13:34:55         :rtype: requests.Response
13:34:55         """
13:34:55
13:34:55         try:
13:34:55             conn = self.get_connection_with_tls_context(
13:34:55                 request, verify, proxies=proxies, cert=cert
13:34:55             )
13:34:55         except LocationValueError as e:
13:34:55             raise InvalidURL(e, request=request)
13:34:55
13:34:55         self.cert_verify(conn, request.url, verify, cert)
13:34:55         url = self.request_url(request, proxies)
13:34:55         self.add_headers(
13:34:55             request,
13:34:55             stream=stream,
13:34:55             timeout=timeout,
13:34:55             verify=verify,
13:34:55             cert=cert,
13:34:55             proxies=proxies,
13:34:55         )
13:34:55
13:34:55         chunked = not (request.body is None or "Content-Length" in request.headers)
13:34:55
13:34:55         if isinstance(timeout, tuple):
13:34:55             try:
13:34:55                 connect, read = timeout
13:34:55                 timeout = TimeoutSauce(connect=connect, read=read)
13:34:55             except ValueError:
13:34:55                 raise ValueError(
13:34:55                     f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, "
13:34:55                     f"or a single float to set both timeouts to the same value."
13:34:55                 )
13:34:55         elif isinstance(timeout, TimeoutSauce):
13:34:55             pass
13:34:55         else:
13:34:55             timeout = TimeoutSauce(connect=timeout, read=timeout)
13:34:55
13:34:55         try:
13:34:55             resp = conn.urlopen(
13:34:55                 method=request.method,
13:34:55                 url=url,
13:34:55                 body=request.body,
13:34:55                 headers=request.headers,
13:34:55                 redirect=False,
13:34:55                 assert_same_host=False,
13:34:55                 preload_content=False,
13:34:55                 decode_content=False,
13:34:55                 retries=self.max_retries,
13:34:55                 timeout=timeout,
13:34:55                 chunked=chunked,
13:34:55             )
13:34:55
13:34:55         except (ProtocolError, OSError) as err:
13:34:55             raise ConnectionError(err, request=request)
13:34:55
13:34:55         except MaxRetryError as e:
13:34:55             if isinstance(e.reason, ConnectTimeoutError):
13:34:55                 # TODO: Remove this in 3.0.0: see #2811
13:34:55                 if not isinstance(e.reason, NewConnectionError):
13:34:55                     raise ConnectTimeout(e, request=request)
13:34:55
13:34:55             if isinstance(e.reason, ResponseError):
13:34:55                 raise RetryError(e, request=request)
13:34:55
13:34:55             if isinstance(e.reason, _ProxyError):
13:34:55                 raise ProxyError(e, request=request)
13:34:55
13:34:55             if isinstance(e.reason, _SSLError):
13:34:55                 # This branch is for urllib3 v1.22 and later.
13:34:55                 raise SSLError(e, request=request)
13:34:55
13:34:55 >           raise ConnectionError(e, request=request)
13:34:55 E           requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=8191): Max retries exceeded with url: /rests/data/network-topology:network-topology/topology=topology-netconf/node=XPDRA01?content=nonconfig (Caused by NewConnectionError("HTTPConnection(host='localhost', port=8191): Failed to establish a new connection: [Errno 111] Connection refused"))
13:34:55
13:34:55 ../.tox/tests121/lib/python3.11/site-packages/requests/adapters.py:677: ConnectionError
13:34:55 ----------------------------- Captured stdout call -----------------------------
13:34:55 execution of test_17_xpdr_device_disconnected
13:34:55 ________ TestTransportPCEPortmapping.test_18_xpdr_device_not_connected _________
13:34:55
13:34:55 self = 
13:34:55
13:34:55     def _new_conn(self) -> socket.socket:
13:34:55         """Establish a socket connection and set nodelay settings on it.
13:34:55
13:34:55         :return: New socket connection.
13:34:55         """
13:34:55         try:
13:34:55 >           sock = connection.create_connection(
13:34:55                 (self._dns_host, self.port),
13:34:55                 self.timeout,
13:34:55                 source_address=self.source_address,
13:34:55                 socket_options=self.socket_options,
13:34:55             )
13:34:55
13:34:55 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:204:
13:34:55 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
13:34:55 ../.tox/tests121/lib/python3.11/site-packages/urllib3/util/connection.py:85: in create_connection
13:34:55     raise err
13:34:55 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
13:34:55
13:34:55 address = ('localhost', 8191), timeout = 30, source_address = None
13:34:55 socket_options = [(6, 1, 1)]
13:34:55
13:34:55     def create_connection(
13:34:55         address: tuple[str, int],
13:34:55         timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT,
13:34:55         source_address: tuple[str, int] | None = None,
13:34:55         socket_options: _TYPE_SOCKET_OPTIONS | None = None,
13:34:55     ) -> socket.socket:
13:34:55         """Connect to *address* and return the socket object.
13:34:55
13:34:55         Convenience function. Connect to *address* (a 2-tuple ``(host,
13:34:55         port)``) and return the socket object. Passing the optional
13:34:55         *timeout* parameter will set the timeout on the socket instance
13:34:55         before attempting to connect. If no *timeout* is supplied, the
13:34:55         global default timeout setting returned by :func:`socket.getdefaulttimeout`
13:34:55         is used. If *source_address* is set it must be a tuple of (host, port)
13:34:55         for the socket to bind as a source address before making the connection.
13:34:55         An host of '' or port 0 tells the OS to use the default.
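urllib3's `create_connection` iterates over `socket.getaddrinfo` results, tries each candidate address, keeps the last error, and re-raises it when every attempt fails; the `raise err` in the trace above is that final step, and the `[Errno 111] Connection refused` it surfaces here means nothing was listening on port 8191. A condensed stdlib-only sketch of that loop, exercised against a throwaway local listener rather than the controller port:

```python
import socket

def create_connection(address, timeout=None):
    """Condensed version of urllib3's create_connection loop:
    try every addrinfo result, keep the last error, raise it if all fail."""
    host, port = address
    err = None
    for af, socktype, proto, _canon, sa in socket.getaddrinfo(
        host, port, socket.AF_UNSPEC, socket.SOCK_STREAM
    ):
        sock = None
        try:
            sock = socket.socket(af, socktype, proto)
            if timeout is not None:
                sock.settimeout(timeout)
            sock.connect(sa)
            return sock
        except OSError as e:
            err = e  # remember the failure, try the next address
            if sock is not None:
                sock.close()
    raise err  # all candidates failed; re-raise the last error

# A throwaway listener on an OS-assigned port keeps the happy path local.
listener = socket.socket()
listener.bind(("127.0.0.1", 0))
listener.listen(1)
port = listener.getsockname()[1]

conn = create_connection(("127.0.0.1", port), timeout=5)
conn.close()
listener.close()
```

Once the listener is closed, a second `create_connection` to the same port fails the same way the log does: the OS refuses the TCP handshake and the last `OSError` is re-raised.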
13:34:55         """
13:34:55
13:34:55         host, port = address
13:34:55         if host.startswith("["):
13:34:55             host = host.strip("[]")
13:34:55         err = None
13:34:55
13:34:55         # Using the value from allowed_gai_family() in the context of getaddrinfo lets
13:34:55         # us select whether to work with IPv4 DNS records, IPv6 records, or both.
13:34:55         # The original create_connection function always returns all records.
13:34:55         family = allowed_gai_family()
13:34:55
13:34:55         try:
13:34:55             host.encode("idna")
13:34:55         except UnicodeError:
13:34:55             raise LocationParseError(f"'{host}', label empty or too long") from None
13:34:55
13:34:55         for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM):
13:34:55             af, socktype, proto, canonname, sa = res
13:34:55             sock = None
13:34:55             try:
13:34:55                 sock = socket.socket(af, socktype, proto)
13:34:55
13:34:55                 # If provided, set socket level options before connecting.
13:34:55                 _set_socket_options(sock, socket_options)
13:34:55
13:34:55                 if timeout is not _DEFAULT_TIMEOUT:
13:34:55                     sock.settimeout(timeout)
13:34:55                 if source_address:
13:34:55                     sock.bind(source_address)
13:34:55 >               sock.connect(sa)
13:34:55 E               ConnectionRefusedError: [Errno 111] Connection refused
13:34:55
13:34:55 ../.tox/tests121/lib/python3.11/site-packages/urllib3/util/connection.py:73: ConnectionRefusedError
13:34:55
13:34:55 The above exception was the direct cause of the following exception:
13:34:55
13:34:55 self = 
13:34:55 method = 'GET'
13:34:55 url = '/rests/data/transportpce-portmapping:network/nodes=XPDRA01/node-info'
13:34:55 body = None
13:34:55 headers = {'User-Agent': 'python-requests/2.32.5', 'Accept-Encoding': 'gzip, deflate', 'Accept': 'application/json', 'Connection': 'keep-alive', 'Content-Type': 'application/json', 'Authorization': 'Basic YWRtaW46YWRtaW4='}
13:34:55 retries = Retry(total=0, connect=None, read=False, redirect=None, status=None)
13:34:55 redirect = False, assert_same_host = False
13:34:55 timeout = Timeout(connect=30, read=30, total=None), pool_timeout = None
13:34:55 release_conn = False, chunked = False, body_pos = None, preload_content = False
13:34:55 decode_content = False, response_kw = {}
13:34:55 parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/rests/data/transportpce-portmapping:network/nodes=XPDRA01/node-info', query=None, fragment=None)
13:34:55 destination_scheme = None, conn = None, release_this_conn = True
13:34:55 http_tunnel_required = False, err = None, clean_exit = False
13:34:55
13:34:55     def urlopen(  # type: ignore[override]
13:34:55         self,
13:34:55         method: str,
13:34:55         url: str,
13:34:55         body: _TYPE_BODY | None = None,
13:34:55         headers: typing.Mapping[str, str] | None = None,
13:34:55         retries: Retry | bool | int | None = None,
13:34:55         redirect: bool = True,
13:34:55         assert_same_host: bool = True,
13:34:55         timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT,
13:34:55         pool_timeout: int | None = None,
13:34:55         release_conn: bool | None = None,
13:34:55         chunked: bool = False,
13:34:55         body_pos: _TYPE_BODY_POSITION | None = None,
13:34:55         preload_content: bool = True,
13:34:55         decode_content: bool = True,
13:34:55         **response_kw: typing.Any,
13:34:55     ) -> BaseHTTPResponse:
13:34:55         """
13:34:55         Get a connection from the pool and perform an HTTP request. This is the
13:34:55         lowest level call for making a request, so you'll need to specify all
13:34:55         the raw details.
13:34:55
13:34:55         .. note::
13:34:55
13:34:55             More commonly, it's appropriate to use a convenience method
13:34:55             such as :meth:`request`.
13:34:55
13:34:55         .. note::
13:34:55
13:34:55             `release_conn` will only behave as expected if
13:34:55             `preload_content=False` because we want to make
13:34:55             `preload_content=False` the default behaviour someday soon without
13:34:55             breaking backwards compatibility.
13:34:55
13:34:55         :param method:
13:34:55             HTTP request method (such as GET, POST, PUT, etc.)
13:34:55
13:34:55         :param url:
13:34:55             The URL to perform the request on.
13:34:55
13:34:55         :param body:
13:34:55             Data to send in the request body, either :class:`str`, :class:`bytes`,
13:34:55             an iterable of :class:`str`/:class:`bytes`, or a file-like object.
13:34:55
13:34:55         :param headers:
13:34:55             Dictionary of custom headers to send, such as User-Agent,
13:34:55             If-None-Match, etc. If None, pool headers are used. If provided,
13:34:55             these headers completely replace any pool-specific headers.
13:34:55
13:34:55         :param retries:
13:34:55             Configure the number of retries to allow before raising a
13:34:55             :class:`~urllib3.exceptions.MaxRetryError` exception.
13:34:55
13:34:55             If ``None`` (default) will retry 3 times, see ``Retry.DEFAULT``. Pass a
13:34:55             :class:`~urllib3.util.retry.Retry` object for fine-grained control
13:34:55             over different types of retries.
13:34:55             Pass an integer number to retry connection errors that many times,
13:34:55             but no other types of errors. Pass zero to never retry.
13:34:55
13:34:55             If ``False``, then retries are disabled and any exception is raised
13:34:55             immediately. Also, instead of raising a MaxRetryError on redirects,
13:34:55             the redirect response will be returned.
13:34:55
13:34:55         :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int.
13:34:55
13:34:55         :param redirect:
13:34:55             If True, automatically handle redirects (status codes 301, 302,
13:34:55             303, 307, 308). Each redirect counts as a retry. Disabling retries
13:34:55             will disable redirect, too.
13:34:55
13:34:55         :param assert_same_host:
13:34:55             If ``True``, will make sure that the host of the pool requests is
13:34:55             consistent else will raise HostChangedError. When ``False``, you can
13:34:55             use the pool on an HTTP proxy and request foreign hosts.
13:34:55
13:34:55         :param timeout:
13:34:55             If specified, overrides the default timeout for this one
13:34:55             request. It may be a float (in seconds) or an instance of
13:34:55             :class:`urllib3.util.Timeout`.
13:34:55
13:34:55         :param pool_timeout:
13:34:55             If set and the pool is set to block=True, then this method will
13:34:55             block for ``pool_timeout`` seconds and raise EmptyPoolError if no
13:34:55             connection is available within the time period.
13:34:55
13:34:55         :param bool preload_content:
13:34:55             If True, the response's body will be preloaded into memory.
13:34:55
13:34:55         :param bool decode_content:
13:34:55             If True, will attempt to decode the body based on the
13:34:55             'content-encoding' header.
13:34:55
13:34:55         :param release_conn:
13:34:55             If False, then the urlopen call will not release the connection
13:34:55             back into the pool once a response is received (but will release if
13:34:55             you read the entire contents of the response such as when
13:34:55             `preload_content=True`). This is useful if you're not preloading
13:34:55             the response's content immediately. You will need to call
13:34:55             ``r.release_conn()`` on the response ``r`` to return the connection
13:34:55             back into the pool. If None, it takes the value of ``preload_content``
13:34:55             which defaults to ``True``.
13:34:55
13:34:55         :param bool chunked:
13:34:55             If True, urllib3 will send the body using chunked transfer
13:34:55             encoding. Otherwise, urllib3 will send the body using the standard
13:34:55             content-length form. Defaults to False.
13:34:55
13:34:55         :param int body_pos:
13:34:55             Position to seek to in file-like body in the event of a retry or
13:34:55             redirect. Typically this won't need to be set because urllib3 will
13:34:55             auto-populate the value when needed.
13:34:55         """
13:34:55         parsed_url = parse_url(url)
13:34:55         destination_scheme = parsed_url.scheme
13:34:55
13:34:55         if headers is None:
13:34:55             headers = self.headers
13:34:55
13:34:55         if not isinstance(retries, Retry):
13:34:55             retries = Retry.from_int(retries, redirect=redirect, default=self.retries)
13:34:55
13:34:55         if release_conn is None:
13:34:55             release_conn = preload_content
13:34:55
13:34:55         # Check host
13:34:55         if assert_same_host and not self.is_same_host(url):
13:34:55             raise HostChangedError(self, url, retries)
13:34:55
13:34:55         # Ensure that the URL we're connecting to is properly encoded
13:34:55         if url.startswith("/"):
13:34:55             url = to_str(_encode_target(url))
13:34:55         else:
13:34:55             url = to_str(parsed_url.url)
13:34:55
13:34:55         conn = None
13:34:55
13:34:55         # Track whether `conn` needs to be released before
13:34:55         # returning/raising/recursing. Update this variable if necessary, and
13:34:55         # leave `release_conn` constant throughout the function. That way, if
13:34:55         # the function recurses, the original value of `release_conn` will be
13:34:55         # passed down into the recursive call, and its value will be respected.
13:34:55         #
13:34:55         # See issue #651 [1] for details.
13:34:55         #
13:34:55         # [1]
13:34:55         release_this_conn = release_conn
13:34:55
13:34:55         http_tunnel_required = connection_requires_http_tunnel(
13:34:55             self.proxy, self.proxy_config, destination_scheme
13:34:55         )
13:34:55
13:34:55         # Merge the proxy headers. Only done when not using HTTP CONNECT. We
13:34:55         # have to copy the headers dict so we can safely change it without those
13:34:55         # changes being reflected in anyone else's copy.
13:34:55         if not http_tunnel_required:
13:34:55             headers = headers.copy()  # type: ignore[attr-defined]
13:34:55             headers.update(self.proxy_headers)  # type: ignore[union-attr]
13:34:55
13:34:55         # Must keep the exception bound to a separate variable or else Python 3
13:34:55         # complains about UnboundLocalError.
13:34:55         err = None
13:34:55
13:34:55         # Keep track of whether we cleanly exited the except block. This
13:34:55         # ensures we do proper cleanup in finally.
13:34:55         clean_exit = False
13:34:55
13:34:55         # Rewind body position, if needed. Record current position
13:34:55         # for future rewinds in the event of a redirect/retry.
13:34:55         body_pos = set_file_position(body, body_pos)
13:34:55
13:34:55         try:
13:34:55             # Request a connection from the queue.
13:34:55             timeout_obj = self._get_timeout(timeout)
13:34:55             conn = self._get_conn(timeout=pool_timeout)
13:34:55
13:34:55             conn.timeout = timeout_obj.connect_timeout  # type: ignore[assignment]
13:34:55
13:34:55             # Is this a closed/new connection that requires CONNECT tunnelling?
13:34:55             if self.proxy is not None and http_tunnel_required and conn.is_closed:
13:34:55                 try:
13:34:55                     self._prepare_proxy(conn)
13:34:55                 except (BaseSSLError, OSError, SocketTimeout) as e:
13:34:55                     self._raise_timeout(
13:34:55                         err=e, url=self.proxy.url, timeout_value=conn.timeout
13:34:55                     )
13:34:55                     raise
13:34:55
13:34:55             # If we're going to release the connection in ``finally:``, then
13:34:55             # the response doesn't need to know about the connection. Otherwise
13:34:55             # it will also try to release it and we'll have a double-release
13:34:55             # mess.
13:34:55             response_conn = conn if not release_conn else None
13:34:55
13:34:55             # Make the request on the HTTPConnection object
13:34:55 >           response = self._make_request(
13:34:55                 conn,
13:34:55                 method,
13:34:55                 url,
13:34:55                 timeout=timeout_obj,
13:34:55                 body=body,
13:34:55                 headers=headers,
13:34:55                 chunked=chunked,
13:34:55                 retries=retries,
13:34:55                 response_conn=response_conn,
13:34:55                 preload_content=preload_content,
13:34:55                 decode_content=decode_content,
13:34:55                 **response_kw,
13:34:55             )
13:34:55
13:34:55 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connectionpool.py:787:
13:34:55 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
13:34:55 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connectionpool.py:493: in _make_request
13:34:55     conn.request(
13:34:55 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:500: in request
13:34:55     self.endheaders()
13:34:55 /opt/pyenv/versions/3.11.10/lib/python3.11/http/client.py:1298: in endheaders
13:34:55     self._send_output(message_body, encode_chunked=encode_chunked)
13:34:55 /opt/pyenv/versions/3.11.10/lib/python3.11/http/client.py:1058: in _send_output
13:34:55     self.send(msg)
13:34:55 /opt/pyenv/versions/3.11.10/lib/python3.11/http/client.py:996: in send
13:34:55     self.connect()
13:34:55 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:331: in connect
13:34:55     self.sock = self._new_conn()
13:34:55     ^^^^^^^^^^^^^^^^
13:34:55 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
13:34:55
13:34:55 self = 
13:34:55
13:34:55     def _new_conn(self) -> socket.socket:
13:34:55         """Establish a socket connection and set nodelay settings on it.
13:34:55
13:34:55         :return: New socket connection.
13:34:55         """
13:34:55         try:
13:34:55             sock = connection.create_connection(
13:34:55                 (self._dns_host, self.port),
13:34:55                 self.timeout,
13:34:55                 source_address=self.source_address,
13:34:55                 socket_options=self.socket_options,
13:34:55             )
13:34:55         except socket.gaierror as e:
13:34:55             raise NameResolutionError(self.host, self, e) from e
13:34:55         except SocketTimeout as e:
13:34:55             raise ConnectTimeoutError(
13:34:55                 self,
13:34:55                 f"Connection to {self.host} timed out. (connect timeout={self.timeout})",
13:34:55             ) from e
13:34:55
13:34:55         except OSError as e:
13:34:55 >           raise NewConnectionError(
13:34:55                 self, f"Failed to establish a new connection: {e}"
13:34:55             ) from e
13:34:55 E           urllib3.exceptions.NewConnectionError: HTTPConnection(host='localhost', port=8191): Failed to establish a new connection: [Errno 111] Connection refused
13:34:55
13:34:55 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:219: NewConnectionError
13:34:55
13:34:55 The above exception was the direct cause of the following exception:
13:34:55
13:34:55 self = 
13:34:55 request = , stream = False
13:34:55 timeout = Timeout(connect=30, read=30, total=None), verify = True, cert = None
13:34:55 proxies = OrderedDict()
13:34:55
13:34:55     def send(
13:34:55         self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None
13:34:55     ):
13:34:55         """Sends PreparedRequest object. Returns Response object.
13:34:55
13:34:55         :param request: The :class:`PreparedRequest ` being sent.
13:34:55         :param stream: (optional) Whether to stream the request content.
13:34:55         :param timeout: (optional) How long to wait for the server to send
13:34:55             data before giving up, as a float, or a :ref:`(connect timeout,
13:34:55             read timeout) ` tuple.
13:34:55         :type timeout: float or tuple or urllib3 Timeout object
13:34:55         :param verify: (optional) Either a boolean, in which case it controls whether
13:34:55             we verify the server's TLS certificate, or a string, in which case it
13:34:55             must be a path to a CA bundle to use
13:34:55         :param cert: (optional) Any user-provided SSL certificate to be trusted.
13:34:55         :param proxies: (optional) The proxies dictionary to apply to the request.
13:34:55         :rtype: requests.Response
13:34:55         """
13:34:55 
13:34:55         try:
13:34:55             conn = self.get_connection_with_tls_context(
13:34:55                 request, verify, proxies=proxies, cert=cert
13:34:55             )
13:34:55         except LocationValueError as e:
13:34:55             raise InvalidURL(e, request=request)
13:34:55 
13:34:55         self.cert_verify(conn, request.url, verify, cert)
13:34:55         url = self.request_url(request, proxies)
13:34:55         self.add_headers(
13:34:55             request,
13:34:55             stream=stream,
13:34:55             timeout=timeout,
13:34:55             verify=verify,
13:34:55             cert=cert,
13:34:55             proxies=proxies,
13:34:55         )
13:34:55 
13:34:55         chunked = not (request.body is None or "Content-Length" in request.headers)
13:34:55 
13:34:55         if isinstance(timeout, tuple):
13:34:55             try:
13:34:55                 connect, read = timeout
13:34:55                 timeout = TimeoutSauce(connect=connect, read=read)
13:34:55             except ValueError:
13:34:55                 raise ValueError(
13:34:55                     f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, "
13:34:55                     f"or a single float to set both timeouts to the same value."
13:34:55                 )
13:34:55         elif isinstance(timeout, TimeoutSauce):
13:34:55             pass
13:34:55         else:
13:34:55             timeout = TimeoutSauce(connect=timeout, read=timeout)
13:34:55 
13:34:55         try:
13:34:55 >           resp = conn.urlopen(
13:34:55                 method=request.method,
13:34:55                 url=url,
13:34:55                 body=request.body,
13:34:55                 headers=request.headers,
13:34:55                 redirect=False,
13:34:55                 assert_same_host=False,
13:34:55                 preload_content=False,
13:34:55                 decode_content=False,
13:34:55                 retries=self.max_retries,
13:34:55                 timeout=timeout,
13:34:55                 chunked=chunked,
13:34:55             )
13:34:55 
13:34:55 ../.tox/tests121/lib/python3.11/site-packages/requests/adapters.py:644: 
13:34:55 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
13:34:55 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connectionpool.py:841: in urlopen
13:34:55     retries = retries.increment(
13:34:55 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
13:34:55 
13:34:55 self = Retry(total=0, connect=None, read=False, redirect=None, status=None)
13:34:55 method = 'GET'
13:34:55 url = '/rests/data/transportpce-portmapping:network/nodes=XPDRA01/node-info'
13:34:55 response = None
13:34:55 error = NewConnectionError("HTTPConnection(host='localhost', port=8191): Failed to establish a new connection: [Errno 111] Connection refused")
13:34:55 _pool = 
13:34:55 _stacktrace = 
13:34:55 
13:34:55     def increment(
13:34:55         self,
13:34:55         method: str | None = None,
13:34:55         url: str | None = None,
13:34:55         response: BaseHTTPResponse | None = None,
13:34:55         error: Exception | None = None,
13:34:55         _pool: ConnectionPool | None = None,
13:34:55         _stacktrace: TracebackType | None = None,
13:34:55     ) -> Self:
13:34:55         """Return a new Retry object with incremented retry counters.
13:34:55 
13:34:55         :param response: A response object, or None, if the server did not
13:34:55             return a response.
13:34:55         :type response: :class:`~urllib3.response.BaseHTTPResponse`
13:34:55         :param Exception error: An error encountered during the request, or
13:34:55             None if the response was received successfully.
13:34:55 
13:34:55         :return: A new ``Retry`` object.
13:34:55         """
13:34:55         if self.total is False and error:
13:34:55             # Disabled, indicate to re-raise the error.
13:34:55             raise reraise(type(error), error, _stacktrace)
13:34:55 
13:34:55         total = self.total
13:34:55         if total is not None:
13:34:55             total -= 1
13:34:55 
13:34:55         connect = self.connect
13:34:55         read = self.read
13:34:55         redirect = self.redirect
13:34:55         status_count = self.status
13:34:55         other = self.other
13:34:55         cause = "unknown"
13:34:55         status = None
13:34:55         redirect_location = None
13:34:55 
13:34:55         if error and self._is_connection_error(error):
13:34:55             # Connect retry?
13:34:55             if connect is False:
13:34:55                 raise reraise(type(error), error, _stacktrace)
13:34:55             elif connect is not None:
13:34:55                 connect -= 1
13:34:55 
13:34:55         elif error and self._is_read_error(error):
13:34:55             # Read retry?
13:34:55             if read is False or method is None or not self._is_method_retryable(method):
13:34:55                 raise reraise(type(error), error, _stacktrace)
13:34:55             elif read is not None:
13:34:55                 read -= 1
13:34:55 
13:34:55         elif error:
13:34:55             # Other retry?
13:34:55             if other is not None:
13:34:55                 other -= 1
13:34:55 
13:34:55         elif response and response.get_redirect_location():
13:34:55             # Redirect retry?
13:34:55             if redirect is not None:
13:34:55                 redirect -= 1
13:34:55             cause = "too many redirects"
13:34:55             response_redirect_location = response.get_redirect_location()
13:34:55             if response_redirect_location:
13:34:55                 redirect_location = response_redirect_location
13:34:55             status = response.status
13:34:55 
13:34:55         else:
13:34:55             # Incrementing because of a server error like a 500 in
13:34:55             # status_forcelist and the given method is in the allowed_methods
13:34:55             cause = ResponseError.GENERIC_ERROR
13:34:55             if response and response.status:
13:34:55                 if status_count is not None:
13:34:55                     status_count -= 1
13:34:55                 cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status)
13:34:55                 status = response.status
13:34:55 
13:34:55         history = self.history + (
13:34:55             RequestHistory(method, url, error, status, redirect_location),
13:34:55         )
13:34:55 
13:34:55         new_retry = self.new(
13:34:55             total=total,
13:34:55             connect=connect,
13:34:55             read=read,
13:34:55             redirect=redirect,
13:34:55             status=status_count,
13:34:55             other=other,
13:34:55             history=history,
13:34:55         )
13:34:55 
13:34:55         if new_retry.is_exhausted():
13:34:55             reason = error or ResponseError(cause)
13:34:55 >           raise MaxRetryError(_pool, url, reason) from reason  # type: ignore[arg-type]
13:34:55             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
13:34:55 E           urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=8191): Max retries exceeded with url: /rests/data/transportpce-portmapping:network/nodes=XPDRA01/node-info (Caused by NewConnectionError("HTTPConnection(host='localhost', port=8191): Failed to establish a new connection: [Errno 111] Connection refused"))
13:34:55 
13:34:55 ../.tox/tests121/lib/python3.11/site-packages/urllib3/util/retry.py:535: MaxRetryError
13:34:55 
13:34:55 During handling of the above exception, another exception occurred:
13:34:55 
13:34:55 self = 
13:34:55 
13:34:55     def test_18_xpdr_device_not_connected(self):
13:34:55 >       response = test_utils.get_portmapping_node_attr("XPDRA01", "node-info", None)
13:34:55                    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
13:34:55 
13:34:55 transportpce_tests/1.2.1/test01_portmapping.py:205: 
13:34:55 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
13:34:55 transportpce_tests/common/test_utils.py:519: in get_portmapping_node_attr
13:34:55     response = get_request(target_url)
13:34:55                ^^^^^^^^^^^^^^^^^^^^^^^
13:34:55 transportpce_tests/common/test_utils.py:117: in get_request
13:34:55     return requests.request(
13:34:55 ../.tox/tests121/lib/python3.11/site-packages/requests/api.py:59: in request
13:34:55     return session.request(method=method, url=url, **kwargs)
13:34:55            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
13:34:55 ../.tox/tests121/lib/python3.11/site-packages/requests/sessions.py:589: in request
13:34:55     resp = self.send(prep, **send_kwargs)
13:34:55            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
13:34:55 ../.tox/tests121/lib/python3.11/site-packages/requests/sessions.py:703: in send
13:34:55     r = adapter.send(request, **kwargs)
13:34:55         ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
13:34:55 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
13:34:55 
13:34:55 self = 
13:34:55 request = , stream = False
13:34:55 timeout = Timeout(connect=30, read=30, total=None), verify = True, cert = None
13:34:55 proxies = OrderedDict()
13:34:55 
13:34:55     def send(
13:34:55         self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None
13:34:55     ):
13:34:55         """Sends PreparedRequest object. Returns Response object.
13:34:55 
13:34:55         :param request: The :class:`PreparedRequest ` being sent.
13:34:55         :param stream: (optional) Whether to stream the request content.
13:34:55         :param timeout: (optional) How long to wait for the server to send
13:34:55             data before giving up, as a float, or a :ref:`(connect timeout,
13:34:55             read timeout) ` tuple.
13:34:55         :type timeout: float or tuple or urllib3 Timeout object
13:34:55         :param verify: (optional) Either a boolean, in which case it controls whether
13:34:55             we verify the server's TLS certificate, or a string, in which case it
13:34:55             must be a path to a CA bundle to use
13:34:55         :param cert: (optional) Any user-provided SSL certificate to be trusted.
13:34:55         :param proxies: (optional) The proxies dictionary to apply to the request.
13:34:55         :rtype: requests.Response
13:34:55         """
13:34:55 
13:34:55         try:
13:34:55             conn = self.get_connection_with_tls_context(
13:34:55                 request, verify, proxies=proxies, cert=cert
13:34:55             )
13:34:55         except LocationValueError as e:
13:34:55             raise InvalidURL(e, request=request)
13:34:55 
13:34:55         self.cert_verify(conn, request.url, verify, cert)
13:34:55         url = self.request_url(request, proxies)
13:34:55         self.add_headers(
13:34:55             request,
13:34:55             stream=stream,
13:34:55             timeout=timeout,
13:34:55             verify=verify,
13:34:55             cert=cert,
13:34:55             proxies=proxies,
13:34:55         )
13:34:55 
13:34:55         chunked = not (request.body is None or "Content-Length" in request.headers)
13:34:55 
13:34:55         if isinstance(timeout, tuple):
13:34:55             try:
13:34:55                 connect, read = timeout
13:34:55                 timeout = TimeoutSauce(connect=connect, read=read)
13:34:55             except ValueError:
13:34:55                 raise ValueError(
13:34:55                     f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, "
13:34:55                     f"or a single float to set both timeouts to the same value."
13:34:55                 )
13:34:55         elif isinstance(timeout, TimeoutSauce):
13:34:55             pass
13:34:55         else:
13:34:55             timeout = TimeoutSauce(connect=timeout, read=timeout)
13:34:55 
13:34:55         try:
13:34:55             resp = conn.urlopen(
13:34:55                 method=request.method,
13:34:55                 url=url,
13:34:55                 body=request.body,
13:34:55                 headers=request.headers,
13:34:55                 redirect=False,
13:34:55                 assert_same_host=False,
13:34:55                 preload_content=False,
13:34:55                 decode_content=False,
13:34:55                 retries=self.max_retries,
13:34:55                 timeout=timeout,
13:34:55                 chunked=chunked,
13:34:55             )
13:34:55 
13:34:55         except (ProtocolError, OSError) as err:
13:34:55             raise ConnectionError(err, request=request)
13:34:55 
13:34:55         except MaxRetryError as e:
13:34:55             if isinstance(e.reason, ConnectTimeoutError):
13:34:55                 # TODO: Remove this in 3.0.0: see #2811
13:34:55                 if not isinstance(e.reason, NewConnectionError):
13:34:55                     raise ConnectTimeout(e, request=request)
13:34:55 
13:34:55             if isinstance(e.reason, ResponseError):
13:34:55                 raise RetryError(e, request=request)
13:34:55 
13:34:55             if isinstance(e.reason, _ProxyError):
13:34:55                 raise ProxyError(e, request=request)
13:34:55 
13:34:55             if isinstance(e.reason, _SSLError):
13:34:55                 # This branch is for urllib3 v1.22 and later.
13:34:55                 raise SSLError(e, request=request)
13:34:55 
13:34:55 >           raise ConnectionError(e, request=request)
13:34:55 E           requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=8191): Max retries exceeded with url: /rests/data/transportpce-portmapping:network/nodes=XPDRA01/node-info (Caused by NewConnectionError("HTTPConnection(host='localhost', port=8191): Failed to establish a new connection: [Errno 111] Connection refused"))
13:34:55 
13:34:55 ../.tox/tests121/lib/python3.11/site-packages/requests/adapters.py:677: ConnectionError
13:34:55 ----------------------------- Captured stdout call -----------------------------
13:34:55 execution of test_18_xpdr_device_not_connected
13:34:55 _________ TestTransportPCEPortmapping.test_19_rdm_device_disconnection _________
13:34:55 
13:34:55 self = 
13:34:55 
13:34:55     def _new_conn(self) -> socket.socket:
13:34:55         """Establish a socket connection and set nodelay settings on it.
13:34:55 
13:34:55         :return: New socket connection.
13:34:55         """
13:34:55         try:
13:34:55 >           sock = connection.create_connection(
13:34:55                 (self._dns_host, self.port),
13:34:55                 self.timeout,
13:34:55                 source_address=self.source_address,
13:34:55                 socket_options=self.socket_options,
13:34:55             )
13:34:55 
13:34:55 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:204: 
13:34:55 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
13:34:55 ../.tox/tests121/lib/python3.11/site-packages/urllib3/util/connection.py:85: in create_connection
13:34:55     raise err
13:34:55 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
13:34:55 
13:34:55 address = ('localhost', 8191), timeout = 30, source_address = None
13:34:55 socket_options = [(6, 1, 1)]
13:34:55 
13:34:55     def create_connection(
13:34:55         address: tuple[str, int],
13:34:55         timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT,
13:34:55         source_address: tuple[str, int] | None = None,
13:34:55         socket_options: _TYPE_SOCKET_OPTIONS | None = None,
13:34:55     ) -> socket.socket:
13:34:55         """Connect to *address* and return the socket object.
13:34:55 
13:34:55         Convenience function. Connect to *address* (a 2-tuple ``(host,
13:34:55         port)``) and return the socket object. Passing the optional
13:34:55         *timeout* parameter will set the timeout on the socket instance
13:34:55         before attempting to connect. If no *timeout* is supplied, the
13:34:55         global default timeout setting returned by :func:`socket.getdefaulttimeout`
13:34:55         is used. If *source_address* is set it must be a tuple of (host, port)
13:34:55         for the socket to bind as a source address before making the connection.
13:34:55         An host of '' or port 0 tells the OS to use the default.
13:34:55         """
13:34:55 
13:34:55         host, port = address
13:34:55         if host.startswith("["):
13:34:55             host = host.strip("[]")
13:34:55         err = None
13:34:55 
13:34:55         # Using the value from allowed_gai_family() in the context of getaddrinfo lets
13:34:55         # us select whether to work with IPv4 DNS records, IPv6 records, or both.
13:34:55         # The original create_connection function always returns all records.
13:34:55         family = allowed_gai_family()
13:34:55 
13:34:55         try:
13:34:55             host.encode("idna")
13:34:55         except UnicodeError:
13:34:55             raise LocationParseError(f"'{host}', label empty or too long") from None
13:34:55 
13:34:55         for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM):
13:34:55             af, socktype, proto, canonname, sa = res
13:34:55             sock = None
13:34:55             try:
13:34:55                 sock = socket.socket(af, socktype, proto)
13:34:55 
13:34:55                 # If provided, set socket level options before connecting.
13:34:55                 _set_socket_options(sock, socket_options)
13:34:55 
13:34:55                 if timeout is not _DEFAULT_TIMEOUT:
13:34:55                     sock.settimeout(timeout)
13:34:55                 if source_address:
13:34:55                     sock.bind(source_address)
13:34:55 >               sock.connect(sa)
13:34:55 E               ConnectionRefusedError: [Errno 111] Connection refused
13:34:55 
13:34:55 ../.tox/tests121/lib/python3.11/site-packages/urllib3/util/connection.py:73: ConnectionRefusedError
13:34:55 
13:34:55 The above exception was the direct cause of the following exception:
13:34:55 
13:34:55 self = 
13:34:55 method = 'DELETE'
13:34:55 url = '/rests/data/network-topology:network-topology/topology=topology-netconf/node=ROADMA01'
13:34:55 body = None
13:34:55 headers = {'User-Agent': 'python-requests/2.32.5', 'Accept-Encoding': 'gzip, deflate', 'Accept': 'application/json', 'Connection': 'keep-alive', 'Content-Type': 'application/json', 'Content-Length': '0', 'Authorization': 'Basic YWRtaW46YWRtaW4='}
13:34:55 retries = Retry(total=0, connect=None, read=False, redirect=None, status=None)
13:34:55 redirect = False, assert_same_host = False
13:34:55 timeout = Timeout(connect=30, read=30, total=None), pool_timeout = None
13:34:55 release_conn = False, chunked = False, body_pos = None, preload_content = False
13:34:55 decode_content = False, response_kw = {}
13:34:55 parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/rests/data/network-topology:network-topology/topology=topology-netconf/node=ROADMA01', query=None, fragment=None)
13:34:55 destination_scheme = None, conn = None, release_this_conn = True
13:34:55 http_tunnel_required = False, err = None, clean_exit = False
13:34:55 
13:34:55     def urlopen(  # type: ignore[override]
13:34:55         self,
13:34:55         method: str,
13:34:55         url: str,
13:34:55         body: _TYPE_BODY | None = None,
13:34:55         headers: typing.Mapping[str, str] | None = None,
13:34:55         retries: Retry | bool | int | None = None,
13:34:55         redirect: bool = True,
13:34:55         assert_same_host: bool = True,
13:34:55         timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT,
13:34:55         pool_timeout: int | None = None,
13:34:55         release_conn: bool | None = None,
13:34:55         chunked: bool = False,
13:34:55         body_pos: _TYPE_BODY_POSITION | None = None,
13:34:55         preload_content: bool = True,
13:34:55         decode_content: bool = True,
13:34:55         **response_kw: typing.Any,
13:34:55     ) -> BaseHTTPResponse:
13:34:55         """
13:34:55         Get a connection from the pool and perform an HTTP request. This is the
13:34:55         lowest level call for making a request, so you'll need to specify all
13:34:55         the raw details.
13:34:55 
13:34:55         .. note::
13:34:55 
13:34:55            More commonly, it's appropriate to use a convenience method
13:34:55            such as :meth:`request`.
13:34:55 
13:34:55         .. note::
13:34:55 
13:34:55            `release_conn` will only behave as expected if
13:34:55            `preload_content=False` because we want to make
13:34:55            `preload_content=False` the default behaviour someday soon without
13:34:55            breaking backwards compatibility.
13:34:55 
13:34:55         :param method:
13:34:55             HTTP request method (such as GET, POST, PUT, etc.)
13:34:55 
13:34:55         :param url:
13:34:55             The URL to perform the request on.
13:34:55 
13:34:55         :param body:
13:34:55             Data to send in the request body, either :class:`str`, :class:`bytes`,
13:34:55             an iterable of :class:`str`/:class:`bytes`, or a file-like object.
13:34:55 
13:34:55         :param headers:
13:34:55             Dictionary of custom headers to send, such as User-Agent,
13:34:55             If-None-Match, etc. If None, pool headers are used. If provided,
13:34:55             these headers completely replace any pool-specific headers.
13:34:55 
13:34:55         :param retries:
13:34:55             Configure the number of retries to allow before raising a
13:34:55             :class:`~urllib3.exceptions.MaxRetryError` exception.
13:34:55 
13:34:55             If ``None`` (default) will retry 3 times, see ``Retry.DEFAULT``. Pass a
13:34:55             :class:`~urllib3.util.retry.Retry` object for fine-grained control
13:34:55             over different types of retries.
13:34:55             Pass an integer number to retry connection errors that many times,
13:34:55             but no other types of errors. Pass zero to never retry.
13:34:55 
13:34:55             If ``False``, then retries are disabled and any exception is raised
13:34:55             immediately. Also, instead of raising a MaxRetryError on redirects,
13:34:55             the redirect response will be returned.
13:34:55 
13:34:55         :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int.
13:34:55 
13:34:55         :param redirect:
13:34:55             If True, automatically handle redirects (status codes 301, 302,
13:34:55             303, 307, 308). Each redirect counts as a retry. Disabling retries
13:34:55             will disable redirect, too.
13:34:55 
13:34:55         :param assert_same_host:
13:34:55             If ``True``, will make sure that the host of the pool requests is
13:34:55             consistent else will raise HostChangedError. When ``False``, you can
13:34:55             use the pool on an HTTP proxy and request foreign hosts.
13:34:55 
13:34:55         :param timeout:
13:34:55             If specified, overrides the default timeout for this one
13:34:55             request. It may be a float (in seconds) or an instance of
13:34:55             :class:`urllib3.util.Timeout`.
13:34:55 
13:34:55         :param pool_timeout:
13:34:55             If set and the pool is set to block=True, then this method will
13:34:55             block for ``pool_timeout`` seconds and raise EmptyPoolError if no
13:34:55             connection is available within the time period.
13:34:55 
13:34:55         :param bool preload_content:
13:34:55             If True, the response's body will be preloaded into memory.
13:34:55 
13:34:55         :param bool decode_content:
13:34:55             If True, will attempt to decode the body based on the
13:34:55             'content-encoding' header.
13:34:55 
13:34:55         :param release_conn:
13:34:55             If False, then the urlopen call will not release the connection
13:34:55             back into the pool once a response is received (but will release if
13:34:55             you read the entire contents of the response such as when
13:34:55             `preload_content=True`).
13:34:55             This is useful if you're not preloading
13:34:55             the response's content immediately. You will need to call
13:34:55             ``r.release_conn()`` on the response ``r`` to return the connection
13:34:55             back into the pool. If None, it takes the value of ``preload_content``
13:34:55             which defaults to ``True``.
13:34:55 
13:34:55         :param bool chunked:
13:34:55             If True, urllib3 will send the body using chunked transfer
13:34:55             encoding. Otherwise, urllib3 will send the body using the standard
13:34:55             content-length form. Defaults to False.
13:34:55 
13:34:55         :param int body_pos:
13:34:55             Position to seek to in file-like body in the event of a retry or
13:34:55             redirect. Typically this won't need to be set because urllib3 will
13:34:55             auto-populate the value when needed.
13:34:55         """
13:34:55         parsed_url = parse_url(url)
13:34:55         destination_scheme = parsed_url.scheme
13:34:55 
13:34:55         if headers is None:
13:34:55             headers = self.headers
13:34:55 
13:34:55         if not isinstance(retries, Retry):
13:34:55             retries = Retry.from_int(retries, redirect=redirect, default=self.retries)
13:34:55 
13:34:55         if release_conn is None:
13:34:55             release_conn = preload_content
13:34:55 
13:34:55         # Check host
13:34:55         if assert_same_host and not self.is_same_host(url):
13:34:55             raise HostChangedError(self, url, retries)
13:34:55 
13:34:55         # Ensure that the URL we're connecting to is properly encoded
13:34:55         if url.startswith("/"):
13:34:55             url = to_str(_encode_target(url))
13:34:55         else:
13:34:55             url = to_str(parsed_url.url)
13:34:55 
13:34:55         conn = None
13:34:55 
13:34:55         # Track whether `conn` needs to be released before
13:34:55         # returning/raising/recursing. Update this variable if necessary, and
13:34:55         # leave `release_conn` constant throughout the function. That way, if
13:34:55         # the function recurses, the original value of `release_conn` will be
13:34:55         # passed down into the recursive call, and its value will be respected.
13:34:55         #
13:34:55         # See issue #651 [1] for details.
13:34:55         #
13:34:55         # [1]
13:34:55         release_this_conn = release_conn
13:34:55 
13:34:55         http_tunnel_required = connection_requires_http_tunnel(
13:34:55             self.proxy, self.proxy_config, destination_scheme
13:34:55         )
13:34:55 
13:34:55         # Merge the proxy headers. Only done when not using HTTP CONNECT. We
13:34:55         # have to copy the headers dict so we can safely change it without those
13:34:55         # changes being reflected in anyone else's copy.
13:34:55         if not http_tunnel_required:
13:34:55             headers = headers.copy()  # type: ignore[attr-defined]
13:34:55             headers.update(self.proxy_headers)  # type: ignore[union-attr]
13:34:55 
13:34:55         # Must keep the exception bound to a separate variable or else Python 3
13:34:55         # complains about UnboundLocalError.
13:34:55         err = None
13:34:55 
13:34:55         # Keep track of whether we cleanly exited the except block. This
13:34:55         # ensures we do proper cleanup in finally.
13:34:55         clean_exit = False
13:34:55 
13:34:55         # Rewind body position, if needed. Record current position
13:34:55         # for future rewinds in the event of a redirect/retry.
13:34:55         body_pos = set_file_position(body, body_pos)
13:34:55 
13:34:55         try:
13:34:55             # Request a connection from the queue.
13:34:55             timeout_obj = self._get_timeout(timeout)
13:34:55             conn = self._get_conn(timeout=pool_timeout)
13:34:55 
13:34:55             conn.timeout = timeout_obj.connect_timeout  # type: ignore[assignment]
13:34:55 
13:34:55             # Is this a closed/new connection that requires CONNECT tunnelling?
13:34:55             if self.proxy is not None and http_tunnel_required and conn.is_closed:
13:34:55                 try:
13:34:55                     self._prepare_proxy(conn)
13:34:55                 except (BaseSSLError, OSError, SocketTimeout) as e:
13:34:55                     self._raise_timeout(
13:34:55                         err=e, url=self.proxy.url, timeout_value=conn.timeout
13:34:55                     )
13:34:55                     raise
13:34:55 
13:34:55             # If we're going to release the connection in ``finally:``, then
13:34:55             # the response doesn't need to know about the connection.
13:34:55             # Otherwise
13:34:55             # it will also try to release it and we'll have a double-release
13:34:55             # mess.
13:34:55             response_conn = conn if not release_conn else None
13:34:55 
13:34:55             # Make the request on the HTTPConnection object
13:34:55 >           response = self._make_request(
13:34:55                 conn,
13:34:55                 method,
13:34:55                 url,
13:34:55                 timeout=timeout_obj,
13:34:55                 body=body,
13:34:55                 headers=headers,
13:34:55                 chunked=chunked,
13:34:55                 retries=retries,
13:34:55                 response_conn=response_conn,
13:34:55                 preload_content=preload_content,
13:34:55                 decode_content=decode_content,
13:34:55                 **response_kw,
13:34:55             )
13:34:55 
13:34:55 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connectionpool.py:787: 
13:34:55 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
13:34:55 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connectionpool.py:493: in _make_request
13:34:55     conn.request(
13:34:55 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:500: in request
13:34:55     self.endheaders()
13:34:55 /opt/pyenv/versions/3.11.10/lib/python3.11/http/client.py:1298: in endheaders
13:34:55     self._send_output(message_body, encode_chunked=encode_chunked)
13:34:55 /opt/pyenv/versions/3.11.10/lib/python3.11/http/client.py:1058: in _send_output
13:34:55     self.send(msg)
13:34:55 /opt/pyenv/versions/3.11.10/lib/python3.11/http/client.py:996: in send
13:34:55     self.connect()
13:34:55 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:331: in connect
13:34:55     self.sock = self._new_conn()
13:34:55                 ^^^^^^^^^^^^^^^^
13:34:55 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
13:34:55 
13:34:55 self = 
13:34:55 
13:34:55     def _new_conn(self) -> socket.socket:
13:34:55         """Establish a socket connection and set nodelay settings on it.
13:34:55 
13:34:55         :return: New socket connection.
13:34:55         """
13:34:55         try:
13:34:55             sock = connection.create_connection(
13:34:55                 (self._dns_host, self.port),
13:34:55                 self.timeout,
13:34:55                 source_address=self.source_address,
13:34:55                 socket_options=self.socket_options,
13:34:55             )
13:34:55         except socket.gaierror as e:
13:34:55             raise NameResolutionError(self.host, self, e) from e
13:34:55         except SocketTimeout as e:
13:34:55             raise ConnectTimeoutError(
13:34:55                 self,
13:34:55                 f"Connection to {self.host} timed out. (connect timeout={self.timeout})",
13:34:55             ) from e
13:34:55 
13:34:55         except OSError as e:
13:34:55 >           raise NewConnectionError(
13:34:55                 self, f"Failed to establish a new connection: {e}"
13:34:55             ) from e
13:34:55 E           urllib3.exceptions.NewConnectionError: HTTPConnection(host='localhost', port=8191): Failed to establish a new connection: [Errno 111] Connection refused
13:34:55 
13:34:55 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:219: NewConnectionError
13:34:55 
13:34:55 The above exception was the direct cause of the following exception:
13:34:55 
13:34:55 self = 
13:34:55 request = , stream = False
13:34:55 timeout = Timeout(connect=30, read=30, total=None), verify = True, cert = None
13:34:55 proxies = OrderedDict()
13:34:55 
13:34:55     def send(
13:34:55         self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None
13:34:55     ):
13:34:55         """Sends PreparedRequest object. Returns Response object.
13:34:55 
13:34:55         :param request: The :class:`PreparedRequest ` being sent.
13:34:55         :param stream: (optional) Whether to stream the request content.
13:34:55         :param timeout: (optional) How long to wait for the server to send
13:34:55             data before giving up, as a float, or a :ref:`(connect timeout,
13:34:55             read timeout) ` tuple.
13:34:55         :type timeout: float or tuple or urllib3 Timeout object
13:34:55         :param verify: (optional) Either a boolean, in which case it controls whether
13:34:55             we verify the server's TLS certificate, or a string, in which case it
13:34:55             must be a path to a CA bundle to use
13:34:55         :param cert: (optional) Any user-provided SSL certificate to be trusted.
13:34:55         :param proxies: (optional) The proxies dictionary to apply to the request.
13:34:55         :rtype: requests.Response
13:34:55         """
13:34:55 
13:34:55         try:
13:34:55             conn = self.get_connection_with_tls_context(
13:34:55                 request, verify, proxies=proxies, cert=cert
13:34:55             )
13:34:55         except LocationValueError as e:
13:34:55             raise InvalidURL(e, request=request)
13:34:55 
13:34:55         self.cert_verify(conn, request.url, verify, cert)
13:34:55         url = self.request_url(request, proxies)
13:34:55         self.add_headers(
13:34:55             request,
13:34:55             stream=stream,
13:34:55             timeout=timeout,
13:34:55             verify=verify,
13:34:55             cert=cert,
13:34:55             proxies=proxies,
13:34:55         )
13:34:55 
13:34:55         chunked = not (request.body is None or "Content-Length" in request.headers)
13:34:55 
13:34:55         if isinstance(timeout, tuple):
13:34:55             try:
13:34:55                 connect, read = timeout
13:34:55                 timeout = TimeoutSauce(connect=connect, read=read)
13:34:55             except ValueError:
13:34:55                 raise ValueError(
13:34:55                     f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, "
13:34:55                     f"or a single float to set both timeouts to the same value."
13:34:55                 )
13:34:55         elif isinstance(timeout, TimeoutSauce):
13:34:55             pass
13:34:55         else:
13:34:55             timeout = TimeoutSauce(connect=timeout, read=timeout)
13:34:55 
13:34:55         try:
13:34:55 >           resp = conn.urlopen(
13:34:55                 method=request.method,
13:34:55                 url=url,
13:34:55                 body=request.body,
13:34:55                 headers=request.headers,
13:34:55                 redirect=False,
13:34:55                 assert_same_host=False,
13:34:55                 preload_content=False,
13:34:55                 decode_content=False,
13:34:55                 retries=self.max_retries,
13:34:55                 timeout=timeout,
13:34:55                 chunked=chunked,
13:34:55             )
13:34:55 
13:34:55 ../.tox/tests121/lib/python3.11/site-packages/requests/adapters.py:644: 
13:34:55 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
13:34:55 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connectionpool.py:841: in urlopen
13:34:55     retries = retries.increment(
13:34:55 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
13:34:55 
13:34:55 self = Retry(total=0, connect=None, read=False, redirect=None, status=None)
13:34:55 method = 'DELETE'
13:34:55 url = '/rests/data/network-topology:network-topology/topology=topology-netconf/node=ROADMA01'
13:34:55 response = None
13:34:55 error = NewConnectionError("HTTPConnection(host='localhost', port=8191): Failed to establish a new connection: [Errno 111] Connection refused")
13:34:55 _pool = 
13:34:55 _stacktrace = 
13:34:55 
13:34:55     def increment(
13:34:55         self,
13:34:55         method: str | None = None,
13:34:55         url: str | None = None,
13:34:55         response: BaseHTTPResponse | None = None,
13:34:55         error: Exception | None = None,
13:34:55         _pool: ConnectionPool | None = None,
13:34:55         _stacktrace: TracebackType | None = None,
13:34:55     ) -> Self:
13:34:55         """Return a new Retry object with incremented retry counters.
13:34:55 
13:34:55         :param response: A response object, or None, if the server did not
13:34:55             return a response.
13:34:55         :type response: :class:`~urllib3.response.BaseHTTPResponse`
13:34:55         :param Exception error: An error encountered during the request, or
13:34:55             None if the response was received successfully.
13:34:55 
13:34:55         :return: A new ``Retry`` object.
13:34:55         """
13:34:55         if self.total is False and error:
13:34:55             # Disabled, indicate to re-raise the error.
13:34:55             raise reraise(type(error), error, _stacktrace)
13:34:55 
13:34:55         total = self.total
13:34:55         if total is not None:
13:34:55             total -= 1
13:34:55 
13:34:55         connect = self.connect
13:34:55         read = self.read
13:34:55         redirect = self.redirect
13:34:55         status_count = self.status
13:34:55         other = self.other
13:34:55         cause = "unknown"
13:34:55         status = None
13:34:55         redirect_location = None
13:34:55 
13:34:55         if error and self._is_connection_error(error):
13:34:55             # Connect retry?
13:34:55             if connect is False:
13:34:55                 raise reraise(type(error), error, _stacktrace)
13:34:55             elif connect is not None:
13:34:55                 connect -= 1
13:34:55 
13:34:55         elif error and self._is_read_error(error):
13:34:55             # Read retry?
13:34:55             if read is False or method is None or not self._is_method_retryable(method):
13:34:55                 raise reraise(type(error), error, _stacktrace)
13:34:55             elif read is not None:
13:34:55                 read -= 1
13:34:55 
13:34:55         elif error:
13:34:55             # Other retry?
13:34:55             if other is not None:
13:34:55                 other -= 1
13:34:55 
13:34:55         elif response and response.get_redirect_location():
13:34:55             # Redirect retry?
13:34:55             if redirect is not None:
13:34:55                 redirect -= 1
13:34:55             cause = "too many redirects"
13:34:55             response_redirect_location = response.get_redirect_location()
13:34:55             if response_redirect_location:
13:34:55                 redirect_location = response_redirect_location
13:34:55             status = response.status
13:34:55 
13:34:55         else:
13:34:55             # Incrementing because of a server error like a 500 in
13:34:55             # status_forcelist and the given method is in the allowed_methods
13:34:55             cause = ResponseError.GENERIC_ERROR
13:34:55             if response and response.status:
13:34:55                 if status_count is not None:
13:34:55                     status_count -= 1
13:34:55                 cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status)
13:34:55                 status = response.status
13:34:55 
13:34:55         history = self.history + (
13:34:55             RequestHistory(method, url, error, status, redirect_location),
13:34:55         )
13:34:55 
13:34:55         new_retry = self.new(
13:34:55             total=total,
13:34:55             connect=connect,
13:34:55             read=read,
13:34:55             redirect=redirect,
13:34:55             status=status_count,
13:34:55             other=other,
13:34:55             history=history,
13:34:55         )
13:34:55 
13:34:55         if new_retry.is_exhausted():
13:34:55             reason = error or ResponseError(cause)
13:34:55 >           raise MaxRetryError(_pool, url, reason) from reason  # type: ignore[arg-type]
13:34:55             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
13:34:55 E           urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=8191): Max retries exceeded with url: /rests/data/network-topology:network-topology/topology=topology-netconf/node=ROADMA01 (Caused by NewConnectionError("HTTPConnection(host='localhost', port=8191): Failed to establish a new connection: [Errno 111] Connection refused"))
13:34:55 
13:34:55 ../.tox/tests121/lib/python3.11/site-packages/urllib3/util/retry.py:535: MaxRetryError
13:34:55 
13:34:55 During handling of the above exception, another exception occurred:
13:34:55 
13:34:55 self = 
13:34:55 
13:34:55     def test_19_rdm_device_disconnection(self):
13:34:55 >       response = test_utils.unmount_device("ROADMA01")
13:34:55                    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
13:34:55 
13:34:55 transportpce_tests/1.2.1/test01_portmapping.py:213: 
13:34:55 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
13:34:55 transportpce_tests/common/test_utils.py:398: in unmount_device
13:34:55     response = delete_request(url[RESTCONF_VERSION].format('{}', node))
13:34:55                ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
13:34:55 transportpce_tests/common/test_utils.py:134: in delete_request
13:34:55     return requests.request(
13:34:55 ../.tox/tests121/lib/python3.11/site-packages/requests/api.py:59: in request
13:34:55     return session.request(method=method, url=url, **kwargs)
13:34:55            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
13:34:55 ../.tox/tests121/lib/python3.11/site-packages/requests/sessions.py:589: in request
13:34:55     resp = self.send(prep, **send_kwargs)
13:34:55            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
13:34:55 ../.tox/tests121/lib/python3.11/site-packages/requests/sessions.py:703: in send
13:34:55     r = adapter.send(request, **kwargs)
13:34:55         ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
13:34:55 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
13:34:55 
13:34:55 self = 
13:34:55 request = , stream = False
13:34:55 timeout = Timeout(connect=30, read=30, total=None), verify = True, cert = None
13:34:55 proxies = OrderedDict()
13:34:55 
13:34:55     def send(
13:34:55         self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None
13:34:55     ):
13:34:55         """Sends PreparedRequest object. Returns Response object.
13:34:55 
13:34:55         :param request: The :class:`PreparedRequest ` being sent.
13:34:55         :param stream: (optional) Whether to stream the request content.
13:34:55         :param timeout: (optional) How long to wait for the server to send
13:34:55             data before giving up, as a float, or a :ref:`(connect timeout,
13:34:55             read timeout) ` tuple.
13:34:55 :type timeout: float or tuple or urllib3 Timeout object 13:34:55 :param verify: (optional) Either a boolean, in which case it controls whether 13:34:55 we verify the server's TLS certificate, or a string, in which case it 13:34:55 must be a path to a CA bundle to use 13:34:55 :param cert: (optional) Any user-provided SSL certificate to be trusted. 13:34:55 :param proxies: (optional) The proxies dictionary to apply to the request. 13:34:55 :rtype: requests.Response 13:34:55 """ 13:34:55 13:34:55 try: 13:34:55 conn = self.get_connection_with_tls_context( 13:34:55 request, verify, proxies=proxies, cert=cert 13:34:55 ) 13:34:55 except LocationValueError as e: 13:34:55 raise InvalidURL(e, request=request) 13:34:55 13:34:55 self.cert_verify(conn, request.url, verify, cert) 13:34:55 url = self.request_url(request, proxies) 13:34:55 self.add_headers( 13:34:55 request, 13:34:55 stream=stream, 13:34:55 timeout=timeout, 13:34:55 verify=verify, 13:34:55 cert=cert, 13:34:55 proxies=proxies, 13:34:55 ) 13:34:55 13:34:55 chunked = not (request.body is None or "Content-Length" in request.headers) 13:34:55 13:34:55 if isinstance(timeout, tuple): 13:34:55 try: 13:34:55 connect, read = timeout 13:34:55 timeout = TimeoutSauce(connect=connect, read=read) 13:34:55 except ValueError: 13:34:55 raise ValueError( 13:34:55 f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 13:34:55 f"or a single float to set both timeouts to the same value." 
13:34:55 ) 13:34:55 elif isinstance(timeout, TimeoutSauce): 13:34:55 pass 13:34:55 else: 13:34:55 timeout = TimeoutSauce(connect=timeout, read=timeout) 13:34:55 13:34:55 try: 13:34:55 resp = conn.urlopen( 13:34:55 method=request.method, 13:34:55 url=url, 13:34:55 body=request.body, 13:34:55 headers=request.headers, 13:34:55 redirect=False, 13:34:55 assert_same_host=False, 13:34:55 preload_content=False, 13:34:55 decode_content=False, 13:34:55 retries=self.max_retries, 13:34:55 timeout=timeout, 13:34:55 chunked=chunked, 13:34:55 ) 13:34:55 13:34:55 except (ProtocolError, OSError) as err: 13:34:55 raise ConnectionError(err, request=request) 13:34:55 13:34:55 except MaxRetryError as e: 13:34:55 if isinstance(e.reason, ConnectTimeoutError): 13:34:55 # TODO: Remove this in 3.0.0: see #2811 13:34:55 if not isinstance(e.reason, NewConnectionError): 13:34:55 raise ConnectTimeout(e, request=request) 13:34:55 13:34:55 if isinstance(e.reason, ResponseError): 13:34:55 raise RetryError(e, request=request) 13:34:55 13:34:55 if isinstance(e.reason, _ProxyError): 13:34:55 raise ProxyError(e, request=request) 13:34:55 13:34:55 if isinstance(e.reason, _SSLError): 13:34:55 # This branch is for urllib3 v1.22 and later. 
13:34:55 raise SSLError(e, request=request) 13:34:55 13:34:55 > raise ConnectionError(e, request=request) 13:34:55 E requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=8191): Max retries exceeded with url: /rests/data/network-topology:network-topology/topology=topology-netconf/node=ROADMA01 (Caused by NewConnectionError("HTTPConnection(host='localhost', port=8191): Failed to establish a new connection: [Errno 111] Connection refused")) 13:34:55 13:34:55 ../.tox/tests121/lib/python3.11/site-packages/requests/adapters.py:677: ConnectionError 13:34:55 ----------------------------- Captured stdout call ----------------------------- 13:34:55 execution of test_19_rdm_device_disconnection 13:34:55 _________ TestTransportPCEPortmapping.test_20_rdm_device_disconnected __________ 13:34:55 13:34:55 self = 13:34:55 13:34:55 def _new_conn(self) -> socket.socket: 13:34:55 """Establish a socket connection and set nodelay settings on it. 13:34:55 13:34:55 :return: New socket connection. 
13:34:55         """
13:34:55         try:
13:34:55 >           sock = connection.create_connection(
13:34:55                 (self._dns_host, self.port),
13:34:55                 self.timeout,
13:34:55                 source_address=self.source_address,
13:34:55                 socket_options=self.socket_options,
13:34:55             )
13:34:55
13:34:55 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:204:
13:34:55 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
13:34:55 ../.tox/tests121/lib/python3.11/site-packages/urllib3/util/connection.py:85: in create_connection
13:34:55     raise err
13:34:55 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
13:34:55
13:34:55 address = ('localhost', 8191), timeout = 30, source_address = None
13:34:55 socket_options = [(6, 1, 1)]
13:34:55
13:34:55     def create_connection(
13:34:55         address: tuple[str, int],
13:34:55         timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT,
13:34:55         source_address: tuple[str, int] | None = None,
13:34:55         socket_options: _TYPE_SOCKET_OPTIONS | None = None,
13:34:55     ) -> socket.socket:
13:34:55         """Connect to *address* and return the socket object.
13:34:55
13:34:55         Convenience function. Connect to *address* (a 2-tuple ``(host,
13:34:55         port)``) and return the socket object. Passing the optional
13:34:55         *timeout* parameter will set the timeout on the socket instance
13:34:55         before attempting to connect. If no *timeout* is supplied, the
13:34:55         global default timeout setting returned by :func:`socket.getdefaulttimeout`
13:34:55         is used. If *source_address* is set it must be a tuple of (host, port)
13:34:55         for the socket to bind as a source address before making the connection.
13:34:55         An host of '' or port 0 tells the OS to use the default.
13:34:55         """
13:34:55
13:34:55         host, port = address
13:34:55         if host.startswith("["):
13:34:55             host = host.strip("[]")
13:34:55         err = None
13:34:55
13:34:55         # Using the value from allowed_gai_family() in the context of getaddrinfo lets
13:34:55         # us select whether to work with IPv4 DNS records, IPv6 records, or both.
13:34:55         # The original create_connection function always returns all records.
13:34:55         family = allowed_gai_family()
13:34:55
13:34:55         try:
13:34:55             host.encode("idna")
13:34:55         except UnicodeError:
13:34:55             raise LocationParseError(f"'{host}', label empty or too long") from None
13:34:55
13:34:55         for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM):
13:34:55             af, socktype, proto, canonname, sa = res
13:34:55             sock = None
13:34:55             try:
13:34:55                 sock = socket.socket(af, socktype, proto)
13:34:55
13:34:55                 # If provided, set socket level options before connecting.
13:34:55                 _set_socket_options(sock, socket_options)
13:34:55
13:34:55                 if timeout is not _DEFAULT_TIMEOUT:
13:34:55                     sock.settimeout(timeout)
13:34:55                 if source_address:
13:34:55                     sock.bind(source_address)
13:34:55 >               sock.connect(sa)
13:34:55 E               ConnectionRefusedError: [Errno 111] Connection refused
13:34:55
13:34:55 ../.tox/tests121/lib/python3.11/site-packages/urllib3/util/connection.py:73: ConnectionRefusedError
13:34:55
13:34:55 The above exception was the direct cause of the following exception:
13:34:55
13:34:55 self =
13:34:55 method = 'GET'
13:34:55 url = '/rests/data/network-topology:network-topology/topology=topology-netconf/node=ROADMA01?content=nonconfig'
13:34:55 body = None
13:34:55 headers = {'User-Agent': 'python-requests/2.32.5', 'Accept-Encoding': 'gzip, deflate', 'Accept': 'application/json', 'Connection': 'keep-alive', 'Content-Type': 'application/json', 'Authorization': 'Basic YWRtaW46YWRtaW4='}
13:34:55 retries = Retry(total=0, connect=None, read=False, redirect=None, status=None)
13:34:55 redirect = False, assert_same_host = False
13:34:55 timeout = Timeout(connect=30, read=30, total=None), pool_timeout = None
13:34:55 release_conn = False, chunked = False, body_pos = None, preload_content = False
13:34:55 decode_content = False, response_kw = {}
13:34:55 parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/rests/data/network-topology:network-topology/topology=topology-netconf/node=ROADMA01', query='content=nonconfig', fragment=None)
13:34:55 destination_scheme = None, conn = None, release_this_conn = True
13:34:55 http_tunnel_required = False, err = None, clean_exit = False
13:34:55
13:34:55     def urlopen(  # type: ignore[override]
13:34:55         self,
13:34:55         method: str,
13:34:55         url: str,
13:34:55         body: _TYPE_BODY | None = None,
13:34:55         headers: typing.Mapping[str, str] | None = None,
13:34:55         retries: Retry | bool | int | None = None,
13:34:55         redirect: bool = True,
13:34:55         assert_same_host: bool = True,
13:34:55         timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT,
13:34:55         pool_timeout: int | None = None,
13:34:55         release_conn: bool | None = None,
13:34:55         chunked: bool = False,
13:34:55         body_pos: _TYPE_BODY_POSITION | None = None,
13:34:55         preload_content: bool = True,
13:34:55         decode_content: bool = True,
13:34:55         **response_kw: typing.Any,
13:34:55     ) -> BaseHTTPResponse:
13:34:55         """
13:34:55         Get a connection from the pool and perform an HTTP request. This is the
13:34:55         lowest level call for making a request, so you'll need to specify all
13:34:55         the raw details.
13:34:55
13:34:55         .. note::
13:34:55
13:34:55            More commonly, it's appropriate to use a convenience method
13:34:55            such as :meth:`request`.
13:34:55
13:34:55         .. note::
13:34:55
13:34:55            `release_conn` will only behave as expected if
13:34:55            `preload_content=False` because we want to make
13:34:55            `preload_content=False` the default behaviour someday soon without
13:34:55            breaking backwards compatibility.
13:34:55
13:34:55         :param method:
13:34:55             HTTP request method (such as GET, POST, PUT, etc.)
13:34:55
13:34:55         :param url:
13:34:55             The URL to perform the request on.
13:34:55
13:34:55         :param body:
13:34:55             Data to send in the request body, either :class:`str`, :class:`bytes`,
13:34:55             an iterable of :class:`str`/:class:`bytes`, or a file-like object.
13:34:55
13:34:55         :param headers:
13:34:55             Dictionary of custom headers to send, such as User-Agent,
13:34:55             If-None-Match, etc. If None, pool headers are used. If provided,
13:34:55             these headers completely replace any pool-specific headers.
13:34:55
13:34:55         :param retries:
13:34:55             Configure the number of retries to allow before raising a
13:34:55             :class:`~urllib3.exceptions.MaxRetryError` exception.
13:34:55
13:34:55             If ``None`` (default) will retry 3 times, see ``Retry.DEFAULT``. Pass a
13:34:55             :class:`~urllib3.util.retry.Retry` object for fine-grained control
13:34:55             over different types of retries.
13:34:55             Pass an integer number to retry connection errors that many times,
13:34:55             but no other types of errors. Pass zero to never retry.
13:34:55
13:34:55             If ``False``, then retries are disabled and any exception is raised
13:34:55             immediately. Also, instead of raising a MaxRetryError on redirects,
13:34:55             the redirect response will be returned.
13:34:55
13:34:55         :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int.
13:34:55
13:34:55         :param redirect:
13:34:55             If True, automatically handle redirects (status codes 301, 302,
13:34:55             303, 307, 308). Each redirect counts as a retry. Disabling retries
13:34:55             will disable redirect, too.
13:34:55
13:34:55         :param assert_same_host:
13:34:55             If ``True``, will make sure that the host of the pool requests is
13:34:55             consistent else will raise HostChangedError. When ``False``, you can
13:34:55             use the pool on an HTTP proxy and request foreign hosts.
13:34:55
13:34:55         :param timeout:
13:34:55             If specified, overrides the default timeout for this one
13:34:55             request. It may be a float (in seconds) or an instance of
13:34:55             :class:`urllib3.util.Timeout`.
13:34:55
13:34:55         :param pool_timeout:
13:34:55             If set and the pool is set to block=True, then this method will
13:34:55             block for ``pool_timeout`` seconds and raise EmptyPoolError if no
13:34:55             connection is available within the time period.
13:34:55
13:34:55         :param bool preload_content:
13:34:55             If True, the response's body will be preloaded into memory.
13:34:55
13:34:55         :param bool decode_content:
13:34:55             If True, will attempt to decode the body based on the
13:34:55             'content-encoding' header.
13:34:55
13:34:55         :param release_conn:
13:34:55             If False, then the urlopen call will not release the connection
13:34:55             back into the pool once a response is received (but will release if
13:34:55             you read the entire contents of the response such as when
13:34:55             `preload_content=True`). This is useful if you're not preloading
13:34:55             the response's content immediately. You will need to call
13:34:55             ``r.release_conn()`` on the response ``r`` to return the connection
13:34:55             back into the pool. If None, it takes the value of ``preload_content``
13:34:55             which defaults to ``True``.
13:34:55
13:34:55         :param bool chunked:
13:34:55             If True, urllib3 will send the body using chunked transfer
13:34:55             encoding. Otherwise, urllib3 will send the body using the standard
13:34:55             content-length form. Defaults to False.
13:34:55
13:34:55         :param int body_pos:
13:34:55             Position to seek to in file-like body in the event of a retry or
13:34:55             redirect. Typically this won't need to be set because urllib3 will
13:34:55             auto-populate the value when needed.
13:34:55         """
13:34:55         parsed_url = parse_url(url)
13:34:55         destination_scheme = parsed_url.scheme
13:34:55
13:34:55         if headers is None:
13:34:55             headers = self.headers
13:34:55
13:34:55         if not isinstance(retries, Retry):
13:34:55             retries = Retry.from_int(retries, redirect=redirect, default=self.retries)
13:34:55
13:34:55         if release_conn is None:
13:34:55             release_conn = preload_content
13:34:55
13:34:55         # Check host
13:34:55         if assert_same_host and not self.is_same_host(url):
13:34:55             raise HostChangedError(self, url, retries)
13:34:55
13:34:55         # Ensure that the URL we're connecting to is properly encoded
13:34:55         if url.startswith("/"):
13:34:55             url = to_str(_encode_target(url))
13:34:55         else:
13:34:55             url = to_str(parsed_url.url)
13:34:55
13:34:55         conn = None
13:34:55
13:34:55         # Track whether `conn` needs to be released before
13:34:55         # returning/raising/recursing. Update this variable if necessary, and
13:34:55         # leave `release_conn` constant throughout the function. That way, if
13:34:55         # the function recurses, the original value of `release_conn` will be
13:34:55         # passed down into the recursive call, and its value will be respected.
13:34:55         #
13:34:55         # See issue #651 [1] for details.
13:34:55         #
13:34:55         # [1]
13:34:55         release_this_conn = release_conn
13:34:55
13:34:55         http_tunnel_required = connection_requires_http_tunnel(
13:34:55             self.proxy, self.proxy_config, destination_scheme
13:34:55         )
13:34:55
13:34:55         # Merge the proxy headers. Only done when not using HTTP CONNECT. We
13:34:55         # have to copy the headers dict so we can safely change it without those
13:34:55         # changes being reflected in anyone else's copy.
13:34:55         if not http_tunnel_required:
13:34:55             headers = headers.copy()  # type: ignore[attr-defined]
13:34:55             headers.update(self.proxy_headers)  # type: ignore[union-attr]
13:34:55
13:34:55         # Must keep the exception bound to a separate variable or else Python 3
13:34:55         # complains about UnboundLocalError.
13:34:55         err = None
13:34:55
13:34:55         # Keep track of whether we cleanly exited the except block. This
13:34:55         # ensures we do proper cleanup in finally.
13:34:55         clean_exit = False
13:34:55
13:34:55         # Rewind body position, if needed. Record current position
13:34:55         # for future rewinds in the event of a redirect/retry.
13:34:55         body_pos = set_file_position(body, body_pos)
13:34:55
13:34:55         try:
13:34:55             # Request a connection from the queue.
13:34:55             timeout_obj = self._get_timeout(timeout)
13:34:55             conn = self._get_conn(timeout=pool_timeout)
13:34:55
13:34:55             conn.timeout = timeout_obj.connect_timeout  # type: ignore[assignment]
13:34:55
13:34:55             # Is this a closed/new connection that requires CONNECT tunnelling?
13:34:55             if self.proxy is not None and http_tunnel_required and conn.is_closed:
13:34:55                 try:
13:34:55                     self._prepare_proxy(conn)
13:34:55                 except (BaseSSLError, OSError, SocketTimeout) as e:
13:34:55                     self._raise_timeout(
13:34:55                         err=e, url=self.proxy.url, timeout_value=conn.timeout
13:34:55                     )
13:34:55                     raise
13:34:55
13:34:55             # If we're going to release the connection in ``finally:``, then
13:34:55             # the response doesn't need to know about the connection. Otherwise
13:34:55             # it will also try to release it and we'll have a double-release
13:34:55             # mess.
13:34:55             response_conn = conn if not release_conn else None
13:34:55
13:34:55             # Make the request on the HTTPConnection object
13:34:55 >           response = self._make_request(
13:34:55                 conn,
13:34:55                 method,
13:34:55                 url,
13:34:55                 timeout=timeout_obj,
13:34:55                 body=body,
13:34:55                 headers=headers,
13:34:55                 chunked=chunked,
13:34:55                 retries=retries,
13:34:55                 response_conn=response_conn,
13:34:55                 preload_content=preload_content,
13:34:55                 decode_content=decode_content,
13:34:55                 **response_kw,
13:34:55             )
13:34:55
13:34:55 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connectionpool.py:787:
13:34:55 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
13:34:55 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connectionpool.py:493: in _make_request
13:34:55     conn.request(
13:34:55 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:500: in request
13:34:55     self.endheaders()
13:34:55 /opt/pyenv/versions/3.11.10/lib/python3.11/http/client.py:1298: in endheaders
13:34:55     self._send_output(message_body, encode_chunked=encode_chunked)
13:34:55 /opt/pyenv/versions/3.11.10/lib/python3.11/http/client.py:1058: in _send_output
13:34:55     self.send(msg)
13:34:55 /opt/pyenv/versions/3.11.10/lib/python3.11/http/client.py:996: in send
13:34:55     self.connect()
13:34:55 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:331: in connect
13:34:55     self.sock = self._new_conn()
13:34:55                 ^^^^^^^^^^^^^^^^
13:34:55 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
13:34:55
13:34:55 self =
13:34:55
13:34:55     def _new_conn(self) -> socket.socket:
13:34:55         """Establish a socket connection and set nodelay settings on it.
13:34:55
13:34:55         :return: New socket connection.
13:34:55         """
13:34:55         try:
13:34:55             sock = connection.create_connection(
13:34:55                 (self._dns_host, self.port),
13:34:55                 self.timeout,
13:34:55                 source_address=self.source_address,
13:34:55                 socket_options=self.socket_options,
13:34:55             )
13:34:55         except socket.gaierror as e:
13:34:55             raise NameResolutionError(self.host, self, e) from e
13:34:55         except SocketTimeout as e:
13:34:55             raise ConnectTimeoutError(
13:34:55                 self,
13:34:55                 f"Connection to {self.host} timed out. (connect timeout={self.timeout})",
13:34:55             ) from e
13:34:55
13:34:55         except OSError as e:
13:34:55 >           raise NewConnectionError(
13:34:55                 self, f"Failed to establish a new connection: {e}"
13:34:55             ) from e
13:34:55 E           urllib3.exceptions.NewConnectionError: HTTPConnection(host='localhost', port=8191): Failed to establish a new connection: [Errno 111] Connection refused
13:34:55
13:34:55 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:219: NewConnectionError
13:34:55
13:34:55 The above exception was the direct cause of the following exception:
13:34:55
13:34:55 self =
13:34:55 request = , stream = False
13:34:55 timeout = Timeout(connect=30, read=30, total=None), verify = True, cert = None
13:34:55 proxies = OrderedDict()
13:34:55
13:34:55     def send(
13:34:55         self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None
13:34:55     ):
13:34:55         """Sends PreparedRequest object. Returns Response object.
13:34:55
13:34:55         :param request: The :class:`PreparedRequest ` being sent.
13:34:55         :param stream: (optional) Whether to stream the request content.
13:34:55         :param timeout: (optional) How long to wait for the server to send
13:34:55             data before giving up, as a float, or a :ref:`(connect timeout,
13:34:55             read timeout) ` tuple.
13:34:55         :type timeout: float or tuple or urllib3 Timeout object
13:34:55         :param verify: (optional) Either a boolean, in which case it controls whether
13:34:55             we verify the server's TLS certificate, or a string, in which case it
13:34:55             must be a path to a CA bundle to use
13:34:55         :param cert: (optional) Any user-provided SSL certificate to be trusted.
13:34:55         :param proxies: (optional) The proxies dictionary to apply to the request.
13:34:55         :rtype: requests.Response
13:34:55         """
13:34:55
13:34:55         try:
13:34:55             conn = self.get_connection_with_tls_context(
13:34:55                 request, verify, proxies=proxies, cert=cert
13:34:55             )
13:34:55         except LocationValueError as e:
13:34:55             raise InvalidURL(e, request=request)
13:34:55
13:34:55         self.cert_verify(conn, request.url, verify, cert)
13:34:55         url = self.request_url(request, proxies)
13:34:55         self.add_headers(
13:34:55             request,
13:34:55             stream=stream,
13:34:55             timeout=timeout,
13:34:55             verify=verify,
13:34:55             cert=cert,
13:34:55             proxies=proxies,
13:34:55         )
13:34:55
13:34:55         chunked = not (request.body is None or "Content-Length" in request.headers)
13:34:55
13:34:55         if isinstance(timeout, tuple):
13:34:55             try:
13:34:55                 connect, read = timeout
13:34:55                 timeout = TimeoutSauce(connect=connect, read=read)
13:34:55             except ValueError:
13:34:55                 raise ValueError(
13:34:55                     f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, "
13:34:55                     f"or a single float to set both timeouts to the same value."
13:34:55 ) 13:34:55 elif isinstance(timeout, TimeoutSauce): 13:34:55 pass 13:34:55 else: 13:34:55 timeout = TimeoutSauce(connect=timeout, read=timeout) 13:34:55 13:34:55 try: 13:34:55 > resp = conn.urlopen( 13:34:55 method=request.method, 13:34:55 url=url, 13:34:55 body=request.body, 13:34:55 headers=request.headers, 13:34:55 redirect=False, 13:34:55 assert_same_host=False, 13:34:55 preload_content=False, 13:34:55 decode_content=False, 13:34:55 retries=self.max_retries, 13:34:55 timeout=timeout, 13:34:55 chunked=chunked, 13:34:55 ) 13:34:55 13:34:55 ../.tox/tests121/lib/python3.11/site-packages/requests/adapters.py:644: 13:34:55 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 13:34:55 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connectionpool.py:841: in urlopen 13:34:55 retries = retries.increment( 13:34:55 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 13:34:55 13:34:55 self = Retry(total=0, connect=None, read=False, redirect=None, status=None) 13:34:55 method = 'GET' 13:34:55 url = '/rests/data/network-topology:network-topology/topology=topology-netconf/node=ROADMA01?content=nonconfig' 13:34:55 response = None 13:34:55 error = NewConnectionError("HTTPConnection(host='localhost', port=8191): Failed to establish a new connection: [Errno 111] Connection refused") 13:34:55 _pool = 13:34:55 _stacktrace = 13:34:55 13:34:55 def increment( 13:34:55 self, 13:34:55 method: str | None = None, 13:34:55 url: str | None = None, 13:34:55 response: BaseHTTPResponse | None = None, 13:34:55 error: Exception | None = None, 13:34:55 _pool: ConnectionPool | None = None, 13:34:55 _stacktrace: TracebackType | None = None, 13:34:55 ) -> Self: 13:34:55 """Return a new Retry object with incremented retry counters. 13:34:55 13:34:55 :param response: A response object, or None, if the server did not 13:34:55 return a response. 
13:34:55 :type response: :class:`~urllib3.response.BaseHTTPResponse` 13:34:55 :param Exception error: An error encountered during the request, or 13:34:55 None if the response was received successfully. 13:34:55 13:34:55 :return: A new ``Retry`` object. 13:34:55 """ 13:34:55 if self.total is False and error: 13:34:55 # Disabled, indicate to re-raise the error. 13:34:55 raise reraise(type(error), error, _stacktrace) 13:34:55 13:34:55 total = self.total 13:34:55 if total is not None: 13:34:55 total -= 1 13:34:55 13:34:55 connect = self.connect 13:34:55 read = self.read 13:34:55 redirect = self.redirect 13:34:55 status_count = self.status 13:34:55 other = self.other 13:34:55 cause = "unknown" 13:34:55 status = None 13:34:55 redirect_location = None 13:34:55 13:34:55 if error and self._is_connection_error(error): 13:34:55 # Connect retry? 13:34:55 if connect is False: 13:34:55 raise reraise(type(error), error, _stacktrace) 13:34:55 elif connect is not None: 13:34:55 connect -= 1 13:34:55 13:34:55 elif error and self._is_read_error(error): 13:34:55 # Read retry? 13:34:55 if read is False or method is None or not self._is_method_retryable(method): 13:34:55 raise reraise(type(error), error, _stacktrace) 13:34:55 elif read is not None: 13:34:55 read -= 1 13:34:55 13:34:55 elif error: 13:34:55 # Other retry? 13:34:55 if other is not None: 13:34:55 other -= 1 13:34:55 13:34:55 elif response and response.get_redirect_location(): 13:34:55 # Redirect retry? 
13:34:55 if redirect is not None: 13:34:55 redirect -= 1 13:34:55 cause = "too many redirects" 13:34:55 response_redirect_location = response.get_redirect_location() 13:34:55 if response_redirect_location: 13:34:55 redirect_location = response_redirect_location 13:34:55 status = response.status 13:34:55 13:34:55 else: 13:34:55 # Incrementing because of a server error like a 500 in 13:34:55 # status_forcelist and the given method is in the allowed_methods 13:34:55 cause = ResponseError.GENERIC_ERROR 13:34:55 if response and response.status: 13:34:55 if status_count is not None: 13:34:55 status_count -= 1 13:34:55 cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status) 13:34:55 status = response.status 13:34:55 13:34:55 history = self.history + ( 13:34:55 RequestHistory(method, url, error, status, redirect_location), 13:34:55 ) 13:34:55 13:34:55 new_retry = self.new( 13:34:55 total=total, 13:34:55 connect=connect, 13:34:55 read=read, 13:34:55 redirect=redirect, 13:34:55 status=status_count, 13:34:55 other=other, 13:34:55 history=history, 13:34:55 ) 13:34:55 13:34:55 if new_retry.is_exhausted(): 13:34:55 reason = error or ResponseError(cause) 13:34:55 > raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type] 13:34:55 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 13:34:55 E urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=8191): Max retries exceeded with url: /rests/data/network-topology:network-topology/topology=topology-netconf/node=ROADMA01?content=nonconfig (Caused by NewConnectionError("HTTPConnection(host='localhost', port=8191): Failed to establish a new connection: [Errno 111] Connection refused")) 13:34:55 13:34:55 ../.tox/tests121/lib/python3.11/site-packages/urllib3/util/retry.py:535: MaxRetryError 13:34:55 13:34:55 During handling of the above exception, another exception occurred: 13:34:55 13:34:55 self = 13:34:55 13:34:55 def test_20_rdm_device_disconnected(self): 13:34:55 > response = 
test_utils.check_device_connection("ROADMA01") 13:34:55 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 13:34:55 13:34:55 transportpce_tests/1.2.1/test01_portmapping.py:217: 13:34:55 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 13:34:55 transportpce_tests/common/test_utils.py:409: in check_device_connection 13:34:55 response = get_request(url[RESTCONF_VERSION].format('{}', node)) 13:34:55 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 13:34:55 transportpce_tests/common/test_utils.py:117: in get_request 13:34:55 return requests.request( 13:34:55 ../.tox/tests121/lib/python3.11/site-packages/requests/api.py:59: in request 13:34:55 return session.request(method=method, url=url, **kwargs) 13:34:55 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 13:34:55 ../.tox/tests121/lib/python3.11/site-packages/requests/sessions.py:589: in request 13:34:55 resp = self.send(prep, **send_kwargs) 13:34:55 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 13:34:55 ../.tox/tests121/lib/python3.11/site-packages/requests/sessions.py:703: in send 13:34:55 r = adapter.send(request, **kwargs) 13:34:55 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 13:34:55 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 13:34:55 13:34:55 self = 13:34:55 request = , stream = False 13:34:55 timeout = Timeout(connect=30, read=30, total=None), verify = True, cert = None 13:34:55 proxies = OrderedDict() 13:34:55 13:34:55 def send( 13:34:55 self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 13:34:55 ): 13:34:55 """Sends PreparedRequest object. Returns Response object. 13:34:55 13:34:55 :param request: The :class:`PreparedRequest ` being sent. 13:34:55 :param stream: (optional) Whether to stream the request content. 13:34:55 :param timeout: (optional) How long to wait for the server to send 13:34:55 data before giving up, as a float, or a :ref:`(connect timeout, 13:34:55 read timeout) ` tuple. 
13:34:55         :type timeout: float or tuple or urllib3 Timeout object
13:34:55         :param verify: (optional) Either a boolean, in which case it controls whether
13:34:55             we verify the server's TLS certificate, or a string, in which case it
13:34:55             must be a path to a CA bundle to use
13:34:55         :param cert: (optional) Any user-provided SSL certificate to be trusted.
13:34:55         :param proxies: (optional) The proxies dictionary to apply to the request.
13:34:55         :rtype: requests.Response
13:34:55         """
13:34:55
13:34:55         try:
13:34:55             conn = self.get_connection_with_tls_context(
13:34:55                 request, verify, proxies=proxies, cert=cert
13:34:55             )
13:34:55         except LocationValueError as e:
13:34:55             raise InvalidURL(e, request=request)
13:34:55
13:34:55         self.cert_verify(conn, request.url, verify, cert)
13:34:55         url = self.request_url(request, proxies)
13:34:55         self.add_headers(
13:34:55             request,
13:34:55             stream=stream,
13:34:55             timeout=timeout,
13:34:55             verify=verify,
13:34:55             cert=cert,
13:34:55             proxies=proxies,
13:34:55         )
13:34:55
13:34:55         chunked = not (request.body is None or "Content-Length" in request.headers)
13:34:55
13:34:55         if isinstance(timeout, tuple):
13:34:55             try:
13:34:55                 connect, read = timeout
13:34:55                 timeout = TimeoutSauce(connect=connect, read=read)
13:34:55             except ValueError:
13:34:55                 raise ValueError(
13:34:55                     f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, "
13:34:55                     f"or a single float to set both timeouts to the same value."
13:34:55                 )
13:34:55         elif isinstance(timeout, TimeoutSauce):
13:34:55             pass
13:34:55         else:
13:34:55             timeout = TimeoutSauce(connect=timeout, read=timeout)
13:34:55
13:34:55         try:
13:34:55             resp = conn.urlopen(
13:34:55                 method=request.method,
13:34:55                 url=url,
13:34:55                 body=request.body,
13:34:55                 headers=request.headers,
13:34:55                 redirect=False,
13:34:55                 assert_same_host=False,
13:34:55                 preload_content=False,
13:34:55                 decode_content=False,
13:34:55                 retries=self.max_retries,
13:34:55                 timeout=timeout,
13:34:55                 chunked=chunked,
13:34:55             )
13:34:55
13:34:55         except (ProtocolError, OSError) as err:
13:34:55             raise ConnectionError(err, request=request)
13:34:55
13:34:55         except MaxRetryError as e:
13:34:55             if isinstance(e.reason, ConnectTimeoutError):
13:34:55                 # TODO: Remove this in 3.0.0: see #2811
13:34:55                 if not isinstance(e.reason, NewConnectionError):
13:34:55                     raise ConnectTimeout(e, request=request)
13:34:55
13:34:55             if isinstance(e.reason, ResponseError):
13:34:55                 raise RetryError(e, request=request)
13:34:55
13:34:55             if isinstance(e.reason, _ProxyError):
13:34:55                 raise ProxyError(e, request=request)
13:34:55
13:34:55             if isinstance(e.reason, _SSLError):
13:34:55                 # This branch is for urllib3 v1.22 and later.
13:34:55                 raise SSLError(e, request=request)
13:34:55
13:34:55 >           raise ConnectionError(e, request=request)
13:34:55 E           requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=8191): Max retries exceeded with url: /rests/data/network-topology:network-topology/topology=topology-netconf/node=ROADMA01?content=nonconfig (Caused by NewConnectionError("HTTPConnection(host='localhost', port=8191): Failed to establish a new connection: [Errno 111] Connection refused"))
13:34:55
13:34:55 ../.tox/tests121/lib/python3.11/site-packages/requests/adapters.py:677: ConnectionError
13:34:55 ----------------------------- Captured stdout call -----------------------------
13:34:55 execution of test_20_rdm_device_disconnected
13:34:55 _________ TestTransportPCEPortmapping.test_21_rdm_device_not_connected _________
13:34:55
13:34:55 self = 
13:34:55
13:34:55     def _new_conn(self) -> socket.socket:
13:34:55         """Establish a socket connection and set nodelay settings on it.
13:34:55
13:34:55         :return: New socket connection.
13:34:55         """
13:34:55         try:
13:34:55 >           sock = connection.create_connection(
13:34:55                 (self._dns_host, self.port),
13:34:55                 self.timeout,
13:34:55                 source_address=self.source_address,
13:34:55                 socket_options=self.socket_options,
13:34:55             )
13:34:55
13:34:55 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:204:
13:34:55 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
13:34:55 ../.tox/tests121/lib/python3.11/site-packages/urllib3/util/connection.py:85: in create_connection
13:34:55     raise err
13:34:55 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
13:34:55
13:34:55 address = ('localhost', 8191), timeout = 30, source_address = None
13:34:55 socket_options = [(6, 1, 1)]
13:34:55
13:34:55     def create_connection(
13:34:55         address: tuple[str, int],
13:34:55         timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT,
13:34:55         source_address: tuple[str, int] | None = None,
13:34:55         socket_options: _TYPE_SOCKET_OPTIONS | None = None,
13:34:55     ) -> socket.socket:
13:34:55         """Connect to *address* and return the socket object.
13:34:55
13:34:55         Convenience function. Connect to *address* (a 2-tuple ``(host,
13:34:55         port)``) and return the socket object. Passing the optional
13:34:55         *timeout* parameter will set the timeout on the socket instance
13:34:55         before attempting to connect. If no *timeout* is supplied, the
13:34:55         global default timeout setting returned by :func:`socket.getdefaulttimeout`
13:34:55         is used. If *source_address* is set it must be a tuple of (host, port)
13:34:55         for the socket to bind as a source address before making the connection.
13:34:55         An host of '' or port 0 tells the OS to use the default.
13:34:55         """
13:34:55
13:34:55         host, port = address
13:34:55         if host.startswith("["):
13:34:55             host = host.strip("[]")
13:34:55         err = None
13:34:55
13:34:55         # Using the value from allowed_gai_family() in the context of getaddrinfo lets
13:34:55         # us select whether to work with IPv4 DNS records, IPv6 records, or both.
13:34:55         # The original create_connection function always returns all records.
13:34:55         family = allowed_gai_family()
13:34:55
13:34:55         try:
13:34:55             host.encode("idna")
13:34:55         except UnicodeError:
13:34:55             raise LocationParseError(f"'{host}', label empty or too long") from None
13:34:55
13:34:55         for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM):
13:34:55             af, socktype, proto, canonname, sa = res
13:34:55             sock = None
13:34:55             try:
13:34:55                 sock = socket.socket(af, socktype, proto)
13:34:55
13:34:55                 # If provided, set socket level options before connecting.
13:34:55                 _set_socket_options(sock, socket_options)
13:34:55
13:34:55                 if timeout is not _DEFAULT_TIMEOUT:
13:34:55                     sock.settimeout(timeout)
13:34:55                 if source_address:
13:34:55                     sock.bind(source_address)
13:34:55 >               sock.connect(sa)
13:34:55 E               ConnectionRefusedError: [Errno 111] Connection refused
13:34:55
13:34:55 ../.tox/tests121/lib/python3.11/site-packages/urllib3/util/connection.py:73: ConnectionRefusedError
13:34:55
13:34:55 The above exception was the direct cause of the following exception:
13:34:55
13:34:55 self = 
13:34:55 method = 'GET'
13:34:55 url = '/rests/data/transportpce-portmapping:network/nodes=ROADMA01/node-info'
13:34:55 body = None
13:34:55 headers = {'User-Agent': 'python-requests/2.32.5', 'Accept-Encoding': 'gzip, deflate', 'Accept': 'application/json', 'Connection': 'keep-alive', 'Content-Type': 'application/json', 'Authorization': 'Basic YWRtaW46YWRtaW4='}
13:34:55 retries = Retry(total=0, connect=None, read=False, redirect=None, status=None)
13:34:55 redirect = False, assert_same_host = False
13:34:55 timeout = Timeout(connect=30, read=30, total=None), pool_timeout = None
13:34:55 release_conn = False, chunked = False, body_pos = None, preload_content = False
13:34:55 decode_content = False, response_kw = {}
13:34:55 parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/rests/data/transportpce-portmapping:network/nodes=ROADMA01/node-info', query=None, fragment=None)
13:34:55 destination_scheme = None, conn = None, release_this_conn = True
13:34:55 http_tunnel_required = False, err = None, clean_exit = False
13:34:55
13:34:55     def urlopen(  # type: ignore[override]
13:34:55         self,
13:34:55         method: str,
13:34:55         url: str,
13:34:55         body: _TYPE_BODY | None = None,
13:34:55         headers: typing.Mapping[str, str] | None = None,
13:34:55         retries: Retry | bool | int | None = None,
13:34:55         redirect: bool = True,
13:34:55         assert_same_host: bool = True,
13:34:55         timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT,
13:34:55         pool_timeout: int | None = None,
13:34:55         release_conn: bool | None = None,
13:34:55         chunked: bool = False,
13:34:55         body_pos: _TYPE_BODY_POSITION | None = None,
13:34:55         preload_content: bool = True,
13:34:55         decode_content: bool = True,
13:34:55         **response_kw: typing.Any,
13:34:55     ) -> BaseHTTPResponse:
13:34:55         """
13:34:55         Get a connection from the pool and perform an HTTP request. This is the
13:34:55         lowest level call for making a request, so you'll need to specify all
13:34:55         the raw details.
13:34:55
13:34:55         .. note::
13:34:55
13:34:55            More commonly, it's appropriate to use a convenience method
13:34:55            such as :meth:`request`.
13:34:55
13:34:55         .. note::
13:34:55
13:34:55            `release_conn` will only behave as expected if
13:34:55            `preload_content=False` because we want to make
13:34:55            `preload_content=False` the default behaviour someday soon without
13:34:55            breaking backwards compatibility.
13:34:55
13:34:55         :param method:
13:34:55             HTTP request method (such as GET, POST, PUT, etc.)
13:34:55
13:34:55         :param url:
13:34:55             The URL to perform the request on.
13:34:55
13:34:55         :param body:
13:34:55             Data to send in the request body, either :class:`str`, :class:`bytes`,
13:34:55             an iterable of :class:`str`/:class:`bytes`, or a file-like object.
13:34:55
13:34:55         :param headers:
13:34:55             Dictionary of custom headers to send, such as User-Agent,
13:34:55             If-None-Match, etc. If None, pool headers are used. If provided,
13:34:55             these headers completely replace any pool-specific headers.
13:34:55
13:34:55         :param retries:
13:34:55             Configure the number of retries to allow before raising a
13:34:55             :class:`~urllib3.exceptions.MaxRetryError` exception.
13:34:55
13:34:55             If ``None`` (default) will retry 3 times, see ``Retry.DEFAULT``. Pass a
13:34:55             :class:`~urllib3.util.retry.Retry` object for fine-grained control
13:34:55             over different types of retries.
13:34:55             Pass an integer number to retry connection errors that many times,
13:34:55             but no other types of errors. Pass zero to never retry.
13:34:55
13:34:55             If ``False``, then retries are disabled and any exception is raised
13:34:55             immediately. Also, instead of raising a MaxRetryError on redirects,
13:34:55             the redirect response will be returned.
13:34:55
13:34:55         :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int.
13:34:55
13:34:55         :param redirect:
13:34:55             If True, automatically handle redirects (status codes 301, 302,
13:34:55             303, 307, 308). Each redirect counts as a retry. Disabling retries
13:34:55             will disable redirect, too.
13:34:55
13:34:55         :param assert_same_host:
13:34:55             If ``True``, will make sure that the host of the pool requests is
13:34:55             consistent else will raise HostChangedError. When ``False``, you can
13:34:55             use the pool on an HTTP proxy and request foreign hosts.
13:34:55
13:34:55         :param timeout:
13:34:55             If specified, overrides the default timeout for this one
13:34:55             request. It may be a float (in seconds) or an instance of
13:34:55             :class:`urllib3.util.Timeout`.
13:34:55
13:34:55         :param pool_timeout:
13:34:55             If set and the pool is set to block=True, then this method will
13:34:55             block for ``pool_timeout`` seconds and raise EmptyPoolError if no
13:34:55             connection is available within the time period.
13:34:55
13:34:55         :param bool preload_content:
13:34:55             If True, the response's body will be preloaded into memory.
13:34:55
13:34:55         :param bool decode_content:
13:34:55             If True, will attempt to decode the body based on the
13:34:55             'content-encoding' header.
13:34:55
13:34:55         :param release_conn:
13:34:55             If False, then the urlopen call will not release the connection
13:34:55             back into the pool once a response is received (but will release if
13:34:55             you read the entire contents of the response such as when
13:34:55             `preload_content=True`). This is useful if you're not preloading
13:34:55             the response's content immediately. You will need to call
13:34:55             ``r.release_conn()`` on the response ``r`` to return the connection
13:34:55             back into the pool. If None, it takes the value of ``preload_content``
13:34:55             which defaults to ``True``.
13:34:55
13:34:55         :param bool chunked:
13:34:55             If True, urllib3 will send the body using chunked transfer
13:34:55             encoding. Otherwise, urllib3 will send the body using the standard
13:34:55             content-length form. Defaults to False.
13:34:55
13:34:55         :param int body_pos:
13:34:55             Position to seek to in file-like body in the event of a retry or
13:34:55             redirect. Typically this won't need to be set because urllib3 will
13:34:55             auto-populate the value when needed.
13:34:55         """
13:34:55         parsed_url = parse_url(url)
13:34:55         destination_scheme = parsed_url.scheme
13:34:55
13:34:55         if headers is None:
13:34:55             headers = self.headers
13:34:55
13:34:55         if not isinstance(retries, Retry):
13:34:55             retries = Retry.from_int(retries, redirect=redirect, default=self.retries)
13:34:55
13:34:55         if release_conn is None:
13:34:55             release_conn = preload_content
13:34:55
13:34:55         # Check host
13:34:55         if assert_same_host and not self.is_same_host(url):
13:34:55             raise HostChangedError(self, url, retries)
13:34:55
13:34:55         # Ensure that the URL we're connecting to is properly encoded
13:34:55         if url.startswith("/"):
13:34:55             url = to_str(_encode_target(url))
13:34:55         else:
13:34:55             url = to_str(parsed_url.url)
13:34:55
13:34:55         conn = None
13:34:55
13:34:55         # Track whether `conn` needs to be released before
13:34:55         # returning/raising/recursing. Update this variable if necessary, and
13:34:55         # leave `release_conn` constant throughout the function. That way, if
13:34:55         # the function recurses, the original value of `release_conn` will be
13:34:55         # passed down into the recursive call, and its value will be respected.
13:34:55         #
13:34:55         # See issue #651 [1] for details.
13:34:55         #
13:34:55         # [1] 
13:34:55         release_this_conn = release_conn
13:34:55
13:34:55         http_tunnel_required = connection_requires_http_tunnel(
13:34:55             self.proxy, self.proxy_config, destination_scheme
13:34:55         )
13:34:55
13:34:55         # Merge the proxy headers. Only done when not using HTTP CONNECT. We
13:34:55         # have to copy the headers dict so we can safely change it without those
13:34:55         # changes being reflected in anyone else's copy.
13:34:55         if not http_tunnel_required:
13:34:55             headers = headers.copy()  # type: ignore[attr-defined]
13:34:55             headers.update(self.proxy_headers)  # type: ignore[union-attr]
13:34:55
13:34:55         # Must keep the exception bound to a separate variable or else Python 3
13:34:55         # complains about UnboundLocalError.
13:34:55         err = None
13:34:55
13:34:55         # Keep track of whether we cleanly exited the except block. This
13:34:55         # ensures we do proper cleanup in finally.
13:34:55         clean_exit = False
13:34:55
13:34:55         # Rewind body position, if needed. Record current position
13:34:55         # for future rewinds in the event of a redirect/retry.
13:34:55         body_pos = set_file_position(body, body_pos)
13:34:55
13:34:55         try:
13:34:55             # Request a connection from the queue.
13:34:55             timeout_obj = self._get_timeout(timeout)
13:34:55             conn = self._get_conn(timeout=pool_timeout)
13:34:55
13:34:55             conn.timeout = timeout_obj.connect_timeout  # type: ignore[assignment]
13:34:55
13:34:55             # Is this a closed/new connection that requires CONNECT tunnelling?
13:34:55             if self.proxy is not None and http_tunnel_required and conn.is_closed:
13:34:55                 try:
13:34:55                     self._prepare_proxy(conn)
13:34:55                 except (BaseSSLError, OSError, SocketTimeout) as e:
13:34:55                     self._raise_timeout(
13:34:55                         err=e, url=self.proxy.url, timeout_value=conn.timeout
13:34:55                     )
13:34:55                     raise
13:34:55
13:34:55             # If we're going to release the connection in ``finally:``, then
13:34:55             # the response doesn't need to know about the connection. Otherwise
13:34:55             # it will also try to release it and we'll have a double-release
13:34:55             # mess.
13:34:55             response_conn = conn if not release_conn else None
13:34:55
13:34:55             # Make the request on the HTTPConnection object
13:34:55 >           response = self._make_request(
13:34:55                 conn,
13:34:55                 method,
13:34:55                 url,
13:34:55                 timeout=timeout_obj,
13:34:55                 body=body,
13:34:55                 headers=headers,
13:34:55                 chunked=chunked,
13:34:55                 retries=retries,
13:34:55                 response_conn=response_conn,
13:34:55                 preload_content=preload_content,
13:34:55                 decode_content=decode_content,
13:34:55                 **response_kw,
13:34:55             )
13:34:55
13:34:55 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connectionpool.py:787:
13:34:55 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
13:34:55 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connectionpool.py:493: in _make_request
13:34:55     conn.request(
13:34:55 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:500: in request
13:34:55     self.endheaders()
13:34:55 /opt/pyenv/versions/3.11.10/lib/python3.11/http/client.py:1298: in endheaders
13:34:55     self._send_output(message_body, encode_chunked=encode_chunked)
13:34:55 /opt/pyenv/versions/3.11.10/lib/python3.11/http/client.py:1058: in _send_output
13:34:55     self.send(msg)
13:34:55 /opt/pyenv/versions/3.11.10/lib/python3.11/http/client.py:996: in send
13:34:55     self.connect()
13:34:55 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:331: in connect
13:34:55     self.sock = self._new_conn()
13:34:55                 ^^^^^^^^^^^^^^^^
13:34:55 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
13:34:55
13:34:55 self = 
13:34:55
13:34:55     def _new_conn(self) -> socket.socket:
13:34:55         """Establish a socket connection and set nodelay settings on it.
13:34:55
13:34:55         :return: New socket connection.
13:34:55         """
13:34:55         try:
13:34:55             sock = connection.create_connection(
13:34:55                 (self._dns_host, self.port),
13:34:55                 self.timeout,
13:34:55                 source_address=self.source_address,
13:34:55                 socket_options=self.socket_options,
13:34:55             )
13:34:55         except socket.gaierror as e:
13:34:55             raise NameResolutionError(self.host, self, e) from e
13:34:55         except SocketTimeout as e:
13:34:55             raise ConnectTimeoutError(
13:34:55                 self,
13:34:55                 f"Connection to {self.host} timed out. (connect timeout={self.timeout})",
13:34:55             ) from e
13:34:55
13:34:55         except OSError as e:
13:34:55 >           raise NewConnectionError(
13:34:55                 self, f"Failed to establish a new connection: {e}"
13:34:55             ) from e
13:34:55 E           urllib3.exceptions.NewConnectionError: HTTPConnection(host='localhost', port=8191): Failed to establish a new connection: [Errno 111] Connection refused
13:34:55
13:34:55 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:219: NewConnectionError
13:34:55
13:34:55 The above exception was the direct cause of the following exception:
13:34:55
13:34:55 self = 
13:34:55 request = , stream = False
13:34:55 timeout = Timeout(connect=30, read=30, total=None), verify = True, cert = None
13:34:55 proxies = OrderedDict()
13:34:55
13:34:55     def send(
13:34:55         self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None
13:34:55     ):
13:34:55         """Sends PreparedRequest object. Returns Response object.
13:34:55
13:34:55         :param request: The :class:`PreparedRequest ` being sent.
13:34:55         :param stream: (optional) Whether to stream the request content.
13:34:55         :param timeout: (optional) How long to wait for the server to send
13:34:55             data before giving up, as a float, or a :ref:`(connect timeout,
13:34:55             read timeout) ` tuple.
13:34:55         :type timeout: float or tuple or urllib3 Timeout object
13:34:55         :param verify: (optional) Either a boolean, in which case it controls whether
13:34:55             we verify the server's TLS certificate, or a string, in which case it
13:34:55             must be a path to a CA bundle to use
13:34:55         :param cert: (optional) Any user-provided SSL certificate to be trusted.
13:34:55         :param proxies: (optional) The proxies dictionary to apply to the request.
13:34:55         :rtype: requests.Response
13:34:55         """
13:34:55
13:34:55         try:
13:34:55             conn = self.get_connection_with_tls_context(
13:34:55                 request, verify, proxies=proxies, cert=cert
13:34:55             )
13:34:55         except LocationValueError as e:
13:34:55             raise InvalidURL(e, request=request)
13:34:55
13:34:55         self.cert_verify(conn, request.url, verify, cert)
13:34:55         url = self.request_url(request, proxies)
13:34:55         self.add_headers(
13:34:55             request,
13:34:55             stream=stream,
13:34:55             timeout=timeout,
13:34:55             verify=verify,
13:34:55             cert=cert,
13:34:55             proxies=proxies,
13:34:55         )
13:34:55
13:34:55         chunked = not (request.body is None or "Content-Length" in request.headers)
13:34:55
13:34:55         if isinstance(timeout, tuple):
13:34:55             try:
13:34:55                 connect, read = timeout
13:34:55                 timeout = TimeoutSauce(connect=connect, read=read)
13:34:55             except ValueError:
13:34:55                 raise ValueError(
13:34:55                     f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, "
13:34:55                     f"or a single float to set both timeouts to the same value."
13:34:55                 )
13:34:55         elif isinstance(timeout, TimeoutSauce):
13:34:55             pass
13:34:55         else:
13:34:55             timeout = TimeoutSauce(connect=timeout, read=timeout)
13:34:55
13:34:55         try:
13:34:55 >           resp = conn.urlopen(
13:34:55                 method=request.method,
13:34:55                 url=url,
13:34:55                 body=request.body,
13:34:55                 headers=request.headers,
13:34:55                 redirect=False,
13:34:55                 assert_same_host=False,
13:34:55                 preload_content=False,
13:34:55                 decode_content=False,
13:34:55                 retries=self.max_retries,
13:34:55                 timeout=timeout,
13:34:55                 chunked=chunked,
13:34:55             )
13:34:55
13:34:55 ../.tox/tests121/lib/python3.11/site-packages/requests/adapters.py:644:
13:34:55 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
13:34:55 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connectionpool.py:841: in urlopen
13:34:55     retries = retries.increment(
13:34:55 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
13:34:55
13:34:55 self = Retry(total=0, connect=None, read=False, redirect=None, status=None)
13:34:55 method = 'GET'
13:34:55 url = '/rests/data/transportpce-portmapping:network/nodes=ROADMA01/node-info'
13:34:55 response = None
13:34:55 error = NewConnectionError("HTTPConnection(host='localhost', port=8191): Failed to establish a new connection: [Errno 111] Connection refused")
13:34:55 _pool = 
13:34:55 _stacktrace = 
13:34:55
13:34:55     def increment(
13:34:55         self,
13:34:55         method: str | None = None,
13:34:55         url: str | None = None,
13:34:55         response: BaseHTTPResponse | None = None,
13:34:55         error: Exception | None = None,
13:34:55         _pool: ConnectionPool | None = None,
13:34:55         _stacktrace: TracebackType | None = None,
13:34:55     ) -> Self:
13:34:55         """Return a new Retry object with incremented retry counters.
13:34:55
13:34:55         :param response: A response object, or None, if the server did not
13:34:55             return a response.
13:34:55         :type response: :class:`~urllib3.response.BaseHTTPResponse`
13:34:55         :param Exception error: An error encountered during the request, or
13:34:55             None if the response was received successfully.
13:34:55
13:34:55         :return: A new ``Retry`` object.
13:34:55         """
13:34:55         if self.total is False and error:
13:34:55             # Disabled, indicate to re-raise the error.
13:34:55             raise reraise(type(error), error, _stacktrace)
13:34:55
13:34:55         total = self.total
13:34:55         if total is not None:
13:34:55             total -= 1
13:34:55
13:34:55         connect = self.connect
13:34:55         read = self.read
13:34:55         redirect = self.redirect
13:34:55         status_count = self.status
13:34:55         other = self.other
13:34:55         cause = "unknown"
13:34:55         status = None
13:34:55         redirect_location = None
13:34:55
13:34:55         if error and self._is_connection_error(error):
13:34:55             # Connect retry?
13:34:55             if connect is False:
13:34:55                 raise reraise(type(error), error, _stacktrace)
13:34:55             elif connect is not None:
13:34:55                 connect -= 1
13:34:55
13:34:55         elif error and self._is_read_error(error):
13:34:55             # Read retry?
13:34:55             if read is False or method is None or not self._is_method_retryable(method):
13:34:55                 raise reraise(type(error), error, _stacktrace)
13:34:55             elif read is not None:
13:34:55                 read -= 1
13:34:55
13:34:55         elif error:
13:34:55             # Other retry?
13:34:55             if other is not None:
13:34:55                 other -= 1
13:34:55
13:34:55         elif response and response.get_redirect_location():
13:34:55             # Redirect retry?
13:34:55         if redirect is not None:
13:34:55             redirect -= 1
13:34:55             cause = "too many redirects"
13:34:55             response_redirect_location = response.get_redirect_location()
13:34:55             if response_redirect_location:
13:34:55                 redirect_location = response_redirect_location
13:34:55             status = response.status
13:34:55
13:34:55         else:
13:34:55             # Incrementing because of a server error like a 500 in
13:34:55             # status_forcelist and the given method is in the allowed_methods
13:34:55             cause = ResponseError.GENERIC_ERROR
13:34:55             if response and response.status:
13:34:55                 if status_count is not None:
13:34:55                     status_count -= 1
13:34:55                 cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status)
13:34:55                 status = response.status
13:34:55
13:34:55         history = self.history + (
13:34:55             RequestHistory(method, url, error, status, redirect_location),
13:34:55         )
13:34:55
13:34:55         new_retry = self.new(
13:34:55             total=total,
13:34:55             connect=connect,
13:34:55             read=read,
13:34:55             redirect=redirect,
13:34:55             status=status_count,
13:34:55             other=other,
13:34:55             history=history,
13:34:55         )
13:34:55
13:34:55         if new_retry.is_exhausted():
13:34:55             reason = error or ResponseError(cause)
13:34:55 >           raise MaxRetryError(_pool, url, reason) from reason  # type: ignore[arg-type]
13:34:55             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
13:34:55 E           urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=8191): Max retries exceeded with url: /rests/data/transportpce-portmapping:network/nodes=ROADMA01/node-info (Caused by NewConnectionError("HTTPConnection(host='localhost', port=8191): Failed to establish a new connection: [Errno 111] Connection refused"))
13:34:55
13:34:55 ../.tox/tests121/lib/python3.11/site-packages/urllib3/util/retry.py:535: MaxRetryError
13:34:55
13:34:55 During handling of the above exception, another exception occurred:
13:34:55
13:34:55 self = 
13:34:55
13:34:55     def test_21_rdm_device_not_connected(self):
13:34:55 >       response = test_utils.get_portmapping_node_attr("ROADMA01", "node-info", None)
13:34:55                    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
13:34:55
13:34:55 transportpce_tests/1.2.1/test01_portmapping.py:225:
13:34:55 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
13:34:55 transportpce_tests/common/test_utils.py:519: in get_portmapping_node_attr
13:34:55     response = get_request(target_url)
13:34:55                ^^^^^^^^^^^^^^^^^^^^^^^
13:34:55 transportpce_tests/common/test_utils.py:117: in get_request
13:34:55     return requests.request(
13:34:55 ../.tox/tests121/lib/python3.11/site-packages/requests/api.py:59: in request
13:34:55     return session.request(method=method, url=url, **kwargs)
13:34:55            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
13:34:55 ../.tox/tests121/lib/python3.11/site-packages/requests/sessions.py:589: in request
13:34:55     resp = self.send(prep, **send_kwargs)
13:34:55            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
13:34:55 ../.tox/tests121/lib/python3.11/site-packages/requests/sessions.py:703: in send
13:34:55     r = adapter.send(request, **kwargs)
13:34:55         ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
13:34:55 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
13:34:55
13:34:55 self = 
13:34:55 request = , stream = False
13:34:55 timeout = Timeout(connect=30, read=30, total=None), verify = True, cert = None
13:34:55 proxies = OrderedDict()
13:34:55
13:34:55     def send(
13:34:55         self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None
13:34:55     ):
13:34:55         """Sends PreparedRequest object. Returns Response object.
13:34:55
13:34:55         :param request: The :class:`PreparedRequest ` being sent.
13:34:55         :param stream: (optional) Whether to stream the request content.
13:34:55         :param timeout: (optional) How long to wait for the server to send
13:34:55             data before giving up, as a float, or a :ref:`(connect timeout,
13:34:55             read timeout) ` tuple.
13:34:55         :type timeout: float or tuple or urllib3 Timeout object
13:34:55         :param verify: (optional) Either a boolean, in which case it controls whether
13:34:55             we verify the server's TLS certificate, or a string, in which case it
13:34:55             must be a path to a CA bundle to use
13:34:55         :param cert: (optional) Any user-provided SSL certificate to be trusted.
13:34:55         :param proxies: (optional) The proxies dictionary to apply to the request.
13:34:55         :rtype: requests.Response
13:34:55         """
13:34:55
13:34:55         try:
13:34:55             conn = self.get_connection_with_tls_context(
13:34:55                 request, verify, proxies=proxies, cert=cert
13:34:55             )
13:34:55         except LocationValueError as e:
13:34:55             raise InvalidURL(e, request=request)
13:34:55
13:34:55         self.cert_verify(conn, request.url, verify, cert)
13:34:55         url = self.request_url(request, proxies)
13:34:55         self.add_headers(
13:34:55             request,
13:34:55             stream=stream,
13:34:55             timeout=timeout,
13:34:55             verify=verify,
13:34:55             cert=cert,
13:34:55             proxies=proxies,
13:34:55         )
13:34:55
13:34:55         chunked = not (request.body is None or "Content-Length" in request.headers)
13:34:55
13:34:55         if isinstance(timeout, tuple):
13:34:55             try:
13:34:55                 connect, read = timeout
13:34:55                 timeout = TimeoutSauce(connect=connect, read=read)
13:34:55             except ValueError:
13:34:55                 raise ValueError(
13:34:55                     f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, "
13:34:55                     f"or a single float to set both timeouts to the same value."
13:34:55                 )
13:34:55         elif isinstance(timeout, TimeoutSauce):
13:34:55             pass
13:34:55         else:
13:34:55             timeout = TimeoutSauce(connect=timeout, read=timeout)
13:34:55
13:34:55         try:
13:34:55             resp = conn.urlopen(
13:34:55                 method=request.method,
13:34:55                 url=url,
13:34:55                 body=request.body,
13:34:55                 headers=request.headers,
13:34:55                 redirect=False,
13:34:55                 assert_same_host=False,
13:34:55                 preload_content=False,
13:34:55                 decode_content=False,
13:34:55                 retries=self.max_retries,
13:34:55                 timeout=timeout,
13:34:55                 chunked=chunked,
13:34:55             )
13:34:55
13:34:55         except (ProtocolError, OSError) as err:
13:34:55             raise ConnectionError(err, request=request)
13:34:55
13:34:55         except MaxRetryError as e:
13:34:55             if isinstance(e.reason, ConnectTimeoutError):
13:34:55                 # TODO: Remove this in 3.0.0: see #2811
13:34:55                 if not isinstance(e.reason, NewConnectionError):
13:34:55                     raise ConnectTimeout(e, request=request)
13:34:55
13:34:55             if isinstance(e.reason, ResponseError):
13:34:55                 raise RetryError(e, request=request)
13:34:55
13:34:55             if isinstance(e.reason, _ProxyError):
13:34:55                 raise ProxyError(e, request=request)
13:34:55
13:34:55             if isinstance(e.reason, _SSLError):
13:34:55                 # This branch is for urllib3 v1.22 and later.
13:34:55                 raise SSLError(e, request=request)
13:34:55 
13:34:55 >           raise ConnectionError(e, request=request)
13:34:55 E           requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=8191): Max retries exceeded with url: /rests/data/transportpce-portmapping:network/nodes=ROADMA01/node-info (Caused by NewConnectionError("HTTPConnection(host='localhost', port=8191): Failed to establish a new connection: [Errno 111] Connection refused"))
13:34:55 
13:34:55 ../.tox/tests121/lib/python3.11/site-packages/requests/adapters.py:677: ConnectionError
13:34:55 ----------------------------- Captured stdout call -----------------------------
13:34:55 execution of test_21_rdm_device_not_connected
13:34:55 --------------------------- Captured stdout teardown ---------------------------
13:34:55 all processes killed
13:34:55 ODL log file stored
13:34:55 =========================== short test summary info ============================
13:34:55 FAILED transportpce_tests/1.2.1/test01_portmapping.py::TestTransportPCEPortmapping::test_02_rdm_device_connected
13:34:55 FAILED transportpce_tests/1.2.1/test01_portmapping.py::TestTransportPCEPortmapping::test_03_rdm_portmapping_info
13:34:55 FAILED transportpce_tests/1.2.1/test01_portmapping.py::TestTransportPCEPortmapping::test_04_rdm_portmapping_DEG1_TTP_TXRX
13:34:55 FAILED transportpce_tests/1.2.1/test01_portmapping.py::TestTransportPCEPortmapping::test_05_rdm_portmapping_SRG1_PP7_TXRX
13:34:55 FAILED transportpce_tests/1.2.1/test01_portmapping.py::TestTransportPCEPortmapping::test_06_rdm_portmapping_SRG3_PP1_TXRX
13:34:55 FAILED transportpce_tests/1.2.1/test01_portmapping.py::TestTransportPCEPortmapping::test_07_xpdr_device_connection
13:34:55 FAILED transportpce_tests/1.2.1/test01_portmapping.py::TestTransportPCEPortmapping::test_08_xpdr_device_connected
13:34:55 FAILED transportpce_tests/1.2.1/test01_portmapping.py::TestTransportPCEPortmapping::test_09_xpdr_portmapping_info
13:34:55 FAILED transportpce_tests/1.2.1/test01_portmapping.py::TestTransportPCEPortmapping::test_10_xpdr_portmapping_NETWORK1
13:34:55 FAILED transportpce_tests/1.2.1/test01_portmapping.py::TestTransportPCEPortmapping::test_11_xpdr_portmapping_NETWORK2
13:34:55 FAILED transportpce_tests/1.2.1/test01_portmapping.py::TestTransportPCEPortmapping::test_12_xpdr_portmapping_CLIENT1
13:34:55 FAILED transportpce_tests/1.2.1/test01_portmapping.py::TestTransportPCEPortmapping::test_13_xpdr_portmapping_CLIENT2
13:34:55 FAILED transportpce_tests/1.2.1/test01_portmapping.py::TestTransportPCEPortmapping::test_14_xpdr_portmapping_CLIENT3
13:34:55 FAILED transportpce_tests/1.2.1/test01_portmapping.py::TestTransportPCEPortmapping::test_15_xpdr_portmapping_CLIENT4
13:34:55 FAILED transportpce_tests/1.2.1/test01_portmapping.py::TestTransportPCEPortmapping::test_16_xpdr_device_disconnection
13:34:55 FAILED transportpce_tests/1.2.1/test01_portmapping.py::TestTransportPCEPortmapping::test_17_xpdr_device_disconnected
13:34:55 FAILED transportpce_tests/1.2.1/test01_portmapping.py::TestTransportPCEPortmapping::test_18_xpdr_device_not_connected
13:34:55 FAILED transportpce_tests/1.2.1/test01_portmapping.py::TestTransportPCEPortmapping::test_19_rdm_device_disconnection
13:34:55 FAILED transportpce_tests/1.2.1/test01_portmapping.py::TestTransportPCEPortmapping::test_20_rdm_device_disconnected
13:34:55 FAILED transportpce_tests/1.2.1/test01_portmapping.py::TestTransportPCEPortmapping::test_21_rdm_device_not_connected
13:34:55 20 failed, 1 passed in 274.19s (0:04:34)
13:34:55 tests71: OK ✔ in 7 minutes 48.56 seconds
13:34:55 tests200: OK ✔ in 3 minutes 45.01 seconds
13:34:55 tests121: exit 1 (274.52 seconds) /w/workspace/transportpce-tox-verify-transportpce-master/tests> ./launch_tests.sh 1.2.1 pid=9435
13:37:33 ... [100%]
13:38:44 51 passed in 502.92s (0:08:22)
13:38:44 pytest -q transportpce_tests/tapi/test02_full_topology.py
13:39:36 ....................................
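All 20 tests121 failures above share one root cause visible in the traceback: the REST call to localhost:8191 is refused ([Errno 111]), i.e. the controller under test is not listening when the 1.2.1 suite runs its requests. A minimal sketch of how a refused TCP connection surfaces as an exception (the endpoint path is copied from the traceback; the stdlib http.client is used here instead of requests so the snippet stands alone, and the free-port probe is a hypothetical stand-in for port 8191):

```python
import http.client
import socket

# Find a local port with no listener by binding an ephemeral port and closing it
# (a stand-in for port 8191 with no controller behind it).
probe = socket.socket()
probe.bind(("127.0.0.1", 0))
free_port = probe.getsockname()[1]
probe.close()

err_msg = None
conn = http.client.HTTPConnection("127.0.0.1", free_port, timeout=5)
try:
    conn.request("GET", "/rests/data/transportpce-portmapping:network")
    status = conn.getresponse().status
except OSError as exc:
    # ConnectionRefusedError ([Errno 111] Connection refused) -- the same
    # low-level failure that requests wraps into ConnectionError above.
    err_msg = str(exc)
print(err_msg)
```

In the suite itself, requests catches this OSError via urllib3's NewConnectionError and re-raises it as requests.exceptions.ConnectionError, which is why every test in the file fails identically once the controller is down.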
[100%] 13:44:13 36 passed in 329.01s (0:05:29) 13:44:13 pytest -q transportpce_tests/tapi/test03_tapi_device_change_notifications.py 13:45:00 ....................................................................... [100%] 13:49:31 71 passed in 317.38s (0:05:17) 13:49:31 pytest -q transportpce_tests/tapi/test04_topo_extension.py 13:50:22 ................... [100%] 13:51:53 19 passed in 141.87s (0:02:21) 13:51:53 pytest -q transportpce_tests/tapi/test05_pce_tapi.py 13:53:55 ...................... [100%] 13:59:31 22 passed in 457.26s (0:07:37) 13:59:31 tests121: FAIL ✖ in 4 minutes 42.72 seconds 13:59:31 tests_tapi: OK ✔ in 29 minutes 18.04 seconds 13:59:31 tests221: install_deps> python -I -m pip install 'setuptools>=7.0' -r /w/workspace/transportpce-tox-verify-transportpce-master/tests/requirements.txt -r /w/workspace/transportpce-tox-verify-transportpce-master/tests/test-requirements.txt 13:59:38 tests221: freeze> python -m pip freeze --all 13:59:38 tests221: bcrypt==5.0.0,certifi==2026.2.25,cffi==2.0.0,charset-normalizer==3.4.4,cryptography==46.0.5,dict2xml==1.7.8,idna==3.11,iniconfig==2.3.0,invoke==2.2.1,lxml==6.0.2,netconf-client==3.5.0,packaging==26.0,paramiko==4.0.0,pip==26.0.1,pluggy==1.6.0,psutil==7.2.2,pycparser==3.0,Pygments==2.19.2,PyNaCl==1.6.2,pytest==9.0.2,requests==2.32.5,setuptools==82.0.0,urllib3==2.6.3 13:59:38 tests221: commands[0] /w/workspace/transportpce-tox-verify-transportpce-master/tests> ./launch_tests.sh 2.2.1 13:59:38 using environment variables from ./karaf221.env 13:59:38 pytest -q transportpce_tests/2.2.1/test01_portmapping.py 14:00:14 ................................... [100%] 14:00:54 35 passed in 75.81s (0:01:15) 14:00:55 pytest -q transportpce_tests/2.2.1/test02_topo_portmapping.py 14:01:25 ...... [100%] 14:01:39 6 passed in 44.22s 14:01:39 pytest -q transportpce_tests/2.2.1/test03_topology.py 14:02:23 ............................................ 
[100%] 14:04:01 44 passed in 142.08s (0:02:22) 14:04:01 pytest -q transportpce_tests/2.2.1/test04_otn_topology.py 14:04:37 ............ [100%] 14:05:01 12 passed in 59.46s 14:05:01 pytest -q transportpce_tests/2.2.1/test05_flex_grid.py 14:05:27 ................ [100%] 14:06:56 16 passed in 114.33s (0:01:54) 14:06:56 pytest -q transportpce_tests/2.2.1/test06_renderer_service_path_nominal.py 14:07:25 ............................... [100%] 14:07:31 31 passed in 35.38s 14:07:31 pytest -q transportpce_tests/2.2.1/test07_otn_renderer.py 14:08:07 .......................... [100%] 14:09:03 26 passed in 91.33s (0:01:31) 14:09:03 pytest -q transportpce_tests/2.2.1/test08_otn_sh_renderer.py 14:09:41 ...................... [100%] 14:10:44 22 passed in 101.10s (0:01:41) 14:10:45 pytest -q transportpce_tests/2.2.1/test09_olm.py 14:11:27 ........................................ [100%] 14:13:49 40 passed in 184.05s (0:03:04) 14:13:49 pytest -q transportpce_tests/2.2.1/test11_otn_end2end.py 14:14:33 ........................................................................ [ 74%] 14:20:10 ......................... [100%] 14:22:02 97 passed in 492.68s (0:08:12) 14:22:02 pytest -q transportpce_tests/2.2.1/test12_end2end.py 14:22:43 ...................................................... [100%] 14:32:30 54 passed in 628.21s (0:10:28) 14:32:30 pytest -q transportpce_tests/2.2.1/test14_otn_switch_end2end.py 14:33:26 ........................................................................ [ 71%] 14:38:35 ............................. [100%] 14:43:44 101 passed in 673.20s (0:11:13) 14:43:44 pytest -q transportpce_tests/2.2.1/test15_otn_end2end_with_intermediate_switch.py 14:44:40 ........................................................................ [ 67%] 14:50:28 ................................... [100%] 14:53:48 107 passed in 604.07s (0:10:04) 14:53:48 pytest -q transportpce_tests/2.2.1/test16_freq_end2end.py 14:54:33 ............................................. 
[100%] 14:57:11 45 passed in 202.57s (0:03:22) 14:57:11 tests221: OK ✔ in 57 minutes 40.54 seconds 14:57:11 tests_hybrid: install_deps> python -I -m pip install 'setuptools>=7.0' -r /w/workspace/transportpce-tox-verify-transportpce-master/tests/requirements.txt -r /w/workspace/transportpce-tox-verify-transportpce-master/tests/test-requirements.txt 14:57:18 tests_hybrid: freeze> python -m pip freeze --all 14:57:19 tests_hybrid: bcrypt==5.0.0,certifi==2026.2.25,cffi==2.0.0,charset-normalizer==3.4.4,cryptography==46.0.5,dict2xml==1.7.8,idna==3.11,iniconfig==2.3.0,invoke==2.2.1,lxml==6.0.2,netconf-client==3.5.0,packaging==26.0,paramiko==4.0.0,pip==26.0.1,pluggy==1.6.0,psutil==7.2.2,pycparser==3.0,Pygments==2.19.2,PyNaCl==1.6.2,pytest==9.0.2,requests==2.32.5,setuptools==82.0.0,urllib3==2.6.3 14:57:19 tests_hybrid: commands[0] /w/workspace/transportpce-tox-verify-transportpce-master/tests> ./launch_tests.sh hybrid 14:57:19 using environment variables from ./karaf221.env 14:57:19 pytest -q transportpce_tests/hybrid/test01_device_change_notifications.py 14:57:59 ................................................... [100%] 14:59:46 51 passed in 146.70s (0:02:26) 14:59:46 pytest -q transportpce_tests/hybrid/test02_B100G_end2end.py 15:00:28 ........................................................................ [ 66%] 15:04:49 ..................................... [100%] 15:09:55 109 passed in 609.15s (0:10:09) 15:09:55 pytest -q transportpce_tests/hybrid/test03_autonomous_reroute.py 15:10:43 ..................................................... 
[100%]
15:14:15 53 passed in 260.06s (0:04:20)
15:14:15 buildcontroller: OK (103.18=setup[7.50]+cmd[95.68] seconds)
15:14:15 sims: OK (18.05=setup[7.48]+cmd[10.57] seconds)
15:14:15 build_karaf_tests121: OK (63.73=setup[7.93]+cmd[55.80] seconds)
15:14:15 testsPCE: OK (304.72=setup[57.09]+cmd[247.63] seconds)
15:14:15 tests121: FAIL code 1 (282.72=setup[8.21]+cmd[274.52] seconds)
15:14:15 build_karaf_tests221: OK (62.69=setup[7.98]+cmd[54.71] seconds)
15:14:15 tests_tapi: OK (1758.04=setup[8.11]+cmd[1749.93] seconds)
15:14:15 tests221: OK (3460.54=setup[7.82]+cmd[3452.72] seconds)
15:14:15 build_karaf_tests71: OK (63.72=setup[7.96]+cmd[55.76] seconds)
15:14:15 tests71: OK (468.56=setup[7.20]+cmd[461.36] seconds)
15:14:15 build_karaf_tests200: OK (62.71=setup[7.99]+cmd[54.71] seconds)
15:14:15 tests200: OK (225.01=setup[8.12]+cmd[216.89] seconds)
15:14:15 tests_hybrid: OK (1024.28=setup[7.48]+cmd[1016.80] seconds)
15:14:15 buildlighty: OK (40.06=setup[7.51]+cmd[32.55] seconds)
15:14:15 docs: OK (30.97=setup[27.82]+cmd[3.15] seconds)
15:14:15 docs-linkcheck: OK (32.72=setup[27.26]+cmd[5.47] seconds)
15:14:15 checkbashisms: OK (2.99=setup[1.76]+cmd[0.00,0.04,1.19] seconds)
15:14:15 pre-commit: OK (49.99=setup[2.64]+cmd[0.00,0.00,39.72,7.62] seconds)
15:14:15 pylint: OK (30.73=setup[3.95]+cmd[26.78] seconds)
15:14:15 evaluation failed :( (6714.54 seconds)
15:14:15 + tox_status=1
15:14:15 + echo '---> Completed tox runs'
15:14:15 ---> Completed tox runs
15:14:15 + for i in .tox/*/log
15:14:15 ++ echo .tox/build_karaf_tests121/log
15:14:15 ++ awk -F/ '{print $2}'
15:14:15 + tox_env=build_karaf_tests121
15:14:15 + cp -r .tox/build_karaf_tests121/log /w/workspace/transportpce-tox-verify-transportpce-master/archives/tox/build_karaf_tests121
15:14:15 + for i in .tox/*/log
15:14:15 ++ echo .tox/build_karaf_tests200/log
15:14:16 ++ awk -F/ '{print $2}'
15:14:16 + tox_env=build_karaf_tests200
15:14:16 + cp -r .tox/build_karaf_tests200/log /w/workspace/transportpce-tox-verify-transportpce-master/archives/tox/build_karaf_tests200
15:14:16 + for i in .tox/*/log
15:14:16 ++ echo .tox/build_karaf_tests221/log
15:14:16 ++ awk -F/ '{print $2}'
15:14:16 + tox_env=build_karaf_tests221
15:14:16 + cp -r .tox/build_karaf_tests221/log /w/workspace/transportpce-tox-verify-transportpce-master/archives/tox/build_karaf_tests221
15:14:16 + for i in .tox/*/log
15:14:16 ++ echo .tox/build_karaf_tests71/log
15:14:16 ++ awk -F/ '{print $2}'
15:14:16 + tox_env=build_karaf_tests71
15:14:16 + cp -r .tox/build_karaf_tests71/log /w/workspace/transportpce-tox-verify-transportpce-master/archives/tox/build_karaf_tests71
15:14:16 + for i in .tox/*/log
15:14:16 ++ echo .tox/buildcontroller/log
15:14:16 ++ awk -F/ '{print $2}'
15:14:16 + tox_env=buildcontroller
15:14:16 + cp -r .tox/buildcontroller/log /w/workspace/transportpce-tox-verify-transportpce-master/archives/tox/buildcontroller
15:14:16 + for i in .tox/*/log
15:14:16 ++ echo .tox/buildlighty/log
15:14:16 ++ awk -F/ '{print $2}'
15:14:16 + tox_env=buildlighty
15:14:16 + cp -r .tox/buildlighty/log /w/workspace/transportpce-tox-verify-transportpce-master/archives/tox/buildlighty
15:14:16 + for i in .tox/*/log
15:14:16 ++ echo .tox/checkbashisms/log
15:14:16 ++ awk -F/ '{print $2}'
15:14:16 + tox_env=checkbashisms
15:14:16 + cp -r .tox/checkbashisms/log /w/workspace/transportpce-tox-verify-transportpce-master/archives/tox/checkbashisms
15:14:16 + for i in .tox/*/log
15:14:16 ++ echo .tox/docs-linkcheck/log
15:14:16 ++ awk -F/ '{print $2}'
15:14:16 + tox_env=docs-linkcheck
15:14:16 + cp -r .tox/docs-linkcheck/log /w/workspace/transportpce-tox-verify-transportpce-master/archives/tox/docs-linkcheck
15:14:16 + for i in .tox/*/log
15:14:16 ++ echo .tox/docs/log
15:14:16 ++ awk -F/ '{print $2}'
15:14:16 + tox_env=docs
15:14:16 + cp -r .tox/docs/log /w/workspace/transportpce-tox-verify-transportpce-master/archives/tox/docs
15:14:16 + for i in .tox/*/log
15:14:16 ++ echo .tox/pre-commit/log
15:14:16 ++ awk -F/ '{print $2}'
15:14:16 + tox_env=pre-commit
15:14:16 + cp -r .tox/pre-commit/log /w/workspace/transportpce-tox-verify-transportpce-master/archives/tox/pre-commit
15:14:16 + for i in .tox/*/log
15:14:16 ++ echo .tox/pylint/log
15:14:16 ++ awk -F/ '{print $2}'
15:14:16 + tox_env=pylint
15:14:16 + cp -r .tox/pylint/log /w/workspace/transportpce-tox-verify-transportpce-master/archives/tox/pylint
15:14:16 + for i in .tox/*/log
15:14:16 ++ echo .tox/sims/log
15:14:16 ++ awk -F/ '{print $2}'
15:14:16 + tox_env=sims
15:14:16 + cp -r .tox/sims/log /w/workspace/transportpce-tox-verify-transportpce-master/archives/tox/sims
15:14:16 + for i in .tox/*/log
15:14:16 ++ echo .tox/tests121/log
15:14:16 ++ awk -F/ '{print $2}'
15:14:16 + tox_env=tests121
15:14:16 + cp -r .tox/tests121/log /w/workspace/transportpce-tox-verify-transportpce-master/archives/tox/tests121
15:14:16 + for i in .tox/*/log
15:14:16 ++ echo .tox/tests200/log
15:14:16 ++ awk -F/ '{print $2}'
15:14:16 + tox_env=tests200
15:14:16 + cp -r .tox/tests200/log /w/workspace/transportpce-tox-verify-transportpce-master/archives/tox/tests200
15:14:16 + for i in .tox/*/log
15:14:16 ++ echo .tox/tests221/log
15:14:16 ++ awk -F/ '{print $2}'
15:14:16 + tox_env=tests221
15:14:16 + cp -r .tox/tests221/log /w/workspace/transportpce-tox-verify-transportpce-master/archives/tox/tests221
15:14:16 + for i in .tox/*/log
15:14:16 ++ echo .tox/tests71/log
15:14:16 ++ awk -F/ '{print $2}'
15:14:16 + tox_env=tests71
15:14:16 + cp -r .tox/tests71/log /w/workspace/transportpce-tox-verify-transportpce-master/archives/tox/tests71
15:14:16 + for i in .tox/*/log
15:14:16 ++ echo .tox/testsPCE/log
15:14:16 ++ awk -F/ '{print $2}'
15:14:16 + tox_env=testsPCE
15:14:16 + cp -r .tox/testsPCE/log /w/workspace/transportpce-tox-verify-transportpce-master/archives/tox/testsPCE
15:14:16 + for i in .tox/*/log
15:14:16 ++ echo .tox/tests_hybrid/log
15:14:16 ++ awk -F/ '{print $2}'
15:14:16 + tox_env=tests_hybrid
15:14:16 + cp -r .tox/tests_hybrid/log /w/workspace/transportpce-tox-verify-transportpce-master/archives/tox/tests_hybrid
15:14:16 + for i in .tox/*/log
15:14:16 ++ echo .tox/tests_tapi/log
15:14:16 ++ awk -F/ '{print $2}'
15:14:16 + tox_env=tests_tapi
15:14:16 + cp -r .tox/tests_tapi/log /w/workspace/transportpce-tox-verify-transportpce-master/archives/tox/tests_tapi
15:14:16 + DOC_DIR=docs/_build/html
15:14:16 + [[ -d docs/_build/html ]]
15:14:16 + echo '---> Archiving generated docs'
15:14:16 ---> Archiving generated docs
15:14:16 + mv docs/_build/html /w/workspace/transportpce-tox-verify-transportpce-master/archives/docs
15:14:16 + echo '---> tox-run.sh ends'
15:14:16 ---> tox-run.sh ends
15:14:16 + test 1 -eq 0
15:14:16 + exit 1
15:14:16 ++ '[' 1 = 1 ']'
15:14:16 ++ '[' -x /usr/bin/clear_console ']'
15:14:16 ++ /usr/bin/clear_console -q
15:14:16 Build step 'Execute shell' marked build as failure
15:14:16 $ ssh-agent -k
15:14:16 unset SSH_AUTH_SOCK;
15:14:16 unset SSH_AGENT_PID;
15:14:16 echo Agent pid 1582 killed;
15:14:16 [ssh-agent] Stopped.
15:14:16 [PostBuildScript] - [INFO] Executing post build scripts.
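The shell trace above loops over every `.tox/<env>/log` directory, derives the env name with `awk -F/ '{print $2}'`, and copies the logs into the archives tree. A hedged Python equivalent of that loop (function name and parameters are illustrative, not part of the actual tox-run.sh):

```python
import glob
import os
import shutil


def archive_tox_logs(tox_root: str, archive_root: str) -> list[str]:
    """Copy each <tox_root>/<env>/log directory to <archive_root>/<env>.

    Mirrors the shell loop's env-name extraction: the awk -F/ '{print $2}'
    step is equivalent to taking the parent directory name of each log dir.
    """
    archived = []
    for log_dir in sorted(glob.glob(os.path.join(tox_root, "*", "log"))):
        tox_env = os.path.basename(os.path.dirname(log_dir))
        shutil.copytree(log_dir, os.path.join(archive_root, tox_env),
                        dirs_exist_ok=True)
        archived.append(tox_env)
    return archived
```

Called as `archive_tox_logs(".tox", ".../archives/tox")`, it produces the same `archives/tox/<env>/` layout the `cp -r` commands create above.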
15:14:16 [transportpce-tox-verify-transportpce-master] $ /bin/bash /tmp/jenkins12624393805305871721.sh
15:14:16 ---> sysstat.sh
15:14:17 [transportpce-tox-verify-transportpce-master] $ /bin/bash /tmp/jenkins13522266522756275662.sh
15:14:17 ---> package-listing.sh
15:14:17 ++ facter osfamily
15:14:17 ++ tr '[:upper:]' '[:lower:]'
15:14:17 + OS_FAMILY=debian
15:14:17 + workspace=/w/workspace/transportpce-tox-verify-transportpce-master
15:14:17 + START_PACKAGES=/tmp/packages_start.txt
15:14:17 + END_PACKAGES=/tmp/packages_end.txt
15:14:17 + DIFF_PACKAGES=/tmp/packages_diff.txt
15:14:17 + PACKAGES=/tmp/packages_start.txt
15:14:17 + '[' /w/workspace/transportpce-tox-verify-transportpce-master ']'
15:14:17 + PACKAGES=/tmp/packages_end.txt
15:14:17 + case "${OS_FAMILY}" in
15:14:17 + dpkg -l
15:14:17 + grep '^ii'
15:14:17 + '[' -f /tmp/packages_start.txt ']'
15:14:17 + '[' -f /tmp/packages_end.txt ']'
15:14:17 + diff /tmp/packages_start.txt /tmp/packages_end.txt
15:14:17 + '[' /w/workspace/transportpce-tox-verify-transportpce-master ']'
15:14:17 + mkdir -p /w/workspace/transportpce-tox-verify-transportpce-master/archives/
15:14:17 + cp -f /tmp/packages_diff.txt /tmp/packages_end.txt /tmp/packages_start.txt /w/workspace/transportpce-tox-verify-transportpce-master/archives/
15:14:17 [transportpce-tox-verify-transportpce-master] $ /bin/bash /tmp/jenkins17511315012145440122.sh
15:14:17 ---> capture-instance-metadata.sh
15:14:17 Setup pyenv:
15:14:17   system
15:14:17   3.8.20
15:14:17   3.9.20
15:14:17   3.10.15
15:14:17 * 3.11.10 (set by /w/workspace/transportpce-tox-verify-transportpce-master/.python-version)
15:14:17 lf-activate-venv(): INFO: Reuse venv:/tmp/venv-laWX from file:/tmp/.os_lf_venv
15:14:17 lf-activate-venv(): INFO: Installing base packages (pip, setuptools, virtualenv)
15:14:17 lf-activate-venv(): INFO: Attempting to install with network-safe options...
15:14:20 lf-activate-venv(): INFO: Base packages installed successfully
15:14:20 lf-activate-venv(): INFO: Installing additional packages: lftools
15:14:50 lf-activate-venv(): INFO: Adding /tmp/venv-laWX/bin to PATH
15:14:50 INFO: Running in OpenStack, capturing instance metadata
15:14:51 [transportpce-tox-verify-transportpce-master] $ /bin/bash /tmp/jenkins12171091017447885619.sh
15:14:51 provisioning config files...
15:14:51 Could not find credentials [logs] for transportpce-tox-verify-transportpce-master #4463
15:14:51 copy managed file [jenkins-log-archives-settings] to file:/w/workspace/transportpce-tox-verify-transportpce-master@tmp/config669474955811199157tmp
15:14:51 Regular expression run condition: Expression=[^.*logs-s3.*], Label=[odl-logs-s3-cloudfront-index]
15:14:51 Run condition [Regular expression match] enabling perform for step [Provide Configuration files]
15:14:51 provisioning config files...
15:14:52 copy managed file [jenkins-s3-log-ship] to file:/home/jenkins/.aws/credentials
15:14:52 [EnvInject] - Injecting environment variables from a build step.
15:14:52 [EnvInject] - Injecting as environment variables the properties content
15:14:52 SERVER_ID=logs
15:14:52 
15:14:52 [EnvInject] - Variables injected successfully.
15:14:52 [transportpce-tox-verify-transportpce-master] $ /bin/bash /tmp/jenkins6893040966866269081.sh
15:14:52 ---> create-netrc.sh
15:14:52 WARN: Log server credential not found.
15:14:52 [transportpce-tox-verify-transportpce-master] $ /bin/bash /tmp/jenkins7464378858390983303.sh
15:14:52 ---> python-tools-install.sh
15:14:52 Setup pyenv:
15:14:52   system
15:14:52   3.8.20
15:14:52   3.9.20
15:14:52   3.10.15
15:14:52 * 3.11.10 (set by /w/workspace/transportpce-tox-verify-transportpce-master/.python-version)
15:14:52 lf-activate-venv(): INFO: Reuse venv:/tmp/venv-laWX from file:/tmp/.os_lf_venv
15:14:52 lf-activate-venv(): INFO: Installing base packages (pip, setuptools, virtualenv)
15:14:52 lf-activate-venv(): INFO: Attempting to install with network-safe options...
15:14:54 lf-activate-venv(): INFO: Base packages installed successfully
15:14:54 lf-activate-venv(): INFO: Installing additional packages: lftools
15:15:03 lf-activate-venv(): INFO: Adding /tmp/venv-laWX/bin to PATH
15:15:03 [transportpce-tox-verify-transportpce-master] $ /bin/bash /tmp/jenkins17081492475146968723.sh
15:15:03 ---> sudo-logs.sh
15:15:03 Archiving 'sudo' log..
15:15:03 [transportpce-tox-verify-transportpce-master] $ /bin/bash /tmp/jenkins10861025734673972913.sh
15:15:03 ---> job-cost.sh
15:15:03 INFO: Activating Python virtual environment...
15:15:03 Setup pyenv:
15:15:03   system
15:15:03   3.8.20
15:15:03   3.9.20
15:15:03   3.10.15
15:15:03 * 3.11.10 (set by /w/workspace/transportpce-tox-verify-transportpce-master/.python-version)
15:15:03 lf-activate-venv(): INFO: Reuse venv:/tmp/venv-laWX from file:/tmp/.os_lf_venv
15:15:03 lf-activate-venv(): INFO: Installing base packages (pip, setuptools, virtualenv)
15:15:03 lf-activate-venv(): INFO: Attempting to install with network-safe options...
15:15:05 lf-activate-venv(): INFO: Base packages installed successfully
15:15:05 lf-activate-venv(): INFO: Installing additional packages: zipp==1.1.0 python-openstackclient urllib3~=1.26.15
15:15:11 lf-activate-venv(): INFO: Adding /tmp/venv-laWX/bin to PATH
15:15:11 INFO: No stack-cost file found
15:15:11 INFO: Instance uptime: 6893s
15:15:11 INFO: Fetching instance metadata (attempt 1 of 3)...
15:15:11 DEBUG: URL: http://169.254.169.254/latest/meta-data/instance-type
15:15:11 INFO: Successfully fetched instance metadata
15:15:11 INFO: Instance type: v3-standard-4
15:15:11 INFO: Retrieving pricing info for: v3-standard-4
15:15:11 INFO: Fetching Vexxhost pricing API (attempt 1 of 3)...
15:15:11 DEBUG: URL: https://pricing.vexxhost.net/v1/pricing/v3-standard-4/cost?seconds=6893
15:15:12 INFO: Successfully fetched Vexxhost pricing API
15:15:12 INFO: Retrieved cost: 0.22
15:15:12 INFO: Retrieved resource: v3-standard-4
15:15:12 INFO: Creating archive directory: /w/workspace/transportpce-tox-verify-transportpce-master/archives/cost
15:15:12 INFO: Archiving costs to: /w/workspace/transportpce-tox-verify-transportpce-master/archives/cost.csv
15:15:12 INFO: Successfully archived job cost data
15:15:12 DEBUG: Cost data: transportpce-tox-verify-transportpce-master,4463,2026-02-27 15:15:12,v3-standard-4,6893,0.22,0.00,FAILURE
15:15:12 [transportpce-tox-verify-transportpce-master] $ /bin/bash -l /tmp/jenkins4276470664772942877.sh
15:15:12 ---> logs-deploy.sh
15:15:12 Setup pyenv:
15:15:12   system
15:15:12   3.8.20
15:15:12   3.9.20
15:15:12   3.10.15
15:15:12 * 3.11.10 (set by /w/workspace/transportpce-tox-verify-transportpce-master/.python-version)
15:15:12 lf-activate-venv(): INFO: Reuse venv:/tmp/venv-laWX from file:/tmp/.os_lf_venv
15:15:12 lf-activate-venv(): INFO: Installing base packages (pip, setuptools, virtualenv)
15:15:12 lf-activate-venv(): INFO: Attempting to install with network-safe options...
15:15:14 lf-activate-venv(): INFO: Base packages installed successfully
15:15:14 lf-activate-venv(): INFO: Installing additional packages: lftools urllib3~=1.26.15
15:15:23 lf-activate-venv(): INFO: Adding /tmp/venv-laWX/bin to PATH
15:15:23 WARNING: Nexus logging server not set
15:15:23 INFO: S3 path logs/releng/vex-yul-odl-jenkins-1/transportpce-tox-verify-transportpce-master/4463/
15:15:23 INFO: archiving logs to S3
15:15:23 /tmp/venv-laWX/lib/python3.11/site-packages/requests/__init__.py:113: RequestsDependencyWarning: urllib3 (1.26.20) or chardet (6.0.0.post1)/charset_normalizer (3.4.4) doesn't match a supported version!
15:15:23   warnings.warn(
15:15:25 ---> uname -a:
15:15:25 Linux prd-ubuntu2204-docker-4c-16g-23837 5.15.0-168-generic #178-Ubuntu SMP Fri Jan 9 19:05:03 UTC 2026 x86_64 x86_64 x86_64 GNU/Linux
15:15:25 
15:15:25 
15:15:25 ---> lscpu:
15:15:25 Architecture:          x86_64
15:15:25 CPU op-mode(s):        32-bit, 64-bit
15:15:25 Address sizes:         40 bits physical, 48 bits virtual
15:15:25 Byte Order:            Little Endian
15:15:25 CPU(s):                4
15:15:25 On-line CPU(s) list:   0-3
15:15:25 Vendor ID:             AuthenticAMD
15:15:25 Model name:            AMD EPYC-Rome Processor
15:15:25 CPU family:            23
15:15:25 Model:                 49
15:15:25 Thread(s) per core:    1
15:15:25 Core(s) per socket:    1
15:15:25 Socket(s):             4
15:15:25 Stepping:              0
15:15:25 BogoMIPS:              5599.99
15:15:25 Flags:                 fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm rep_good nopl cpuid extd_apicid tsc_known_freq pni pclmulqdq ssse3 fma cx16 sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm cmp_legacy svm cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw topoext perfctr_core ssbd ibrs ibpb stibp vmmcall fsgsbase tsc_adjust bmi1 avx2 smep bmi2 rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 clzero xsaveerptr wbnoinvd arat npt nrip_save umip rdpid arch_capabilities
15:15:25 Virtualization:        AMD-V
15:15:25 Hypervisor vendor:     KVM
15:15:25 Virtualization type:   full
15:15:25 L1d cache:             128 KiB (4 instances)
15:15:25 L1i cache:             128 KiB (4 instances)
15:15:25 L2 cache:              2 MiB (4 instances)
15:15:25 L3 cache:              64 MiB (4 instances)
15:15:25 NUMA node(s):          1
15:15:25 NUMA node0 CPU(s):     0-3
15:15:25 Vulnerability Gather data sampling:      Not affected
15:15:25 Vulnerability Indirect target selection: Not affected
15:15:25 Vulnerability Itlb multihit:             Not affected
15:15:25 Vulnerability L1tf:                      Not affected
15:15:25 Vulnerability Mds:                       Not affected
15:15:25 Vulnerability Meltdown:                  Not affected
15:15:25 Vulnerability Mmio stale data:           Not affected
15:15:25 Vulnerability Reg file data sampling:    Not affected
15:15:25 Vulnerability Retbleed:                  Mitigation; untrained return thunk; SMT disabled
15:15:25 Vulnerability Spec rstack overflow:      Mitigation; SMT disabled
15:15:25 Vulnerability Spec store bypass:         Mitigation; Speculative Store Bypass disabled via prctl and seccomp
15:15:25 Vulnerability Spectre v1:                Mitigation; usercopy/swapgs barriers and __user pointer sanitization
15:15:25 Vulnerability Spectre v2:                Mitigation; Retpolines; IBPB conditional; STIBP disabled; RSB filling; PBRSB-eIBRS Not affected; BHI Not affected
15:15:25 Vulnerability Srbds:                     Not affected
15:15:25 Vulnerability Tsa:                       Not affected
15:15:25 Vulnerability Tsx async abort:           Not affected
15:15:25 Vulnerability Vmscape:                   Not affected
15:15:25 
15:15:25 
15:15:25 ---> nproc:
15:15:25 4
15:15:25 
15:15:25 
15:15:25 ---> df -h:
15:15:25 Filesystem      Size  Used Avail Use% Mounted on
15:15:25 tmpfs           1.6G  1.1M  1.6G   1% /run
15:15:25 /dev/vda1        78G   18G   61G  23% /
15:15:25 tmpfs           7.9G     0  7.9G   0% /dev/shm
15:15:25 tmpfs           5.0M     0  5.0M   0% /run/lock
15:15:25 /dev/vda15      105M  6.1M   99M   6% /boot/efi
15:15:25 tmpfs           1.6G  4.0K  1.6G   1% /run/user/1001
15:15:25 
15:15:25 
15:15:25 ---> free -m:
15:15:25                total        used        free      shared  buff/cache   available
15:15:25 Mem:           15989         700       11099           4        4189       14946
15:15:25 Swap:           1023           3        1020
15:15:25 
15:15:25 
15:15:25 ---> ip addr:
15:15:25 1: lo: mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
15:15:25     link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
15:15:25     inet 127.0.0.1/8 scope host lo
15:15:25        valid_lft forever preferred_lft forever
15:15:25     inet6 ::1/128 scope host
15:15:25        valid_lft forever preferred_lft forever
15:15:25 2: ens3: mtu 1458 qdisc mq state UP group default qlen 1000
15:15:25     link/ether fa:16:3e:10:c2:d9 brd ff:ff:ff:ff:ff:ff
15:15:25     altname enp0s3
15:15:25     inet 10.30.171.254/23 metric 100 brd 10.30.171.255 scope global dynamic ens3
15:15:25        valid_lft 79501sec preferred_lft 79501sec
15:15:25     inet6 fe80::f816:3eff:fe10:c2d9/64 scope link
15:15:25        valid_lft forever preferred_lft forever
15:15:25 3: docker0: mtu 1458 qdisc noqueue state DOWN group default
15:15:25     link/ether d6:e1:52:20:af:89 brd ff:ff:ff:ff:ff:ff
15:15:25     inet 10.250.0.254/24 brd 10.250.0.255 scope global docker0
15:15:25        valid_lft forever preferred_lft forever
15:15:25 
15:15:25 
15:15:25 ---> sar -b -r -n DEV:
15:15:25 Linux 5.15.0-168-generic (prd-ubuntu2204-docker-4c-16g-23837)  02/27/26  _x86_64_  (4 CPU)
15:15:25 
15:15:25 13:20:28 LINUX RESTART (4 CPU)
15:15:25 
15:15:25 13:30:18     tps      rtps     wtps     dtps   bread/s   bwrtn/s   bdscd/s
15:15:25 13:40:19   52.58      3.72    32.12    16.74    133.20   3522.68 214160.49
15:15:25 13:50:03    7.40      0.08     6.98     0.34      2.08    199.00   1503.95
15:15:25 14:00:29   12.08      1.90     9.70     0.48    102.24    527.09   1432.88
15:15:25 14:10:29   18.12      0.02    17.30     0.80      0.75    553.03    373.88
15:15:25 14:20:29    7.70      0.01     7.42     0.27      0.95    213.73    130.13
15:15:25 14:30:23    4.90      0.01     4.72     0.17      1.67    138.57    195.93
15:15:25 14:40:29    5.82      0.01     5.64     0.17      1.08    153.35     59.51
15:15:25 14:50:29    5.02      0.03     4.80     0.20      1.27    145.19     57.81
15:15:25 15:00:16   11.16      0.06    10.59     0.51      1.83    598.28    258.08
15:15:25 15:10:23    4.70      0.02     4.53     0.15      0.51    154.22     47.38
15:15:25 Average:   12.96      0.59    10.39     1.98     24.99    621.35  21848.72
15:15:25 
15:15:25 13:30:18 kbmemfree   kbavail kbmemused  %memused kbbuffers  kbcached  kbcommit   %commit  kbactive   kbinact   kbdirty
15:15:25 13:40:19   7166688  10126664   5811288     35.49    263528   2691680   6829744     39.20   1918836   6729280       248
15:15:25 13:50:03   9140824  12114816   3824192     23.36    264648   2704584   6112068     35.08   1937460   4754940      6436
15:15:25 14:00:29   9253564  12354652   3584212     21.89    270180   2826112   4402680     25.27   1952844   4627624       192
15:15:25 14:10:29   9752992  12937544   3001468     18.33    272508   2907260   3670236     21.07   1977100   4104648       124
15:15:25 14:20:29   8045152  11259392   4678668     28.58    273472   2935984   5325448     30.57   1979204   5803896       180
15:15:25 14:30:23   8216356  11450488   4487460     27.41    274184   2955164   5153080     29.58   1980932   5630556        64
15:15:25 14:40:29   6579532   9833968   6103144     37.28    275100   2974548   6771804     38.87   1983300   7259492        76
15:15:25 14:50:29   6683196   9959412   5977740     36.51    275872   2995560   6647468     38.16   1987084   7143788        72
15:15:25 15:00:16   9274340  12694944   3244024     19.81    279800   3135072   4254192     24.42   2026464   4532508     23120
15:15:25 15:10:23   9412580  12859596   3079460     18.81    280200   3161052   4182768     24.01   2037904   4384016     24796
15:15:25 Average:   8352522  11559148   4379166     26.75    272949   2928702   5334949     30.62   1978113   5497075      5531
15:15:25 
15:15:25 13:30:18    IFACE   rxpck/s   txpck/s    rxkB/s    txkB/s   rxcmp/s   txcmp/s  rxmcst/s   %ifutil
15:15:25 13:40:19       lo     19.28     19.28     15.49     15.49      0.00      0.00      0.00      0.00
15:15:25 13:40:19     ens3      1.78      1.20      0.36      1.21      0.00      0.00      0.00      0.00
15:15:25 13:40:19  docker0      0.00      0.00      0.00      0.00      0.00      0.00      0.00      0.00
15:15:25 13:50:03       lo     14.06     14.06      6.66      6.66      0.00      0.00      0.00      0.00
15:15:25 13:50:03     ens3      0.63      0.49      0.15      0.12      0.00      0.00      0.00      0.00
15:15:25 13:50:03  docker0      0.00      0.00      0.00      0.00      0.00      0.00      0.00      0.00
15:15:25 14:00:29       lo     12.07     12.07      7.06      7.06      0.00      0.00      0.00      0.00
15:15:25 14:00:29     ens3      0.73      0.96      0.20      1.01      0.00      0.00      0.00      0.00
15:15:25 14:00:29  docker0      0.00      0.00      0.00      0.00      0.00      0.00      0.00      0.00
15:15:25 14:10:29       lo     13.21     13.21      7.78      7.78      0.00      0.00      0.00      0.00
15:15:25 14:10:29     ens3      0.89      0.78      0.20      0.17      0.00      0.00      0.00      0.00
15:15:25 14:10:29  docker0      0.00      0.00      0.00      0.00      0.00      0.00      0.00      0.00
15:15:25 14:20:29       lo     19.85     19.85      9.39      9.39      0.00      0.00      0.00      0.00
15:15:25 14:20:29     ens3      0.70      0.66      0.15      0.13      0.00      0.00      0.00      0.00
15:15:25 14:20:29  docker0      0.00      0.00      0.00      0.00      0.00      0.00      0.00      0.00
15:15:25 14:30:23       lo     26.97     26.97      9.35      9.35      0.00      0.00      0.00      0.00
15:15:25 14:30:23     ens3      0.45      0.36      0.10      0.07      0.00      0.00      0.00      0.00
15:15:25 14:30:23  docker0      0.00      0.00      0.00      0.00      0.00      0.00      0.00      0.00
15:15:25 14:40:29       lo     16.45     16.45      8.36      8.36      0.00      0.00      0.00      0.00
15:15:25 14:40:29     ens3      0.81      0.52      0.20      0.14      0.00      0.00      0.00      0.00
15:15:25 14:40:29  docker0      0.00      0.00      0.00      0.00      0.00      0.00      0.00      0.00
15:15:25 14:50:29       lo     11.27     11.27      7.19      7.19      0.00      0.00      0.00      0.00
15:15:25 14:50:29     ens3      0.56      0.42      0.16      0.12      0.00      0.00      0.00      0.00
15:15:25 14:50:29  docker0      0.00      0.00      0.00      0.00      0.00      0.00      0.00      0.00
15:15:25 15:00:16       lo     18.19     18.19     10.15     10.15      0.00      0.00      0.00      0.00
15:15:25 15:00:16     ens3      0.91      0.79      0.27      0.22      0.00      0.00      0.00      0.00
15:15:25 15:00:16  docker0      0.00      0.00      0.00      0.00      0.00      0.00      0.00      0.00
15:15:25 15:10:23       lo     16.74     16.74      6.83      6.83      0.00      0.00      0.00      0.00
15:15:25 15:10:23     ens3      0.74      0.55      0.21      0.16      0.00      0.00      0.00      0.00
15:15:25 15:10:23  docker0      0.00      0.00      0.00      0.00      0.00      0.00      0.00      0.00
15:15:25 Average:       lo     16.78     16.78      8.82      8.82      0.00      0.00      0.00      0.00
15:15:25 Average:     ens3      0.82      0.67      0.20      0.34      0.00      0.00      0.00      0.00
15:15:25 Average:  docker0      0.00      0.00      0.00      0.00      0.00      0.00      0.00      0.00
15:15:25 
15:15:25 
15:15:25 ---> sar -P ALL:
15:15:25 Linux 5.15.0-168-generic (prd-ubuntu2204-docker-4c-16g-23837)  02/27/26  _x86_64_  (4 CPU)
15:15:25 
15:15:25 13:20:28 LINUX RESTART (4 CPU)
15:15:25 
15:15:25 13:30:18     CPU     %user     %nice   %system   %iowait    %steal     %idle
15:15:25 13:40:19     all     35.24      0.00      1.44      0.28      0.09     62.95
15:15:25 13:40:19       0     35.17      0.00      1.51      0.18      0.09     63.06
15:15:25 13:40:19       1     34.68      0.00      1.59      0.30      0.09     63.34
15:15:25 13:40:19       2     35.57      0.00      1.30      0.50      0.09     62.54
15:15:25 13:40:19       3     35.55      0.00      1.36      0.15      0.09     62.85
15:15:25 13:50:03     all     13.52      0.00      0.54      0.03      0.06     85.85
15:15:25 13:50:03       0     13.26      0.00      0.53      0.01      0.06     86.14
15:15:25 13:50:03       1     13.75      0.00      0.53      0.04      0.07     85.61
15:15:25 13:50:03       2     14.04      0.00      0.59      0.04      0.07     85.25
15:15:25 13:50:03       3     13.03      0.00      0.51      0.02      0.06     86.38
15:15:25 14:00:29     all     15.91      0.00      0.60      0.07      0.08     83.34
15:15:25 14:00:29       0     15.48      0.00      0.51      0.04      0.07     83.90
15:15:25 14:00:29       1     16.78      0.00      0.81      0.10      0.08     82.22
15:15:25 14:00:29       2     15.46      0.00      0.54      0.05      0.07     83.87
15:15:25 14:00:29       3     15.94      0.00      0.53      0.09      0.08     83.36
15:15:25 14:10:29     all     27.86      0.00      0.97      0.48      0.09     70.61
15:15:25 14:10:29       0     26.63      0.00      1.10      0.51      0.09     71.67
15:15:25 14:10:29       1     27.54      0.00      0.82      0.03      0.09     71.53
15:15:25 14:10:29       2     29.12      0.00      0.94      0.63      0.09     69.22
15:15:25 14:10:29       3     28.14      0.00      1.02      0.73      0.08     70.03
15:15:25 14:20:29     all     13.91      0.00      0.52      0.09      0.07     85.41
15:15:25 14:20:29       0     13.89      0.00      0.49      0.01      0.07     85.55
15:15:25 14:20:29       1     14.33      0.00      0.49      0.06      0.07     85.05
15:15:25 14:20:29       2     13.30      0.00      0.68      0.25      0.07     85.70
15:15:25 14:20:29       3     14.13      0.00      0.45      0.03      0.07     85.32
15:15:25 14:30:23     all      8.38      0.00      0.37      0.02      0.07     91.16
15:15:25 14:30:23       0      8.68      0.00      0.34      0.01      0.06     90.92
15:15:25 14:30:23       1      8.59      0.00      0.42      0.02      0.07     90.89
15:15:25 14:30:23       2      8.24      0.00      0.35      0.03      0.06     91.32
15:15:25 14:30:23       3      8.00      0.00      0.38      0.04      0.07     91.51
15:15:25 14:40:29     all      9.62      0.00      0.43      0.06      0.07     89.82
15:15:25 14:40:29       0      9.47      0.00      0.39      0.07      0.06     90.01
15:15:25 14:40:29       1      9.97      0.00      0.41      0.05      0.07     89.51
15:15:25 14:40:29       2      9.94      0.00      0.44      0.05      0.08     89.49
15:15:25 14:40:29       3      9.10      0.00      0.48      0.09      0.07     90.27
15:15:25 14:50:29     all      9.71      0.00      0.50      0.03      0.07     89.68
15:15:25 14:50:29       0      9.86      0.00      0.46      0.09      0.06     89.54
15:15:25 14:50:29       1      9.61      0.00      0.52      0.01      0.07     89.80
15:15:25 14:50:29       2     10.14      0.00      0.53      0.02      0.07     89.23
15:15:25 14:50:29       3      9.25      0.00      0.51      0.01      0.07     90.16
15:15:25 15:00:16     all     17.48      0.00      0.70      0.04      0.07     81.71
15:15:25 15:00:16       0     17.56      0.00      0.64      0.01      0.07     81.71
15:15:25 15:00:16       1     18.07      0.00      0.78      0.06      0.07     81.02
15:15:25 15:00:16       2     16.90      0.00      0.68      0.05      0.08     82.29
15:15:25 15:00:16       3     17.39      0.00      0.67      0.03      0.07     81.83
15:15:25 15:10:23     all      7.63      0.00      0.41      0.03      0.07     91.87
15:15:25 15:10:23       0      7.82      0.00      0.40      0.01      0.06     91.71
15:15:25 15:10:23       1      7.59      0.00      0.40      0.07      0.07     91.87
15:15:25 15:10:23       2      7.82      0.00      0.46      0.02      0.06     91.63
15:15:25 15:10:23       3      7.26      0.00      0.38      0.02      0.07     92.27
15:15:25 Average:     all     15.93      0.00      0.65      0.11      0.07     83.24
15:15:25 Average:       0     15.78      0.00      0.64      0.09      0.07     83.42
15:15:25 Average:       1     16.09      0.00      0.68      0.07      0.08     83.08
15:15:25 Average:       2     16.05      0.00      0.65      0.17      0.07     83.06
15:15:25 Average:       3     15.79      0.00      0.63      0.12      0.07     83.39
15:15:25 
15:15:25 
15:15:25
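The sar -P ALL block above reports six utilization fields per CPU row (%user, %nice, %system, %iowait, %steal, %idle), which should sum to roughly 100. A small sketch of parsing one such row (the helper name is illustrative; the sample line is copied verbatim from the "Average: all" row above):

```python
def parse_sar_cpu(line: str) -> dict[str, float]:
    """Parse one 'sar -P ALL' data row into its six utilization fields.

    Row layout: <label> <cpu> %user %nice %system %iowait %steal %idle
    """
    fields = line.split()
    keys = ("user", "nice", "system", "iowait", "steal", "idle")
    return dict(zip(keys, map(float, fields[2:])))


# "Average: all" row from the sar output above
row = parse_sar_cpu("Average: all 15.93 0.00 0.65 0.11 0.07 83.24")
```

For this run the averages confirm the machine was mostly idle (83.24% idle, 15.93% user) over the roughly two-hour tox session.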