11:41:03 Triggered by Gerrit: https://git.opendaylight.org/gerrit/c/transportpce/+/120829
11:41:03 Running as SYSTEM
11:41:03 [EnvInject] - Loading node environment variables.
11:41:03 Building remotely on prd-ubuntu2204-docker-4c-16g-58912 (ubuntu2204-docker-4c-16g) in workspace /w/workspace/transportpce-tox-verify-transportpce-master
11:41:04 [ssh-agent] Looking for ssh-agent implementation...
11:41:04 [ssh-agent] Exec ssh-agent (binary ssh-agent on a remote machine)
11:41:04 $ ssh-agent
11:41:04 SSH_AUTH_SOCK=/tmp/ssh-XXXXXXBKDGMo/agent.1563
11:41:04 SSH_AGENT_PID=1565
11:41:04 [ssh-agent] Started.
11:41:04 Running ssh-add (command line suppressed)
11:41:04 Identity added: /w/workspace/transportpce-tox-verify-transportpce-master@tmp/private_key_16349847262812033363.key (/w/workspace/transportpce-tox-verify-transportpce-master@tmp/private_key_16349847262812033363.key)
11:41:04 [ssh-agent] Using credentials jenkins (jenkins-ssh)
11:41:04 The recommended git tool is: NONE
11:41:05 using credential jenkins-ssh
11:41:05 Wiping out workspace first.
11:41:05 Cloning the remote Git repository
11:41:06 Cloning repository git://devvexx.opendaylight.org/mirror/transportpce
11:41:06 > git init /w/workspace/transportpce-tox-verify-transportpce-master # timeout=10
11:41:06 Fetching upstream changes from git://devvexx.opendaylight.org/mirror/transportpce
11:41:06 > git --version # timeout=10
11:41:06 > git --version # 'git version 2.34.1'
11:41:06 using GIT_SSH to set credentials jenkins-ssh
11:41:06 Verifying host key using known hosts file, will automatically accept unseen keys
11:41:06 > git fetch --tags --force --progress -- git://devvexx.opendaylight.org/mirror/transportpce +refs/heads/*:refs/remotes/origin/* # timeout=10
11:41:10 > git config remote.origin.url git://devvexx.opendaylight.org/mirror/transportpce # timeout=10
11:41:10 > git config --add remote.origin.fetch +refs/heads/*:refs/remotes/origin/* # timeout=10
11:41:10 > git config remote.origin.url git://devvexx.opendaylight.org/mirror/transportpce # timeout=10
11:41:10 Fetching upstream changes from git://devvexx.opendaylight.org/mirror/transportpce
11:41:10 using GIT_SSH to set credentials jenkins-ssh
11:41:10 Verifying host key using known hosts file, will automatically accept unseen keys
11:41:10 > git fetch --tags --force --progress -- git://devvexx.opendaylight.org/mirror/transportpce refs/changes/29/120829/7 # timeout=10
11:41:10 > git rev-parse a3f37a54696bc8463e100deba8a1c2fa94f13495^{commit} # timeout=10
11:41:10 JENKINS-19022: warning: possible memory leak due to Git plugin usage; see: https://plugins.jenkins.io/git/#remove-git-plugin-buildsbybranch-builddata-script
11:41:10 Checking out Revision a3f37a54696bc8463e100deba8a1c2fa94f13495 (refs/changes/29/120829/7)
11:41:10 > git config core.sparsecheckout # timeout=10
11:41:10 > git checkout -f a3f37a54696bc8463e100deba8a1c2fa94f13495 # timeout=10
11:41:11 Commit message: "Support for openconfig 2.0"
11:41:11 > git rev-parse FETCH_HEAD^{commit} # timeout=10
11:41:11 > git rev-list --no-walk 509d781065379100eb9da8d0414bc0043a05ebc0 # timeout=10
11:41:11 > git remote # timeout=10
11:41:11 > git submodule init # timeout=10
11:41:11 > git submodule sync # timeout=10
11:41:11 > git config --get remote.origin.url # timeout=10
11:41:11 > git submodule init # timeout=10
11:41:11 > git config -f .gitmodules --get-regexp ^submodule\.(.+)\.url # timeout=10
11:41:11 ERROR: No submodules found.
11:41:14 provisioning config files...
11:41:14 copy managed file [npmrc] to file:/home/jenkins/.npmrc
11:41:14 copy managed file [pipconf] to file:/home/jenkins/.config/pip/pip.conf
11:41:14 [transportpce-tox-verify-transportpce-master] $ /bin/bash /tmp/jenkins16729747966276941678.sh
11:41:14 ---> python-tools-install.sh
11:41:14 Setup pyenv:
11:41:14 * system (set by /opt/pyenv/version)
11:41:14 * 3.8.20 (set by /opt/pyenv/version)
11:41:14 * 3.9.20 (set by /opt/pyenv/version)
11:41:14 3.10.15
11:41:14 3.11.10
11:41:19 lf-activate-venv(): INFO: Creating python3 venv at /tmp/venv-xUmd
11:41:19 lf-activate-venv(): INFO: Save venv in file: /tmp/.os_lf_venv
11:41:19 lf-activate-venv(): INFO: Installing base packages (pip, setuptools, virtualenv)
11:41:19 lf-activate-venv(): INFO: Attempting to install with network-safe options...
11:41:23 lf-activate-venv(): INFO: Base packages installed successfully
11:41:23 lf-activate-venv(): INFO: Installing additional packages: lftools
11:41:50 lf-activate-venv(): INFO: Adding /tmp/venv-xUmd/bin to PATH
11:41:50 Generating Requirements File
11:42:10 ERROR: pip's dependency resolver does not currently take into account all the packages that are installed. This behaviour is the source of the following dependency conflicts.
11:42:10 httplib2 0.30.2 requires pyparsing<4,>=3.0.4, but you have pyparsing 2.4.7 which is incompatible.
11:42:10 Python 3.11.10
11:42:10 pip 26.0.1 from /tmp/venv-xUmd/lib/python3.11/site-packages/pip (python 3.11)
11:42:10 appdirs==1.4.4
11:42:10 argcomplete==3.6.3
11:42:10 aspy.yaml==1.3.0
11:42:10 attrs==25.4.0
11:42:10 autopage==0.6.0
11:42:10 beautifulsoup4==4.14.3
11:42:10 boto3==1.42.61
11:42:10 botocore==1.42.61
11:42:10 bs4==0.0.2
11:42:10 certifi==2026.2.25
11:42:10 cffi==2.0.0
11:42:10 cfgv==3.5.0
11:42:10 chardet==7.0.1
11:42:10 charset-normalizer==3.4.4
11:42:10 click==8.3.1
11:42:10 cliff==4.13.2
11:42:10 cmd2==3.4.0
11:42:10 cryptography==3.3.2
11:42:10 debtcollector==3.0.0
11:42:10 decorator==5.2.1
11:42:10 defusedxml==0.7.1
11:42:10 Deprecated==1.3.1
11:42:10 distlib==0.4.0
11:42:10 dnspython==2.8.0
11:42:10 docker==7.1.0
11:42:10 dogpile.cache==1.5.0
11:42:10 durationpy==0.10
11:42:10 email-validator==2.3.0
11:42:10 filelock==3.25.0
11:42:10 future==1.0.0
11:42:10 gitdb==4.0.12
11:42:10 GitPython==3.1.46
11:42:10 httplib2==0.30.2
11:42:10 identify==2.6.17
11:42:10 idna==3.11
11:42:10 importlib-resources==1.5.0
11:42:10 iso8601==2.1.0
11:42:10 Jinja2==3.1.6
11:42:10 jmespath==1.1.0
11:42:10 jsonpatch==1.33
11:42:10 jsonpointer==3.0.0
11:42:10 jsonschema==4.26.0
11:42:10 jsonschema-specifications==2025.9.1
11:42:10 keystoneauth1==5.13.1
11:42:10 kubernetes==35.0.0
11:42:10 lftools==0.37.22
11:42:10 lxml==6.0.2
11:42:10 markdown-it-py==4.0.0
11:42:10 MarkupSafe==3.0.3
11:42:10 mdurl==0.1.2
11:42:10 msgpack==1.1.2
11:42:10 multi_key_dict==2.0.3
11:42:10 munch==4.0.0
11:42:10 netaddr==1.3.0
11:42:10 niet==1.4.2
11:42:10 nodeenv==1.10.0
11:42:10 oauth2client==4.1.3
11:42:10 oauthlib==3.3.1
11:42:10 openstacksdk==4.10.0
11:42:10 os-service-types==1.8.2
11:42:10 osc-lib==4.4.0
11:42:10 oslo.config==10.3.0
11:42:10 oslo.context==6.3.0
11:42:10 oslo.i18n==6.7.2
11:42:10 oslo.log==8.1.0
11:42:10 oslo.serialization==5.9.1
11:42:10 oslo.utils==10.0.0
11:42:10 packaging==26.0
11:42:10 pbr==7.0.3
11:42:10 platformdirs==4.9.2
11:42:10 prettytable==3.17.0
11:42:10 psutil==7.2.2
11:42:10 pyasn1==0.6.2
11:42:10 pyasn1_modules==0.4.2
11:42:10 pycparser==3.0
11:42:10 pygerrit2==2.0.15
11:42:10 PyGithub==2.8.1
11:42:10 Pygments==2.19.2
11:42:10 PyJWT==2.11.0
11:42:10 PyNaCl==1.6.2
11:42:10 pyparsing==2.4.7
11:42:10 pyperclip==1.11.0
11:42:10 pyrsistent==0.20.0
11:42:10 python-cinderclient==9.9.0
11:42:10 python-dateutil==2.9.0.post0
11:42:10 python-discovery==1.1.0
11:42:10 python-heatclient==5.1.0
11:42:10 python-jenkins==1.8.3
11:42:10 python-keystoneclient==5.8.0
11:42:10 python-magnumclient==4.10.0
11:42:10 python-openstackclient==9.0.0
11:42:10 python-swiftclient==4.10.0
11:42:10 PyYAML==6.0.3
11:42:10 referencing==0.37.0
11:42:10 requests==2.32.5
11:42:10 requests-oauthlib==2.0.0
11:42:10 requestsexceptions==1.4.0
11:42:10 rfc3986==2.0.0
11:42:10 rich==14.3.3
11:42:10 rich-argparse==1.7.2
11:42:10 rpds-py==0.30.0
11:42:10 rsa==4.9.1
11:42:10 ruamel.yaml==0.19.1
11:42:10 ruamel.yaml.clib==0.2.15
11:42:10 s3transfer==0.16.0
11:42:10 simplejson==3.20.2
11:42:10 six==1.17.0
11:42:10 smmap==5.0.2
11:42:10 soupsieve==2.8.3
11:42:10 stevedore==5.7.0
11:42:10 tabulate==0.10.0
11:42:10 toml==0.10.2
11:42:10 tomlkit==0.14.0
11:42:10 tqdm==4.67.3
11:42:10 typing_extensions==4.15.0
11:42:10 urllib3==1.26.20
11:42:10 virtualenv==21.1.0
11:42:10 wcwidth==0.6.0
11:42:10 websocket-client==1.9.0
11:42:10 wrapt==2.1.1
11:42:10 xdg==6.0.0
11:42:10 xmltodict==1.0.4
11:42:10 yq==3.4.3
11:42:10 [EnvInject] - Injecting environment variables from a build step.
11:42:10 [EnvInject] - Injecting as environment variables the properties content
11:42:10 PYTHON=python3
11:42:10 
11:42:10 [EnvInject] - Variables injected successfully.
11:42:10 [transportpce-tox-verify-transportpce-master] $ /bin/bash -l /tmp/jenkins5167336856844452812.sh
11:42:10 ---> tox-install.sh
11:42:10 + source /home/jenkins/lf-env.sh
11:42:10 + lf-activate-venv --venv-file /tmp/.toxenv tox virtualenv urllib3~=1.26.15
11:42:10 ++ mktemp -d /tmp/venv-XXXX
11:42:10 + lf_venv=/tmp/venv-miFu
11:42:10 + local venv_file=/tmp/.os_lf_venv
11:42:10 + local python=python3
11:42:10 + local options
11:42:10 + local set_path=true
11:42:10 + local install_args=
11:42:10 ++ getopt -o np:v: -l no-path,system-site-packages,python:,venv-file: -n lf-activate-venv -- --venv-file /tmp/.toxenv tox virtualenv urllib3~=1.26.15
11:42:10 + options=' --venv-file '\''/tmp/.toxenv'\'' -- '\''tox'\'' '\''virtualenv'\'' '\''urllib3~=1.26.15'\'''
11:42:10 + eval set -- ' --venv-file '\''/tmp/.toxenv'\'' -- '\''tox'\'' '\''virtualenv'\'' '\''urllib3~=1.26.15'\'''
11:42:10 ++ set -- --venv-file /tmp/.toxenv -- tox virtualenv urllib3~=1.26.15
11:42:10 + true
11:42:10 + case $1 in
11:42:10 + venv_file=/tmp/.toxenv
11:42:10 + shift 2
11:42:10 + true
11:42:10 + case $1 in
11:42:10 + shift
11:42:10 + break
11:42:10 + case $python in
11:42:10 + local pkg_list=
11:42:10 + [[ -d /opt/pyenv ]]
11:42:10 + echo 'Setup pyenv:'
11:42:10 Setup pyenv:
11:42:10 + export PYENV_ROOT=/opt/pyenv
11:42:10 + PYENV_ROOT=/opt/pyenv
11:42:10 + export PATH=/opt/pyenv/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/opt/puppetlabs/bin
11:42:10 + PATH=/opt/pyenv/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/opt/puppetlabs/bin
11:42:10 + pyenv versions
11:42:11 system
11:42:11 3.8.20
11:42:11 3.9.20
11:42:11 3.10.15
11:42:11 * 3.11.10 (set by /w/workspace/transportpce-tox-verify-transportpce-master/.python-version)
11:42:11 + command -v pyenv
11:42:11 ++ pyenv init - --no-rehash
11:42:11 + eval 'PATH="$(bash --norc -ec '\''IFS=:; paths=($PATH);
11:42:11 for i in ${!paths[@]}; do
11:42:11 if [[ ${paths[i]} == "'\'''\''/opt/pyenv/shims'\'''\''" ]]; then unset '\''\'\'''\''paths[i]'\''\'\'''\'';
11:42:11 fi; done;
11:42:11 echo "${paths[*]}"'\'')"
11:42:11 export PATH="/opt/pyenv/shims:${PATH}"
11:42:11 export PYENV_SHELL=bash
11:42:11 source '\''/opt/pyenv/libexec/../completions/pyenv.bash'\''
11:42:11 pyenv() {
11:42:11 local command
11:42:11 command="${1:-}"
11:42:11 if [ "$#" -gt 0 ]; then
11:42:11 shift
11:42:11 fi
11:42:11 
11:42:11 case "$command" in
11:42:11 rehash|shell)
11:42:11 eval "$(pyenv "sh-$command" "$@")"
11:42:11 ;;
11:42:11 *)
11:42:11 command pyenv "$command" "$@"
11:42:11 ;;
11:42:11 esac
11:42:11 }'
11:42:11 +++ bash --norc -ec 'IFS=:; paths=($PATH);
11:42:11 for i in ${!paths[@]}; do
11:42:11 if [[ ${paths[i]} == "/opt/pyenv/shims" ]]; then unset '\''paths[i]'\'';
11:42:11 fi; done;
11:42:11 echo "${paths[*]}"'
11:42:11 ++ PATH=/opt/pyenv/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/opt/puppetlabs/bin
11:42:11 ++ export PATH=/opt/pyenv/shims:/opt/pyenv/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/opt/puppetlabs/bin
11:42:11 ++ PATH=/opt/pyenv/shims:/opt/pyenv/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/opt/puppetlabs/bin
11:42:11 ++ export PYENV_SHELL=bash
11:42:11 ++ PYENV_SHELL=bash
11:42:11 ++ source /opt/pyenv/libexec/../completions/pyenv.bash
11:42:11 +++ complete -F _pyenv pyenv
11:42:11 ++ lf-pyver python3
11:42:11 ++ local py_version_xy=python3
11:42:11 ++ local py_version_xyz=
11:42:11 ++ pyenv versions
11:42:11 ++ local command
11:42:11 ++ sed 's/^[ *]* //'
11:42:11 ++ command=versions
11:42:11 ++ '[' 1 -gt 0 ']'
11:42:11 ++ shift
11:42:11 ++ case "$command" in
11:42:11 ++ command pyenv versions
11:42:11 ++ awk '{ print $1 }'
11:42:11 ++ grep -E '^[0-9.]*[0-9]$'
11:42:11 ++ [[ ! -s /tmp/.pyenv_versions ]]
11:42:11 +++ grep '^3' /tmp/.pyenv_versions
11:42:11 +++ sort -V
11:42:11 +++ tail -n 1
11:42:11 ++ py_version_xyz=3.11.10
11:42:11 ++ [[ -z 3.11.10 ]]
11:42:11 ++ echo 3.11.10
11:42:11 ++ return 0
11:42:11 + pyenv local 3.11.10
11:42:11 + local command
11:42:11 + command=local
11:42:11 + '[' 2 -gt 0 ']'
11:42:11 + shift
11:42:11 + case "$command" in
11:42:11 + command pyenv local 3.11.10
11:42:11 + for arg in "$@"
11:42:11 + case $arg in
11:42:11 + pkg_list+='tox '
11:42:11 + for arg in "$@"
11:42:11 + case $arg in
11:42:11 + pkg_list+='virtualenv '
11:42:11 + for arg in "$@"
11:42:11 + case $arg in
11:42:11 + pkg_list+='urllib3~=1.26.15 '
11:42:11 + [[ -f /tmp/.toxenv ]]
11:42:11 + [[ ! -f /tmp/.toxenv ]]
11:42:11 + [[ -n '' ]]
11:42:11 + python3 -m venv /tmp/venv-miFu
11:42:15 + echo 'lf-activate-venv(): INFO: Creating python3 venv at /tmp/venv-miFu'
11:42:15 lf-activate-venv(): INFO: Creating python3 venv at /tmp/venv-miFu
11:42:15 + echo /tmp/venv-miFu
11:42:15 + echo 'lf-activate-venv(): INFO: Save venv in file: /tmp/.toxenv'
11:42:15 lf-activate-venv(): INFO: Save venv in file: /tmp/.toxenv
11:42:15 + echo 'lf-activate-venv(): INFO: Installing base packages (pip, setuptools, virtualenv)'
11:42:15 lf-activate-venv(): INFO: Installing base packages (pip, setuptools, virtualenv)
11:42:15 + local 'pip_opts=--upgrade --quiet'
11:42:15 + pip_opts='--upgrade --quiet --trusted-host pypi.org'
11:42:15 + pip_opts='--upgrade --quiet --trusted-host pypi.org --trusted-host files.pythonhosted.org'
11:42:15 + pip_opts='--upgrade --quiet --trusted-host pypi.org --trusted-host files.pythonhosted.org --trusted-host pypi.python.org'
11:42:15 + [[ -n '' ]]
11:42:15 + [[ -n '' ]]
11:42:15 + echo 'lf-activate-venv(): INFO: Attempting to install with network-safe options...'
11:42:15 lf-activate-venv(): INFO: Attempting to install with network-safe options...
11:42:15 + /tmp/venv-miFu/bin/python3 -m pip install --upgrade --quiet --trusted-host pypi.org --trusted-host files.pythonhosted.org --trusted-host pypi.python.org pip 'setuptools<66' virtualenv
11:42:19 + echo 'lf-activate-venv(): INFO: Base packages installed successfully'
11:42:19 lf-activate-venv(): INFO: Base packages installed successfully
11:42:19 + [[ -z tox virtualenv urllib3~=1.26.15 ]]
11:42:19 + echo 'lf-activate-venv(): INFO: Installing additional packages: tox virtualenv urllib3~=1.26.15 '
11:42:19 lf-activate-venv(): INFO: Installing additional packages: tox virtualenv urllib3~=1.26.15
11:42:19 + /tmp/venv-miFu/bin/python3 -m pip install --upgrade --quiet --trusted-host pypi.org --trusted-host files.pythonhosted.org --trusted-host pypi.python.org --upgrade-strategy eager tox virtualenv urllib3~=1.26.15
11:42:21 + type python3
11:42:21 + true
11:42:21 + echo 'lf-activate-venv(): INFO: Adding /tmp/venv-miFu/bin to PATH'
11:42:21 lf-activate-venv(): INFO: Adding /tmp/venv-miFu/bin to PATH
11:42:21 + PATH=/tmp/venv-miFu/bin:/opt/pyenv/shims:/opt/pyenv/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/opt/puppetlabs/bin
11:42:21 + return 0
11:42:21 + python3 --version
11:42:21 Python 3.11.10
11:42:21 + python3 -m pip --version
11:42:21 pip 26.0.1 from /tmp/venv-miFu/lib/python3.11/site-packages/pip (python 3.11)
11:42:21 + python3 -m pip freeze
11:42:21 cachetools==7.0.2
11:42:21 colorama==0.4.6
11:42:21 distlib==0.4.0
11:42:21 filelock==3.25.0
11:42:21 packaging==26.0
11:42:21 platformdirs==4.9.2
11:42:21 pluggy==1.6.0
11:42:21 pyproject-api==1.10.0
11:42:21 python-discovery==1.1.0
11:42:21 tox==4.47.3
11:42:21 urllib3==1.26.20
11:42:21 virtualenv==21.1.0
11:42:21 [transportpce-tox-verify-transportpce-master] $ /bin/sh -xe /tmp/jenkins14978504901578667612.sh
11:42:21 [EnvInject] - Injecting environment variables from a build step.
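The lf-pyver trace above selects the interpreter by filtering `pyenv versions` output for CPython 3.x entries and taking the highest with a version-aware sort. A minimal sketch of that selection step, assuming the version list shown in the log (the real helper reads the cached `/tmp/.pyenv_versions` file rather than an inline list):

```shell
#!/bin/sh
# Pick the highest CPython 3.x from a pyenv version list, mirroring the
# grep '^3' | sort -V | tail -n 1 pipeline in the lf-pyver trace.
# The version list is inlined here for illustration.
printf 'system\n3.8.20\n3.9.20\n3.10.15\n3.11.10\n' \
    | grep '^3' | sort -V | tail -n 1
# prints 3.11.10
```

`sort -V` is what makes 3.10.x order after 3.9.x; a plain lexical sort would pick the wrong version.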
11:42:21 [EnvInject] - Injecting as environment variables the properties content
11:42:21 PARALLEL=True
11:42:21 
11:42:21 [EnvInject] - Variables injected successfully.
11:42:21 [transportpce-tox-verify-transportpce-master] $ /bin/bash -l /tmp/jenkins2507584019031607672.sh
11:42:21 ---> tox-run.sh
11:42:21 + PATH=/home/jenkins/.local/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/opt/puppetlabs/bin
11:42:21 + ARCHIVE_TOX_DIR=/w/workspace/transportpce-tox-verify-transportpce-master/archives/tox
11:42:21 + ARCHIVE_DOC_DIR=/w/workspace/transportpce-tox-verify-transportpce-master/archives/docs
11:42:21 + mkdir -p /w/workspace/transportpce-tox-verify-transportpce-master/archives/tox
11:42:21 + cd /w/workspace/transportpce-tox-verify-transportpce-master/.
11:42:21 + source /home/jenkins/lf-env.sh
11:42:21 + lf-activate-venv --venv-file /tmp/.toxenv tox virtualenv urllib3~=1.26.15
11:42:21 ++ mktemp -d /tmp/venv-XXXX
11:42:21 + lf_venv=/tmp/venv-jsC0
11:42:21 + local venv_file=/tmp/.os_lf_venv
11:42:21 + local python=python3
11:42:21 + local options
11:42:21 + local set_path=true
11:42:21 + local install_args=
11:42:21 ++ getopt -o np:v: -l no-path,system-site-packages,python:,venv-file: -n lf-activate-venv -- --venv-file /tmp/.toxenv tox virtualenv urllib3~=1.26.15
11:42:21 + options=' --venv-file '\''/tmp/.toxenv'\'' -- '\''tox'\'' '\''virtualenv'\'' '\''urllib3~=1.26.15'\'''
11:42:21 + eval set -- ' --venv-file '\''/tmp/.toxenv'\'' -- '\''tox'\'' '\''virtualenv'\'' '\''urllib3~=1.26.15'\'''
11:42:21 ++ set -- --venv-file /tmp/.toxenv -- tox virtualenv urllib3~=1.26.15
11:42:21 + true
11:42:21 + case $1 in
11:42:21 + venv_file=/tmp/.toxenv
11:42:21 + shift 2
11:42:21 + true
11:42:21 + case $1 in
11:42:21 + shift
11:42:21 + break
11:42:21 + case $python in
11:42:21 + local pkg_list=
11:42:21 + [[ -d /opt/pyenv ]]
11:42:21 + echo 'Setup pyenv:'
11:42:21 Setup pyenv:
11:42:21 + export PYENV_ROOT=/opt/pyenv
11:42:21 + PYENV_ROOT=/opt/pyenv
11:42:21 + export PATH=/opt/pyenv/bin:/home/jenkins/.local/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/opt/puppetlabs/bin
11:42:21 + PATH=/opt/pyenv/bin:/home/jenkins/.local/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/opt/puppetlabs/bin
11:42:21 + pyenv versions
11:42:21 system
11:42:21 3.8.20
11:42:21 3.9.20
11:42:21 3.10.15
11:42:21 * 3.11.10 (set by /w/workspace/transportpce-tox-verify-transportpce-master/.python-version)
11:42:21 + command -v pyenv
11:42:21 ++ pyenv init - --no-rehash
11:42:21 + eval 'PATH="$(bash --norc -ec '\''IFS=:; paths=($PATH);
11:42:21 for i in ${!paths[@]}; do
11:42:21 if [[ ${paths[i]} == "'\'''\''/opt/pyenv/shims'\'''\''" ]]; then unset '\''\'\'''\''paths[i]'\''\'\'''\'';
11:42:21 fi; done;
11:42:21 echo "${paths[*]}"'\'')"
11:42:21 export PATH="/opt/pyenv/shims:${PATH}"
11:42:21 export PYENV_SHELL=bash
11:42:21 source '\''/opt/pyenv/libexec/../completions/pyenv.bash'\''
11:42:21 pyenv() {
11:42:21 local command
11:42:21 command="${1:-}"
11:42:21 if [ "$#" -gt 0 ]; then
11:42:21 shift
11:42:21 fi
11:42:21 
11:42:21 case "$command" in
11:42:21 rehash|shell)
11:42:21 eval "$(pyenv "sh-$command" "$@")"
11:42:21 ;;
11:42:21 *)
11:42:21 command pyenv "$command" "$@"
11:42:21 ;;
11:42:21 esac
11:42:21 }'
11:42:21 +++ bash --norc -ec 'IFS=:; paths=($PATH);
11:42:21 for i in ${!paths[@]}; do
11:42:21 if [[ ${paths[i]} == "/opt/pyenv/shims" ]]; then unset '\''paths[i]'\'';
11:42:21 fi; done;
11:42:21 echo "${paths[*]}"'
11:42:21 ++ PATH=/opt/pyenv/bin:/home/jenkins/.local/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/opt/puppetlabs/bin
11:42:21 ++ export PATH=/opt/pyenv/shims:/opt/pyenv/bin:/home/jenkins/.local/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/opt/puppetlabs/bin
11:42:21 ++ PATH=/opt/pyenv/shims:/opt/pyenv/bin:/home/jenkins/.local/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/opt/puppetlabs/bin
11:42:21 ++ export PYENV_SHELL=bash
11:42:21 ++ PYENV_SHELL=bash
11:42:21 ++ source /opt/pyenv/libexec/../completions/pyenv.bash
11:42:21 +++ complete -F _pyenv pyenv
11:42:21 ++ lf-pyver python3
11:42:21 ++ local py_version_xy=python3
11:42:21 ++ local py_version_xyz=
11:42:21 ++ pyenv versions
11:42:21 ++ local command
11:42:21 ++ command=versions
11:42:21 ++ '[' 1 -gt 0 ']'
11:42:21 ++ sed 's/^[ *]* //'
11:42:21 ++ shift
11:42:21 ++ case "$command" in
11:42:21 ++ command pyenv versions
11:42:21 ++ awk '{ print $1 }'
11:42:21 ++ grep -E '^[0-9.]*[0-9]$'
11:42:21 ++ [[ ! -s /tmp/.pyenv_versions ]]
11:42:21 +++ grep '^3' /tmp/.pyenv_versions
11:42:21 +++ sort -V
11:42:21 +++ tail -n 1
11:42:21 ++ py_version_xyz=3.11.10
11:42:21 ++ [[ -z 3.11.10 ]]
11:42:21 ++ echo 3.11.10
11:42:21 ++ return 0
11:42:21 + pyenv local 3.11.10
11:42:21 + local command
11:42:21 + command=local
11:42:21 + '[' 2 -gt 0 ']'
11:42:21 + shift
11:42:21 + case "$command" in
11:42:21 + command pyenv local 3.11.10
11:42:21 + for arg in "$@"
11:42:21 + case $arg in
11:42:21 + pkg_list+='tox '
11:42:21 + for arg in "$@"
11:42:21 + case $arg in
11:42:21 + pkg_list+='virtualenv '
11:42:21 + for arg in "$@"
11:42:21 + case $arg in
11:42:21 + pkg_list+='urllib3~=1.26.15 '
11:42:21 + [[ -f /tmp/.toxenv ]]
11:42:21 ++ cat /tmp/.toxenv
11:42:21 + lf_venv=/tmp/venv-miFu
11:42:21 + echo 'lf-activate-venv(): INFO: Reuse venv:/tmp/venv-miFu from' file:/tmp/.toxenv
11:42:21 lf-activate-venv(): INFO: Reuse venv:/tmp/venv-miFu from file:/tmp/.toxenv
11:42:21 + echo 'lf-activate-venv(): INFO: Installing base packages (pip, setuptools, virtualenv)'
11:42:21 lf-activate-venv(): INFO: Installing base packages (pip, setuptools, virtualenv)
11:42:21 + local 'pip_opts=--upgrade --quiet'
11:42:21 + pip_opts='--upgrade --quiet --trusted-host pypi.org'
11:42:21 + pip_opts='--upgrade --quiet --trusted-host pypi.org --trusted-host files.pythonhosted.org'
11:42:21 + pip_opts='--upgrade --quiet --trusted-host pypi.org --trusted-host files.pythonhosted.org --trusted-host pypi.python.org'
11:42:21 + [[ -n '' ]]
11:42:21 + [[ -n '' ]]
11:42:21 + echo 'lf-activate-venv(): INFO: Attempting to install with network-safe options...'
11:42:21 lf-activate-venv(): INFO: Attempting to install with network-safe options...
11:42:21 + /tmp/venv-miFu/bin/python3 -m pip install --upgrade --quiet --trusted-host pypi.org --trusted-host files.pythonhosted.org --trusted-host pypi.python.org pip 'setuptools<66' virtualenv
11:42:22 + echo 'lf-activate-venv(): INFO: Base packages installed successfully'
11:42:22 lf-activate-venv(): INFO: Base packages installed successfully
11:42:22 + [[ -z tox virtualenv urllib3~=1.26.15 ]]
11:42:22 + echo 'lf-activate-venv(): INFO: Installing additional packages: tox virtualenv urllib3~=1.26.15 '
11:42:22 lf-activate-venv(): INFO: Installing additional packages: tox virtualenv urllib3~=1.26.15
11:42:22 + /tmp/venv-miFu/bin/python3 -m pip install --upgrade --quiet --trusted-host pypi.org --trusted-host files.pythonhosted.org --trusted-host pypi.python.org --upgrade-strategy eager tox virtualenv urllib3~=1.26.15
11:42:24 + type python3
11:42:24 + true
11:42:24 + echo 'lf-activate-venv(): INFO: Adding /tmp/venv-miFu/bin to PATH'
11:42:24 lf-activate-venv(): INFO: Adding /tmp/venv-miFu/bin to PATH
11:42:24 + PATH=/tmp/venv-miFu/bin:/opt/pyenv/shims:/opt/pyenv/bin:/home/jenkins/.local/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/opt/puppetlabs/bin
11:42:24 + return 0
11:42:24 + [[ -d /opt/pyenv ]]
11:42:24 + echo '---> Setting up pyenv'
11:42:24 ---> Setting up pyenv
11:42:24 + export PYENV_ROOT=/opt/pyenv
11:42:24 + PYENV_ROOT=/opt/pyenv
11:42:24 + export PATH=/opt/pyenv/bin:/tmp/venv-miFu/bin:/opt/pyenv/shims:/opt/pyenv/bin:/home/jenkins/.local/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/opt/puppetlabs/bin
11:42:24 + PATH=/opt/pyenv/bin:/tmp/venv-miFu/bin:/opt/pyenv/shims:/opt/pyenv/bin:/home/jenkins/.local/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/opt/puppetlabs/bin
11:42:24 ++ pwd
11:42:24 + PYTHONPATH=/w/workspace/transportpce-tox-verify-transportpce-master
11:42:24 + export PYTHONPATH
11:42:24 + export TOX_TESTENV_PASSENV=PYTHONPATH
11:42:24 + TOX_TESTENV_PASSENV=PYTHONPATH
11:42:24 + tox --version
11:42:24 4.47.3 from /tmp/venv-miFu/lib/python3.11/site-packages/tox/__init__.py
11:42:24 + PARALLEL=True
11:42:24 + TOX_OPTIONS_LIST=
11:42:24 + [[ -n '' ]]
11:42:24 + case ${PARALLEL,,} in
11:42:24 + TOX_OPTIONS_LIST=' --parallel auto --parallel-live'
11:42:24 + tox --parallel auto --parallel-live
11:42:24 + tee -a /w/workspace/transportpce-tox-verify-transportpce-master/archives/tox/tox.log
11:42:25 docs-linkcheck: install_deps> python -I -m pip install -r docs/requirements.txt
11:42:25 checkbashisms: freeze> python -m pip freeze --all
11:42:25 docs: install_deps> python -I -m pip install -r docs/requirements.txt
11:42:25 buildcontroller: install_deps> python -I -m pip install 'setuptools>=7.0' -r /w/workspace/transportpce-tox-verify-transportpce-master/tests/requirements.txt -r /w/workspace/transportpce-tox-verify-transportpce-master/tests/test-requirements.txt
11:42:26 checkbashisms: pip==26.0.1,setuptools==82.0.0
11:42:26 checkbashisms: commands[0] /w/workspace/transportpce-tox-verify-transportpce-master/tests> ./fixCIcentOS8reposMirrors.sh
11:42:26 checkbashisms: commands[1] /w/workspace/transportpce-tox-verify-transportpce-master/tests> sh -c 'command checkbashisms>/dev/null || sudo yum install -y devscripts-checkbashisms || sudo yum install -y devscripts-minimal || sudo yum install -y devscripts || sudo yum install -y https://archives.fedoraproject.org/pub/archive/fedora/linux/releases/31/Everything/x86_64/os/Packages/d/devscripts-checkbashisms-2.19.6-2.fc31.x86_64.rpm || (echo "checkbashisms command not found - please install it (e.g. sudo apt-get install devscripts | yum install devscripts-minimal )" >&2 && exit 1)'
11:42:26 checkbashisms: commands[2] /w/workspace/transportpce-tox-verify-transportpce-master/tests> find . -not -path '*/\.*' -name '*.sh' -exec checkbashisms -f '{}' +
11:42:27 checkbashisms: OK ✔ in 3.14 seconds
11:42:27 pre-commit: install_deps> python -I -m pip install pre-commit
11:42:30 pre-commit: freeze> python -m pip freeze --all
11:42:30 pre-commit: cfgv==3.5.0,distlib==0.4.0,filelock==3.25.0,identify==2.6.17,nodeenv==1.10.0,pip==26.0.1,platformdirs==4.9.2,pre_commit==4.5.1,python-discovery==1.1.0,PyYAML==6.0.3,setuptools==82.0.0,virtualenv==21.1.0
11:42:30 pre-commit: commands[0] /w/workspace/transportpce-tox-verify-transportpce-master/tests> ./fixCIcentOS8reposMirrors.sh
11:42:30 pre-commit: commands[1] /w/workspace/transportpce-tox-verify-transportpce-master/tests> sh -c 'which cpan || sudo yum install -y perl-CPAN || (echo "cpan command not found - please install it (e.g. sudo apt-get install perl-modules | yum install perl-CPAN )" >&2 && exit 1)'
11:42:30 /usr/bin/cpan
11:42:30 pre-commit: commands[2] /w/workspace/transportpce-tox-verify-transportpce-master/tests> pre-commit run --all-files --show-diff-on-failure
11:42:30 [WARNING] hook id `remove-tabs` uses deprecated stage names (commit) which will be removed in a future version. run: `pre-commit migrate-config` to automatically fix this.
11:42:30 [WARNING] hook id `perltidy` uses deprecated stage names (commit) which will be removed in a future version. run: `pre-commit migrate-config` to automatically fix this.
11:42:30 [INFO] Initializing environment for https://github.com/pre-commit/pre-commit-hooks.
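The tox-run.sh trace further up maps the injected `PARALLEL=True` variable onto tox's parallel flags by lower-casing it in a `case` statement. A minimal sketch of that mapping, covering only the `true` branch the trace exercises (the real script uses bash's `${PARALLEL,,}` expansion; `tr` is used here so the sketch stays POSIX):

```shell
#!/bin/sh
# Lower-case $PARALLEL and turn it into tox CLI options, mirroring the
# 'case ${PARALLEL,,} in' step of tox-run.sh.
PARALLEL=True
TOX_OPTIONS_LIST=""
case "$(printf '%s' "$PARALLEL" | tr '[:upper:]' '[:lower:]')" in
    true) TOX_OPTIONS_LIST=" --parallel auto --parallel-live" ;;
esac
echo "tox$TOX_OPTIONS_LIST"
# prints: tox --parallel auto --parallel-live
```

With any other value of `PARALLEL`, `TOX_OPTIONS_LIST` stays empty and tox runs its environments sequentially.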
11:42:31 [WARNING] repo `https://github.com/pre-commit/pre-commit-hooks` uses deprecated stage names (commit, push) which will be removed in a future version. Hint: often `pre-commit autoupdate --repo https://github.com/pre-commit/pre-commit-hooks` will fix this. if it does not -- consider reporting an issue to that repo.
11:42:31 [INFO] Initializing environment for https://github.com/jorisroovers/gitlint.
11:42:31 [INFO] Initializing environment for https://github.com/jorisroovers/gitlint:./gitlint-core[trusted-deps].
11:42:31 [INFO] Initializing environment for https://github.com/Lucas-C/pre-commit-hooks.
11:42:32 [INFO] Initializing environment for https://github.com/pre-commit/mirrors-autopep8.
11:42:32 buildcontroller: freeze> python -m pip freeze --all
11:42:32 [INFO] Initializing environment for https://github.com/perltidy/perltidy.
11:42:32 buildcontroller: bcrypt==5.0.0,certifi==2026.2.25,cffi==2.0.0,charset-normalizer==3.4.4,cryptography==46.0.5,dict2xml==1.7.8,idna==3.11,iniconfig==2.3.0,invoke==2.2.1,lxml==6.0.2,netconf-client==3.5.0,packaging==26.0,paramiko==4.0.0,pip==26.0.1,pluggy==1.6.0,psutil==7.2.2,pycparser==3.0,Pygments==2.19.2,PyNaCl==1.6.2,pytest==9.0.2,requests==2.32.5,setuptools==82.0.0,urllib3==2.6.3
11:42:32 buildcontroller: commands[0] /w/workspace/transportpce-tox-verify-transportpce-master/tests> ./build_controller.sh
11:42:32 + update-java-alternatives -l
11:42:32 java-1.11.0-openjdk-amd64 1111 /usr/lib/jvm/java-1.11.0-openjdk-amd64
11:42:32 java-1.17.0-openjdk-amd64 1711 /usr/lib/jvm/java-1.17.0-openjdk-amd64
11:42:32 java-1.21.0-openjdk-amd64 2111 /usr/lib/jvm/java-1.21.0-openjdk-amd64
11:42:32 + sudo update-java-alternatives -s java-1.21.0-openjdk-amd64
11:42:32 update-alternatives: error: no alternatives for jaotc
11:42:32 update-alternatives: error: no alternatives for rmic
11:42:32 + java -version
11:42:32 + sed -n ;s/.* version "\(.*\)\.\(.*\)\..*".*$/\1/p;
11:42:32 + JAVA_VER=21
11:42:32 + echo 21
11:42:32 21
11:42:32 + javac -version
11:42:32 + sed -n ;s/javac \(.*\)\.\(.*\)\..*.*$/\1/p;
11:42:33 21
11:42:33 + JAVAC_VER=21
11:42:33 + echo 21
11:42:33 + [ 21 -ge 21 ]
11:42:33 + [ 21 -ge 21 ]
11:42:33 + echo ok, java is 21 or newer
11:42:33 + wget -nv https://dlcdn.apache.org/maven/maven-3/3.9.12/binaries/apache-maven-3.9.12-bin.tar.gz -P /tmp
11:42:33 ok, java is 21 or newer
11:42:33 [INFO] Installing environment for https://github.com/pre-commit/pre-commit-hooks.
11:42:33 [INFO] Once installed this environment will be reused.
11:42:33 [INFO] This may take a few minutes...
11:42:34 2026-03-05 11:42:34 URL:https://dlcdn.apache.org/maven/maven-3/3.9.12/binaries/apache-maven-3.9.12-bin.tar.gz [9233336/9233336] -> "/tmp/apache-maven-3.9.12-bin.tar.gz" [1]
11:42:34 + sudo mkdir -p /opt
11:42:34 + sudo tar xf /tmp/apache-maven-3.9.12-bin.tar.gz -C /opt
11:42:34 + sudo ln -s /opt/apache-maven-3.9.12 /opt/maven
11:42:34 + sudo ln -s /opt/maven/bin/mvn /usr/bin/mvn
11:42:34 + mvn --version
11:42:34 Apache Maven 3.9.12 (848fbb4bf2d427b72bdb2471c22fced7ebd9a7a1)
11:42:34 Maven home: /opt/maven
11:42:34 Java version: 21.0.10, vendor: Ubuntu, runtime: /usr/lib/jvm/java-21-openjdk-amd64
11:42:34 Default locale: en, platform encoding: UTF-8
11:42:34 OS name: "linux", version: "5.15.0-171-generic", arch: "amd64", family: "unix"
11:42:34 NOTE: Picked up JDK_JAVA_OPTIONS: 
11:42:34 --add-opens=java.base/java.io=ALL-UNNAMED
11:42:34 --add-opens=java.base/java.lang=ALL-UNNAMED
11:42:34 --add-opens=java.base/java.lang.invoke=ALL-UNNAMED
11:42:34 --add-opens=java.base/java.lang.reflect=ALL-UNNAMED
11:42:34 --add-opens=java.base/java.net=ALL-UNNAMED
11:42:34 --add-opens=java.base/java.nio=ALL-UNNAMED
11:42:34 --add-opens=java.base/java.nio.charset=ALL-UNNAMED
11:42:34 --add-opens=java.base/java.nio.file=ALL-UNNAMED
11:42:34 --add-opens=java.base/java.util=ALL-UNNAMED
11:42:34 --add-opens=java.base/java.util.jar=ALL-UNNAMED
11:42:34 --add-opens=java.base/java.util.stream=ALL-UNNAMED
11:42:34 --add-opens=java.base/java.util.zip=ALL-UNNAMED
11:42:34 --add-opens java.base/sun.nio.ch=ALL-UNNAMED
11:42:34 --add-opens java.base/sun.nio.fs=ALL-UNNAMED
11:42:34 -Xlog:disable
11:42:37 [INFO] Installing environment for https://github.com/Lucas-C/pre-commit-hooks.
11:42:37 [INFO] Once installed this environment will be reused.
11:42:37 [INFO] This may take a few minutes...
11:42:44 [INFO] Installing environment for https://github.com/pre-commit/mirrors-autopep8.
11:42:44 [INFO] Once installed this environment will be reused.
11:42:44 [INFO] This may take a few minutes...
11:42:49 [INFO] Installing environment for https://github.com/perltidy/perltidy.
11:42:49 [INFO] Once installed this environment will be reused.
11:42:49 [INFO] This may take a few minutes...
11:42:50 docs: freeze> python -m pip freeze --all
11:42:51 docs-linkcheck: freeze> python -m pip freeze --all
11:42:51 docs: alabaster==1.0.0,attrs==25.4.0,babel==2.18.0,blockdiag==3.0.0,certifi==2026.2.25,charset-normalizer==3.4.4,contourpy==1.3.3,cycler==0.12.1,docutils==0.21.2,fonttools==4.61.1,funcparserlib==2.0.0a0,future==1.0.0,idna==3.11,imagesize==2.0.0,Jinja2==3.1.6,jsonschema==3.2.0,kiwisolver==1.4.9,lfdocs_conf==0.10.0,MarkupSafe==3.0.3,matplotlib==3.10.8,numpy==2.4.2,nwdiag==3.0.0,packaging==26.0,pillow==12.1.1,pip==26.0.1,Pygments==2.19.2,pyparsing==3.3.2,pyrsistent==0.20.0,python-dateutil==2.9.0.post0,PyYAML==6.0.3,requests==2.32.5,requests-file==1.5.1,roman-numerals==4.1.0,roman-numerals-py==4.1.0,seqdiag==3.0.0,setuptools==82.0.0,six==1.17.0,snowballstemmer==3.0.1,Sphinx==8.2.3,sphinx-bootstrap-theme==0.8.1,sphinx-data-viewer==0.1.5,sphinx-tabs==3.5.0,sphinx_rtd_theme==3.1.0,sphinxcontrib-applehelp==2.0.0,sphinxcontrib-blockdiag==3.0.0,sphinxcontrib-devhelp==2.0.0,sphinxcontrib-htmlhelp==2.1.0,sphinxcontrib-jquery==4.1,sphinxcontrib-jsmath==1.0.1,sphinxcontrib-needs==0.7.9,sphinxcontrib-nwdiag==2.0.0,sphinxcontrib-plantuml==0.31,sphinxcontrib-qthelp==2.0.0,sphinxcontrib-seqdiag==3.0.0,sphinxco
ntrib-serializinghtml==2.0.0,sphinxcontrib-swaggerdoc==0.1.7,urllib3==2.6.3,webcolors==25.10.0 11:42:51 docs: commands[0] /w/workspace/transportpce-tox-verify-transportpce-master/tests> sphinx-build -q -W --keep-going -b html -n -d /w/workspace/transportpce-tox-verify-transportpce-master/.tox/docs/tmp/doctrees ../docs/ /w/workspace/transportpce-tox-verify-transportpce-master/docs/_build/html 11:42:51 docs-linkcheck: alabaster==1.0.0,attrs==25.4.0,babel==2.18.0,blockdiag==3.0.0,certifi==2026.2.25,charset-normalizer==3.4.4,contourpy==1.3.3,cycler==0.12.1,docutils==0.21.2,fonttools==4.61.1,funcparserlib==2.0.0a0,future==1.0.0,idna==3.11,imagesize==2.0.0,Jinja2==3.1.6,jsonschema==3.2.0,kiwisolver==1.4.9,lfdocs_conf==0.10.0,MarkupSafe==3.0.3,matplotlib==3.10.8,numpy==2.4.2,nwdiag==3.0.0,packaging==26.0,pillow==12.1.1,pip==26.0.1,Pygments==2.19.2,pyparsing==3.3.2,pyrsistent==0.20.0,python-dateutil==2.9.0.post0,PyYAML==6.0.3,requests==2.32.5,requests-file==1.5.1,roman-numerals==4.1.0,roman-numerals-py==4.1.0,seqdiag==3.0.0,setuptools==82.0.0,six==1.17.0,snowballstemmer==3.0.1,Sphinx==8.2.3,sphinx-bootstrap-theme==0.8.1,sphinx-data-viewer==0.1.5,sphinx-tabs==3.5.0,sphinx_rtd_theme==3.1.0,sphinxcontrib-applehelp==2.0.0,sphinxcontrib-blockdiag==3.0.0,sphinxcontrib-devhelp==2.0.0,sphinxcontrib-htmlhelp==2.1.0,sphinxcontrib-jquery==4.1,sphinxcontrib-jsmath==1.0.1,sphinxcontrib-needs==0.7.9,sphinxcontrib-nwdiag==2.0.0,sphinxcontrib-plantuml==0.31,sphinxcontrib-qthelp==2.0.0,sphinxcontrib-seqdiag==3.0.0,sphinxcontrib-serializinghtml==2.0.0,sphinxcontrib-swaggerdoc==0.1.7,urllib3==2.6.3,webcolors==25.10.0 11:42:51 docs-linkcheck: commands[0] /w/workspace/transportpce-tox-verify-transportpce-master/tests> sphinx-build -q -b linkcheck -d /w/workspace/transportpce-tox-verify-transportpce-master/.tox/docs-linkcheck/tmp/doctrees ../docs/ /w/workspace/transportpce-tox-verify-transportpce-master/docs/_build/linkcheck 11:42:54 docs: OK ✔ in 29.58 seconds 11:42:54 pylint: install_deps> 
python -I -m pip install 'pylint>=2.6.0' 11:42:57 docs-linkcheck: OK ✔ in 32.15 seconds 11:42:57 pylint: freeze> python -m pip freeze --all 11:42:58 pylint: astroid==4.0.4,dill==0.4.1,isort==8.0.1,mccabe==0.7.0,pip==26.0.1,platformdirs==4.9.2,pylint==4.0.5,setuptools==82.0.0,tomlkit==0.14.0 11:42:58 pylint: commands[0] /w/workspace/transportpce-tox-verify-transportpce-master/tests> find transportpce_tests/ -name '*.py' -exec pylint --fail-under=10 --max-line-length=120 --disable=missing-docstring,import-error --disable=fixme --disable=duplicate-code '--module-rgx=([a-z0-9_]+$)|([0-9.]{1,30}$)' '--method-rgx=(([a-z_][a-zA-Z0-9_]{2,})|(_[a-z0-9_]*)|(__[a-zA-Z][a-zA-Z0-9_]+__))$' '--variable-rgx=[a-zA-Z_][a-zA-Z0-9_]{1,30}$' '{}' + 11:43:01 trim trailing whitespace.................................................Passed 11:43:02 Tabs remover.............................................................Passed 11:43:02 autopep8.................................................................Passed 11:43:08 perltidy.................................................................Passed 11:43:08 pre-commit: commands[3] /w/workspace/transportpce-tox-verify-transportpce-master/tests> pre-commit run gitlint-ci --hook-stage manual 11:43:09 [WARNING] hook id `remove-tabs` uses deprecated stage names (commit) which will be removed in a future version. run: `pre-commit migrate-config` to automatically fix this. 11:43:09 [WARNING] hook id `perltidy` uses deprecated stage names (commit) which will be removed in a future version. run: `pre-commit migrate-config` to automatically fix this. 11:43:09 [INFO] Installing environment for https://github.com/jorisroovers/gitlint. 11:43:09 [INFO] Once installed this environment will be reused. 11:43:09 [INFO] This may take a few minutes... 
11:43:16 gitlint..................................................................Passed
11:43:24 
11:43:24 ------------------------------------
11:43:24 Your code has been rated at 10.00/10
11:43:24 
11:44:12 pre-commit: OK ✔ in 49.3 seconds
11:44:12 pylint: OK ✔ in 31.56 seconds
11:44:12 buildcontroller: OK ✔ in 1 minute 46.6 seconds
11:44:12 build_karaf_tests71: install_deps> python -I -m pip install 'setuptools>=7.0' -r /w/workspace/transportpce-tox-verify-transportpce-master/tests/requirements.txt -r /w/workspace/transportpce-tox-verify-transportpce-master/tests/test-requirements.txt
11:44:12 build_karaf_tests121: install_deps> python -I -m pip install 'setuptools>=7.0' -r /w/workspace/transportpce-tox-verify-transportpce-master/tests/requirements.txt -r /w/workspace/transportpce-tox-verify-transportpce-master/tests/test-requirements.txt
11:44:12 build_karaf_tests221: install_deps> python -I -m pip install 'setuptools>=7.0' -r /w/workspace/transportpce-tox-verify-transportpce-master/tests/requirements.txt -r /w/workspace/transportpce-tox-verify-transportpce-master/tests/test-requirements.txt
11:44:12 build_karaf_tests200: install_deps> python -I -m pip install 'setuptools>=7.0' -r /w/workspace/transportpce-tox-verify-transportpce-master/tests/requirements.txt -r /w/workspace/transportpce-tox-verify-transportpce-master/tests/test-requirements.txt
11:44:18 build_karaf_tests200: freeze> python -m pip freeze --all
11:44:18 build_karaf_tests221: freeze> python -m pip freeze --all
11:44:18 build_karaf_tests121: freeze> python -m pip freeze --all
11:44:18 build_karaf_tests71: freeze> python -m pip freeze --all
11:44:18 build_karaf_tests200: bcrypt==5.0.0,certifi==2026.2.25,cffi==2.0.0,charset-normalizer==3.4.4,cryptography==46.0.5,dict2xml==1.7.8,idna==3.11,iniconfig==2.3.0,invoke==2.2.1,lxml==6.0.2,netconf-client==3.5.0,packaging==26.0,paramiko==4.0.0,pip==26.0.1,pluggy==1.6.0,psutil==7.2.2,pycparser==3.0,Pygments==2.19.2,PyNaCl==1.6.2,pytest==9.0.2,requests==2.32.5,setuptools==82.0.0,urllib3==2.6.3
11:44:18 build_karaf_tests200: commands[0] /w/workspace/transportpce-tox-verify-transportpce-master/tests> ./build_karaf_for_tests.sh
11:44:18 build karaf in karafoc200 with ./karafoc200.env
11:44:18 build_karaf_tests221: bcrypt==5.0.0,certifi==2026.2.25,cffi==2.0.0,charset-normalizer==3.4.4,cryptography==46.0.5,dict2xml==1.7.8,idna==3.11,iniconfig==2.3.0,invoke==2.2.1,lxml==6.0.2,netconf-client==3.5.0,packaging==26.0,paramiko==4.0.0,pip==26.0.1,pluggy==1.6.0,psutil==7.2.2,pycparser==3.0,Pygments==2.19.2,PyNaCl==1.6.2,pytest==9.0.2,requests==2.32.5,setuptools==82.0.0,urllib3==2.6.3
11:44:18 build_karaf_tests221: commands[0] /w/workspace/transportpce-tox-verify-transportpce-master/tests> ./build_karaf_for_tests.sh
11:44:18 build karaf in karaf221 with ./karaf221.env
11:44:18 build_karaf_tests121: bcrypt==5.0.0,certifi==2026.2.25,cffi==2.0.0,charset-normalizer==3.4.4,cryptography==46.0.5,dict2xml==1.7.8,idna==3.11,iniconfig==2.3.0,invoke==2.2.1,lxml==6.0.2,netconf-client==3.5.0,packaging==26.0,paramiko==4.0.0,pip==26.0.1,pluggy==1.6.0,psutil==7.2.2,pycparser==3.0,Pygments==2.19.2,PyNaCl==1.6.2,pytest==9.0.2,requests==2.32.5,setuptools==82.0.0,urllib3==2.6.3
11:44:18 build_karaf_tests121: commands[0] /w/workspace/transportpce-tox-verify-transportpce-master/tests> ./build_karaf_for_tests.sh
11:44:18 build karaf in karaf121 with ./karaf121.env
11:44:18 build_karaf_tests71: bcrypt==5.0.0,certifi==2026.2.25,cffi==2.0.0,charset-normalizer==3.4.4,cryptography==46.0.5,dict2xml==1.7.8,idna==3.11,iniconfig==2.3.0,invoke==2.2.1,lxml==6.0.2,netconf-client==3.5.0,packaging==26.0,paramiko==4.0.0,pip==26.0.1,pluggy==1.6.0,psutil==7.2.2,pycparser==3.0,Pygments==2.19.2,PyNaCl==1.6.2,pytest==9.0.2,requests==2.32.5,setuptools==82.0.0,urllib3==2.6.3
11:44:18 build_karaf_tests71: commands[0] /w/workspace/transportpce-tox-verify-transportpce-master/tests> ./build_karaf_for_tests.sh
11:44:18 build karaf in karaf71 with ./karaf71.env
11:44:18 NOTE: Picked up JDK_JAVA_OPTIONS:
11:44:18 --add-opens=java.base/java.io=ALL-UNNAMED
11:44:18 --add-opens=java.base/java.lang=ALL-UNNAMED
11:44:18 --add-opens=java.base/java.lang.invoke=ALL-UNNAMED
11:44:18 --add-opens=java.base/java.lang.reflect=ALL-UNNAMED
11:44:18 --add-opens=java.base/java.net=ALL-UNNAMED
11:44:18 --add-opens=java.base/java.nio=ALL-UNNAMED
11:44:18 --add-opens=java.base/java.nio.charset=ALL-UNNAMED
11:44:18 --add-opens=java.base/java.nio.file=ALL-UNNAMED
11:44:18 --add-opens=java.base/java.util=ALL-UNNAMED
11:44:18 --add-opens=java.base/java.util.jar=ALL-UNNAMED
11:44:18 --add-opens=java.base/java.util.stream=ALL-UNNAMED
11:44:18 --add-opens=java.base/java.util.zip=ALL-UNNAMED
11:44:18 --add-opens java.base/sun.nio.ch=ALL-UNNAMED
11:44:18 --add-opens java.base/sun.nio.fs=ALL-UNNAMED
11:44:18 -Xlog:disable
11:44:18 NOTE: Picked up JDK_JAVA_OPTIONS:
11:44:18 --add-opens=java.base/java.io=ALL-UNNAMED
11:44:18 --add-opens=java.base/java.lang=ALL-UNNAMED
11:44:18 --add-opens=java.base/java.lang.invoke=ALL-UNNAMED
11:44:18 --add-opens=java.base/java.lang.reflect=ALL-UNNAMED
11:44:18 --add-opens=java.base/java.net=ALL-UNNAMED
11:44:18 --add-opens=java.base/java.nio=ALL-UNNAMED
11:44:18 --add-opens=java.base/java.nio.charset=ALL-UNNAMED
11:44:18 --add-opens=java.base/java.nio.file=ALL-UNNAMED
11:44:18 --add-opens=java.base/java.util=ALL-UNNAMED
11:44:18 --add-opens=java.base/java.util.jar=ALL-UNNAMED
11:44:18 --add-opens=java.base/java.util.stream=ALL-UNNAMED
11:44:18 --add-opens=java.base/java.util.zip=ALL-UNNAMED
11:44:18 --add-opens java.base/sun.nio.ch=ALL-UNNAMED
11:44:18 --add-opens java.base/sun.nio.fs=ALL-UNNAMED
11:44:18 -Xlog:disable
11:44:18 NOTE: Picked up JDK_JAVA_OPTIONS:
11:44:18 --add-opens=java.base/java.io=ALL-UNNAMED
11:44:18 --add-opens=java.base/java.lang=ALL-UNNAMED
11:44:18 --add-opens=java.base/java.lang.invoke=ALL-UNNAMED
11:44:18 --add-opens=java.base/java.lang.reflect=ALL-UNNAMED
11:44:18 --add-opens=java.base/java.net=ALL-UNNAMED
11:44:18 --add-opens=java.base/java.nio=ALL-UNNAMED
11:44:18 --add-opens=java.base/java.nio.charset=ALL-UNNAMED
11:44:18 --add-opens=java.base/java.nio.file=ALL-UNNAMED
11:44:18 --add-opens=java.base/java.util=ALL-UNNAMED
11:44:18 --add-opens=java.base/java.util.jar=ALL-UNNAMED
11:44:18 --add-opens=java.base/java.util.stream=ALL-UNNAMED
11:44:18 --add-opens=java.base/java.util.zip=ALL-UNNAMED
11:44:18 --add-opens java.base/sun.nio.ch=ALL-UNNAMED
11:44:18 --add-opens java.base/sun.nio.fs=ALL-UNNAMED
11:44:18 -Xlog:disable
11:44:18 NOTE: Picked up JDK_JAVA_OPTIONS:
11:44:18 --add-opens=java.base/java.io=ALL-UNNAMED
11:44:18 --add-opens=java.base/java.lang=ALL-UNNAMED
11:44:18 --add-opens=java.base/java.lang.invoke=ALL-UNNAMED
11:44:18 --add-opens=java.base/java.lang.reflect=ALL-UNNAMED
11:44:18 --add-opens=java.base/java.net=ALL-UNNAMED
11:44:18 --add-opens=java.base/java.nio=ALL-UNNAMED
11:44:18 --add-opens=java.base/java.nio.charset=ALL-UNNAMED
11:44:18 --add-opens=java.base/java.nio.file=ALL-UNNAMED
11:44:18 --add-opens=java.base/java.util=ALL-UNNAMED
11:44:18 --add-opens=java.base/java.util.jar=ALL-UNNAMED
11:44:18 --add-opens=java.base/java.util.stream=ALL-UNNAMED
11:44:18 --add-opens=java.base/java.util.zip=ALL-UNNAMED
11:44:18 --add-opens java.base/sun.nio.ch=ALL-UNNAMED
11:44:18 --add-opens java.base/sun.nio.fs=ALL-UNNAMED
11:44:18 -Xlog:disable
11:45:09 build_karaf_tests71: OK ✔ in 57.99 seconds
11:45:09 build_karaf_tests221: OK ✔ in 58 seconds
11:45:09 buildlighty: install_deps> python -I -m pip install 'setuptools>=7.0' -r /w/workspace/transportpce-tox-verify-transportpce-master/tests/requirements.txt -r /w/workspace/transportpce-tox-verify-transportpce-master/tests/test-requirements.txt
11:45:09 sims: install_deps> python -I -m pip install 'setuptools>=7.0' -r /w/workspace/transportpce-tox-verify-transportpce-master/tests/requirements.txt -r /w/workspace/transportpce-tox-verify-transportpce-master/tests/test-requirements.txt
11:45:10 build_karaf_tests200: OK ✔ in 58.72 seconds
11:45:10 build_karaf_tests121: OK ✔ in 58.74 seconds
11:45:10 testsPCE: install_deps> python -I -m pip install gnpy4tpce==2.4.7 'setuptools>=7.0' -r /w/workspace/transportpce-tox-verify-transportpce-master/tests/requirements.txt -r /w/workspace/transportpce-tox-verify-transportpce-master/tests/test-requirements.txt
11:45:16 sims: freeze> python -m pip freeze --all
11:45:16 buildlighty: freeze> python -m pip freeze --all
11:45:16 sims: bcrypt==5.0.0,certifi==2026.2.25,cffi==2.0.0,charset-normalizer==3.4.4,cryptography==46.0.5,dict2xml==1.7.8,idna==3.11,iniconfig==2.3.0,invoke==2.2.1,lxml==6.0.2,netconf-client==3.5.0,packaging==26.0,paramiko==4.0.0,pip==26.0.1,pluggy==1.6.0,psutil==7.2.2,pycparser==3.0,Pygments==2.19.2,PyNaCl==1.6.2,pytest==9.0.2,requests==2.32.5,setuptools==82.0.0,urllib3==2.6.3
11:45:16 sims: commands[0] /w/workspace/transportpce-tox-verify-transportpce-master/tests> ./install_lightynode.sh
11:45:16 Using lighynode version 22.1.0.7
11:45:16 Installing lightynode device to ./lightynode/lightynode-openroadm-device directory
11:45:16 buildlighty: bcrypt==5.0.0,certifi==2026.2.25,cffi==2.0.0,charset-normalizer==3.4.4,cryptography==46.0.5,dict2xml==1.7.8,idna==3.11,iniconfig==2.3.0,invoke==2.2.1,lxml==6.0.2,netconf-client==3.5.0,packaging==26.0,paramiko==4.0.0,pip==26.0.1,pluggy==1.6.0,psutil==7.2.2,pycparser==3.0,Pygments==2.19.2,PyNaCl==1.6.2,pytest==9.0.2,requests==2.32.5,setuptools==82.0.0,urllib3==2.6.3
11:45:16 buildlighty: commands[0] /w/workspace/transportpce-tox-verify-transportpce-master/lighty> ./build.sh
11:45:16 NOTE: Picked up JDK_JAVA_OPTIONS: --add-opens=java.base/java.lang=ALL-UNNAMED --add-opens=java.base/java.nio=ALL-UNNAMED
11:45:19 sims: OK ✔ in 10.6 seconds
11:45:19 tests71: install_deps> python -I -m pip install 'setuptools>=7.0' -r /w/workspace/transportpce-tox-verify-transportpce-master/tests/requirements.txt -r /w/workspace/transportpce-tox-verify-transportpce-master/tests/test-requirements.txt
11:45:26 tests71: freeze> python -m pip freeze --all
11:45:26 tests71: bcrypt==5.0.0,certifi==2026.2.25,cffi==2.0.0,charset-normalizer==3.4.4,cryptography==46.0.5,dict2xml==1.7.8,idna==3.11,iniconfig==2.3.0,invoke==2.2.1,lxml==6.0.2,netconf-client==3.5.0,packaging==26.0,paramiko==4.0.0,pip==26.0.1,pluggy==1.6.0,psutil==7.2.2,pycparser==3.0,Pygments==2.19.2,PyNaCl==1.6.2,pytest==9.0.2,requests==2.32.5,setuptools==82.0.0,urllib3==2.6.3
11:45:26 tests71: commands[0] /w/workspace/transportpce-tox-verify-transportpce-master/tests> ./launch_tests.sh 7.1
11:45:26 using environment variables from ./karaf71.env
11:45:26 pytest -q transportpce_tests/7.1/test01_portmapping.py
11:46:09 buildlighty: OK ✔ in 43.22 seconds
11:46:09 testsPCE: freeze> python -m pip freeze --all
11:46:09 testsPCE: bcrypt==5.0.0,certifi==2026.2.25,cffi==2.0.0,charset-normalizer==3.4.4,click==8.3.1,contourpy==1.3.3,cryptography==3.3.2,cycler==0.12.1,dict2xml==1.7.8,Flask==2.1.3,Flask-Injector==0.14.0,fonttools==4.61.1,gnpy4tpce==2.4.7,idna==3.11,iniconfig==2.3.0,injector==0.24.0,invoke==2.2.1,itsdangerous==2.2.0,Jinja2==3.1.6,kiwisolver==1.4.9,lxml==6.0.2,MarkupSafe==3.0.3,matplotlib==3.10.8,netconf-client==3.5.0,networkx==2.8.8,numpy==1.26.4,packaging==26.0,pandas==1.5.3,paramiko==4.0.0,pbr==5.11.1,pillow==12.1.1,pip==26.0.1,pluggy==1.6.0,psutil==7.2.2,pycparser==3.0,Pygments==2.19.2,PyNaCl==1.6.2,pyparsing==3.3.2,pytest==9.0.2,python-dateutil==2.9.0.post0,pytz==2026.1.post1,requests==2.32.5,scipy==1.17.1,setuptools==50.3.2,six==1.17.0,urllib3==2.6.3,Werkzeug==2.0.3,xlrd==1.2.0
11:46:09 testsPCE: commands[0] /w/workspace/transportpce-tox-verify-transportpce-master/tests> ./launch_tests.sh pce
11:46:09 pytest -q transportpce_tests/pce/test01_pce.py
11:46:10 ............ [100%]
11:46:23 12 passed in 56.92s
11:46:23 pytest -q transportpce_tests/7.1/test02_otn_renderer.py
11:46:58 ...................................................... [100%]
11:48:09 20 passed in 119.34s (0:01:59)
11:48:09 pytest -q transportpce_tests/pce/test02_pce_400G.py
11:48:10 .................................. [100%]
11:48:55 12 passed in 45.66s
11:48:55 pytest -q transportpce_tests/pce/test03_gnpy.py
11:48:56 ...... [100%]
11:49:09 62 passed in 164.80s (0:02:44)
11:49:09 pytest -q transportpce_tests/7.1/test03_renderer_or_modes.py
11:49:11 ........ [100%]
11:49:33 8 passed in 38.13s
11:49:33 pytest -q transportpce_tests/pce/test04_pce_bug_fix.py
11:49:41 ................ [100%]
11:50:11 3 passed in 37.76s
11:50:11 .testsPCE: OK ✔ in 5 minutes 1.78 seconds
11:50:12 tests_tapi: install_deps> python -I -m pip install 'setuptools>=7.0' -r /w/workspace/transportpce-tox-verify-transportpce-master/tests/requirements.txt -r /w/workspace/transportpce-tox-verify-transportpce-master/tests/test-requirements.txt
11:50:12 tests121: install_deps> python -I -m pip install 'setuptools>=7.0' -r /w/workspace/transportpce-tox-verify-transportpce-master/tests/requirements.txt -r /w/workspace/transportpce-tox-verify-transportpce-master/tests/test-requirements.txt
11:50:12 tests200: install_deps> python -I -m pip install 'setuptools>=7.0' -r /w/workspace/transportpce-tox-verify-transportpce-master/tests/requirements.txt -r /w/workspace/transportpce-tox-verify-transportpce-master/tests/test-requirements.txt
11:50:13 ...tests_tapi: freeze> python -m pip freeze --all
11:50:18 tests121: freeze> python -m pip freeze --all
11:50:18 tests200: freeze> python -m pip freeze --all
11:50:18 tests_tapi: bcrypt==5.0.0,certifi==2026.2.25,cffi==2.0.0,charset-normalizer==3.4.4,cryptography==46.0.5,dict2xml==1.7.8,idna==3.11,iniconfig==2.3.0,invoke==2.2.1,lxml==6.0.2,netconf-client==3.5.0,packaging==26.0,paramiko==4.0.0,pip==26.0.1,pluggy==1.6.0,psutil==7.2.2,pycparser==3.0,Pygments==2.19.2,PyNaCl==1.6.2,pytest==9.0.2,requests==2.32.5,setuptools==82.0.0,urllib3==2.6.3
11:50:18 tests_tapi: commands[0] /w/workspace/transportpce-tox-verify-transportpce-master/tests> ./launch_tests.sh tapi
11:50:18 using environment variables from ./karaf221.env
11:50:18 pytest -q transportpce_tests/tapi/test01_abstracted_topology.py
11:50:18 tests121: bcrypt==5.0.0,certifi==2026.2.25,cffi==2.0.0,charset-normalizer==3.4.4,cryptography==46.0.5,dict2xml==1.7.8,idna==3.11,iniconfig==2.3.0,invoke==2.2.1,lxml==6.0.2,netconf-client==3.5.0,packaging==26.0,paramiko==4.0.0,pip==26.0.1,pluggy==1.6.0,psutil==7.2.2,pycparser==3.0,Pygments==2.19.2,PyNaCl==1.6.2,pytest==9.0.2,requests==2.32.5,setuptools==82.0.0,urllib3==2.6.3
11:50:18 tests121: commands[0] /w/workspace/transportpce-tox-verify-transportpce-master/tests> ./launch_tests.sh 1.2.1
11:50:18 using environment variables from ./karaf121.env
11:50:18 pytest -q transportpce_tests/1.2.1/test01_portmapping.py
11:50:18 tests200: bcrypt==5.0.0,certifi==2026.2.25,cffi==2.0.0,charset-normalizer==3.4.4,cryptography==46.0.5,dict2xml==1.7.8,idna==3.11,iniconfig==2.3.0,invoke==2.2.1,lxml==6.0.2,netconf-client==3.5.0,packaging==26.0,paramiko==4.0.0,pip==26.0.1,pluggy==1.6.0,psutil==7.2.2,pycparser==3.0,Pygments==2.19.2,PyNaCl==1.6.2,pytest==9.0.2,requests==2.32.5,setuptools==82.0.0,urllib3==2.6.3
11:50:18 tests200: commands[0] /w/workspace/transportpce-tox-verify-transportpce-master/tests> ./launch_tests.sh oc200
11:50:18 using environment variables from ./karafoc200.env
11:50:18 pytest -q transportpce_tests/oc200/test01_portmapping.py
11:50:20 ................................. [100%]
11:51:29 48 passed in 140.31s (0:02:20)
11:51:29 pytest -q transportpce_tests/7.1/test04_renderer_regen_mode.py
11:51:32 ........ [100%]
11:51:41 10 passed in 82.56s (0:01:22)
11:51:41 pytest -q transportpce_tests/oc200/test02_topology.py
11:52:16 ........................................ [100%]
11:53:03 14 passed in 81.52s (0:01:21)
11:53:03 pytest -q transportpce_tests/oc200/test03_renderer.py
11:53:04 [100%]
11:53:04 22 passed in 94.26s (0:01:34)
11:53:26 ................ [100%]
11:53:49 16 passed in 45.28s
11:54:25 .FFFFFFFFFFFFFFFFFFFF [100%]
11:54:50 =================================== FAILURES ===================================
11:54:50 ___________ TestTransportPCEPortmapping.test_02_rdm_device_connected ___________
11:54:50 
11:54:50 self = 
11:54:50 
11:54:50     def _new_conn(self) -> socket.socket:
11:54:50         """Establish a socket connection and set nodelay settings on it.
11:54:50 
11:54:50         :return: New socket connection.
11:54:50         """
11:54:50         try:
11:54:50 >           sock = connection.create_connection(
11:54:50                 (self._dns_host, self.port),
11:54:50                 self.timeout,
11:54:50                 source_address=self.source_address,
11:54:50                 socket_options=self.socket_options,
11:54:50             )
11:54:50 
11:54:50 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:204: 
11:54:50 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
11:54:50 ../.tox/tests121/lib/python3.11/site-packages/urllib3/util/connection.py:85: in create_connection
11:54:50     raise err
11:54:50 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
11:54:50 
11:54:50 address = ('localhost', 8191), timeout = 30, source_address = None
11:54:50 socket_options = [(6, 1, 1)]
11:54:50 
11:54:50     def create_connection(
11:54:50         address: tuple[str, int],
11:54:50         timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT,
11:54:50         source_address: tuple[str, int] | None = None,
11:54:50         socket_options: _TYPE_SOCKET_OPTIONS | None = None,
11:54:50     ) -> socket.socket:
11:54:50         """Connect to *address* and return the socket object.
11:54:50 
11:54:50         Convenience function. Connect to *address* (a 2-tuple ``(host,
11:54:50         port)``) and return the socket object. Passing the optional
11:54:50         *timeout* parameter will set the timeout on the socket instance
11:54:50         before attempting to connect. If no *timeout* is supplied, the
11:54:50         global default timeout setting returned by :func:`socket.getdefaulttimeout`
11:54:50         is used. If *source_address* is set it must be a tuple of (host, port)
11:54:50         for the socket to bind as a source address before making the connection.
11:54:50         An host of '' or port 0 tells the OS to use the default.
11:54:50         """
11:54:50 
11:54:50         host, port = address
11:54:50         if host.startswith("["):
11:54:50             host = host.strip("[]")
11:54:50         err = None
11:54:50 
11:54:50         # Using the value from allowed_gai_family() in the context of getaddrinfo lets
11:54:50         # us select whether to work with IPv4 DNS records, IPv6 records, or both.
11:54:50         # The original create_connection function always returns all records.
11:54:50         family = allowed_gai_family()
11:54:50 
11:54:50         try:
11:54:50             host.encode("idna")
11:54:50         except UnicodeError:
11:54:50             raise LocationParseError(f"'{host}', label empty or too long") from None
11:54:50 
11:54:50         for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM):
11:54:50             af, socktype, proto, canonname, sa = res
11:54:50             sock = None
11:54:50             try:
11:54:50                 sock = socket.socket(af, socktype, proto)
11:54:50 
11:54:50                 # If provided, set socket level options before connecting.
11:54:50                 _set_socket_options(sock, socket_options)
11:54:50 
11:54:50                 if timeout is not _DEFAULT_TIMEOUT:
11:54:50                     sock.settimeout(timeout)
11:54:50                 if source_address:
11:54:50                     sock.bind(source_address)
11:54:50 >               sock.connect(sa)
11:54:50 E               ConnectionRefusedError: [Errno 111] Connection refused
11:54:50 
11:54:50 ../.tox/tests121/lib/python3.11/site-packages/urllib3/util/connection.py:73: ConnectionRefusedError
11:54:50 
11:54:50 The above exception was the direct cause of the following exception:
11:54:50 
11:54:50 self = 
11:54:50 method = 'GET'
11:54:50 url = '/rests/data/network-topology:network-topology/topology=topology-netconf/node=ROADMA01?content=nonconfig'
11:54:50 body = None
11:54:50 headers = {'User-Agent': 'python-requests/2.32.5', 'Accept-Encoding': 'gzip, deflate', 'Accept': 'application/json', 'Connection': 'keep-alive', 'Content-Type': 'application/json', 'Authorization': 'Basic YWRtaW46YWRtaW4='}
11:54:50 retries = Retry(total=0, connect=None, read=False, redirect=None, status=None)
11:54:50 redirect = False, assert_same_host = False
11:54:50 timeout = Timeout(connect=30, read=30, total=None), pool_timeout = None
11:54:50 release_conn = False, chunked = False, body_pos = None, preload_content = False
11:54:50 decode_content = False, response_kw = {}
11:54:50 parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/rests/data/network-topology:network-topology/topology=topology-netconf/node=ROADMA01', query='content=nonconfig', fragment=None)
11:54:50 destination_scheme = None, conn = None, release_this_conn = True
11:54:50 http_tunnel_required = False, err = None, clean_exit = False
11:54:50 
11:54:50     def urlopen(  # type: ignore[override]
11:54:50         self,
11:54:50         method: str,
11:54:50         url: str,
11:54:50         body: _TYPE_BODY | None = None,
11:54:50         headers: typing.Mapping[str, str] | None = None,
11:54:50         retries: Retry | bool | int | None = None,
11:54:50         redirect: bool = True,
11:54:50         assert_same_host: bool = True,
11:54:50         timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT,
11:54:50         pool_timeout: int | None = None,
11:54:50         release_conn: bool | None = None,
11:54:50         chunked: bool = False,
11:54:50         body_pos: _TYPE_BODY_POSITION | None = None,
11:54:50         preload_content: bool = True,
11:54:50         decode_content: bool = True,
11:54:50         **response_kw: typing.Any,
11:54:50     ) -> BaseHTTPResponse:
11:54:50         """
11:54:50         Get a connection from the pool and perform an HTTP request. This is the
11:54:50         lowest level call for making a request, so you'll need to specify all
11:54:50         the raw details.
11:54:50 
11:54:50         .. note::
11:54:50 
11:54:50            More commonly, it's appropriate to use a convenience method
11:54:50            such as :meth:`request`.
11:54:50 
11:54:50         .. note::
11:54:50 
11:54:50            `release_conn` will only behave as expected if
11:54:50            `preload_content=False` because we want to make
11:54:50            `preload_content=False` the default behaviour someday soon without
11:54:50            breaking backwards compatibility.
11:54:50 
11:54:50         :param method:
11:54:50             HTTP request method (such as GET, POST, PUT, etc.)
11:54:50 
11:54:50         :param url:
11:54:50             The URL to perform the request on.
11:54:50 
11:54:50         :param body:
11:54:50             Data to send in the request body, either :class:`str`, :class:`bytes`,
11:54:50             an iterable of :class:`str`/:class:`bytes`, or a file-like object.
11:54:50 
11:54:50         :param headers:
11:54:50             Dictionary of custom headers to send, such as User-Agent,
11:54:50             If-None-Match, etc. If None, pool headers are used. If provided,
11:54:50             these headers completely replace any pool-specific headers.
11:54:50 
11:54:50         :param retries:
11:54:50             Configure the number of retries to allow before raising a
11:54:50             :class:`~urllib3.exceptions.MaxRetryError` exception.
11:54:50 
11:54:50             If ``None`` (default) will retry 3 times, see ``Retry.DEFAULT``. Pass a
11:54:50             :class:`~urllib3.util.retry.Retry` object for fine-grained control
11:54:50             over different types of retries.
11:54:50             Pass an integer number to retry connection errors that many times,
11:54:50             but no other types of errors. Pass zero to never retry.
11:54:50 
11:54:50             If ``False``, then retries are disabled and any exception is raised
11:54:50             immediately. Also, instead of raising a MaxRetryError on redirects,
11:54:50             the redirect response will be returned.
11:54:50 
11:54:50         :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int.
11:54:50 
11:54:50         :param redirect:
11:54:50             If True, automatically handle redirects (status codes 301, 302,
11:54:50             303, 307, 308). Each redirect counts as a retry. Disabling retries
11:54:50             will disable redirect, too.
11:54:50 
11:54:50         :param assert_same_host:
11:54:50             If ``True``, will make sure that the host of the pool requests is
11:54:50             consistent else will raise HostChangedError. When ``False``, you can
11:54:50             use the pool on an HTTP proxy and request foreign hosts.
11:54:50 
11:54:50         :param timeout:
11:54:50             If specified, overrides the default timeout for this one
11:54:50             request. It may be a float (in seconds) or an instance of
11:54:50             :class:`urllib3.util.Timeout`.
11:54:50 
11:54:50         :param pool_timeout:
11:54:50             If set and the pool is set to block=True, then this method will
11:54:50             block for ``pool_timeout`` seconds and raise EmptyPoolError if no
11:54:50             connection is available within the time period.
11:54:50 
11:54:50         :param bool preload_content:
11:54:50             If True, the response's body will be preloaded into memory.
11:54:50 
11:54:50         :param bool decode_content:
11:54:50             If True, will attempt to decode the body based on the
11:54:50             'content-encoding' header.
11:54:50 
11:54:50         :param release_conn:
11:54:50             If False, then the urlopen call will not release the connection
11:54:50             back into the pool once a response is received (but will release if
11:54:50             you read the entire contents of the response such as when
11:54:50             `preload_content=True`). This is useful if you're not preloading
11:54:50             the response's content immediately. You will need to call
11:54:50             ``r.release_conn()`` on the response ``r`` to return the connection
11:54:50             back into the pool. If None, it takes the value of ``preload_content``
11:54:50             which defaults to ``True``.
11:54:50 
11:54:50         :param bool chunked:
11:54:50             If True, urllib3 will send the body using chunked transfer
11:54:50             encoding. Otherwise, urllib3 will send the body using the standard
11:54:50             content-length form. Defaults to False.
11:54:50 
11:54:50         :param int body_pos:
11:54:50             Position to seek to in file-like body in the event of a retry or
11:54:50             redirect. Typically this won't need to be set because urllib3 will
11:54:50             auto-populate the value when needed.
11:54:50         """
11:54:50         parsed_url = parse_url(url)
11:54:50         destination_scheme = parsed_url.scheme
11:54:50 
11:54:50         if headers is None:
11:54:50             headers = self.headers
11:54:50 
11:54:50         if not isinstance(retries, Retry):
11:54:50             retries = Retry.from_int(retries, redirect=redirect, default=self.retries)
11:54:50 
11:54:50         if release_conn is None:
11:54:50             release_conn = preload_content
11:54:50 
11:54:50         # Check host
11:54:50         if assert_same_host and not self.is_same_host(url):
11:54:50             raise HostChangedError(self, url, retries)
11:54:50 
11:54:50         # Ensure that the URL we're connecting to is properly encoded
11:54:50         if url.startswith("/"):
11:54:50             url = to_str(_encode_target(url))
11:54:50         else:
11:54:50             url = to_str(parsed_url.url)
11:54:50 
11:54:50         conn = None
11:54:50 
11:54:50         # Track whether `conn` needs to be released before
11:54:50         # returning/raising/recursing. Update this variable if necessary, and
11:54:50         # leave `release_conn` constant throughout the function. That way, if
11:54:50         # the function recurses, the original value of `release_conn` will be
11:54:50         # passed down into the recursive call, and its value will be respected.
11:54:50         #
11:54:50         # See issue #651 [1] for details.
11:54:50         #
11:54:50         # [1]
11:54:50         release_this_conn = release_conn
11:54:50 
11:54:50         http_tunnel_required = connection_requires_http_tunnel(
11:54:50             self.proxy, self.proxy_config, destination_scheme
11:54:50         )
11:54:50 
11:54:50         # Merge the proxy headers. Only done when not using HTTP CONNECT. We
11:54:50         # have to copy the headers dict so we can safely change it without those
11:54:50         # changes being reflected in anyone else's copy.
11:54:50         if not http_tunnel_required:
11:54:50             headers = headers.copy()  # type: ignore[attr-defined]
11:54:50             headers.update(self.proxy_headers)  # type: ignore[union-attr]
11:54:50 
11:54:50         # Must keep the exception bound to a separate variable or else Python 3
11:54:50         # complains about UnboundLocalError.
11:54:50         err = None
11:54:50 
11:54:50         # Keep track of whether we cleanly exited the except block. This
11:54:50         # ensures we do proper cleanup in finally.
11:54:50         clean_exit = False
11:54:50 
11:54:50         # Rewind body position, if needed. Record current position
11:54:50         # for future rewinds in the event of a redirect/retry.
11:54:50         body_pos = set_file_position(body, body_pos)
11:54:50 
11:54:50         try:
11:54:50             # Request a connection from the queue.
11:54:50             timeout_obj = self._get_timeout(timeout)
11:54:50             conn = self._get_conn(timeout=pool_timeout)
11:54:50 
11:54:50             conn.timeout = timeout_obj.connect_timeout  # type: ignore[assignment]
11:54:50 
11:54:50             # Is this a closed/new connection that requires CONNECT tunnelling?
11:54:50             if self.proxy is not None and http_tunnel_required and conn.is_closed:
11:54:50                 try:
11:54:50                     self._prepare_proxy(conn)
11:54:50                 except (BaseSSLError, OSError, SocketTimeout) as e:
11:54:50                     self._raise_timeout(
11:54:50                         err=e, url=self.proxy.url, timeout_value=conn.timeout
11:54:50                     )
11:54:50                     raise
11:54:50 
11:54:50             # If we're going to release the connection in ``finally:``, then
11:54:50             # the response doesn't need to know about the connection. Otherwise
11:54:50             # it will also try to release it and we'll have a double-release
11:54:50             # mess.
11:54:50             response_conn = conn if not release_conn else None
11:54:50 
11:54:50             # Make the request on the HTTPConnection object
11:54:50 >           response = self._make_request(
11:54:50                 conn,
11:54:50                 method,
11:54:50                 url,
11:54:50                 timeout=timeout_obj,
11:54:50                 body=body,
11:54:50                 headers=headers,
11:54:50                 chunked=chunked,
11:54:50                 retries=retries,
11:54:50                 response_conn=response_conn,
11:54:50                 preload_content=preload_content,
11:54:50                 decode_content=decode_content,
11:54:50                 **response_kw,
11:54:50             )
11:54:50 
11:54:50 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connectionpool.py:787: 
11:54:50 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
11:54:50 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connectionpool.py:493: in _make_request
11:54:50     conn.request(
11:54:50 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:500: in request
11:54:50     self.endheaders()
11:54:50 /opt/pyenv/versions/3.11.10/lib/python3.11/http/client.py:1298: in endheaders
11:54:50     self._send_output(message_body, encode_chunked=encode_chunked)
11:54:50 /opt/pyenv/versions/3.11.10/lib/python3.11/http/client.py:1058: in _send_output
11:54:50     self.send(msg)
11:54:50 /opt/pyenv/versions/3.11.10/lib/python3.11/http/client.py:996: in send
11:54:50     self.connect()
11:54:50 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:331: in connect
11:54:50     self.sock = self._new_conn()
11:54:50                 ^^^^^^^^^^^^^^^^
11:54:50 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
11:54:50 
11:54:50 self = 
11:54:50 
11:54:50     def _new_conn(self) -> socket.socket:
11:54:50         """Establish a socket connection and set nodelay settings on it.
11:54:50 
11:54:50         :return: New socket connection.
11:54:50         """
11:54:50         try:
11:54:50             sock = connection.create_connection(
11:54:50                 (self._dns_host, self.port),
11:54:50                 self.timeout,
11:54:50                 source_address=self.source_address,
11:54:50                 socket_options=self.socket_options,
11:54:50             )
11:54:50         except socket.gaierror as e:
11:54:50             raise NameResolutionError(self.host, self, e) from e
11:54:50         except SocketTimeout as e:
11:54:50             raise ConnectTimeoutError(
11:54:50                 self,
11:54:50                 f"Connection to {self.host} timed out. (connect timeout={self.timeout})",
11:54:50             ) from e
11:54:50 
11:54:50         except OSError as e:
11:54:50 >           raise NewConnectionError(
11:54:50                 self, f"Failed to establish a new connection: {e}"
11:54:50             ) from e
11:54:50 E           urllib3.exceptions.NewConnectionError: HTTPConnection(host='localhost', port=8191): Failed to establish a new connection: [Errno 111] Connection refused
11:54:50 
11:54:50 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:219: NewConnectionError
11:54:50 
11:54:50 The above exception was the direct cause of the following exception:
11:54:50 
11:54:50 self = 
11:54:50 request = , stream = False
11:54:50 timeout = Timeout(connect=30, read=30, total=None), verify = True, cert = None
11:54:50 proxies = OrderedDict()
11:54:50 
11:54:50     def send(
11:54:50         self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None
11:54:50     ):
11:54:50         """Sends PreparedRequest object. Returns Response object.
11:54:50 
11:54:50         :param request: The :class:`PreparedRequest ` being sent.
11:54:50         :param stream: (optional) Whether to stream the request content.
11:54:50         :param timeout: (optional) How long to wait for the server to send
11:54:50             data before giving up, as a float, or a :ref:`(connect timeout,
11:54:50             read timeout) ` tuple.
11:54:50         :type timeout: float or tuple or urllib3 Timeout object
11:54:50         :param verify: (optional) Either a boolean, in which case it controls whether
11:54:50             we verify the server's TLS certificate, or a string, in which case it
11:54:50             must be a path to a CA bundle to use
11:54:50         :param cert: (optional) Any user-provided SSL certificate to be trusted.
11:54:50         :param proxies: (optional) The proxies dictionary to apply to the request.
11:54:50         :rtype: requests.Response
11:54:50         """
11:54:50 
11:54:50         try:
11:54:50             conn = self.get_connection_with_tls_context(
11:54:50                 request, verify, proxies=proxies, cert=cert
11:54:50             )
11:54:50         except LocationValueError as e:
11:54:50             raise InvalidURL(e, request=request)
11:54:50 
11:54:50         self.cert_verify(conn, request.url, verify, cert)
11:54:50         url = self.request_url(request, proxies)
11:54:50         self.add_headers(
11:54:50             request,
11:54:50             stream=stream,
11:54:50             timeout=timeout,
11:54:50             verify=verify,
11:54:50             cert=cert,
11:54:50             proxies=proxies,
11:54:50         )
11:54:50 
11:54:50         chunked = not (request.body is None or "Content-Length" in request.headers)
11:54:50 
11:54:50         if isinstance(timeout, tuple):
11:54:50             try:
11:54:50                 connect, read = timeout
11:54:50                 timeout = TimeoutSauce(connect=connect, read=read)
11:54:50             except ValueError:
11:54:50                 raise ValueError(
11:54:50                     f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, "
11:54:50                     f"or a single float to set both timeouts to the same value."
11:54:50                 )
11:54:50         elif isinstance(timeout, TimeoutSauce):
11:54:50             pass
11:54:50         else:
11:54:50             timeout = TimeoutSauce(connect=timeout, read=timeout)
11:54:50 
11:54:50         try:
11:54:50 >           resp = conn.urlopen(
11:54:50                 method=request.method,
11:54:50                 url=url,
11:54:50                 body=request.body,
11:54:50                 headers=request.headers,
11:54:50                 redirect=False,
11:54:50                 assert_same_host=False,
11:54:50                 preload_content=False,
11:54:50                 decode_content=False,
11:54:50                 retries=self.max_retries,
11:54:50                 timeout=timeout,
11:54:50                 chunked=chunked,
11:54:50             )
11:54:50 
11:54:50 ../.tox/tests121/lib/python3.11/site-packages/requests/adapters.py:644: 
11:54:50 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
11:54:50 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connectionpool.py:841: in urlopen
11:54:50     retries = retries.increment(
11:54:50 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
11:54:50 
11:54:50 self = Retry(total=0, connect=None, read=False, redirect=None, status=None)
11:54:50 method = 'GET'
11:54:50 url = '/rests/data/network-topology:network-topology/topology=topology-netconf/node=ROADMA01?content=nonconfig'
11:54:50 response = None
11:54:50 error = NewConnectionError("HTTPConnection(host='localhost', port=8191): Failed to establish a new connection: [Errno 111] Connection refused")
11:54:50 _pool = 
11:54:50 _stacktrace = 
11:54:50 
11:54:50     def increment(
11:54:50         self,
11:54:50         method: str | None = None,
11:54:50         url: str | None = None,
11:54:50         response: BaseHTTPResponse | None = None,
11:54:50         error: Exception | None = None,
11:54:50         _pool: ConnectionPool | None = None,
11:54:50         _stacktrace: TracebackType | None = None,
11:54:50     ) -> Self:
11:54:50         """Return a new Retry object with incremented retry counters.
11:54:50 
11:54:50         :param response: A response object, or None, if the server did not
11:54:50             return a response.
11:54:50         :type response: :class:`~urllib3.response.BaseHTTPResponse`
11:54:50         :param Exception error: An error encountered during the request, or
11:54:50             None if the response was received successfully.
11:54:50 
11:54:50         :return: A new ``Retry`` object.
11:54:50         """
11:54:50         if self.total is False and error:
11:54:50             # Disabled, indicate to re-raise the error.
11:54:50             raise reraise(type(error), error, _stacktrace)
11:54:50 
11:54:50         total = self.total
11:54:50         if total is not None:
11:54:50             total -= 1
11:54:50 
11:54:50         connect = self.connect
11:54:50         read = self.read
11:54:50         redirect = self.redirect
11:54:50         status_count = self.status
11:54:50         other = self.other
11:54:50         cause = "unknown"
11:54:50         status = None
11:54:50         redirect_location = None
11:54:50 
11:54:50         if error and self._is_connection_error(error):
11:54:50             # Connect retry?
11:54:50             if connect is False:
11:54:50                 raise reraise(type(error), error, _stacktrace)
11:54:50             elif connect is not None:
11:54:50                 connect -= 1
11:54:50 
11:54:50         elif error and self._is_read_error(error):
11:54:50             # Read retry?
11:54:50             if read is False or method is None or not self._is_method_retryable(method):
11:54:50                 raise reraise(type(error), error, _stacktrace)
11:54:50             elif read is not None:
11:54:50                 read -= 1
11:54:50 
11:54:50         elif error:
11:54:50             # Other retry?
11:54:50             if other is not None:
11:54:50                 other -= 1
11:54:50 
11:54:50         elif response and response.get_redirect_location():
11:54:50             # Redirect retry?
11:54:50             if redirect is not None:
11:54:50                 redirect -= 1
11:54:50             cause = "too many redirects"
11:54:50             response_redirect_location = response.get_redirect_location()
11:54:50             if response_redirect_location:
11:54:50                 redirect_location = response_redirect_location
11:54:50             status = response.status
11:54:50 
11:54:50         else:
11:54:50             # Incrementing because of a server error like a 500 in
11:54:50             # status_forcelist and the given method is in the allowed_methods
11:54:50             cause = ResponseError.GENERIC_ERROR
11:54:50             if response and response.status:
11:54:50                 if status_count is not None:
11:54:50                     status_count -= 1
11:54:50                 cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status)
11:54:50                 status = response.status
11:54:50 
11:54:50         history = self.history + (
11:54:50             RequestHistory(method, url, error, status, redirect_location),
11:54:50         )
11:54:50 
11:54:50         new_retry = self.new(
11:54:50             total=total,
11:54:50             connect=connect,
11:54:50             read=read,
11:54:50             redirect=redirect,
11:54:50             status=status_count,
11:54:50             other=other,
11:54:50             history=history,
11:54:50         )
11:54:50 
11:54:50         if new_retry.is_exhausted():
11:54:50             reason = error or ResponseError(cause)
11:54:50 >           raise MaxRetryError(_pool, url, reason) from reason  # type: ignore[arg-type]
11:54:50             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
11:54:50 E           urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=8191): Max retries exceeded with url: /rests/data/network-topology:network-topology/topology=topology-netconf/node=ROADMA01?content=nonconfig (Caused by NewConnectionError("HTTPConnection(host='localhost', port=8191): Failed to establish a new connection: [Errno 111] Connection refused"))
11:54:50 
11:54:50 ../.tox/tests121/lib/python3.11/site-packages/urllib3/util/retry.py:535: MaxRetryError
11:54:50 
11:54:50 During handling of the above exception, another exception occurred:
11:54:50 
11:54:50 self = 
11:54:50 
11:54:50     def test_02_rdm_device_connected(self):
11:54:50 >       response = test_utils.check_device_connection("ROADMA01")
11:54:50                    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
11:54:50 
11:54:50 transportpce_tests/1.2.1/test01_portmapping.py:54: 
11:54:50 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
11:54:50 transportpce_tests/common/test_utils.py:409: in check_device_connection
11:54:50     response = get_request(url[RESTCONF_VERSION].format('{}', node))
11:54:50                ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
11:54:50 transportpce_tests/common/test_utils.py:117: in get_request
11:54:50     return requests.request(
11:54:50 ../.tox/tests121/lib/python3.11/site-packages/requests/api.py:59: in request
11:54:50     return session.request(method=method, url=url, **kwargs)
11:54:50            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
11:54:50 ../.tox/tests121/lib/python3.11/site-packages/requests/sessions.py:589: in request
11:54:50     resp = self.send(prep, **send_kwargs)
11:54:50            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
11:54:50 ../.tox/tests121/lib/python3.11/site-packages/requests/sessions.py:703: in send
11:54:50     r = adapter.send(request, **kwargs)
11:54:50         ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
11:54:50 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
11:54:50 
11:54:50 self = 
11:54:50 request = , stream = False
11:54:50 timeout = Timeout(connect=30, read=30, total=None), verify = True, cert = None
11:54:50 proxies = OrderedDict()
11:54:50 
11:54:50     def send(
11:54:50         self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None
11:54:50     ):
11:54:50         """Sends PreparedRequest object. Returns Response object.
11:54:50 
11:54:50         :param request: The :class:`PreparedRequest ` being sent.
11:54:50         :param stream: (optional) Whether to stream the request content.
11:54:50         :param timeout: (optional) How long to wait for the server to send
11:54:50             data before giving up, as a float, or a :ref:`(connect timeout,
11:54:50             read timeout) ` tuple.
11:54:50         :type timeout: float or tuple or urllib3 Timeout object
11:54:50         :param verify: (optional) Either a boolean, in which case it controls whether
11:54:50             we verify the server's TLS certificate, or a string, in which case it
11:54:50             must be a path to a CA bundle to use
11:54:50         :param cert: (optional) Any user-provided SSL certificate to be trusted.
11:54:50         :param proxies: (optional) The proxies dictionary to apply to the request.
11:54:50         :rtype: requests.Response
11:54:50         """
11:54:50 
11:54:50         try:
11:54:50             conn = self.get_connection_with_tls_context(
11:54:50                 request, verify, proxies=proxies, cert=cert
11:54:50             )
11:54:50         except LocationValueError as e:
11:54:50             raise InvalidURL(e, request=request)
11:54:50 
11:54:50         self.cert_verify(conn, request.url, verify, cert)
11:54:50         url = self.request_url(request, proxies)
11:54:50         self.add_headers(
11:54:50             request,
11:54:50             stream=stream,
11:54:50             timeout=timeout,
11:54:50             verify=verify,
11:54:50             cert=cert,
11:54:50             proxies=proxies,
11:54:50         )
11:54:50 
11:54:50         chunked = not (request.body is None or "Content-Length" in request.headers)
11:54:50 
11:54:50         if isinstance(timeout, tuple):
11:54:50             try:
11:54:50                 connect, read = timeout
11:54:50                 timeout = TimeoutSauce(connect=connect, read=read)
11:54:50             except ValueError:
11:54:50                 raise ValueError(
11:54:50                     f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, "
11:54:50                     f"or a single float to set both timeouts to the same value."
11:54:50                 )
11:54:50         elif isinstance(timeout, TimeoutSauce):
11:54:50             pass
11:54:50         else:
11:54:50             timeout = TimeoutSauce(connect=timeout, read=timeout)
11:54:50 
11:54:50         try:
11:54:50             resp = conn.urlopen(
11:54:50                 method=request.method,
11:54:50                 url=url,
11:54:50                 body=request.body,
11:54:50                 headers=request.headers,
11:54:50                 redirect=False,
11:54:50                 assert_same_host=False,
11:54:50                 preload_content=False,
11:54:50                 decode_content=False,
11:54:50                 retries=self.max_retries,
11:54:50                 timeout=timeout,
11:54:50                 chunked=chunked,
11:54:50             )
11:54:50 
11:54:50         except (ProtocolError, OSError) as err:
11:54:50             raise ConnectionError(err, request=request)
11:54:50 
11:54:50         except MaxRetryError as e:
11:54:50             if isinstance(e.reason, ConnectTimeoutError):
11:54:50                 # TODO: Remove this in 3.0.0: see #2811
11:54:50                 if not isinstance(e.reason, NewConnectionError):
11:54:50                     raise ConnectTimeout(e, request=request)
11:54:50 
11:54:50             if isinstance(e.reason, ResponseError):
11:54:50                 raise RetryError(e, request=request)
11:54:50 
11:54:50             if isinstance(e.reason, _ProxyError):
11:54:50                 raise ProxyError(e, request=request)
11:54:50 
11:54:50             if isinstance(e.reason, _SSLError):
11:54:50                 # This branch is for urllib3 v1.22 and later.
11:54:50                 raise SSLError(e, request=request)
11:54:50 
11:54:50 >           raise ConnectionError(e, request=request)
11:54:50 E           requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=8191): Max retries exceeded with url: /rests/data/network-topology:network-topology/topology=topology-netconf/node=ROADMA01?content=nonconfig (Caused by NewConnectionError("HTTPConnection(host='localhost', port=8191): Failed to establish a new connection: [Errno 111] Connection refused"))
11:54:50 
11:54:50 ../.tox/tests121/lib/python3.11/site-packages/requests/adapters.py:677: ConnectionError
11:54:50 ----------------------------- Captured stdout call -----------------------------
11:54:50 execution of test_02_rdm_device_connected
11:54:50 ___________ TestTransportPCEPortmapping.test_03_rdm_portmapping_info ___________
11:54:50 
11:54:50 self = 
11:54:50 
11:54:50     def _new_conn(self) -> socket.socket:
11:54:50         """Establish a socket connection and set nodelay settings on it.
11:54:50 
11:54:50         :return: New socket connection.
11:54:50         """
11:54:50         try:
11:54:50 >           sock = connection.create_connection(
11:54:50                 (self._dns_host, self.port),
11:54:50                 self.timeout,
11:54:50                 source_address=self.source_address,
11:54:50                 socket_options=self.socket_options,
11:54:50             )
11:54:50 
11:54:50 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:204: 
11:54:50 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
11:54:50 ../.tox/tests121/lib/python3.11/site-packages/urllib3/util/connection.py:85: in create_connection
11:54:50     raise err
11:54:50 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
11:54:50 
11:54:50 address = ('localhost', 8191), timeout = 30, source_address = None
11:54:50 socket_options = [(6, 1, 1)]
11:54:50 
11:54:50 def create_connection(
11:54:50     address: tuple[str, int],
11:54:50     timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT,
11:54:50     source_address: tuple[str, int] | None = None,
11:54:50     socket_options: _TYPE_SOCKET_OPTIONS | None = None,
11:54:50 ) -> socket.socket:
11:54:50     """Connect to *address* and return the socket object.
11:54:50 
11:54:50     Convenience function. Connect to *address* (a 2-tuple ``(host,
11:54:50     port)``) and return the socket object. Passing the optional
11:54:50     *timeout* parameter will set the timeout on the socket instance
11:54:50     before attempting to connect. If no *timeout* is supplied, the
11:54:50     global default timeout setting returned by :func:`socket.getdefaulttimeout`
11:54:50     is used. If *source_address* is set it must be a tuple of (host, port)
11:54:50     for the socket to bind as a source address before making the connection.
11:54:50     An host of '' or port 0 tells the OS to use the default.
11:54:50     """
11:54:50 
11:54:50     host, port = address
11:54:50     if host.startswith("["):
11:54:50         host = host.strip("[]")
11:54:50     err = None
11:54:50 
11:54:50     # Using the value from allowed_gai_family() in the context of getaddrinfo lets
11:54:50     # us select whether to work with IPv4 DNS records, IPv6 records, or both.
11:54:50     # The original create_connection function always returns all records.
11:54:50     family = allowed_gai_family()
11:54:50 
11:54:50     try:
11:54:50         host.encode("idna")
11:54:50     except UnicodeError:
11:54:50         raise LocationParseError(f"'{host}', label empty or too long") from None
11:54:50 
11:54:50     for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM):
11:54:50         af, socktype, proto, canonname, sa = res
11:54:50         sock = None
11:54:50         try:
11:54:50             sock = socket.socket(af, socktype, proto)
11:54:50 
11:54:50             # If provided, set socket level options before connecting.
11:54:50             _set_socket_options(sock, socket_options)
11:54:50 
11:54:50             if timeout is not _DEFAULT_TIMEOUT:
11:54:50                 sock.settimeout(timeout)
11:54:50             if source_address:
11:54:50                 sock.bind(source_address)
11:54:50 >           sock.connect(sa)
11:54:50 E           ConnectionRefusedError: [Errno 111] Connection refused
11:54:50 
11:54:50 ../.tox/tests121/lib/python3.11/site-packages/urllib3/util/connection.py:73: ConnectionRefusedError
11:54:50 
11:54:50 The above exception was the direct cause of the following exception:
11:54:50 
11:54:50 self = 
11:54:50 method = 'GET'
11:54:50 url = '/rests/data/transportpce-portmapping:network/nodes=ROADMA01/node-info'
11:54:50 body = None
11:54:50 headers = {'User-Agent': 'python-requests/2.32.5', 'Accept-Encoding': 'gzip, deflate', 'Accept': 'application/json', 'Connection': 'keep-alive', 'Content-Type': 'application/json', 'Authorization': 'Basic YWRtaW46YWRtaW4='}
11:54:50 retries = Retry(total=0, connect=None, read=False, redirect=None, status=None)
11:54:50 redirect = False, assert_same_host = False
11:54:50 timeout = Timeout(connect=30, read=30, total=None), pool_timeout = None
11:54:50 release_conn = False, chunked = False, body_pos = None, preload_content = False
11:54:50 decode_content = False, response_kw = {}
11:54:50 parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/rests/data/transportpce-portmapping:network/nodes=ROADMA01/node-info', query=None, fragment=None)
11:54:50 destination_scheme = None, conn = None, release_this_conn = True
11:54:50 http_tunnel_required = False, err = None, clean_exit = False
11:54:50 
11:54:50     def urlopen(  # type: ignore[override]
11:54:50         self,
11:54:50         method: str,
11:54:50         url: str,
11:54:50         body: _TYPE_BODY | None = None,
11:54:50         headers: typing.Mapping[str, str] | None = None,
11:54:50         retries: Retry | bool | int | None = None,
11:54:50         redirect: bool = True,
11:54:50         assert_same_host: bool = True,
11:54:50         timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT,
11:54:50         pool_timeout: int | None = None,
11:54:50         release_conn: bool | None = None,
11:54:50         chunked: bool = False,
11:54:50         body_pos: _TYPE_BODY_POSITION | None = None,
11:54:50         preload_content: bool = True,
11:54:50         decode_content: bool = True,
11:54:50         **response_kw: typing.Any,
11:54:50     ) -> BaseHTTPResponse:
11:54:50         """
11:54:50         Get a connection from the pool and perform an HTTP request. This is the
11:54:50         lowest level call for making a request, so you'll need to specify all
11:54:50         the raw details.
11:54:50 
11:54:50         .. note::
11:54:50 
11:54:50            More commonly, it's appropriate to use a convenience method
11:54:50            such as :meth:`request`.
11:54:50 
11:54:50         .. note::
11:54:50 
11:54:50            `release_conn` will only behave as expected if
11:54:50            `preload_content=False` because we want to make
11:54:50            `preload_content=False` the default behaviour someday soon without
11:54:50            breaking backwards compatibility.
11:54:50 
11:54:50         :param method:
11:54:50             HTTP request method (such as GET, POST, PUT, etc.)
11:54:50 
11:54:50         :param url:
11:54:50             The URL to perform the request on.
11:54:50 
11:54:50         :param body:
11:54:50             Data to send in the request body, either :class:`str`, :class:`bytes`,
11:54:50             an iterable of :class:`str`/:class:`bytes`, or a file-like object.
11:54:50 
11:54:50         :param headers:
11:54:50             Dictionary of custom headers to send, such as User-Agent,
11:54:50             If-None-Match, etc. If None, pool headers are used. If provided,
11:54:50             these headers completely replace any pool-specific headers.
11:54:50 
11:54:50         :param retries:
11:54:50             Configure the number of retries to allow before raising a
11:54:50             :class:`~urllib3.exceptions.MaxRetryError` exception.
11:54:50 
11:54:50             If ``None`` (default) will retry 3 times, see ``Retry.DEFAULT``. Pass a
11:54:50             :class:`~urllib3.util.retry.Retry` object for fine-grained control
11:54:50             over different types of retries.
11:54:50             Pass an integer number to retry connection errors that many times,
11:54:50             but no other types of errors. Pass zero to never retry.
11:54:50 
11:54:50             If ``False``, then retries are disabled and any exception is raised
11:54:50             immediately. Also, instead of raising a MaxRetryError on redirects,
11:54:50             the redirect response will be returned.
11:54:50 
11:54:50         :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int.
11:54:50 
11:54:50         :param redirect:
11:54:50             If True, automatically handle redirects (status codes 301, 302,
11:54:50             303, 307, 308). Each redirect counts as a retry. Disabling retries
11:54:50             will disable redirect, too.
11:54:50 
11:54:50         :param assert_same_host:
11:54:50             If ``True``, will make sure that the host of the pool requests is
11:54:50             consistent else will raise HostChangedError. When ``False``, you can
11:54:50             use the pool on an HTTP proxy and request foreign hosts.
11:54:50 
11:54:50         :param timeout:
11:54:50             If specified, overrides the default timeout for this one
11:54:50             request. It may be a float (in seconds) or an instance of
11:54:50             :class:`urllib3.util.Timeout`.
11:54:50 
11:54:50         :param pool_timeout:
11:54:50             If set and the pool is set to block=True, then this method will
11:54:50             block for ``pool_timeout`` seconds and raise EmptyPoolError if no
11:54:50             connection is available within the time period.
11:54:50 
11:54:50         :param bool preload_content:
11:54:50             If True, the response's body will be preloaded into memory.
11:54:50 
11:54:50         :param bool decode_content:
11:54:50             If True, will attempt to decode the body based on the
11:54:50             'content-encoding' header.
11:54:50 
11:54:50         :param release_conn:
11:54:50             If False, then the urlopen call will not release the connection
11:54:50             back into the pool once a response is received (but will release if
11:54:50             you read the entire contents of the response such as when
11:54:50             `preload_content=True`). This is useful if you're not preloading
11:54:50             the response's content immediately. You will need to call
11:54:50             ``r.release_conn()`` on the response ``r`` to return the connection
11:54:50             back into the pool. If None, it takes the value of ``preload_content``
11:54:50             which defaults to ``True``.
11:54:50 
11:54:50         :param bool chunked:
11:54:50             If True, urllib3 will send the body using chunked transfer
11:54:50             encoding. Otherwise, urllib3 will send the body using the standard
11:54:50             content-length form. Defaults to False.
11:54:50 
11:54:50         :param int body_pos:
11:54:50             Position to seek to in file-like body in the event of a retry or
11:54:50             redirect. Typically this won't need to be set because urllib3 will
11:54:50             auto-populate the value when needed.
11:54:50         """
11:54:50         parsed_url = parse_url(url)
11:54:50         destination_scheme = parsed_url.scheme
11:54:50 
11:54:50         if headers is None:
11:54:50             headers = self.headers
11:54:50 
11:54:50         if not isinstance(retries, Retry):
11:54:50             retries = Retry.from_int(retries, redirect=redirect, default=self.retries)
11:54:50 
11:54:50         if release_conn is None:
11:54:50             release_conn = preload_content
11:54:50 
11:54:50         # Check host
11:54:50         if assert_same_host and not self.is_same_host(url):
11:54:50             raise HostChangedError(self, url, retries)
11:54:50 
11:54:50         # Ensure that the URL we're connecting to is properly encoded
11:54:50         if url.startswith("/"):
11:54:50             url = to_str(_encode_target(url))
11:54:50         else:
11:54:50             url = to_str(parsed_url.url)
11:54:50 
11:54:50         conn = None
11:54:50 
11:54:50         # Track whether `conn` needs to be released before
11:54:50         # returning/raising/recursing. Update this variable if necessary, and
11:54:50         # leave `release_conn` constant throughout the function. That way, if
11:54:50         # the function recurses, the original value of `release_conn` will be
11:54:50         # passed down into the recursive call, and its value will be respected.
11:54:50         #
11:54:50         # See issue #651 [1] for details.
11:54:50         #
11:54:50         # [1]
11:54:50         release_this_conn = release_conn
11:54:50 
11:54:50         http_tunnel_required = connection_requires_http_tunnel(
11:54:50             self.proxy, self.proxy_config, destination_scheme
11:54:50         )
11:54:50 
11:54:50         # Merge the proxy headers. Only done when not using HTTP CONNECT. We
11:54:50         # have to copy the headers dict so we can safely change it without those
11:54:50         # changes being reflected in anyone else's copy.
11:54:50         if not http_tunnel_required:
11:54:50             headers = headers.copy()  # type: ignore[attr-defined]
11:54:50             headers.update(self.proxy_headers)  # type: ignore[union-attr]
11:54:50 
11:54:50         # Must keep the exception bound to a separate variable or else Python 3
11:54:50         # complains about UnboundLocalError.
11:54:50         err = None
11:54:50 
11:54:50         # Keep track of whether we cleanly exited the except block. This
11:54:50         # ensures we do proper cleanup in finally.
11:54:50         clean_exit = False
11:54:50 
11:54:50         # Rewind body position, if needed. Record current position
11:54:50         # for future rewinds in the event of a redirect/retry.
11:54:50         body_pos = set_file_position(body, body_pos)
11:54:50 
11:54:50         try:
11:54:50             # Request a connection from the queue.
11:54:50             timeout_obj = self._get_timeout(timeout)
11:54:50             conn = self._get_conn(timeout=pool_timeout)
11:54:50 
11:54:50             conn.timeout = timeout_obj.connect_timeout  # type: ignore[assignment]
11:54:50 
11:54:50             # Is this a closed/new connection that requires CONNECT tunnelling?
11:54:50             if self.proxy is not None and http_tunnel_required and conn.is_closed:
11:54:50                 try:
11:54:50                     self._prepare_proxy(conn)
11:54:50                 except (BaseSSLError, OSError, SocketTimeout) as e:
11:54:50                     self._raise_timeout(
11:54:50                         err=e, url=self.proxy.url, timeout_value=conn.timeout
11:54:50                     )
11:54:50                     raise
11:54:50 
11:54:50             # If we're going to release the connection in ``finally:``, then
11:54:50             # the response doesn't need to know about the connection. Otherwise
11:54:50             # it will also try to release it and we'll have a double-release
11:54:50             # mess.
11:54:50 response_conn = conn if not release_conn else None 11:54:50 11:54:50 # Make the request on the HTTPConnection object 11:54:50 > response = self._make_request( 11:54:50 conn, 11:54:50 method, 11:54:50 url, 11:54:50 timeout=timeout_obj, 11:54:50 body=body, 11:54:50 headers=headers, 11:54:50 chunked=chunked, 11:54:50 retries=retries, 11:54:50 response_conn=response_conn, 11:54:50 preload_content=preload_content, 11:54:50 decode_content=decode_content, 11:54:50 **response_kw, 11:54:50 ) 11:54:50 11:54:50 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connectionpool.py:787: 11:54:50 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 11:54:50 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connectionpool.py:493: in _make_request 11:54:50 conn.request( 11:54:50 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:500: in request 11:54:50 self.endheaders() 11:54:50 /opt/pyenv/versions/3.11.10/lib/python3.11/http/client.py:1298: in endheaders 11:54:50 self._send_output(message_body, encode_chunked=encode_chunked) 11:54:50 /opt/pyenv/versions/3.11.10/lib/python3.11/http/client.py:1058: in _send_output 11:54:50 self.send(msg) 11:54:50 /opt/pyenv/versions/3.11.10/lib/python3.11/http/client.py:996: in send 11:54:50 self.connect() 11:54:50 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:331: in connect 11:54:50 self.sock = self._new_conn() 11:54:50 ^^^^^^^^^^^^^^^^ 11:54:50 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 11:54:50 11:54:50 self = 11:54:50 11:54:50 def _new_conn(self) -> socket.socket: 11:54:50 """Establish a socket connection and set nodelay settings on it. 11:54:50 11:54:50 :return: New socket connection. 
        """
        try:
            sock = connection.create_connection(
                (self._dns_host, self.port),
                self.timeout,
                source_address=self.source_address,
                socket_options=self.socket_options,
            )
        except socket.gaierror as e:
            raise NameResolutionError(self.host, self, e) from e
        except SocketTimeout as e:
            raise ConnectTimeoutError(
                self,
                f"Connection to {self.host} timed out. (connect timeout={self.timeout})",
            ) from e

        except OSError as e:
>           raise NewConnectionError(
                self, f"Failed to establish a new connection: {e}"
            ) from e
E           urllib3.exceptions.NewConnectionError: HTTPConnection(host='localhost', port=8191): Failed to establish a new connection: [Errno 111] Connection refused

../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:219: NewConnectionError

The above exception was the direct cause of the following exception:

self =
request = , stream = False
timeout = Timeout(connect=30, read=30, total=None), verify = True, cert = None
proxies = OrderedDict()

    def send(
        self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None
    ):
        """Sends PreparedRequest object. Returns Response object.

        :param request: The :class:`PreparedRequest ` being sent.
        :param stream: (optional) Whether to stream the request content.
        :param timeout: (optional) How long to wait for the server to send
            data before giving up, as a float, or a :ref:`(connect timeout,
            read timeout) ` tuple.
        :type timeout: float or tuple or urllib3 Timeout object
        :param verify: (optional) Either a boolean, in which case it controls whether
            we verify the server's TLS certificate, or a string, in which case it
            must be a path to a CA bundle to use
        :param cert: (optional) Any user-provided SSL certificate to be trusted.
        :param proxies: (optional) The proxies dictionary to apply to the request.
        :rtype: requests.Response
        """

        try:
            conn = self.get_connection_with_tls_context(
                request, verify, proxies=proxies, cert=cert
            )
        except LocationValueError as e:
            raise InvalidURL(e, request=request)

        self.cert_verify(conn, request.url, verify, cert)
        url = self.request_url(request, proxies)
        self.add_headers(
            request,
            stream=stream,
            timeout=timeout,
            verify=verify,
            cert=cert,
            proxies=proxies,
        )

        chunked = not (request.body is None or "Content-Length" in request.headers)

        if isinstance(timeout, tuple):
            try:
                connect, read = timeout
                timeout = TimeoutSauce(connect=connect, read=read)
            except ValueError:
                raise ValueError(
                    f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, "
                    f"or a single float to set both timeouts to the same value."
                )
        elif isinstance(timeout, TimeoutSauce):
            pass
        else:
            timeout = TimeoutSauce(connect=timeout, read=timeout)

        try:
>           resp = conn.urlopen(
                method=request.method,
                url=url,
                body=request.body,
                headers=request.headers,
                redirect=False,
                assert_same_host=False,
                preload_content=False,
                decode_content=False,
                retries=self.max_retries,
                timeout=timeout,
                chunked=chunked,
            )

../.tox/tests121/lib/python3.11/site-packages/requests/adapters.py:644:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
../.tox/tests121/lib/python3.11/site-packages/urllib3/connectionpool.py:841: in urlopen
    retries = retries.increment(
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self = Retry(total=0, connect=None, read=False, redirect=None, status=None)
method = 'GET'
url = '/rests/data/transportpce-portmapping:network/nodes=ROADMA01/node-info'
response = None
error = NewConnectionError("HTTPConnection(host='localhost', port=8191): Failed to establish a new connection: [Errno 111] Connection refused")
_pool =
_stacktrace =

    def increment(
        self,
        method: str | None = None,
        url: str | None = None,
        response: BaseHTTPResponse | None = None,
        error: Exception | None = None,
        _pool: ConnectionPool | None = None,
        _stacktrace: TracebackType | None = None,
    ) -> Self:
        """Return a new Retry object with incremented retry counters.

        :param response: A response object, or None, if the server did not
            return a response.
        :type response: :class:`~urllib3.response.BaseHTTPResponse`
        :param Exception error: An error encountered during the request, or
            None if the response was received successfully.

        :return: A new ``Retry`` object.
        """
        if self.total is False and error:
            # Disabled, indicate to re-raise the error.
            raise reraise(type(error), error, _stacktrace)

        total = self.total
        if total is not None:
            total -= 1

        connect = self.connect
        read = self.read
        redirect = self.redirect
        status_count = self.status
        other = self.other
        cause = "unknown"
        status = None
        redirect_location = None

        if error and self._is_connection_error(error):
            # Connect retry?
            if connect is False:
                raise reraise(type(error), error, _stacktrace)
            elif connect is not None:
                connect -= 1

        elif error and self._is_read_error(error):
            # Read retry?
            if read is False or method is None or not self._is_method_retryable(method):
                raise reraise(type(error), error, _stacktrace)
            elif read is not None:
                read -= 1

        elif error:
            # Other retry?
            if other is not None:
                other -= 1

        elif response and response.get_redirect_location():
            # Redirect retry?
            if redirect is not None:
                redirect -= 1
            cause = "too many redirects"
            response_redirect_location = response.get_redirect_location()
            if response_redirect_location:
                redirect_location = response_redirect_location
            status = response.status

        else:
            # Incrementing because of a server error like a 500 in
            # status_forcelist and the given method is in the allowed_methods
            cause = ResponseError.GENERIC_ERROR
            if response and response.status:
                if status_count is not None:
                    status_count -= 1
                cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status)
                status = response.status

        history = self.history + (
            RequestHistory(method, url, error, status, redirect_location),
        )

        new_retry = self.new(
            total=total,
            connect=connect,
            read=read,
            redirect=redirect,
            status=status_count,
            other=other,
            history=history,
        )

        if new_retry.is_exhausted():
            reason = error or ResponseError(cause)
>           raise MaxRetryError(_pool, url, reason) from reason  # type: ignore[arg-type]
            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
E           urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=8191): Max retries exceeded with url: /rests/data/transportpce-portmapping:network/nodes=ROADMA01/node-info (Caused by NewConnectionError("HTTPConnection(host='localhost', port=8191): Failed to establish a new connection: [Errno 111] Connection refused"))

../.tox/tests121/lib/python3.11/site-packages/urllib3/util/retry.py:535: MaxRetryError

During handling of the above exception, another exception occurred:

self =

    def test_03_rdm_portmapping_info(self):
>       response = test_utils.get_portmapping_node_attr("ROADMA01", "node-info", None)
                   ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

transportpce_tests/1.2.1/test01_portmapping.py:60:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
transportpce_tests/common/test_utils.py:519: in get_portmapping_node_attr
    response = get_request(target_url)
               ^^^^^^^^^^^^^^^^^^^^^^^
transportpce_tests/common/test_utils.py:117: in get_request
    return requests.request(
../.tox/tests121/lib/python3.11/site-packages/requests/api.py:59: in request
    return session.request(method=method, url=url, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
../.tox/tests121/lib/python3.11/site-packages/requests/sessions.py:589: in request
    resp = self.send(prep, **send_kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
../.tox/tests121/lib/python3.11/site-packages/requests/sessions.py:703: in send
    r = adapter.send(request, **kwargs)
        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self =
request = , stream = False
timeout = Timeout(connect=30, read=30, total=None), verify = True, cert = None
proxies = OrderedDict()

    def send(
        self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None
    ):
        """Sends PreparedRequest object. Returns Response object.

        :param request: The :class:`PreparedRequest ` being sent.
        :param stream: (optional) Whether to stream the request content.
        :param timeout: (optional) How long to wait for the server to send
            data before giving up, as a float, or a :ref:`(connect timeout,
            read timeout) ` tuple.
        :type timeout: float or tuple or urllib3 Timeout object
        :param verify: (optional) Either a boolean, in which case it controls whether
            we verify the server's TLS certificate, or a string, in which case it
            must be a path to a CA bundle to use
        :param cert: (optional) Any user-provided SSL certificate to be trusted.
        :param proxies: (optional) The proxies dictionary to apply to the request.
        :rtype: requests.Response
        """

        try:
            conn = self.get_connection_with_tls_context(
                request, verify, proxies=proxies, cert=cert
            )
        except LocationValueError as e:
            raise InvalidURL(e, request=request)

        self.cert_verify(conn, request.url, verify, cert)
        url = self.request_url(request, proxies)
        self.add_headers(
            request,
            stream=stream,
            timeout=timeout,
            verify=verify,
            cert=cert,
            proxies=proxies,
        )

        chunked = not (request.body is None or "Content-Length" in request.headers)

        if isinstance(timeout, tuple):
            try:
                connect, read = timeout
                timeout = TimeoutSauce(connect=connect, read=read)
            except ValueError:
                raise ValueError(
                    f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, "
                    f"or a single float to set both timeouts to the same value."
                )
        elif isinstance(timeout, TimeoutSauce):
            pass
        else:
            timeout = TimeoutSauce(connect=timeout, read=timeout)

        try:
            resp = conn.urlopen(
                method=request.method,
                url=url,
                body=request.body,
                headers=request.headers,
                redirect=False,
                assert_same_host=False,
                preload_content=False,
                decode_content=False,
                retries=self.max_retries,
                timeout=timeout,
                chunked=chunked,
            )

        except (ProtocolError, OSError) as err:
            raise ConnectionError(err, request=request)

        except MaxRetryError as e:
            if isinstance(e.reason, ConnectTimeoutError):
                # TODO: Remove this in 3.0.0: see #2811
                if not isinstance(e.reason, NewConnectionError):
                    raise ConnectTimeout(e, request=request)

            if isinstance(e.reason, ResponseError):
                raise RetryError(e, request=request)

            if isinstance(e.reason, _ProxyError):
                raise ProxyError(e, request=request)

            if isinstance(e.reason, _SSLError):
                # This branch is for urllib3 v1.22 and later.
                raise SSLError(e, request=request)

>           raise ConnectionError(e, request=request)
E           requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=8191): Max retries exceeded with url: /rests/data/transportpce-portmapping:network/nodes=ROADMA01/node-info (Caused by NewConnectionError("HTTPConnection(host='localhost', port=8191): Failed to establish a new connection: [Errno 111] Connection refused"))

../.tox/tests121/lib/python3.11/site-packages/requests/adapters.py:677: ConnectionError
----------------------------- Captured stdout call -----------------------------
execution of test_03_rdm_portmapping_info
______ TestTransportPCEPortmapping.test_04_rdm_portmapping_DEG1_TTP_TXRX _______

self =

    def _new_conn(self) -> socket.socket:
        """Establish a socket connection and set nodelay settings on it.

        :return: New socket connection.
        """
        try:
>           sock = connection.create_connection(
                (self._dns_host, self.port),
                self.timeout,
                source_address=self.source_address,
                socket_options=self.socket_options,
            )

../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:204:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
../.tox/tests121/lib/python3.11/site-packages/urllib3/util/connection.py:85: in create_connection
    raise err
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

address = ('localhost', 8191), timeout = 30, source_address = None
socket_options = [(6, 1, 1)]

    def create_connection(
        address: tuple[str, int],
        timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT,
        source_address: tuple[str, int] | None = None,
        socket_options: _TYPE_SOCKET_OPTIONS | None = None,
    ) -> socket.socket:
        """Connect to *address* and return the socket object.

        Convenience function. Connect to *address* (a 2-tuple ``(host,
        port)``) and return the socket object. Passing the optional
        *timeout* parameter will set the timeout on the socket instance
        before attempting to connect. If no *timeout* is supplied, the
        global default timeout setting returned by :func:`socket.getdefaulttimeout`
        is used. If *source_address* is set it must be a tuple of (host, port)
        for the socket to bind as a source address before making the connection.
        An host of '' or port 0 tells the OS to use the default.
        """

        host, port = address
        if host.startswith("["):
            host = host.strip("[]")
        err = None

        # Using the value from allowed_gai_family() in the context of getaddrinfo lets
        # us select whether to work with IPv4 DNS records, IPv6 records, or both.
        # The original create_connection function always returns all records.
        family = allowed_gai_family()

        try:
            host.encode("idna")
        except UnicodeError:
            raise LocationParseError(f"'{host}', label empty or too long") from None

        for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM):
            af, socktype, proto, canonname, sa = res
            sock = None
            try:
                sock = socket.socket(af, socktype, proto)

                # If provided, set socket level options before connecting.
                _set_socket_options(sock, socket_options)

                if timeout is not _DEFAULT_TIMEOUT:
                    sock.settimeout(timeout)
                if source_address:
                    sock.bind(source_address)
>               sock.connect(sa)
E               ConnectionRefusedError: [Errno 111] Connection refused

../.tox/tests121/lib/python3.11/site-packages/urllib3/util/connection.py:73: ConnectionRefusedError

The above exception was the direct cause of the following exception:

self =
method = 'GET'
url = '/rests/data/transportpce-portmapping:network/nodes=ROADMA01/mapping=DEG1-TTP-TXRX'
body = None
headers = {'User-Agent': 'python-requests/2.32.5', 'Accept-Encoding': 'gzip, deflate', 'Accept': 'application/json', 'Connection': 'keep-alive', 'Content-Type': 'application/json', 'Authorization': 'Basic YWRtaW46YWRtaW4='}
retries = Retry(total=0, connect=None, read=False, redirect=None, status=None)
redirect = False, assert_same_host = False
timeout = Timeout(connect=30, read=30, total=None), pool_timeout = None
release_conn = False, chunked = False, body_pos = None, preload_content = False
decode_content = False, response_kw = {}
parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/rests/data/transportpce-portmapping:network/nodes=ROADMA01/mapping=DEG1-TTP-TXRX', query=None, fragment=None)
destination_scheme = None, conn = None, release_this_conn = True
http_tunnel_required = False, err = None, clean_exit = False

    def urlopen(  # type: ignore[override]
        self,
        method: str,
        url: str,
        body: _TYPE_BODY | None = None,
        headers: typing.Mapping[str, str] | None = None,
        retries: Retry | bool | int | None = None,
        redirect: bool = True,
        assert_same_host: bool = True,
        timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT,
        pool_timeout: int | None = None,
        release_conn: bool | None = None,
        chunked: bool = False,
        body_pos: _TYPE_BODY_POSITION | None = None,
        preload_content: bool = True,
        decode_content: bool = True,
        **response_kw: typing.Any,
    ) -> BaseHTTPResponse:
        """
        Get a connection from the pool and perform an HTTP request. This is the
        lowest level call for making a request, so you'll need to specify all
        the raw details.

        .. note::

            More commonly, it's appropriate to use a convenience method
            such as :meth:`request`.

        .. note::

            `release_conn` will only behave as expected if
            `preload_content=False` because we want to make
            `preload_content=False` the default behaviour someday soon without
            breaking backwards compatibility.

        :param method:
            HTTP request method (such as GET, POST, PUT, etc.)

        :param url:
            The URL to perform the request on.

        :param body:
            Data to send in the request body, either :class:`str`, :class:`bytes`,
            an iterable of :class:`str`/:class:`bytes`, or a file-like object.

        :param headers:
            Dictionary of custom headers to send, such as User-Agent,
            If-None-Match, etc. If None, pool headers are used. If provided,
            these headers completely replace any pool-specific headers.

        :param retries:
            Configure the number of retries to allow before raising a
            :class:`~urllib3.exceptions.MaxRetryError` exception.

            If ``None`` (default) will retry 3 times, see ``Retry.DEFAULT``. Pass a
            :class:`~urllib3.util.retry.Retry` object for fine-grained control
            over different types of retries.
            Pass an integer number to retry connection errors that many times,
            but no other types of errors.
            Pass zero to never retry.

            If ``False``, then retries are disabled and any exception is raised
            immediately. Also, instead of raising a MaxRetryError on redirects,
            the redirect response will be returned.

        :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int.

        :param redirect:
            If True, automatically handle redirects (status codes 301, 302,
            303, 307, 308). Each redirect counts as a retry. Disabling retries
            will disable redirect, too.

        :param assert_same_host:
            If ``True``, will make sure that the host of the pool requests is
            consistent else will raise HostChangedError. When ``False``, you can
            use the pool on an HTTP proxy and request foreign hosts.

        :param timeout:
            If specified, overrides the default timeout for this one
            request. It may be a float (in seconds) or an instance of
            :class:`urllib3.util.Timeout`.

        :param pool_timeout:
            If set and the pool is set to block=True, then this method will
            block for ``pool_timeout`` seconds and raise EmptyPoolError if no
            connection is available within the time period.

        :param bool preload_content:
            If True, the response's body will be preloaded into memory.

        :param bool decode_content:
            If True, will attempt to decode the body based on the
            'content-encoding' header.

        :param release_conn:
            If False, then the urlopen call will not release the connection
            back into the pool once a response is received (but will release if
            you read the entire contents of the response such as when
            `preload_content=True`). This is useful if you're not preloading
            the response's content immediately. You will need to call
            ``r.release_conn()`` on the response ``r`` to return the connection
            back into the pool. If None, it takes the value of ``preload_content``
            which defaults to ``True``.

        :param bool chunked:
            If True, urllib3 will send the body using chunked transfer
            encoding. Otherwise, urllib3 will send the body using the standard
            content-length form. Defaults to False.

        :param int body_pos:
            Position to seek to in file-like body in the event of a retry or
            redirect. Typically this won't need to be set because urllib3 will
            auto-populate the value when needed.
        """
        parsed_url = parse_url(url)
        destination_scheme = parsed_url.scheme

        if headers is None:
            headers = self.headers

        if not isinstance(retries, Retry):
            retries = Retry.from_int(retries, redirect=redirect, default=self.retries)

        if release_conn is None:
            release_conn = preload_content

        # Check host
        if assert_same_host and not self.is_same_host(url):
            raise HostChangedError(self, url, retries)

        # Ensure that the URL we're connecting to is properly encoded
        if url.startswith("/"):
            url = to_str(_encode_target(url))
        else:
            url = to_str(parsed_url.url)

        conn = None

        # Track whether `conn` needs to be released before
        # returning/raising/recursing. Update this variable if necessary, and
        # leave `release_conn` constant throughout the function. That way, if
        # the function recurses, the original value of `release_conn` will be
        # passed down into the recursive call, and its value will be respected.
        #
        # See issue #651 [1] for details.
11:54:50 # 11:54:50 # [1] 11:54:50 release_this_conn = release_conn 11:54:50 11:54:50 http_tunnel_required = connection_requires_http_tunnel( 11:54:50 self.proxy, self.proxy_config, destination_scheme 11:54:50 ) 11:54:50 11:54:50 # Merge the proxy headers. Only done when not using HTTP CONNECT. We 11:54:50 # have to copy the headers dict so we can safely change it without those 11:54:50 # changes being reflected in anyone else's copy. 11:54:50 if not http_tunnel_required: 11:54:50 headers = headers.copy() # type: ignore[attr-defined] 11:54:50 headers.update(self.proxy_headers) # type: ignore[union-attr] 11:54:50 11:54:50 # Must keep the exception bound to a separate variable or else Python 3 11:54:50 # complains about UnboundLocalError. 11:54:50 err = None 11:54:50 11:54:50 # Keep track of whether we cleanly exited the except block. This 11:54:50 # ensures we do proper cleanup in finally. 11:54:50 clean_exit = False 11:54:50 11:54:50 # Rewind body position, if needed. Record current position 11:54:50 # for future rewinds in the event of a redirect/retry. 11:54:50 body_pos = set_file_position(body, body_pos) 11:54:50 11:54:50 try: 11:54:50 # Request a connection from the queue. 11:54:50 timeout_obj = self._get_timeout(timeout) 11:54:50 conn = self._get_conn(timeout=pool_timeout) 11:54:50 11:54:50 conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] 11:54:50 11:54:50 # Is this a closed/new connection that requires CONNECT tunnelling? 11:54:50 if self.proxy is not None and http_tunnel_required and conn.is_closed: 11:54:50 try: 11:54:50 self._prepare_proxy(conn) 11:54:50 except (BaseSSLError, OSError, SocketTimeout) as e: 11:54:50 self._raise_timeout( 11:54:50 err=e, url=self.proxy.url, timeout_value=conn.timeout 11:54:50 ) 11:54:50 raise 11:54:50 11:54:50 # If we're going to release the connection in ``finally:``, then 11:54:50 # the response doesn't need to know about the connection. 
Otherwise 11:54:50 # it will also try to release it and we'll have a double-release 11:54:50 # mess. 11:54:50 response_conn = conn if not release_conn else None 11:54:50 11:54:50 # Make the request on the HTTPConnection object 11:54:50 > response = self._make_request( 11:54:50 conn, 11:54:50 method, 11:54:50 url, 11:54:50 timeout=timeout_obj, 11:54:50 body=body, 11:54:50 headers=headers, 11:54:50 chunked=chunked, 11:54:50 retries=retries, 11:54:50 response_conn=response_conn, 11:54:50 preload_content=preload_content, 11:54:50 decode_content=decode_content, 11:54:50 **response_kw, 11:54:50 ) 11:54:50 11:54:50 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connectionpool.py:787: 11:54:50 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 11:54:50 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connectionpool.py:493: in _make_request 11:54:50 conn.request( 11:54:50 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:500: in request 11:54:50 self.endheaders() 11:54:50 /opt/pyenv/versions/3.11.10/lib/python3.11/http/client.py:1298: in endheaders 11:54:50 self._send_output(message_body, encode_chunked=encode_chunked) 11:54:50 /opt/pyenv/versions/3.11.10/lib/python3.11/http/client.py:1058: in _send_output 11:54:50 self.send(msg) 11:54:50 /opt/pyenv/versions/3.11.10/lib/python3.11/http/client.py:996: in send 11:54:50 self.connect() 11:54:50 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:331: in connect 11:54:50 self.sock = self._new_conn() 11:54:50 ^^^^^^^^^^^^^^^^ 11:54:50 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 11:54:50 11:54:50 self = 11:54:50 11:54:50 def _new_conn(self) -> socket.socket: 11:54:50 """Establish a socket connection and set nodelay settings on it. 11:54:50 11:54:50 :return: New socket connection. 
11:54:50 """ 11:54:50 try: 11:54:50 sock = connection.create_connection( 11:54:50 (self._dns_host, self.port), 11:54:50 self.timeout, 11:54:50 source_address=self.source_address, 11:54:50 socket_options=self.socket_options, 11:54:50 ) 11:54:50 except socket.gaierror as e: 11:54:50 raise NameResolutionError(self.host, self, e) from e 11:54:50 except SocketTimeout as e: 11:54:50 raise ConnectTimeoutError( 11:54:50 self, 11:54:50 f"Connection to {self.host} timed out. (connect timeout={self.timeout})", 11:54:50 ) from e 11:54:50 11:54:50 except OSError as e: 11:54:50 > raise NewConnectionError( 11:54:50 self, f"Failed to establish a new connection: {e}" 11:54:50 ) from e 11:54:50 E urllib3.exceptions.NewConnectionError: HTTPConnection(host='localhost', port=8191): Failed to establish a new connection: [Errno 111] Connection refused 11:54:50 11:54:50 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:219: NewConnectionError 11:54:50 11:54:50 The above exception was the direct cause of the following exception: 11:54:50 11:54:50 self = 11:54:50 request = , stream = False 11:54:50 timeout = Timeout(connect=30, read=30, total=None), verify = True, cert = None 11:54:50 proxies = OrderedDict() 11:54:50 11:54:50 def send( 11:54:50 self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 11:54:50 ): 11:54:50 """Sends PreparedRequest object. Returns Response object. 11:54:50 11:54:50 :param request: The :class:`PreparedRequest ` being sent. 11:54:50 :param stream: (optional) Whether to stream the request content. 11:54:50 :param timeout: (optional) How long to wait for the server to send 11:54:50 data before giving up, as a float, or a :ref:`(connect timeout, 11:54:50 read timeout) ` tuple. 
11:54:50 :type timeout: float or tuple or urllib3 Timeout object 11:54:50 :param verify: (optional) Either a boolean, in which case it controls whether 11:54:50 we verify the server's TLS certificate, or a string, in which case it 11:54:50 must be a path to a CA bundle to use 11:54:50 :param cert: (optional) Any user-provided SSL certificate to be trusted. 11:54:50 :param proxies: (optional) The proxies dictionary to apply to the request. 11:54:50 :rtype: requests.Response 11:54:50 """ 11:54:50 11:54:50 try: 11:54:50 conn = self.get_connection_with_tls_context( 11:54:50 request, verify, proxies=proxies, cert=cert 11:54:50 ) 11:54:50 except LocationValueError as e: 11:54:50 raise InvalidURL(e, request=request) 11:54:50 11:54:50 self.cert_verify(conn, request.url, verify, cert) 11:54:50 url = self.request_url(request, proxies) 11:54:50 self.add_headers( 11:54:50 request, 11:54:50 stream=stream, 11:54:50 timeout=timeout, 11:54:50 verify=verify, 11:54:50 cert=cert, 11:54:50 proxies=proxies, 11:54:50 ) 11:54:50 11:54:50 chunked = not (request.body is None or "Content-Length" in request.headers) 11:54:50 11:54:50 if isinstance(timeout, tuple): 11:54:50 try: 11:54:50 connect, read = timeout 11:54:50 timeout = TimeoutSauce(connect=connect, read=read) 11:54:50 except ValueError: 11:54:50 raise ValueError( 11:54:50 f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 11:54:50 f"or a single float to set both timeouts to the same value." 
11:54:50 ) 11:54:50 elif isinstance(timeout, TimeoutSauce): 11:54:50 pass 11:54:50 else: 11:54:50 timeout = TimeoutSauce(connect=timeout, read=timeout) 11:54:50 11:54:50 try: 11:54:50 > resp = conn.urlopen( 11:54:50 method=request.method, 11:54:50 url=url, 11:54:50 body=request.body, 11:54:50 headers=request.headers, 11:54:50 redirect=False, 11:54:50 assert_same_host=False, 11:54:50 preload_content=False, 11:54:50 decode_content=False, 11:54:50 retries=self.max_retries, 11:54:50 timeout=timeout, 11:54:50 chunked=chunked, 11:54:50 ) 11:54:50 11:54:50 ../.tox/tests121/lib/python3.11/site-packages/requests/adapters.py:644: 11:54:50 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 11:54:50 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connectionpool.py:841: in urlopen 11:54:50 retries = retries.increment( 11:54:50 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 11:54:50 11:54:50 self = Retry(total=0, connect=None, read=False, redirect=None, status=None) 11:54:50 method = 'GET' 11:54:50 url = '/rests/data/transportpce-portmapping:network/nodes=ROADMA01/mapping=DEG1-TTP-TXRX' 11:54:50 response = None 11:54:50 error = NewConnectionError("HTTPConnection(host='localhost', port=8191): Failed to establish a new connection: [Errno 111] Connection refused") 11:54:50 _pool = 11:54:50 _stacktrace = 11:54:50 11:54:50 def increment( 11:54:50 self, 11:54:50 method: str | None = None, 11:54:50 url: str | None = None, 11:54:50 response: BaseHTTPResponse | None = None, 11:54:50 error: Exception | None = None, 11:54:50 _pool: ConnectionPool | None = None, 11:54:50 _stacktrace: TracebackType | None = None, 11:54:50 ) -> Self: 11:54:50 """Return a new Retry object with incremented retry counters. 11:54:50 11:54:50 :param response: A response object, or None, if the server did not 11:54:50 return a response. 
11:54:50         :type response: :class:`~urllib3.response.BaseHTTPResponse`
11:54:50         :param Exception error: An error encountered during the request, or
11:54:50             None if the response was received successfully.
11:54:50 
11:54:50         :return: A new ``Retry`` object.
11:54:50         """
11:54:50         if self.total is False and error:
11:54:50             # Disabled, indicate to re-raise the error.
11:54:50             raise reraise(type(error), error, _stacktrace)
11:54:50 
11:54:50         total = self.total
11:54:50         if total is not None:
11:54:50             total -= 1
11:54:50 
11:54:50         connect = self.connect
11:54:50         read = self.read
11:54:50         redirect = self.redirect
11:54:50         status_count = self.status
11:54:50         other = self.other
11:54:50         cause = "unknown"
11:54:50         status = None
11:54:50         redirect_location = None
11:54:50 
11:54:50         if error and self._is_connection_error(error):
11:54:50             # Connect retry?
11:54:50             if connect is False:
11:54:50                 raise reraise(type(error), error, _stacktrace)
11:54:50             elif connect is not None:
11:54:50                 connect -= 1
11:54:50 
11:54:50         elif error and self._is_read_error(error):
11:54:50             # Read retry?
11:54:50             if read is False or method is None or not self._is_method_retryable(method):
11:54:50                 raise reraise(type(error), error, _stacktrace)
11:54:50             elif read is not None:
11:54:50                 read -= 1
11:54:50 
11:54:50         elif error:
11:54:50             # Other retry?
11:54:50             if other is not None:
11:54:50                 other -= 1
11:54:50 
11:54:50         elif response and response.get_redirect_location():
11:54:50             # Redirect retry?
11:54:50             if redirect is not None:
11:54:50                 redirect -= 1
11:54:50             cause = "too many redirects"
11:54:50             response_redirect_location = response.get_redirect_location()
11:54:50             if response_redirect_location:
11:54:50                 redirect_location = response_redirect_location
11:54:50             status = response.status
11:54:50 
11:54:50         else:
11:54:50             # Incrementing because of a server error like a 500 in
11:54:50             # status_forcelist and the given method is in the allowed_methods
11:54:50             cause = ResponseError.GENERIC_ERROR
11:54:50             if response and response.status:
11:54:50                 if status_count is not None:
11:54:50                     status_count -= 1
11:54:50                 cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status)
11:54:50                 status = response.status
11:54:50 
11:54:50         history = self.history + (
11:54:50             RequestHistory(method, url, error, status, redirect_location),
11:54:50         )
11:54:50 
11:54:50         new_retry = self.new(
11:54:50             total=total,
11:54:50             connect=connect,
11:54:50             read=read,
11:54:50             redirect=redirect,
11:54:50             status=status_count,
11:54:50             other=other,
11:54:50             history=history,
11:54:50         )
11:54:50 
11:54:50         if new_retry.is_exhausted():
11:54:50             reason = error or ResponseError(cause)
11:54:50 >           raise MaxRetryError(_pool, url, reason) from reason  # type: ignore[arg-type]
11:54:50             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
11:54:50 E   urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=8191): Max retries exceeded with url: /rests/data/transportpce-portmapping:network/nodes=ROADMA01/mapping=DEG1-TTP-TXRX (Caused by NewConnectionError("HTTPConnection(host='localhost', port=8191): Failed to establish a new connection: [Errno 111] Connection refused"))
11:54:50 
11:54:50 ../.tox/tests121/lib/python3.11/site-packages/urllib3/util/retry.py:535: MaxRetryError
11:54:50 
11:54:50 During handling of the above exception, another exception occurred:
11:54:50 
11:54:50 self = 
11:54:50 
11:54:50     def test_04_rdm_portmapping_DEG1_TTP_TXRX(self):
11:54:50 >       response = test_utils.get_portmapping_node_attr("ROADMA01", "mapping", "DEG1-TTP-TXRX")
11:54:50 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
11:54:50 
11:54:50 transportpce_tests/1.2.1/test01_portmapping.py:73:
11:54:50 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
11:54:50 transportpce_tests/common/test_utils.py:519: in get_portmapping_node_attr
11:54:50     response = get_request(target_url)
11:54:50     ^^^^^^^^^^^^^^^^^^^^^^^
11:54:50 transportpce_tests/common/test_utils.py:117: in get_request
11:54:50     return requests.request(
11:54:50 ../.tox/tests121/lib/python3.11/site-packages/requests/api.py:59: in request
11:54:50     return session.request(method=method, url=url, **kwargs)
11:54:50     ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
11:54:50 ../.tox/tests121/lib/python3.11/site-packages/requests/sessions.py:589: in request
11:54:50     resp = self.send(prep, **send_kwargs)
11:54:50     ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
11:54:50 ../.tox/tests121/lib/python3.11/site-packages/requests/sessions.py:703: in send
11:54:50     r = adapter.send(request, **kwargs)
11:54:50     ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
11:54:50 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
11:54:50 
11:54:50 self = 
11:54:50 request = , stream = False
11:54:50 timeout = Timeout(connect=30, read=30, total=None), verify = True, cert = None
11:54:50 proxies = OrderedDict()
11:54:50 
11:54:50     def send(
11:54:50         self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None
11:54:50     ):
11:54:50         """Sends PreparedRequest object. Returns Response object.
11:54:50 
11:54:50         :param request: The :class:`PreparedRequest ` being sent.
11:54:50         :param stream: (optional) Whether to stream the request content.
11:54:50         :param timeout: (optional) How long to wait for the server to send
11:54:50             data before giving up, as a float, or a :ref:`(connect timeout,
11:54:50             read timeout) ` tuple.
11:54:50         except (ProtocolError, OSError) as err:
11:54:50             raise ConnectionError(err, request=request)
11:54:50 
11:54:50         except MaxRetryError as e:
11:54:50             if isinstance(e.reason, ConnectTimeoutError):
11:54:50                 # TODO: Remove this in 3.0.0: see #2811
11:54:50                 if not isinstance(e.reason, NewConnectionError):
11:54:50                     raise ConnectTimeout(e, request=request)
11:54:50 
11:54:50             if isinstance(e.reason, ResponseError):
11:54:50                 raise RetryError(e, request=request)
11:54:50 
11:54:50             if isinstance(e.reason, _ProxyError):
11:54:50                 raise ProxyError(e, request=request)
11:54:50 
11:54:50             if isinstance(e.reason, _SSLError):
11:54:50                 # This branch is for urllib3 v1.22 and later.
11:54:50                 raise SSLError(e, request=request)
11:54:50 
11:54:50 >           raise ConnectionError(e, request=request)
11:54:50 E   requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=8191): Max retries exceeded with url: /rests/data/transportpce-portmapping:network/nodes=ROADMA01/mapping=DEG1-TTP-TXRX (Caused by NewConnectionError("HTTPConnection(host='localhost', port=8191): Failed to establish a new connection: [Errno 111] Connection refused"))
11:54:50 
11:54:50 ../.tox/tests121/lib/python3.11/site-packages/requests/adapters.py:677: ConnectionError
11:54:50 ----------------------------- Captured stdout call -----------------------------
11:54:50 execution of test_04_rdm_portmapping_DEG1_TTP_TXRX
11:54:50 ______ TestTransportPCEPortmapping.test_05_rdm_portmapping_SRG1_PP7_TXRX _______
11:54:50 
11:54:50 self = 
11:54:50 
11:54:50     def _new_conn(self) -> socket.socket:
11:54:50         """Establish a socket connection and set nodelay settings on it.
11:54:50 
11:54:50         :return: New socket connection.
11:54:50         """
11:54:50         try:
11:54:50 >           sock = connection.create_connection(
11:54:50                 (self._dns_host, self.port),
11:54:50                 self.timeout,
11:54:50                 source_address=self.source_address,
11:54:50                 socket_options=self.socket_options,
11:54:50             )
11:54:50 
11:54:50 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:204:
11:54:50 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
11:54:50 ../.tox/tests121/lib/python3.11/site-packages/urllib3/util/connection.py:85: in create_connection
11:54:50     raise err
11:54:50 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
11:54:50 
11:54:50 address = ('localhost', 8191), timeout = 30, source_address = None
11:54:50 socket_options = [(6, 1, 1)]
11:54:50 
11:54:50     def create_connection(
11:54:50         address: tuple[str, int],
11:54:50         timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT,
11:54:50         source_address: tuple[str, int] | None = None,
11:54:50         socket_options: _TYPE_SOCKET_OPTIONS | None = None,
11:54:50     ) -> socket.socket:
11:54:50         """Connect to *address* and return the socket object.
11:54:50 
11:54:50         Convenience function. Connect to *address* (a 2-tuple ``(host,
11:54:50         port)``) and return the socket object. Passing the optional
11:54:50         *timeout* parameter will set the timeout on the socket instance
11:54:50         before attempting to connect. If no *timeout* is supplied, the
11:54:50         global default timeout setting returned by :func:`socket.getdefaulttimeout`
11:54:50         is used. If *source_address* is set it must be a tuple of (host, port)
11:54:50         for the socket to bind as a source address before making the connection.
11:54:50         An host of '' or port 0 tells the OS to use the default.
11:54:50         """
11:54:50 
11:54:50         host, port = address
11:54:50         if host.startswith("["):
11:54:50             host = host.strip("[]")
11:54:50         err = None
11:54:50 
11:54:50         # Using the value from allowed_gai_family() in the context of getaddrinfo lets
11:54:50         # us select whether to work with IPv4 DNS records, IPv6 records, or both.
11:54:50         # The original create_connection function always returns all records.
11:54:50         family = allowed_gai_family()
11:54:50 
11:54:50         try:
11:54:50             host.encode("idna")
11:54:50         except UnicodeError:
11:54:50             raise LocationParseError(f"'{host}', label empty or too long") from None
11:54:50 
11:54:50         for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM):
11:54:50             af, socktype, proto, canonname, sa = res
11:54:50             sock = None
11:54:50             try:
11:54:50                 sock = socket.socket(af, socktype, proto)
11:54:50 
11:54:50                 # If provided, set socket level options before connecting.
11:54:50                 _set_socket_options(sock, socket_options)
11:54:50 
11:54:50                 if timeout is not _DEFAULT_TIMEOUT:
11:54:50                     sock.settimeout(timeout)
11:54:50                 if source_address:
11:54:50                     sock.bind(source_address)
11:54:50 >               sock.connect(sa)
11:54:50 E   ConnectionRefusedError: [Errno 111] Connection refused
11:54:50 
11:54:50 ../.tox/tests121/lib/python3.11/site-packages/urllib3/util/connection.py:73: ConnectionRefusedError
11:54:50 
11:54:50 The above exception was the direct cause of the following exception:
11:54:50 
11:54:50 self = 
11:54:50 method = 'GET'
11:54:50 url = '/rests/data/transportpce-portmapping:network/nodes=ROADMA01/mapping=SRG1-PP7-TXRX'
11:54:50 body = None
11:54:50 headers = {'User-Agent': 'python-requests/2.32.5', 'Accept-Encoding': 'gzip, deflate', 'Accept': 'application/json', 'Connection': 'keep-alive', 'Content-Type': 'application/json', 'Authorization': 'Basic YWRtaW46YWRtaW4='}
11:54:50 retries = Retry(total=0, connect=None, read=False, redirect=None, status=None)
11:54:50 redirect = False, assert_same_host = False
11:54:50 timeout = Timeout(connect=30, read=30, total=None), pool_timeout = None
11:54:50 release_conn = False, chunked = False, body_pos = None, preload_content = False
11:54:50 decode_content = False, response_kw = {}
11:54:50 parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/rests/data/transportpce-portmapping:network/nodes=ROADMA01/mapping=SRG1-PP7-TXRX', query=None, fragment=None)
11:54:50 destination_scheme = None, conn = None, release_this_conn = True
11:54:50 http_tunnel_required = False, err = None, clean_exit = False
11:54:50 
11:54:50     def urlopen(  # type: ignore[override]
11:54:50         self,
11:54:50         method: str,
11:54:50         url: str,
11:54:50         body: _TYPE_BODY | None = None,
11:54:50         headers: typing.Mapping[str, str] | None = None,
11:54:50         retries: Retry | bool | int | None = None,
11:54:50         redirect: bool = True,
11:54:50         assert_same_host: bool = True,
11:54:50         timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT,
11:54:50         pool_timeout: int | None = None,
11:54:50         release_conn: bool | None = None,
11:54:50         chunked: bool = False,
11:54:50         body_pos: _TYPE_BODY_POSITION | None = None,
11:54:50         preload_content: bool = True,
11:54:50         decode_content: bool = True,
11:54:50         **response_kw: typing.Any,
11:54:50     ) -> BaseHTTPResponse:
11:54:50         """
11:54:50         Get a connection from the pool and perform an HTTP request. This is the
11:54:50         lowest level call for making a request, so you'll need to specify all
11:54:50         the raw details.
11:54:50 
11:54:50         .. note::
11:54:50 
11:54:50            More commonly, it's appropriate to use a convenience method
11:54:50            such as :meth:`request`.
11:54:50 
11:54:50         .. note::
11:54:50 
11:54:50            `release_conn` will only behave as expected if
11:54:50            `preload_content=False` because we want to make
11:54:50            `preload_content=False` the default behaviour someday soon without
11:54:50            breaking backwards compatibility.
11:54:50 
11:54:50         :param method:
11:54:50             HTTP request method (such as GET, POST, PUT, etc.)
11:54:50 
11:54:50         :param url:
11:54:50             The URL to perform the request on.
11:54:50 
11:54:50         :param body:
11:54:50             Data to send in the request body, either :class:`str`, :class:`bytes`,
11:54:50             an iterable of :class:`str`/:class:`bytes`, or a file-like object.
11:54:50 
11:54:50         :param headers:
11:54:50             Dictionary of custom headers to send, such as User-Agent,
11:54:50             If-None-Match, etc. If None, pool headers are used. If provided,
11:54:50             these headers completely replace any pool-specific headers.
11:54:50 
11:54:50         :param retries:
11:54:50             Configure the number of retries to allow before raising a
11:54:50             :class:`~urllib3.exceptions.MaxRetryError` exception.
11:54:50 
11:54:50             If ``None`` (default) will retry 3 times, see ``Retry.DEFAULT``. Pass a
11:54:50             :class:`~urllib3.util.retry.Retry` object for fine-grained control
11:54:50             over different types of retries.
11:54:50             Pass an integer number to retry connection errors that many times,
11:54:50             but no other types of errors. Pass zero to never retry.
11:54:50 
11:54:50             If ``False``, then retries are disabled and any exception is raised
11:54:50             immediately. Also, instead of raising a MaxRetryError on redirects,
11:54:50             the redirect response will be returned.
11:54:50 
11:54:50         :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int.
11:54:50 
11:54:50         :param redirect:
11:54:50             If True, automatically handle redirects (status codes 301, 302,
11:54:50             303, 307, 308). Each redirect counts as a retry. Disabling retries
11:54:50             will disable redirect, too.
11:54:50 
11:54:50         :param assert_same_host:
11:54:50             If ``True``, will make sure that the host of the pool requests is
11:54:50             consistent else will raise HostChangedError. When ``False``, you can
11:54:50             use the pool on an HTTP proxy and request foreign hosts.
11:54:50 
11:54:50         :param timeout:
11:54:50             If specified, overrides the default timeout for this one
11:54:50             request. It may be a float (in seconds) or an instance of
11:54:50             :class:`urllib3.util.Timeout`.
11:54:50 
11:54:50         :param pool_timeout:
11:54:50             If set and the pool is set to block=True, then this method will
11:54:50             block for ``pool_timeout`` seconds and raise EmptyPoolError if no
11:54:50             connection is available within the time period.
11:54:50 
11:54:50         :param bool preload_content:
11:54:50             If True, the response's body will be preloaded into memory.
11:54:50 
11:54:50         :param bool decode_content:
11:54:50             If True, will attempt to decode the body based on the
11:54:50             'content-encoding' header.
11:54:50 
11:54:50         :param release_conn:
11:54:50             If False, then the urlopen call will not release the connection
11:54:50             back into the pool once a response is received (but will release if
11:54:50             you read the entire contents of the response such as when
11:54:50             `preload_content=True`). This is useful if you're not preloading
11:54:50             the response's content immediately. You will need to call
11:54:50             ``r.release_conn()`` on the response ``r`` to return the connection
11:54:50             back into the pool. If None, it takes the value of ``preload_content``
11:54:50             which defaults to ``True``.
11:54:50 
11:54:50         :param bool chunked:
11:54:50             If True, urllib3 will send the body using chunked transfer
11:54:50             encoding. Otherwise, urllib3 will send the body using the standard
11:54:50             content-length form. Defaults to False.
11:54:50 
11:54:50         :param int body_pos:
11:54:50             Position to seek to in file-like body in the event of a retry or
11:54:50             redirect. Typically this won't need to be set because urllib3 will
11:54:50             auto-populate the value when needed.
11:54:50         """
11:54:50         parsed_url = parse_url(url)
11:54:50         destination_scheme = parsed_url.scheme
11:54:50 
11:54:50         if headers is None:
11:54:50             headers = self.headers
11:54:50 
11:54:50         if not isinstance(retries, Retry):
11:54:50             retries = Retry.from_int(retries, redirect=redirect, default=self.retries)
11:54:50 
11:54:50         if release_conn is None:
11:54:50             release_conn = preload_content
11:54:50 
11:54:50         # Check host
11:54:50         if assert_same_host and not self.is_same_host(url):
11:54:50             raise HostChangedError(self, url, retries)
11:54:50 
11:54:50         # Ensure that the URL we're connecting to is properly encoded
11:54:50         if url.startswith("/"):
11:54:50             url = to_str(_encode_target(url))
11:54:50         else:
11:54:50             url = to_str(parsed_url.url)
11:54:50 
11:54:50         conn = None
11:54:50 
11:54:50         # Track whether `conn` needs to be released before
11:54:50         # returning/raising/recursing. Update this variable if necessary, and
11:54:50         # leave `release_conn` constant throughout the function. That way, if
11:54:50         # the function recurses, the original value of `release_conn` will be
11:54:50         # passed down into the recursive call, and its value will be respected.
11:54:50         #
11:54:50         # See issue #651 [1] for details.
11:54:50         #
11:54:50         # [1]
11:54:50         release_this_conn = release_conn
11:54:50 
11:54:50         http_tunnel_required = connection_requires_http_tunnel(
11:54:50             self.proxy, self.proxy_config, destination_scheme
11:54:50         )
11:54:50 
11:54:50         # Merge the proxy headers. Only done when not using HTTP CONNECT. We
11:54:50         # have to copy the headers dict so we can safely change it without those
11:54:50         # changes being reflected in anyone else's copy.
11:54:50         if not http_tunnel_required:
11:54:50             headers = headers.copy()  # type: ignore[attr-defined]
11:54:50             headers.update(self.proxy_headers)  # type: ignore[union-attr]
11:54:50 
11:54:50         # Must keep the exception bound to a separate variable or else Python 3
11:54:50         # complains about UnboundLocalError.
11:54:50         err = None
11:54:50 
11:54:50         # Keep track of whether we cleanly exited the except block. This
11:54:50         # ensures we do proper cleanup in finally.
11:54:50         clean_exit = False
11:54:50 
11:54:50         # Rewind body position, if needed. Record current position
11:54:50         # for future rewinds in the event of a redirect/retry.
11:54:50         body_pos = set_file_position(body, body_pos)
11:54:50 
11:54:50         try:
11:54:50             # Request a connection from the queue.
11:54:50             timeout_obj = self._get_timeout(timeout)
11:54:50             conn = self._get_conn(timeout=pool_timeout)
11:54:50 
11:54:50             conn.timeout = timeout_obj.connect_timeout  # type: ignore[assignment]
11:54:50 
11:54:50             # Is this a closed/new connection that requires CONNECT tunnelling?
11:54:50             if self.proxy is not None and http_tunnel_required and conn.is_closed:
11:54:50                 try:
11:54:50                     self._prepare_proxy(conn)
11:54:50                 except (BaseSSLError, OSError, SocketTimeout) as e:
11:54:50                     self._raise_timeout(
11:54:50                         err=e, url=self.proxy.url, timeout_value=conn.timeout
11:54:50                     )
11:54:50                     raise
11:54:50 
11:54:50             # If we're going to release the connection in ``finally:``, then
11:54:50             # the response doesn't need to know about the connection. Otherwise
11:54:50             # it will also try to release it and we'll have a double-release
11:54:50             # mess.
11:54:50             response_conn = conn if not release_conn else None
11:54:50 
11:54:50             # Make the request on the HTTPConnection object
11:54:50 >           response = self._make_request(
11:54:50                 conn,
11:54:50                 method,
11:54:50                 url,
11:54:50                 timeout=timeout_obj,
11:54:50                 body=body,
11:54:50                 headers=headers,
11:54:50                 chunked=chunked,
11:54:50                 retries=retries,
11:54:50                 response_conn=response_conn,
11:54:50                 preload_content=preload_content,
11:54:50                 decode_content=decode_content,
11:54:50                 **response_kw,
11:54:50             )
11:54:50 
11:54:50 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connectionpool.py:787:
11:54:50 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
11:54:50 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connectionpool.py:493: in _make_request
11:54:50     conn.request(
11:54:50 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:500: in request
11:54:50     self.endheaders()
11:54:50 /opt/pyenv/versions/3.11.10/lib/python3.11/http/client.py:1298: in endheaders
11:54:50     self._send_output(message_body, encode_chunked=encode_chunked)
11:54:50 /opt/pyenv/versions/3.11.10/lib/python3.11/http/client.py:1058: in _send_output
11:54:50     self.send(msg)
11:54:50 /opt/pyenv/versions/3.11.10/lib/python3.11/http/client.py:996: in send
11:54:50     self.connect()
11:54:50 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:331: in connect
11:54:50     self.sock = self._new_conn()
11:54:50     ^^^^^^^^^^^^^^^^
11:54:50 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
11:54:50 
11:54:50 self = 
11:54:50 
11:54:50     def _new_conn(self) -> socket.socket:
11:54:50         """Establish a socket connection and set nodelay settings on it.
11:54:50 
11:54:50         :return: New socket connection.
11:54:50         """
11:54:50         try:
11:54:50             sock = connection.create_connection(
11:54:50                 (self._dns_host, self.port),
11:54:50                 self.timeout,
11:54:50                 source_address=self.source_address,
11:54:50                 socket_options=self.socket_options,
11:54:50             )
11:54:50         except socket.gaierror as e:
11:54:50             raise NameResolutionError(self.host, self, e) from e
11:54:50         except SocketTimeout as e:
11:54:50             raise ConnectTimeoutError(
11:54:50                 self,
11:54:50                 f"Connection to {self.host} timed out. (connect timeout={self.timeout})",
11:54:50             ) from e
11:54:50 
11:54:50         except OSError as e:
11:54:50 >           raise NewConnectionError(
11:54:50                 self, f"Failed to establish a new connection: {e}"
11:54:50             ) from e
11:54:50 E   urllib3.exceptions.NewConnectionError: HTTPConnection(host='localhost', port=8191): Failed to establish a new connection: [Errno 111] Connection refused
11:54:50 
11:54:50 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:219: NewConnectionError
11:54:50 
11:54:50 The above exception was the direct cause of the following exception:
11:54:50 
11:54:50 self = 
11:54:50 request = , stream = False
11:54:50 timeout = Timeout(connect=30, read=30, total=None), verify = True, cert = None
11:54:50 proxies = OrderedDict()
11:54:50 
11:54:50     def send(
11:54:50         self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None
11:54:50     ):
11:54:50         """Sends PreparedRequest object. Returns Response object.
11:54:50 
11:54:50         :param request: The :class:`PreparedRequest ` being sent.
11:54:50         :param stream: (optional) Whether to stream the request content.
11:54:50         :param timeout: (optional) How long to wait for the server to send
11:54:50             data before giving up, as a float, or a :ref:`(connect timeout,
11:54:50             read timeout) ` tuple.
11:54:50 :type timeout: float or tuple or urllib3 Timeout object 11:54:50 :param verify: (optional) Either a boolean, in which case it controls whether 11:54:50 we verify the server's TLS certificate, or a string, in which case it 11:54:50 must be a path to a CA bundle to use 11:54:50 :param cert: (optional) Any user-provided SSL certificate to be trusted. 11:54:50 :param proxies: (optional) The proxies dictionary to apply to the request. 11:54:50 :rtype: requests.Response 11:54:50 """ 11:54:50 11:54:50 try: 11:54:50 conn = self.get_connection_with_tls_context( 11:54:50 request, verify, proxies=proxies, cert=cert 11:54:50 ) 11:54:50 except LocationValueError as e: 11:54:50 raise InvalidURL(e, request=request) 11:54:50 11:54:50 self.cert_verify(conn, request.url, verify, cert) 11:54:50 url = self.request_url(request, proxies) 11:54:50 self.add_headers( 11:54:50 request, 11:54:50 stream=stream, 11:54:50 timeout=timeout, 11:54:50 verify=verify, 11:54:50 cert=cert, 11:54:50 proxies=proxies, 11:54:50 ) 11:54:50 11:54:50 chunked = not (request.body is None or "Content-Length" in request.headers) 11:54:50 11:54:50 if isinstance(timeout, tuple): 11:54:50 try: 11:54:50 connect, read = timeout 11:54:50 timeout = TimeoutSauce(connect=connect, read=read) 11:54:50 except ValueError: 11:54:50 raise ValueError( 11:54:50 f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 11:54:50 f"or a single float to set both timeouts to the same value." 
11:54:50 ) 11:54:50 elif isinstance(timeout, TimeoutSauce): 11:54:50 pass 11:54:50 else: 11:54:50 timeout = TimeoutSauce(connect=timeout, read=timeout) 11:54:50 11:54:50 try: 11:54:50 > resp = conn.urlopen( 11:54:50 method=request.method, 11:54:50 url=url, 11:54:50 body=request.body, 11:54:50 headers=request.headers, 11:54:50 redirect=False, 11:54:50 assert_same_host=False, 11:54:50 preload_content=False, 11:54:50 decode_content=False, 11:54:50 retries=self.max_retries, 11:54:50 timeout=timeout, 11:54:50 chunked=chunked, 11:54:50 ) 11:54:50 11:54:50 ../.tox/tests121/lib/python3.11/site-packages/requests/adapters.py:644: 11:54:50 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 11:54:50 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connectionpool.py:841: in urlopen 11:54:50 retries = retries.increment( 11:54:50 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 11:54:50 11:54:50 self = Retry(total=0, connect=None, read=False, redirect=None, status=None) 11:54:50 method = 'GET' 11:54:50 url = '/rests/data/transportpce-portmapping:network/nodes=ROADMA01/mapping=SRG1-PP7-TXRX' 11:54:50 response = None 11:54:50 error = NewConnectionError("HTTPConnection(host='localhost', port=8191): Failed to establish a new connection: [Errno 111] Connection refused") 11:54:50 _pool = 11:54:50 _stacktrace = 11:54:50 11:54:50 def increment( 11:54:50 self, 11:54:50 method: str | None = None, 11:54:50 url: str | None = None, 11:54:50 response: BaseHTTPResponse | None = None, 11:54:50 error: Exception | None = None, 11:54:50 _pool: ConnectionPool | None = None, 11:54:50 _stacktrace: TracebackType | None = None, 11:54:50 ) -> Self: 11:54:50 """Return a new Retry object with incremented retry counters. 11:54:50 11:54:50 :param response: A response object, or None, if the server did not 11:54:50 return a response. 
11:54:50 :type response: :class:`~urllib3.response.BaseHTTPResponse` 11:54:50 :param Exception error: An error encountered during the request, or 11:54:50 None if the response was received successfully. 11:54:50 11:54:50 :return: A new ``Retry`` object. 11:54:50 """ 11:54:50 if self.total is False and error: 11:54:50 # Disabled, indicate to re-raise the error. 11:54:50 raise reraise(type(error), error, _stacktrace) 11:54:50 11:54:50 total = self.total 11:54:50 if total is not None: 11:54:50 total -= 1 11:54:50 11:54:50 connect = self.connect 11:54:50 read = self.read 11:54:50 redirect = self.redirect 11:54:50 status_count = self.status 11:54:50 other = self.other 11:54:50 cause = "unknown" 11:54:50 status = None 11:54:50 redirect_location = None 11:54:50 11:54:50 if error and self._is_connection_error(error): 11:54:50 # Connect retry? 11:54:50 if connect is False: 11:54:50 raise reraise(type(error), error, _stacktrace) 11:54:50 elif connect is not None: 11:54:50 connect -= 1 11:54:50 11:54:50 elif error and self._is_read_error(error): 11:54:50 # Read retry? 11:54:50 if read is False or method is None or not self._is_method_retryable(method): 11:54:50 raise reraise(type(error), error, _stacktrace) 11:54:50 elif read is not None: 11:54:50 read -= 1 11:54:50 11:54:50 elif error: 11:54:50 # Other retry? 11:54:50 if other is not None: 11:54:50 other -= 1 11:54:50 11:54:50 elif response and response.get_redirect_location(): 11:54:50 # Redirect retry? 
11:54:50 if redirect is not None: 11:54:50 redirect -= 1 11:54:50 cause = "too many redirects" 11:54:50 response_redirect_location = response.get_redirect_location() 11:54:50 if response_redirect_location: 11:54:50 redirect_location = response_redirect_location 11:54:50 status = response.status 11:54:50 11:54:50 else: 11:54:50 # Incrementing because of a server error like a 500 in 11:54:50 # status_forcelist and the given method is in the allowed_methods 11:54:50 cause = ResponseError.GENERIC_ERROR 11:54:50 if response and response.status: 11:54:50 if status_count is not None: 11:54:50 status_count -= 1 11:54:50 cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status) 11:54:50 status = response.status 11:54:50 11:54:50 history = self.history + ( 11:54:50 RequestHistory(method, url, error, status, redirect_location), 11:54:50 ) 11:54:50 11:54:50 new_retry = self.new( 11:54:50 total=total, 11:54:50 connect=connect, 11:54:50 read=read, 11:54:50 redirect=redirect, 11:54:50 status=status_count, 11:54:50 other=other, 11:54:50 history=history, 11:54:50 ) 11:54:50 11:54:50 if new_retry.is_exhausted(): 11:54:50 reason = error or ResponseError(cause) 11:54:50 > raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type] 11:54:50 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 11:54:50 E urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=8191): Max retries exceeded with url: /rests/data/transportpce-portmapping:network/nodes=ROADMA01/mapping=SRG1-PP7-TXRX (Caused by NewConnectionError("HTTPConnection(host='localhost', port=8191): Failed to establish a new connection: [Errno 111] Connection refused")) 11:54:50 11:54:50 ../.tox/tests121/lib/python3.11/site-packages/urllib3/util/retry.py:535: MaxRetryError 11:54:50 11:54:50 During handling of the above exception, another exception occurred: 11:54:50 11:54:50 self = 11:54:50 11:54:50 def test_05_rdm_portmapping_SRG1_PP7_TXRX(self): 11:54:50 > response = 
test_utils.get_portmapping_node_attr("ROADMA01", "mapping", "SRG1-PP7-TXRX") 11:54:50 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 11:54:50 11:54:50 transportpce_tests/1.2.1/test01_portmapping.py:82: 11:54:50 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 11:54:50 transportpce_tests/common/test_utils.py:519: in get_portmapping_node_attr 11:54:50 response = get_request(target_url) 11:54:50 ^^^^^^^^^^^^^^^^^^^^^^^ 11:54:50 transportpce_tests/common/test_utils.py:117: in get_request 11:54:50 return requests.request( 11:54:50 ../.tox/tests121/lib/python3.11/site-packages/requests/api.py:59: in request 11:54:50 return session.request(method=method, url=url, **kwargs) 11:54:50 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 11:54:50 ../.tox/tests121/lib/python3.11/site-packages/requests/sessions.py:589: in request 11:54:50 resp = self.send(prep, **send_kwargs) 11:54:50 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 11:54:50 ../.tox/tests121/lib/python3.11/site-packages/requests/sessions.py:703: in send 11:54:50 r = adapter.send(request, **kwargs) 11:54:50 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 11:54:50 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 11:54:50 11:54:50 self = 11:54:50 request = , stream = False 11:54:50 timeout = Timeout(connect=30, read=30, total=None), verify = True, cert = None 11:54:50 proxies = OrderedDict() 11:54:50 11:54:50 def send( 11:54:50 self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 11:54:50 ): 11:54:50 """Sends PreparedRequest object. Returns Response object. 11:54:50 11:54:50 :param request: The :class:`PreparedRequest ` being sent. 11:54:50 :param stream: (optional) Whether to stream the request content. 11:54:50 :param timeout: (optional) How long to wait for the server to send 11:54:50 data before giving up, as a float, or a :ref:`(connect timeout, 11:54:50 read timeout) ` tuple. 
11:54:50         :type timeout: float or tuple or urllib3 Timeout object
11:54:50         :param verify: (optional) Either a boolean, in which case it controls whether
11:54:50             we verify the server's TLS certificate, or a string, in which case it
11:54:50             must be a path to a CA bundle to use
11:54:50         :param cert: (optional) Any user-provided SSL certificate to be trusted.
11:54:50         :param proxies: (optional) The proxies dictionary to apply to the request.
11:54:50         :rtype: requests.Response
11:54:50         """
11:54:50 
11:54:50         try:
11:54:50             conn = self.get_connection_with_tls_context(
11:54:50                 request, verify, proxies=proxies, cert=cert
11:54:50             )
11:54:50         except LocationValueError as e:
11:54:50             raise InvalidURL(e, request=request)
11:54:50 
11:54:50         self.cert_verify(conn, request.url, verify, cert)
11:54:50         url = self.request_url(request, proxies)
11:54:50         self.add_headers(
11:54:50             request,
11:54:50             stream=stream,
11:54:50             timeout=timeout,
11:54:50             verify=verify,
11:54:50             cert=cert,
11:54:50             proxies=proxies,
11:54:50         )
11:54:50 
11:54:50         chunked = not (request.body is None or "Content-Length" in request.headers)
11:54:50 
11:54:50         if isinstance(timeout, tuple):
11:54:50             try:
11:54:50                 connect, read = timeout
11:54:50                 timeout = TimeoutSauce(connect=connect, read=read)
11:54:50             except ValueError:
11:54:50                 raise ValueError(
11:54:50                     f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, "
11:54:50                     f"or a single float to set both timeouts to the same value."
11:54:50                 )
11:54:50         elif isinstance(timeout, TimeoutSauce):
11:54:50             pass
11:54:50         else:
11:54:50             timeout = TimeoutSauce(connect=timeout, read=timeout)
11:54:50 
11:54:50         try:
11:54:50             resp = conn.urlopen(
11:54:50                 method=request.method,
11:54:50                 url=url,
11:54:50                 body=request.body,
11:54:50                 headers=request.headers,
11:54:50                 redirect=False,
11:54:50                 assert_same_host=False,
11:54:50                 preload_content=False,
11:54:50                 decode_content=False,
11:54:50                 retries=self.max_retries,
11:54:50                 timeout=timeout,
11:54:50                 chunked=chunked,
11:54:50             )
11:54:50 
11:54:50         except (ProtocolError, OSError) as err:
11:54:50             raise ConnectionError(err, request=request)
11:54:50 
11:54:50         except MaxRetryError as e:
11:54:50             if isinstance(e.reason, ConnectTimeoutError):
11:54:50                 # TODO: Remove this in 3.0.0: see #2811
11:54:50                 if not isinstance(e.reason, NewConnectionError):
11:54:50                     raise ConnectTimeout(e, request=request)
11:54:50 
11:54:50             if isinstance(e.reason, ResponseError):
11:54:50                 raise RetryError(e, request=request)
11:54:50 
11:54:50             if isinstance(e.reason, _ProxyError):
11:54:50                 raise ProxyError(e, request=request)
11:54:50 
11:54:50             if isinstance(e.reason, _SSLError):
11:54:50                 # This branch is for urllib3 v1.22 and later.
11:54:50                 raise SSLError(e, request=request)
11:54:50 
11:54:50 >           raise ConnectionError(e, request=request)
11:54:50 E           requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=8191): Max retries exceeded with url: /rests/data/transportpce-portmapping:network/nodes=ROADMA01/mapping=SRG1-PP7-TXRX (Caused by NewConnectionError("HTTPConnection(host='localhost', port=8191): Failed to establish a new connection: [Errno 111] Connection refused"))
11:54:50 
11:54:50 ../.tox/tests121/lib/python3.11/site-packages/requests/adapters.py:677: ConnectionError
11:54:50 ----------------------------- Captured stdout call -----------------------------
11:54:50 execution of test_05_rdm_portmapping_SRG1_PP7_TXRX
11:54:50 ______ TestTransportPCEPortmapping.test_06_rdm_portmapping_SRG3_PP1_TXRX _______
11:54:50 
11:54:50 self = 
11:54:50 
11:54:50     def _new_conn(self) -> socket.socket:
11:54:50         """Establish a socket connection and set nodelay settings on it.
11:54:50 
11:54:50         :return: New socket connection.
11:54:50         """
11:54:50         try:
11:54:50 >           sock = connection.create_connection(
11:54:50                 (self._dns_host, self.port),
11:54:50                 self.timeout,
11:54:50                 source_address=self.source_address,
11:54:50                 socket_options=self.socket_options,
11:54:50             )
11:54:50 
11:54:50 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:204: 
11:54:50 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
11:54:50 ../.tox/tests121/lib/python3.11/site-packages/urllib3/util/connection.py:85: in create_connection
11:54:50     raise err
11:54:50 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
11:54:50 
11:54:50 address = ('localhost', 8191), timeout = 30, source_address = None
11:54:50 socket_options = [(6, 1, 1)]
11:54:50 
11:54:50     def create_connection(
11:54:50         address: tuple[str, int],
11:54:50         timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT,
11:54:50         source_address: tuple[str, int] | None = None,
11:54:50         socket_options: _TYPE_SOCKET_OPTIONS | None = None,
11:54:50     ) -> socket.socket:
11:54:50         """Connect to *address* and return the socket object.
11:54:50 
11:54:50         Convenience function.  Connect to *address* (a 2-tuple ``(host,
11:54:50         port)``) and return the socket object.  Passing the optional
11:54:50         *timeout* parameter will set the timeout on the socket instance
11:54:50         before attempting to connect.  If no *timeout* is supplied, the
11:54:50         global default timeout setting returned by :func:`socket.getdefaulttimeout`
11:54:50         is used.  If *source_address* is set it must be a tuple of (host, port)
11:54:50         for the socket to bind as a source address before making the connection.
11:54:50         An host of '' or port 0 tells the OS to use the default.
11:54:50         """
11:54:50 
11:54:50         host, port = address
11:54:50         if host.startswith("["):
11:54:50             host = host.strip("[]")
11:54:50         err = None
11:54:50 
11:54:50         # Using the value from allowed_gai_family() in the context of getaddrinfo lets
11:54:50         # us select whether to work with IPv4 DNS records, IPv6 records, or both.
11:54:50         # The original create_connection function always returns all records.
11:54:50         family = allowed_gai_family()
11:54:50 
11:54:50         try:
11:54:50             host.encode("idna")
11:54:50         except UnicodeError:
11:54:50             raise LocationParseError(f"'{host}', label empty or too long") from None
11:54:50 
11:54:50         for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM):
11:54:50             af, socktype, proto, canonname, sa = res
11:54:50             sock = None
11:54:50             try:
11:54:50                 sock = socket.socket(af, socktype, proto)
11:54:50 
11:54:50                 # If provided, set socket level options before connecting.
11:54:50                 _set_socket_options(sock, socket_options)
11:54:50 
11:54:50                 if timeout is not _DEFAULT_TIMEOUT:
11:54:50                     sock.settimeout(timeout)
11:54:50                 if source_address:
11:54:50                     sock.bind(source_address)
11:54:50 >               sock.connect(sa)
11:54:50 E               ConnectionRefusedError: [Errno 111] Connection refused
11:54:50 
11:54:50 ../.tox/tests121/lib/python3.11/site-packages/urllib3/util/connection.py:73: ConnectionRefusedError
11:54:50 
11:54:50 The above exception was the direct cause of the following exception:
11:54:50 
11:54:50 self = 
11:54:50 method = 'GET'
11:54:50 url = '/rests/data/transportpce-portmapping:network/nodes=ROADMA01/mapping=SRG3-PP1-TXRX'
11:54:50 body = None
11:54:50 headers = {'User-Agent': 'python-requests/2.32.5', 'Accept-Encoding': 'gzip, deflate', 'Accept': 'application/json', 'Connection': 'keep-alive', 'Content-Type': 'application/json', 'Authorization': 'Basic YWRtaW46YWRtaW4='}
11:54:50 retries = Retry(total=0, connect=None, read=False, redirect=None, status=None)
11:54:50 redirect = False, assert_same_host = False
11:54:50 timeout = Timeout(connect=30, read=30, total=None), pool_timeout = None
11:54:50 release_conn = False, chunked = False, body_pos = None, preload_content = False
11:54:50 decode_content = False, response_kw = {}
11:54:50 parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/rests/data/transportpce-portmapping:network/nodes=ROADMA01/mapping=SRG3-PP1-TXRX', query=None, fragment=None)
11:54:50 destination_scheme = None, conn = None, release_this_conn = True
11:54:50 http_tunnel_required = False, err = None, clean_exit = False
11:54:50 
11:54:50     def urlopen(  # type: ignore[override]
11:54:50         self,
11:54:50         method: str,
11:54:50         url: str,
11:54:50         body: _TYPE_BODY | None = None,
11:54:50         headers: typing.Mapping[str, str] | None = None,
11:54:50         retries: Retry | bool | int | None = None,
11:54:50         redirect: bool = True,
11:54:50         assert_same_host: bool = True,
11:54:50         timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT,
11:54:50         pool_timeout: int | None = None,
11:54:50         release_conn: bool | None = None,
11:54:50         chunked: bool = False,
11:54:50         body_pos: _TYPE_BODY_POSITION | None = None,
11:54:50         preload_content: bool = True,
11:54:50         decode_content: bool = True,
11:54:50         **response_kw: typing.Any,
11:54:50     ) -> BaseHTTPResponse:
11:54:50         """
11:54:50         Get a connection from the pool and perform an HTTP request. This is the
11:54:50         lowest level call for making a request, so you'll need to specify all
11:54:50         the raw details.
11:54:50 
11:54:50         .. note::
11:54:50 
11:54:50            More commonly, it's appropriate to use a convenience method
11:54:50            such as :meth:`request`.
11:54:50 
11:54:50         .. note::
11:54:50 
11:54:50            `release_conn` will only behave as expected if
11:54:50            `preload_content=False` because we want to make
11:54:50            `preload_content=False` the default behaviour someday soon without
11:54:50            breaking backwards compatibility.
11:54:50 
11:54:50         :param method:
11:54:50             HTTP request method (such as GET, POST, PUT, etc.)
11:54:50 
11:54:50         :param url:
11:54:50             The URL to perform the request on.
11:54:50 
11:54:50         :param body:
11:54:50             Data to send in the request body, either :class:`str`, :class:`bytes`,
11:54:50             an iterable of :class:`str`/:class:`bytes`, or a file-like object.
11:54:50 
11:54:50         :param headers:
11:54:50             Dictionary of custom headers to send, such as User-Agent,
11:54:50             If-None-Match, etc. If None, pool headers are used. If provided,
11:54:50             these headers completely replace any pool-specific headers.
11:54:50 
11:54:50         :param retries:
11:54:50             Configure the number of retries to allow before raising a
11:54:50             :class:`~urllib3.exceptions.MaxRetryError` exception.
11:54:50 
11:54:50             If ``None`` (default) will retry 3 times, see ``Retry.DEFAULT``. Pass a
11:54:50             :class:`~urllib3.util.retry.Retry` object for fine-grained control
11:54:50             over different types of retries.
11:54:50             Pass an integer number to retry connection errors that many times,
11:54:50             but no other types of errors. Pass zero to never retry.
11:54:50 
11:54:50             If ``False``, then retries are disabled and any exception is raised
11:54:50             immediately. Also, instead of raising a MaxRetryError on redirects,
11:54:50             the redirect response will be returned.
11:54:50 
11:54:50         :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int.
11:54:50 
11:54:50         :param redirect:
11:54:50             If True, automatically handle redirects (status codes 301, 302,
11:54:50             303, 307, 308). Each redirect counts as a retry. Disabling retries
11:54:50             will disable redirect, too.
11:54:50 
11:54:50         :param assert_same_host:
11:54:50             If ``True``, will make sure that the host of the pool requests is
11:54:50             consistent else will raise HostChangedError. When ``False``, you can
11:54:50             use the pool on an HTTP proxy and request foreign hosts.
11:54:50 
11:54:50         :param timeout:
11:54:50             If specified, overrides the default timeout for this one
11:54:50             request. It may be a float (in seconds) or an instance of
11:54:50             :class:`urllib3.util.Timeout`.
11:54:50 
11:54:50         :param pool_timeout:
11:54:50             If set and the pool is set to block=True, then this method will
11:54:50             block for ``pool_timeout`` seconds and raise EmptyPoolError if no
11:54:50             connection is available within the time period.
11:54:50 
11:54:50         :param bool preload_content:
11:54:50             If True, the response's body will be preloaded into memory.
11:54:50 
11:54:50         :param bool decode_content:
11:54:50             If True, will attempt to decode the body based on the
11:54:50             'content-encoding' header.
11:54:50 
11:54:50         :param release_conn:
11:54:50             If False, then the urlopen call will not release the connection
11:54:50             back into the pool once a response is received (but will release if
11:54:50             you read the entire contents of the response such as when
11:54:50             `preload_content=True`). This is useful if you're not preloading
11:54:50             the response's content immediately. You will need to call
11:54:50             ``r.release_conn()`` on the response ``r`` to return the connection
11:54:50             back into the pool. If None, it takes the value of ``preload_content``
11:54:50             which defaults to ``True``.
11:54:50 
11:54:50         :param bool chunked:
11:54:50             If True, urllib3 will send the body using chunked transfer
11:54:50             encoding. Otherwise, urllib3 will send the body using the standard
11:54:50             content-length form. Defaults to False.
11:54:50 
11:54:50         :param int body_pos:
11:54:50             Position to seek to in file-like body in the event of a retry or
11:54:50             redirect. Typically this won't need to be set because urllib3 will
11:54:50             auto-populate the value when needed.
11:54:50         """
11:54:50         parsed_url = parse_url(url)
11:54:50         destination_scheme = parsed_url.scheme
11:54:50 
11:54:50         if headers is None:
11:54:50             headers = self.headers
11:54:50 
11:54:50         if not isinstance(retries, Retry):
11:54:50             retries = Retry.from_int(retries, redirect=redirect, default=self.retries)
11:54:50 
11:54:50         if release_conn is None:
11:54:50             release_conn = preload_content
11:54:50 
11:54:50         # Check host
11:54:50         if assert_same_host and not self.is_same_host(url):
11:54:50             raise HostChangedError(self, url, retries)
11:54:50 
11:54:50         # Ensure that the URL we're connecting to is properly encoded
11:54:50         if url.startswith("/"):
11:54:50             url = to_str(_encode_target(url))
11:54:50         else:
11:54:50             url = to_str(parsed_url.url)
11:54:50 
11:54:50         conn = None
11:54:50 
11:54:50         # Track whether `conn` needs to be released before
11:54:50         # returning/raising/recursing. Update this variable if necessary, and
11:54:50         # leave `release_conn` constant throughout the function. That way, if
11:54:50         # the function recurses, the original value of `release_conn` will be
11:54:50         # passed down into the recursive call, and its value will be respected.
11:54:50         #
11:54:50         # See issue #651 [1] for details.
11:54:50         #
11:54:50         # [1] 
11:54:50         release_this_conn = release_conn
11:54:50 
11:54:50         http_tunnel_required = connection_requires_http_tunnel(
11:54:50             self.proxy, self.proxy_config, destination_scheme
11:54:50         )
11:54:50 
11:54:50         # Merge the proxy headers. Only done when not using HTTP CONNECT. We
11:54:50         # have to copy the headers dict so we can safely change it without those
11:54:50         # changes being reflected in anyone else's copy.
11:54:50         if not http_tunnel_required:
11:54:50             headers = headers.copy()  # type: ignore[attr-defined]
11:54:50             headers.update(self.proxy_headers)  # type: ignore[union-attr]
11:54:50 
11:54:50         # Must keep the exception bound to a separate variable or else Python 3
11:54:50         # complains about UnboundLocalError.
11:54:50         err = None
11:54:50 
11:54:50         # Keep track of whether we cleanly exited the except block. This
11:54:50         # ensures we do proper cleanup in finally.
11:54:50         clean_exit = False
11:54:50 
11:54:50         # Rewind body position, if needed. Record current position
11:54:50         # for future rewinds in the event of a redirect/retry.
11:54:50         body_pos = set_file_position(body, body_pos)
11:54:50 
11:54:50         try:
11:54:50             # Request a connection from the queue.
11:54:50             timeout_obj = self._get_timeout(timeout)
11:54:50             conn = self._get_conn(timeout=pool_timeout)
11:54:50 
11:54:50             conn.timeout = timeout_obj.connect_timeout  # type: ignore[assignment]
11:54:50 
11:54:50             # Is this a closed/new connection that requires CONNECT tunnelling?
11:54:50             if self.proxy is not None and http_tunnel_required and conn.is_closed:
11:54:50                 try:
11:54:50                     self._prepare_proxy(conn)
11:54:50                 except (BaseSSLError, OSError, SocketTimeout) as e:
11:54:50                     self._raise_timeout(
11:54:50                         err=e, url=self.proxy.url, timeout_value=conn.timeout
11:54:50                     )
11:54:50                     raise
11:54:50 
11:54:50             # If we're going to release the connection in ``finally:``, then
11:54:50             # the response doesn't need to know about the connection. Otherwise
11:54:50             # it will also try to release it and we'll have a double-release
11:54:50             # mess.
11:54:50             response_conn = conn if not release_conn else None
11:54:50 
11:54:50             # Make the request on the HTTPConnection object
11:54:50 >           response = self._make_request(
11:54:50                 conn,
11:54:50                 method,
11:54:50                 url,
11:54:50                 timeout=timeout_obj,
11:54:50                 body=body,
11:54:50                 headers=headers,
11:54:50                 chunked=chunked,
11:54:50                 retries=retries,
11:54:50                 response_conn=response_conn,
11:54:50                 preload_content=preload_content,
11:54:50                 decode_content=decode_content,
11:54:50                 **response_kw,
11:54:50             )
11:54:50 
11:54:50 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connectionpool.py:787: 
11:54:50 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
11:54:50 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connectionpool.py:493: in _make_request
11:54:50     conn.request(
11:54:50 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:500: in request
11:54:50     self.endheaders()
11:54:50 /opt/pyenv/versions/3.11.10/lib/python3.11/http/client.py:1298: in endheaders
11:54:50     self._send_output(message_body, encode_chunked=encode_chunked)
11:54:50 /opt/pyenv/versions/3.11.10/lib/python3.11/http/client.py:1058: in _send_output
11:54:50     self.send(msg)
11:54:50 /opt/pyenv/versions/3.11.10/lib/python3.11/http/client.py:996: in send
11:54:50     self.connect()
11:54:50 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:331: in connect
11:54:50     self.sock = self._new_conn()
11:54:50                 ^^^^^^^^^^^^^^^^
11:54:50 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
11:54:50 
11:54:50 self = 
11:54:50 
11:54:50     def _new_conn(self) -> socket.socket:
11:54:50         """Establish a socket connection and set nodelay settings on it.
11:54:50 
11:54:50         :return: New socket connection.
11:54:50         """
11:54:50         try:
11:54:50             sock = connection.create_connection(
11:54:50                 (self._dns_host, self.port),
11:54:50                 self.timeout,
11:54:50                 source_address=self.source_address,
11:54:50                 socket_options=self.socket_options,
11:54:50             )
11:54:50         except socket.gaierror as e:
11:54:50             raise NameResolutionError(self.host, self, e) from e
11:54:50         except SocketTimeout as e:
11:54:50             raise ConnectTimeoutError(
11:54:50                 self,
11:54:50                 f"Connection to {self.host} timed out. (connect timeout={self.timeout})",
11:54:50             ) from e
11:54:50 
11:54:50         except OSError as e:
11:54:50 >           raise NewConnectionError(
11:54:50                 self, f"Failed to establish a new connection: {e}"
11:54:50             ) from e
11:54:50 E           urllib3.exceptions.NewConnectionError: HTTPConnection(host='localhost', port=8191): Failed to establish a new connection: [Errno 111] Connection refused
11:54:50 
11:54:50 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:219: NewConnectionError
11:54:50 
11:54:50 The above exception was the direct cause of the following exception:
11:54:50 
11:54:50 self = 
11:54:50 request = , stream = False
11:54:50 timeout = Timeout(connect=30, read=30, total=None), verify = True, cert = None
11:54:50 proxies = OrderedDict()
11:54:50 
11:54:50     def send(
11:54:50         self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None
11:54:50     ):
11:54:50         """Sends PreparedRequest object. Returns Response object.
11:54:50 
11:54:50         :param request: The :class:`PreparedRequest ` being sent.
11:54:50         :param stream: (optional) Whether to stream the request content.
11:54:50         :param timeout: (optional) How long to wait for the server to send
11:54:50             data before giving up, as a float, or a :ref:`(connect timeout,
11:54:50             read timeout) ` tuple.
11:54:50         :type timeout: float or tuple or urllib3 Timeout object
11:54:50         :param verify: (optional) Either a boolean, in which case it controls whether
11:54:50             we verify the server's TLS certificate, or a string, in which case it
11:54:50             must be a path to a CA bundle to use
11:54:50         :param cert: (optional) Any user-provided SSL certificate to be trusted.
11:54:50         :param proxies: (optional) The proxies dictionary to apply to the request.
11:54:50         :rtype: requests.Response
11:54:50         """
11:54:50 
11:54:50         try:
11:54:50             conn = self.get_connection_with_tls_context(
11:54:50                 request, verify, proxies=proxies, cert=cert
11:54:50             )
11:54:50         except LocationValueError as e:
11:54:50             raise InvalidURL(e, request=request)
11:54:50 
11:54:50         self.cert_verify(conn, request.url, verify, cert)
11:54:50         url = self.request_url(request, proxies)
11:54:50         self.add_headers(
11:54:50             request,
11:54:50             stream=stream,
11:54:50             timeout=timeout,
11:54:50             verify=verify,
11:54:50             cert=cert,
11:54:50             proxies=proxies,
11:54:50         )
11:54:50 
11:54:50         chunked = not (request.body is None or "Content-Length" in request.headers)
11:54:50 
11:54:50         if isinstance(timeout, tuple):
11:54:50             try:
11:54:50                 connect, read = timeout
11:54:50                 timeout = TimeoutSauce(connect=connect, read=read)
11:54:50             except ValueError:
11:54:50                 raise ValueError(
11:54:50                     f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, "
11:54:50                     f"or a single float to set both timeouts to the same value."
11:54:50                 )
11:54:50         elif isinstance(timeout, TimeoutSauce):
11:54:50             pass
11:54:50         else:
11:54:50             timeout = TimeoutSauce(connect=timeout, read=timeout)
11:54:50 
11:54:50         try:
11:54:50 >           resp = conn.urlopen(
11:54:50                 method=request.method,
11:54:50                 url=url,
11:54:50                 body=request.body,
11:54:50                 headers=request.headers,
11:54:50                 redirect=False,
11:54:50                 assert_same_host=False,
11:54:50                 preload_content=False,
11:54:50                 decode_content=False,
11:54:50                 retries=self.max_retries,
11:54:50                 timeout=timeout,
11:54:50                 chunked=chunked,
11:54:50             )
11:54:50 
11:54:50 ../.tox/tests121/lib/python3.11/site-packages/requests/adapters.py:644: 
11:54:50 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
11:54:50 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connectionpool.py:841: in urlopen
11:54:50     retries = retries.increment(
11:54:50 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
11:54:50 
11:54:50 self = Retry(total=0, connect=None, read=False, redirect=None, status=None)
11:54:50 method = 'GET'
11:54:50 url = '/rests/data/transportpce-portmapping:network/nodes=ROADMA01/mapping=SRG3-PP1-TXRX'
11:54:50 response = None
11:54:50 error = NewConnectionError("HTTPConnection(host='localhost', port=8191): Failed to establish a new connection: [Errno 111] Connection refused")
11:54:50 _pool = 
11:54:50 _stacktrace = 
11:54:50 
11:54:50     def increment(
11:54:50         self,
11:54:50         method: str | None = None,
11:54:50         url: str | None = None,
11:54:50         response: BaseHTTPResponse | None = None,
11:54:50         error: Exception | None = None,
11:54:50         _pool: ConnectionPool | None = None,
11:54:50         _stacktrace: TracebackType | None = None,
11:54:50     ) -> Self:
11:54:50         """Return a new Retry object with incremented retry counters.
11:54:50 
11:54:50         :param response: A response object, or None, if the server did not
11:54:50             return a response.
11:54:50         :type response: :class:`~urllib3.response.BaseHTTPResponse`
11:54:50         :param Exception error: An error encountered during the request, or
11:54:50             None if the response was received successfully.
11:54:50 
11:54:50         :return: A new ``Retry`` object.
11:54:50         """
11:54:50         if self.total is False and error:
11:54:50             # Disabled, indicate to re-raise the error.
11:54:50             raise reraise(type(error), error, _stacktrace)
11:54:50 
11:54:50         total = self.total
11:54:50         if total is not None:
11:54:50             total -= 1
11:54:50 
11:54:50         connect = self.connect
11:54:50         read = self.read
11:54:50         redirect = self.redirect
11:54:50         status_count = self.status
11:54:50         other = self.other
11:54:50         cause = "unknown"
11:54:50         status = None
11:54:50         redirect_location = None
11:54:50 
11:54:50         if error and self._is_connection_error(error):
11:54:50             # Connect retry?
11:54:50             if connect is False:
11:54:50                 raise reraise(type(error), error, _stacktrace)
11:54:50             elif connect is not None:
11:54:50                 connect -= 1
11:54:50 
11:54:50         elif error and self._is_read_error(error):
11:54:50             # Read retry?
11:54:50             if read is False or method is None or not self._is_method_retryable(method):
11:54:50                 raise reraise(type(error), error, _stacktrace)
11:54:50             elif read is not None:
11:54:50                 read -= 1
11:54:50 
11:54:50         elif error:
11:54:50             # Other retry?
11:54:50             if other is not None:
11:54:50                 other -= 1
11:54:50 
11:54:50         elif response and response.get_redirect_location():
11:54:50             # Redirect retry?
11:54:50             if redirect is not None:
11:54:50                 redirect -= 1
11:54:50             cause = "too many redirects"
11:54:50             response_redirect_location = response.get_redirect_location()
11:54:50             if response_redirect_location:
11:54:50                 redirect_location = response_redirect_location
11:54:50             status = response.status
11:54:50 
11:54:50         else:
11:54:50             # Incrementing because of a server error like a 500 in
11:54:50             # status_forcelist and the given method is in the allowed_methods
11:54:50             cause = ResponseError.GENERIC_ERROR
11:54:50             if response and response.status:
11:54:50                 if status_count is not None:
11:54:50                     status_count -= 1
11:54:50                 cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status)
11:54:50                 status = response.status
11:54:50 
11:54:50         history = self.history + (
11:54:50             RequestHistory(method, url, error, status, redirect_location),
11:54:50         )
11:54:50 
11:54:50         new_retry = self.new(
11:54:50             total=total,
11:54:50             connect=connect,
11:54:50             read=read,
11:54:50             redirect=redirect,
11:54:50             status=status_count,
11:54:50             other=other,
11:54:50             history=history,
11:54:50         )
11:54:50 
11:54:50         if new_retry.is_exhausted():
11:54:50             reason = error or ResponseError(cause)
11:54:50 >           raise MaxRetryError(_pool, url, reason) from reason  # type: ignore[arg-type]
11:54:50             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
11:54:50 E           urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=8191): Max retries exceeded with url: /rests/data/transportpce-portmapping:network/nodes=ROADMA01/mapping=SRG3-PP1-TXRX (Caused by NewConnectionError("HTTPConnection(host='localhost', port=8191): Failed to establish a new connection: [Errno 111] Connection refused"))
11:54:50 
11:54:50 ../.tox/tests121/lib/python3.11/site-packages/urllib3/util/retry.py:535: MaxRetryError
11:54:50 
11:54:50 During handling of the above exception, another exception occurred:
11:54:50 
11:54:50 self = 
11:54:50 
11:54:50     def test_06_rdm_portmapping_SRG3_PP1_TXRX(self):
11:54:50 >       response = test_utils.get_portmapping_node_attr("ROADMA01", "mapping", "SRG3-PP1-TXRX")
11:54:50                    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
11:54:50 
11:54:50 transportpce_tests/1.2.1/test01_portmapping.py:91: 
11:54:50 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
11:54:50 transportpce_tests/common/test_utils.py:519: in get_portmapping_node_attr
11:54:50     response = get_request(target_url)
11:54:50                ^^^^^^^^^^^^^^^^^^^^^^^
11:54:50 transportpce_tests/common/test_utils.py:117: in get_request
11:54:50     return requests.request(
11:54:50 ../.tox/tests121/lib/python3.11/site-packages/requests/api.py:59: in request
11:54:50     return session.request(method=method, url=url, **kwargs)
11:54:50            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
11:54:50 ../.tox/tests121/lib/python3.11/site-packages/requests/sessions.py:589: in request
11:54:50     resp = self.send(prep, **send_kwargs)
11:54:50            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
11:54:50 ../.tox/tests121/lib/python3.11/site-packages/requests/sessions.py:703: in send
11:54:50     r = adapter.send(request, **kwargs)
11:54:50         ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
11:54:50 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
11:54:50 
11:54:50 self = 
11:54:50 request = , stream = False
11:54:50 timeout = Timeout(connect=30, read=30, total=None), verify = True, cert = None
11:54:50 proxies = OrderedDict()
11:54:50 
11:54:50     def send(
11:54:50         self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None
11:54:50     ):
11:54:50         """Sends PreparedRequest object. Returns Response object.
11:54:50 
11:54:50         :param request: The :class:`PreparedRequest ` being sent.
11:54:50         :param stream: (optional) Whether to stream the request content.
11:54:50         :param timeout: (optional) How long to wait for the server to send
11:54:50             data before giving up, as a float, or a :ref:`(connect timeout,
11:54:50             read timeout) ` tuple.
11:54:50         :type timeout: float or tuple or urllib3 Timeout object
11:54:50         :param verify: (optional) Either a boolean, in which case it controls whether
11:54:50             we verify the server's TLS certificate, or a string, in which case it
11:54:50             must be a path to a CA bundle to use
11:54:50         :param cert: (optional) Any user-provided SSL certificate to be trusted.
11:54:50         :param proxies: (optional) The proxies dictionary to apply to the request.
11:54:50         :rtype: requests.Response
11:54:50         """
11:54:50 
11:54:50         try:
11:54:50             conn = self.get_connection_with_tls_context(
11:54:50                 request, verify, proxies=proxies, cert=cert
11:54:50             )
11:54:50         except LocationValueError as e:
11:54:50             raise InvalidURL(e, request=request)
11:54:50 
11:54:50         self.cert_verify(conn, request.url, verify, cert)
11:54:50         url = self.request_url(request, proxies)
11:54:50         self.add_headers(
11:54:50             request,
11:54:50             stream=stream,
11:54:50             timeout=timeout,
11:54:50             verify=verify,
11:54:50             cert=cert,
11:54:50             proxies=proxies,
11:54:50         )
11:54:50 
11:54:50         chunked = not (request.body is None or "Content-Length" in request.headers)
11:54:50 
11:54:50         if isinstance(timeout, tuple):
11:54:50             try:
11:54:50                 connect, read = timeout
11:54:50                 timeout = TimeoutSauce(connect=connect, read=read)
11:54:50             except ValueError:
11:54:50                 raise ValueError(
11:54:50                     f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, "
11:54:50                     f"or a single float to set both timeouts to the same value."
11:54:50                 )
11:54:50         elif isinstance(timeout, TimeoutSauce):
11:54:50             pass
11:54:50         else:
11:54:50             timeout = TimeoutSauce(connect=timeout, read=timeout)
11:54:50 
11:54:50         try:
11:54:50             resp = conn.urlopen(
11:54:50                 method=request.method,
11:54:50                 url=url,
11:54:50                 body=request.body,
11:54:50                 headers=request.headers,
11:54:50                 redirect=False,
11:54:50                 assert_same_host=False,
11:54:50                 preload_content=False,
11:54:50                 decode_content=False,
11:54:50                 retries=self.max_retries,
11:54:50                 timeout=timeout,
11:54:50                 chunked=chunked,
11:54:50             )
11:54:50 
11:54:50         except (ProtocolError, OSError) as err:
11:54:50             raise ConnectionError(err, request=request)
11:54:50 
11:54:50         except MaxRetryError as e:
11:54:50             if isinstance(e.reason, ConnectTimeoutError):
11:54:50                 # TODO: Remove this in 3.0.0: see #2811
11:54:50                 if not isinstance(e.reason, NewConnectionError):
11:54:50                     raise ConnectTimeout(e, request=request)
11:54:50 
11:54:50             if isinstance(e.reason, ResponseError):
11:54:50                 raise RetryError(e, request=request)
11:54:50 
11:54:50             if isinstance(e.reason, _ProxyError):
11:54:50                 raise ProxyError(e, request=request)
11:54:50 
11:54:50             if isinstance(e.reason, _SSLError):
11:54:50                 # This branch is for urllib3 v1.22 and later.
11:54:50                 raise SSLError(e, request=request)
11:54:50 
11:54:50 >           raise ConnectionError(e, request=request)
11:54:50 E           requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=8191): Max retries exceeded with url: /rests/data/transportpce-portmapping:network/nodes=ROADMA01/mapping=SRG3-PP1-TXRX (Caused by NewConnectionError("HTTPConnection(host='localhost', port=8191): Failed to establish a new connection: [Errno 111] Connection refused"))
11:54:50 
11:54:50 ../.tox/tests121/lib/python3.11/site-packages/requests/adapters.py:677: ConnectionError
11:54:50 ----------------------------- Captured stdout call -----------------------------
11:54:50 execution of test_06_rdm_portmapping_SRG3_PP1_TXRX
11:54:50 __________ TestTransportPCEPortmapping.test_07_xpdr_device_connection __________
11:54:50 
11:54:50 self = 
11:54:50 
11:54:50     def _new_conn(self) -> socket.socket:
11:54:50         """Establish a socket connection and set nodelay settings on it.
11:54:50 
11:54:50         :return: New socket connection.
11:54:50         """
11:54:50         try:
11:54:50 >           sock = connection.create_connection(
11:54:50                 (self._dns_host, self.port),
11:54:50                 self.timeout,
11:54:50                 source_address=self.source_address,
11:54:50                 socket_options=self.socket_options,
11:54:50             )
11:54:50 
11:54:50 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:204: 
11:54:50 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
11:54:50 ../.tox/tests121/lib/python3.11/site-packages/urllib3/util/connection.py:85: in create_connection
11:54:50     raise err
11:54:50 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
11:54:50 
11:54:50 address = ('localhost', 8191), timeout = 30, source_address = None
11:54:50 socket_options = [(6, 1, 1)]
11:54:50 
11:54:50     def create_connection(
11:54:50         address: tuple[str, int],
11:54:50         timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT,
11:54:50         source_address: tuple[str, int] | None = None,
11:54:50         socket_options: _TYPE_SOCKET_OPTIONS | None = None,
11:54:50     ) -> socket.socket:
11:54:50         """Connect to *address* and return the socket object.
11:54:50 
11:54:50         Convenience function. Connect to *address* (a 2-tuple ``(host,
11:54:50         port)``) and return the socket object. Passing the optional
11:54:50         *timeout* parameter will set the timeout on the socket instance
11:54:50         before attempting to connect. If no *timeout* is supplied, the
11:54:50         global default timeout setting returned by :func:`socket.getdefaulttimeout`
11:54:50         is used. If *source_address* is set it must be a tuple of (host, port)
11:54:50         for the socket to bind as a source address before making the connection.
11:54:50         An host of '' or port 0 tells the OS to use the default.
11:54:50         """
11:54:50 
11:54:50         host, port = address
11:54:50         if host.startswith("["):
11:54:50             host = host.strip("[]")
11:54:50         err = None
11:54:50 
11:54:50         # Using the value from allowed_gai_family() in the context of getaddrinfo lets
11:54:50         # us select whether to work with IPv4 DNS records, IPv6 records, or both.
11:54:50         # The original create_connection function always returns all records.
11:54:50         family = allowed_gai_family()
11:54:50 
11:54:50         try:
11:54:50             host.encode("idna")
11:54:50         except UnicodeError:
11:54:50             raise LocationParseError(f"'{host}', label empty or too long") from None
11:54:50 
11:54:50         for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM):
11:54:50             af, socktype, proto, canonname, sa = res
11:54:50             sock = None
11:54:50             try:
11:54:50                 sock = socket.socket(af, socktype, proto)
11:54:50 
11:54:50                 # If provided, set socket level options before connecting.
11:54:50                 _set_socket_options(sock, socket_options)
11:54:50 
11:54:50                 if timeout is not _DEFAULT_TIMEOUT:
11:54:50                     sock.settimeout(timeout)
11:54:50                 if source_address:
11:54:50                     sock.bind(source_address)
11:54:50 >               sock.connect(sa)
11:54:50 E               ConnectionRefusedError: [Errno 111] Connection refused
11:54:50 
11:54:50 ../.tox/tests121/lib/python3.11/site-packages/urllib3/util/connection.py:73: ConnectionRefusedError
11:54:50 
11:54:50 The above exception was the direct cause of the following exception:
11:54:50 
11:54:50 self = 
11:54:50 method = 'PUT'
11:54:50 url = '/rests/data/network-topology:network-topology/topology=topology-netconf/node=XPDRA01'
11:54:50 body = '{"node": [{"node-id": "XPDRA01", "netconf-node-topology:netconf-node": {"netconf-node-topology:host": "127.0.0.1", "n...ff-millis": 1800000, "netconf-node-topology:backoff-multiplier": 1.5, "netconf-node-topology:keepalive-delay": 120}}]}'
11:54:50 headers = {'User-Agent': 'python-requests/2.32.5', 'Accept-Encoding': 'gzip, deflate', 'Accept': 'application/json', 'Connection': 'keep-alive', 'Content-Type': 'application/json', 'Content-Length': '709', 'Authorization': 'Basic YWRtaW46YWRtaW4='}
11:54:50 retries = Retry(total=0, connect=None, read=False, redirect=None, status=None)
11:54:50 redirect = False, assert_same_host = False
11:54:50 timeout = Timeout(connect=30, read=30, total=None), pool_timeout = None
11:54:50 release_conn = False, chunked = False, body_pos = None, preload_content = False
11:54:50 decode_content = False, response_kw = {}
11:54:50 parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/rests/data/network-topology:network-topology/topology=topology-netconf/node=XPDRA01', query=None, fragment=None)
11:54:50 destination_scheme = None, conn = None, release_this_conn = True
11:54:50 http_tunnel_required = False, err = None, clean_exit = False
11:54:50 
11:54:50     def urlopen(  # type: ignore[override]
11:54:50         self,
11:54:50         method: str,
11:54:50         url: str,
11:54:50         body: _TYPE_BODY | None = None,
11:54:50         headers: typing.Mapping[str, str] | None = None,
11:54:50         retries: Retry | bool | int | None = None,
11:54:50         redirect: bool = True,
11:54:50         assert_same_host: bool = True,
11:54:50         timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT,
11:54:50         pool_timeout: int | None = None,
11:54:50         release_conn: bool | None = None,
11:54:50         chunked: bool = False,
11:54:50         body_pos: _TYPE_BODY_POSITION | None = None,
11:54:50         preload_content: bool = True,
11:54:50         decode_content: bool = True,
11:54:50         **response_kw: typing.Any,
11:54:50     ) -> BaseHTTPResponse:
11:54:50         """
11:54:50         Get a connection from the pool and perform an HTTP request. This is the
11:54:50         lowest level call for making a request, so you'll need to specify all
11:54:50         the raw details.
11:54:50 
11:54:50         .. note::
11:54:50 
11:54:50             More commonly, it's appropriate to use a convenience method
11:54:50             such as :meth:`request`.
11:54:50 
11:54:50         .. note::
11:54:50 
11:54:50             `release_conn` will only behave as expected if
11:54:50             `preload_content=False` because we want to make
11:54:50             `preload_content=False` the default behaviour someday soon without
11:54:50             breaking backwards compatibility.
11:54:50 
11:54:50         :param method:
11:54:50             HTTP request method (such as GET, POST, PUT, etc.)
11:54:50 
11:54:50         :param url:
11:54:50             The URL to perform the request on.
11:54:50 
11:54:50         :param body:
11:54:50             Data to send in the request body, either :class:`str`, :class:`bytes`,
11:54:50             an iterable of :class:`str`/:class:`bytes`, or a file-like object.
11:54:50 
11:54:50         :param headers:
11:54:50             Dictionary of custom headers to send, such as User-Agent,
11:54:50             If-None-Match, etc. If None, pool headers are used. If provided,
11:54:50             these headers completely replace any pool-specific headers.
11:54:50 
11:54:50         :param retries:
11:54:50             Configure the number of retries to allow before raising a
11:54:50             :class:`~urllib3.exceptions.MaxRetryError` exception.
11:54:50 
11:54:50             If ``None`` (default) will retry 3 times, see ``Retry.DEFAULT``. Pass a
11:54:50             :class:`~urllib3.util.retry.Retry` object for fine-grained control
11:54:50             over different types of retries.
11:54:50             Pass an integer number to retry connection errors that many times,
11:54:50             but no other types of errors. Pass zero to never retry.
11:54:50 
11:54:50             If ``False``, then retries are disabled and any exception is raised
11:54:50             immediately. Also, instead of raising a MaxRetryError on redirects,
11:54:50             the redirect response will be returned.
11:54:50 
11:54:50         :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int.
11:54:50 
11:54:50         :param redirect:
11:54:50             If True, automatically handle redirects (status codes 301, 302,
11:54:50             303, 307, 308). Each redirect counts as a retry. Disabling retries
11:54:50             will disable redirect, too.
11:54:50 
11:54:50         :param assert_same_host:
11:54:50             If ``True``, will make sure that the host of the pool requests is
11:54:50             consistent else will raise HostChangedError. When ``False``, you can
11:54:50             use the pool on an HTTP proxy and request foreign hosts.
11:54:50 
11:54:50         :param timeout:
11:54:50             If specified, overrides the default timeout for this one
11:54:50             request. It may be a float (in seconds) or an instance of
11:54:50             :class:`urllib3.util.Timeout`.
11:54:50 
11:54:50         :param pool_timeout:
11:54:50             If set and the pool is set to block=True, then this method will
11:54:50             block for ``pool_timeout`` seconds and raise EmptyPoolError if no
11:54:50             connection is available within the time period.
11:54:50 
11:54:50         :param bool preload_content:
11:54:50             If True, the response's body will be preloaded into memory.
11:54:50 
11:54:50         :param bool decode_content:
11:54:50             If True, will attempt to decode the body based on the
11:54:50             'content-encoding' header.
11:54:50 
11:54:50         :param release_conn:
11:54:50             If False, then the urlopen call will not release the connection
11:54:50             back into the pool once a response is received (but will release if
11:54:50             you read the entire contents of the response such as when
11:54:50             `preload_content=True`). This is useful if you're not preloading
11:54:50             the response's content immediately. You will need to call
11:54:50             ``r.release_conn()`` on the response ``r`` to return the connection
11:54:50             back into the pool. If None, it takes the value of ``preload_content``
11:54:50             which defaults to ``True``.
11:54:50 
11:54:50         :param bool chunked:
11:54:50             If True, urllib3 will send the body using chunked transfer
11:54:50             encoding. Otherwise, urllib3 will send the body using the standard
11:54:50             content-length form. Defaults to False.
11:54:50 
11:54:50         :param int body_pos:
11:54:50             Position to seek to in file-like body in the event of a retry or
11:54:50             redirect. Typically this won't need to be set because urllib3 will
11:54:50             auto-populate the value when needed.
11:54:50         """
11:54:50         parsed_url = parse_url(url)
11:54:50         destination_scheme = parsed_url.scheme
11:54:50 
11:54:50         if headers is None:
11:54:50             headers = self.headers
11:54:50 
11:54:50         if not isinstance(retries, Retry):
11:54:50             retries = Retry.from_int(retries, redirect=redirect, default=self.retries)
11:54:50 
11:54:50         if release_conn is None:
11:54:50             release_conn = preload_content
11:54:50 
11:54:50         # Check host
11:54:50         if assert_same_host and not self.is_same_host(url):
11:54:50             raise HostChangedError(self, url, retries)
11:54:50 
11:54:50         # Ensure that the URL we're connecting to is properly encoded
11:54:50         if url.startswith("/"):
11:54:50             url = to_str(_encode_target(url))
11:54:50         else:
11:54:50             url = to_str(parsed_url.url)
11:54:50 
11:54:50         conn = None
11:54:50 
11:54:50         # Track whether `conn` needs to be released before
11:54:50         # returning/raising/recursing. Update this variable if necessary, and
11:54:50         # leave `release_conn` constant throughout the function. That way, if
11:54:50         # the function recurses, the original value of `release_conn` will be
11:54:50         # passed down into the recursive call, and its value will be respected.
11:54:50         #
11:54:50         # See issue #651 [1] for details.
11:54:50         #
11:54:50         # [1] 
11:54:50         release_this_conn = release_conn
11:54:50 
11:54:50         http_tunnel_required = connection_requires_http_tunnel(
11:54:50             self.proxy, self.proxy_config, destination_scheme
11:54:50         )
11:54:50 
11:54:50         # Merge the proxy headers. Only done when not using HTTP CONNECT. We
11:54:50         # have to copy the headers dict so we can safely change it without those
11:54:50         # changes being reflected in anyone else's copy.
11:54:50         if not http_tunnel_required:
11:54:50             headers = headers.copy()  # type: ignore[attr-defined]
11:54:50             headers.update(self.proxy_headers)  # type: ignore[union-attr]
11:54:50 
11:54:50         # Must keep the exception bound to a separate variable or else Python 3
11:54:50         # complains about UnboundLocalError.
11:54:50         err = None
11:54:50 
11:54:50         # Keep track of whether we cleanly exited the except block. This
11:54:50         # ensures we do proper cleanup in finally.
11:54:50         clean_exit = False
11:54:50 
11:54:50         # Rewind body position, if needed. Record current position
11:54:50         # for future rewinds in the event of a redirect/retry.
11:54:50         body_pos = set_file_position(body, body_pos)
11:54:50 
11:54:50         try:
11:54:50             # Request a connection from the queue.
11:54:50             timeout_obj = self._get_timeout(timeout)
11:54:50             conn = self._get_conn(timeout=pool_timeout)
11:54:50 
11:54:50             conn.timeout = timeout_obj.connect_timeout  # type: ignore[assignment]
11:54:50 
11:54:50             # Is this a closed/new connection that requires CONNECT tunnelling?
11:54:50             if self.proxy is not None and http_tunnel_required and conn.is_closed:
11:54:50                 try:
11:54:50                     self._prepare_proxy(conn)
11:54:50                 except (BaseSSLError, OSError, SocketTimeout) as e:
11:54:50                     self._raise_timeout(
11:54:50                         err=e, url=self.proxy.url, timeout_value=conn.timeout
11:54:50                     )
11:54:50                     raise
11:54:50 
11:54:50             # If we're going to release the connection in ``finally:``, then
11:54:50             # the response doesn't need to know about the connection. Otherwise
11:54:50             # it will also try to release it and we'll have a double-release
11:54:50             # mess.
11:54:50             response_conn = conn if not release_conn else None
11:54:50 
11:54:50             # Make the request on the HTTPConnection object
11:54:50 >           response = self._make_request(
11:54:50                 conn,
11:54:50                 method,
11:54:50                 url,
11:54:50                 timeout=timeout_obj,
11:54:50                 body=body,
11:54:50                 headers=headers,
11:54:50                 chunked=chunked,
11:54:50                 retries=retries,
11:54:50                 response_conn=response_conn,
11:54:50                 preload_content=preload_content,
11:54:50                 decode_content=decode_content,
11:54:50                 **response_kw,
11:54:50             )
11:54:50 
11:54:50 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connectionpool.py:787: 
11:54:50 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
11:54:50 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connectionpool.py:493: in _make_request
11:54:50     conn.request(
11:54:50 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:500: in request
11:54:50     self.endheaders()
11:54:50 /opt/pyenv/versions/3.11.10/lib/python3.11/http/client.py:1298: in endheaders
11:54:50     self._send_output(message_body, encode_chunked=encode_chunked)
11:54:50 /opt/pyenv/versions/3.11.10/lib/python3.11/http/client.py:1058: in _send_output
11:54:50     self.send(msg)
11:54:50 /opt/pyenv/versions/3.11.10/lib/python3.11/http/client.py:996: in send
11:54:50     self.connect()
11:54:50 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:331: in connect
11:54:50     self.sock = self._new_conn()
11:54:50                 ^^^^^^^^^^^^^^^^
11:54:50 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
11:54:50 
11:54:50 self = 
11:54:50 
11:54:50     def _new_conn(self) -> socket.socket:
11:54:50         """Establish a socket connection and set nodelay settings on it.
11:54:50 
11:54:50         :return: New socket connection.
11:54:50         """
11:54:50         try:
11:54:50             sock = connection.create_connection(
11:54:50                 (self._dns_host, self.port),
11:54:50                 self.timeout,
11:54:50                 source_address=self.source_address,
11:54:50                 socket_options=self.socket_options,
11:54:50             )
11:54:50         except socket.gaierror as e:
11:54:50             raise NameResolutionError(self.host, self, e) from e
11:54:50         except SocketTimeout as e:
11:54:50             raise ConnectTimeoutError(
11:54:50                 self,
11:54:50                 f"Connection to {self.host} timed out. (connect timeout={self.timeout})",
11:54:50             ) from e
11:54:50 
11:54:50         except OSError as e:
11:54:50 >           raise NewConnectionError(
11:54:50                 self, f"Failed to establish a new connection: {e}"
11:54:50             ) from e
11:54:50 E           urllib3.exceptions.NewConnectionError: HTTPConnection(host='localhost', port=8191): Failed to establish a new connection: [Errno 111] Connection refused
11:54:50 
11:54:50 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:219: NewConnectionError
11:54:50 
11:54:50 The above exception was the direct cause of the following exception:
11:54:50 
11:54:50 self = 
11:54:50 request = , stream = False
11:54:50 timeout = Timeout(connect=30, read=30, total=None), verify = True, cert = None
11:54:50 proxies = OrderedDict()
11:54:50 
11:54:50     def send(
11:54:50         self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None
11:54:50     ):
11:54:50         """Sends PreparedRequest object. Returns Response object.
11:54:50 
11:54:50         :param request: The :class:`PreparedRequest ` being sent.
11:54:50         :param stream: (optional) Whether to stream the request content.
11:54:50         :param timeout: (optional) How long to wait for the server to send
11:54:50             data before giving up, as a float, or a :ref:`(connect timeout,
11:54:50             read timeout) ` tuple.
11:54:50         :type timeout: float or tuple or urllib3 Timeout object
11:54:50         :param verify: (optional) Either a boolean, in which case it controls whether
11:54:50             we verify the server's TLS certificate, or a string, in which case it
11:54:50             must be a path to a CA bundle to use
11:54:50         :param cert: (optional) Any user-provided SSL certificate to be trusted.
11:54:50         :param proxies: (optional) The proxies dictionary to apply to the request.
11:54:50         :rtype: requests.Response
11:54:50         """
11:54:50 
11:54:50         try:
11:54:50             conn = self.get_connection_with_tls_context(
11:54:50                 request, verify, proxies=proxies, cert=cert
11:54:50             )
11:54:50         except LocationValueError as e:
11:54:50             raise InvalidURL(e, request=request)
11:54:50 
11:54:50         self.cert_verify(conn, request.url, verify, cert)
11:54:50         url = self.request_url(request, proxies)
11:54:50         self.add_headers(
11:54:50             request,
11:54:50             stream=stream,
11:54:50             timeout=timeout,
11:54:50             verify=verify,
11:54:50             cert=cert,
11:54:50             proxies=proxies,
11:54:50         )
11:54:50 
11:54:50         chunked = not (request.body is None or "Content-Length" in request.headers)
11:54:50 
11:54:50         if isinstance(timeout, tuple):
11:54:50             try:
11:54:50                 connect, read = timeout
11:54:50                 timeout = TimeoutSauce(connect=connect, read=read)
11:54:50             except ValueError:
11:54:50                 raise ValueError(
11:54:50                     f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, "
11:54:50                     f"or a single float to set both timeouts to the same value."
11:54:50                 )
11:54:50         elif isinstance(timeout, TimeoutSauce):
11:54:50             pass
11:54:50         else:
11:54:50             timeout = TimeoutSauce(connect=timeout, read=timeout)
11:54:50 
11:54:50         try:
11:54:50 >           resp = conn.urlopen(
11:54:50                 method=request.method,
11:54:50                 url=url,
11:54:50                 body=request.body,
11:54:50                 headers=request.headers,
11:54:50                 redirect=False,
11:54:50                 assert_same_host=False,
11:54:50                 preload_content=False,
11:54:50                 decode_content=False,
11:54:50                 retries=self.max_retries,
11:54:50                 timeout=timeout,
11:54:50                 chunked=chunked,
11:54:50             )
11:54:50 
11:54:50 ../.tox/tests121/lib/python3.11/site-packages/requests/adapters.py:644: 
11:54:50 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
11:54:50 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connectionpool.py:841: in urlopen
11:54:50     retries = retries.increment(
11:54:50 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
11:54:50 
11:54:50 self = Retry(total=0, connect=None, read=False, redirect=None, status=None)
11:54:50 method = 'PUT'
11:54:50 url = '/rests/data/network-topology:network-topology/topology=topology-netconf/node=XPDRA01'
11:54:50 response = None
11:54:50 error = NewConnectionError("HTTPConnection(host='localhost', port=8191): Failed to establish a new connection: [Errno 111] Connection refused")
11:54:50 _pool = 
11:54:50 _stacktrace = 
11:54:50 
11:54:50     def increment(
11:54:50         self,
11:54:50         method: str | None = None,
11:54:50         url: str | None = None,
11:54:50         response: BaseHTTPResponse | None = None,
11:54:50         error: Exception | None = None,
11:54:50         _pool: ConnectionPool | None = None,
11:54:50         _stacktrace: TracebackType | None = None,
11:54:50     ) -> Self:
11:54:50         """Return a new Retry object with incremented retry counters.
11:54:50 
11:54:50         :param response: A response object, or None, if the server did not
11:54:50             return a response.
11:54:50         :type response: :class:`~urllib3.response.BaseHTTPResponse`
11:54:50         :param Exception error: An error encountered during the request, or
11:54:50             None if the response was received successfully.
11:54:50 
11:54:50         :return: A new ``Retry`` object.
11:54:50         """
11:54:50         if self.total is False and error:
11:54:50             # Disabled, indicate to re-raise the error.
11:54:50             raise reraise(type(error), error, _stacktrace)
11:54:50 
11:54:50         total = self.total
11:54:50         if total is not None:
11:54:50             total -= 1
11:54:50 
11:54:50         connect = self.connect
11:54:50         read = self.read
11:54:50         redirect = self.redirect
11:54:50         status_count = self.status
11:54:50         other = self.other
11:54:50         cause = "unknown"
11:54:50         status = None
11:54:50         redirect_location = None
11:54:50 
11:54:50         if error and self._is_connection_error(error):
11:54:50             # Connect retry?
11:54:50             if connect is False:
11:54:50                 raise reraise(type(error), error, _stacktrace)
11:54:50             elif connect is not None:
11:54:50                 connect -= 1
11:54:50 
11:54:50         elif error and self._is_read_error(error):
11:54:50             # Read retry?
11:54:50             if read is False or method is None or not self._is_method_retryable(method):
11:54:50                 raise reraise(type(error), error, _stacktrace)
11:54:50             elif read is not None:
11:54:50                 read -= 1
11:54:50 
11:54:50         elif error:
11:54:50             # Other retry?
11:54:50             if other is not None:
11:54:50                 other -= 1
11:54:50 
11:54:50         elif response and response.get_redirect_location():
11:54:50             # Redirect retry?
11:54:50             if redirect is not None:
11:54:50                 redirect -= 1
11:54:50             cause = "too many redirects"
11:54:50             response_redirect_location = response.get_redirect_location()
11:54:50             if response_redirect_location:
11:54:50                 redirect_location = response_redirect_location
11:54:50             status = response.status
11:54:50 
11:54:50         else:
11:54:50             # Incrementing because of a server error like a 500 in
11:54:50             # status_forcelist and the given method is in the allowed_methods
11:54:50             cause = ResponseError.GENERIC_ERROR
11:54:50             if response and response.status:
11:54:50                 if status_count is not None:
11:54:50                     status_count -= 1
11:54:50                 cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status)
11:54:50                 status = response.status
11:54:50 
11:54:50         history = self.history + (
11:54:50             RequestHistory(method, url, error, status, redirect_location),
11:54:50         )
11:54:50 
11:54:50         new_retry = self.new(
11:54:50             total=total,
11:54:50             connect=connect,
11:54:50             read=read,
11:54:50             redirect=redirect,
11:54:50             status=status_count,
11:54:50             other=other,
11:54:50             history=history,
11:54:50         )
11:54:50 
11:54:50         if new_retry.is_exhausted():
11:54:50             reason = error or ResponseError(cause)
11:54:50 >           raise MaxRetryError(_pool, url, reason) from reason  # type: ignore[arg-type]
11:54:50             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
11:54:50 E           urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=8191): Max retries exceeded with url: /rests/data/network-topology:network-topology/topology=topology-netconf/node=XPDRA01 (Caused by NewConnectionError("HTTPConnection(host='localhost', port=8191): Failed to establish a new connection: [Errno 111] Connection refused"))
11:54:50 
11:54:50 ../.tox/tests121/lib/python3.11/site-packages/urllib3/util/retry.py:535: MaxRetryError
11:54:50 
11:54:50 During handling of the above exception, another exception occurred:
11:54:50 
11:54:50 self = 
11:54:50 
11:54:50     def test_07_xpdr_device_connection(self):
11:54:50 >       response = test_utils.mount_device("XPDRA01", ('xpdra', self.NODE_VERSION))
11:54:50                  ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
11:54:50 
11:54:50 transportpce_tests/1.2.1/test01_portmapping.py:100: 
11:54:50 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
11:54:50 transportpce_tests/common/test_utils.py:381: in mount_device
11:54:50     response = put_request(url[RESTCONF_VERSION].format('{}', node), body)
11:54:50                ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
11:54:50 transportpce_tests/common/test_utils.py:125: in put_request
11:54:50     return requests.request(
11:54:50 ../.tox/tests121/lib/python3.11/site-packages/requests/api.py:59: in request
11:54:50     return session.request(method=method, url=url, **kwargs)
11:54:50            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
11:54:50 ../.tox/tests121/lib/python3.11/site-packages/requests/sessions.py:589: in request
11:54:50     resp = self.send(prep, **send_kwargs)
11:54:50            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
11:54:50 ../.tox/tests121/lib/python3.11/site-packages/requests/sessions.py:703: in send
11:54:50     r = adapter.send(request, **kwargs)
11:54:50         ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
11:54:50 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
11:54:50 
11:54:50 self = 
11:54:50 request = , stream = False
11:54:50 timeout = Timeout(connect=30, read=30, total=None), verify = True, cert = None
11:54:50 proxies = OrderedDict()
11:54:50 
11:54:50     def send(
11:54:50         self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None
11:54:50     ):
11:54:50         """Sends PreparedRequest object. Returns Response object.
11:54:50 
11:54:50         :param request: The :class:`PreparedRequest ` being sent.
11:54:50         :param stream: (optional) Whether to stream the request content.
11:54:50         :param timeout: (optional) How long to wait for the server to send
11:54:50             data before giving up, as a float, or a :ref:`(connect timeout,
11:54:50             read timeout) ` tuple.
11:54:50         :type timeout: float or tuple or urllib3 Timeout object
11:54:50         :param verify: (optional) Either a boolean, in which case it controls whether
11:54:50             we verify the server's TLS certificate, or a string, in which case it
11:54:50             must be a path to a CA bundle to use
11:54:50         :param cert: (optional) Any user-provided SSL certificate to be trusted.
11:54:50         :param proxies: (optional) The proxies dictionary to apply to the request.
11:54:50         :rtype: requests.Response
11:54:50         """
11:54:50 
11:54:50         try:
11:54:50             conn = self.get_connection_with_tls_context(
11:54:50                 request, verify, proxies=proxies, cert=cert
11:54:50             )
11:54:50         except LocationValueError as e:
11:54:50             raise InvalidURL(e, request=request)
11:54:50 
11:54:50         self.cert_verify(conn, request.url, verify, cert)
11:54:50         url = self.request_url(request, proxies)
11:54:50         self.add_headers(
11:54:50             request,
11:54:50             stream=stream,
11:54:50             timeout=timeout,
11:54:50             verify=verify,
11:54:50             cert=cert,
11:54:50             proxies=proxies,
11:54:50         )
11:54:50 
11:54:50         chunked = not (request.body is None or "Content-Length" in request.headers)
11:54:50 
11:54:50         if isinstance(timeout, tuple):
11:54:50             try:
11:54:50                 connect, read = timeout
11:54:50                 timeout = TimeoutSauce(connect=connect, read=read)
11:54:50             except ValueError:
11:54:50                 raise ValueError(
11:54:50                     f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, "
11:54:50                     f"or a single float to set both timeouts to the same value."
11:54:50                 )
11:54:50         elif isinstance(timeout, TimeoutSauce):
11:54:50             pass
11:54:50         else:
11:54:50             timeout = TimeoutSauce(connect=timeout, read=timeout)
11:54:50 
11:54:50         try:
11:54:50             resp = conn.urlopen(
11:54:50                 method=request.method,
11:54:50                 url=url,
11:54:50                 body=request.body,
11:54:50                 headers=request.headers,
11:54:50                 redirect=False,
11:54:50                 assert_same_host=False,
11:54:50                 preload_content=False,
11:54:50                 decode_content=False,
11:54:50                 retries=self.max_retries,
11:54:50                 timeout=timeout,
11:54:50                 chunked=chunked,
11:54:50             )
11:54:50 
11:54:50         except (ProtocolError, OSError) as err:
11:54:50             raise ConnectionError(err, request=request)
11:54:50 
11:54:50         except MaxRetryError as e:
11:54:50             if isinstance(e.reason, ConnectTimeoutError):
11:54:50                 # TODO: Remove this in 3.0.0: see #2811
11:54:50                 if not isinstance(e.reason, NewConnectionError):
11:54:50                     raise ConnectTimeout(e, request=request)
11:54:50 
11:54:50             if isinstance(e.reason, ResponseError):
11:54:50                 raise RetryError(e, request=request)
11:54:50 
11:54:50             if isinstance(e.reason, _ProxyError):
11:54:50                 raise ProxyError(e, request=request)
11:54:50 
11:54:50             if isinstance(e.reason, _SSLError):
11:54:50                 # This branch is for urllib3 v1.22 and later.
11:54:50                 raise SSLError(e, request=request)
11:54:50 
11:54:50 >       raise ConnectionError(e, request=request)
11:54:50 E       requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=8191): Max retries exceeded with url: /rests/data/network-topology:network-topology/topology=topology-netconf/node=XPDRA01 (Caused by NewConnectionError("HTTPConnection(host='localhost', port=8191): Failed to establish a new connection: [Errno 111] Connection refused"))
11:54:50 
11:54:50 ../.tox/tests121/lib/python3.11/site-packages/requests/adapters.py:677: ConnectionError
11:54:50 ----------------------------- Captured stdout call -----------------------------
11:54:50 execution of test_07_xpdr_device_connection
11:54:50 __________ TestTransportPCEPortmapping.test_08_xpdr_device_connected ___________
11:54:50 
11:54:50 self = 
11:54:50 
11:54:50     def _new_conn(self) -> socket.socket:
11:54:50         """Establish a socket connection and set nodelay settings on it.
11:54:50 
11:54:50         :return: New socket connection.
11:54:50         """
11:54:50         try:
11:54:50 >           sock = connection.create_connection(
11:54:50                 (self._dns_host, self.port),
11:54:50                 self.timeout,
11:54:50                 source_address=self.source_address,
11:54:50                 socket_options=self.socket_options,
11:54:50             )
11:54:50 
11:54:50 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:204: 
11:54:50 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
11:54:50 ../.tox/tests121/lib/python3.11/site-packages/urllib3/util/connection.py:85: in create_connection
11:54:50     raise err
11:54:50 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
11:54:50 
11:54:50 address = ('localhost', 8191), timeout = 30, source_address = None
11:54:50 socket_options = [(6, 1, 1)]
11:54:50 
11:54:50     def create_connection(
11:54:50         address: tuple[str, int],
11:54:50         timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT,
11:54:50         source_address: tuple[str, int] | None = None,
11:54:50         socket_options: _TYPE_SOCKET_OPTIONS | None = None,
11:54:50     ) -> socket.socket:
11:54:50         """Connect to *address* and return the socket object.
11:54:50 
11:54:50         Convenience function. Connect to *address* (a 2-tuple ``(host,
11:54:50         port)``) and return the socket object. Passing the optional
11:54:50         *timeout* parameter will set the timeout on the socket instance
11:54:50         before attempting to connect. If no *timeout* is supplied, the
11:54:50         global default timeout setting returned by :func:`socket.getdefaulttimeout`
11:54:50         is used. If *source_address* is set it must be a tuple of (host, port)
11:54:50         for the socket to bind as a source address before making the connection.
11:54:50         An host of '' or port 0 tells the OS to use the default.
11:54:50 """ 11:54:50 11:54:50 host, port = address 11:54:50 if host.startswith("["): 11:54:50 host = host.strip("[]") 11:54:50 err = None 11:54:50 11:54:50 # Using the value from allowed_gai_family() in the context of getaddrinfo lets 11:54:50 # us select whether to work with IPv4 DNS records, IPv6 records, or both. 11:54:50 # The original create_connection function always returns all records. 11:54:50 family = allowed_gai_family() 11:54:50 11:54:50 try: 11:54:50 host.encode("idna") 11:54:50 except UnicodeError: 11:54:50 raise LocationParseError(f"'{host}', label empty or too long") from None 11:54:50 11:54:50 for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM): 11:54:50 af, socktype, proto, canonname, sa = res 11:54:50 sock = None 11:54:50 try: 11:54:50 sock = socket.socket(af, socktype, proto) 11:54:50 11:54:50 # If provided, set socket level options before connecting. 11:54:50 _set_socket_options(sock, socket_options) 11:54:50 11:54:50 if timeout is not _DEFAULT_TIMEOUT: 11:54:50 sock.settimeout(timeout) 11:54:50 if source_address: 11:54:50 sock.bind(source_address) 11:54:50 > sock.connect(sa) 11:54:50 E ConnectionRefusedError: [Errno 111] Connection refused 11:54:50 11:54:50 ../.tox/tests121/lib/python3.11/site-packages/urllib3/util/connection.py:73: ConnectionRefusedError 11:54:50 11:54:50 The above exception was the direct cause of the following exception: 11:54:50 11:54:50 self = 11:54:50 method = 'GET' 11:54:50 url = '/rests/data/network-topology:network-topology/topology=topology-netconf/node=XPDRA01?content=nonconfig' 11:54:50 body = None 11:54:50 headers = {'User-Agent': 'python-requests/2.32.5', 'Accept-Encoding': 'gzip, deflate', 'Accept': 'application/json', 'Connection': 'keep-alive', 'Content-Type': 'application/json', 'Authorization': 'Basic YWRtaW46YWRtaW4='} 11:54:50 retries = Retry(total=0, connect=None, read=False, redirect=None, status=None) 11:54:50 redirect = False, assert_same_host = False 11:54:50 timeout = 
Timeout(connect=30, read=30, total=None), pool_timeout = None
release_conn = False, chunked = False, body_pos = None, preload_content = False
decode_content = False, response_kw = {}
parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/rests/data/network-topology:network-topology/topology=topology-netconf/node=XPDRA01', query='content=nonconfig', fragment=None)
destination_scheme = None, conn = None, release_this_conn = True
http_tunnel_required = False, err = None, clean_exit = False

    def urlopen(  # type: ignore[override]
        self,
        method: str,
        url: str,
        body: _TYPE_BODY | None = None,
        headers: typing.Mapping[str, str] | None = None,
        retries: Retry | bool | int | None = None,
        redirect: bool = True,
        assert_same_host: bool = True,
        timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT,
        pool_timeout: int | None = None,
        release_conn: bool | None = None,
        chunked: bool = False,
        body_pos: _TYPE_BODY_POSITION | None = None,
        preload_content: bool = True,
        decode_content: bool = True,
        **response_kw: typing.Any,
    ) -> BaseHTTPResponse:
        """
        Get a connection from the pool and perform an HTTP request. This is the
        lowest level call for making a request, so you'll need to specify all
        the raw details.

        .. note::

            More commonly, it's appropriate to use a convenience method
            such as :meth:`request`.

        .. note::

            `release_conn` will only behave as expected if
            `preload_content=False` because we want to make
            `preload_content=False` the default behaviour someday soon without
            breaking backwards compatibility.

        :param method:
            HTTP request method (such as GET, POST, PUT, etc.)

        :param url:
            The URL to perform the request on.

        :param body:
            Data to send in the request body, either :class:`str`, :class:`bytes`,
            an iterable of :class:`str`/:class:`bytes`, or a file-like object.

        :param headers:
            Dictionary of custom headers to send, such as User-Agent,
            If-None-Match, etc. If None, pool headers are used. If provided,
            these headers completely replace any pool-specific headers.

        :param retries:
            Configure the number of retries to allow before raising a
            :class:`~urllib3.exceptions.MaxRetryError` exception.

            If ``None`` (default) will retry 3 times, see ``Retry.DEFAULT``. Pass a
            :class:`~urllib3.util.retry.Retry` object for fine-grained control
            over different types of retries.
            Pass an integer number to retry connection errors that many times,
            but no other types of errors. Pass zero to never retry.

            If ``False``, then retries are disabled and any exception is raised
            immediately. Also, instead of raising a MaxRetryError on redirects,
            the redirect response will be returned.

        :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int.

        :param redirect:
            If True, automatically handle redirects (status codes 301, 302,
            303, 307, 308). Each redirect counts as a retry. Disabling retries
            will disable redirect, too.

        :param assert_same_host:
            If ``True``, will make sure that the host of the pool requests is
            consistent else will raise HostChangedError. When ``False``, you can
            use the pool on an HTTP proxy and request foreign hosts.

        :param timeout:
            If specified, overrides the default timeout for this one
            request. It may be a float (in seconds) or an instance of
            :class:`urllib3.util.Timeout`.

        :param pool_timeout:
            If set and the pool is set to block=True, then this method will
            block for ``pool_timeout`` seconds and raise EmptyPoolError if no
            connection is available within the time period.

        :param bool preload_content:
            If True, the response's body will be preloaded into memory.

        :param bool decode_content:
            If True, will attempt to decode the body based on the
            'content-encoding' header.

        :param release_conn:
            If False, then the urlopen call will not release the connection
            back into the pool once a response is received (but will release if
            you read the entire contents of the response such as when
            `preload_content=True`). This is useful if you're not preloading
            the response's content immediately. You will need to call
            ``r.release_conn()`` on the response ``r`` to return the connection
            back into the pool. If None, it takes the value of ``preload_content``
            which defaults to ``True``.

        :param bool chunked:
            If True, urllib3 will send the body using chunked transfer
            encoding. Otherwise, urllib3 will send the body using the standard
            content-length form. Defaults to False.

        :param int body_pos:
            Position to seek to in file-like body in the event of a retry or
            redirect. Typically this won't need to be set because urllib3 will
            auto-populate the value when needed.
        """
        parsed_url = parse_url(url)
        destination_scheme = parsed_url.scheme

        if headers is None:
            headers = self.headers

        if not isinstance(retries, Retry):
            retries = Retry.from_int(retries, redirect=redirect, default=self.retries)

        if release_conn is None:
            release_conn = preload_content

        # Check host
        if assert_same_host and not self.is_same_host(url):
            raise HostChangedError(self, url, retries)

        # Ensure that the URL we're connecting to is properly encoded
        if url.startswith("/"):
            url = to_str(_encode_target(url))
        else:
            url = to_str(parsed_url.url)

        conn = None

        # Track whether `conn` needs to be released before
        # returning/raising/recursing. Update this variable if necessary, and
        # leave `release_conn` constant throughout the function. That way, if
        # the function recurses, the original value of `release_conn` will be
        # passed down into the recursive call, and its value will be respected.
        #
        # See issue #651 [1] for details.
        #
        # [1]
        release_this_conn = release_conn

        http_tunnel_required = connection_requires_http_tunnel(
            self.proxy, self.proxy_config, destination_scheme
        )

        # Merge the proxy headers. Only done when not using HTTP CONNECT. We
        # have to copy the headers dict so we can safely change it without those
        # changes being reflected in anyone else's copy.
        if not http_tunnel_required:
            headers = headers.copy()  # type: ignore[attr-defined]
            headers.update(self.proxy_headers)  # type: ignore[union-attr]

        # Must keep the exception bound to a separate variable or else Python 3
        # complains about UnboundLocalError.
        err = None

        # Keep track of whether we cleanly exited the except block. This
        # ensures we do proper cleanup in finally.
        clean_exit = False

        # Rewind body position, if needed. Record current position
        # for future rewinds in the event of a redirect/retry.
        body_pos = set_file_position(body, body_pos)

        try:
            # Request a connection from the queue.
            timeout_obj = self._get_timeout(timeout)
            conn = self._get_conn(timeout=pool_timeout)

            conn.timeout = timeout_obj.connect_timeout  # type: ignore[assignment]

            # Is this a closed/new connection that requires CONNECT tunnelling?
            if self.proxy is not None and http_tunnel_required and conn.is_closed:
                try:
                    self._prepare_proxy(conn)
                except (BaseSSLError, OSError, SocketTimeout) as e:
                    self._raise_timeout(
                        err=e, url=self.proxy.url, timeout_value=conn.timeout
                    )
                    raise

            # If we're going to release the connection in ``finally:``, then
            # the response doesn't need to know about the connection. Otherwise
            # it will also try to release it and we'll have a double-release
            # mess.
            response_conn = conn if not release_conn else None

            # Make the request on the HTTPConnection object
>           response = self._make_request(
                conn,
                method,
                url,
                timeout=timeout_obj,
                body=body,
                headers=headers,
                chunked=chunked,
                retries=retries,
                response_conn=response_conn,
                preload_content=preload_content,
                decode_content=decode_content,
                **response_kw,
            )

../.tox/tests121/lib/python3.11/site-packages/urllib3/connectionpool.py:787:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
../.tox/tests121/lib/python3.11/site-packages/urllib3/connectionpool.py:493: in _make_request
    conn.request(
../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:500: in request
    self.endheaders()
/opt/pyenv/versions/3.11.10/lib/python3.11/http/client.py:1298: in endheaders
    self._send_output(message_body, encode_chunked=encode_chunked)
/opt/pyenv/versions/3.11.10/lib/python3.11/http/client.py:1058: in _send_output
    self.send(msg)
/opt/pyenv/versions/3.11.10/lib/python3.11/http/client.py:996: in send
    self.connect()
../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:331: in connect
    self.sock = self._new_conn()
                ^^^^^^^^^^^^^^^^
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self =

    def _new_conn(self) -> socket.socket:
        """Establish a socket connection and set nodelay settings on it.

        :return: New socket connection.
        """
        try:
            sock = connection.create_connection(
                (self._dns_host, self.port),
                self.timeout,
                source_address=self.source_address,
                socket_options=self.socket_options,
            )
        except socket.gaierror as e:
            raise NameResolutionError(self.host, self, e) from e
        except SocketTimeout as e:
            raise ConnectTimeoutError(
                self,
                f"Connection to {self.host} timed out. (connect timeout={self.timeout})",
            ) from e

        except OSError as e:
>           raise NewConnectionError(
                self, f"Failed to establish a new connection: {e}"
            ) from e
E           urllib3.exceptions.NewConnectionError: HTTPConnection(host='localhost', port=8191): Failed to establish a new connection: [Errno 111] Connection refused

../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:219: NewConnectionError

The above exception was the direct cause of the following exception:

self =
request = , stream = False
timeout = Timeout(connect=30, read=30, total=None), verify = True, cert = None
proxies = OrderedDict()

    def send(
        self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None
    ):
        """Sends PreparedRequest object. Returns Response object.

        :param request: The :class:`PreparedRequest ` being sent.
        :param stream: (optional) Whether to stream the request content.
        :param timeout: (optional) How long to wait for the server to send
            data before giving up, as a float, or a :ref:`(connect timeout,
            read timeout) ` tuple.
        :type timeout: float or tuple or urllib3 Timeout object
        :param verify: (optional) Either a boolean, in which case it controls whether
            we verify the server's TLS certificate, or a string, in which case it
            must be a path to a CA bundle to use
        :param cert: (optional) Any user-provided SSL certificate to be trusted.
        :param proxies: (optional) The proxies dictionary to apply to the request.
        :rtype: requests.Response
        """

        try:
            conn = self.get_connection_with_tls_context(
                request, verify, proxies=proxies, cert=cert
            )
        except LocationValueError as e:
            raise InvalidURL(e, request=request)

        self.cert_verify(conn, request.url, verify, cert)
        url = self.request_url(request, proxies)
        self.add_headers(
            request,
            stream=stream,
            timeout=timeout,
            verify=verify,
            cert=cert,
            proxies=proxies,
        )

        chunked = not (request.body is None or "Content-Length" in request.headers)

        if isinstance(timeout, tuple):
            try:
                connect, read = timeout
                timeout = TimeoutSauce(connect=connect, read=read)
            except ValueError:
                raise ValueError(
                    f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, "
                    f"or a single float to set both timeouts to the same value."
                )
        elif isinstance(timeout, TimeoutSauce):
            pass
        else:
            timeout = TimeoutSauce(connect=timeout, read=timeout)

        try:
>           resp = conn.urlopen(
                method=request.method,
                url=url,
                body=request.body,
                headers=request.headers,
                redirect=False,
                assert_same_host=False,
                preload_content=False,
                decode_content=False,
                retries=self.max_retries,
                timeout=timeout,
                chunked=chunked,
            )

../.tox/tests121/lib/python3.11/site-packages/requests/adapters.py:644:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
../.tox/tests121/lib/python3.11/site-packages/urllib3/connectionpool.py:841: in urlopen
    retries = retries.increment(
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self = Retry(total=0, connect=None, read=False, redirect=None, status=None)
method = 'GET'
url = '/rests/data/network-topology:network-topology/topology=topology-netconf/node=XPDRA01?content=nonconfig'
response = None
error = NewConnectionError("HTTPConnection(host='localhost', port=8191): Failed to establish a new connection: [Errno 111] Connection refused")
_pool =
_stacktrace =

    def increment(
        self,
        method: str | None = None,
        url: str | None = None,
        response: BaseHTTPResponse | None = None,
        error: Exception | None = None,
        _pool: ConnectionPool | None = None,
        _stacktrace: TracebackType | None = None,
    ) -> Self:
        """Return a new Retry object with incremented retry counters.

        :param response: A response object, or None, if the server did not
            return a response.
        :type response: :class:`~urllib3.response.BaseHTTPResponse`
        :param Exception error: An error encountered during the request, or
            None if the response was received successfully.

        :return: A new ``Retry`` object.
        """
        if self.total is False and error:
            # Disabled, indicate to re-raise the error.
            raise reraise(type(error), error, _stacktrace)

        total = self.total
        if total is not None:
            total -= 1

        connect = self.connect
        read = self.read
        redirect = self.redirect
        status_count = self.status
        other = self.other
        cause = "unknown"
        status = None
        redirect_location = None

        if error and self._is_connection_error(error):
            # Connect retry?
            if connect is False:
                raise reraise(type(error), error, _stacktrace)
            elif connect is not None:
                connect -= 1

        elif error and self._is_read_error(error):
            # Read retry?
            if read is False or method is None or not self._is_method_retryable(method):
                raise reraise(type(error), error, _stacktrace)
            elif read is not None:
                read -= 1

        elif error:
            # Other retry?
            if other is not None:
                other -= 1

        elif response and response.get_redirect_location():
            # Redirect retry?
            if redirect is not None:
                redirect -= 1
            cause = "too many redirects"
            response_redirect_location = response.get_redirect_location()
            if response_redirect_location:
                redirect_location = response_redirect_location
            status = response.status

        else:
            # Incrementing because of a server error like a 500 in
            # status_forcelist and the given method is in the allowed_methods
            cause = ResponseError.GENERIC_ERROR
            if response and response.status:
                if status_count is not None:
                    status_count -= 1
                cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status)
                status = response.status

        history = self.history + (
            RequestHistory(method, url, error, status, redirect_location),
        )

        new_retry = self.new(
            total=total,
            connect=connect,
            read=read,
            redirect=redirect,
            status=status_count,
            other=other,
            history=history,
        )

        if new_retry.is_exhausted():
            reason = error or ResponseError(cause)
>           raise MaxRetryError(_pool, url, reason) from reason  # type: ignore[arg-type]
            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
E           urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=8191): Max retries exceeded with url: /rests/data/network-topology:network-topology/topology=topology-netconf/node=XPDRA01?content=nonconfig (Caused by NewConnectionError("HTTPConnection(host='localhost', port=8191): Failed to establish a new connection: [Errno 111] Connection refused"))

../.tox/tests121/lib/python3.11/site-packages/urllib3/util/retry.py:535: MaxRetryError

During handling of the above exception, another exception occurred:

self =

    def test_08_xpdr_device_connected(self):
>       response =
        test_utils.check_device_connection("XPDRA01")
        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

transportpce_tests/1.2.1/test01_portmapping.py:104:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
transportpce_tests/common/test_utils.py:409: in check_device_connection
    response = get_request(url[RESTCONF_VERSION].format('{}', node))
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
transportpce_tests/common/test_utils.py:117: in get_request
    return requests.request(
../.tox/tests121/lib/python3.11/site-packages/requests/api.py:59: in request
    return session.request(method=method, url=url, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
../.tox/tests121/lib/python3.11/site-packages/requests/sessions.py:589: in request
    resp = self.send(prep, **send_kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
../.tox/tests121/lib/python3.11/site-packages/requests/sessions.py:703: in send
    r = adapter.send(request, **kwargs)
        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self =
request = , stream = False
timeout = Timeout(connect=30, read=30, total=None), verify = True, cert = None
proxies = OrderedDict()

    def send(
        self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None
    ):
        """Sends PreparedRequest object. Returns Response object.

        :param request: The :class:`PreparedRequest ` being sent.
        :param stream: (optional) Whether to stream the request content.
        :param timeout: (optional) How long to wait for the server to send
            data before giving up, as a float, or a :ref:`(connect timeout,
            read timeout) ` tuple.
        :type timeout: float or tuple or urllib3 Timeout object
        :param verify: (optional) Either a boolean, in which case it controls whether
            we verify the server's TLS certificate, or a string, in which case it
            must be a path to a CA bundle to use
        :param cert: (optional) Any user-provided SSL certificate to be trusted.
        :param proxies: (optional) The proxies dictionary to apply to the request.
        :rtype: requests.Response
        """

        try:
            conn = self.get_connection_with_tls_context(
                request, verify, proxies=proxies, cert=cert
            )
        except LocationValueError as e:
            raise InvalidURL(e, request=request)

        self.cert_verify(conn, request.url, verify, cert)
        url = self.request_url(request, proxies)
        self.add_headers(
            request,
            stream=stream,
            timeout=timeout,
            verify=verify,
            cert=cert,
            proxies=proxies,
        )

        chunked = not (request.body is None or "Content-Length" in request.headers)

        if isinstance(timeout, tuple):
            try:
                connect, read = timeout
                timeout = TimeoutSauce(connect=connect, read=read)
            except ValueError:
                raise ValueError(
                    f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, "
                    f"or a single float to set both timeouts to the same value."
                )
        elif isinstance(timeout, TimeoutSauce):
            pass
        else:
            timeout = TimeoutSauce(connect=timeout, read=timeout)

        try:
            resp = conn.urlopen(
                method=request.method,
                url=url,
                body=request.body,
                headers=request.headers,
                redirect=False,
                assert_same_host=False,
                preload_content=False,
                decode_content=False,
                retries=self.max_retries,
                timeout=timeout,
                chunked=chunked,
            )

        except (ProtocolError, OSError) as err:
            raise ConnectionError(err, request=request)

        except MaxRetryError as e:
            if isinstance(e.reason, ConnectTimeoutError):
                # TODO: Remove this in 3.0.0: see #2811
                if not isinstance(e.reason, NewConnectionError):
                    raise ConnectTimeout(e, request=request)

            if isinstance(e.reason, ResponseError):
                raise RetryError(e, request=request)

            if isinstance(e.reason, _ProxyError):
                raise ProxyError(e, request=request)

            if isinstance(e.reason, _SSLError):
                # This branch is for urllib3 v1.22 and later.
                raise SSLError(e, request=request)

>           raise ConnectionError(e, request=request)
E           requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=8191): Max retries exceeded with url: /rests/data/network-topology:network-topology/topology=topology-netconf/node=XPDRA01?content=nonconfig (Caused by NewConnectionError("HTTPConnection(host='localhost', port=8191): Failed to establish a new connection: [Errno 111] Connection refused"))

../.tox/tests121/lib/python3.11/site-packages/requests/adapters.py:677: ConnectionError
----------------------------- Captured stdout call -----------------------------
execution of test_08_xpdr_device_connected
__________ TestTransportPCEPortmapping.test_09_xpdr_portmapping_info ___________

self =

    def _new_conn(self) -> socket.socket:
        """Establish a socket connection and set nodelay settings on it.

        :return: New socket connection.
        """
        try:
>           sock = connection.create_connection(
                (self._dns_host, self.port),
                self.timeout,
                source_address=self.source_address,
                socket_options=self.socket_options,
            )

../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:204:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
../.tox/tests121/lib/python3.11/site-packages/urllib3/util/connection.py:85: in create_connection
    raise err
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

address = ('localhost', 8191), timeout = 30, source_address = None
socket_options = [(6, 1, 1)]

    def create_connection(
        address: tuple[str, int],
        timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT,
        source_address: tuple[str, int] | None = None,
        socket_options: _TYPE_SOCKET_OPTIONS | None = None,
    ) -> socket.socket:
        """Connect to *address* and return the socket object.

        Convenience function. Connect to *address* (a 2-tuple ``(host,
        port)``) and return the socket object. Passing the optional
        *timeout* parameter will set the timeout on the socket instance
        before attempting to connect. If no *timeout* is supplied, the
        global default timeout setting returned by :func:`socket.getdefaulttimeout`
        is used. If *source_address* is set it must be a tuple of (host, port)
        for the socket to bind as a source address before making the connection.
        An host of '' or port 0 tells the OS to use the default.
        """

        host, port = address
        if host.startswith("["):
            host = host.strip("[]")
        err = None

        # Using the value from allowed_gai_family() in the context of getaddrinfo lets
        # us select whether to work with IPv4 DNS records, IPv6 records, or both.
        # The original create_connection function always returns all records.
        family = allowed_gai_family()

        try:
            host.encode("idna")
        except UnicodeError:
            raise LocationParseError(f"'{host}', label empty or too long") from None

        for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM):
            af, socktype, proto, canonname, sa = res
            sock = None
            try:
                sock = socket.socket(af, socktype, proto)

                # If provided, set socket level options before connecting.
                _set_socket_options(sock, socket_options)

                if timeout is not _DEFAULT_TIMEOUT:
                    sock.settimeout(timeout)
                if source_address:
                    sock.bind(source_address)
>               sock.connect(sa)
E               ConnectionRefusedError: [Errno 111] Connection refused

../.tox/tests121/lib/python3.11/site-packages/urllib3/util/connection.py:73: ConnectionRefusedError

The above exception was the direct cause of the following exception:

self =
method = 'GET'
url = '/rests/data/transportpce-portmapping:network/nodes=XPDRA01/node-info'
body = None
headers = {'User-Agent': 'python-requests/2.32.5', 'Accept-Encoding': 'gzip, deflate', 'Accept': 'application/json', 'Connection': 'keep-alive', 'Content-Type': 'application/json', 'Authorization': 'Basic YWRtaW46YWRtaW4='}
retries = Retry(total=0, connect=None, read=False, redirect=None, status=None)
redirect = False, assert_same_host = False
timeout = Timeout(connect=30, read=30, total=None),
pool_timeout = None
release_conn = False, chunked = False, body_pos = None, preload_content = False
decode_content = False, response_kw = {}
parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/rests/data/transportpce-portmapping:network/nodes=XPDRA01/node-info', query=None, fragment=None)
destination_scheme = None, conn = None, release_this_conn = True
http_tunnel_required = False, err = None, clean_exit = False

    def urlopen(  # type: ignore[override]
        self,
        method: str,
        url: str,
        body: _TYPE_BODY | None = None,
        headers: typing.Mapping[str, str] | None = None,
        retries: Retry | bool | int | None = None,
        redirect: bool = True,
        assert_same_host: bool = True,
        timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT,
        pool_timeout: int | None = None,
        release_conn: bool | None = None,
        chunked: bool = False,
        body_pos: _TYPE_BODY_POSITION | None = None,
        preload_content: bool = True,
        decode_content: bool = True,
        **response_kw: typing.Any,
    ) -> BaseHTTPResponse:
        """
        Get a connection from the pool and perform an HTTP request. This is the
        lowest level call for making a request, so you'll need to specify all
        the raw details.

        .. note::

            More commonly, it's appropriate to use a convenience method
            such as :meth:`request`.

        .. note::

            `release_conn` will only behave as expected if
            `preload_content=False` because we want to make
            `preload_content=False` the default behaviour someday soon without
            breaking backwards compatibility.

        :param method:
            HTTP request method (such as GET, POST, PUT, etc.)

        :param url:
            The URL to perform the request on.

        :param body:
            Data to send in the request body, either :class:`str`, :class:`bytes`,
            an iterable of :class:`str`/:class:`bytes`, or a file-like object.

        :param headers:
            Dictionary of custom headers to send, such as User-Agent,
            If-None-Match, etc. If None, pool headers are used. If provided,
            these headers completely replace any pool-specific headers.

        :param retries:
            Configure the number of retries to allow before raising a
            :class:`~urllib3.exceptions.MaxRetryError` exception.

            If ``None`` (default) will retry 3 times, see ``Retry.DEFAULT``. Pass a
            :class:`~urllib3.util.retry.Retry` object for fine-grained control
            over different types of retries.
            Pass an integer number to retry connection errors that many times,
            but no other types of errors. Pass zero to never retry.

            If ``False``, then retries are disabled and any exception is raised
            immediately. Also, instead of raising a MaxRetryError on redirects,
            the redirect response will be returned.

        :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int.

        :param redirect:
            If True, automatically handle redirects (status codes 301, 302,
            303, 307, 308). Each redirect counts as a retry. Disabling retries
            will disable redirect, too.

        :param assert_same_host:
            If ``True``, will make sure that the host of the pool requests is
            consistent else will raise HostChangedError. When ``False``, you can
            use the pool on an HTTP proxy and request foreign hosts.

        :param timeout:
            If specified, overrides the default timeout for this one
            request. It may be a float (in seconds) or an instance of
            :class:`urllib3.util.Timeout`.

        :param pool_timeout:
            If set and the pool is set to block=True, then this method will
            block for ``pool_timeout`` seconds and raise EmptyPoolError if no
            connection is available within the time period.

        :param bool preload_content:
            If True, the response's body will be preloaded into memory.

        :param bool decode_content:
            If True, will attempt to decode the body based on the
            'content-encoding' header.

        :param release_conn:
            If False, then the urlopen call will not release the connection
            back into the pool once a response is received (but will release if
            you read the entire contents of the response such as when
            `preload_content=True`). This is useful if you're not preloading
            the response's content immediately. You will need to call
            ``r.release_conn()`` on the response ``r`` to return the connection
            back into the pool. If None, it takes the value of ``preload_content``
            which defaults to ``True``.

        :param bool chunked:
            If True, urllib3 will send the body using chunked transfer
            encoding. Otherwise, urllib3 will send the body using the standard
            content-length form. Defaults to False.

        :param int body_pos:
            Position to seek to in file-like body in the event of a retry or
            redirect. Typically this won't need to be set because urllib3 will
            auto-populate the value when needed.
        """
        parsed_url = parse_url(url)
        destination_scheme = parsed_url.scheme

        if headers is None:
            headers = self.headers

        if not isinstance(retries, Retry):
            retries = Retry.from_int(retries, redirect=redirect, default=self.retries)

        if release_conn is None:
            release_conn = preload_content

        # Check host
        if assert_same_host and not self.is_same_host(url):
            raise HostChangedError(self, url, retries)

        # Ensure that the URL we're connecting to is properly encoded
        if url.startswith("/"):
            url = to_str(_encode_target(url))
        else:
            url = to_str(parsed_url.url)

        conn = None

        # Track whether `conn` needs to be released before
        # returning/raising/recursing. Update this variable if necessary, and
        # leave `release_conn` constant throughout the function. That way, if
        # the function recurses, the original value of `release_conn` will be
        # passed down into the recursive call, and its value will be respected.
        #
        # See issue #651 [1] for details.
        #
        # [1]
        release_this_conn = release_conn

        http_tunnel_required = connection_requires_http_tunnel(
            self.proxy, self.proxy_config, destination_scheme
        )

        # Merge the proxy headers. Only done when not using HTTP CONNECT. We
        # have to copy the headers dict so we can safely change it without those
        # changes being reflected in anyone else's copy.
        if not http_tunnel_required:
            headers = headers.copy()  # type: ignore[attr-defined]
            headers.update(self.proxy_headers)  # type: ignore[union-attr]

        # Must keep the exception bound to a separate variable or else Python 3
        # complains about UnboundLocalError.
11:54:50 err = None 11:54:50 11:54:50 # Keep track of whether we cleanly exited the except block. This 11:54:50 # ensures we do proper cleanup in finally. 11:54:50 clean_exit = False 11:54:50 11:54:50 # Rewind body position, if needed. Record current position 11:54:50 # for future rewinds in the event of a redirect/retry. 11:54:50 body_pos = set_file_position(body, body_pos) 11:54:50 11:54:50 try: 11:54:50 # Request a connection from the queue. 11:54:50 timeout_obj = self._get_timeout(timeout) 11:54:50 conn = self._get_conn(timeout=pool_timeout) 11:54:50 11:54:50 conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] 11:54:50 11:54:50 # Is this a closed/new connection that requires CONNECT tunnelling? 11:54:50 if self.proxy is not None and http_tunnel_required and conn.is_closed: 11:54:50 try: 11:54:50 self._prepare_proxy(conn) 11:54:50 except (BaseSSLError, OSError, SocketTimeout) as e: 11:54:50 self._raise_timeout( 11:54:50 err=e, url=self.proxy.url, timeout_value=conn.timeout 11:54:50 ) 11:54:50 raise 11:54:50 11:54:50 # If we're going to release the connection in ``finally:``, then 11:54:50 # the response doesn't need to know about the connection. Otherwise 11:54:50 # it will also try to release it and we'll have a double-release 11:54:50 # mess. 
11:54:50 response_conn = conn if not release_conn else None 11:54:50 11:54:50 # Make the request on the HTTPConnection object 11:54:50 > response = self._make_request( 11:54:50 conn, 11:54:50 method, 11:54:50 url, 11:54:50 timeout=timeout_obj, 11:54:50 body=body, 11:54:50 headers=headers, 11:54:50 chunked=chunked, 11:54:50 retries=retries, 11:54:50 response_conn=response_conn, 11:54:50 preload_content=preload_content, 11:54:50 decode_content=decode_content, 11:54:50 **response_kw, 11:54:50 ) 11:54:50 11:54:50 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connectionpool.py:787: 11:54:50 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 11:54:50 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connectionpool.py:493: in _make_request 11:54:50 conn.request( 11:54:50 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:500: in request 11:54:50 self.endheaders() 11:54:50 /opt/pyenv/versions/3.11.10/lib/python3.11/http/client.py:1298: in endheaders 11:54:50 self._send_output(message_body, encode_chunked=encode_chunked) 11:54:50 /opt/pyenv/versions/3.11.10/lib/python3.11/http/client.py:1058: in _send_output 11:54:50 self.send(msg) 11:54:50 /opt/pyenv/versions/3.11.10/lib/python3.11/http/client.py:996: in send 11:54:50 self.connect() 11:54:50 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:331: in connect 11:54:50 self.sock = self._new_conn() 11:54:50 ^^^^^^^^^^^^^^^^ 11:54:50 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 11:54:50 11:54:50 self = 11:54:50 11:54:50 def _new_conn(self) -> socket.socket: 11:54:50 """Establish a socket connection and set nodelay settings on it. 11:54:50 11:54:50 :return: New socket connection. 
11:54:50 """ 11:54:50 try: 11:54:50 sock = connection.create_connection( 11:54:50 (self._dns_host, self.port), 11:54:50 self.timeout, 11:54:50 source_address=self.source_address, 11:54:50 socket_options=self.socket_options, 11:54:50 ) 11:54:50 except socket.gaierror as e: 11:54:50 raise NameResolutionError(self.host, self, e) from e 11:54:50 except SocketTimeout as e: 11:54:50 raise ConnectTimeoutError( 11:54:50 self, 11:54:50 f"Connection to {self.host} timed out. (connect timeout={self.timeout})", 11:54:50 ) from e 11:54:50 11:54:50 except OSError as e: 11:54:50 > raise NewConnectionError( 11:54:50 self, f"Failed to establish a new connection: {e}" 11:54:50 ) from e 11:54:50 E urllib3.exceptions.NewConnectionError: HTTPConnection(host='localhost', port=8191): Failed to establish a new connection: [Errno 111] Connection refused 11:54:50 11:54:50 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:219: NewConnectionError 11:54:50 11:54:50 The above exception was the direct cause of the following exception: 11:54:50 11:54:50 self = 11:54:50 request = , stream = False 11:54:50 timeout = Timeout(connect=30, read=30, total=None), verify = True, cert = None 11:54:50 proxies = OrderedDict() 11:54:50 11:54:50 def send( 11:54:50 self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 11:54:50 ): 11:54:50 """Sends PreparedRequest object. Returns Response object. 11:54:50 11:54:50 :param request: The :class:`PreparedRequest ` being sent. 11:54:50 :param stream: (optional) Whether to stream the request content. 11:54:50 :param timeout: (optional) How long to wait for the server to send 11:54:50 data before giving up, as a float, or a :ref:`(connect timeout, 11:54:50 read timeout) ` tuple. 
11:54:50 :type timeout: float or tuple or urllib3 Timeout object 11:54:50 :param verify: (optional) Either a boolean, in which case it controls whether 11:54:50 we verify the server's TLS certificate, or a string, in which case it 11:54:50 must be a path to a CA bundle to use 11:54:50 :param cert: (optional) Any user-provided SSL certificate to be trusted. 11:54:50 :param proxies: (optional) The proxies dictionary to apply to the request. 11:54:50 :rtype: requests.Response 11:54:50 """ 11:54:50 11:54:50 try: 11:54:50 conn = self.get_connection_with_tls_context( 11:54:50 request, verify, proxies=proxies, cert=cert 11:54:50 ) 11:54:50 except LocationValueError as e: 11:54:50 raise InvalidURL(e, request=request) 11:54:50 11:54:50 self.cert_verify(conn, request.url, verify, cert) 11:54:50 url = self.request_url(request, proxies) 11:54:50 self.add_headers( 11:54:50 request, 11:54:50 stream=stream, 11:54:50 timeout=timeout, 11:54:50 verify=verify, 11:54:50 cert=cert, 11:54:50 proxies=proxies, 11:54:50 ) 11:54:50 11:54:50 chunked = not (request.body is None or "Content-Length" in request.headers) 11:54:50 11:54:50 if isinstance(timeout, tuple): 11:54:50 try: 11:54:50 connect, read = timeout 11:54:50 timeout = TimeoutSauce(connect=connect, read=read) 11:54:50 except ValueError: 11:54:50 raise ValueError( 11:54:50 f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 11:54:50 f"or a single float to set both timeouts to the same value." 
11:54:50 ) 11:54:50 elif isinstance(timeout, TimeoutSauce): 11:54:50 pass 11:54:50 else: 11:54:50 timeout = TimeoutSauce(connect=timeout, read=timeout) 11:54:50 11:54:50 try: 11:54:50 > resp = conn.urlopen( 11:54:50 method=request.method, 11:54:50 url=url, 11:54:50 body=request.body, 11:54:50 headers=request.headers, 11:54:50 redirect=False, 11:54:50 assert_same_host=False, 11:54:50 preload_content=False, 11:54:50 decode_content=False, 11:54:50 retries=self.max_retries, 11:54:50 timeout=timeout, 11:54:50 chunked=chunked, 11:54:50 ) 11:54:50 11:54:50 ../.tox/tests121/lib/python3.11/site-packages/requests/adapters.py:644: 11:54:50 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 11:54:50 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connectionpool.py:841: in urlopen 11:54:50 retries = retries.increment( 11:54:50 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 11:54:50 11:54:50 self = Retry(total=0, connect=None, read=False, redirect=None, status=None) 11:54:50 method = 'GET' 11:54:50 url = '/rests/data/transportpce-portmapping:network/nodes=XPDRA01/node-info' 11:54:50 response = None 11:54:50 error = NewConnectionError("HTTPConnection(host='localhost', port=8191): Failed to establish a new connection: [Errno 111] Connection refused") 11:54:50 _pool = 11:54:50 _stacktrace = 11:54:50 11:54:50 def increment( 11:54:50 self, 11:54:50 method: str | None = None, 11:54:50 url: str | None = None, 11:54:50 response: BaseHTTPResponse | None = None, 11:54:50 error: Exception | None = None, 11:54:50 _pool: ConnectionPool | None = None, 11:54:50 _stacktrace: TracebackType | None = None, 11:54:50 ) -> Self: 11:54:50 """Return a new Retry object with incremented retry counters. 11:54:50 11:54:50 :param response: A response object, or None, if the server did not 11:54:50 return a response. 
11:54:50 :type response: :class:`~urllib3.response.BaseHTTPResponse` 11:54:50 :param Exception error: An error encountered during the request, or 11:54:50 None if the response was received successfully. 11:54:50 11:54:50 :return: A new ``Retry`` object. 11:54:50 """ 11:54:50 if self.total is False and error: 11:54:50 # Disabled, indicate to re-raise the error. 11:54:50 raise reraise(type(error), error, _stacktrace) 11:54:50 11:54:50 total = self.total 11:54:50 if total is not None: 11:54:50 total -= 1 11:54:50 11:54:50 connect = self.connect 11:54:50 read = self.read 11:54:50 redirect = self.redirect 11:54:50 status_count = self.status 11:54:50 other = self.other 11:54:50 cause = "unknown" 11:54:50 status = None 11:54:50 redirect_location = None 11:54:50 11:54:50 if error and self._is_connection_error(error): 11:54:50 # Connect retry? 11:54:50 if connect is False: 11:54:50 raise reraise(type(error), error, _stacktrace) 11:54:50 elif connect is not None: 11:54:50 connect -= 1 11:54:50 11:54:50 elif error and self._is_read_error(error): 11:54:50 # Read retry? 11:54:50 if read is False or method is None or not self._is_method_retryable(method): 11:54:50 raise reraise(type(error), error, _stacktrace) 11:54:50 elif read is not None: 11:54:50 read -= 1 11:54:50 11:54:50 elif error: 11:54:50 # Other retry? 11:54:50 if other is not None: 11:54:50 other -= 1 11:54:50 11:54:50 elif response and response.get_redirect_location(): 11:54:50 # Redirect retry? 
11:54:50 if redirect is not None: 11:54:50 redirect -= 1 11:54:50 cause = "too many redirects" 11:54:50 response_redirect_location = response.get_redirect_location() 11:54:50 if response_redirect_location: 11:54:50 redirect_location = response_redirect_location 11:54:50 status = response.status 11:54:50 11:54:50 else: 11:54:50 # Incrementing because of a server error like a 500 in 11:54:50 # status_forcelist and the given method is in the allowed_methods 11:54:50 cause = ResponseError.GENERIC_ERROR 11:54:50 if response and response.status: 11:54:50 if status_count is not None: 11:54:50 status_count -= 1 11:54:50 cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status) 11:54:50 status = response.status 11:54:50 11:54:50 history = self.history + ( 11:54:50 RequestHistory(method, url, error, status, redirect_location), 11:54:50 ) 11:54:50 11:54:50 new_retry = self.new( 11:54:50 total=total, 11:54:50 connect=connect, 11:54:50 read=read, 11:54:50 redirect=redirect, 11:54:50 status=status_count, 11:54:50 other=other, 11:54:50 history=history, 11:54:50 ) 11:54:50 11:54:50 if new_retry.is_exhausted(): 11:54:50 reason = error or ResponseError(cause) 11:54:50 > raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type] 11:54:50 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 11:54:50 E urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=8191): Max retries exceeded with url: /rests/data/transportpce-portmapping:network/nodes=XPDRA01/node-info (Caused by NewConnectionError("HTTPConnection(host='localhost', port=8191): Failed to establish a new connection: [Errno 111] Connection refused")) 11:54:50 11:54:50 ../.tox/tests121/lib/python3.11/site-packages/urllib3/util/retry.py:535: MaxRetryError 11:54:50 11:54:50 During handling of the above exception, another exception occurred: 11:54:50 11:54:50 self = 11:54:50 11:54:50 def test_09_xpdr_portmapping_info(self): 11:54:50 > response = 
test_utils.get_portmapping_node_attr("XPDRA01", "node-info", None) 11:54:50 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 11:54:50 11:54:50 transportpce_tests/1.2.1/test01_portmapping.py:110: 11:54:50 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 11:54:50 transportpce_tests/common/test_utils.py:519: in get_portmapping_node_attr 11:54:50 response = get_request(target_url) 11:54:50 ^^^^^^^^^^^^^^^^^^^^^^^ 11:54:50 transportpce_tests/common/test_utils.py:117: in get_request 11:54:50 return requests.request( 11:54:50 ../.tox/tests121/lib/python3.11/site-packages/requests/api.py:59: in request 11:54:50 return session.request(method=method, url=url, **kwargs) 11:54:50 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 11:54:50 ../.tox/tests121/lib/python3.11/site-packages/requests/sessions.py:589: in request 11:54:50 resp = self.send(prep, **send_kwargs) 11:54:50 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 11:54:50 ../.tox/tests121/lib/python3.11/site-packages/requests/sessions.py:703: in send 11:54:50 r = adapter.send(request, **kwargs) 11:54:50 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 11:54:50 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 11:54:50 11:54:50 self = 11:54:50 request = , stream = False 11:54:50 timeout = Timeout(connect=30, read=30, total=None), verify = True, cert = None 11:54:50 proxies = OrderedDict() 11:54:50 11:54:50 def send( 11:54:50 self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 11:54:50 ): 11:54:50 """Sends PreparedRequest object. Returns Response object. 11:54:50 11:54:50 :param request: The :class:`PreparedRequest ` being sent. 11:54:50 :param stream: (optional) Whether to stream the request content. 11:54:50 :param timeout: (optional) How long to wait for the server to send 11:54:50 data before giving up, as a float, or a :ref:`(connect timeout, 11:54:50 read timeout) ` tuple. 
11:54:50 :type timeout: float or tuple or urllib3 Timeout object 11:54:50 :param verify: (optional) Either a boolean, in which case it controls whether 11:54:50 we verify the server's TLS certificate, or a string, in which case it 11:54:50 must be a path to a CA bundle to use 11:54:50 :param cert: (optional) Any user-provided SSL certificate to be trusted. 11:54:50 :param proxies: (optional) The proxies dictionary to apply to the request. 11:54:50 :rtype: requests.Response 11:54:50 """ 11:54:50 11:54:50 try: 11:54:50 conn = self.get_connection_with_tls_context( 11:54:50 request, verify, proxies=proxies, cert=cert 11:54:50 ) 11:54:50 except LocationValueError as e: 11:54:50 raise InvalidURL(e, request=request) 11:54:50 11:54:50 self.cert_verify(conn, request.url, verify, cert) 11:54:50 url = self.request_url(request, proxies) 11:54:50 self.add_headers( 11:54:50 request, 11:54:50 stream=stream, 11:54:50 timeout=timeout, 11:54:50 verify=verify, 11:54:50 cert=cert, 11:54:50 proxies=proxies, 11:54:50 ) 11:54:50 11:54:50 chunked = not (request.body is None or "Content-Length" in request.headers) 11:54:50 11:54:50 if isinstance(timeout, tuple): 11:54:50 try: 11:54:50 connect, read = timeout 11:54:50 timeout = TimeoutSauce(connect=connect, read=read) 11:54:50 except ValueError: 11:54:50 raise ValueError( 11:54:50 f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 11:54:50 f"or a single float to set both timeouts to the same value." 
11:54:50 ) 11:54:50 elif isinstance(timeout, TimeoutSauce): 11:54:50 pass 11:54:50 else: 11:54:50 timeout = TimeoutSauce(connect=timeout, read=timeout) 11:54:50 11:54:50 try: 11:54:50 resp = conn.urlopen( 11:54:50 method=request.method, 11:54:50 url=url, 11:54:50 body=request.body, 11:54:50 headers=request.headers, 11:54:50 redirect=False, 11:54:50 assert_same_host=False, 11:54:50 preload_content=False, 11:54:50 decode_content=False, 11:54:50 retries=self.max_retries, 11:54:50 timeout=timeout, 11:54:50 chunked=chunked, 11:54:50 ) 11:54:50 11:54:50 except (ProtocolError, OSError) as err: 11:54:50 raise ConnectionError(err, request=request) 11:54:50 11:54:50 except MaxRetryError as e: 11:54:50 if isinstance(e.reason, ConnectTimeoutError): 11:54:50 # TODO: Remove this in 3.0.0: see #2811 11:54:50 if not isinstance(e.reason, NewConnectionError): 11:54:50 raise ConnectTimeout(e, request=request) 11:54:50 11:54:50 if isinstance(e.reason, ResponseError): 11:54:50 raise RetryError(e, request=request) 11:54:50 11:54:50 if isinstance(e.reason, _ProxyError): 11:54:50 raise ProxyError(e, request=request) 11:54:50 11:54:50 if isinstance(e.reason, _SSLError): 11:54:50 # This branch is for urllib3 v1.22 and later. 
11:54:50 raise SSLError(e, request=request) 11:54:50 11:54:50 > raise ConnectionError(e, request=request) 11:54:50 E requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=8191): Max retries exceeded with url: /rests/data/transportpce-portmapping:network/nodes=XPDRA01/node-info (Caused by NewConnectionError("HTTPConnection(host='localhost', port=8191): Failed to establish a new connection: [Errno 111] Connection refused")) 11:54:50 11:54:50 ../.tox/tests121/lib/python3.11/site-packages/requests/adapters.py:677: ConnectionError 11:54:50 ----------------------------- Captured stdout call ----------------------------- 11:54:50 execution of test_09_xpdr_portmapping_info 11:54:50 ________ TestTransportPCEPortmapping.test_10_xpdr_portmapping_NETWORK1 _________ 11:54:50 11:54:50 self = 11:54:50 11:54:50 def _new_conn(self) -> socket.socket: 11:54:50 """Establish a socket connection and set nodelay settings on it. 11:54:50 11:54:50 :return: New socket connection. 11:54:50 """ 11:54:50 try: 11:54:50 > sock = connection.create_connection( 11:54:50 (self._dns_host, self.port), 11:54:50 self.timeout, 11:54:50 source_address=self.source_address, 11:54:50 socket_options=self.socket_options, 11:54:50 ) 11:54:50 11:54:50 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:204: 11:54:50 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 11:54:50 ../.tox/tests121/lib/python3.11/site-packages/urllib3/util/connection.py:85: in create_connection 11:54:50 raise err 11:54:50 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 11:54:50 11:54:50 address = ('localhost', 8191), timeout = 30, source_address = None 11:54:50 socket_options = [(6, 1, 1)] 11:54:50 11:54:50 def create_connection( 11:54:50 address: tuple[str, int], 11:54:50 timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 11:54:50 source_address: tuple[str, int] | None = None, 11:54:50 socket_options: _TYPE_SOCKET_OPTIONS | None = None, 
11:54:50 ) -> socket.socket: 11:54:50 """Connect to *address* and return the socket object. 11:54:50 11:54:50 Convenience function. Connect to *address* (a 2-tuple ``(host, 11:54:50 port)``) and return the socket object. Passing the optional 11:54:50 *timeout* parameter will set the timeout on the socket instance 11:54:50 before attempting to connect. If no *timeout* is supplied, the 11:54:50 global default timeout setting returned by :func:`socket.getdefaulttimeout` 11:54:50 is used. If *source_address* is set it must be a tuple of (host, port) 11:54:50 for the socket to bind as a source address before making the connection. 11:54:50 An host of '' or port 0 tells the OS to use the default. 11:54:50 """ 11:54:50 11:54:50 host, port = address 11:54:50 if host.startswith("["): 11:54:50 host = host.strip("[]") 11:54:50 err = None 11:54:50 11:54:50 # Using the value from allowed_gai_family() in the context of getaddrinfo lets 11:54:50 # us select whether to work with IPv4 DNS records, IPv6 records, or both. 11:54:50 # The original create_connection function always returns all records. 11:54:50 family = allowed_gai_family() 11:54:50 11:54:50 try: 11:54:50 host.encode("idna") 11:54:50 except UnicodeError: 11:54:50 raise LocationParseError(f"'{host}', label empty or too long") from None 11:54:50 11:54:50 for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM): 11:54:50 af, socktype, proto, canonname, sa = res 11:54:50 sock = None 11:54:50 try: 11:54:50 sock = socket.socket(af, socktype, proto) 11:54:50 11:54:50 # If provided, set socket level options before connecting. 
11:54:50 _set_socket_options(sock, socket_options) 11:54:50 11:54:50 if timeout is not _DEFAULT_TIMEOUT: 11:54:50 sock.settimeout(timeout) 11:54:50 if source_address: 11:54:50 sock.bind(source_address) 11:54:50 > sock.connect(sa) 11:54:50 E ConnectionRefusedError: [Errno 111] Connection refused 11:54:50 11:54:50 ../.tox/tests121/lib/python3.11/site-packages/urllib3/util/connection.py:73: ConnectionRefusedError 11:54:50 11:54:50 The above exception was the direct cause of the following exception: 11:54:50 11:54:50 self = 11:54:50 method = 'GET' 11:54:50 url = '/rests/data/transportpce-portmapping:network/nodes=XPDRA01/mapping=XPDR1-NETWORK1' 11:54:50 body = None 11:54:50 headers = {'User-Agent': 'python-requests/2.32.5', 'Accept-Encoding': 'gzip, deflate', 'Accept': 'application/json', 'Connection': 'keep-alive', 'Content-Type': 'application/json', 'Authorization': 'Basic YWRtaW46YWRtaW4='} 11:54:50 retries = Retry(total=0, connect=None, read=False, redirect=None, status=None) 11:54:50 redirect = False, assert_same_host = False 11:54:50 timeout = Timeout(connect=30, read=30, total=None), pool_timeout = None 11:54:50 release_conn = False, chunked = False, body_pos = None, preload_content = False 11:54:50 decode_content = False, response_kw = {} 11:54:50 parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/rests/data/transportpce-portmapping:network/nodes=XPDRA01/mapping=XPDR1-NETWORK1', query=None, fragment=None) 11:54:50 destination_scheme = None, conn = None, release_this_conn = True 11:54:50 http_tunnel_required = False, err = None, clean_exit = False 11:54:50 11:54:50 def urlopen( # type: ignore[override] 11:54:50 self, 11:54:50 method: str, 11:54:50 url: str, 11:54:50 body: _TYPE_BODY | None = None, 11:54:50 headers: typing.Mapping[str, str] | None = None, 11:54:50 retries: Retry | bool | int | None = None, 11:54:50 redirect: bool = True, 11:54:50 assert_same_host: bool = True, 11:54:50 timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 11:54:50 
pool_timeout: int | None = None, 11:54:50 release_conn: bool | None = None, 11:54:50 chunked: bool = False, 11:54:50 body_pos: _TYPE_BODY_POSITION | None = None, 11:54:50 preload_content: bool = True, 11:54:50 decode_content: bool = True, 11:54:50 **response_kw: typing.Any, 11:54:50 ) -> BaseHTTPResponse: 11:54:50 """ 11:54:50 Get a connection from the pool and perform an HTTP request. This is the 11:54:50 lowest level call for making a request, so you'll need to specify all 11:54:50 the raw details. 11:54:50 11:54:50 .. note:: 11:54:50 11:54:50 More commonly, it's appropriate to use a convenience method 11:54:50 such as :meth:`request`. 11:54:50 11:54:50 .. note:: 11:54:50 11:54:50 `release_conn` will only behave as expected if 11:54:50 `preload_content=False` because we want to make 11:54:50 `preload_content=False` the default behaviour someday soon without 11:54:50 breaking backwards compatibility. 11:54:50 11:54:50 :param method: 11:54:50 HTTP request method (such as GET, POST, PUT, etc.) 11:54:50 11:54:50 :param url: 11:54:50 The URL to perform the request on. 11:54:50 11:54:50 :param body: 11:54:50 Data to send in the request body, either :class:`str`, :class:`bytes`, 11:54:50 an iterable of :class:`str`/:class:`bytes`, or a file-like object. 11:54:50 11:54:50 :param headers: 11:54:50 Dictionary of custom headers to send, such as User-Agent, 11:54:50 If-None-Match, etc. If None, pool headers are used. If provided, 11:54:50 these headers completely replace any pool-specific headers. 11:54:50 11:54:50 :param retries: 11:54:50 Configure the number of retries to allow before raising a 11:54:50 :class:`~urllib3.exceptions.MaxRetryError` exception. 11:54:50 11:54:50 If ``None`` (default) will retry 3 times, see ``Retry.DEFAULT``. Pass a 11:54:50 :class:`~urllib3.util.retry.Retry` object for fine-grained control 11:54:50 over different types of retries. 11:54:50 Pass an integer number to retry connection errors that many times, 11:54:50 but no other types of errors. 
Pass zero to never retry. 11:54:50 11:54:50 If ``False``, then retries are disabled and any exception is raised 11:54:50 immediately. Also, instead of raising a MaxRetryError on redirects, 11:54:50 the redirect response will be returned. 11:54:50 11:54:50 :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. 11:54:50 11:54:50 :param redirect: 11:54:50 If True, automatically handle redirects (status codes 301, 302, 11:54:50 303, 307, 308). Each redirect counts as a retry. Disabling retries 11:54:50 will disable redirect, too. 11:54:50 11:54:50 :param assert_same_host: 11:54:50 If ``True``, will make sure that the host of the pool requests is 11:54:50 consistent else will raise HostChangedError. When ``False``, you can 11:54:50 use the pool on an HTTP proxy and request foreign hosts. 11:54:50 11:54:50 :param timeout: 11:54:50 If specified, overrides the default timeout for this one 11:54:50 request. It may be a float (in seconds) or an instance of 11:54:50 :class:`urllib3.util.Timeout`. 11:54:50 11:54:50 :param pool_timeout: 11:54:50 If set and the pool is set to block=True, then this method will 11:54:50 block for ``pool_timeout`` seconds and raise EmptyPoolError if no 11:54:50 connection is available within the time period. 11:54:50 11:54:50 :param bool preload_content: 11:54:50 If True, the response's body will be preloaded into memory. 11:54:50 11:54:50 :param bool decode_content: 11:54:50 If True, will attempt to decode the body based on the 11:54:50 'content-encoding' header. 11:54:50 11:54:50 :param release_conn: 11:54:50 If False, then the urlopen call will not release the connection 11:54:50 back into the pool once a response is received (but will release if 11:54:50 you read the entire contents of the response such as when 11:54:50 `preload_content=True`). This is useful if you're not preloading 11:54:50 the response's content immediately. 
11:54:50             You will need to call
11:54:50             ``r.release_conn()`` on the response ``r`` to return the connection
11:54:50             back into the pool. If None, it takes the value of ``preload_content``
11:54:50             which defaults to ``True``.
11:54:50
11:54:50         :param bool chunked:
11:54:50             If True, urllib3 will send the body using chunked transfer
11:54:50             encoding. Otherwise, urllib3 will send the body using the standard
11:54:50             content-length form. Defaults to False.
11:54:50
11:54:50         :param int body_pos:
11:54:50             Position to seek to in file-like body in the event of a retry or
11:54:50             redirect. Typically this won't need to be set because urllib3 will
11:54:50             auto-populate the value when needed.
11:54:50         """
11:54:50         parsed_url = parse_url(url)
11:54:50         destination_scheme = parsed_url.scheme
11:54:50
11:54:50         if headers is None:
11:54:50             headers = self.headers
11:54:50
11:54:50         if not isinstance(retries, Retry):
11:54:50             retries = Retry.from_int(retries, redirect=redirect, default=self.retries)
11:54:50
11:54:50         if release_conn is None:
11:54:50             release_conn = preload_content
11:54:50
11:54:50         # Check host
11:54:50         if assert_same_host and not self.is_same_host(url):
11:54:50             raise HostChangedError(self, url, retries)
11:54:50
11:54:50         # Ensure that the URL we're connecting to is properly encoded
11:54:50         if url.startswith("/"):
11:54:50             url = to_str(_encode_target(url))
11:54:50         else:
11:54:50             url = to_str(parsed_url.url)
11:54:50
11:54:50         conn = None
11:54:50
11:54:50         # Track whether `conn` needs to be released before
11:54:50         # returning/raising/recursing. Update this variable if necessary, and
11:54:50         # leave `release_conn` constant throughout the function. That way, if
11:54:50         # the function recurses, the original value of `release_conn` will be
11:54:50         # passed down into the recursive call, and its value will be respected.
11:54:50         #
11:54:50         # See issue #651 [1] for details.
11:54:50         #
11:54:50         # [1]
11:54:50         release_this_conn = release_conn
11:54:50
11:54:50         http_tunnel_required = connection_requires_http_tunnel(
11:54:50             self.proxy, self.proxy_config, destination_scheme
11:54:50         )
11:54:50
11:54:50         # Merge the proxy headers. Only done when not using HTTP CONNECT. We
11:54:50         # have to copy the headers dict so we can safely change it without those
11:54:50         # changes being reflected in anyone else's copy.
11:54:50         if not http_tunnel_required:
11:54:50             headers = headers.copy()  # type: ignore[attr-defined]
11:54:50             headers.update(self.proxy_headers)  # type: ignore[union-attr]
11:54:50
11:54:50         # Must keep the exception bound to a separate variable or else Python 3
11:54:50         # complains about UnboundLocalError.
11:54:50         err = None
11:54:50
11:54:50         # Keep track of whether we cleanly exited the except block. This
11:54:50         # ensures we do proper cleanup in finally.
11:54:50         clean_exit = False
11:54:50
11:54:50         # Rewind body position, if needed. Record current position
11:54:50         # for future rewinds in the event of a redirect/retry.
11:54:50         body_pos = set_file_position(body, body_pos)
11:54:50
11:54:50         try:
11:54:50             # Request a connection from the queue.
11:54:50             timeout_obj = self._get_timeout(timeout)
11:54:50             conn = self._get_conn(timeout=pool_timeout)
11:54:50
11:54:50             conn.timeout = timeout_obj.connect_timeout  # type: ignore[assignment]
11:54:50
11:54:50             # Is this a closed/new connection that requires CONNECT tunnelling?
11:54:50             if self.proxy is not None and http_tunnel_required and conn.is_closed:
11:54:50                 try:
11:54:50                     self._prepare_proxy(conn)
11:54:50                 except (BaseSSLError, OSError, SocketTimeout) as e:
11:54:50                     self._raise_timeout(
11:54:50                         err=e, url=self.proxy.url, timeout_value=conn.timeout
11:54:50                     )
11:54:50                     raise
11:54:50
11:54:50             # If we're going to release the connection in ``finally:``, then
11:54:50             # the response doesn't need to know about the connection. Otherwise
11:54:50             # it will also try to release it and we'll have a double-release
11:54:50             # mess.
11:54:50             response_conn = conn if not release_conn else None
11:54:50
11:54:50             # Make the request on the HTTPConnection object
11:54:50 >           response = self._make_request(
11:54:50                 conn,
11:54:50                 method,
11:54:50                 url,
11:54:50                 timeout=timeout_obj,
11:54:50                 body=body,
11:54:50                 headers=headers,
11:54:50                 chunked=chunked,
11:54:50                 retries=retries,
11:54:50                 response_conn=response_conn,
11:54:50                 preload_content=preload_content,
11:54:50                 decode_content=decode_content,
11:54:50                 **response_kw,
11:54:50             )
11:54:50
11:54:50 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connectionpool.py:787:
11:54:50 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
11:54:50 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connectionpool.py:493: in _make_request
11:54:50     conn.request(
11:54:50 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:500: in request
11:54:50     self.endheaders()
11:54:50 /opt/pyenv/versions/3.11.10/lib/python3.11/http/client.py:1298: in endheaders
11:54:50     self._send_output(message_body, encode_chunked=encode_chunked)
11:54:50 /opt/pyenv/versions/3.11.10/lib/python3.11/http/client.py:1058: in _send_output
11:54:50     self.send(msg)
11:54:50 /opt/pyenv/versions/3.11.10/lib/python3.11/http/client.py:996: in send
11:54:50     self.connect()
11:54:50 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:331: in connect
11:54:50     self.sock = self._new_conn()
11:54:50                 ^^^^^^^^^^^^^^^^
11:54:50 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
11:54:50
11:54:50 self =
11:54:50
11:54:50     def _new_conn(self) -> socket.socket:
11:54:50         """Establish a socket connection and set nodelay settings on it.
11:54:50
11:54:50         :return: New socket connection.
11:54:50         """
11:54:50         try:
11:54:50             sock = connection.create_connection(
11:54:50                 (self._dns_host, self.port),
11:54:50                 self.timeout,
11:54:50                 source_address=self.source_address,
11:54:50                 socket_options=self.socket_options,
11:54:50             )
11:54:50         except socket.gaierror as e:
11:54:50             raise NameResolutionError(self.host, self, e) from e
11:54:50         except SocketTimeout as e:
11:54:50             raise ConnectTimeoutError(
11:54:50                 self,
11:54:50                 f"Connection to {self.host} timed out. (connect timeout={self.timeout})",
11:54:50             ) from e
11:54:50
11:54:50         except OSError as e:
11:54:50 >           raise NewConnectionError(
11:54:50                 self, f"Failed to establish a new connection: {e}"
11:54:50             ) from e
11:54:50 E           urllib3.exceptions.NewConnectionError: HTTPConnection(host='localhost', port=8191): Failed to establish a new connection: [Errno 111] Connection refused
11:54:50
11:54:50 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:219: NewConnectionError
11:54:50
11:54:50 The above exception was the direct cause of the following exception:
11:54:50
11:54:50 self =
11:54:50 request = , stream = False
11:54:50 timeout = Timeout(connect=30, read=30, total=None), verify = True, cert = None
11:54:50 proxies = OrderedDict()
11:54:50
11:54:50     def send(
11:54:50         self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None
11:54:50     ):
11:54:50         """Sends PreparedRequest object. Returns Response object.
11:54:50
11:54:50         :param request: The :class:`PreparedRequest ` being sent.
11:54:50         :param stream: (optional) Whether to stream the request content.
11:54:50         :param timeout: (optional) How long to wait for the server to send
11:54:50             data before giving up, as a float, or a :ref:`(connect timeout,
11:54:50             read timeout) ` tuple.
11:54:50         :type timeout: float or tuple or urllib3 Timeout object
11:54:50         :param verify: (optional) Either a boolean, in which case it controls whether
11:54:50             we verify the server's TLS certificate, or a string, in which case it
11:54:50             must be a path to a CA bundle to use
11:54:50         :param cert: (optional) Any user-provided SSL certificate to be trusted.
11:54:50         :param proxies: (optional) The proxies dictionary to apply to the request.
11:54:50         :rtype: requests.Response
11:54:50         """
11:54:50
11:54:50         try:
11:54:50             conn = self.get_connection_with_tls_context(
11:54:50                 request, verify, proxies=proxies, cert=cert
11:54:50             )
11:54:50         except LocationValueError as e:
11:54:50             raise InvalidURL(e, request=request)
11:54:50
11:54:50         self.cert_verify(conn, request.url, verify, cert)
11:54:50         url = self.request_url(request, proxies)
11:54:50         self.add_headers(
11:54:50             request,
11:54:50             stream=stream,
11:54:50             timeout=timeout,
11:54:50             verify=verify,
11:54:50             cert=cert,
11:54:50             proxies=proxies,
11:54:50         )
11:54:50
11:54:50         chunked = not (request.body is None or "Content-Length" in request.headers)
11:54:50
11:54:50         if isinstance(timeout, tuple):
11:54:50             try:
11:54:50                 connect, read = timeout
11:54:50                 timeout = TimeoutSauce(connect=connect, read=read)
11:54:50             except ValueError:
11:54:50                 raise ValueError(
11:54:50                     f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, "
11:54:50                     f"or a single float to set both timeouts to the same value."
11:54:50                 )
11:54:50         elif isinstance(timeout, TimeoutSauce):
11:54:50             pass
11:54:50         else:
11:54:50             timeout = TimeoutSauce(connect=timeout, read=timeout)
11:54:50
11:54:50         try:
11:54:50 >           resp = conn.urlopen(
11:54:50                 method=request.method,
11:54:50                 url=url,
11:54:50                 body=request.body,
11:54:50                 headers=request.headers,
11:54:50                 redirect=False,
11:54:50                 assert_same_host=False,
11:54:50                 preload_content=False,
11:54:50                 decode_content=False,
11:54:50                 retries=self.max_retries,
11:54:50                 timeout=timeout,
11:54:50                 chunked=chunked,
11:54:50             )
11:54:50
11:54:50 ../.tox/tests121/lib/python3.11/site-packages/requests/adapters.py:644:
11:54:50 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
11:54:50 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connectionpool.py:841: in urlopen
11:54:50     retries = retries.increment(
11:54:50 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
11:54:50
11:54:50 self = Retry(total=0, connect=None, read=False, redirect=None, status=None)
11:54:50 method = 'GET'
11:54:50 url = '/rests/data/transportpce-portmapping:network/nodes=XPDRA01/mapping=XPDR1-NETWORK1'
11:54:50 response = None
11:54:50 error = NewConnectionError("HTTPConnection(host='localhost', port=8191): Failed to establish a new connection: [Errno 111] Connection refused")
11:54:50 _pool =
11:54:50 _stacktrace =
11:54:50
11:54:50     def increment(
11:54:50         self,
11:54:50         method: str | None = None,
11:54:50         url: str | None = None,
11:54:50         response: BaseHTTPResponse | None = None,
11:54:50         error: Exception | None = None,
11:54:50         _pool: ConnectionPool | None = None,
11:54:50         _stacktrace: TracebackType | None = None,
11:54:50     ) -> Self:
11:54:50         """Return a new Retry object with incremented retry counters.
11:54:50
11:54:50         :param response: A response object, or None, if the server did not
11:54:50             return a response.
11:54:50         :type response: :class:`~urllib3.response.BaseHTTPResponse`
11:54:50         :param Exception error: An error encountered during the request, or
11:54:50             None if the response was received successfully.
11:54:50
11:54:50         :return: A new ``Retry`` object.
11:54:50         """
11:54:50         if self.total is False and error:
11:54:50             # Disabled, indicate to re-raise the error.
11:54:50             raise reraise(type(error), error, _stacktrace)
11:54:50
11:54:50         total = self.total
11:54:50         if total is not None:
11:54:50             total -= 1
11:54:50
11:54:50         connect = self.connect
11:54:50         read = self.read
11:54:50         redirect = self.redirect
11:54:50         status_count = self.status
11:54:50         other = self.other
11:54:50         cause = "unknown"
11:54:50         status = None
11:54:50         redirect_location = None
11:54:50
11:54:50         if error and self._is_connection_error(error):
11:54:50             # Connect retry?
11:54:50             if connect is False:
11:54:50                 raise reraise(type(error), error, _stacktrace)
11:54:50             elif connect is not None:
11:54:50                 connect -= 1
11:54:50
11:54:50         elif error and self._is_read_error(error):
11:54:50             # Read retry?
11:54:50             if read is False or method is None or not self._is_method_retryable(method):
11:54:50                 raise reraise(type(error), error, _stacktrace)
11:54:50             elif read is not None:
11:54:50                 read -= 1
11:54:50
11:54:50         elif error:
11:54:50             # Other retry?
11:54:50             if other is not None:
11:54:50                 other -= 1
11:54:50
11:54:50         elif response and response.get_redirect_location():
11:54:50             # Redirect retry?
11:54:50             if redirect is not None:
11:54:50                 redirect -= 1
11:54:50             cause = "too many redirects"
11:54:50             response_redirect_location = response.get_redirect_location()
11:54:50             if response_redirect_location:
11:54:50                 redirect_location = response_redirect_location
11:54:50             status = response.status
11:54:50
11:54:50         else:
11:54:50             # Incrementing because of a server error like a 500 in
11:54:50             # status_forcelist and the given method is in the allowed_methods
11:54:50             cause = ResponseError.GENERIC_ERROR
11:54:50             if response and response.status:
11:54:50                 if status_count is not None:
11:54:50                     status_count -= 1
11:54:50                 cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status)
11:54:50                 status = response.status
11:54:50
11:54:50         history = self.history + (
11:54:50             RequestHistory(method, url, error, status, redirect_location),
11:54:50         )
11:54:50
11:54:50         new_retry = self.new(
11:54:50             total=total,
11:54:50             connect=connect,
11:54:50             read=read,
11:54:50             redirect=redirect,
11:54:50             status=status_count,
11:54:50             other=other,
11:54:50             history=history,
11:54:50         )
11:54:50
11:54:50         if new_retry.is_exhausted():
11:54:50             reason = error or ResponseError(cause)
11:54:50 >           raise MaxRetryError(_pool, url, reason) from reason  # type: ignore[arg-type]
11:54:50             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
11:54:50 E           urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=8191): Max retries exceeded with url: /rests/data/transportpce-portmapping:network/nodes=XPDRA01/mapping=XPDR1-NETWORK1 (Caused by NewConnectionError("HTTPConnection(host='localhost', port=8191): Failed to establish a new connection: [Errno 111] Connection refused"))
11:54:50
11:54:50 ../.tox/tests121/lib/python3.11/site-packages/urllib3/util/retry.py:535: MaxRetryError
11:54:50
11:54:50 During handling of the above exception, another exception occurred:
11:54:50
11:54:50 self =
11:54:50
11:54:50     def test_10_xpdr_portmapping_NETWORK1(self):
11:54:50 >       response = test_utils.get_portmapping_node_attr("XPDRA01", "mapping", "XPDR1-NETWORK1")
11:54:50                    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
11:54:50
11:54:50 transportpce_tests/1.2.1/test01_portmapping.py:123:
11:54:50 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
11:54:50 transportpce_tests/common/test_utils.py:519: in get_portmapping_node_attr
11:54:50     response = get_request(target_url)
11:54:50                ^^^^^^^^^^^^^^^^^^^^^^^
11:54:50 transportpce_tests/common/test_utils.py:117: in get_request
11:54:50     return requests.request(
11:54:50 ../.tox/tests121/lib/python3.11/site-packages/requests/api.py:59: in request
11:54:50     return session.request(method=method, url=url, **kwargs)
11:54:50            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
11:54:50 ../.tox/tests121/lib/python3.11/site-packages/requests/sessions.py:589: in request
11:54:50     resp = self.send(prep, **send_kwargs)
11:54:50            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
11:54:50 ../.tox/tests121/lib/python3.11/site-packages/requests/sessions.py:703: in send
11:54:50     r = adapter.send(request, **kwargs)
11:54:50         ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
11:54:50 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
11:54:50
11:54:50 self =
11:54:50 request = , stream = False
11:54:50 timeout = Timeout(connect=30, read=30, total=None), verify = True, cert = None
11:54:50 proxies = OrderedDict()
11:54:50
11:54:50     def send(
11:54:50         self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None
11:54:50     ):
11:54:50         """Sends PreparedRequest object. Returns Response object.
11:54:50
11:54:50         :param request: The :class:`PreparedRequest ` being sent.
11:54:50         :param stream: (optional) Whether to stream the request content.
11:54:50         :param timeout: (optional) How long to wait for the server to send
11:54:50             data before giving up, as a float, or a :ref:`(connect timeout,
11:54:50             read timeout) ` tuple.
11:54:50         :type timeout: float or tuple or urllib3 Timeout object
11:54:50         :param verify: (optional) Either a boolean, in which case it controls whether
11:54:50             we verify the server's TLS certificate, or a string, in which case it
11:54:50             must be a path to a CA bundle to use
11:54:50         :param cert: (optional) Any user-provided SSL certificate to be trusted.
11:54:50         :param proxies: (optional) The proxies dictionary to apply to the request.
11:54:50         :rtype: requests.Response
11:54:50         """
11:54:50
11:54:50         try:
11:54:50             conn = self.get_connection_with_tls_context(
11:54:50                 request, verify, proxies=proxies, cert=cert
11:54:50             )
11:54:50         except LocationValueError as e:
11:54:50             raise InvalidURL(e, request=request)
11:54:50
11:54:50         self.cert_verify(conn, request.url, verify, cert)
11:54:50         url = self.request_url(request, proxies)
11:54:50         self.add_headers(
11:54:50             request,
11:54:50             stream=stream,
11:54:50             timeout=timeout,
11:54:50             verify=verify,
11:54:50             cert=cert,
11:54:50             proxies=proxies,
11:54:50         )
11:54:50
11:54:50         chunked = not (request.body is None or "Content-Length" in request.headers)
11:54:50
11:54:50         if isinstance(timeout, tuple):
11:54:50             try:
11:54:50                 connect, read = timeout
11:54:50                 timeout = TimeoutSauce(connect=connect, read=read)
11:54:50             except ValueError:
11:54:50                 raise ValueError(
11:54:50                     f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, "
11:54:50                     f"or a single float to set both timeouts to the same value."
11:54:50                 )
11:54:50         elif isinstance(timeout, TimeoutSauce):
11:54:50             pass
11:54:50         else:
11:54:50             timeout = TimeoutSauce(connect=timeout, read=timeout)
11:54:50
11:54:50         try:
11:54:50             resp = conn.urlopen(
11:54:50                 method=request.method,
11:54:50                 url=url,
11:54:50                 body=request.body,
11:54:50                 headers=request.headers,
11:54:50                 redirect=False,
11:54:50                 assert_same_host=False,
11:54:50                 preload_content=False,
11:54:50                 decode_content=False,
11:54:50                 retries=self.max_retries,
11:54:50                 timeout=timeout,
11:54:50                 chunked=chunked,
11:54:50             )
11:54:50
11:54:50         except (ProtocolError, OSError) as err:
11:54:50             raise ConnectionError(err, request=request)
11:54:50
11:54:50         except MaxRetryError as e:
11:54:50             if isinstance(e.reason, ConnectTimeoutError):
11:54:50                 # TODO: Remove this in 3.0.0: see #2811
11:54:50                 if not isinstance(e.reason, NewConnectionError):
11:54:50                     raise ConnectTimeout(e, request=request)
11:54:50
11:54:50             if isinstance(e.reason, ResponseError):
11:54:50                 raise RetryError(e, request=request)
11:54:50
11:54:50             if isinstance(e.reason, _ProxyError):
11:54:50                 raise ProxyError(e, request=request)
11:54:50
11:54:50             if isinstance(e.reason, _SSLError):
11:54:50                 # This branch is for urllib3 v1.22 and later.
11:54:50                 raise SSLError(e, request=request)
11:54:50
11:54:50 >           raise ConnectionError(e, request=request)
11:54:50 E           requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=8191): Max retries exceeded with url: /rests/data/transportpce-portmapping:network/nodes=XPDRA01/mapping=XPDR1-NETWORK1 (Caused by NewConnectionError("HTTPConnection(host='localhost', port=8191): Failed to establish a new connection: [Errno 111] Connection refused"))
11:54:50
11:54:50 ../.tox/tests121/lib/python3.11/site-packages/requests/adapters.py:677: ConnectionError
11:54:50 ----------------------------- Captured stdout call -----------------------------
11:54:50 execution of test_10_xpdr_portmapping_NETWORK1
11:54:50 ________ TestTransportPCEPortmapping.test_11_xpdr_portmapping_NETWORK2 _________
11:54:50
11:54:50 self =
11:54:50
11:54:50     def _new_conn(self) -> socket.socket:
11:54:50         """Establish a socket connection and set nodelay settings on it.
11:54:50
11:54:50         :return: New socket connection.
11:54:50         """
11:54:50         try:
11:54:50 >           sock = connection.create_connection(
11:54:50                 (self._dns_host, self.port),
11:54:50                 self.timeout,
11:54:50                 source_address=self.source_address,
11:54:50                 socket_options=self.socket_options,
11:54:50             )
11:54:50
11:54:50 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:204:
11:54:50 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
11:54:50 ../.tox/tests121/lib/python3.11/site-packages/urllib3/util/connection.py:85: in create_connection
11:54:50     raise err
11:54:50 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
11:54:50
11:54:50 address = ('localhost', 8191), timeout = 30, source_address = None
11:54:50 socket_options = [(6, 1, 1)]
11:54:50
11:54:50     def create_connection(
11:54:50         address: tuple[str, int],
11:54:50         timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT,
11:54:50         source_address: tuple[str, int] | None = None,
11:54:50         socket_options: _TYPE_SOCKET_OPTIONS | None = None,
11:54:50     ) -> socket.socket:
11:54:50         """Connect to *address* and return the socket object.
11:54:50
11:54:50         Convenience function.  Connect to *address* (a 2-tuple ``(host,
11:54:50         port)``) and return the socket object.  Passing the optional
11:54:50         *timeout* parameter will set the timeout on the socket instance
11:54:50         before attempting to connect.  If no *timeout* is supplied, the
11:54:50         global default timeout setting returned by :func:`socket.getdefaulttimeout`
11:54:50         is used.  If *source_address* is set it must be a tuple of (host, port)
11:54:50         for the socket to bind as a source address before making the connection.
11:54:50         An host of '' or port 0 tells the OS to use the default.
11:54:50         """
11:54:50
11:54:50         host, port = address
11:54:50         if host.startswith("["):
11:54:50             host = host.strip("[]")
11:54:50         err = None
11:54:50
11:54:50         # Using the value from allowed_gai_family() in the context of getaddrinfo lets
11:54:50         # us select whether to work with IPv4 DNS records, IPv6 records, or both.
11:54:50         # The original create_connection function always returns all records.
11:54:50         family = allowed_gai_family()
11:54:50
11:54:50         try:
11:54:50             host.encode("idna")
11:54:50         except UnicodeError:
11:54:50             raise LocationParseError(f"'{host}', label empty or too long") from None
11:54:50
11:54:50         for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM):
11:54:50             af, socktype, proto, canonname, sa = res
11:54:50             sock = None
11:54:50             try:
11:54:50                 sock = socket.socket(af, socktype, proto)
11:54:50
11:54:50                 # If provided, set socket level options before connecting.
11:54:50                 _set_socket_options(sock, socket_options)
11:54:50
11:54:50                 if timeout is not _DEFAULT_TIMEOUT:
11:54:50                     sock.settimeout(timeout)
11:54:50                 if source_address:
11:54:50                     sock.bind(source_address)
11:54:50 >               sock.connect(sa)
11:54:50 E               ConnectionRefusedError: [Errno 111] Connection refused
11:54:50
11:54:50 ../.tox/tests121/lib/python3.11/site-packages/urllib3/util/connection.py:73: ConnectionRefusedError
11:54:50
11:54:50 The above exception was the direct cause of the following exception:
11:54:50
11:54:50 self =
11:54:50 method = 'GET'
11:54:50 url = '/rests/data/transportpce-portmapping:network/nodes=XPDRA01/mapping=XPDR1-NETWORK2'
11:54:50 body = None
11:54:50 headers = {'User-Agent': 'python-requests/2.32.5', 'Accept-Encoding': 'gzip, deflate', 'Accept': 'application/json', 'Connection': 'keep-alive', 'Content-Type': 'application/json', 'Authorization': 'Basic YWRtaW46YWRtaW4='}
11:54:50 retries = Retry(total=0, connect=None, read=False, redirect=None, status=None)
11:54:50 redirect = False, assert_same_host = False
11:54:50 timeout = Timeout(connect=30, read=30, total=None), pool_timeout = None
11:54:50 release_conn = False, chunked = False, body_pos = None, preload_content = False
11:54:50 decode_content = False, response_kw = {}
11:54:50 parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/rests/data/transportpce-portmapping:network/nodes=XPDRA01/mapping=XPDR1-NETWORK2', query=None, fragment=None)
11:54:50 destination_scheme = None, conn = None, release_this_conn = True
11:54:50 http_tunnel_required = False, err = None, clean_exit = False
11:54:50
11:54:50     def urlopen(  # type: ignore[override]
11:54:50         self,
11:54:50         method: str,
11:54:50         url: str,
11:54:50         body: _TYPE_BODY | None = None,
11:54:50         headers: typing.Mapping[str, str] | None = None,
11:54:50         retries: Retry | bool | int | None = None,
11:54:50         redirect: bool = True,
11:54:50         assert_same_host: bool = True,
11:54:50         timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT,
11:54:50         pool_timeout: int | None = None,
11:54:50         release_conn: bool | None = None,
11:54:50         chunked: bool = False,
11:54:50         body_pos: _TYPE_BODY_POSITION | None = None,
11:54:50         preload_content: bool = True,
11:54:50         decode_content: bool = True,
11:54:50         **response_kw: typing.Any,
11:54:50     ) -> BaseHTTPResponse:
11:54:50         """
11:54:50         Get a connection from the pool and perform an HTTP request. This is the
11:54:50         lowest level call for making a request, so you'll need to specify all
11:54:50         the raw details.
11:54:50
11:54:50         .. note::
11:54:50
11:54:50            More commonly, it's appropriate to use a convenience method
11:54:50            such as :meth:`request`.
11:54:50
11:54:50         .. note::
11:54:50
11:54:50            `release_conn` will only behave as expected if
11:54:50            `preload_content=False` because we want to make
11:54:50            `preload_content=False` the default behaviour someday soon without
11:54:50            breaking backwards compatibility.
11:54:50
11:54:50         :param method:
11:54:50             HTTP request method (such as GET, POST, PUT, etc.)
11:54:50
11:54:50         :param url:
11:54:50             The URL to perform the request on.
11:54:50
11:54:50         :param body:
11:54:50             Data to send in the request body, either :class:`str`, :class:`bytes`,
11:54:50             an iterable of :class:`str`/:class:`bytes`, or a file-like object.
11:54:50
11:54:50         :param headers:
11:54:50             Dictionary of custom headers to send, such as User-Agent,
11:54:50             If-None-Match, etc. If None, pool headers are used. If provided,
11:54:50             these headers completely replace any pool-specific headers.
11:54:50
11:54:50         :param retries:
11:54:50             Configure the number of retries to allow before raising a
11:54:50             :class:`~urllib3.exceptions.MaxRetryError` exception.
11:54:50
11:54:50             If ``None`` (default) will retry 3 times, see ``Retry.DEFAULT``. Pass a
11:54:50             :class:`~urllib3.util.retry.Retry` object for fine-grained control
11:54:50             over different types of retries.
11:54:50             Pass an integer number to retry connection errors that many times,
11:54:50             but no other types of errors. Pass zero to never retry.
11:54:50
11:54:50             If ``False``, then retries are disabled and any exception is raised
11:54:50             immediately. Also, instead of raising a MaxRetryError on redirects,
11:54:50             the redirect response will be returned.
11:54:50
11:54:50         :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int.
11:54:50
11:54:50         :param redirect:
11:54:50             If True, automatically handle redirects (status codes 301, 302,
11:54:50             303, 307, 308). Each redirect counts as a retry. Disabling retries
11:54:50             will disable redirect, too.
11:54:50
11:54:50         :param assert_same_host:
11:54:50             If ``True``, will make sure that the host of the pool requests is
11:54:50             consistent else will raise HostChangedError. When ``False``, you can
11:54:50             use the pool on an HTTP proxy and request foreign hosts.
11:54:50
11:54:50         :param timeout:
11:54:50             If specified, overrides the default timeout for this one
11:54:50             request. It may be a float (in seconds) or an instance of
11:54:50             :class:`urllib3.util.Timeout`.
11:54:50
11:54:50         :param pool_timeout:
11:54:50             If set and the pool is set to block=True, then this method will
11:54:50             block for ``pool_timeout`` seconds and raise EmptyPoolError if no
11:54:50             connection is available within the time period.
11:54:50
11:54:50         :param bool preload_content:
11:54:50             If True, the response's body will be preloaded into memory.
11:54:50
11:54:50         :param bool decode_content:
11:54:50             If True, will attempt to decode the body based on the
11:54:50             'content-encoding' header.
11:54:50
11:54:50         :param release_conn:
11:54:50             If False, then the urlopen call will not release the connection
11:54:50             back into the pool once a response is received (but will release if
11:54:50             you read the entire contents of the response such as when
11:54:50             `preload_content=True`). This is useful if you're not preloading
11:54:50             the response's content immediately. You will need to call
11:54:50             ``r.release_conn()`` on the response ``r`` to return the connection
11:54:50             back into the pool. If None, it takes the value of ``preload_content``
11:54:50             which defaults to ``True``.
11:54:50
11:54:50         :param bool chunked:
11:54:50             If True, urllib3 will send the body using chunked transfer
11:54:50             encoding. Otherwise, urllib3 will send the body using the standard
11:54:50             content-length form. Defaults to False.
11:54:50
11:54:50         :param int body_pos:
11:54:50             Position to seek to in file-like body in the event of a retry or
11:54:50             redirect. Typically this won't need to be set because urllib3 will
11:54:50             auto-populate the value when needed.
11:54:50         """
11:54:50         parsed_url = parse_url(url)
11:54:50         destination_scheme = parsed_url.scheme
11:54:50
11:54:50         if headers is None:
11:54:50             headers = self.headers
11:54:50
11:54:50         if not isinstance(retries, Retry):
11:54:50             retries = Retry.from_int(retries, redirect=redirect, default=self.retries)
11:54:50
11:54:50         if release_conn is None:
11:54:50             release_conn = preload_content
11:54:50
11:54:50         # Check host
11:54:50         if assert_same_host and not self.is_same_host(url):
11:54:50             raise HostChangedError(self, url, retries)
11:54:50
11:54:50         # Ensure that the URL we're connecting to is properly encoded
11:54:50         if url.startswith("/"):
11:54:50             url = to_str(_encode_target(url))
11:54:50         else:
11:54:50             url = to_str(parsed_url.url)
11:54:50
11:54:50         conn = None
11:54:50
11:54:50         # Track whether `conn` needs to be released before
11:54:50         # returning/raising/recursing. Update this variable if necessary, and
11:54:50         # leave `release_conn` constant throughout the function. That way, if
11:54:50         # the function recurses, the original value of `release_conn` will be
11:54:50         # passed down into the recursive call, and its value will be respected.
11:54:50         #
11:54:50         # See issue #651 [1] for details.
11:54:50         #
11:54:50         # [1]
11:54:50         release_this_conn = release_conn
11:54:50
11:54:50         http_tunnel_required = connection_requires_http_tunnel(
11:54:50             self.proxy, self.proxy_config, destination_scheme
11:54:50         )
11:54:50
11:54:50         # Merge the proxy headers. Only done when not using HTTP CONNECT. We
11:54:50         # have to copy the headers dict so we can safely change it without those
11:54:50         # changes being reflected in anyone else's copy.
11:54:50         if not http_tunnel_required:
11:54:50             headers = headers.copy()  # type: ignore[attr-defined]
11:54:50             headers.update(self.proxy_headers)  # type: ignore[union-attr]
11:54:50
11:54:50         # Must keep the exception bound to a separate variable or else Python 3
11:54:50         # complains about UnboundLocalError.
11:54:50         err = None
11:54:50
11:54:50         # Keep track of whether we cleanly exited the except block. This
11:54:50         # ensures we do proper cleanup in finally.
11:54:50         clean_exit = False
11:54:50
11:54:50         # Rewind body position, if needed. Record current position
11:54:50         # for future rewinds in the event of a redirect/retry.
11:54:50         body_pos = set_file_position(body, body_pos)
11:54:50
11:54:50         try:
11:54:50             # Request a connection from the queue.
11:54:50             timeout_obj = self._get_timeout(timeout)
11:54:50             conn = self._get_conn(timeout=pool_timeout)
11:54:50
11:54:50             conn.timeout = timeout_obj.connect_timeout  # type: ignore[assignment]
11:54:50
11:54:50             # Is this a closed/new connection that requires CONNECT tunnelling?
11:54:50             if self.proxy is not None and http_tunnel_required and conn.is_closed:
11:54:50                 try:
11:54:50                     self._prepare_proxy(conn)
11:54:50                 except (BaseSSLError, OSError, SocketTimeout) as e:
11:54:50                     self._raise_timeout(
11:54:50                         err=e, url=self.proxy.url, timeout_value=conn.timeout
11:54:50                     )
11:54:50                     raise
11:54:50
11:54:50             # If we're going to release the connection in ``finally:``, then
11:54:50             # the response doesn't need to know about the connection. Otherwise
11:54:50             # it will also try to release it and we'll have a double-release
11:54:50             # mess.
11:54:50             response_conn = conn if not release_conn else None
11:54:50
11:54:50             # Make the request on the HTTPConnection object
11:54:50 >           response = self._make_request(
11:54:50                 conn,
11:54:50                 method,
11:54:50                 url,
11:54:50                 timeout=timeout_obj,
11:54:50                 body=body,
11:54:50                 headers=headers,
11:54:50                 chunked=chunked,
11:54:50                 retries=retries,
11:54:50                 response_conn=response_conn,
11:54:50                 preload_content=preload_content,
11:54:50                 decode_content=decode_content,
11:54:50                 **response_kw,
11:54:50             )
11:54:50
11:54:50 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connectionpool.py:787:
11:54:50 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
11:54:50 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connectionpool.py:493: in _make_request
11:54:50     conn.request(
11:54:50 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:500: in request
11:54:50     self.endheaders()
11:54:50 /opt/pyenv/versions/3.11.10/lib/python3.11/http/client.py:1298: in endheaders
11:54:50     self._send_output(message_body, encode_chunked=encode_chunked)
11:54:50 /opt/pyenv/versions/3.11.10/lib/python3.11/http/client.py:1058: in _send_output
11:54:50     self.send(msg)
11:54:50 /opt/pyenv/versions/3.11.10/lib/python3.11/http/client.py:996: in send
11:54:50     self.connect()
11:54:50 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:331: in connect
11:54:50     self.sock = self._new_conn()
11:54:50                 ^^^^^^^^^^^^^^^^
11:54:50 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
11:54:50
11:54:50 self =
11:54:50
11:54:50     def _new_conn(self) -> socket.socket:
11:54:50         """Establish a socket connection and set nodelay settings on it.
11:54:50
11:54:50         :return: New socket connection.
11:54:50         """
11:54:50         try:
11:54:50             sock = connection.create_connection(
11:54:50                 (self._dns_host, self.port),
11:54:50                 self.timeout,
11:54:50                 source_address=self.source_address,
11:54:50                 socket_options=self.socket_options,
11:54:50             )
11:54:50         except socket.gaierror as e:
11:54:50             raise NameResolutionError(self.host, self, e) from e
11:54:50         except SocketTimeout as e:
11:54:50             raise ConnectTimeoutError(
11:54:50                 self,
11:54:50                 f"Connection to {self.host} timed out. (connect timeout={self.timeout})",
11:54:50             ) from e
11:54:50
11:54:50         except OSError as e:
11:54:50 >           raise NewConnectionError(
11:54:50                 self, f"Failed to establish a new connection: {e}"
11:54:50             ) from e
11:54:50 E           urllib3.exceptions.NewConnectionError: HTTPConnection(host='localhost', port=8191): Failed to establish a new connection: [Errno 111] Connection refused
11:54:50
11:54:50 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:219: NewConnectionError
11:54:50
11:54:50 The above exception was the direct cause of the following exception:
11:54:50
11:54:50 self =
11:54:50 request = , stream = False
11:54:50 timeout = Timeout(connect=30, read=30, total=None), verify = True, cert = None
11:54:50 proxies = OrderedDict()
11:54:50
11:54:50     def send(
11:54:50         self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None
11:54:50     ):
11:54:50         """Sends PreparedRequest object. Returns Response object.
11:54:50
11:54:50         :param request: The :class:`PreparedRequest ` being sent.
11:54:50         :param stream: (optional) Whether to stream the request content.
11:54:50         :param timeout: (optional) How long to wait for the server to send
11:54:50             data before giving up, as a float, or a :ref:`(connect timeout,
11:54:50             read timeout) ` tuple.
11:54:50         :type timeout: float or tuple or urllib3 Timeout object
11:54:50         :param verify: (optional) Either a boolean, in which case it controls whether
11:54:50             we verify the server's TLS certificate, or a string, in which case it
11:54:50             must be a path to a CA bundle to use
11:54:50         :param cert: (optional) Any user-provided SSL certificate to be trusted.
11:54:50         :param proxies: (optional) The proxies dictionary to apply to the request.
11:54:50         :rtype: requests.Response
11:54:50         """
11:54:50
11:54:50         try:
11:54:50             conn = self.get_connection_with_tls_context(
11:54:50                 request, verify, proxies=proxies, cert=cert
11:54:50             )
11:54:50         except LocationValueError as e:
11:54:50             raise InvalidURL(e, request=request)
11:54:50
11:54:50         self.cert_verify(conn, request.url, verify, cert)
11:54:50         url = self.request_url(request, proxies)
11:54:50         self.add_headers(
11:54:50             request,
11:54:50             stream=stream,
11:54:50             timeout=timeout,
11:54:50             verify=verify,
11:54:50             cert=cert,
11:54:50             proxies=proxies,
11:54:50         )
11:54:50
11:54:50         chunked = not (request.body is None or "Content-Length" in request.headers)
11:54:50
11:54:50         if isinstance(timeout, tuple):
11:54:50             try:
11:54:50                 connect, read = timeout
11:54:50                 timeout = TimeoutSauce(connect=connect, read=read)
11:54:50             except ValueError:
11:54:50                 raise ValueError(
11:54:50                     f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, "
11:54:50                     f"or a single float to set both timeouts to the same value."
11:54:50                 )
11:54:50         elif isinstance(timeout, TimeoutSauce):
11:54:50             pass
11:54:50         else:
11:54:50             timeout = TimeoutSauce(connect=timeout, read=timeout)
11:54:50
11:54:50         try:
11:54:50 >           resp = conn.urlopen(
11:54:50                 method=request.method,
11:54:50                 url=url,
11:54:50                 body=request.body,
11:54:50                 headers=request.headers,
11:54:50                 redirect=False,
11:54:50                 assert_same_host=False,
11:54:50                 preload_content=False,
11:54:50                 decode_content=False,
11:54:50                 retries=self.max_retries,
11:54:50                 timeout=timeout,
11:54:50                 chunked=chunked,
11:54:50             )
11:54:50
11:54:50 ../.tox/tests121/lib/python3.11/site-packages/requests/adapters.py:644:
11:54:50 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
11:54:50 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connectionpool.py:841: in urlopen
11:54:50     retries = retries.increment(
11:54:50 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
11:54:50
11:54:50 self = Retry(total=0, connect=None, read=False, redirect=None, status=None)
11:54:50 method = 'GET'
11:54:50 url = '/rests/data/transportpce-portmapping:network/nodes=XPDRA01/mapping=XPDR1-NETWORK2'
11:54:50 response = None
11:54:50 error = NewConnectionError("HTTPConnection(host='localhost', port=8191): Failed to establish a new connection: [Errno 111] Connection refused")
11:54:50 _pool =
11:54:50 _stacktrace =
11:54:50
11:54:50     def increment(
11:54:50         self,
11:54:50         method: str | None = None,
11:54:50         url: str | None = None,
11:54:50         response: BaseHTTPResponse | None = None,
11:54:50         error: Exception | None = None,
11:54:50         _pool: ConnectionPool | None = None,
11:54:50         _stacktrace: TracebackType | None = None,
11:54:50     ) -> Self:
11:54:50         """Return a new Retry object with incremented retry counters.
11:54:50
11:54:50         :param response: A response object, or None, if the server did not
11:54:50             return a response.
11:54:50         :type response: :class:`~urllib3.response.BaseHTTPResponse`
11:54:50         :param Exception error: An error encountered during the request, or
11:54:50             None if the response was received successfully.
11:54:50
11:54:50         :return: A new ``Retry`` object.
11:54:50         """
11:54:50         if self.total is False and error:
11:54:50             # Disabled, indicate to re-raise the error.
11:54:50             raise reraise(type(error), error, _stacktrace)
11:54:50
11:54:50         total = self.total
11:54:50         if total is not None:
11:54:50             total -= 1
11:54:50
11:54:50         connect = self.connect
11:54:50         read = self.read
11:54:50         redirect = self.redirect
11:54:50         status_count = self.status
11:54:50         other = self.other
11:54:50         cause = "unknown"
11:54:50         status = None
11:54:50         redirect_location = None
11:54:50
11:54:50         if error and self._is_connection_error(error):
11:54:50             # Connect retry?
11:54:50             if connect is False:
11:54:50                 raise reraise(type(error), error, _stacktrace)
11:54:50             elif connect is not None:
11:54:50                 connect -= 1
11:54:50
11:54:50         elif error and self._is_read_error(error):
11:54:50             # Read retry?
11:54:50             if read is False or method is None or not self._is_method_retryable(method):
11:54:50                 raise reraise(type(error), error, _stacktrace)
11:54:50             elif read is not None:
11:54:50                 read -= 1
11:54:50
11:54:50         elif error:
11:54:50             # Other retry?
11:54:50             if other is not None:
11:54:50                 other -= 1
11:54:50
11:54:50         elif response and response.get_redirect_location():
11:54:50             # Redirect retry?
11:54:50 if redirect is not None: 11:54:50 redirect -= 1 11:54:50 cause = "too many redirects" 11:54:50 response_redirect_location = response.get_redirect_location() 11:54:50 if response_redirect_location: 11:54:50 redirect_location = response_redirect_location 11:54:50 status = response.status 11:54:50 11:54:50 else: 11:54:50 # Incrementing because of a server error like a 500 in 11:54:50 # status_forcelist and the given method is in the allowed_methods 11:54:50 cause = ResponseError.GENERIC_ERROR 11:54:50 if response and response.status: 11:54:50 if status_count is not None: 11:54:50 status_count -= 1 11:54:50 cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status) 11:54:50 status = response.status 11:54:50 11:54:50 history = self.history + ( 11:54:50 RequestHistory(method, url, error, status, redirect_location), 11:54:50 ) 11:54:50 11:54:50 new_retry = self.new( 11:54:50 total=total, 11:54:50 connect=connect, 11:54:50 read=read, 11:54:50 redirect=redirect, 11:54:50 status=status_count, 11:54:50 other=other, 11:54:50 history=history, 11:54:50 ) 11:54:50 11:54:50 if new_retry.is_exhausted(): 11:54:50 reason = error or ResponseError(cause) 11:54:50 > raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type] 11:54:50 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 11:54:50 E urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=8191): Max retries exceeded with url: /rests/data/transportpce-portmapping:network/nodes=XPDRA01/mapping=XPDR1-NETWORK2 (Caused by NewConnectionError("HTTPConnection(host='localhost', port=8191): Failed to establish a new connection: [Errno 111] Connection refused")) 11:54:50 11:54:50 ../.tox/tests121/lib/python3.11/site-packages/urllib3/util/retry.py:535: MaxRetryError 11:54:50 11:54:50 During handling of the above exception, another exception occurred: 11:54:50 11:54:50 self = 11:54:50 11:54:50 def test_11_xpdr_portmapping_NETWORK2(self): 11:54:50 > response = 
test_utils.get_portmapping_node_attr("XPDRA01", "mapping", "XPDR1-NETWORK2") 11:54:50 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 11:54:50 11:54:50 transportpce_tests/1.2.1/test01_portmapping.py:135: 11:54:50 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 11:54:50 transportpce_tests/common/test_utils.py:519: in get_portmapping_node_attr 11:54:50 response = get_request(target_url) 11:54:50 ^^^^^^^^^^^^^^^^^^^^^^^ 11:54:50 transportpce_tests/common/test_utils.py:117: in get_request 11:54:50 return requests.request( 11:54:50 ../.tox/tests121/lib/python3.11/site-packages/requests/api.py:59: in request 11:54:50 return session.request(method=method, url=url, **kwargs) 11:54:50 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 11:54:50 ../.tox/tests121/lib/python3.11/site-packages/requests/sessions.py:589: in request 11:54:50 resp = self.send(prep, **send_kwargs) 11:54:50 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 11:54:50 ../.tox/tests121/lib/python3.11/site-packages/requests/sessions.py:703: in send 11:54:50 r = adapter.send(request, **kwargs) 11:54:50 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 11:54:50 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 11:54:50 11:54:50 self = 11:54:50 request = , stream = False 11:54:50 timeout = Timeout(connect=30, read=30, total=None), verify = True, cert = None 11:54:50 proxies = OrderedDict() 11:54:50 11:54:50 def send( 11:54:50 self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 11:54:50 ): 11:54:50 """Sends PreparedRequest object. Returns Response object. 11:54:50 11:54:50 :param request: The :class:`PreparedRequest ` being sent. 11:54:50 :param stream: (optional) Whether to stream the request content. 11:54:50 :param timeout: (optional) How long to wait for the server to send 11:54:50 data before giving up, as a float, or a :ref:`(connect timeout, 11:54:50 read timeout) ` tuple. 
11:54:50 :type timeout: float or tuple or urllib3 Timeout object 11:54:50 :param verify: (optional) Either a boolean, in which case it controls whether 11:54:50 we verify the server's TLS certificate, or a string, in which case it 11:54:50 must be a path to a CA bundle to use 11:54:50 :param cert: (optional) Any user-provided SSL certificate to be trusted. 11:54:50 :param proxies: (optional) The proxies dictionary to apply to the request. 11:54:50 :rtype: requests.Response 11:54:50 """ 11:54:50 11:54:50 try: 11:54:50 conn = self.get_connection_with_tls_context( 11:54:50 request, verify, proxies=proxies, cert=cert 11:54:50 ) 11:54:50 except LocationValueError as e: 11:54:50 raise InvalidURL(e, request=request) 11:54:50 11:54:50 self.cert_verify(conn, request.url, verify, cert) 11:54:50 url = self.request_url(request, proxies) 11:54:50 self.add_headers( 11:54:50 request, 11:54:50 stream=stream, 11:54:50 timeout=timeout, 11:54:50 verify=verify, 11:54:50 cert=cert, 11:54:50 proxies=proxies, 11:54:50 ) 11:54:50 11:54:50 chunked = not (request.body is None or "Content-Length" in request.headers) 11:54:50 11:54:50 if isinstance(timeout, tuple): 11:54:50 try: 11:54:50 connect, read = timeout 11:54:50 timeout = TimeoutSauce(connect=connect, read=read) 11:54:50 except ValueError: 11:54:50 raise ValueError( 11:54:50 f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 11:54:50 f"or a single float to set both timeouts to the same value." 
11:54:50 ) 11:54:50 elif isinstance(timeout, TimeoutSauce): 11:54:50 pass 11:54:50 else: 11:54:50 timeout = TimeoutSauce(connect=timeout, read=timeout) 11:54:50 11:54:50 try: 11:54:50 resp = conn.urlopen( 11:54:50 method=request.method, 11:54:50 url=url, 11:54:50 body=request.body, 11:54:50 headers=request.headers, 11:54:50 redirect=False, 11:54:50 assert_same_host=False, 11:54:50 preload_content=False, 11:54:50 decode_content=False, 11:54:50 retries=self.max_retries, 11:54:50 timeout=timeout, 11:54:50 chunked=chunked, 11:54:50 ) 11:54:50 11:54:50 except (ProtocolError, OSError) as err: 11:54:50 raise ConnectionError(err, request=request) 11:54:50 11:54:50 except MaxRetryError as e: 11:54:50 if isinstance(e.reason, ConnectTimeoutError): 11:54:50 # TODO: Remove this in 3.0.0: see #2811 11:54:50 if not isinstance(e.reason, NewConnectionError): 11:54:50 raise ConnectTimeout(e, request=request) 11:54:50 11:54:50 if isinstance(e.reason, ResponseError): 11:54:50 raise RetryError(e, request=request) 11:54:50 11:54:50 if isinstance(e.reason, _ProxyError): 11:54:50 raise ProxyError(e, request=request) 11:54:50 11:54:50 if isinstance(e.reason, _SSLError): 11:54:50 # This branch is for urllib3 v1.22 and later. 
11:54:50 raise SSLError(e, request=request) 11:54:50 11:54:50 > raise ConnectionError(e, request=request) 11:54:50 E requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=8191): Max retries exceeded with url: /rests/data/transportpce-portmapping:network/nodes=XPDRA01/mapping=XPDR1-NETWORK2 (Caused by NewConnectionError("HTTPConnection(host='localhost', port=8191): Failed to establish a new connection: [Errno 111] Connection refused")) 11:54:50 11:54:50 ../.tox/tests121/lib/python3.11/site-packages/requests/adapters.py:677: ConnectionError 11:54:50 ----------------------------- Captured stdout call ----------------------------- 11:54:50 execution of test_11_xpdr_portmapping_NETWORK2 11:54:50 _________ TestTransportPCEPortmapping.test_12_xpdr_portmapping_CLIENT1 _________ 11:54:50 11:54:50 self = 11:54:50 11:54:50 def _new_conn(self) -> socket.socket: 11:54:50 """Establish a socket connection and set nodelay settings on it. 11:54:50 11:54:50 :return: New socket connection. 
11:54:50 """ 11:54:50 try: 11:54:50 > sock = connection.create_connection( 11:54:50 (self._dns_host, self.port), 11:54:50 self.timeout, 11:54:50 source_address=self.source_address, 11:54:50 socket_options=self.socket_options, 11:54:50 ) 11:54:50 11:54:50 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:204: 11:54:50 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 11:54:50 ../.tox/tests121/lib/python3.11/site-packages/urllib3/util/connection.py:85: in create_connection 11:54:50 raise err 11:54:50 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 11:54:50 11:54:50 address = ('localhost', 8191), timeout = 30, source_address = None 11:54:50 socket_options = [(6, 1, 1)] 11:54:50 11:54:50 def create_connection( 11:54:50 address: tuple[str, int], 11:54:50 timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 11:54:50 source_address: tuple[str, int] | None = None, 11:54:50 socket_options: _TYPE_SOCKET_OPTIONS | None = None, 11:54:50 ) -> socket.socket: 11:54:50 """Connect to *address* and return the socket object. 11:54:50 11:54:50 Convenience function. Connect to *address* (a 2-tuple ``(host, 11:54:50 port)``) and return the socket object. Passing the optional 11:54:50 *timeout* parameter will set the timeout on the socket instance 11:54:50 before attempting to connect. If no *timeout* is supplied, the 11:54:50 global default timeout setting returned by :func:`socket.getdefaulttimeout` 11:54:50 is used. If *source_address* is set it must be a tuple of (host, port) 11:54:50 for the socket to bind as a source address before making the connection. 11:54:50 An host of '' or port 0 tells the OS to use the default. 
11:54:50 """ 11:54:50 11:54:50 host, port = address 11:54:50 if host.startswith("["): 11:54:50 host = host.strip("[]") 11:54:50 err = None 11:54:50 11:54:50 # Using the value from allowed_gai_family() in the context of getaddrinfo lets 11:54:50 # us select whether to work with IPv4 DNS records, IPv6 records, or both. 11:54:50 # The original create_connection function always returns all records. 11:54:50 family = allowed_gai_family() 11:54:50 11:54:50 try: 11:54:50 host.encode("idna") 11:54:50 except UnicodeError: 11:54:50 raise LocationParseError(f"'{host}', label empty or too long") from None 11:54:50 11:54:50 for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM): 11:54:50 af, socktype, proto, canonname, sa = res 11:54:50 sock = None 11:54:50 try: 11:54:50 sock = socket.socket(af, socktype, proto) 11:54:50 11:54:50 # If provided, set socket level options before connecting. 11:54:50 _set_socket_options(sock, socket_options) 11:54:50 11:54:50 if timeout is not _DEFAULT_TIMEOUT: 11:54:50 sock.settimeout(timeout) 11:54:50 if source_address: 11:54:50 sock.bind(source_address) 11:54:50 > sock.connect(sa) 11:54:50 E ConnectionRefusedError: [Errno 111] Connection refused 11:54:50 11:54:50 ../.tox/tests121/lib/python3.11/site-packages/urllib3/util/connection.py:73: ConnectionRefusedError 11:54:50 11:54:50 The above exception was the direct cause of the following exception: 11:54:50 11:54:50 self = 11:54:50 method = 'GET' 11:54:50 url = '/rests/data/transportpce-portmapping:network/nodes=XPDRA01/mapping=XPDR1-CLIENT1' 11:54:50 body = None 11:54:50 headers = {'User-Agent': 'python-requests/2.32.5', 'Accept-Encoding': 'gzip, deflate', 'Accept': 'application/json', 'Connection': 'keep-alive', 'Content-Type': 'application/json', 'Authorization': 'Basic YWRtaW46YWRtaW4='} 11:54:50 retries = Retry(total=0, connect=None, read=False, redirect=None, status=None) 11:54:50 redirect = False, assert_same_host = False 11:54:50 timeout = Timeout(connect=30, read=30, 
total=None), pool_timeout = None 11:54:50 release_conn = False, chunked = False, body_pos = None, preload_content = False 11:54:50 decode_content = False, response_kw = {} 11:54:50 parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/rests/data/transportpce-portmapping:network/nodes=XPDRA01/mapping=XPDR1-CLIENT1', query=None, fragment=None) 11:54:50 destination_scheme = None, conn = None, release_this_conn = True 11:54:50 http_tunnel_required = False, err = None, clean_exit = False 11:54:50 11:54:50 def urlopen( # type: ignore[override] 11:54:50 self, 11:54:50 method: str, 11:54:50 url: str, 11:54:50 body: _TYPE_BODY | None = None, 11:54:50 headers: typing.Mapping[str, str] | None = None, 11:54:50 retries: Retry | bool | int | None = None, 11:54:50 redirect: bool = True, 11:54:50 assert_same_host: bool = True, 11:54:50 timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 11:54:50 pool_timeout: int | None = None, 11:54:50 release_conn: bool | None = None, 11:54:50 chunked: bool = False, 11:54:50 body_pos: _TYPE_BODY_POSITION | None = None, 11:54:50 preload_content: bool = True, 11:54:50 decode_content: bool = True, 11:54:50 **response_kw: typing.Any, 11:54:50 ) -> BaseHTTPResponse: 11:54:50 """ 11:54:50 Get a connection from the pool and perform an HTTP request. This is the 11:54:50 lowest level call for making a request, so you'll need to specify all 11:54:50 the raw details. 11:54:50 11:54:50 .. note:: 11:54:50 11:54:50 More commonly, it's appropriate to use a convenience method 11:54:50 such as :meth:`request`. 11:54:50 11:54:50 .. note:: 11:54:50 11:54:50 `release_conn` will only behave as expected if 11:54:50 `preload_content=False` because we want to make 11:54:50 `preload_content=False` the default behaviour someday soon without 11:54:50 breaking backwards compatibility. 11:54:50 11:54:50 :param method: 11:54:50 HTTP request method (such as GET, POST, PUT, etc.) 11:54:50 11:54:50 :param url: 11:54:50 The URL to perform the request on. 
11:54:50 11:54:50 :param body: 11:54:50 Data to send in the request body, either :class:`str`, :class:`bytes`, 11:54:50 an iterable of :class:`str`/:class:`bytes`, or a file-like object. 11:54:50 11:54:50 :param headers: 11:54:50 Dictionary of custom headers to send, such as User-Agent, 11:54:50 If-None-Match, etc. If None, pool headers are used. If provided, 11:54:50 these headers completely replace any pool-specific headers. 11:54:50 11:54:50 :param retries: 11:54:50 Configure the number of retries to allow before raising a 11:54:50 :class:`~urllib3.exceptions.MaxRetryError` exception. 11:54:50 11:54:50 If ``None`` (default) will retry 3 times, see ``Retry.DEFAULT``. Pass a 11:54:50 :class:`~urllib3.util.retry.Retry` object for fine-grained control 11:54:50 over different types of retries. 11:54:50 Pass an integer number to retry connection errors that many times, 11:54:50 but no other types of errors. Pass zero to never retry. 11:54:50 11:54:50 If ``False``, then retries are disabled and any exception is raised 11:54:50 immediately. Also, instead of raising a MaxRetryError on redirects, 11:54:50 the redirect response will be returned. 11:54:50 11:54:50 :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. 11:54:50 11:54:50 :param redirect: 11:54:50 If True, automatically handle redirects (status codes 301, 302, 11:54:50 303, 307, 308). Each redirect counts as a retry. Disabling retries 11:54:50 will disable redirect, too. 11:54:50 11:54:50 :param assert_same_host: 11:54:50 If ``True``, will make sure that the host of the pool requests is 11:54:50 consistent else will raise HostChangedError. When ``False``, you can 11:54:50 use the pool on an HTTP proxy and request foreign hosts. 11:54:50 11:54:50 :param timeout: 11:54:50 If specified, overrides the default timeout for this one 11:54:50 request. It may be a float (in seconds) or an instance of 11:54:50 :class:`urllib3.util.Timeout`. 
11:54:50 11:54:50 :param pool_timeout: 11:54:50 If set and the pool is set to block=True, then this method will 11:54:50 block for ``pool_timeout`` seconds and raise EmptyPoolError if no 11:54:50 connection is available within the time period. 11:54:50 11:54:50 :param bool preload_content: 11:54:50 If True, the response's body will be preloaded into memory. 11:54:50 11:54:50 :param bool decode_content: 11:54:50 If True, will attempt to decode the body based on the 11:54:50 'content-encoding' header. 11:54:50 11:54:50 :param release_conn: 11:54:50 If False, then the urlopen call will not release the connection 11:54:50 back into the pool once a response is received (but will release if 11:54:50 you read the entire contents of the response such as when 11:54:50 `preload_content=True`). This is useful if you're not preloading 11:54:50 the response's content immediately. You will need to call 11:54:50 ``r.release_conn()`` on the response ``r`` to return the connection 11:54:50 back into the pool. If None, it takes the value of ``preload_content`` 11:54:50 which defaults to ``True``. 11:54:50 11:54:50 :param bool chunked: 11:54:50 If True, urllib3 will send the body using chunked transfer 11:54:50 encoding. Otherwise, urllib3 will send the body using the standard 11:54:50 content-length form. Defaults to False. 11:54:50 11:54:50 :param int body_pos: 11:54:50 Position to seek to in file-like body in the event of a retry or 11:54:50 redirect. Typically this won't need to be set because urllib3 will 11:54:50 auto-populate the value when needed. 
11:54:50 """ 11:54:50 parsed_url = parse_url(url) 11:54:50 destination_scheme = parsed_url.scheme 11:54:50 11:54:50 if headers is None: 11:54:50 headers = self.headers 11:54:50 11:54:50 if not isinstance(retries, Retry): 11:54:50 retries = Retry.from_int(retries, redirect=redirect, default=self.retries) 11:54:50 11:54:50 if release_conn is None: 11:54:50 release_conn = preload_content 11:54:50 11:54:50 # Check host 11:54:50 if assert_same_host and not self.is_same_host(url): 11:54:50 raise HostChangedError(self, url, retries) 11:54:50 11:54:50 # Ensure that the URL we're connecting to is properly encoded 11:54:50 if url.startswith("/"): 11:54:50 url = to_str(_encode_target(url)) 11:54:50 else: 11:54:50 url = to_str(parsed_url.url) 11:54:50 11:54:50 conn = None 11:54:50 11:54:50 # Track whether `conn` needs to be released before 11:54:50 # returning/raising/recursing. Update this variable if necessary, and 11:54:50 # leave `release_conn` constant throughout the function. That way, if 11:54:50 # the function recurses, the original value of `release_conn` will be 11:54:50 # passed down into the recursive call, and its value will be respected. 11:54:50 # 11:54:50 # See issue #651 [1] for details. 11:54:50 # 11:54:50 # [1] 11:54:50 release_this_conn = release_conn 11:54:50 11:54:50 http_tunnel_required = connection_requires_http_tunnel( 11:54:50 self.proxy, self.proxy_config, destination_scheme 11:54:50 ) 11:54:50 11:54:50 # Merge the proxy headers. Only done when not using HTTP CONNECT. We 11:54:50 # have to copy the headers dict so we can safely change it without those 11:54:50 # changes being reflected in anyone else's copy. 11:54:50 if not http_tunnel_required: 11:54:50 headers = headers.copy() # type: ignore[attr-defined] 11:54:50 headers.update(self.proxy_headers) # type: ignore[union-attr] 11:54:50 11:54:50 # Must keep the exception bound to a separate variable or else Python 3 11:54:50 # complains about UnboundLocalError. 
11:54:50 err = None 11:54:50 11:54:50 # Keep track of whether we cleanly exited the except block. This 11:54:50 # ensures we do proper cleanup in finally. 11:54:50 clean_exit = False 11:54:50 11:54:50 # Rewind body position, if needed. Record current position 11:54:50 # for future rewinds in the event of a redirect/retry. 11:54:50 body_pos = set_file_position(body, body_pos) 11:54:50 11:54:50 try: 11:54:50 # Request a connection from the queue. 11:54:50 timeout_obj = self._get_timeout(timeout) 11:54:50 conn = self._get_conn(timeout=pool_timeout) 11:54:50 11:54:50 conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] 11:54:50 11:54:50 # Is this a closed/new connection that requires CONNECT tunnelling? 11:54:50 if self.proxy is not None and http_tunnel_required and conn.is_closed: 11:54:50 try: 11:54:50 self._prepare_proxy(conn) 11:54:50 except (BaseSSLError, OSError, SocketTimeout) as e: 11:54:50 self._raise_timeout( 11:54:50 err=e, url=self.proxy.url, timeout_value=conn.timeout 11:54:50 ) 11:54:50 raise 11:54:50 11:54:50 # If we're going to release the connection in ``finally:``, then 11:54:50 # the response doesn't need to know about the connection. Otherwise 11:54:50 # it will also try to release it and we'll have a double-release 11:54:50 # mess. 
11:54:50 response_conn = conn if not release_conn else None 11:54:50 11:54:50 # Make the request on the HTTPConnection object 11:54:50 > response = self._make_request( 11:54:50 conn, 11:54:50 method, 11:54:50 url, 11:54:50 timeout=timeout_obj, 11:54:50 body=body, 11:54:50 headers=headers, 11:54:50 chunked=chunked, 11:54:50 retries=retries, 11:54:50 response_conn=response_conn, 11:54:50 preload_content=preload_content, 11:54:50 decode_content=decode_content, 11:54:50 **response_kw, 11:54:50 ) 11:54:50 11:54:50 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connectionpool.py:787: 11:54:50 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 11:54:50 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connectionpool.py:493: in _make_request 11:54:50 conn.request( 11:54:50 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:500: in request 11:54:50 self.endheaders() 11:54:50 /opt/pyenv/versions/3.11.10/lib/python3.11/http/client.py:1298: in endheaders 11:54:50 self._send_output(message_body, encode_chunked=encode_chunked) 11:54:50 /opt/pyenv/versions/3.11.10/lib/python3.11/http/client.py:1058: in _send_output 11:54:50 self.send(msg) 11:54:50 /opt/pyenv/versions/3.11.10/lib/python3.11/http/client.py:996: in send 11:54:50 self.connect() 11:54:50 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:331: in connect 11:54:50 self.sock = self._new_conn() 11:54:50 ^^^^^^^^^^^^^^^^ 11:54:50 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 11:54:50 11:54:50 self = 11:54:50 11:54:50 def _new_conn(self) -> socket.socket: 11:54:50 """Establish a socket connection and set nodelay settings on it. 11:54:50 11:54:50 :return: New socket connection. 
11:54:50 """ 11:54:50 try: 11:54:50 sock = connection.create_connection( 11:54:50 (self._dns_host, self.port), 11:54:50 self.timeout, 11:54:50 source_address=self.source_address, 11:54:50 socket_options=self.socket_options, 11:54:50 ) 11:54:50 except socket.gaierror as e: 11:54:50 raise NameResolutionError(self.host, self, e) from e 11:54:50 except SocketTimeout as e: 11:54:50 raise ConnectTimeoutError( 11:54:50 self, 11:54:50 f"Connection to {self.host} timed out. (connect timeout={self.timeout})", 11:54:50 ) from e 11:54:50 11:54:50 except OSError as e: 11:54:50 > raise NewConnectionError( 11:54:50 self, f"Failed to establish a new connection: {e}" 11:54:50 ) from e 11:54:50 E urllib3.exceptions.NewConnectionError: HTTPConnection(host='localhost', port=8191): Failed to establish a new connection: [Errno 111] Connection refused 11:54:50 11:54:50 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:219: NewConnectionError 11:54:50 11:54:50 The above exception was the direct cause of the following exception: 11:54:50 11:54:50 self = 11:54:50 request = , stream = False 11:54:50 timeout = Timeout(connect=30, read=30, total=None), verify = True, cert = None 11:54:50 proxies = OrderedDict() 11:54:50 11:54:50 def send( 11:54:50 self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 11:54:50 ): 11:54:50 """Sends PreparedRequest object. Returns Response object. 11:54:50 11:54:50 :param request: The :class:`PreparedRequest ` being sent. 11:54:50 :param stream: (optional) Whether to stream the request content. 11:54:50 :param timeout: (optional) How long to wait for the server to send 11:54:50 data before giving up, as a float, or a :ref:`(connect timeout, 11:54:50 read timeout) ` tuple. 
11:54:50 :type timeout: float or tuple or urllib3 Timeout object 11:54:50 :param verify: (optional) Either a boolean, in which case it controls whether 11:54:50 we verify the server's TLS certificate, or a string, in which case it 11:54:50 must be a path to a CA bundle to use 11:54:50 :param cert: (optional) Any user-provided SSL certificate to be trusted. 11:54:50 :param proxies: (optional) The proxies dictionary to apply to the request. 11:54:50 :rtype: requests.Response 11:54:50 """ 11:54:50 11:54:50 try: 11:54:50 conn = self.get_connection_with_tls_context( 11:54:50 request, verify, proxies=proxies, cert=cert 11:54:50 ) 11:54:50 except LocationValueError as e: 11:54:50 raise InvalidURL(e, request=request) 11:54:50 11:54:50 self.cert_verify(conn, request.url, verify, cert) 11:54:50 url = self.request_url(request, proxies) 11:54:50 self.add_headers( 11:54:50 request, 11:54:50 stream=stream, 11:54:50 timeout=timeout, 11:54:50 verify=verify, 11:54:50 cert=cert, 11:54:50 proxies=proxies, 11:54:50 ) 11:54:50 11:54:50 chunked = not (request.body is None or "Content-Length" in request.headers) 11:54:50 11:54:50 if isinstance(timeout, tuple): 11:54:50 try: 11:54:50 connect, read = timeout 11:54:50 timeout = TimeoutSauce(connect=connect, read=read) 11:54:50 except ValueError: 11:54:50 raise ValueError( 11:54:50 f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 11:54:50 f"or a single float to set both timeouts to the same value." 
11:54:50 ) 11:54:50 elif isinstance(timeout, TimeoutSauce): 11:54:50 pass 11:54:50 else: 11:54:50 timeout = TimeoutSauce(connect=timeout, read=timeout) 11:54:50 11:54:50 try: 11:54:50 > resp = conn.urlopen( 11:54:50 method=request.method, 11:54:50 url=url, 11:54:50 body=request.body, 11:54:50 headers=request.headers, 11:54:50 redirect=False, 11:54:50 assert_same_host=False, 11:54:50 preload_content=False, 11:54:50 decode_content=False, 11:54:50 retries=self.max_retries, 11:54:50 timeout=timeout, 11:54:50 chunked=chunked, 11:54:50 ) 11:54:50 11:54:50 ../.tox/tests121/lib/python3.11/site-packages/requests/adapters.py:644: 11:54:50 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 11:54:50 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connectionpool.py:841: in urlopen 11:54:50 retries = retries.increment( 11:54:50 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 11:54:50 11:54:50 self = Retry(total=0, connect=None, read=False, redirect=None, status=None) 11:54:50 method = 'GET' 11:54:50 url = '/rests/data/transportpce-portmapping:network/nodes=XPDRA01/mapping=XPDR1-CLIENT1' 11:54:50 response = None 11:54:50 error = NewConnectionError("HTTPConnection(host='localhost', port=8191): Failed to establish a new connection: [Errno 111] Connection refused") 11:54:50 _pool = 11:54:50 _stacktrace = 11:54:50 11:54:50 def increment( 11:54:50 self, 11:54:50 method: str | None = None, 11:54:50 url: str | None = None, 11:54:50 response: BaseHTTPResponse | None = None, 11:54:50 error: Exception | None = None, 11:54:50 _pool: ConnectionPool | None = None, 11:54:50 _stacktrace: TracebackType | None = None, 11:54:50 ) -> Self: 11:54:50 """Return a new Retry object with incremented retry counters. 11:54:50 11:54:50 :param response: A response object, or None, if the server did not 11:54:50 return a response. 
11:54:50         :type response: :class:`~urllib3.response.BaseHTTPResponse`
11:54:50         :param Exception error: An error encountered during the request, or
11:54:50             None if the response was received successfully.
11:54:50 
11:54:50         :return: A new ``Retry`` object.
11:54:50         """
11:54:50         if self.total is False and error:
11:54:50             # Disabled, indicate to re-raise the error.
11:54:50             raise reraise(type(error), error, _stacktrace)
11:54:50 
11:54:50         total = self.total
11:54:50         if total is not None:
11:54:50             total -= 1
11:54:50 
11:54:50         connect = self.connect
11:54:50         read = self.read
11:54:50         redirect = self.redirect
11:54:50         status_count = self.status
11:54:50         other = self.other
11:54:50         cause = "unknown"
11:54:50         status = None
11:54:50         redirect_location = None
11:54:50 
11:54:50         if error and self._is_connection_error(error):
11:54:50             # Connect retry?
11:54:50             if connect is False:
11:54:50                 raise reraise(type(error), error, _stacktrace)
11:54:50             elif connect is not None:
11:54:50                 connect -= 1
11:54:50 
11:54:50         elif error and self._is_read_error(error):
11:54:50             # Read retry?
11:54:50             if read is False or method is None or not self._is_method_retryable(method):
11:54:50                 raise reraise(type(error), error, _stacktrace)
11:54:50             elif read is not None:
11:54:50                 read -= 1
11:54:50 
11:54:50         elif error:
11:54:50             # Other retry?
11:54:50             if other is not None:
11:54:50                 other -= 1
11:54:50 
11:54:50         elif response and response.get_redirect_location():
11:54:50             # Redirect retry?
11:54:50             if redirect is not None:
11:54:50                 redirect -= 1
11:54:50             cause = "too many redirects"
11:54:50             response_redirect_location = response.get_redirect_location()
11:54:50             if response_redirect_location:
11:54:50                 redirect_location = response_redirect_location
11:54:50             status = response.status
11:54:50 
11:54:50         else:
11:54:50             # Incrementing because of a server error like a 500 in
11:54:50             # status_forcelist and the given method is in the allowed_methods
11:54:50             cause = ResponseError.GENERIC_ERROR
11:54:50             if response and response.status:
11:54:50                 if status_count is not None:
11:54:50                     status_count -= 1
11:54:50                 cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status)
11:54:50                 status = response.status
11:54:50 
11:54:50         history = self.history + (
11:54:50             RequestHistory(method, url, error, status, redirect_location),
11:54:50         )
11:54:50 
11:54:50         new_retry = self.new(
11:54:50             total=total,
11:54:50             connect=connect,
11:54:50             read=read,
11:54:50             redirect=redirect,
11:54:50             status=status_count,
11:54:50             other=other,
11:54:50             history=history,
11:54:50         )
11:54:50 
11:54:50         if new_retry.is_exhausted():
11:54:50             reason = error or ResponseError(cause)
11:54:50 >           raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type]
11:54:50             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
11:54:50 E           urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=8191): Max retries exceeded with url: /rests/data/transportpce-portmapping:network/nodes=XPDRA01/mapping=XPDR1-CLIENT1 (Caused by NewConnectionError("HTTPConnection(host='localhost', port=8191): Failed to establish a new connection: [Errno 111] Connection refused"))
11:54:50 
11:54:50 ../.tox/tests121/lib/python3.11/site-packages/urllib3/util/retry.py:535: MaxRetryError
11:54:50 
11:54:50 During handling of the above exception, another exception occurred:
11:54:50 
11:54:50 self = 
11:54:50 
11:54:50     def test_12_xpdr_portmapping_CLIENT1(self):
11:54:50 >       response = test_utils.get_portmapping_node_attr("XPDRA01", "mapping", "XPDR1-CLIENT1")
11:54:50                    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
11:54:50 
11:54:50 transportpce_tests/1.2.1/test01_portmapping.py:147: 
11:54:50 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
11:54:50 transportpce_tests/common/test_utils.py:519: in get_portmapping_node_attr
11:54:50     response = get_request(target_url)
11:54:50     ^^^^^^^^^^^^^^^^^^^^^^^
11:54:50 transportpce_tests/common/test_utils.py:117: in get_request
11:54:50     return requests.request(
11:54:50 ../.tox/tests121/lib/python3.11/site-packages/requests/api.py:59: in request
11:54:50     return session.request(method=method, url=url, **kwargs)
11:54:50            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
11:54:50 ../.tox/tests121/lib/python3.11/site-packages/requests/sessions.py:589: in request
11:54:50     resp = self.send(prep, **send_kwargs)
11:54:50            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
11:54:50 ../.tox/tests121/lib/python3.11/site-packages/requests/sessions.py:703: in send
11:54:50     r = adapter.send(request, **kwargs)
11:54:50         ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
11:54:50 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
11:54:50 
11:54:50 self = 
11:54:50 request = , stream = False
11:54:50 timeout = Timeout(connect=30, read=30, total=None), verify = True, cert = None
11:54:50 proxies = OrderedDict()
11:54:50 
11:54:50     def send(
11:54:50         self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None
11:54:50     ):
11:54:50         """Sends PreparedRequest object. Returns Response object.
11:54:50 
11:54:50         :param request: The :class:`PreparedRequest ` being sent.
11:54:50         :param stream: (optional) Whether to stream the request content.
11:54:50         :param timeout: (optional) How long to wait for the server to send
11:54:50             data before giving up, as a float, or a :ref:`(connect timeout,
11:54:50             read timeout) ` tuple.
11:54:50         :type timeout: float or tuple or urllib3 Timeout object
11:54:50         :param verify: (optional) Either a boolean, in which case it controls whether
11:54:50             we verify the server's TLS certificate, or a string, in which case it
11:54:50             must be a path to a CA bundle to use
11:54:50         :param cert: (optional) Any user-provided SSL certificate to be trusted.
11:54:50         :param proxies: (optional) The proxies dictionary to apply to the request.
11:54:50         :rtype: requests.Response
11:54:50         """
11:54:50 
11:54:50         try:
11:54:50             conn = self.get_connection_with_tls_context(
11:54:50                 request, verify, proxies=proxies, cert=cert
11:54:50             )
11:54:50         except LocationValueError as e:
11:54:50             raise InvalidURL(e, request=request)
11:54:50 
11:54:50         self.cert_verify(conn, request.url, verify, cert)
11:54:50         url = self.request_url(request, proxies)
11:54:50         self.add_headers(
11:54:50             request,
11:54:50             stream=stream,
11:54:50             timeout=timeout,
11:54:50             verify=verify,
11:54:50             cert=cert,
11:54:50             proxies=proxies,
11:54:50         )
11:54:50 
11:54:50         chunked = not (request.body is None or "Content-Length" in request.headers)
11:54:50 
11:54:50         if isinstance(timeout, tuple):
11:54:50             try:
11:54:50                 connect, read = timeout
11:54:50                 timeout = TimeoutSauce(connect=connect, read=read)
11:54:50             except ValueError:
11:54:50                 raise ValueError(
11:54:50                     f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, "
11:54:50                     f"or a single float to set both timeouts to the same value."
11:54:50                 )
11:54:50         elif isinstance(timeout, TimeoutSauce):
11:54:50             pass
11:54:50         else:
11:54:50             timeout = TimeoutSauce(connect=timeout, read=timeout)
11:54:50 
11:54:50         try:
11:54:50             resp = conn.urlopen(
11:54:50                 method=request.method,
11:54:50                 url=url,
11:54:50                 body=request.body,
11:54:50                 headers=request.headers,
11:54:50                 redirect=False,
11:54:50                 assert_same_host=False,
11:54:50                 preload_content=False,
11:54:50                 decode_content=False,
11:54:50                 retries=self.max_retries,
11:54:50                 timeout=timeout,
11:54:50                 chunked=chunked,
11:54:50             )
11:54:50 
11:54:50         except (ProtocolError, OSError) as err:
11:54:50             raise ConnectionError(err, request=request)
11:54:50 
11:54:50         except MaxRetryError as e:
11:54:50             if isinstance(e.reason, ConnectTimeoutError):
11:54:50                 # TODO: Remove this in 3.0.0: see #2811
11:54:50                 if not isinstance(e.reason, NewConnectionError):
11:54:50                     raise ConnectTimeout(e, request=request)
11:54:50 
11:54:50             if isinstance(e.reason, ResponseError):
11:54:50                 raise RetryError(e, request=request)
11:54:50 
11:54:50             if isinstance(e.reason, _ProxyError):
11:54:50                 raise ProxyError(e, request=request)
11:54:50 
11:54:50             if isinstance(e.reason, _SSLError):
11:54:50                 # This branch is for urllib3 v1.22 and later.
11:54:50                 raise SSLError(e, request=request)
11:54:50 
11:54:50 >           raise ConnectionError(e, request=request)
11:54:50 E           requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=8191): Max retries exceeded with url: /rests/data/transportpce-portmapping:network/nodes=XPDRA01/mapping=XPDR1-CLIENT1 (Caused by NewConnectionError("HTTPConnection(host='localhost', port=8191): Failed to establish a new connection: [Errno 111] Connection refused"))
11:54:50 
11:54:50 ../.tox/tests121/lib/python3.11/site-packages/requests/adapters.py:677: ConnectionError
11:54:50 ----------------------------- Captured stdout call -----------------------------
11:54:50 execution of test_12_xpdr_portmapping_CLIENT1
11:54:50 _________ TestTransportPCEPortmapping.test_13_xpdr_portmapping_CLIENT2 _________
11:54:50 
11:54:50 self = 
11:54:50 
11:54:50     def _new_conn(self) -> socket.socket:
11:54:50         """Establish a socket connection and set nodelay settings on it.
11:54:50 
11:54:50         :return: New socket connection.
11:54:50         """
11:54:50         try:
11:54:50 >           sock = connection.create_connection(
11:54:50                 (self._dns_host, self.port),
11:54:50                 self.timeout,
11:54:50                 source_address=self.source_address,
11:54:50                 socket_options=self.socket_options,
11:54:50             )
11:54:50 
11:54:50 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:204: 
11:54:50 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
11:54:50 ../.tox/tests121/lib/python3.11/site-packages/urllib3/util/connection.py:85: in create_connection
11:54:50     raise err
11:54:50 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
11:54:50 
11:54:50 address = ('localhost', 8191), timeout = 30, source_address = None
11:54:50 socket_options = [(6, 1, 1)]
11:54:50 
11:54:50 def create_connection(
11:54:50     address: tuple[str, int],
11:54:50     timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT,
11:54:50     source_address: tuple[str, int] | None = None,
11:54:50     socket_options: _TYPE_SOCKET_OPTIONS | None = None,
11:54:50 ) -> socket.socket:
11:54:50     """Connect to *address* and return the socket object.
11:54:50 
11:54:50     Convenience function. Connect to *address* (a 2-tuple ``(host,
11:54:50     port)``) and return the socket object. Passing the optional
11:54:50     *timeout* parameter will set the timeout on the socket instance
11:54:50     before attempting to connect. If no *timeout* is supplied, the
11:54:50     global default timeout setting returned by :func:`socket.getdefaulttimeout`
11:54:50     is used. If *source_address* is set it must be a tuple of (host, port)
11:54:50     for the socket to bind as a source address before making the connection.
11:54:50     An host of '' or port 0 tells the OS to use the default.
11:54:50     """
11:54:50 
11:54:50     host, port = address
11:54:50     if host.startswith("["):
11:54:50         host = host.strip("[]")
11:54:50     err = None
11:54:50 
11:54:50     # Using the value from allowed_gai_family() in the context of getaddrinfo lets
11:54:50     # us select whether to work with IPv4 DNS records, IPv6 records, or both.
11:54:50     # The original create_connection function always returns all records.
11:54:50     family = allowed_gai_family()
11:54:50 
11:54:50     try:
11:54:50         host.encode("idna")
11:54:50     except UnicodeError:
11:54:50         raise LocationParseError(f"'{host}', label empty or too long") from None
11:54:50 
11:54:50     for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM):
11:54:50         af, socktype, proto, canonname, sa = res
11:54:50         sock = None
11:54:50         try:
11:54:50             sock = socket.socket(af, socktype, proto)
11:54:50 
11:54:50             # If provided, set socket level options before connecting.
11:54:50             _set_socket_options(sock, socket_options)
11:54:50 
11:54:50             if timeout is not _DEFAULT_TIMEOUT:
11:54:50                 sock.settimeout(timeout)
11:54:50             if source_address:
11:54:50                 sock.bind(source_address)
11:54:50 >           sock.connect(sa)
11:54:50 E           ConnectionRefusedError: [Errno 111] Connection refused
11:54:50 
11:54:50 ../.tox/tests121/lib/python3.11/site-packages/urllib3/util/connection.py:73: ConnectionRefusedError
11:54:50 
11:54:50 The above exception was the direct cause of the following exception:
11:54:50 
11:54:50 self = 
11:54:50 method = 'GET'
11:54:50 url = '/rests/data/transportpce-portmapping:network/nodes=XPDRA01/mapping=XPDR1-CLIENT2'
11:54:50 body = None
11:54:50 headers = {'User-Agent': 'python-requests/2.32.5', 'Accept-Encoding': 'gzip, deflate', 'Accept': 'application/json', 'Connection': 'keep-alive', 'Content-Type': 'application/json', 'Authorization': 'Basic YWRtaW46YWRtaW4='}
11:54:50 retries = Retry(total=0, connect=None, read=False, redirect=None, status=None)
11:54:50 redirect = False, assert_same_host = False
11:54:50 timeout = Timeout(connect=30, read=30, total=None), pool_timeout = None
11:54:50 release_conn = False, chunked = False, body_pos = None, preload_content = False
11:54:50 decode_content = False, response_kw = {}
11:54:50 parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/rests/data/transportpce-portmapping:network/nodes=XPDRA01/mapping=XPDR1-CLIENT2', query=None, fragment=None)
11:54:50 destination_scheme = None, conn = None, release_this_conn = True
11:54:50 http_tunnel_required = False, err = None, clean_exit = False
11:54:50 
11:54:50     def urlopen( # type: ignore[override]
11:54:50         self,
11:54:50         method: str,
11:54:50         url: str,
11:54:50         body: _TYPE_BODY | None = None,
11:54:50         headers: typing.Mapping[str, str] | None = None,
11:54:50         retries: Retry | bool | int | None = None,
11:54:50         redirect: bool = True,
11:54:50         assert_same_host: bool = True,
11:54:50         timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT,
11:54:50         pool_timeout: int | None = None,
11:54:50         release_conn: bool | None = None,
11:54:50         chunked: bool = False,
11:54:50         body_pos: _TYPE_BODY_POSITION | None = None,
11:54:50         preload_content: bool = True,
11:54:50         decode_content: bool = True,
11:54:50         **response_kw: typing.Any,
11:54:50     ) -> BaseHTTPResponse:
11:54:50         """
11:54:50         Get a connection from the pool and perform an HTTP request. This is the
11:54:50         lowest level call for making a request, so you'll need to specify all
11:54:50         the raw details.
11:54:50 
11:54:50         .. note::
11:54:50 
11:54:50            More commonly, it's appropriate to use a convenience method
11:54:50            such as :meth:`request`.
11:54:50 
11:54:50         .. note::
11:54:50 
11:54:50            `release_conn` will only behave as expected if
11:54:50            `preload_content=False` because we want to make
11:54:50            `preload_content=False` the default behaviour someday soon without
11:54:50            breaking backwards compatibility.
11:54:50 
11:54:50         :param method:
11:54:50             HTTP request method (such as GET, POST, PUT, etc.)
11:54:50 
11:54:50         :param url:
11:54:50             The URL to perform the request on.
11:54:50 
11:54:50         :param body:
11:54:50             Data to send in the request body, either :class:`str`, :class:`bytes`,
11:54:50             an iterable of :class:`str`/:class:`bytes`, or a file-like object.
11:54:50 
11:54:50         :param headers:
11:54:50             Dictionary of custom headers to send, such as User-Agent,
11:54:50             If-None-Match, etc. If None, pool headers are used. If provided,
11:54:50             these headers completely replace any pool-specific headers.
11:54:50 
11:54:50         :param retries:
11:54:50             Configure the number of retries to allow before raising a
11:54:50             :class:`~urllib3.exceptions.MaxRetryError` exception.
11:54:50 
11:54:50             If ``None`` (default) will retry 3 times, see ``Retry.DEFAULT``. Pass a
11:54:50             :class:`~urllib3.util.retry.Retry` object for fine-grained control
11:54:50             over different types of retries.
11:54:50             Pass an integer number to retry connection errors that many times,
11:54:50             but no other types of errors. Pass zero to never retry.
11:54:50 
11:54:50             If ``False``, then retries are disabled and any exception is raised
11:54:50             immediately. Also, instead of raising a MaxRetryError on redirects,
11:54:50             the redirect response will be returned.
11:54:50 
11:54:50         :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int.
11:54:50 
11:54:50         :param redirect:
11:54:50             If True, automatically handle redirects (status codes 301, 302,
11:54:50             303, 307, 308). Each redirect counts as a retry. Disabling retries
11:54:50             will disable redirect, too.
11:54:50 
11:54:50         :param assert_same_host:
11:54:50             If ``True``, will make sure that the host of the pool requests is
11:54:50             consistent else will raise HostChangedError. When ``False``, you can
11:54:50             use the pool on an HTTP proxy and request foreign hosts.
11:54:50 
11:54:50         :param timeout:
11:54:50             If specified, overrides the default timeout for this one
11:54:50             request. It may be a float (in seconds) or an instance of
11:54:50             :class:`urllib3.util.Timeout`.
11:54:50 
11:54:50         :param pool_timeout:
11:54:50             If set and the pool is set to block=True, then this method will
11:54:50             block for ``pool_timeout`` seconds and raise EmptyPoolError if no
11:54:50             connection is available within the time period.
11:54:50 
11:54:50         :param bool preload_content:
11:54:50             If True, the response's body will be preloaded into memory.
11:54:50 
11:54:50         :param bool decode_content:
11:54:50             If True, will attempt to decode the body based on the
11:54:50             'content-encoding' header.
11:54:50 
11:54:50         :param release_conn:
11:54:50             If False, then the urlopen call will not release the connection
11:54:50             back into the pool once a response is received (but will release if
11:54:50             you read the entire contents of the response such as when
11:54:50             `preload_content=True`). This is useful if you're not preloading
11:54:50             the response's content immediately. You will need to call
11:54:50             ``r.release_conn()`` on the response ``r`` to return the connection
11:54:50             back into the pool. If None, it takes the value of ``preload_content``
11:54:50             which defaults to ``True``.
11:54:50 
11:54:50         :param bool chunked:
11:54:50             If True, urllib3 will send the body using chunked transfer
11:54:50             encoding. Otherwise, urllib3 will send the body using the standard
11:54:50             content-length form. Defaults to False.
11:54:50 
11:54:50         :param int body_pos:
11:54:50             Position to seek to in file-like body in the event of a retry or
11:54:50             redirect. Typically this won't need to be set because urllib3 will
11:54:50             auto-populate the value when needed.
11:54:50         """
11:54:50         parsed_url = parse_url(url)
11:54:50         destination_scheme = parsed_url.scheme
11:54:50 
11:54:50         if headers is None:
11:54:50             headers = self.headers
11:54:50 
11:54:50         if not isinstance(retries, Retry):
11:54:50             retries = Retry.from_int(retries, redirect=redirect, default=self.retries)
11:54:50 
11:54:50         if release_conn is None:
11:54:50             release_conn = preload_content
11:54:50 
11:54:50         # Check host
11:54:50         if assert_same_host and not self.is_same_host(url):
11:54:50             raise HostChangedError(self, url, retries)
11:54:50 
11:54:50         # Ensure that the URL we're connecting to is properly encoded
11:54:50         if url.startswith("/"):
11:54:50             url = to_str(_encode_target(url))
11:54:50         else:
11:54:50             url = to_str(parsed_url.url)
11:54:50 
11:54:50         conn = None
11:54:50 
11:54:50         # Track whether `conn` needs to be released before
11:54:50         # returning/raising/recursing. Update this variable if necessary, and
11:54:50         # leave `release_conn` constant throughout the function. That way, if
11:54:50         # the function recurses, the original value of `release_conn` will be
11:54:50         # passed down into the recursive call, and its value will be respected.
11:54:50         #
11:54:50         # See issue #651 [1] for details.
11:54:50         #
11:54:50         # [1] 
11:54:50         release_this_conn = release_conn
11:54:50 
11:54:50         http_tunnel_required = connection_requires_http_tunnel(
11:54:50             self.proxy, self.proxy_config, destination_scheme
11:54:50         )
11:54:50 
11:54:50         # Merge the proxy headers. Only done when not using HTTP CONNECT. We
11:54:50         # have to copy the headers dict so we can safely change it without those
11:54:50         # changes being reflected in anyone else's copy.
11:54:50         if not http_tunnel_required:
11:54:50             headers = headers.copy() # type: ignore[attr-defined]
11:54:50             headers.update(self.proxy_headers) # type: ignore[union-attr]
11:54:50 
11:54:50         # Must keep the exception bound to a separate variable or else Python 3
11:54:50         # complains about UnboundLocalError.
11:54:50         err = None
11:54:50 
11:54:50         # Keep track of whether we cleanly exited the except block. This
11:54:50         # ensures we do proper cleanup in finally.
11:54:50         clean_exit = False
11:54:50 
11:54:50         # Rewind body position, if needed. Record current position
11:54:50         # for future rewinds in the event of a redirect/retry.
11:54:50         body_pos = set_file_position(body, body_pos)
11:54:50 
11:54:50         try:
11:54:50             # Request a connection from the queue.
11:54:50             timeout_obj = self._get_timeout(timeout)
11:54:50             conn = self._get_conn(timeout=pool_timeout)
11:54:50 
11:54:50             conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment]
11:54:50 
11:54:50             # Is this a closed/new connection that requires CONNECT tunnelling?
11:54:50             if self.proxy is not None and http_tunnel_required and conn.is_closed:
11:54:50                 try:
11:54:50                     self._prepare_proxy(conn)
11:54:50                 except (BaseSSLError, OSError, SocketTimeout) as e:
11:54:50                     self._raise_timeout(
11:54:50                         err=e, url=self.proxy.url, timeout_value=conn.timeout
11:54:50                     )
11:54:50                     raise
11:54:50 
11:54:50             # If we're going to release the connection in ``finally:``, then
11:54:50             # the response doesn't need to know about the connection. Otherwise
11:54:50             # it will also try to release it and we'll have a double-release
11:54:50             # mess.
11:54:50             response_conn = conn if not release_conn else None
11:54:50 
11:54:50             # Make the request on the HTTPConnection object
11:54:50 >           response = self._make_request(
11:54:50                 conn,
11:54:50                 method,
11:54:50                 url,
11:54:50                 timeout=timeout_obj,
11:54:50                 body=body,
11:54:50                 headers=headers,
11:54:50                 chunked=chunked,
11:54:50                 retries=retries,
11:54:50                 response_conn=response_conn,
11:54:50                 preload_content=preload_content,
11:54:50                 decode_content=decode_content,
11:54:50                 **response_kw,
11:54:50             )
11:54:50 
11:54:50 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connectionpool.py:787: 
11:54:50 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
11:54:50 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connectionpool.py:493: in _make_request
11:54:50     conn.request(
11:54:50 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:500: in request
11:54:50     self.endheaders()
11:54:50 /opt/pyenv/versions/3.11.10/lib/python3.11/http/client.py:1298: in endheaders
11:54:50     self._send_output(message_body, encode_chunked=encode_chunked)
11:54:50 /opt/pyenv/versions/3.11.10/lib/python3.11/http/client.py:1058: in _send_output
11:54:50     self.send(msg)
11:54:50 /opt/pyenv/versions/3.11.10/lib/python3.11/http/client.py:996: in send
11:54:50     self.connect()
11:54:50 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:331: in connect
11:54:50     self.sock = self._new_conn()
11:54:50                 ^^^^^^^^^^^^^^^^
11:54:50 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
11:54:50 
11:54:50 self = 
11:54:50 
11:54:50     def _new_conn(self) -> socket.socket:
11:54:50         """Establish a socket connection and set nodelay settings on it.
11:54:50 
11:54:50         :return: New socket connection.
11:54:50         """
11:54:50         try:
11:54:50             sock = connection.create_connection(
11:54:50                 (self._dns_host, self.port),
11:54:50                 self.timeout,
11:54:50                 source_address=self.source_address,
11:54:50                 socket_options=self.socket_options,
11:54:50             )
11:54:50         except socket.gaierror as e:
11:54:50             raise NameResolutionError(self.host, self, e) from e
11:54:50         except SocketTimeout as e:
11:54:50             raise ConnectTimeoutError(
11:54:50                 self,
11:54:50                 f"Connection to {self.host} timed out. (connect timeout={self.timeout})",
11:54:50             ) from e
11:54:50 
11:54:50         except OSError as e:
11:54:50 >           raise NewConnectionError(
11:54:50                 self, f"Failed to establish a new connection: {e}"
11:54:50             ) from e
11:54:50 E           urllib3.exceptions.NewConnectionError: HTTPConnection(host='localhost', port=8191): Failed to establish a new connection: [Errno 111] Connection refused
11:54:50 
11:54:50 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:219: NewConnectionError
11:54:50 
11:54:50 The above exception was the direct cause of the following exception:
11:54:50 
11:54:50 self = 
11:54:50 request = , stream = False
11:54:50 timeout = Timeout(connect=30, read=30, total=None), verify = True, cert = None
11:54:50 proxies = OrderedDict()
11:54:50 
11:54:50     def send(
11:54:50         self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None
11:54:50     ):
11:54:50         """Sends PreparedRequest object. Returns Response object.
11:54:50 
11:54:50         :param request: The :class:`PreparedRequest ` being sent.
11:54:50         :param stream: (optional) Whether to stream the request content.
11:54:50         :param timeout: (optional) How long to wait for the server to send
11:54:50             data before giving up, as a float, or a :ref:`(connect timeout,
11:54:50             read timeout) ` tuple.
11:54:50         :type timeout: float or tuple or urllib3 Timeout object
11:54:50         :param verify: (optional) Either a boolean, in which case it controls whether
11:54:50             we verify the server's TLS certificate, or a string, in which case it
11:54:50             must be a path to a CA bundle to use
11:54:50         :param cert: (optional) Any user-provided SSL certificate to be trusted.
11:54:50         :param proxies: (optional) The proxies dictionary to apply to the request.
11:54:50         :rtype: requests.Response
11:54:50         """
11:54:50 
11:54:50         try:
11:54:50             conn = self.get_connection_with_tls_context(
11:54:50                 request, verify, proxies=proxies, cert=cert
11:54:50             )
11:54:50         except LocationValueError as e:
11:54:50             raise InvalidURL(e, request=request)
11:54:50 
11:54:50         self.cert_verify(conn, request.url, verify, cert)
11:54:50         url = self.request_url(request, proxies)
11:54:50         self.add_headers(
11:54:50             request,
11:54:50             stream=stream,
11:54:50             timeout=timeout,
11:54:50             verify=verify,
11:54:50             cert=cert,
11:54:50             proxies=proxies,
11:54:50         )
11:54:50 
11:54:50         chunked = not (request.body is None or "Content-Length" in request.headers)
11:54:50 
11:54:50         if isinstance(timeout, tuple):
11:54:50             try:
11:54:50                 connect, read = timeout
11:54:50                 timeout = TimeoutSauce(connect=connect, read=read)
11:54:50             except ValueError:
11:54:50                 raise ValueError(
11:54:50                     f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, "
11:54:50                     f"or a single float to set both timeouts to the same value."
11:54:50                 )
11:54:50         elif isinstance(timeout, TimeoutSauce):
11:54:50             pass
11:54:50         else:
11:54:50             timeout = TimeoutSauce(connect=timeout, read=timeout)
11:54:50 
11:54:50         try:
11:54:50 >           resp = conn.urlopen(
11:54:50                 method=request.method,
11:54:50                 url=url,
11:54:50                 body=request.body,
11:54:50                 headers=request.headers,
11:54:50                 redirect=False,
11:54:50                 assert_same_host=False,
11:54:50                 preload_content=False,
11:54:50                 decode_content=False,
11:54:50                 retries=self.max_retries,
11:54:50                 timeout=timeout,
11:54:50                 chunked=chunked,
11:54:50             )
11:54:50 
11:54:50 ../.tox/tests121/lib/python3.11/site-packages/requests/adapters.py:644: 
11:54:50 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
11:54:50 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connectionpool.py:841: in urlopen
11:54:50     retries = retries.increment(
11:54:50 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
11:54:50 
11:54:50 self = Retry(total=0, connect=None, read=False, redirect=None, status=None)
11:54:50 method = 'GET'
11:54:50 url = '/rests/data/transportpce-portmapping:network/nodes=XPDRA01/mapping=XPDR1-CLIENT2'
11:54:50 response = None
11:54:50 error = NewConnectionError("HTTPConnection(host='localhost', port=8191): Failed to establish a new connection: [Errno 111] Connection refused")
11:54:50 _pool = 
11:54:50 _stacktrace = 
11:54:50 
11:54:50     def increment(
11:54:50         self,
11:54:50         method: str | None = None,
11:54:50         url: str | None = None,
11:54:50         response: BaseHTTPResponse | None = None,
11:54:50         error: Exception | None = None,
11:54:50         _pool: ConnectionPool | None = None,
11:54:50         _stacktrace: TracebackType | None = None,
11:54:50     ) -> Self:
11:54:50         """Return a new Retry object with incremented retry counters.
11:54:50 
11:54:50         :param response: A response object, or None, if the server did not
11:54:50             return a response.
11:54:50         :type response: :class:`~urllib3.response.BaseHTTPResponse`
11:54:50         :param Exception error: An error encountered during the request, or
11:54:50             None if the response was received successfully.
11:54:50 
11:54:50         :return: A new ``Retry`` object.
11:54:50         """
11:54:50         if self.total is False and error:
11:54:50             # Disabled, indicate to re-raise the error.
11:54:50             raise reraise(type(error), error, _stacktrace)
11:54:50 
11:54:50         total = self.total
11:54:50         if total is not None:
11:54:50             total -= 1
11:54:50 
11:54:50         connect = self.connect
11:54:50         read = self.read
11:54:50         redirect = self.redirect
11:54:50         status_count = self.status
11:54:50         other = self.other
11:54:50         cause = "unknown"
11:54:50         status = None
11:54:50         redirect_location = None
11:54:50 
11:54:50         if error and self._is_connection_error(error):
11:54:50             # Connect retry?
11:54:50             if connect is False:
11:54:50                 raise reraise(type(error), error, _stacktrace)
11:54:50             elif connect is not None:
11:54:50                 connect -= 1
11:54:50 
11:54:50         elif error and self._is_read_error(error):
11:54:50             # Read retry?
11:54:50             if read is False or method is None or not self._is_method_retryable(method):
11:54:50                 raise reraise(type(error), error, _stacktrace)
11:54:50             elif read is not None:
11:54:50                 read -= 1
11:54:50 
11:54:50         elif error:
11:54:50             # Other retry?
11:54:50             if other is not None:
11:54:50                 other -= 1
11:54:50 
11:54:50         elif response and response.get_redirect_location():
11:54:50             # Redirect retry?
11:54:50             if redirect is not None:
11:54:50                 redirect -= 1
11:54:50             cause = "too many redirects"
11:54:50             response_redirect_location = response.get_redirect_location()
11:54:50             if response_redirect_location:
11:54:50                 redirect_location = response_redirect_location
11:54:50             status = response.status
11:54:50 
11:54:50         else:
11:54:50             # Incrementing because of a server error like a 500 in
11:54:50             # status_forcelist and the given method is in the allowed_methods
11:54:50             cause = ResponseError.GENERIC_ERROR
11:54:50             if response and response.status:
11:54:50                 if status_count is not None:
11:54:50                     status_count -= 1
11:54:50                 cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status)
11:54:50                 status = response.status
11:54:50 
11:54:50         history = self.history + (
11:54:50             RequestHistory(method, url, error, status, redirect_location),
11:54:50         )
11:54:50 
11:54:50         new_retry = self.new(
11:54:50             total=total,
11:54:50             connect=connect,
11:54:50             read=read,
11:54:50             redirect=redirect,
11:54:50             status=status_count,
11:54:50             other=other,
11:54:50             history=history,
11:54:50         )
11:54:50 
11:54:50         if new_retry.is_exhausted():
11:54:50             reason = error or ResponseError(cause)
11:54:50 >           raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type]
11:54:50             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
11:54:50 E           urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=8191): Max retries exceeded with url: /rests/data/transportpce-portmapping:network/nodes=XPDRA01/mapping=XPDR1-CLIENT2 (Caused by NewConnectionError("HTTPConnection(host='localhost', port=8191): Failed to establish a new connection: [Errno 111] Connection refused"))
11:54:50 
11:54:50 ../.tox/tests121/lib/python3.11/site-packages/urllib3/util/retry.py:535: MaxRetryError
11:54:50 
11:54:50 During handling of the above exception, another exception occurred:
11:54:50 
11:54:50 self = 
11:54:50 
11:54:50     def test_13_xpdr_portmapping_CLIENT2(self):
11:54:50 >       response = test_utils.get_portmapping_node_attr("XPDRA01", "mapping", "XPDR1-CLIENT2")
11:54:50                    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
11:54:50 
11:54:50 transportpce_tests/1.2.1/test01_portmapping.py:159: 
11:54:50 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
11:54:50 transportpce_tests/common/test_utils.py:519: in get_portmapping_node_attr
11:54:50     response = get_request(target_url)
11:54:50     ^^^^^^^^^^^^^^^^^^^^^^^
11:54:50 transportpce_tests/common/test_utils.py:117: in get_request
11:54:50     return requests.request(
11:54:50 ../.tox/tests121/lib/python3.11/site-packages/requests/api.py:59: in request
11:54:50     return session.request(method=method, url=url, **kwargs)
11:54:50            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
11:54:50 ../.tox/tests121/lib/python3.11/site-packages/requests/sessions.py:589: in request
11:54:50     resp = self.send(prep, **send_kwargs)
11:54:50            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
11:54:50 ../.tox/tests121/lib/python3.11/site-packages/requests/sessions.py:703: in send
11:54:50     r = adapter.send(request, **kwargs)
11:54:50         ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
11:54:50 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
11:54:50 
11:54:50 self = 
11:54:50 request = , stream = False
11:54:50 timeout = Timeout(connect=30, read=30, total=None), verify = True, cert = None
11:54:50 proxies = OrderedDict()
11:54:50 
11:54:50     def send(
11:54:50         self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None
11:54:50     ):
11:54:50         """Sends PreparedRequest object. Returns Response object.
11:54:50 
11:54:50         :param request: The :class:`PreparedRequest ` being sent.
11:54:50         :param stream: (optional) Whether to stream the request content.
11:54:50         :param timeout: (optional) How long to wait for the server to send
11:54:50             data before giving up, as a float, or a :ref:`(connect timeout,
11:54:50             read timeout) ` tuple.
11:54:50         :type timeout: float or tuple or urllib3 Timeout object
11:54:50         :param verify: (optional) Either a boolean, in which case it controls whether
11:54:50             we verify the server's TLS certificate, or a string, in which case it
11:54:50             must be a path to a CA bundle to use
11:54:50         :param cert: (optional) Any user-provided SSL certificate to be trusted.
11:54:50         :param proxies: (optional) The proxies dictionary to apply to the request.
11:54:50         :rtype: requests.Response
11:54:50         """
11:54:50 
11:54:50         try:
11:54:50             conn = self.get_connection_with_tls_context(
11:54:50                 request, verify, proxies=proxies, cert=cert
11:54:50             )
11:54:50         except LocationValueError as e:
11:54:50             raise InvalidURL(e, request=request)
11:54:50 
11:54:50         self.cert_verify(conn, request.url, verify, cert)
11:54:50         url = self.request_url(request, proxies)
11:54:50         self.add_headers(
11:54:50             request,
11:54:50             stream=stream,
11:54:50             timeout=timeout,
11:54:50             verify=verify,
11:54:50             cert=cert,
11:54:50             proxies=proxies,
11:54:50         )
11:54:50 
11:54:50         chunked = not (request.body is None or "Content-Length" in request.headers)
11:54:50 
11:54:50         if isinstance(timeout, tuple):
11:54:50             try:
11:54:50                 connect, read = timeout
11:54:50                 timeout = TimeoutSauce(connect=connect, read=read)
11:54:50             except ValueError:
11:54:50                 raise ValueError(
11:54:50                     f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, "
11:54:50                     f"or a single float to set both timeouts to the same value."
11:54:50                 )
11:54:50         elif isinstance(timeout, TimeoutSauce):
11:54:50             pass
11:54:50         else:
11:54:50             timeout = TimeoutSauce(connect=timeout, read=timeout)
11:54:50 
11:54:50         try:
11:54:50             resp = conn.urlopen(
11:54:50                 method=request.method,
11:54:50                 url=url,
11:54:50                 body=request.body,
11:54:50                 headers=request.headers,
11:54:50                 redirect=False,
11:54:50                 assert_same_host=False,
11:54:50                 preload_content=False,
11:54:50                 decode_content=False,
11:54:50                 retries=self.max_retries,
11:54:50                 timeout=timeout,
11:54:50                 chunked=chunked,
11:54:50             )
11:54:50 
11:54:50         except (ProtocolError, OSError) as err:
11:54:50             raise ConnectionError(err, request=request)
11:54:50 
11:54:50         except MaxRetryError as e:
11:54:50             if isinstance(e.reason, ConnectTimeoutError):
11:54:50                 # TODO: Remove this in 3.0.0: see #2811
11:54:50                 if not isinstance(e.reason, NewConnectionError):
11:54:50                     raise ConnectTimeout(e, request=request)
11:54:50 
11:54:50             if isinstance(e.reason, ResponseError):
11:54:50                 raise RetryError(e, request=request)
11:54:50 
11:54:50             if isinstance(e.reason, _ProxyError):
11:54:50                 raise ProxyError(e, request=request)
11:54:50 
11:54:50             if isinstance(e.reason, _SSLError):
11:54:50                 # This branch is for urllib3 v1.22 and later.
11:54:50 raise SSLError(e, request=request) 11:54:50 11:54:50 > raise ConnectionError(e, request=request) 11:54:50 E requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=8191): Max retries exceeded with url: /rests/data/transportpce-portmapping:network/nodes=XPDRA01/mapping=XPDR1-CLIENT2 (Caused by NewConnectionError("HTTPConnection(host='localhost', port=8191): Failed to establish a new connection: [Errno 111] Connection refused")) 11:54:50 11:54:50 ../.tox/tests121/lib/python3.11/site-packages/requests/adapters.py:677: ConnectionError 11:54:50 ----------------------------- Captured stdout call ----------------------------- 11:54:50 execution of test_13_xpdr_portmapping_CLIENT2 11:54:50 _________ TestTransportPCEPortmapping.test_14_xpdr_portmapping_CLIENT3 _________ 11:54:50 11:54:50 self = 11:54:50 11:54:50 def _new_conn(self) -> socket.socket: 11:54:50 """Establish a socket connection and set nodelay settings on it. 11:54:50 11:54:50 :return: New socket connection. 
11:54:50 """ 11:54:50 try: 11:54:50 > sock = connection.create_connection( 11:54:50 (self._dns_host, self.port), 11:54:50 self.timeout, 11:54:50 source_address=self.source_address, 11:54:50 socket_options=self.socket_options, 11:54:50 ) 11:54:50 11:54:50 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:204: 11:54:50 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 11:54:50 ../.tox/tests121/lib/python3.11/site-packages/urllib3/util/connection.py:85: in create_connection 11:54:50 raise err 11:54:50 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 11:54:50 11:54:50 address = ('localhost', 8191), timeout = 30, source_address = None 11:54:50 socket_options = [(6, 1, 1)] 11:54:50 11:54:50 def create_connection( 11:54:50 address: tuple[str, int], 11:54:50 timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 11:54:50 source_address: tuple[str, int] | None = None, 11:54:50 socket_options: _TYPE_SOCKET_OPTIONS | None = None, 11:54:50 ) -> socket.socket: 11:54:50 """Connect to *address* and return the socket object. 11:54:50 11:54:50 Convenience function. Connect to *address* (a 2-tuple ``(host, 11:54:50 port)``) and return the socket object. Passing the optional 11:54:50 *timeout* parameter will set the timeout on the socket instance 11:54:50 before attempting to connect. If no *timeout* is supplied, the 11:54:50 global default timeout setting returned by :func:`socket.getdefaulttimeout` 11:54:50 is used. If *source_address* is set it must be a tuple of (host, port) 11:54:50 for the socket to bind as a source address before making the connection. 11:54:50 An host of '' or port 0 tells the OS to use the default. 
11:54:50 """ 11:54:50 11:54:50 host, port = address 11:54:50 if host.startswith("["): 11:54:50 host = host.strip("[]") 11:54:50 err = None 11:54:50 11:54:50 # Using the value from allowed_gai_family() in the context of getaddrinfo lets 11:54:50 # us select whether to work with IPv4 DNS records, IPv6 records, or both. 11:54:50 # The original create_connection function always returns all records. 11:54:50 family = allowed_gai_family() 11:54:50 11:54:50 try: 11:54:50 host.encode("idna") 11:54:50 except UnicodeError: 11:54:50 raise LocationParseError(f"'{host}', label empty or too long") from None 11:54:50 11:54:50 for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM): 11:54:50 af, socktype, proto, canonname, sa = res 11:54:50 sock = None 11:54:50 try: 11:54:50 sock = socket.socket(af, socktype, proto) 11:54:50 11:54:50 # If provided, set socket level options before connecting. 11:54:50 _set_socket_options(sock, socket_options) 11:54:50 11:54:50 if timeout is not _DEFAULT_TIMEOUT: 11:54:50 sock.settimeout(timeout) 11:54:50 if source_address: 11:54:50 sock.bind(source_address) 11:54:50 > sock.connect(sa) 11:54:50 E ConnectionRefusedError: [Errno 111] Connection refused 11:54:50 11:54:50 ../.tox/tests121/lib/python3.11/site-packages/urllib3/util/connection.py:73: ConnectionRefusedError 11:54:50 11:54:50 The above exception was the direct cause of the following exception: 11:54:50 11:54:50 self = 11:54:50 method = 'GET' 11:54:50 url = '/rests/data/transportpce-portmapping:network/nodes=XPDRA01/mapping=XPDR1-CLIENT3' 11:54:50 body = None 11:54:50 headers = {'User-Agent': 'python-requests/2.32.5', 'Accept-Encoding': 'gzip, deflate', 'Accept': 'application/json', 'Connection': 'keep-alive', 'Content-Type': 'application/json', 'Authorization': 'Basic YWRtaW46YWRtaW4='} 11:54:50 retries = Retry(total=0, connect=None, read=False, redirect=None, status=None) 11:54:50 redirect = False, assert_same_host = False 11:54:50 timeout = Timeout(connect=30, read=30, 
total=None), pool_timeout = None 11:54:50 release_conn = False, chunked = False, body_pos = None, preload_content = False 11:54:50 decode_content = False, response_kw = {} 11:54:50 parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/rests/data/transportpce-portmapping:network/nodes=XPDRA01/mapping=XPDR1-CLIENT3', query=None, fragment=None) 11:54:50 destination_scheme = None, conn = None, release_this_conn = True 11:54:50 http_tunnel_required = False, err = None, clean_exit = False 11:54:50 11:54:50 def urlopen( # type: ignore[override] 11:54:50 self, 11:54:50 method: str, 11:54:50 url: str, 11:54:50 body: _TYPE_BODY | None = None, 11:54:50 headers: typing.Mapping[str, str] | None = None, 11:54:50 retries: Retry | bool | int | None = None, 11:54:50 redirect: bool = True, 11:54:50 assert_same_host: bool = True, 11:54:50 timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 11:54:50 pool_timeout: int | None = None, 11:54:50 release_conn: bool | None = None, 11:54:50 chunked: bool = False, 11:54:50 body_pos: _TYPE_BODY_POSITION | None = None, 11:54:50 preload_content: bool = True, 11:54:50 decode_content: bool = True, 11:54:50 **response_kw: typing.Any, 11:54:50 ) -> BaseHTTPResponse: 11:54:50 """ 11:54:50 Get a connection from the pool and perform an HTTP request. This is the 11:54:50 lowest level call for making a request, so you'll need to specify all 11:54:50 the raw details. 11:54:50 11:54:50 .. note:: 11:54:50 11:54:50 More commonly, it's appropriate to use a convenience method 11:54:50 such as :meth:`request`. 11:54:50 11:54:50 .. note:: 11:54:50 11:54:50 `release_conn` will only behave as expected if 11:54:50 `preload_content=False` because we want to make 11:54:50 `preload_content=False` the default behaviour someday soon without 11:54:50 breaking backwards compatibility. 11:54:50 11:54:50 :param method: 11:54:50 HTTP request method (such as GET, POST, PUT, etc.) 11:54:50 11:54:50 :param url: 11:54:50 The URL to perform the request on. 
11:54:50 11:54:50 :param body: 11:54:50 Data to send in the request body, either :class:`str`, :class:`bytes`, 11:54:50 an iterable of :class:`str`/:class:`bytes`, or a file-like object. 11:54:50 11:54:50 :param headers: 11:54:50 Dictionary of custom headers to send, such as User-Agent, 11:54:50 If-None-Match, etc. If None, pool headers are used. If provided, 11:54:50 these headers completely replace any pool-specific headers. 11:54:50 11:54:50 :param retries: 11:54:50 Configure the number of retries to allow before raising a 11:54:50 :class:`~urllib3.exceptions.MaxRetryError` exception. 11:54:50 11:54:50 If ``None`` (default) will retry 3 times, see ``Retry.DEFAULT``. Pass a 11:54:50 :class:`~urllib3.util.retry.Retry` object for fine-grained control 11:54:50 over different types of retries. 11:54:50 Pass an integer number to retry connection errors that many times, 11:54:50 but no other types of errors. Pass zero to never retry. 11:54:50 11:54:50 If ``False``, then retries are disabled and any exception is raised 11:54:50 immediately. Also, instead of raising a MaxRetryError on redirects, 11:54:50 the redirect response will be returned. 11:54:50 11:54:50 :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. 11:54:50 11:54:50 :param redirect: 11:54:50 If True, automatically handle redirects (status codes 301, 302, 11:54:50 303, 307, 308). Each redirect counts as a retry. Disabling retries 11:54:50 will disable redirect, too. 11:54:50 11:54:50 :param assert_same_host: 11:54:50 If ``True``, will make sure that the host of the pool requests is 11:54:50 consistent else will raise HostChangedError. When ``False``, you can 11:54:50 use the pool on an HTTP proxy and request foreign hosts. 11:54:50 11:54:50 :param timeout: 11:54:50 If specified, overrides the default timeout for this one 11:54:50 request. It may be a float (in seconds) or an instance of 11:54:50 :class:`urllib3.util.Timeout`. 
11:54:50 11:54:50 :param pool_timeout: 11:54:50 If set and the pool is set to block=True, then this method will 11:54:50 block for ``pool_timeout`` seconds and raise EmptyPoolError if no 11:54:50 connection is available within the time period. 11:54:50 11:54:50 :param bool preload_content: 11:54:50 If True, the response's body will be preloaded into memory. 11:54:50 11:54:50 :param bool decode_content: 11:54:50 If True, will attempt to decode the body based on the 11:54:50 'content-encoding' header. 11:54:50 11:54:50 :param release_conn: 11:54:50 If False, then the urlopen call will not release the connection 11:54:50 back into the pool once a response is received (but will release if 11:54:50 you read the entire contents of the response such as when 11:54:50 `preload_content=True`). This is useful if you're not preloading 11:54:50 the response's content immediately. You will need to call 11:54:50 ``r.release_conn()`` on the response ``r`` to return the connection 11:54:50 back into the pool. If None, it takes the value of ``preload_content`` 11:54:50 which defaults to ``True``. 11:54:50 11:54:50 :param bool chunked: 11:54:50 If True, urllib3 will send the body using chunked transfer 11:54:50 encoding. Otherwise, urllib3 will send the body using the standard 11:54:50 content-length form. Defaults to False. 11:54:50 11:54:50 :param int body_pos: 11:54:50 Position to seek to in file-like body in the event of a retry or 11:54:50 redirect. Typically this won't need to be set because urllib3 will 11:54:50 auto-populate the value when needed. 
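[editor's note] The `retries` parameter documented above is the pivot of this failure: the log shows `retries = Retry(total=0, ...)`, i.e. requests' default `HTTPAdapter` allows zero retries, so the first connection error immediately exhausts the budget and `Retry.increment` raises `MaxRetryError`. A sketch of that exhaustion, with no network involved (the `None` connection and message passed to `NewConnectionError` are placeholders, not real objects from the run):

```python
from urllib3.exceptions import MaxRetryError, NewConnectionError
from urllib3.util.retry import Retry

# The exact Retry configuration shown in the log.
retry = Retry(total=0, connect=None, read=False, redirect=None, status=None)

# Stand-in for the connection error raised by _new_conn().
err = NewConnectionError(None, "Failed to establish a new connection")

# increment() decrements total to -1, making the Retry exhausted,
# so it raises MaxRetryError instead of returning a new Retry.
try:
    retry.increment(method="GET", url="/rests/data/example", error=err)
    exhausted = False
except MaxRetryError:
    exhausted = True
```

requests then catches that `MaxRetryError` in `HTTPAdapter.send` and re-raises it as `requests.exceptions.ConnectionError`, which is the exception the tests ultimately report.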
11:54:50 """ 11:54:50 parsed_url = parse_url(url) 11:54:50 destination_scheme = parsed_url.scheme 11:54:50 11:54:50 if headers is None: 11:54:50 headers = self.headers 11:54:50 11:54:50 if not isinstance(retries, Retry): 11:54:50 retries = Retry.from_int(retries, redirect=redirect, default=self.retries) 11:54:50 11:54:50 if release_conn is None: 11:54:50 release_conn = preload_content 11:54:50 11:54:50 # Check host 11:54:50 if assert_same_host and not self.is_same_host(url): 11:54:50 raise HostChangedError(self, url, retries) 11:54:50 11:54:50 # Ensure that the URL we're connecting to is properly encoded 11:54:50 if url.startswith("/"): 11:54:50 url = to_str(_encode_target(url)) 11:54:50 else: 11:54:50 url = to_str(parsed_url.url) 11:54:50 11:54:50 conn = None 11:54:50 11:54:50 # Track whether `conn` needs to be released before 11:54:50 # returning/raising/recursing. Update this variable if necessary, and 11:54:50 # leave `release_conn` constant throughout the function. That way, if 11:54:50 # the function recurses, the original value of `release_conn` will be 11:54:50 # passed down into the recursive call, and its value will be respected. 11:54:50 # 11:54:50 # See issue #651 [1] for details. 11:54:50 # 11:54:50 # [1] 11:54:50 release_this_conn = release_conn 11:54:50 11:54:50 http_tunnel_required = connection_requires_http_tunnel( 11:54:50 self.proxy, self.proxy_config, destination_scheme 11:54:50 ) 11:54:50 11:54:50 # Merge the proxy headers. Only done when not using HTTP CONNECT. We 11:54:50 # have to copy the headers dict so we can safely change it without those 11:54:50 # changes being reflected in anyone else's copy. 11:54:50 if not http_tunnel_required: 11:54:50 headers = headers.copy() # type: ignore[attr-defined] 11:54:50 headers.update(self.proxy_headers) # type: ignore[union-attr] 11:54:50 11:54:50 # Must keep the exception bound to a separate variable or else Python 3 11:54:50 # complains about UnboundLocalError. 
11:54:50 err = None 11:54:50 11:54:50 # Keep track of whether we cleanly exited the except block. This 11:54:50 # ensures we do proper cleanup in finally. 11:54:50 clean_exit = False 11:54:50 11:54:50 # Rewind body position, if needed. Record current position 11:54:50 # for future rewinds in the event of a redirect/retry. 11:54:50 body_pos = set_file_position(body, body_pos) 11:54:50 11:54:50 try: 11:54:50 # Request a connection from the queue. 11:54:50 timeout_obj = self._get_timeout(timeout) 11:54:50 conn = self._get_conn(timeout=pool_timeout) 11:54:50 11:54:50 conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] 11:54:50 11:54:50 # Is this a closed/new connection that requires CONNECT tunnelling? 11:54:50 if self.proxy is not None and http_tunnel_required and conn.is_closed: 11:54:50 try: 11:54:50 self._prepare_proxy(conn) 11:54:50 except (BaseSSLError, OSError, SocketTimeout) as e: 11:54:50 self._raise_timeout( 11:54:50 err=e, url=self.proxy.url, timeout_value=conn.timeout 11:54:50 ) 11:54:50 raise 11:54:50 11:54:50 # If we're going to release the connection in ``finally:``, then 11:54:50 # the response doesn't need to know about the connection. Otherwise 11:54:50 # it will also try to release it and we'll have a double-release 11:54:50 # mess. 
11:54:50 response_conn = conn if not release_conn else None 11:54:50 11:54:50 # Make the request on the HTTPConnection object 11:54:50 > response = self._make_request( 11:54:50 conn, 11:54:50 method, 11:54:50 url, 11:54:50 timeout=timeout_obj, 11:54:50 body=body, 11:54:50 headers=headers, 11:54:50 chunked=chunked, 11:54:50 retries=retries, 11:54:50 response_conn=response_conn, 11:54:50 preload_content=preload_content, 11:54:50 decode_content=decode_content, 11:54:50 **response_kw, 11:54:50 ) 11:54:50 11:54:50 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connectionpool.py:787: 11:54:50 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 11:54:50 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connectionpool.py:493: in _make_request 11:54:50 conn.request( 11:54:50 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:500: in request 11:54:50 self.endheaders() 11:54:50 /opt/pyenv/versions/3.11.10/lib/python3.11/http/client.py:1298: in endheaders 11:54:50 self._send_output(message_body, encode_chunked=encode_chunked) 11:54:50 /opt/pyenv/versions/3.11.10/lib/python3.11/http/client.py:1058: in _send_output 11:54:50 self.send(msg) 11:54:50 /opt/pyenv/versions/3.11.10/lib/python3.11/http/client.py:996: in send 11:54:50 self.connect() 11:54:50 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:331: in connect 11:54:50 self.sock = self._new_conn() 11:54:50 ^^^^^^^^^^^^^^^^ 11:54:50 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 11:54:50 11:54:50 self = 11:54:50 11:54:50 def _new_conn(self) -> socket.socket: 11:54:50 """Establish a socket connection and set nodelay settings on it. 11:54:50 11:54:50 :return: New socket connection. 
11:54:50 """ 11:54:50 try: 11:54:50 sock = connection.create_connection( 11:54:50 (self._dns_host, self.port), 11:54:50 self.timeout, 11:54:50 source_address=self.source_address, 11:54:50 socket_options=self.socket_options, 11:54:50 ) 11:54:50 except socket.gaierror as e: 11:54:50 raise NameResolutionError(self.host, self, e) from e 11:54:50 except SocketTimeout as e: 11:54:50 raise ConnectTimeoutError( 11:54:50 self, 11:54:50 f"Connection to {self.host} timed out. (connect timeout={self.timeout})", 11:54:50 ) from e 11:54:50 11:54:50 except OSError as e: 11:54:50 > raise NewConnectionError( 11:54:50 self, f"Failed to establish a new connection: {e}" 11:54:50 ) from e 11:54:50 E urllib3.exceptions.NewConnectionError: HTTPConnection(host='localhost', port=8191): Failed to establish a new connection: [Errno 111] Connection refused 11:54:50 11:54:50 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:219: NewConnectionError 11:54:50 11:54:50 The above exception was the direct cause of the following exception: 11:54:50 11:54:50 self = 11:54:50 request = , stream = False 11:54:50 timeout = Timeout(connect=30, read=30, total=None), verify = True, cert = None 11:54:50 proxies = OrderedDict() 11:54:50 11:54:50 def send( 11:54:50 self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 11:54:50 ): 11:54:50 """Sends PreparedRequest object. Returns Response object. 11:54:50 11:54:50 :param request: The :class:`PreparedRequest ` being sent. 11:54:50 :param stream: (optional) Whether to stream the request content. 11:54:50 :param timeout: (optional) How long to wait for the server to send 11:54:50 data before giving up, as a float, or a :ref:`(connect timeout, 11:54:50 read timeout) ` tuple. 
11:54:50 :type timeout: float or tuple or urllib3 Timeout object 11:54:50 :param verify: (optional) Either a boolean, in which case it controls whether 11:54:50 we verify the server's TLS certificate, or a string, in which case it 11:54:50 must be a path to a CA bundle to use 11:54:50 :param cert: (optional) Any user-provided SSL certificate to be trusted. 11:54:50 :param proxies: (optional) The proxies dictionary to apply to the request. 11:54:50 :rtype: requests.Response 11:54:50 """ 11:54:50 11:54:50 try: 11:54:50 conn = self.get_connection_with_tls_context( 11:54:50 request, verify, proxies=proxies, cert=cert 11:54:50 ) 11:54:50 except LocationValueError as e: 11:54:50 raise InvalidURL(e, request=request) 11:54:50 11:54:50 self.cert_verify(conn, request.url, verify, cert) 11:54:50 url = self.request_url(request, proxies) 11:54:50 self.add_headers( 11:54:50 request, 11:54:50 stream=stream, 11:54:50 timeout=timeout, 11:54:50 verify=verify, 11:54:50 cert=cert, 11:54:50 proxies=proxies, 11:54:50 ) 11:54:50 11:54:50 chunked = not (request.body is None or "Content-Length" in request.headers) 11:54:50 11:54:50 if isinstance(timeout, tuple): 11:54:50 try: 11:54:50 connect, read = timeout 11:54:50 timeout = TimeoutSauce(connect=connect, read=read) 11:54:50 except ValueError: 11:54:50 raise ValueError( 11:54:50 f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 11:54:50 f"or a single float to set both timeouts to the same value." 
11:54:50 ) 11:54:50 elif isinstance(timeout, TimeoutSauce): 11:54:50 pass 11:54:50 else: 11:54:50 timeout = TimeoutSauce(connect=timeout, read=timeout) 11:54:50 11:54:50 try: 11:54:50 > resp = conn.urlopen( 11:54:50 method=request.method, 11:54:50 url=url, 11:54:50 body=request.body, 11:54:50 headers=request.headers, 11:54:50 redirect=False, 11:54:50 assert_same_host=False, 11:54:50 preload_content=False, 11:54:50 decode_content=False, 11:54:50 retries=self.max_retries, 11:54:50 timeout=timeout, 11:54:50 chunked=chunked, 11:54:50 ) 11:54:50 11:54:50 ../.tox/tests121/lib/python3.11/site-packages/requests/adapters.py:644: 11:54:50 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 11:54:50 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connectionpool.py:841: in urlopen 11:54:50 retries = retries.increment( 11:54:50 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 11:54:50 11:54:50 self = Retry(total=0, connect=None, read=False, redirect=None, status=None) 11:54:50 method = 'GET' 11:54:50 url = '/rests/data/transportpce-portmapping:network/nodes=XPDRA01/mapping=XPDR1-CLIENT3' 11:54:50 response = None 11:54:50 error = NewConnectionError("HTTPConnection(host='localhost', port=8191): Failed to establish a new connection: [Errno 111] Connection refused") 11:54:50 _pool = 11:54:50 _stacktrace = 11:54:50 11:54:50 def increment( 11:54:50 self, 11:54:50 method: str | None = None, 11:54:50 url: str | None = None, 11:54:50 response: BaseHTTPResponse | None = None, 11:54:50 error: Exception | None = None, 11:54:50 _pool: ConnectionPool | None = None, 11:54:50 _stacktrace: TracebackType | None = None, 11:54:50 ) -> Self: 11:54:50 """Return a new Retry object with incremented retry counters. 11:54:50 11:54:50 :param response: A response object, or None, if the server did not 11:54:50 return a response. 
11:54:50 :type response: :class:`~urllib3.response.BaseHTTPResponse` 11:54:50 :param Exception error: An error encountered during the request, or 11:54:50 None if the response was received successfully. 11:54:50 11:54:50 :return: A new ``Retry`` object. 11:54:50 """ 11:54:50 if self.total is False and error: 11:54:50 # Disabled, indicate to re-raise the error. 11:54:50 raise reraise(type(error), error, _stacktrace) 11:54:50 11:54:50 total = self.total 11:54:50 if total is not None: 11:54:50 total -= 1 11:54:50 11:54:50 connect = self.connect 11:54:50 read = self.read 11:54:50 redirect = self.redirect 11:54:50 status_count = self.status 11:54:50 other = self.other 11:54:50 cause = "unknown" 11:54:50 status = None 11:54:50 redirect_location = None 11:54:50 11:54:50 if error and self._is_connection_error(error): 11:54:50 # Connect retry? 11:54:50 if connect is False: 11:54:50 raise reraise(type(error), error, _stacktrace) 11:54:50 elif connect is not None: 11:54:50 connect -= 1 11:54:50 11:54:50 elif error and self._is_read_error(error): 11:54:50 # Read retry? 11:54:50 if read is False or method is None or not self._is_method_retryable(method): 11:54:50 raise reraise(type(error), error, _stacktrace) 11:54:50 elif read is not None: 11:54:50 read -= 1 11:54:50 11:54:50 elif error: 11:54:50 # Other retry? 11:54:50 if other is not None: 11:54:50 other -= 1 11:54:50 11:54:50 elif response and response.get_redirect_location(): 11:54:50 # Redirect retry? 
11:54:50 if redirect is not None: 11:54:50 redirect -= 1 11:54:50 cause = "too many redirects" 11:54:50 response_redirect_location = response.get_redirect_location() 11:54:50 if response_redirect_location: 11:54:50 redirect_location = response_redirect_location 11:54:50 status = response.status 11:54:50 11:54:50 else: 11:54:50 # Incrementing because of a server error like a 500 in 11:54:50 # status_forcelist and the given method is in the allowed_methods 11:54:50 cause = ResponseError.GENERIC_ERROR 11:54:50 if response and response.status: 11:54:50 if status_count is not None: 11:54:50 status_count -= 1 11:54:50 cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status) 11:54:50 status = response.status 11:54:50 11:54:50 history = self.history + ( 11:54:50 RequestHistory(method, url, error, status, redirect_location), 11:54:50 ) 11:54:50 11:54:50 new_retry = self.new( 11:54:50 total=total, 11:54:50 connect=connect, 11:54:50 read=read, 11:54:50 redirect=redirect, 11:54:50 status=status_count, 11:54:50 other=other, 11:54:50 history=history, 11:54:50 ) 11:54:50 11:54:50 if new_retry.is_exhausted(): 11:54:50 reason = error or ResponseError(cause) 11:54:50 > raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type] 11:54:50 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 11:54:50 E urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=8191): Max retries exceeded with url: /rests/data/transportpce-portmapping:network/nodes=XPDRA01/mapping=XPDR1-CLIENT3 (Caused by NewConnectionError("HTTPConnection(host='localhost', port=8191): Failed to establish a new connection: [Errno 111] Connection refused")) 11:54:50 11:54:50 ../.tox/tests121/lib/python3.11/site-packages/urllib3/util/retry.py:535: MaxRetryError 11:54:50 11:54:50 During handling of the above exception, another exception occurred: 11:54:50 11:54:50 self = 11:54:50 11:54:50 def test_14_xpdr_portmapping_CLIENT3(self): 11:54:50 > response = 
test_utils.get_portmapping_node_attr("XPDRA01", "mapping", "XPDR1-CLIENT3") 11:54:50 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 11:54:50 11:54:50 transportpce_tests/1.2.1/test01_portmapping.py:170: 11:54:50 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 11:54:50 transportpce_tests/common/test_utils.py:519: in get_portmapping_node_attr 11:54:50 response = get_request(target_url) 11:54:50 ^^^^^^^^^^^^^^^^^^^^^^^ 11:54:50 transportpce_tests/common/test_utils.py:117: in get_request 11:54:50 return requests.request( 11:54:50 ../.tox/tests121/lib/python3.11/site-packages/requests/api.py:59: in request 11:54:50 return session.request(method=method, url=url, **kwargs) 11:54:50 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 11:54:50 ../.tox/tests121/lib/python3.11/site-packages/requests/sessions.py:589: in request 11:54:50 resp = self.send(prep, **send_kwargs) 11:54:50 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 11:54:50 ../.tox/tests121/lib/python3.11/site-packages/requests/sessions.py:703: in send 11:54:50 r = adapter.send(request, **kwargs) 11:54:50 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 11:54:50 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 11:54:50 11:54:50 self = 11:54:50 request = , stream = False 11:54:50 timeout = Timeout(connect=30, read=30, total=None), verify = True, cert = None 11:54:50 proxies = OrderedDict() 11:54:50 11:54:50 def send( 11:54:50 self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 11:54:50 ): 11:54:50 """Sends PreparedRequest object. Returns Response object. 11:54:50 11:54:50 :param request: The :class:`PreparedRequest ` being sent. 11:54:50 :param stream: (optional) Whether to stream the request content. 11:54:50 :param timeout: (optional) How long to wait for the server to send 11:54:50 data before giving up, as a float, or a :ref:`(connect timeout, 11:54:50 read timeout) ` tuple. 
11:54:50 :type timeout: float or tuple or urllib3 Timeout object 11:54:50 :param verify: (optional) Either a boolean, in which case it controls whether 11:54:50 we verify the server's TLS certificate, or a string, in which case it 11:54:50 must be a path to a CA bundle to use 11:54:50 :param cert: (optional) Any user-provided SSL certificate to be trusted. 11:54:50 :param proxies: (optional) The proxies dictionary to apply to the request. 11:54:50 :rtype: requests.Response 11:54:50 """ 11:54:50 11:54:50 try: 11:54:50 conn = self.get_connection_with_tls_context( 11:54:50 request, verify, proxies=proxies, cert=cert 11:54:50 ) 11:54:50 except LocationValueError as e: 11:54:50 raise InvalidURL(e, request=request) 11:54:50 11:54:50 self.cert_verify(conn, request.url, verify, cert) 11:54:50 url = self.request_url(request, proxies) 11:54:50 self.add_headers( 11:54:50 request, 11:54:50 stream=stream, 11:54:50 timeout=timeout, 11:54:50 verify=verify, 11:54:50 cert=cert, 11:54:50 proxies=proxies, 11:54:50 ) 11:54:50 11:54:50 chunked = not (request.body is None or "Content-Length" in request.headers) 11:54:50 11:54:50 if isinstance(timeout, tuple): 11:54:50 try: 11:54:50 connect, read = timeout 11:54:50 timeout = TimeoutSauce(connect=connect, read=read) 11:54:50 except ValueError: 11:54:50 raise ValueError( 11:54:50 f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 11:54:50 f"or a single float to set both timeouts to the same value." 
11:54:50 ) 11:54:50 elif isinstance(timeout, TimeoutSauce): 11:54:50 pass 11:54:50 else: 11:54:50 timeout = TimeoutSauce(connect=timeout, read=timeout) 11:54:50 11:54:50 try: 11:54:50 resp = conn.urlopen( 11:54:50 method=request.method, 11:54:50 url=url, 11:54:50 body=request.body, 11:54:50 headers=request.headers, 11:54:50 redirect=False, 11:54:50 assert_same_host=False, 11:54:50 preload_content=False, 11:54:50 decode_content=False, 11:54:50 retries=self.max_retries, 11:54:50 timeout=timeout, 11:54:50 chunked=chunked, 11:54:50 ) 11:54:50 11:54:50 except (ProtocolError, OSError) as err: 11:54:50 raise ConnectionError(err, request=request) 11:54:50 11:54:50 except MaxRetryError as e: 11:54:50 if isinstance(e.reason, ConnectTimeoutError): 11:54:50 # TODO: Remove this in 3.0.0: see #2811 11:54:50 if not isinstance(e.reason, NewConnectionError): 11:54:50 raise ConnectTimeout(e, request=request) 11:54:50 11:54:50 if isinstance(e.reason, ResponseError): 11:54:50 raise RetryError(e, request=request) 11:54:50 11:54:50 if isinstance(e.reason, _ProxyError): 11:54:50 raise ProxyError(e, request=request) 11:54:50 11:54:50 if isinstance(e.reason, _SSLError): 11:54:50 # This branch is for urllib3 v1.22 and later. 
11:54:50 raise SSLError(e, request=request) 11:54:50 11:54:50 > raise ConnectionError(e, request=request) 11:54:50 E requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=8191): Max retries exceeded with url: /rests/data/transportpce-portmapping:network/nodes=XPDRA01/mapping=XPDR1-CLIENT3 (Caused by NewConnectionError("HTTPConnection(host='localhost', port=8191): Failed to establish a new connection: [Errno 111] Connection refused")) 11:54:50 11:54:50 ../.tox/tests121/lib/python3.11/site-packages/requests/adapters.py:677: ConnectionError 11:54:50 ----------------------------- Captured stdout call ----------------------------- 11:54:50 execution of test_14_xpdr_portmapping_CLIENT3 11:54:50 _________ TestTransportPCEPortmapping.test_15_xpdr_portmapping_CLIENT4 _________ 11:54:50 11:54:50 self = 11:54:50 11:54:50 def _new_conn(self) -> socket.socket: 11:54:50 """Establish a socket connection and set nodelay settings on it. 11:54:50 11:54:50 :return: New socket connection. 
11:54:50 """ 11:54:50 try: 11:54:50 > sock = connection.create_connection( 11:54:50 (self._dns_host, self.port), 11:54:50 self.timeout, 11:54:50 source_address=self.source_address, 11:54:50 socket_options=self.socket_options, 11:54:50 ) 11:54:50 11:54:50 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:204: 11:54:50 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 11:54:50 ../.tox/tests121/lib/python3.11/site-packages/urllib3/util/connection.py:85: in create_connection 11:54:50 raise err 11:54:50 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 11:54:50 11:54:50 address = ('localhost', 8191), timeout = 30, source_address = None 11:54:50 socket_options = [(6, 1, 1)] 11:54:50 11:54:50 def create_connection( 11:54:50 address: tuple[str, int], 11:54:50 timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 11:54:50 source_address: tuple[str, int] | None = None, 11:54:50 socket_options: _TYPE_SOCKET_OPTIONS | None = None, 11:54:50 ) -> socket.socket: 11:54:50 """Connect to *address* and return the socket object. 11:54:50 11:54:50 Convenience function. Connect to *address* (a 2-tuple ``(host, 11:54:50 port)``) and return the socket object. Passing the optional 11:54:50 *timeout* parameter will set the timeout on the socket instance 11:54:50 before attempting to connect. If no *timeout* is supplied, the 11:54:50 global default timeout setting returned by :func:`socket.getdefaulttimeout` 11:54:50 is used. If *source_address* is set it must be a tuple of (host, port) 11:54:50 for the socket to bind as a source address before making the connection. 11:54:50 An host of '' or port 0 tells the OS to use the default. 
11:54:50         """
11:54:50
11:54:50         host, port = address
11:54:50         if host.startswith("["):
11:54:50             host = host.strip("[]")
11:54:50         err = None
11:54:50
11:54:50         # Using the value from allowed_gai_family() in the context of getaddrinfo lets
11:54:50         # us select whether to work with IPv4 DNS records, IPv6 records, or both.
11:54:50         # The original create_connection function always returns all records.
11:54:50         family = allowed_gai_family()
11:54:50
11:54:50         try:
11:54:50             host.encode("idna")
11:54:50         except UnicodeError:
11:54:50             raise LocationParseError(f"'{host}', label empty or too long") from None
11:54:50
11:54:50         for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM):
11:54:50             af, socktype, proto, canonname, sa = res
11:54:50             sock = None
11:54:50             try:
11:54:50                 sock = socket.socket(af, socktype, proto)
11:54:50
11:54:50                 # If provided, set socket level options before connecting.
11:54:50                 _set_socket_options(sock, socket_options)
11:54:50
11:54:50                 if timeout is not _DEFAULT_TIMEOUT:
11:54:50                     sock.settimeout(timeout)
11:54:50                 if source_address:
11:54:50                     sock.bind(source_address)
11:54:50 >               sock.connect(sa)
11:54:50 E               ConnectionRefusedError: [Errno 111] Connection refused
11:54:50
11:54:50 ../.tox/tests121/lib/python3.11/site-packages/urllib3/util/connection.py:73: ConnectionRefusedError
11:54:50
11:54:50 The above exception was the direct cause of the following exception:
11:54:50
11:54:50 self =
11:54:50 method = 'GET'
11:54:50 url = '/rests/data/transportpce-portmapping:network/nodes=XPDRA01/mapping=XPDR1-CLIENT4'
11:54:50 body = None
11:54:50 headers = {'User-Agent': 'python-requests/2.32.5', 'Accept-Encoding': 'gzip, deflate', 'Accept': 'application/json', 'Connection': 'keep-alive', 'Content-Type': 'application/json', 'Authorization': 'Basic YWRtaW46YWRtaW4='}
11:54:50 retries = Retry(total=0, connect=None, read=False, redirect=None, status=None)
11:54:50 redirect = False, assert_same_host = False
11:54:50 timeout = Timeout(connect=30, read=30, total=None), pool_timeout = None
11:54:50 release_conn = False, chunked = False, body_pos = None, preload_content = False
11:54:50 decode_content = False, response_kw = {}
11:54:50 parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/rests/data/transportpce-portmapping:network/nodes=XPDRA01/mapping=XPDR1-CLIENT4', query=None, fragment=None)
11:54:50 destination_scheme = None, conn = None, release_this_conn = True
11:54:50 http_tunnel_required = False, err = None, clean_exit = False
11:54:50
11:54:50     def urlopen(  # type: ignore[override]
11:54:50         self,
11:54:50         method: str,
11:54:50         url: str,
11:54:50         body: _TYPE_BODY | None = None,
11:54:50         headers: typing.Mapping[str, str] | None = None,
11:54:50         retries: Retry | bool | int | None = None,
11:54:50         redirect: bool = True,
11:54:50         assert_same_host: bool = True,
11:54:50         timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT,
11:54:50         pool_timeout: int | None = None,
11:54:50         release_conn: bool | None = None,
11:54:50         chunked: bool = False,
11:54:50         body_pos: _TYPE_BODY_POSITION | None = None,
11:54:50         preload_content: bool = True,
11:54:50         decode_content: bool = True,
11:54:50         **response_kw: typing.Any,
11:54:50     ) -> BaseHTTPResponse:
11:54:50         """
11:54:50         Get a connection from the pool and perform an HTTP request. This is the
11:54:50         lowest level call for making a request, so you'll need to specify all
11:54:50         the raw details.
11:54:50
11:54:50         .. note::
11:54:50
11:54:50            More commonly, it's appropriate to use a convenience method
11:54:50            such as :meth:`request`.
11:54:50
11:54:50         .. note::
11:54:50
11:54:50            `release_conn` will only behave as expected if
11:54:50            `preload_content=False` because we want to make
11:54:50            `preload_content=False` the default behaviour someday soon without
11:54:50            breaking backwards compatibility.
11:54:50
11:54:50         :param method:
11:54:50             HTTP request method (such as GET, POST, PUT, etc.)
11:54:50
11:54:50         :param url:
11:54:50             The URL to perform the request on.
11:54:50
11:54:50         :param body:
11:54:50             Data to send in the request body, either :class:`str`, :class:`bytes`,
11:54:50             an iterable of :class:`str`/:class:`bytes`, or a file-like object.
11:54:50
11:54:50         :param headers:
11:54:50             Dictionary of custom headers to send, such as User-Agent,
11:54:50             If-None-Match, etc. If None, pool headers are used. If provided,
11:54:50             these headers completely replace any pool-specific headers.
11:54:50
11:54:50         :param retries:
11:54:50             Configure the number of retries to allow before raising a
11:54:50             :class:`~urllib3.exceptions.MaxRetryError` exception.
11:54:50
11:54:50             If ``None`` (default) will retry 3 times, see ``Retry.DEFAULT``. Pass a
11:54:50             :class:`~urllib3.util.retry.Retry` object for fine-grained control
11:54:50             over different types of retries.
11:54:50             Pass an integer number to retry connection errors that many times,
11:54:50             but no other types of errors. Pass zero to never retry.
11:54:50
11:54:50             If ``False``, then retries are disabled and any exception is raised
11:54:50             immediately. Also, instead of raising a MaxRetryError on redirects,
11:54:50             the redirect response will be returned.
11:54:50
11:54:50         :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int.
11:54:50
11:54:50         :param redirect:
11:54:50             If True, automatically handle redirects (status codes 301, 302,
11:54:50             303, 307, 308). Each redirect counts as a retry. Disabling retries
11:54:50             will disable redirect, too.
11:54:50
11:54:50         :param assert_same_host:
11:54:50             If ``True``, will make sure that the host of the pool requests is
11:54:50             consistent else will raise HostChangedError. When ``False``, you can
11:54:50             use the pool on an HTTP proxy and request foreign hosts.
11:54:50
11:54:50         :param timeout:
11:54:50             If specified, overrides the default timeout for this one
11:54:50             request. It may be a float (in seconds) or an instance of
11:54:50             :class:`urllib3.util.Timeout`.
11:54:50
11:54:50         :param pool_timeout:
11:54:50             If set and the pool is set to block=True, then this method will
11:54:50             block for ``pool_timeout`` seconds and raise EmptyPoolError if no
11:54:50             connection is available within the time period.
11:54:50
11:54:50         :param bool preload_content:
11:54:50             If True, the response's body will be preloaded into memory.
11:54:50
11:54:50         :param bool decode_content:
11:54:50             If True, will attempt to decode the body based on the
11:54:50             'content-encoding' header.
11:54:50
11:54:50         :param release_conn:
11:54:50             If False, then the urlopen call will not release the connection
11:54:50             back into the pool once a response is received (but will release if
11:54:50             you read the entire contents of the response such as when
11:54:50             `preload_content=True`). This is useful if you're not preloading
11:54:50             the response's content immediately. You will need to call
11:54:50             ``r.release_conn()`` on the response ``r`` to return the connection
11:54:50             back into the pool. If None, it takes the value of ``preload_content``
11:54:50             which defaults to ``True``.
11:54:50
11:54:50         :param bool chunked:
11:54:50             If True, urllib3 will send the body using chunked transfer
11:54:50             encoding. Otherwise, urllib3 will send the body using the standard
11:54:50             content-length form. Defaults to False.
11:54:50
11:54:50         :param int body_pos:
11:54:50             Position to seek to in file-like body in the event of a retry or
11:54:50             redirect. Typically this won't need to be set because urllib3 will
11:54:50             auto-populate the value when needed.
11:54:50         """
11:54:50         parsed_url = parse_url(url)
11:54:50         destination_scheme = parsed_url.scheme
11:54:50
11:54:50         if headers is None:
11:54:50             headers = self.headers
11:54:50
11:54:50         if not isinstance(retries, Retry):
11:54:50             retries = Retry.from_int(retries, redirect=redirect, default=self.retries)
11:54:50
11:54:50         if release_conn is None:
11:54:50             release_conn = preload_content
11:54:50
11:54:50         # Check host
11:54:50         if assert_same_host and not self.is_same_host(url):
11:54:50             raise HostChangedError(self, url, retries)
11:54:50
11:54:50         # Ensure that the URL we're connecting to is properly encoded
11:54:50         if url.startswith("/"):
11:54:50             url = to_str(_encode_target(url))
11:54:50         else:
11:54:50             url = to_str(parsed_url.url)
11:54:50
11:54:50         conn = None
11:54:50
11:54:50         # Track whether `conn` needs to be released before
11:54:50         # returning/raising/recursing. Update this variable if necessary, and
11:54:50         # leave `release_conn` constant throughout the function. That way, if
11:54:50         # the function recurses, the original value of `release_conn` will be
11:54:50         # passed down into the recursive call, and its value will be respected.
11:54:50         #
11:54:50         # See issue #651 [1] for details.
11:54:50         #
11:54:50         # [1]
11:54:50         release_this_conn = release_conn
11:54:50
11:54:50         http_tunnel_required = connection_requires_http_tunnel(
11:54:50             self.proxy, self.proxy_config, destination_scheme
11:54:50         )
11:54:50
11:54:50         # Merge the proxy headers. Only done when not using HTTP CONNECT. We
11:54:50         # have to copy the headers dict so we can safely change it without those
11:54:50         # changes being reflected in anyone else's copy.
11:54:50         if not http_tunnel_required:
11:54:50             headers = headers.copy()  # type: ignore[attr-defined]
11:54:50             headers.update(self.proxy_headers)  # type: ignore[union-attr]
11:54:50
11:54:50         # Must keep the exception bound to a separate variable or else Python 3
11:54:50         # complains about UnboundLocalError.
11:54:50         err = None
11:54:50
11:54:50         # Keep track of whether we cleanly exited the except block. This
11:54:50         # ensures we do proper cleanup in finally.
11:54:50         clean_exit = False
11:54:50
11:54:50         # Rewind body position, if needed. Record current position
11:54:50         # for future rewinds in the event of a redirect/retry.
11:54:50         body_pos = set_file_position(body, body_pos)
11:54:50
11:54:50         try:
11:54:50             # Request a connection from the queue.
11:54:50             timeout_obj = self._get_timeout(timeout)
11:54:50             conn = self._get_conn(timeout=pool_timeout)
11:54:50
11:54:50             conn.timeout = timeout_obj.connect_timeout  # type: ignore[assignment]
11:54:50
11:54:50             # Is this a closed/new connection that requires CONNECT tunnelling?
11:54:50             if self.proxy is not None and http_tunnel_required and conn.is_closed:
11:54:50                 try:
11:54:50                     self._prepare_proxy(conn)
11:54:50                 except (BaseSSLError, OSError, SocketTimeout) as e:
11:54:50                     self._raise_timeout(
11:54:50                         err=e, url=self.proxy.url, timeout_value=conn.timeout
11:54:50                     )
11:54:50                     raise
11:54:50
11:54:50             # If we're going to release the connection in ``finally:``, then
11:54:50             # the response doesn't need to know about the connection. Otherwise
11:54:50             # it will also try to release it and we'll have a double-release
11:54:50             # mess.
11:54:50             response_conn = conn if not release_conn else None
11:54:50
11:54:50             # Make the request on the HTTPConnection object
11:54:50 >           response = self._make_request(
11:54:50                 conn,
11:54:50                 method,
11:54:50                 url,
11:54:50                 timeout=timeout_obj,
11:54:50                 body=body,
11:54:50                 headers=headers,
11:54:50                 chunked=chunked,
11:54:50                 retries=retries,
11:54:50                 response_conn=response_conn,
11:54:50                 preload_content=preload_content,
11:54:50                 decode_content=decode_content,
11:54:50                 **response_kw,
11:54:50             )
11:54:50
11:54:50 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connectionpool.py:787:
11:54:50 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
11:54:50 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connectionpool.py:493: in _make_request
11:54:50     conn.request(
11:54:50 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:500: in request
11:54:50     self.endheaders()
11:54:50 /opt/pyenv/versions/3.11.10/lib/python3.11/http/client.py:1298: in endheaders
11:54:50     self._send_output(message_body, encode_chunked=encode_chunked)
11:54:50 /opt/pyenv/versions/3.11.10/lib/python3.11/http/client.py:1058: in _send_output
11:54:50     self.send(msg)
11:54:50 /opt/pyenv/versions/3.11.10/lib/python3.11/http/client.py:996: in send
11:54:50     self.connect()
11:54:50 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:331: in connect
11:54:50     self.sock = self._new_conn()
11:54:50                 ^^^^^^^^^^^^^^^^
11:54:50 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
11:54:50
11:54:50 self =
11:54:50
11:54:50     def _new_conn(self) -> socket.socket:
11:54:50         """Establish a socket connection and set nodelay settings on it.
11:54:50
11:54:50         :return: New socket connection.
11:54:50         """
11:54:50         try:
11:54:50             sock = connection.create_connection(
11:54:50                 (self._dns_host, self.port),
11:54:50                 self.timeout,
11:54:50                 source_address=self.source_address,
11:54:50                 socket_options=self.socket_options,
11:54:50             )
11:54:50         except socket.gaierror as e:
11:54:50             raise NameResolutionError(self.host, self, e) from e
11:54:50         except SocketTimeout as e:
11:54:50             raise ConnectTimeoutError(
11:54:50                 self,
11:54:50                 f"Connection to {self.host} timed out. (connect timeout={self.timeout})",
11:54:50             ) from e
11:54:50
11:54:50         except OSError as e:
11:54:50 >           raise NewConnectionError(
11:54:50                 self, f"Failed to establish a new connection: {e}"
11:54:50             ) from e
11:54:50 E           urllib3.exceptions.NewConnectionError: HTTPConnection(host='localhost', port=8191): Failed to establish a new connection: [Errno 111] Connection refused
11:54:50
11:54:50 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:219: NewConnectionError
11:54:50
11:54:50 The above exception was the direct cause of the following exception:
11:54:50
11:54:50 self =
11:54:50 request = , stream = False
11:54:50 timeout = Timeout(connect=30, read=30, total=None), verify = True, cert = None
11:54:50 proxies = OrderedDict()
11:54:50
11:54:50     def send(
11:54:50         self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None
11:54:50     ):
11:54:50         """Sends PreparedRequest object. Returns Response object.
11:54:50
11:54:50         :param request: The :class:`PreparedRequest ` being sent.
11:54:50         :param stream: (optional) Whether to stream the request content.
11:54:50         :param timeout: (optional) How long to wait for the server to send
11:54:50             data before giving up, as a float, or a :ref:`(connect timeout,
11:54:50             read timeout) ` tuple.
11:54:50         :type timeout: float or tuple or urllib3 Timeout object
11:54:50         :param verify: (optional) Either a boolean, in which case it controls whether
11:54:50             we verify the server's TLS certificate, or a string, in which case it
11:54:50             must be a path to a CA bundle to use
11:54:50         :param cert: (optional) Any user-provided SSL certificate to be trusted.
11:54:50         :param proxies: (optional) The proxies dictionary to apply to the request.
11:54:50         :rtype: requests.Response
11:54:50         """
11:54:50
11:54:50         try:
11:54:50             conn = self.get_connection_with_tls_context(
11:54:50                 request, verify, proxies=proxies, cert=cert
11:54:50             )
11:54:50         except LocationValueError as e:
11:54:50             raise InvalidURL(e, request=request)
11:54:50
11:54:50         self.cert_verify(conn, request.url, verify, cert)
11:54:50         url = self.request_url(request, proxies)
11:54:50         self.add_headers(
11:54:50             request,
11:54:50             stream=stream,
11:54:50             timeout=timeout,
11:54:50             verify=verify,
11:54:50             cert=cert,
11:54:50             proxies=proxies,
11:54:50         )
11:54:50
11:54:50         chunked = not (request.body is None or "Content-Length" in request.headers)
11:54:50
11:54:50         if isinstance(timeout, tuple):
11:54:50             try:
11:54:50                 connect, read = timeout
11:54:50                 timeout = TimeoutSauce(connect=connect, read=read)
11:54:50             except ValueError:
11:54:50                 raise ValueError(
11:54:50                     f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, "
11:54:50                     f"or a single float to set both timeouts to the same value."
11:54:50                 )
11:54:50         elif isinstance(timeout, TimeoutSauce):
11:54:50             pass
11:54:50         else:
11:54:50             timeout = TimeoutSauce(connect=timeout, read=timeout)
11:54:50
11:54:50         try:
11:54:50 >           resp = conn.urlopen(
11:54:50                 method=request.method,
11:54:50                 url=url,
11:54:50                 body=request.body,
11:54:50                 headers=request.headers,
11:54:50                 redirect=False,
11:54:50                 assert_same_host=False,
11:54:50                 preload_content=False,
11:54:50                 decode_content=False,
11:54:50                 retries=self.max_retries,
11:54:50                 timeout=timeout,
11:54:50                 chunked=chunked,
11:54:50             )
11:54:50
11:54:50 ../.tox/tests121/lib/python3.11/site-packages/requests/adapters.py:644:
11:54:50 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
11:54:50 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connectionpool.py:841: in urlopen
11:54:50     retries = retries.increment(
11:54:50 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
11:54:50
11:54:50 self = Retry(total=0, connect=None, read=False, redirect=None, status=None)
11:54:50 method = 'GET'
11:54:50 url = '/rests/data/transportpce-portmapping:network/nodes=XPDRA01/mapping=XPDR1-CLIENT4'
11:54:50 response = None
11:54:50 error = NewConnectionError("HTTPConnection(host='localhost', port=8191): Failed to establish a new connection: [Errno 111] Connection refused")
11:54:50 _pool =
11:54:50 _stacktrace =
11:54:50
11:54:50     def increment(
11:54:50         self,
11:54:50         method: str | None = None,
11:54:50         url: str | None = None,
11:54:50         response: BaseHTTPResponse | None = None,
11:54:50         error: Exception | None = None,
11:54:50         _pool: ConnectionPool | None = None,
11:54:50         _stacktrace: TracebackType | None = None,
11:54:50     ) -> Self:
11:54:50         """Return a new Retry object with incremented retry counters.
11:54:50
11:54:50         :param response: A response object, or None, if the server did not
11:54:50             return a response.
11:54:50         :type response: :class:`~urllib3.response.BaseHTTPResponse`
11:54:50         :param Exception error: An error encountered during the request, or
11:54:50             None if the response was received successfully.
11:54:50
11:54:50         :return: A new ``Retry`` object.
11:54:50         """
11:54:50         if self.total is False and error:
11:54:50             # Disabled, indicate to re-raise the error.
11:54:50             raise reraise(type(error), error, _stacktrace)
11:54:50
11:54:50         total = self.total
11:54:50         if total is not None:
11:54:50             total -= 1
11:54:50
11:54:50         connect = self.connect
11:54:50         read = self.read
11:54:50         redirect = self.redirect
11:54:50         status_count = self.status
11:54:50         other = self.other
11:54:50         cause = "unknown"
11:54:50         status = None
11:54:50         redirect_location = None
11:54:50
11:54:50         if error and self._is_connection_error(error):
11:54:50             # Connect retry?
11:54:50             if connect is False:
11:54:50                 raise reraise(type(error), error, _stacktrace)
11:54:50             elif connect is not None:
11:54:50                 connect -= 1
11:54:50
11:54:50         elif error and self._is_read_error(error):
11:54:50             # Read retry?
11:54:50             if read is False or method is None or not self._is_method_retryable(method):
11:54:50                 raise reraise(type(error), error, _stacktrace)
11:54:50             elif read is not None:
11:54:50                 read -= 1
11:54:50
11:54:50         elif error:
11:54:50             # Other retry?
11:54:50             if other is not None:
11:54:50                 other -= 1
11:54:50
11:54:50         elif response and response.get_redirect_location():
11:54:50             # Redirect retry?
11:54:50             if redirect is not None:
11:54:50                 redirect -= 1
11:54:50             cause = "too many redirects"
11:54:50             response_redirect_location = response.get_redirect_location()
11:54:50             if response_redirect_location:
11:54:50                 redirect_location = response_redirect_location
11:54:50             status = response.status
11:54:50
11:54:50         else:
11:54:50             # Incrementing because of a server error like a 500 in
11:54:50             # status_forcelist and the given method is in the allowed_methods
11:54:50             cause = ResponseError.GENERIC_ERROR
11:54:50             if response and response.status:
11:54:50                 if status_count is not None:
11:54:50                     status_count -= 1
11:54:50                 cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status)
11:54:50                 status = response.status
11:54:50
11:54:50         history = self.history + (
11:54:50             RequestHistory(method, url, error, status, redirect_location),
11:54:50         )
11:54:50
11:54:50         new_retry = self.new(
11:54:50             total=total,
11:54:50             connect=connect,
11:54:50             read=read,
11:54:50             redirect=redirect,
11:54:50             status=status_count,
11:54:50             other=other,
11:54:50             history=history,
11:54:50         )
11:54:50
11:54:50         if new_retry.is_exhausted():
11:54:50             reason = error or ResponseError(cause)
11:54:50 >           raise MaxRetryError(_pool, url, reason) from reason  # type: ignore[arg-type]
11:54:50             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
11:54:50 E           urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=8191): Max retries exceeded with url: /rests/data/transportpce-portmapping:network/nodes=XPDRA01/mapping=XPDR1-CLIENT4 (Caused by NewConnectionError("HTTPConnection(host='localhost', port=8191): Failed to establish a new connection: [Errno 111] Connection refused"))
11:54:50
11:54:50 ../.tox/tests121/lib/python3.11/site-packages/urllib3/util/retry.py:535: MaxRetryError
11:54:50
11:54:50 During handling of the above exception, another exception occurred:
11:54:50
11:54:50 self =
11:54:50
11:54:50     def test_15_xpdr_portmapping_CLIENT4(self):
11:54:50 >       response = test_utils.get_portmapping_node_attr("XPDRA01", "mapping", "XPDR1-CLIENT4")
11:54:50                    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
11:54:50
11:54:50 transportpce_tests/1.2.1/test01_portmapping.py:182:
11:54:50 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
11:54:50 transportpce_tests/common/test_utils.py:519: in get_portmapping_node_attr
11:54:50     response = get_request(target_url)
11:54:50                ^^^^^^^^^^^^^^^^^^^^^^^
11:54:50 transportpce_tests/common/test_utils.py:117: in get_request
11:54:50     return requests.request(
11:54:50 ../.tox/tests121/lib/python3.11/site-packages/requests/api.py:59: in request
11:54:50     return session.request(method=method, url=url, **kwargs)
11:54:50            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
11:54:50 ../.tox/tests121/lib/python3.11/site-packages/requests/sessions.py:589: in request
11:54:50     resp = self.send(prep, **send_kwargs)
11:54:50            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
11:54:50 ../.tox/tests121/lib/python3.11/site-packages/requests/sessions.py:703: in send
11:54:50     r = adapter.send(request, **kwargs)
11:54:50         ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
11:54:50 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
11:54:50
11:54:50 self =
11:54:50 request = , stream = False
11:54:50 timeout = Timeout(connect=30, read=30, total=None), verify = True, cert = None
11:54:50 proxies = OrderedDict()
11:54:50
11:54:50     def send(
11:54:50         self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None
11:54:50     ):
11:54:50         """Sends PreparedRequest object. Returns Response object.
11:54:50
11:54:50         :param request: The :class:`PreparedRequest ` being sent.
11:54:50         :param stream: (optional) Whether to stream the request content.
11:54:50         :param timeout: (optional) How long to wait for the server to send
11:54:50             data before giving up, as a float, or a :ref:`(connect timeout,
11:54:50             read timeout) ` tuple.
11:54:50         :type timeout: float or tuple or urllib3 Timeout object
11:54:50         :param verify: (optional) Either a boolean, in which case it controls whether
11:54:50             we verify the server's TLS certificate, or a string, in which case it
11:54:50             must be a path to a CA bundle to use
11:54:50         :param cert: (optional) Any user-provided SSL certificate to be trusted.
11:54:50         :param proxies: (optional) The proxies dictionary to apply to the request.
11:54:50         :rtype: requests.Response
11:54:50         """
11:54:50
11:54:50         try:
11:54:50             conn = self.get_connection_with_tls_context(
11:54:50                 request, verify, proxies=proxies, cert=cert
11:54:50             )
11:54:50         except LocationValueError as e:
11:54:50             raise InvalidURL(e, request=request)
11:54:50
11:54:50         self.cert_verify(conn, request.url, verify, cert)
11:54:50         url = self.request_url(request, proxies)
11:54:50         self.add_headers(
11:54:50             request,
11:54:50             stream=stream,
11:54:50             timeout=timeout,
11:54:50             verify=verify,
11:54:50             cert=cert,
11:54:50             proxies=proxies,
11:54:50         )
11:54:50
11:54:50         chunked = not (request.body is None or "Content-Length" in request.headers)
11:54:50
11:54:50         if isinstance(timeout, tuple):
11:54:50             try:
11:54:50                 connect, read = timeout
11:54:50                 timeout = TimeoutSauce(connect=connect, read=read)
11:54:50             except ValueError:
11:54:50                 raise ValueError(
11:54:50                     f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, "
11:54:50                     f"or a single float to set both timeouts to the same value."
11:54:50                 )
11:54:50         elif isinstance(timeout, TimeoutSauce):
11:54:50             pass
11:54:50         else:
11:54:50             timeout = TimeoutSauce(connect=timeout, read=timeout)
11:54:50
11:54:50         try:
11:54:50             resp = conn.urlopen(
11:54:50                 method=request.method,
11:54:50                 url=url,
11:54:50                 body=request.body,
11:54:50                 headers=request.headers,
11:54:50                 redirect=False,
11:54:50                 assert_same_host=False,
11:54:50                 preload_content=False,
11:54:50                 decode_content=False,
11:54:50                 retries=self.max_retries,
11:54:50                 timeout=timeout,
11:54:50                 chunked=chunked,
11:54:50             )
11:54:50
11:54:50         except (ProtocolError, OSError) as err:
11:54:50             raise ConnectionError(err, request=request)
11:54:50
11:54:50         except MaxRetryError as e:
11:54:50             if isinstance(e.reason, ConnectTimeoutError):
11:54:50                 # TODO: Remove this in 3.0.0: see #2811
11:54:50                 if not isinstance(e.reason, NewConnectionError):
11:54:50                     raise ConnectTimeout(e, request=request)
11:54:50
11:54:50             if isinstance(e.reason, ResponseError):
11:54:50                 raise RetryError(e, request=request)
11:54:50
11:54:50             if isinstance(e.reason, _ProxyError):
11:54:50                 raise ProxyError(e, request=request)
11:54:50
11:54:50             if isinstance(e.reason, _SSLError):
11:54:50                 # This branch is for urllib3 v1.22 and later.
11:54:50                 raise SSLError(e, request=request)
11:54:50
11:54:50 >           raise ConnectionError(e, request=request)
11:54:50 E           requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=8191): Max retries exceeded with url: /rests/data/transportpce-portmapping:network/nodes=XPDRA01/mapping=XPDR1-CLIENT4 (Caused by NewConnectionError("HTTPConnection(host='localhost', port=8191): Failed to establish a new connection: [Errno 111] Connection refused"))
11:54:50
11:54:50 ../.tox/tests121/lib/python3.11/site-packages/requests/adapters.py:677: ConnectionError
11:54:50 ----------------------------- Captured stdout call -----------------------------
11:54:50 execution of test_15_xpdr_portmapping_CLIENT4
11:54:50 ________ TestTransportPCEPortmapping.test_16_xpdr_device_disconnection _________
11:54:50
11:54:50 self =
11:54:50
11:54:50     def _new_conn(self) -> socket.socket:
11:54:50         """Establish a socket connection and set nodelay settings on it.
11:54:50
11:54:50         :return: New socket connection.
11:54:50         """
11:54:50         try:
11:54:50 >           sock = connection.create_connection(
11:54:50                 (self._dns_host, self.port),
11:54:50                 self.timeout,
11:54:50                 source_address=self.source_address,
11:54:50                 socket_options=self.socket_options,
11:54:50             )
11:54:50
11:54:50 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:204:
11:54:50 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
11:54:50 ../.tox/tests121/lib/python3.11/site-packages/urllib3/util/connection.py:85: in create_connection
11:54:50     raise err
11:54:50 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
11:54:50
11:54:50 address = ('localhost', 8191), timeout = 30, source_address = None
11:54:50 socket_options = [(6, 1, 1)]
11:54:50
11:54:50     def create_connection(
11:54:50         address: tuple[str, int],
11:54:50         timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT,
11:54:50         source_address: tuple[str, int] | None = None,
11:54:50         socket_options: _TYPE_SOCKET_OPTIONS | None = None,
11:54:50     ) -> socket.socket:
11:54:50         """Connect to *address* and return the socket object.
11:54:50
11:54:50         Convenience function. Connect to *address* (a 2-tuple ``(host,
11:54:50         port)``) and return the socket object. Passing the optional
11:54:50         *timeout* parameter will set the timeout on the socket instance
11:54:50         before attempting to connect. If no *timeout* is supplied, the
11:54:50         global default timeout setting returned by :func:`socket.getdefaulttimeout`
11:54:50         is used. If *source_address* is set it must be a tuple of (host, port)
11:54:50         for the socket to bind as a source address before making the connection.
11:54:50         An host of '' or port 0 tells the OS to use the default.
11:54:50         """
11:54:50
11:54:50         host, port = address
11:54:50         if host.startswith("["):
11:54:50             host = host.strip("[]")
11:54:50         err = None
11:54:50
11:54:50         # Using the value from allowed_gai_family() in the context of getaddrinfo lets
11:54:50         # us select whether to work with IPv4 DNS records, IPv6 records, or both.
11:54:50         # The original create_connection function always returns all records.
11:54:50         family = allowed_gai_family()
11:54:50
11:54:50         try:
11:54:50             host.encode("idna")
11:54:50         except UnicodeError:
11:54:50             raise LocationParseError(f"'{host}', label empty or too long") from None
11:54:50
11:54:50         for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM):
11:54:50             af, socktype, proto, canonname, sa = res
11:54:50             sock = None
11:54:50             try:
11:54:50                 sock = socket.socket(af, socktype, proto)
11:54:50
11:54:50                 # If provided, set socket level options before connecting.
11:54:50                 _set_socket_options(sock, socket_options)
11:54:50
11:54:50                 if timeout is not _DEFAULT_TIMEOUT:
11:54:50                     sock.settimeout(timeout)
11:54:50                 if source_address:
11:54:50                     sock.bind(source_address)
11:54:50 >               sock.connect(sa)
11:54:50 E               ConnectionRefusedError: [Errno 111] Connection refused
11:54:50
11:54:50 ../.tox/tests121/lib/python3.11/site-packages/urllib3/util/connection.py:73: ConnectionRefusedError
11:54:50
11:54:50 The above exception was the direct cause of the following exception:
11:54:50
11:54:50 self =
11:54:50 method = 'DELETE'
11:54:50 url = '/rests/data/network-topology:network-topology/topology=topology-netconf/node=XPDRA01'
11:54:50 body = None
11:54:50 headers = {'User-Agent': 'python-requests/2.32.5', 'Accept-Encoding': 'gzip, deflate', 'Accept': 'application/json', 'Connection': 'keep-alive', 'Content-Type': 'application/json', 'Content-Length': '0', 'Authorization': 'Basic YWRtaW46YWRtaW4='}
11:54:50 retries = Retry(total=0, connect=None, read=False, redirect=None, status=None)
11:54:50 redirect = False, assert_same_host = False
11:54:50 timeout = Timeout(connect=30, read=30, total=None), pool_timeout = None
11:54:50 release_conn = False, chunked = False, body_pos = None, preload_content = False
11:54:50 decode_content = False, response_kw = {}
11:54:50 parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/rests/data/network-topology:network-topology/topology=topology-netconf/node=XPDRA01', query=None, fragment=None)
11:54:50 destination_scheme = None, conn = None, release_this_conn = True
11:54:50 http_tunnel_required = False, err = None, clean_exit = False
11:54:50
11:54:50     def urlopen(  # type: ignore[override]
11:54:50         self,
11:54:50         method: str,
11:54:50         url: str,
11:54:50         body: _TYPE_BODY | None = None,
11:54:50         headers: typing.Mapping[str, str] | None = None,
11:54:50         retries: Retry | bool | int | None = None,
11:54:50         redirect: bool = True,
11:54:50         assert_same_host: bool = True,
11:54:50         timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT,
11:54:50         pool_timeout: int | None = None,
11:54:50         release_conn: bool | None = None,
11:54:50         chunked: bool = False,
11:54:50         body_pos: _TYPE_BODY_POSITION | None = None,
11:54:50         preload_content: bool = True,
11:54:50         decode_content: bool = True,
11:54:50         **response_kw: typing.Any,
11:54:50     ) -> BaseHTTPResponse:
11:54:50         """
11:54:50         Get a connection from the pool and perform an HTTP request. This is the
11:54:50         lowest level call for making a request, so you'll need to specify all
11:54:50         the raw details.
11:54:50
11:54:50         .. note::
11:54:50
11:54:50            More commonly, it's appropriate to use a convenience method
11:54:50            such as :meth:`request`.
11:54:50
11:54:50         .. note::
11:54:50
11:54:50            `release_conn` will only behave as expected if
11:54:50            `preload_content=False` because we want to make
11:54:50            `preload_content=False` the default behaviour someday soon without
11:54:50            breaking backwards compatibility.
11:54:50
11:54:50         :param method:
11:54:50             HTTP request method (such as GET, POST, PUT, etc.)
11:54:50
11:54:50         :param url:
11:54:50             The URL to perform the request on.
11:54:50
11:54:50         :param body:
11:54:50             Data to send in the request body, either :class:`str`, :class:`bytes`,
11:54:50             an iterable of :class:`str`/:class:`bytes`, or a file-like object.
11:54:50
11:54:50         :param headers:
11:54:50             Dictionary of custom headers to send, such as User-Agent,
11:54:50             If-None-Match, etc. If None, pool headers are used. If provided,
11:54:50             these headers completely replace any pool-specific headers.
11:54:50
11:54:50         :param retries:
11:54:50             Configure the number of retries to allow before raising a
11:54:50             :class:`~urllib3.exceptions.MaxRetryError` exception.
11:54:50
11:54:50             If ``None`` (default) will retry 3 times, see ``Retry.DEFAULT``. Pass a
11:54:50             :class:`~urllib3.util.retry.Retry` object for fine-grained control
11:54:50             over different types of retries.
11:54:50             Pass an integer number to retry connection errors that many times,
11:54:50             but no other types of errors. Pass zero to never retry.
11:54:50
11:54:50             If ``False``, then retries are disabled and any exception is raised
11:54:50             immediately. Also, instead of raising a MaxRetryError on redirects,
11:54:50             the redirect response will be returned.
11:54:50
11:54:50         :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int.
11:54:50
11:54:50         :param redirect:
11:54:50             If True, automatically handle redirects (status codes 301, 302,
11:54:50             303, 307, 308). Each redirect counts as a retry. Disabling retries
11:54:50             will disable redirect, too.
11:54:50
11:54:50         :param assert_same_host:
11:54:50             If ``True``, will make sure that the host of the pool requests is
11:54:50             consistent else will raise HostChangedError. When ``False``, you can
11:54:50             use the pool on an HTTP proxy and request foreign hosts.
11:54:50
11:54:50         :param timeout:
11:54:50             If specified, overrides the default timeout for this one
11:54:50             request. It may be a float (in seconds) or an instance of
11:54:50             :class:`urllib3.util.Timeout`.
11:54:50
11:54:50         :param pool_timeout:
11:54:50             If set and the pool is set to block=True, then this method will
11:54:50             block for ``pool_timeout`` seconds and raise EmptyPoolError if no
11:54:50             connection is available within the time period.
11:54:50
11:54:50         :param bool preload_content:
11:54:50             If True, the response's body will be preloaded into memory.
11:54:50
11:54:50         :param bool decode_content:
11:54:50             If True, will attempt to decode the body based on the
11:54:50             'content-encoding' header.
11:54:50
11:54:50         :param release_conn:
11:54:50             If False, then the urlopen call will not release the connection
11:54:50             back into the pool once a response is received (but will release if
11:54:50             you read the entire contents of the response such as when
11:54:50             `preload_content=True`). This is useful if you're not preloading
11:54:50             the response's content immediately. You will need to call
11:54:50             ``r.release_conn()`` on the response ``r`` to return the connection
11:54:50             back into the pool. If None, it takes the value of ``preload_content``
11:54:50             which defaults to ``True``.
11:54:50
11:54:50         :param bool chunked:
11:54:50             If True, urllib3 will send the body using chunked transfer
11:54:50             encoding. Otherwise, urllib3 will send the body using the standard
11:54:50             content-length form. Defaults to False.
11:54:50
11:54:50         :param int body_pos:
11:54:50             Position to seek to in file-like body in the event of a retry or
11:54:50             redirect. Typically this won't need to be set because urllib3 will
11:54:50             auto-populate the value when needed.
11:54:50 """ 11:54:50 parsed_url = parse_url(url) 11:54:50 destination_scheme = parsed_url.scheme 11:54:50 11:54:50 if headers is None: 11:54:50 headers = self.headers 11:54:50 11:54:50 if not isinstance(retries, Retry): 11:54:50 retries = Retry.from_int(retries, redirect=redirect, default=self.retries) 11:54:50 11:54:50 if release_conn is None: 11:54:50 release_conn = preload_content 11:54:50 11:54:50 # Check host 11:54:50 if assert_same_host and not self.is_same_host(url): 11:54:50 raise HostChangedError(self, url, retries) 11:54:50 11:54:50 # Ensure that the URL we're connecting to is properly encoded 11:54:50 if url.startswith("/"): 11:54:50 url = to_str(_encode_target(url)) 11:54:50 else: 11:54:50 url = to_str(parsed_url.url) 11:54:50 11:54:50 conn = None 11:54:50 11:54:50 # Track whether `conn` needs to be released before 11:54:50 # returning/raising/recursing. Update this variable if necessary, and 11:54:50 # leave `release_conn` constant throughout the function. That way, if 11:54:50 # the function recurses, the original value of `release_conn` will be 11:54:50 # passed down into the recursive call, and its value will be respected. 11:54:50 # 11:54:50 # See issue #651 [1] for details. 11:54:50 # 11:54:50 # [1] 11:54:50 release_this_conn = release_conn 11:54:50 11:54:50 http_tunnel_required = connection_requires_http_tunnel( 11:54:50 self.proxy, self.proxy_config, destination_scheme 11:54:50 ) 11:54:50 11:54:50 # Merge the proxy headers. Only done when not using HTTP CONNECT. We 11:54:50 # have to copy the headers dict so we can safely change it without those 11:54:50 # changes being reflected in anyone else's copy. 11:54:50 if not http_tunnel_required: 11:54:50 headers = headers.copy() # type: ignore[attr-defined] 11:54:50 headers.update(self.proxy_headers) # type: ignore[union-attr] 11:54:50 11:54:50 # Must keep the exception bound to a separate variable or else Python 3 11:54:50 # complains about UnboundLocalError. 
11:54:50 err = None 11:54:50 11:54:50 # Keep track of whether we cleanly exited the except block. This 11:54:50 # ensures we do proper cleanup in finally. 11:54:50 clean_exit = False 11:54:50 11:54:50 # Rewind body position, if needed. Record current position 11:54:50 # for future rewinds in the event of a redirect/retry. 11:54:50 body_pos = set_file_position(body, body_pos) 11:54:50 11:54:50 try: 11:54:50 # Request a connection from the queue. 11:54:50 timeout_obj = self._get_timeout(timeout) 11:54:50 conn = self._get_conn(timeout=pool_timeout) 11:54:50 11:54:50 conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] 11:54:50 11:54:50 # Is this a closed/new connection that requires CONNECT tunnelling? 11:54:50 if self.proxy is not None and http_tunnel_required and conn.is_closed: 11:54:50 try: 11:54:50 self._prepare_proxy(conn) 11:54:50 except (BaseSSLError, OSError, SocketTimeout) as e: 11:54:50 self._raise_timeout( 11:54:50 err=e, url=self.proxy.url, timeout_value=conn.timeout 11:54:50 ) 11:54:50 raise 11:54:50 11:54:50 # If we're going to release the connection in ``finally:``, then 11:54:50 # the response doesn't need to know about the connection. Otherwise 11:54:50 # it will also try to release it and we'll have a double-release 11:54:50 # mess. 
11:54:50 response_conn = conn if not release_conn else None 11:54:50 11:54:50 # Make the request on the HTTPConnection object 11:54:50 > response = self._make_request( 11:54:50 conn, 11:54:50 method, 11:54:50 url, 11:54:50 timeout=timeout_obj, 11:54:50 body=body, 11:54:50 headers=headers, 11:54:50 chunked=chunked, 11:54:50 retries=retries, 11:54:50 response_conn=response_conn, 11:54:50 preload_content=preload_content, 11:54:50 decode_content=decode_content, 11:54:50 **response_kw, 11:54:50 ) 11:54:50 11:54:50 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connectionpool.py:787: 11:54:50 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 11:54:50 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connectionpool.py:493: in _make_request 11:54:50 conn.request( 11:54:50 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:500: in request 11:54:50 self.endheaders() 11:54:50 /opt/pyenv/versions/3.11.10/lib/python3.11/http/client.py:1298: in endheaders 11:54:50 self._send_output(message_body, encode_chunked=encode_chunked) 11:54:50 /opt/pyenv/versions/3.11.10/lib/python3.11/http/client.py:1058: in _send_output 11:54:50 self.send(msg) 11:54:50 /opt/pyenv/versions/3.11.10/lib/python3.11/http/client.py:996: in send 11:54:50 self.connect() 11:54:50 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:331: in connect 11:54:50 self.sock = self._new_conn() 11:54:50 ^^^^^^^^^^^^^^^^ 11:54:50 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 11:54:50 11:54:50 self = 11:54:50 11:54:50 def _new_conn(self) -> socket.socket: 11:54:50 """Establish a socket connection and set nodelay settings on it. 11:54:50 11:54:50 :return: New socket connection. 
11:54:50 """ 11:54:50 try: 11:54:50 sock = connection.create_connection( 11:54:50 (self._dns_host, self.port), 11:54:50 self.timeout, 11:54:50 source_address=self.source_address, 11:54:50 socket_options=self.socket_options, 11:54:50 ) 11:54:50 except socket.gaierror as e: 11:54:50 raise NameResolutionError(self.host, self, e) from e 11:54:50 except SocketTimeout as e: 11:54:50 raise ConnectTimeoutError( 11:54:50 self, 11:54:50 f"Connection to {self.host} timed out. (connect timeout={self.timeout})", 11:54:50 ) from e 11:54:50 11:54:50 except OSError as e: 11:54:50 > raise NewConnectionError( 11:54:50 self, f"Failed to establish a new connection: {e}" 11:54:50 ) from e 11:54:50 E urllib3.exceptions.NewConnectionError: HTTPConnection(host='localhost', port=8191): Failed to establish a new connection: [Errno 111] Connection refused 11:54:50 11:54:50 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:219: NewConnectionError 11:54:50 11:54:50 The above exception was the direct cause of the following exception: 11:54:50 11:54:50 self = 11:54:50 request = , stream = False 11:54:50 timeout = Timeout(connect=30, read=30, total=None), verify = True, cert = None 11:54:50 proxies = OrderedDict() 11:54:50 11:54:50 def send( 11:54:50 self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 11:54:50 ): 11:54:50 """Sends PreparedRequest object. Returns Response object. 11:54:50 11:54:50 :param request: The :class:`PreparedRequest ` being sent. 11:54:50 :param stream: (optional) Whether to stream the request content. 11:54:50 :param timeout: (optional) How long to wait for the server to send 11:54:50 data before giving up, as a float, or a :ref:`(connect timeout, 11:54:50 read timeout) ` tuple. 
11:54:50 :type timeout: float or tuple or urllib3 Timeout object 11:54:50 :param verify: (optional) Either a boolean, in which case it controls whether 11:54:50 we verify the server's TLS certificate, or a string, in which case it 11:54:50 must be a path to a CA bundle to use 11:54:50 :param cert: (optional) Any user-provided SSL certificate to be trusted. 11:54:50 :param proxies: (optional) The proxies dictionary to apply to the request. 11:54:50 :rtype: requests.Response 11:54:50 """ 11:54:50 11:54:50 try: 11:54:50 conn = self.get_connection_with_tls_context( 11:54:50 request, verify, proxies=proxies, cert=cert 11:54:50 ) 11:54:50 except LocationValueError as e: 11:54:50 raise InvalidURL(e, request=request) 11:54:50 11:54:50 self.cert_verify(conn, request.url, verify, cert) 11:54:50 url = self.request_url(request, proxies) 11:54:50 self.add_headers( 11:54:50 request, 11:54:50 stream=stream, 11:54:50 timeout=timeout, 11:54:50 verify=verify, 11:54:50 cert=cert, 11:54:50 proxies=proxies, 11:54:50 ) 11:54:50 11:54:50 chunked = not (request.body is None or "Content-Length" in request.headers) 11:54:50 11:54:50 if isinstance(timeout, tuple): 11:54:50 try: 11:54:50 connect, read = timeout 11:54:50 timeout = TimeoutSauce(connect=connect, read=read) 11:54:50 except ValueError: 11:54:50 raise ValueError( 11:54:50 f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 11:54:50 f"or a single float to set both timeouts to the same value." 
11:54:50 ) 11:54:50 elif isinstance(timeout, TimeoutSauce): 11:54:50 pass 11:54:50 else: 11:54:50 timeout = TimeoutSauce(connect=timeout, read=timeout) 11:54:50 11:54:50 try: 11:54:50 > resp = conn.urlopen( 11:54:50 method=request.method, 11:54:50 url=url, 11:54:50 body=request.body, 11:54:50 headers=request.headers, 11:54:50 redirect=False, 11:54:50 assert_same_host=False, 11:54:50 preload_content=False, 11:54:50 decode_content=False, 11:54:50 retries=self.max_retries, 11:54:50 timeout=timeout, 11:54:50 chunked=chunked, 11:54:50 ) 11:54:50 11:54:50 ../.tox/tests121/lib/python3.11/site-packages/requests/adapters.py:644: 11:54:50 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 11:54:50 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connectionpool.py:841: in urlopen 11:54:50 retries = retries.increment( 11:54:50 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 11:54:50 11:54:50 self = Retry(total=0, connect=None, read=False, redirect=None, status=None) 11:54:50 method = 'DELETE' 11:54:50 url = '/rests/data/network-topology:network-topology/topology=topology-netconf/node=XPDRA01' 11:54:50 response = None 11:54:50 error = NewConnectionError("HTTPConnection(host='localhost', port=8191): Failed to establish a new connection: [Errno 111] Connection refused") 11:54:50 _pool = 11:54:50 _stacktrace = 11:54:50 11:54:50 def increment( 11:54:50 self, 11:54:50 method: str | None = None, 11:54:50 url: str | None = None, 11:54:50 response: BaseHTTPResponse | None = None, 11:54:50 error: Exception | None = None, 11:54:50 _pool: ConnectionPool | None = None, 11:54:50 _stacktrace: TracebackType | None = None, 11:54:50 ) -> Self: 11:54:50 """Return a new Retry object with incremented retry counters. 11:54:50 11:54:50 :param response: A response object, or None, if the server did not 11:54:50 return a response. 
11:54:50 :type response: :class:`~urllib3.response.BaseHTTPResponse` 11:54:50 :param Exception error: An error encountered during the request, or 11:54:50 None if the response was received successfully. 11:54:50 11:54:50 :return: A new ``Retry`` object. 11:54:50 """ 11:54:50 if self.total is False and error: 11:54:50 # Disabled, indicate to re-raise the error. 11:54:50 raise reraise(type(error), error, _stacktrace) 11:54:50 11:54:50 total = self.total 11:54:50 if total is not None: 11:54:50 total -= 1 11:54:50 11:54:50 connect = self.connect 11:54:50 read = self.read 11:54:50 redirect = self.redirect 11:54:50 status_count = self.status 11:54:50 other = self.other 11:54:50 cause = "unknown" 11:54:50 status = None 11:54:50 redirect_location = None 11:54:50 11:54:50 if error and self._is_connection_error(error): 11:54:50 # Connect retry? 11:54:50 if connect is False: 11:54:50 raise reraise(type(error), error, _stacktrace) 11:54:50 elif connect is not None: 11:54:50 connect -= 1 11:54:50 11:54:50 elif error and self._is_read_error(error): 11:54:50 # Read retry? 11:54:50 if read is False or method is None or not self._is_method_retryable(method): 11:54:50 raise reraise(type(error), error, _stacktrace) 11:54:50 elif read is not None: 11:54:50 read -= 1 11:54:50 11:54:50 elif error: 11:54:50 # Other retry? 11:54:50 if other is not None: 11:54:50 other -= 1 11:54:50 11:54:50 elif response and response.get_redirect_location(): 11:54:50 # Redirect retry? 
11:54:50             if redirect is not None:
11:54:50                 redirect -= 1
11:54:50             cause = "too many redirects"
11:54:50             response_redirect_location = response.get_redirect_location()
11:54:50             if response_redirect_location:
11:54:50                 redirect_location = response_redirect_location
11:54:50             status = response.status
11:54:50 
11:54:50         else:
11:54:50             # Incrementing because of a server error like a 500 in
11:54:50             # status_forcelist and the given method is in the allowed_methods
11:54:50             cause = ResponseError.GENERIC_ERROR
11:54:50             if response and response.status:
11:54:50                 if status_count is not None:
11:54:50                     status_count -= 1
11:54:50                 cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status)
11:54:50                 status = response.status
11:54:50 
11:54:50         history = self.history + (
11:54:50             RequestHistory(method, url, error, status, redirect_location),
11:54:50         )
11:54:50 
11:54:50         new_retry = self.new(
11:54:50             total=total,
11:54:50             connect=connect,
11:54:50             read=read,
11:54:50             redirect=redirect,
11:54:50             status=status_count,
11:54:50             other=other,
11:54:50             history=history,
11:54:50         )
11:54:50 
11:54:50         if new_retry.is_exhausted():
11:54:50             reason = error or ResponseError(cause)
11:54:50 >           raise MaxRetryError(_pool, url, reason) from reason  # type: ignore[arg-type]
11:54:50             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
11:54:50 E           urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=8191): Max retries exceeded with url: /rests/data/network-topology:network-topology/topology=topology-netconf/node=XPDRA01 (Caused by NewConnectionError("HTTPConnection(host='localhost', port=8191): Failed to establish a new connection: [Errno 111] Connection refused"))
11:54:50 
11:54:50 ../.tox/tests121/lib/python3.11/site-packages/urllib3/util/retry.py:535: MaxRetryError
11:54:50 
11:54:50 During handling of the above exception, another exception occurred:
11:54:50 
11:54:50 self = 
11:54:50 
11:54:50     def test_16_xpdr_device_disconnection(self):
11:54:50 >       response = test_utils.unmount_device("XPDRA01")
11:54:50                    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
11:54:50 
11:54:50 transportpce_tests/1.2.1/test01_portmapping.py:193: 
11:54:50 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
11:54:50 transportpce_tests/common/test_utils.py:398: in unmount_device
11:54:50     response = delete_request(url[RESTCONF_VERSION].format('{}', node))
11:54:50                ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
11:54:50 transportpce_tests/common/test_utils.py:134: in delete_request
11:54:50     return requests.request(
11:54:50 ../.tox/tests121/lib/python3.11/site-packages/requests/api.py:59: in request
11:54:50     return session.request(method=method, url=url, **kwargs)
11:54:50            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
11:54:50 ../.tox/tests121/lib/python3.11/site-packages/requests/sessions.py:589: in request
11:54:50     resp = self.send(prep, **send_kwargs)
11:54:50            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
11:54:50 ../.tox/tests121/lib/python3.11/site-packages/requests/sessions.py:703: in send
11:54:50     r = adapter.send(request, **kwargs)
11:54:50         ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
11:54:50 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
11:54:50 
11:54:50 self = 
11:54:50 request = , stream = False
11:54:50 timeout = Timeout(connect=30, read=30, total=None), verify = True, cert = None
11:54:50 proxies = OrderedDict()
11:54:50 
11:54:50     def send(
11:54:50         self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None
11:54:50     ):
11:54:50         """Sends PreparedRequest object. Returns Response object.
11:54:50 
11:54:50         :param request: The :class:`PreparedRequest ` being sent.
11:54:50         :param stream: (optional) Whether to stream the request content.
11:54:50         :param timeout: (optional) How long to wait for the server to send
11:54:50             data before giving up, as a float, or a :ref:`(connect timeout,
11:54:50             read timeout) ` tuple.
11:54:50 :type timeout: float or tuple or urllib3 Timeout object 11:54:50 :param verify: (optional) Either a boolean, in which case it controls whether 11:54:50 we verify the server's TLS certificate, or a string, in which case it 11:54:50 must be a path to a CA bundle to use 11:54:50 :param cert: (optional) Any user-provided SSL certificate to be trusted. 11:54:50 :param proxies: (optional) The proxies dictionary to apply to the request. 11:54:50 :rtype: requests.Response 11:54:50 """ 11:54:50 11:54:50 try: 11:54:50 conn = self.get_connection_with_tls_context( 11:54:50 request, verify, proxies=proxies, cert=cert 11:54:50 ) 11:54:50 except LocationValueError as e: 11:54:50 raise InvalidURL(e, request=request) 11:54:50 11:54:50 self.cert_verify(conn, request.url, verify, cert) 11:54:50 url = self.request_url(request, proxies) 11:54:50 self.add_headers( 11:54:50 request, 11:54:50 stream=stream, 11:54:50 timeout=timeout, 11:54:50 verify=verify, 11:54:50 cert=cert, 11:54:50 proxies=proxies, 11:54:50 ) 11:54:50 11:54:50 chunked = not (request.body is None or "Content-Length" in request.headers) 11:54:50 11:54:50 if isinstance(timeout, tuple): 11:54:50 try: 11:54:50 connect, read = timeout 11:54:50 timeout = TimeoutSauce(connect=connect, read=read) 11:54:50 except ValueError: 11:54:50 raise ValueError( 11:54:50 f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 11:54:50 f"or a single float to set both timeouts to the same value." 
11:54:50 ) 11:54:50 elif isinstance(timeout, TimeoutSauce): 11:54:50 pass 11:54:50 else: 11:54:50 timeout = TimeoutSauce(connect=timeout, read=timeout) 11:54:50 11:54:50 try: 11:54:50 resp = conn.urlopen( 11:54:50 method=request.method, 11:54:50 url=url, 11:54:50 body=request.body, 11:54:50 headers=request.headers, 11:54:50 redirect=False, 11:54:50 assert_same_host=False, 11:54:50 preload_content=False, 11:54:50 decode_content=False, 11:54:50 retries=self.max_retries, 11:54:50 timeout=timeout, 11:54:50 chunked=chunked, 11:54:50 ) 11:54:50 11:54:50 except (ProtocolError, OSError) as err: 11:54:50 raise ConnectionError(err, request=request) 11:54:50 11:54:50 except MaxRetryError as e: 11:54:50 if isinstance(e.reason, ConnectTimeoutError): 11:54:50 # TODO: Remove this in 3.0.0: see #2811 11:54:50 if not isinstance(e.reason, NewConnectionError): 11:54:50 raise ConnectTimeout(e, request=request) 11:54:50 11:54:50 if isinstance(e.reason, ResponseError): 11:54:50 raise RetryError(e, request=request) 11:54:50 11:54:50 if isinstance(e.reason, _ProxyError): 11:54:50 raise ProxyError(e, request=request) 11:54:50 11:54:50 if isinstance(e.reason, _SSLError): 11:54:50 # This branch is for urllib3 v1.22 and later. 
11:54:50                     raise SSLError(e, request=request)
11:54:50 
11:54:50 >           raise ConnectionError(e, request=request)
11:54:50 E           requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=8191): Max retries exceeded with url: /rests/data/network-topology:network-topology/topology=topology-netconf/node=XPDRA01 (Caused by NewConnectionError("HTTPConnection(host='localhost', port=8191): Failed to establish a new connection: [Errno 111] Connection refused"))
11:54:50 
11:54:50 ../.tox/tests121/lib/python3.11/site-packages/requests/adapters.py:677: ConnectionError
11:54:50 ----------------------------- Captured stdout call -----------------------------
11:54:50 execution of test_16_xpdr_device_disconnection
11:54:50 _________ TestTransportPCEPortmapping.test_17_xpdr_device_disconnected _________
11:54:50 
11:54:50 self = 
11:54:50 
11:54:50     def _new_conn(self) -> socket.socket:
11:54:50         """Establish a socket connection and set nodelay settings on it.
11:54:50 
11:54:50         :return: New socket connection.
11:54:50         """
11:54:50         try:
11:54:50 >           sock = connection.create_connection(
11:54:50                 (self._dns_host, self.port),
11:54:50                 self.timeout,
11:54:50                 source_address=self.source_address,
11:54:50                 socket_options=self.socket_options,
11:54:50             )
11:54:50 
11:54:50 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:204: 
11:54:50 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
11:54:50 ../.tox/tests121/lib/python3.11/site-packages/urllib3/util/connection.py:85: in create_connection
11:54:50     raise err
11:54:50 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
11:54:50 
11:54:50 address = ('localhost', 8191), timeout = 30, source_address = None
11:54:50 socket_options = [(6, 1, 1)]
11:54:50 
11:54:50     def create_connection(
11:54:50         address: tuple[str, int],
11:54:50         timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT,
11:54:50         source_address: tuple[str, int] | None = None,
11:54:50         socket_options: _TYPE_SOCKET_OPTIONS | None = None,
11:54:50     ) -> socket.socket:
11:54:50         """Connect to *address* and return the socket object.
11:54:50 
11:54:50         Convenience function.  Connect to *address* (a 2-tuple ``(host,
11:54:50         port)``) and return the socket object. Passing the optional
11:54:50         *timeout* parameter will set the timeout on the socket instance
11:54:50         before attempting to connect. If no *timeout* is supplied, the
11:54:50         global default timeout setting returned by :func:`socket.getdefaulttimeout`
11:54:50         is used. If *source_address* is set it must be a tuple of (host, port)
11:54:50         for the socket to bind as a source address before making the connection.
11:54:50         An host of '' or port 0 tells the OS to use the default.
11:54:50         """
11:54:50 
11:54:50         host, port = address
11:54:50         if host.startswith("["):
11:54:50             host = host.strip("[]")
11:54:50         err = None
11:54:50 
11:54:50         # Using the value from allowed_gai_family() in the context of getaddrinfo lets
11:54:50         # us select whether to work with IPv4 DNS records, IPv6 records, or both.
11:54:50         # The original create_connection function always returns all records.
11:54:50         family = allowed_gai_family()
11:54:50 
11:54:50         try:
11:54:50             host.encode("idna")
11:54:50         except UnicodeError:
11:54:50             raise LocationParseError(f"'{host}', label empty or too long") from None
11:54:50 
11:54:50         for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM):
11:54:50             af, socktype, proto, canonname, sa = res
11:54:50             sock = None
11:54:50             try:
11:54:50                 sock = socket.socket(af, socktype, proto)
11:54:50 
11:54:50                 # If provided, set socket level options before connecting.
11:54:50                 _set_socket_options(sock, socket_options)
11:54:50 
11:54:50                 if timeout is not _DEFAULT_TIMEOUT:
11:54:50                     sock.settimeout(timeout)
11:54:50                 if source_address:
11:54:50                     sock.bind(source_address)
11:54:50 >               sock.connect(sa)
11:54:50 E               ConnectionRefusedError: [Errno 111] Connection refused
11:54:50 
11:54:50 ../.tox/tests121/lib/python3.11/site-packages/urllib3/util/connection.py:73: ConnectionRefusedError
11:54:50 
11:54:50 The above exception was the direct cause of the following exception:
11:54:50 
11:54:50 self = 
11:54:50 method = 'GET'
11:54:50 url = '/rests/data/network-topology:network-topology/topology=topology-netconf/node=XPDRA01?content=nonconfig'
11:54:50 body = None
11:54:50 headers = {'User-Agent': 'python-requests/2.32.5', 'Accept-Encoding': 'gzip, deflate', 'Accept': 'application/json', 'Connection': 'keep-alive', 'Content-Type': 'application/json', 'Authorization': 'Basic YWRtaW46YWRtaW4='}
11:54:50 retries = Retry(total=0, connect=None, read=False, redirect=None, status=None)
11:54:50 redirect = False, assert_same_host = False
11:54:50 timeout = 
11:54:50         """
11:54:50         try:
11:54:50             sock = connection.create_connection(
11:54:50                 (self._dns_host, self.port),
11:54:50                 self.timeout,
11:54:50                 source_address=self.source_address,
11:54:50                 socket_options=self.socket_options,
11:54:50             )
11:54:50         except socket.gaierror as e:
11:54:50             raise NameResolutionError(self.host, self, e) from e
11:54:50         except SocketTimeout as e:
11:54:50             raise ConnectTimeoutError(
11:54:50                 self,
11:54:50                 f"Connection to {self.host} timed out. (connect timeout={self.timeout})",
11:54:50             ) from e
11:54:50 
11:54:50         except OSError as e:
11:54:50 >           raise NewConnectionError(
11:54:50                 self, f"Failed to establish a new connection: {e}"
11:54:50             ) from e
11:54:50 E           urllib3.exceptions.NewConnectionError: HTTPConnection(host='localhost', port=8191): Failed to establish a new connection: [Errno 111] Connection refused
11:54:50 
11:54:50 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:219: NewConnectionError
11:54:50 
11:54:50 The above exception was the direct cause of the following exception:
11:54:50 
11:54:50 self = 
11:54:50 request = , stream = False
11:54:50 timeout = Timeout(connect=30, read=30, total=None), verify = True, cert = None
11:54:50 proxies = OrderedDict()
11:54:50 
11:54:50     def send(
11:54:50         self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None
11:54:50     ):
11:54:50         """Sends PreparedRequest object. Returns Response object.
11:54:50 
11:54:50         :param request: The :class:`PreparedRequest ` being sent.
11:54:50         :param stream: (optional) Whether to stream the request content.
11:54:50         :param timeout: (optional) How long to wait for the server to send
11:54:50             data before giving up, as a float, or a :ref:`(connect timeout,
11:54:50             read timeout) ` tuple.
11:54:50         :type timeout: float or tuple or urllib3 Timeout object
11:54:50         :param verify: (optional) Either a boolean, in which case it controls whether
11:54:50             we verify the server's TLS certificate, or a string, in which case it
11:54:50             must be a path to a CA bundle to use
11:54:50         :param cert: (optional) Any user-provided SSL certificate to be trusted.
11:54:50         :param proxies: (optional) The proxies dictionary to apply to the request.
11:54:50         :rtype: requests.Response
11:54:50         """
11:54:50 
11:54:50         try:
11:54:50             conn = self.get_connection_with_tls_context(
11:54:50                 request, verify, proxies=proxies, cert=cert
11:54:50             )
11:54:50         except LocationValueError as e:
11:54:50             raise InvalidURL(e, request=request)
11:54:50 
11:54:50         self.cert_verify(conn, request.url, verify, cert)
11:54:50         url = self.request_url(request, proxies)
11:54:50         self.add_headers(
11:54:50             request,
11:54:50             stream=stream,
11:54:50             timeout=timeout,
11:54:50             verify=verify,
11:54:50             cert=cert,
11:54:50             proxies=proxies,
11:54:50         )
11:54:50 
11:54:50         chunked = not (request.body is None or "Content-Length" in request.headers)
11:54:50 
11:54:50         if isinstance(timeout, tuple):
11:54:50             try:
11:54:50                 connect, read = timeout
11:54:50                 timeout = TimeoutSauce(connect=connect, read=read)
11:54:50             except ValueError:
11:54:50                 raise ValueError(
11:54:50                     f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, "
11:54:50                     f"or a single float to set both timeouts to the same value."
11:54:50                 )
11:54:50         elif isinstance(timeout, TimeoutSauce):
11:54:50             pass
11:54:50         else:
11:54:50             timeout = TimeoutSauce(connect=timeout, read=timeout)
11:54:50 
11:54:50         try:
11:54:50 >           resp = conn.urlopen(
11:54:50                 method=request.method,
11:54:50                 url=url,
11:54:50                 body=request.body,
11:54:50                 headers=request.headers,
11:54:50                 redirect=False,
11:54:50                 assert_same_host=False,
11:54:50                 preload_content=False,
11:54:50                 decode_content=False,
11:54:50                 retries=self.max_retries,
11:54:50                 timeout=timeout,
11:54:50                 chunked=chunked,
11:54:50             )
11:54:50 
11:54:50 ../.tox/tests121/lib/python3.11/site-packages/requests/adapters.py:644: 
11:54:50 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
11:54:50 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connectionpool.py:841: in urlopen
11:54:50     retries = retries.increment(
11:54:50 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
11:54:50 
11:54:50 self = Retry(total=0, connect=None, read=False, redirect=None, status=None)
11:54:50 method = 'GET'
11:54:50 url = '/rests/data/network-topology:network-topology/topology=topology-netconf/node=XPDRA01?content=nonconfig'
11:54:50 response = None
11:54:50 error = NewConnectionError("HTTPConnection(host='localhost', port=8191): Failed to establish a new connection: [Errno 111] Connection refused")
11:54:50 _pool = 
11:54:50 _stacktrace = 
11:54:50 
11:54:50     def increment(
11:54:50         self,
11:54:50         method: str | None = None,
11:54:50         url: str | None = None,
11:54:50         response: BaseHTTPResponse | None = None,
11:54:50         error: Exception | None = None,
11:54:50         _pool: ConnectionPool | None = None,
11:54:50         _stacktrace: TracebackType | None = None,
11:54:50     ) -> Self:
11:54:50         """Return a new Retry object with incremented retry counters.
11:54:50 
11:54:50         :param response: A response object, or None, if the server did not
11:54:50             return a response.
11:54:50         :type response: :class:`~urllib3.response.BaseHTTPResponse`
11:54:50         :param Exception error: An error encountered during the request, or
11:54:50             None if the response was received successfully.
11:54:50 
11:54:50         :return: A new ``Retry`` object.
11:54:50         """
11:54:50         if self.total is False and error:
11:54:50             # Disabled, indicate to re-raise the error.
11:54:50             raise reraise(type(error), error, _stacktrace)
11:54:50 
11:54:50         total = self.total
11:54:50         if total is not None:
11:54:50             total -= 1
11:54:50 
11:54:50         connect = self.connect
11:54:50         read = self.read
11:54:50         redirect = self.redirect
11:54:50         status_count = self.status
11:54:50         other = self.other
11:54:50         cause = "unknown"
11:54:50         status = None
11:54:50         redirect_location = None
11:54:50 
11:54:50         if error and self._is_connection_error(error):
11:54:50             # Connect retry?
11:54:50             if connect is False:
11:54:50                 raise reraise(type(error), error, _stacktrace)
11:54:50             elif connect is not None:
11:54:50                 connect -= 1
11:54:50 
11:54:50         elif error and self._is_read_error(error):
11:54:50             # Read retry?
11:54:50             if read is False or method is None or not self._is_method_retryable(method):
11:54:50                 raise reraise(type(error), error, _stacktrace)
11:54:50             elif read is not None:
11:54:50                 read -= 1
11:54:50 
11:54:50         elif error:
11:54:50             # Other retry?
11:54:50             if other is not None:
11:54:50                 other -= 1
11:54:50 
11:54:50         elif response and response.get_redirect_location():
11:54:50             # Redirect retry?
11:54:50             if redirect is not None:
11:54:50                 redirect -= 1
11:54:50             cause = "too many redirects"
11:54:50             response_redirect_location = response.get_redirect_location()
11:54:50             if response_redirect_location:
11:54:50                 redirect_location = response_redirect_location
11:54:50             status = response.status
11:54:50 
11:54:50         else:
11:54:50             # Incrementing because of a server error like a 500 in
11:54:50             # status_forcelist and the given method is in the allowed_methods
11:54:50             cause = ResponseError.GENERIC_ERROR
11:54:50             if response and response.status:
11:54:50                 if status_count is not None:
11:54:50                     status_count -= 1
11:54:50                 cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status)
11:54:50                 status = response.status
11:54:50 
11:54:50         history = self.history + (
11:54:50             RequestHistory(method, url, error, status, redirect_location),
11:54:50         )
11:54:50 
11:54:50         new_retry = self.new(
11:54:50             total=total,
11:54:50             connect=connect,
11:54:50             read=read,
11:54:50             redirect=redirect,
11:54:50             status=status_count,
11:54:50             other=other,
11:54:50             history=history,
11:54:50         )
11:54:50 
11:54:50         if new_retry.is_exhausted():
11:54:50             reason = error or ResponseError(cause)
11:54:50 >           raise MaxRetryError(_pool, url, reason) from reason  # type: ignore[arg-type]
11:54:50             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
11:54:50 E           urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=8191): Max retries exceeded with url: /rests/data/network-topology:network-topology/topology=topology-netconf/node=XPDRA01?content=nonconfig (Caused by NewConnectionError("HTTPConnection(host='localhost', port=8191): Failed to establish a new connection: [Errno 111] Connection refused"))
11:54:50 
11:54:50 ../.tox/tests121/lib/python3.11/site-packages/urllib3/util/retry.py:535: MaxRetryError
11:54:50 
11:54:50 During handling of the above exception, another exception occurred:
11:54:50 
11:54:50 self = 
11:54:50 
11:54:50     def test_17_xpdr_device_disconnected(self):
11:54:50 >       response = test_utils.check_device_connection("XPDRA01")
11:54:50                  ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
11:54:50 
11:54:50 transportpce_tests/1.2.1/test01_portmapping.py:197: 
11:54:50 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
11:54:50 transportpce_tests/common/test_utils.py:409: in check_device_connection
11:54:50     response = get_request(url[RESTCONF_VERSION].format('{}', node))
11:54:50                ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
11:54:50 transportpce_tests/common/test_utils.py:117: in get_request
11:54:50     return requests.request(
11:54:50 ../.tox/tests121/lib/python3.11/site-packages/requests/api.py:59: in request
11:54:50     return session.request(method=method, url=url, **kwargs)
11:54:50            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
11:54:50 ../.tox/tests121/lib/python3.11/site-packages/requests/sessions.py:589: in request
11:54:50     resp = self.send(prep, **send_kwargs)
11:54:50            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
11:54:50 ../.tox/tests121/lib/python3.11/site-packages/requests/sessions.py:703: in send
11:54:50     r = adapter.send(request, **kwargs)
11:54:50         ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
11:54:50 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
11:54:50 
11:54:50 self = 
11:54:50 request = , stream = False
11:54:50 timeout = Timeout(connect=30, read=30, total=None), verify = True, cert = None
11:54:50 proxies = OrderedDict()
11:54:50 
11:54:50     def send(
11:54:50         self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None
11:54:50     ):
11:54:50         """Sends PreparedRequest object. Returns Response object.
11:54:50 
11:54:50         :param request: The :class:`PreparedRequest ` being sent.
11:54:50         :param stream: (optional) Whether to stream the request content.
11:54:50         :param timeout: (optional) How long to wait for the server to send
11:54:50             data before giving up, as a float, or a :ref:`(connect timeout,
11:54:50             read timeout) ` tuple.
11:54:50         :type timeout: float or tuple or urllib3 Timeout object
11:54:50         :param verify: (optional) Either a boolean, in which case it controls whether
11:54:50             we verify the server's TLS certificate, or a string, in which case it
11:54:50             must be a path to a CA bundle to use
11:54:50         :param cert: (optional) Any user-provided SSL certificate to be trusted.
11:54:50         :param proxies: (optional) The proxies dictionary to apply to the request.
11:54:50         :rtype: requests.Response
11:54:50         """
11:54:50 
11:54:50         try:
11:54:50             conn = self.get_connection_with_tls_context(
11:54:50                 request, verify, proxies=proxies, cert=cert
11:54:50             )
11:54:50         except LocationValueError as e:
11:54:50             raise InvalidURL(e, request=request)
11:54:50 
11:54:50         self.cert_verify(conn, request.url, verify, cert)
11:54:50         url = self.request_url(request, proxies)
11:54:50         self.add_headers(
11:54:50             request,
11:54:50             stream=stream,
11:54:50             timeout=timeout,
11:54:50             verify=verify,
11:54:50             cert=cert,
11:54:50             proxies=proxies,
11:54:50         )
11:54:50 
11:54:50         chunked = not (request.body is None or "Content-Length" in request.headers)
11:54:50 
11:54:50         if isinstance(timeout, tuple):
11:54:50             try:
11:54:50                 connect, read = timeout
11:54:50                 timeout = TimeoutSauce(connect=connect, read=read)
11:54:50             except ValueError:
11:54:50                 raise ValueError(
11:54:50                     f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, "
11:54:50                     f"or a single float to set both timeouts to the same value."
11:54:50                 )
11:54:50         elif isinstance(timeout, TimeoutSauce):
11:54:50             pass
11:54:50         else:
11:54:50             timeout = TimeoutSauce(connect=timeout, read=timeout)
11:54:50 
11:54:50         try:
11:54:50             resp = conn.urlopen(
11:54:50                 method=request.method,
11:54:50                 url=url,
11:54:50                 body=request.body,
11:54:50                 headers=request.headers,
11:54:50                 redirect=False,
11:54:50                 assert_same_host=False,
11:54:50                 preload_content=False,
11:54:50                 decode_content=False,
11:54:50                 retries=self.max_retries,
11:54:50                 timeout=timeout,
11:54:50                 chunked=chunked,
11:54:50             )
11:54:50 
11:54:50         except (ProtocolError, OSError) as err:
11:54:50             raise ConnectionError(err, request=request)
11:54:50 
11:54:50         except MaxRetryError as e:
11:54:50             if isinstance(e.reason, ConnectTimeoutError):
11:54:50                 # TODO: Remove this in 3.0.0: see #2811
11:54:50                 if not isinstance(e.reason, NewConnectionError):
11:54:50                     raise ConnectTimeout(e, request=request)
11:54:50 
11:54:50             if isinstance(e.reason, ResponseError):
11:54:50                 raise RetryError(e, request=request)
11:54:50 
11:54:50             if isinstance(e.reason, _ProxyError):
11:54:50                 raise ProxyError(e, request=request)
11:54:50 
11:54:50             if isinstance(e.reason, _SSLError):
11:54:50                 # This branch is for urllib3 v1.22 and later.
11:54:50                 raise SSLError(e, request=request)
11:54:50 
11:54:50 >           raise ConnectionError(e, request=request)
11:54:50 E           requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=8191): Max retries exceeded with url: /rests/data/network-topology:network-topology/topology=topology-netconf/node=XPDRA01?content=nonconfig (Caused by NewConnectionError("HTTPConnection(host='localhost', port=8191): Failed to establish a new connection: [Errno 111] Connection refused"))
11:54:50 
11:54:50 ../.tox/tests121/lib/python3.11/site-packages/requests/adapters.py:677: ConnectionError
11:54:50 ----------------------------- Captured stdout call -----------------------------
11:54:50 execution of test_17_xpdr_device_disconnected
11:54:50 ________ TestTransportPCEPortmapping.test_18_xpdr_device_not_connected _________
11:54:50 
11:54:50 self = 
11:54:50 
11:54:50     def _new_conn(self) -> socket.socket:
11:54:50         """Establish a socket connection and set nodelay settings on it.
11:54:50 
11:54:50         :return: New socket connection.
11:54:50         """
11:54:50         try:
11:54:50 >           sock = connection.create_connection(
11:54:50                 (self._dns_host, self.port),
11:54:50                 self.timeout,
11:54:50                 source_address=self.source_address,
11:54:50                 socket_options=self.socket_options,
11:54:50             )
11:54:50 
11:54:50 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:204: 
11:54:50 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
11:54:50 ../.tox/tests121/lib/python3.11/site-packages/urllib3/util/connection.py:85: in create_connection
11:54:50     raise err
11:54:50 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
11:54:50 
11:54:50 address = ('localhost', 8191), timeout = 30, source_address = None
11:54:50 socket_options = [(6, 1, 1)]
11:54:50 
11:54:50 def create_connection(
11:54:50     address: tuple[str, int],
11:54:50     timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT,
11:54:50     source_address: tuple[str, int] | None = None,
11:54:50     socket_options: _TYPE_SOCKET_OPTIONS | None = None,
11:54:50 ) -> socket.socket:
11:54:50     """Connect to *address* and return the socket object.
11:54:50 
11:54:50     Convenience function. Connect to *address* (a 2-tuple ``(host,
11:54:50     port)``) and return the socket object. Passing the optional
11:54:50     *timeout* parameter will set the timeout on the socket instance
11:54:50     before attempting to connect. If no *timeout* is supplied, the
11:54:50     global default timeout setting returned by :func:`socket.getdefaulttimeout`
11:54:50     is used. If *source_address* is set it must be a tuple of (host, port)
11:54:50     for the socket to bind as a source address before making the connection.
11:54:50     An host of '' or port 0 tells the OS to use the default.
11:54:50     """
11:54:50 
11:54:50     host, port = address
11:54:50     if host.startswith("["):
11:54:50         host = host.strip("[]")
11:54:50     err = None
11:54:50 
11:54:50     # Using the value from allowed_gai_family() in the context of getaddrinfo lets
11:54:50     # us select whether to work with IPv4 DNS records, IPv6 records, or both.
11:54:50     # The original create_connection function always returns all records.
11:54:50     family = allowed_gai_family()
11:54:50 
11:54:50     try:
11:54:50         host.encode("idna")
11:54:50     except UnicodeError:
11:54:50         raise LocationParseError(f"'{host}', label empty or too long") from None
11:54:50 
11:54:50     for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM):
11:54:50         af, socktype, proto, canonname, sa = res
11:54:50         sock = None
11:54:50         try:
11:54:50             sock = socket.socket(af, socktype, proto)
11:54:50 
11:54:50             # If provided, set socket level options before connecting.
11:54:50             _set_socket_options(sock, socket_options)
11:54:50 
11:54:50             if timeout is not _DEFAULT_TIMEOUT:
11:54:50                 sock.settimeout(timeout)
11:54:50             if source_address:
11:54:50                 sock.bind(source_address)
11:54:50 >           sock.connect(sa)
11:54:50 E           ConnectionRefusedError: [Errno 111] Connection refused
11:54:50 
11:54:50 ../.tox/tests121/lib/python3.11/site-packages/urllib3/util/connection.py:73: ConnectionRefusedError
11:54:50 
11:54:50 The above exception was the direct cause of the following exception:
11:54:50 
11:54:50 self = 
11:54:50 method = 'GET'
11:54:50 url = '/rests/data/transportpce-portmapping:network/nodes=XPDRA01/node-info'
11:54:50 body = None
11:54:50 headers = {'User-Agent': 'python-requests/2.32.5', 'Accept-Encoding': 'gzip, deflate', 'Accept': 'application/json', 'Connection': 'keep-alive', 'Content-Type': 'application/json', 'Authorization': 'Basic YWRtaW46YWRtaW4='}
11:54:50 retries = Retry(total=0, connect=None, read=False, redirect=None, status=None)
11:54:50 redirect = False, assert_same_host = False
11:54:50 timeout = Timeout(connect=30, read=30, total=None), pool_timeout = None
11:54:50 release_conn = False, chunked = False, body_pos = None, preload_content = False
11:54:50 decode_content = False, response_kw = {}
11:54:50 parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/rests/data/transportpce-portmapping:network/nodes=XPDRA01/node-info', query=None, fragment=None)
11:54:50 destination_scheme = None, conn = None, release_this_conn = True
11:54:50 http_tunnel_required = False, err = None, clean_exit = False
11:54:50 
11:54:50     def urlopen(  # type: ignore[override]
11:54:50         self,
11:54:50         method: str,
11:54:50         url: str,
11:54:50         body: _TYPE_BODY | None = None,
11:54:50         headers: typing.Mapping[str, str] | None = None,
11:54:50         retries: Retry | bool | int | None = None,
11:54:50         redirect: bool = True,
11:54:50         assert_same_host: bool = True,
11:54:50         timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT,
11:54:50         pool_timeout: int | None = None,
11:54:50         release_conn: bool | None = None,
11:54:50         chunked: bool = False,
11:54:50         body_pos: _TYPE_BODY_POSITION | None = None,
11:54:50         preload_content: bool = True,
11:54:50         decode_content: bool = True,
11:54:50         **response_kw: typing.Any,
11:54:50     ) -> BaseHTTPResponse:
11:54:50         """
11:54:50         Get a connection from the pool and perform an HTTP request. This is the
11:54:50         lowest level call for making a request, so you'll need to specify all
11:54:50         the raw details.
11:54:50 
11:54:50         .. note::
11:54:50 
11:54:50             More commonly, it's appropriate to use a convenience method
11:54:50             such as :meth:`request`.
11:54:50 
11:54:50         .. note::
11:54:50 
11:54:50             `release_conn` will only behave as expected if
11:54:50             `preload_content=False` because we want to make
11:54:50             `preload_content=False` the default behaviour someday soon without
11:54:50             breaking backwards compatibility.
11:54:50 
11:54:50         :param method:
11:54:50             HTTP request method (such as GET, POST, PUT, etc.)
11:54:50 
11:54:50         :param url:
11:54:50             The URL to perform the request on.
11:54:50 
11:54:50         :param body:
11:54:50             Data to send in the request body, either :class:`str`, :class:`bytes`,
11:54:50             an iterable of :class:`str`/:class:`bytes`, or a file-like object.
11:54:50 
11:54:50         :param headers:
11:54:50             Dictionary of custom headers to send, such as User-Agent,
11:54:50             If-None-Match, etc. If None, pool headers are used. If provided,
11:54:50             these headers completely replace any pool-specific headers.
11:54:50 
11:54:50         :param retries:
11:54:50             Configure the number of retries to allow before raising a
11:54:50             :class:`~urllib3.exceptions.MaxRetryError` exception.
11:54:50 
11:54:50             If ``None`` (default) will retry 3 times, see ``Retry.DEFAULT``. Pass a
11:54:50             :class:`~urllib3.util.retry.Retry` object for fine-grained control
11:54:50             over different types of retries.
11:54:50             Pass an integer number to retry connection errors that many times,
11:54:50             but no other types of errors. Pass zero to never retry.
11:54:50 
11:54:50             If ``False``, then retries are disabled and any exception is raised
11:54:50             immediately. Also, instead of raising a MaxRetryError on redirects,
11:54:50             the redirect response will be returned.
11:54:50 
11:54:50         :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int.
11:54:50 
11:54:50         :param redirect:
11:54:50             If True, automatically handle redirects (status codes 301, 302,
11:54:50             303, 307, 308). Each redirect counts as a retry. Disabling retries
11:54:50             will disable redirect, too.
11:54:50 
11:54:50         :param assert_same_host:
11:54:50             If ``True``, will make sure that the host of the pool requests is
11:54:50             consistent else will raise HostChangedError. When ``False``, you can
11:54:50             use the pool on an HTTP proxy and request foreign hosts.
11:54:50 
11:54:50         :param timeout:
11:54:50             If specified, overrides the default timeout for this one
11:54:50             request. It may be a float (in seconds) or an instance of
11:54:50             :class:`urllib3.util.Timeout`.
11:54:50 
11:54:50         :param pool_timeout:
11:54:50             If set and the pool is set to block=True, then this method will
11:54:50             block for ``pool_timeout`` seconds and raise EmptyPoolError if no
11:54:50             connection is available within the time period.
11:54:50 
11:54:50         :param bool preload_content:
11:54:50             If True, the response's body will be preloaded into memory.
11:54:50 
11:54:50         :param bool decode_content:
11:54:50             If True, will attempt to decode the body based on the
11:54:50             'content-encoding' header.
11:54:50 
11:54:50         :param release_conn:
11:54:50             If False, then the urlopen call will not release the connection
11:54:50             back into the pool once a response is received (but will release if
11:54:50             you read the entire contents of the response such as when
11:54:50             `preload_content=True`). This is useful if you're not preloading
11:54:50             the response's content immediately. You will need to call
11:54:50             ``r.release_conn()`` on the response ``r`` to return the connection
11:54:50             back into the pool. If None, it takes the value of ``preload_content``
11:54:50             which defaults to ``True``.
11:54:50 
11:54:50         :param bool chunked:
11:54:50             If True, urllib3 will send the body using chunked transfer
11:54:50             encoding. Otherwise, urllib3 will send the body using the standard
11:54:50             content-length form. Defaults to False.
11:54:50 
11:54:50         :param int body_pos:
11:54:50             Position to seek to in file-like body in the event of a retry or
11:54:50             redirect. Typically this won't need to be set because urllib3 will
11:54:50             auto-populate the value when needed.
11:54:50         """
11:54:50         parsed_url = parse_url(url)
11:54:50         destination_scheme = parsed_url.scheme
11:54:50 
11:54:50         if headers is None:
11:54:50             headers = self.headers
11:54:50 
11:54:50         if not isinstance(retries, Retry):
11:54:50             retries = Retry.from_int(retries, redirect=redirect, default=self.retries)
11:54:50 
11:54:50         if release_conn is None:
11:54:50             release_conn = preload_content
11:54:50 
11:54:50         # Check host
11:54:50         if assert_same_host and not self.is_same_host(url):
11:54:50             raise HostChangedError(self, url, retries)
11:54:50 
11:54:50         # Ensure that the URL we're connecting to is properly encoded
11:54:50         if url.startswith("/"):
11:54:50             url = to_str(_encode_target(url))
11:54:50         else:
11:54:50             url = to_str(parsed_url.url)
11:54:50 
11:54:50         conn = None
11:54:50 
11:54:50         # Track whether `conn` needs to be released before
11:54:50         # returning/raising/recursing. Update this variable if necessary, and
11:54:50         # leave `release_conn` constant throughout the function. That way, if
11:54:50         # the function recurses, the original value of `release_conn` will be
11:54:50         # passed down into the recursive call, and its value will be respected.
11:54:50         #
11:54:50         # See issue #651 [1] for details.
11:54:50         #
11:54:50         # [1]
11:54:50         release_this_conn = release_conn
11:54:50 
11:54:50         http_tunnel_required = connection_requires_http_tunnel(
11:54:50             self.proxy, self.proxy_config, destination_scheme
11:54:50         )
11:54:50 
11:54:50         # Merge the proxy headers. Only done when not using HTTP CONNECT. We
11:54:50         # have to copy the headers dict so we can safely change it without those
11:54:50         # changes being reflected in anyone else's copy.
11:54:50         if not http_tunnel_required:
11:54:50             headers = headers.copy()  # type: ignore[attr-defined]
11:54:50             headers.update(self.proxy_headers)  # type: ignore[union-attr]
11:54:50 
11:54:50         # Must keep the exception bound to a separate variable or else Python 3
11:54:50         # complains about UnboundLocalError.
11:54:50         err = None
11:54:50 
11:54:50         # Keep track of whether we cleanly exited the except block. This
11:54:50         # ensures we do proper cleanup in finally.
11:54:50         clean_exit = False
11:54:50 
11:54:50         # Rewind body position, if needed. Record current position
11:54:50         # for future rewinds in the event of a redirect/retry.
11:54:50         body_pos = set_file_position(body, body_pos)
11:54:50 
11:54:50         try:
11:54:50             # Request a connection from the queue.
11:54:50             timeout_obj = self._get_timeout(timeout)
11:54:50             conn = self._get_conn(timeout=pool_timeout)
11:54:50 
11:54:50             conn.timeout = timeout_obj.connect_timeout  # type: ignore[assignment]
11:54:50 
11:54:50             # Is this a closed/new connection that requires CONNECT tunnelling?
11:54:50             if self.proxy is not None and http_tunnel_required and conn.is_closed:
11:54:50                 try:
11:54:50                     self._prepare_proxy(conn)
11:54:50                 except (BaseSSLError, OSError, SocketTimeout) as e:
11:54:50                     self._raise_timeout(
11:54:50                         err=e, url=self.proxy.url, timeout_value=conn.timeout
11:54:50                     )
11:54:50                     raise
11:54:50 
11:54:50             # If we're going to release the connection in ``finally:``, then
11:54:50             # the response doesn't need to know about the connection. Otherwise
11:54:50             # it will also try to release it and we'll have a double-release
11:54:50             # mess.
11:54:50             response_conn = conn if not release_conn else None
11:54:50 
11:54:50             # Make the request on the HTTPConnection object
11:54:50 >           response = self._make_request(
11:54:50                 conn,
11:54:50                 method,
11:54:50                 url,
11:54:50                 timeout=timeout_obj,
11:54:50                 body=body,
11:54:50                 headers=headers,
11:54:50                 chunked=chunked,
11:54:50                 retries=retries,
11:54:50                 response_conn=response_conn,
11:54:50                 preload_content=preload_content,
11:54:50                 decode_content=decode_content,
11:54:50                 **response_kw,
11:54:50             )
11:54:50 
11:54:50 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connectionpool.py:787: 
11:54:50 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
11:54:50 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connectionpool.py:493: in _make_request
11:54:50     conn.request(
11:54:50 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:500: in request
11:54:50     self.endheaders()
11:54:50 /opt/pyenv/versions/3.11.10/lib/python3.11/http/client.py:1298: in endheaders
11:54:50     self._send_output(message_body, encode_chunked=encode_chunked)
11:54:50 /opt/pyenv/versions/3.11.10/lib/python3.11/http/client.py:1058: in _send_output
11:54:50     self.send(msg)
11:54:50 /opt/pyenv/versions/3.11.10/lib/python3.11/http/client.py:996: in send
11:54:50     self.connect()
11:54:50 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:331: in connect
11:54:50     self.sock = self._new_conn()
11:54:50                 ^^^^^^^^^^^^^^^^
11:54:50 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
11:54:50 
11:54:50 self = 
11:54:50 
11:54:50     def _new_conn(self) -> socket.socket:
11:54:50         """Establish a socket connection and set nodelay settings on it.
11:54:50 
11:54:50         :return: New socket connection.
11:54:50 """ 11:54:50 try: 11:54:50 sock = connection.create_connection( 11:54:50 (self._dns_host, self.port), 11:54:50 self.timeout, 11:54:50 source_address=self.source_address, 11:54:50 socket_options=self.socket_options, 11:54:50 ) 11:54:50 except socket.gaierror as e: 11:54:50 raise NameResolutionError(self.host, self, e) from e 11:54:50 except SocketTimeout as e: 11:54:50 raise ConnectTimeoutError( 11:54:50 self, 11:54:50 f"Connection to {self.host} timed out. (connect timeout={self.timeout})", 11:54:50 ) from e 11:54:50 11:54:50 except OSError as e: 11:54:50 > raise NewConnectionError( 11:54:50 self, f"Failed to establish a new connection: {e}" 11:54:50 ) from e 11:54:50 E urllib3.exceptions.NewConnectionError: HTTPConnection(host='localhost', port=8191): Failed to establish a new connection: [Errno 111] Connection refused 11:54:50 11:54:50 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:219: NewConnectionError 11:54:50 11:54:50 The above exception was the direct cause of the following exception: 11:54:50 11:54:50 self = 11:54:50 request = , stream = False 11:54:50 timeout = Timeout(connect=30, read=30, total=None), verify = True, cert = None 11:54:50 proxies = OrderedDict() 11:54:50 11:54:50 def send( 11:54:50 self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 11:54:50 ): 11:54:50 """Sends PreparedRequest object. Returns Response object. 11:54:50 11:54:50 :param request: The :class:`PreparedRequest ` being sent. 11:54:50 :param stream: (optional) Whether to stream the request content. 11:54:50 :param timeout: (optional) How long to wait for the server to send 11:54:50 data before giving up, as a float, or a :ref:`(connect timeout, 11:54:50 read timeout) ` tuple. 
11:54:50         :type timeout: float or tuple or urllib3 Timeout object
11:54:50         :param verify: (optional) Either a boolean, in which case it controls whether
11:54:50             we verify the server's TLS certificate, or a string, in which case it
11:54:50             must be a path to a CA bundle to use
11:54:50         :param cert: (optional) Any user-provided SSL certificate to be trusted.
11:54:50         :param proxies: (optional) The proxies dictionary to apply to the request.
11:54:50         :rtype: requests.Response
11:54:50         """
11:54:50 
11:54:50         try:
11:54:50             conn = self.get_connection_with_tls_context(
11:54:50                 request, verify, proxies=proxies, cert=cert
11:54:50             )
11:54:50         except LocationValueError as e:
11:54:50             raise InvalidURL(e, request=request)
11:54:50 
11:54:50         self.cert_verify(conn, request.url, verify, cert)
11:54:50         url = self.request_url(request, proxies)
11:54:50         self.add_headers(
11:54:50             request,
11:54:50             stream=stream,
11:54:50             timeout=timeout,
11:54:50             verify=verify,
11:54:50             cert=cert,
11:54:50             proxies=proxies,
11:54:50         )
11:54:50 
11:54:50         chunked = not (request.body is None or "Content-Length" in request.headers)
11:54:50 
11:54:50         if isinstance(timeout, tuple):
11:54:50             try:
11:54:50                 connect, read = timeout
11:54:50                 timeout = TimeoutSauce(connect=connect, read=read)
11:54:50             except ValueError:
11:54:50                 raise ValueError(
11:54:50                     f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, "
11:54:50                     f"or a single float to set both timeouts to the same value."
11:54:50                 )
11:54:50         elif isinstance(timeout, TimeoutSauce):
11:54:50             pass
11:54:50         else:
11:54:50             timeout = TimeoutSauce(connect=timeout, read=timeout)
11:54:50 
11:54:50         try:
11:54:50 >           resp = conn.urlopen(
11:54:50                 method=request.method,
11:54:50                 url=url,
11:54:50                 body=request.body,
11:54:50                 headers=request.headers,
11:54:50                 redirect=False,
11:54:50                 assert_same_host=False,
11:54:50                 preload_content=False,
11:54:50                 decode_content=False,
11:54:50                 retries=self.max_retries,
11:54:50                 timeout=timeout,
11:54:50                 chunked=chunked,
11:54:50             )
11:54:50 
11:54:50 ../.tox/tests121/lib/python3.11/site-packages/requests/adapters.py:644: 
11:54:50 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
11:54:50 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connectionpool.py:841: in urlopen
11:54:50     retries = retries.increment(
11:54:50 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
11:54:50 
11:54:50 self = Retry(total=0, connect=None, read=False, redirect=None, status=None)
11:54:50 method = 'GET'
11:54:50 url = '/rests/data/transportpce-portmapping:network/nodes=XPDRA01/node-info'
11:54:50 response = None
11:54:50 error = NewConnectionError("HTTPConnection(host='localhost', port=8191): Failed to establish a new connection: [Errno 111] Connection refused")
11:54:50 _pool = 
11:54:50 _stacktrace = 
11:54:50 
11:54:50     def increment(
11:54:50         self,
11:54:50         method: str | None = None,
11:54:50         url: str | None = None,
11:54:50         response: BaseHTTPResponse | None = None,
11:54:50         error: Exception | None = None,
11:54:50         _pool: ConnectionPool | None = None,
11:54:50         _stacktrace: TracebackType | None = None,
11:54:50     ) -> Self:
11:54:50         """Return a new Retry object with incremented retry counters.
11:54:50 
11:54:50         :param response: A response object, or None, if the server did not
11:54:50             return a response.
11:54:50         :type response: :class:`~urllib3.response.BaseHTTPResponse`
11:54:50         :param Exception error: An error encountered during the request, or
11:54:50             None if the response was received successfully.
11:54:50 
11:54:50         :return: A new ``Retry`` object.
11:54:50         """
11:54:50         if self.total is False and error:
11:54:50             # Disabled, indicate to re-raise the error.
11:54:50             raise reraise(type(error), error, _stacktrace)
11:54:50 
11:54:50         total = self.total
11:54:50         if total is not None:
11:54:50             total -= 1
11:54:50 
11:54:50         connect = self.connect
11:54:50         read = self.read
11:54:50         redirect = self.redirect
11:54:50         status_count = self.status
11:54:50         other = self.other
11:54:50         cause = "unknown"
11:54:50         status = None
11:54:50         redirect_location = None
11:54:50 
11:54:50         if error and self._is_connection_error(error):
11:54:50             # Connect retry?
11:54:50             if connect is False:
11:54:50                 raise reraise(type(error), error, _stacktrace)
11:54:50             elif connect is not None:
11:54:50                 connect -= 1
11:54:50 
11:54:50         elif error and self._is_read_error(error):
11:54:50             # Read retry?
11:54:50             if read is False or method is None or not self._is_method_retryable(method):
11:54:50                 raise reraise(type(error), error, _stacktrace)
11:54:50             elif read is not None:
11:54:50                 read -= 1
11:54:50 
11:54:50         elif error:
11:54:50             # Other retry?
11:54:50             if other is not None:
11:54:50                 other -= 1
11:54:50 
11:54:50         elif response and response.get_redirect_location():
11:54:50             # Redirect retry?
11:54:50             if redirect is not None:
11:54:50                 redirect -= 1
11:54:50             cause = "too many redirects"
11:54:50             response_redirect_location = response.get_redirect_location()
11:54:50             if response_redirect_location:
11:54:50                 redirect_location = response_redirect_location
11:54:50             status = response.status
11:54:50 
11:54:50         else:
11:54:50             # Incrementing because of a server error like a 500 in
11:54:50             # status_forcelist and the given method is in the allowed_methods
11:54:50             cause = ResponseError.GENERIC_ERROR
11:54:50             if response and response.status:
11:54:50                 if status_count is not None:
11:54:50                     status_count -= 1
11:54:50                 cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status)
11:54:50                 status = response.status
11:54:50 
11:54:50         history = self.history + (
11:54:50             RequestHistory(method, url, error, status, redirect_location),
11:54:50         )
11:54:50 
11:54:50         new_retry = self.new(
11:54:50             total=total,
11:54:50             connect=connect,
11:54:50             read=read,
11:54:50             redirect=redirect,
11:54:50             status=status_count,
11:54:50             other=other,
11:54:50             history=history,
11:54:50         )
11:54:50 
11:54:50         if new_retry.is_exhausted():
11:54:50             reason = error or ResponseError(cause)
11:54:50 >           raise MaxRetryError(_pool, url, reason) from reason  # type: ignore[arg-type]
11:54:50             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
11:54:50 E           urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=8191): Max retries exceeded with url: /rests/data/transportpce-portmapping:network/nodes=XPDRA01/node-info (Caused by NewConnectionError("HTTPConnection(host='localhost', port=8191): Failed to establish a new connection: [Errno 111] Connection refused"))
11:54:50 
11:54:50 ../.tox/tests121/lib/python3.11/site-packages/urllib3/util/retry.py:535: MaxRetryError
11:54:50 
11:54:50 During handling of the above exception, another exception occurred:
11:54:50 
11:54:50 self = 
11:54:50 
11:54:50     def test_18_xpdr_device_not_connected(self):
11:54:50 >       response = test_utils.get_portmapping_node_attr("XPDRA01", "node-info", None)
11:54:50                    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
11:54:50 
11:54:50 transportpce_tests/1.2.1/test01_portmapping.py:205: 
11:54:50 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
11:54:50 transportpce_tests/common/test_utils.py:519: in get_portmapping_node_attr
11:54:50     response = get_request(target_url)
11:54:50                ^^^^^^^^^^^^^^^^^^^^^^^
11:54:50 transportpce_tests/common/test_utils.py:117: in get_request
11:54:50     return requests.request(
11:54:50 ../.tox/tests121/lib/python3.11/site-packages/requests/api.py:59: in request
11:54:50     return session.request(method=method, url=url, **kwargs)
11:54:50            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
11:54:50 ../.tox/tests121/lib/python3.11/site-packages/requests/sessions.py:589: in request
11:54:50     resp = self.send(prep, **send_kwargs)
11:54:50            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
11:54:50 ../.tox/tests121/lib/python3.11/site-packages/requests/sessions.py:703: in send
11:54:50     r = adapter.send(request, **kwargs)
11:54:50         ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
11:54:50 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
11:54:50 
11:54:50 self = 
11:54:50 request = , stream = False
11:54:50 timeout = Timeout(connect=30, read=30, total=None), verify = True, cert = None
11:54:50 proxies = OrderedDict()
11:54:50 
11:54:50     def send(
11:54:50         self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None
11:54:50     ):
11:54:50         """Sends PreparedRequest object. Returns Response object.
11:54:50 
11:54:50         :param request: The :class:`PreparedRequest ` being sent.
11:54:50         :param stream: (optional) Whether to stream the request content.
11:54:50         :param timeout: (optional) How long to wait for the server to send
11:54:50             data before giving up, as a float, or a :ref:`(connect timeout,
11:54:50             read timeout) ` tuple.
11:54:50         :type timeout: float or tuple or urllib3 Timeout object
11:54:50         :param verify: (optional) Either a boolean, in which case it controls whether
11:54:50             we verify the server's TLS certificate, or a string, in which case it
11:54:50             must be a path to a CA bundle to use
11:54:50         :param cert: (optional) Any user-provided SSL certificate to be trusted.
11:54:50         :param proxies: (optional) The proxies dictionary to apply to the request.
11:54:50         :rtype: requests.Response
11:54:50         """
11:54:50 
11:54:50         try:
11:54:50             conn = self.get_connection_with_tls_context(
11:54:50                 request, verify, proxies=proxies, cert=cert
11:54:50             )
11:54:50         except LocationValueError as e:
11:54:50             raise InvalidURL(e, request=request)
11:54:50 
11:54:50         self.cert_verify(conn, request.url, verify, cert)
11:54:50         url = self.request_url(request, proxies)
11:54:50         self.add_headers(
11:54:50             request,
11:54:50             stream=stream,
11:54:50             timeout=timeout,
11:54:50             verify=verify,
11:54:50             cert=cert,
11:54:50             proxies=proxies,
11:54:50         )
11:54:50 
11:54:50         chunked = not (request.body is None or "Content-Length" in request.headers)
11:54:50 
11:54:50         if isinstance(timeout, tuple):
11:54:50             try:
11:54:50                 connect, read = timeout
11:54:50                 timeout = TimeoutSauce(connect=connect, read=read)
11:54:50             except ValueError:
11:54:50                 raise ValueError(
11:54:50                     f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, "
11:54:50                     f"or a single float to set both timeouts to the same value."
11:54:50                 )
11:54:50         elif isinstance(timeout, TimeoutSauce):
11:54:50             pass
11:54:50         else:
11:54:50             timeout = TimeoutSauce(connect=timeout, read=timeout)
11:54:50 
11:54:50         try:
11:54:50             resp = conn.urlopen(
11:54:50                 method=request.method,
11:54:50                 url=url,
11:54:50                 body=request.body,
11:54:50                 headers=request.headers,
11:54:50                 redirect=False,
11:54:50                 assert_same_host=False,
11:54:50                 preload_content=False,
11:54:50                 decode_content=False,
11:54:50                 retries=self.max_retries,
11:54:50                 timeout=timeout,
11:54:50                 chunked=chunked,
11:54:50             )
11:54:50 
11:54:50         except (ProtocolError, OSError) as err:
11:54:50             raise ConnectionError(err, request=request)
11:54:50 
11:54:50         except MaxRetryError as e:
11:54:50             if isinstance(e.reason, ConnectTimeoutError):
11:54:50                 # TODO: Remove this in 3.0.0: see #2811
11:54:50                 if not isinstance(e.reason, NewConnectionError):
11:54:50                     raise ConnectTimeout(e, request=request)
11:54:50 
11:54:50             if isinstance(e.reason, ResponseError):
11:54:50                 raise RetryError(e, request=request)
11:54:50 
11:54:50             if isinstance(e.reason, _ProxyError):
11:54:50                 raise ProxyError(e, request=request)
11:54:50 
11:54:50             if isinstance(e.reason, _SSLError):
11:54:50                 # This branch is for urllib3 v1.22 and later.
11:54:50                 raise SSLError(e, request=request)
11:54:50 
11:54:50 >           raise ConnectionError(e, request=request)
11:54:50 E           requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=8191): Max retries exceeded with url: /rests/data/transportpce-portmapping:network/nodes=XPDRA01/node-info (Caused by NewConnectionError("HTTPConnection(host='localhost', port=8191): Failed to establish a new connection: [Errno 111] Connection refused"))
11:54:50 
11:54:50 ../.tox/tests121/lib/python3.11/site-packages/requests/adapters.py:677: ConnectionError
11:54:50 ----------------------------- Captured stdout call -----------------------------
11:54:50 execution of test_18_xpdr_device_not_connected
11:54:50 _________ TestTransportPCEPortmapping.test_19_rdm_device_disconnection _________
11:54:50 
11:54:50 self = 
11:54:50 
11:54:50     def _new_conn(self) -> socket.socket:
11:54:50         """Establish a socket connection and set nodelay settings on it.
11:54:50 
11:54:50         :return: New socket connection.
11:54:50         """
11:54:50         try:
11:54:50 >           sock = connection.create_connection(
11:54:50                 (self._dns_host, self.port),
11:54:50                 self.timeout,
11:54:50                 source_address=self.source_address,
11:54:50                 socket_options=self.socket_options,
11:54:50             )
11:54:50 
11:54:50 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:204: 
11:54:50 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
11:54:50 ../.tox/tests121/lib/python3.11/site-packages/urllib3/util/connection.py:85: in create_connection
11:54:50     raise err
11:54:50 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
11:54:50 
11:54:50 address = ('localhost', 8191), timeout = 30, source_address = None
11:54:50 socket_options = [(6, 1, 1)]
11:54:50 
11:54:50     def create_connection(
11:54:50         address: tuple[str, int],
11:54:50         timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT,
11:54:50         source_address: tuple[str, int] | None = None,
11:54:50         socket_options: _TYPE_SOCKET_OPTIONS | None = None,
11:54:50     ) -> socket.socket:
11:54:50         """Connect to *address* and return the socket object.
11:54:50 
11:54:50         Convenience function.  Connect to *address* (a 2-tuple ``(host,
11:54:50         port)``) and return the socket object.  Passing the optional
11:54:50         *timeout* parameter will set the timeout on the socket instance
11:54:50         before attempting to connect.  If no *timeout* is supplied, the
11:54:50         global default timeout setting returned by :func:`socket.getdefaulttimeout`
11:54:50         is used.  If *source_address* is set it must be a tuple of (host, port)
11:54:50         for the socket to bind as a source address before making the connection.
11:54:50         An host of '' or port 0 tells the OS to use the default.
11:54:50         """
11:54:50 
11:54:50         host, port = address
11:54:50         if host.startswith("["):
11:54:50             host = host.strip("[]")
11:54:50         err = None
11:54:50 
11:54:50         # Using the value from allowed_gai_family() in the context of getaddrinfo lets
11:54:50         # us select whether to work with IPv4 DNS records, IPv6 records, or both.
11:54:50         # The original create_connection function always returns all records.
11:54:50         family = allowed_gai_family()
11:54:50 
11:54:50         try:
11:54:50             host.encode("idna")
11:54:50         except UnicodeError:
11:54:50             raise LocationParseError(f"'{host}', label empty or too long") from None
11:54:50 
11:54:50         for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM):
11:54:50             af, socktype, proto, canonname, sa = res
11:54:50             sock = None
11:54:50             try:
11:54:50                 sock = socket.socket(af, socktype, proto)
11:54:50 
11:54:50                 # If provided, set socket level options before connecting.
11:54:50                 _set_socket_options(sock, socket_options)
11:54:50 
11:54:50                 if timeout is not _DEFAULT_TIMEOUT:
11:54:50                     sock.settimeout(timeout)
11:54:50                 if source_address:
11:54:50                     sock.bind(source_address)
11:54:50 >               sock.connect(sa)
11:54:50 E               ConnectionRefusedError: [Errno 111] Connection refused
11:54:50 
11:54:50 ../.tox/tests121/lib/python3.11/site-packages/urllib3/util/connection.py:73: ConnectionRefusedError
11:54:50 
11:54:50 The above exception was the direct cause of the following exception:
11:54:50 
11:54:50 self = 
11:54:50 method = 'DELETE'
11:54:50 url = '/rests/data/network-topology:network-topology/topology=topology-netconf/node=ROADMA01'
11:54:50 body = None
11:54:50 headers = {'User-Agent': 'python-requests/2.32.5', 'Accept-Encoding': 'gzip, deflate', 'Accept': 'application/json', 'Connection': 'keep-alive', 'Content-Type': 'application/json', 'Content-Length': '0', 'Authorization': 'Basic YWRtaW46YWRtaW4='}
11:54:50 retries = Retry(total=0, connect=None, read=False, redirect=None, status=None)
11:54:50 redirect = False, assert_same_host = False
11:54:50 timeout = Timeout(connect=30, read=30, total=None), pool_timeout = None
11:54:50 release_conn = False, chunked = False, body_pos = None, preload_content = False
11:54:50 decode_content = False, response_kw = {}
11:54:50 parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/rests/data/network-topology:network-topology/topology=topology-netconf/node=ROADMA01', query=None, fragment=None)
11:54:50 destination_scheme = None, conn = None, release_this_conn = True
11:54:50 http_tunnel_required = False, err = None, clean_exit = False
11:54:50 
11:54:50     def urlopen(  # type: ignore[override]
11:54:50         self,
11:54:50         method: str,
11:54:50         url: str,
11:54:50         body: _TYPE_BODY | None = None,
11:54:50         headers: typing.Mapping[str, str] | None = None,
11:54:50         retries: Retry | bool | int | None = None,
11:54:50         redirect: bool = True,
11:54:50         assert_same_host: bool = True,
11:54:50         timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT,
11:54:50         pool_timeout: int | None = None,
11:54:50         release_conn: bool | None = None,
11:54:50         chunked: bool = False,
11:54:50         body_pos: _TYPE_BODY_POSITION | None = None,
11:54:50         preload_content: bool = True,
11:54:50         decode_content: bool = True,
11:54:50         **response_kw: typing.Any,
11:54:50     ) -> BaseHTTPResponse:
11:54:50         """
11:54:50         Get a connection from the pool and perform an HTTP request. This is the
11:54:50         lowest level call for making a request, so you'll need to specify all
11:54:50         the raw details.
11:54:50 
11:54:50         .. note::
11:54:50 
11:54:50            More commonly, it's appropriate to use a convenience method
11:54:50            such as :meth:`request`.
11:54:50 
11:54:50         .. note::
11:54:50 
11:54:50            `release_conn` will only behave as expected if
11:54:50            `preload_content=False` because we want to make
11:54:50            `preload_content=False` the default behaviour someday soon without
11:54:50            breaking backwards compatibility.
11:54:50 
11:54:50         :param method:
11:54:50             HTTP request method (such as GET, POST, PUT, etc.)
11:54:50 
11:54:50         :param url:
11:54:50             The URL to perform the request on.
11:54:50 
11:54:50         :param body:
11:54:50             Data to send in the request body, either :class:`str`, :class:`bytes`,
11:54:50             an iterable of :class:`str`/:class:`bytes`, or a file-like object.
11:54:50 
11:54:50         :param headers:
11:54:50             Dictionary of custom headers to send, such as User-Agent,
11:54:50             If-None-Match, etc. If None, pool headers are used. If provided,
11:54:50             these headers completely replace any pool-specific headers.
11:54:50 
11:54:50         :param retries:
11:54:50             Configure the number of retries to allow before raising a
11:54:50             :class:`~urllib3.exceptions.MaxRetryError` exception.
11:54:50 
11:54:50             If ``None`` (default) will retry 3 times, see ``Retry.DEFAULT``. Pass a
11:54:50             :class:`~urllib3.util.retry.Retry` object for fine-grained control
11:54:50             over different types of retries.
11:54:50             Pass an integer number to retry connection errors that many times,
11:54:50             but no other types of errors. Pass zero to never retry.
11:54:50 
11:54:50             If ``False``, then retries are disabled and any exception is raised
11:54:50             immediately. Also, instead of raising a MaxRetryError on redirects,
11:54:50             the redirect response will be returned.
11:54:50 
11:54:50         :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int.
11:54:50 
11:54:50         :param redirect:
11:54:50             If True, automatically handle redirects (status codes 301, 302,
11:54:50             303, 307, 308). Each redirect counts as a retry. Disabling retries
11:54:50             will disable redirect, too.
11:54:50 
11:54:50         :param assert_same_host:
11:54:50             If ``True``, will make sure that the host of the pool requests is
11:54:50             consistent else will raise HostChangedError. When ``False``, you can
11:54:50             use the pool on an HTTP proxy and request foreign hosts.
11:54:50 
11:54:50         :param timeout:
11:54:50             If specified, overrides the default timeout for this one
11:54:50             request. It may be a float (in seconds) or an instance of
11:54:50             :class:`urllib3.util.Timeout`.
11:54:50 
11:54:50         :param pool_timeout:
11:54:50             If set and the pool is set to block=True, then this method will
11:54:50             block for ``pool_timeout`` seconds and raise EmptyPoolError if no
11:54:50             connection is available within the time period.
11:54:50 
11:54:50         :param bool preload_content:
11:54:50             If True, the response's body will be preloaded into memory.
11:54:50 
11:54:50         :param bool decode_content:
11:54:50             If True, will attempt to decode the body based on the
11:54:50             'content-encoding' header.
11:54:50 
11:54:50         :param release_conn:
11:54:50             If False, then the urlopen call will not release the connection
11:54:50             back into the pool once a response is received (but will release if
11:54:50             you read the entire contents of the response such as when
11:54:50             `preload_content=True`). This is useful if you're not preloading
11:54:50             the response's content immediately. You will need to call
11:54:50             ``r.release_conn()`` on the response ``r`` to return the connection
11:54:50             back into the pool. If None, it takes the value of ``preload_content``
11:54:50             which defaults to ``True``.
11:54:50 
11:54:50         :param bool chunked:
11:54:50             If True, urllib3 will send the body using chunked transfer
11:54:50             encoding. Otherwise, urllib3 will send the body using the standard
11:54:50             content-length form. Defaults to False.
11:54:50 
11:54:50         :param int body_pos:
11:54:50             Position to seek to in file-like body in the event of a retry or
11:54:50             redirect. Typically this won't need to be set because urllib3 will
11:54:50             auto-populate the value when needed.
11:54:50         """
11:54:50         parsed_url = parse_url(url)
11:54:50         destination_scheme = parsed_url.scheme
11:54:50 
11:54:50         if headers is None:
11:54:50             headers = self.headers
11:54:50 
11:54:50         if not isinstance(retries, Retry):
11:54:50             retries = Retry.from_int(retries, redirect=redirect, default=self.retries)
11:54:50 
11:54:50         if release_conn is None:
11:54:50             release_conn = preload_content
11:54:50 
11:54:50         # Check host
11:54:50         if assert_same_host and not self.is_same_host(url):
11:54:50             raise HostChangedError(self, url, retries)
11:54:50 
11:54:50         # Ensure that the URL we're connecting to is properly encoded
11:54:50         if url.startswith("/"):
11:54:50             url = to_str(_encode_target(url))
11:54:50         else:
11:54:50             url = to_str(parsed_url.url)
11:54:50 
11:54:50         conn = None
11:54:50 
11:54:50         # Track whether `conn` needs to be released before
11:54:50         # returning/raising/recursing. Update this variable if necessary, and
11:54:50         # leave `release_conn` constant throughout the function. That way, if
11:54:50         # the function recurses, the original value of `release_conn` will be
11:54:50         # passed down into the recursive call, and its value will be respected.
11:54:50         #
11:54:50         # See issue #651 [1] for details.
11:54:50         #
11:54:50         # [1]
11:54:50         release_this_conn = release_conn
11:54:50 
11:54:50         http_tunnel_required = connection_requires_http_tunnel(
11:54:50             self.proxy, self.proxy_config, destination_scheme
11:54:50         )
11:54:50 
11:54:50         # Merge the proxy headers. Only done when not using HTTP CONNECT. We
11:54:50         # have to copy the headers dict so we can safely change it without those
11:54:50         # changes being reflected in anyone else's copy.
11:54:50         if not http_tunnel_required:
11:54:50             headers = headers.copy()  # type: ignore[attr-defined]
11:54:50             headers.update(self.proxy_headers)  # type: ignore[union-attr]
11:54:50 
11:54:50         # Must keep the exception bound to a separate variable or else Python 3
11:54:50         # complains about UnboundLocalError.
11:54:50         err = None
11:54:50 
11:54:50         # Keep track of whether we cleanly exited the except block. This
11:54:50         # ensures we do proper cleanup in finally.
11:54:50         clean_exit = False
11:54:50 
11:54:50         # Rewind body position, if needed. Record current position
11:54:50         # for future rewinds in the event of a redirect/retry.
11:54:50         body_pos = set_file_position(body, body_pos)
11:54:50 
11:54:50         try:
11:54:50             # Request a connection from the queue.
11:54:50             timeout_obj = self._get_timeout(timeout)
11:54:50             conn = self._get_conn(timeout=pool_timeout)
11:54:50 
11:54:50             conn.timeout = timeout_obj.connect_timeout  # type: ignore[assignment]
11:54:50 
11:54:50             # Is this a closed/new connection that requires CONNECT tunnelling?
11:54:50             if self.proxy is not None and http_tunnel_required and conn.is_closed:
11:54:50                 try:
11:54:50                     self._prepare_proxy(conn)
11:54:50                 except (BaseSSLError, OSError, SocketTimeout) as e:
11:54:50                     self._raise_timeout(
11:54:50                         err=e, url=self.proxy.url, timeout_value=conn.timeout
11:54:50                     )
11:54:50                     raise
11:54:50 
11:54:50             # If we're going to release the connection in ``finally:``, then
11:54:50             # the response doesn't need to know about the connection. Otherwise
11:54:50             # it will also try to release it and we'll have a double-release
11:54:50             # mess.
11:54:50             response_conn = conn if not release_conn else None
11:54:50 
11:54:50             # Make the request on the HTTPConnection object
11:54:50 >           response = self._make_request(
11:54:50                 conn,
11:54:50                 method,
11:54:50                 url,
11:54:50                 timeout=timeout_obj,
11:54:50                 body=body,
11:54:50                 headers=headers,
11:54:50                 chunked=chunked,
11:54:50                 retries=retries,
11:54:50                 response_conn=response_conn,
11:54:50                 preload_content=preload_content,
11:54:50                 decode_content=decode_content,
11:54:50                 **response_kw,
11:54:50             )
11:54:50 
11:54:50 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connectionpool.py:787: 
11:54:50 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
11:54:50 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connectionpool.py:493: in _make_request
11:54:50     conn.request(
11:54:50 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:500: in request
11:54:50     self.endheaders()
11:54:50 /opt/pyenv/versions/3.11.10/lib/python3.11/http/client.py:1298: in endheaders
11:54:50     self._send_output(message_body, encode_chunked=encode_chunked)
11:54:50 /opt/pyenv/versions/3.11.10/lib/python3.11/http/client.py:1058: in _send_output
11:54:50     self.send(msg)
11:54:50 /opt/pyenv/versions/3.11.10/lib/python3.11/http/client.py:996: in send
11:54:50     self.connect()
11:54:50 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:331: in connect
11:54:50     self.sock = self._new_conn()
11:54:50                 ^^^^^^^^^^^^^^^^
11:54:50 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
11:54:50 
11:54:50 self = 
11:54:50 
11:54:50     def _new_conn(self) -> socket.socket:
11:54:50         """Establish a socket connection and set nodelay settings on it.
11:54:50 
11:54:50         :return: New socket connection.
11:54:50         """
11:54:50         try:
11:54:50             sock = connection.create_connection(
11:54:50                 (self._dns_host, self.port),
11:54:50                 self.timeout,
11:54:50                 source_address=self.source_address,
11:54:50                 socket_options=self.socket_options,
11:54:50             )
11:54:50         except socket.gaierror as e:
11:54:50             raise NameResolutionError(self.host, self, e) from e
11:54:50         except SocketTimeout as e:
11:54:50             raise ConnectTimeoutError(
11:54:50                 self,
11:54:50                 f"Connection to {self.host} timed out. (connect timeout={self.timeout})",
11:54:50             ) from e
11:54:50 
11:54:50         except OSError as e:
11:54:50 >           raise NewConnectionError(
11:54:50                 self, f"Failed to establish a new connection: {e}"
11:54:50             ) from e
11:54:50 E           urllib3.exceptions.NewConnectionError: HTTPConnection(host='localhost', port=8191): Failed to establish a new connection: [Errno 111] Connection refused
11:54:50 
11:54:50 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:219: NewConnectionError
11:54:50 
11:54:50 The above exception was the direct cause of the following exception:
11:54:50 
11:54:50 self = 
11:54:50 request = , stream = False
11:54:50 timeout = Timeout(connect=30, read=30, total=None), verify = True, cert = None
11:54:50 proxies = OrderedDict()
11:54:50 
11:54:50     def send(
11:54:50         self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None
11:54:50     ):
11:54:50         """Sends PreparedRequest object. Returns Response object.
11:54:50 
11:54:50         :param request: The :class:`PreparedRequest ` being sent.
11:54:50         :param stream: (optional) Whether to stream the request content.
11:54:50         :param timeout: (optional) How long to wait for the server to send
11:54:50             data before giving up, as a float, or a :ref:`(connect timeout,
11:54:50             read timeout) ` tuple.
11:54:50         :type timeout: float or tuple or urllib3 Timeout object
11:54:50         :param verify: (optional) Either a boolean, in which case it controls whether
11:54:50             we verify the server's TLS certificate, or a string, in which case it
11:54:50             must be a path to a CA bundle to use
11:54:50         :param cert: (optional) Any user-provided SSL certificate to be trusted.
11:54:50         :param proxies: (optional) The proxies dictionary to apply to the request.
11:54:50         :rtype: requests.Response
11:54:50         """
11:54:50 
11:54:50         try:
11:54:50             conn = self.get_connection_with_tls_context(
11:54:50                 request, verify, proxies=proxies, cert=cert
11:54:50             )
11:54:50         except LocationValueError as e:
11:54:50             raise InvalidURL(e, request=request)
11:54:50 
11:54:50         self.cert_verify(conn, request.url, verify, cert)
11:54:50         url = self.request_url(request, proxies)
11:54:50         self.add_headers(
11:54:50             request,
11:54:50             stream=stream,
11:54:50             timeout=timeout,
11:54:50             verify=verify,
11:54:50             cert=cert,
11:54:50             proxies=proxies,
11:54:50         )
11:54:50 
11:54:50         chunked = not (request.body is None or "Content-Length" in request.headers)
11:54:50 
11:54:50         if isinstance(timeout, tuple):
11:54:50             try:
11:54:50                 connect, read = timeout
11:54:50                 timeout = TimeoutSauce(connect=connect, read=read)
11:54:50             except ValueError:
11:54:50                 raise ValueError(
11:54:50                     f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, "
11:54:50                     f"or a single float to set both timeouts to the same value."
11:54:50                 )
11:54:50         elif isinstance(timeout, TimeoutSauce):
11:54:50             pass
11:54:50         else:
11:54:50             timeout = TimeoutSauce(connect=timeout, read=timeout)
11:54:50 
11:54:50         try:
11:54:50 >           resp = conn.urlopen(
11:54:50                 method=request.method,
11:54:50                 url=url,
11:54:50                 body=request.body,
11:54:50                 headers=request.headers,
11:54:50                 redirect=False,
11:54:50                 assert_same_host=False,
11:54:50                 preload_content=False,
11:54:50                 decode_content=False,
11:54:50                 retries=self.max_retries,
11:54:50                 timeout=timeout,
11:54:50                 chunked=chunked,
11:54:50             )
11:54:50 
11:54:50 ../.tox/tests121/lib/python3.11/site-packages/requests/adapters.py:644: 
11:54:50 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
11:54:50 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connectionpool.py:841: in urlopen
11:54:50     retries = retries.increment(
11:54:50 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
11:54:50 
11:54:50 self = Retry(total=0, connect=None, read=False, redirect=None, status=None)
11:54:50 method = 'DELETE'
11:54:50 url = '/rests/data/network-topology:network-topology/topology=topology-netconf/node=ROADMA01'
11:54:50 response = None
11:54:50 error = NewConnectionError("HTTPConnection(host='localhost', port=8191): Failed to establish a new connection: [Errno 111] Connection refused")
11:54:50 _pool = 
11:54:50 _stacktrace = 
11:54:50 
11:54:50     def increment(
11:54:50         self,
11:54:50         method: str | None = None,
11:54:50         url: str | None = None,
11:54:50         response: BaseHTTPResponse | None = None,
11:54:50         error: Exception | None = None,
11:54:50         _pool: ConnectionPool | None = None,
11:54:50         _stacktrace: TracebackType | None = None,
11:54:50     ) -> Self:
11:54:50         """Return a new Retry object with incremented retry counters.
11:54:50 
11:54:50         :param response: A response object, or None, if the server did not
11:54:50             return a response.
11:54:50         :type response: :class:`~urllib3.response.BaseHTTPResponse`
11:54:50         :param Exception error: An error encountered during the request, or
11:54:50             None if the response was received successfully.
11:54:50 
11:54:50         :return: A new ``Retry`` object.
11:54:50         """
11:54:50         if self.total is False and error:
11:54:50             # Disabled, indicate to re-raise the error.
11:54:50             raise reraise(type(error), error, _stacktrace)
11:54:50 
11:54:50         total = self.total
11:54:50         if total is not None:
11:54:50             total -= 1
11:54:50 
11:54:50         connect = self.connect
11:54:50         read = self.read
11:54:50         redirect = self.redirect
11:54:50         status_count = self.status
11:54:50         other = self.other
11:54:50         cause = "unknown"
11:54:50         status = None
11:54:50         redirect_location = None
11:54:50 
11:54:50         if error and self._is_connection_error(error):
11:54:50             # Connect retry?
11:54:50             if connect is False:
11:54:50                 raise reraise(type(error), error, _stacktrace)
11:54:50             elif connect is not None:
11:54:50                 connect -= 1
11:54:50 
11:54:50         elif error and self._is_read_error(error):
11:54:50             # Read retry?
11:54:50             if read is False or method is None or not self._is_method_retryable(method):
11:54:50                 raise reraise(type(error), error, _stacktrace)
11:54:50             elif read is not None:
11:54:50                 read -= 1
11:54:50 
11:54:50         elif error:
11:54:50             # Other retry?
11:54:50             if other is not None:
11:54:50                 other -= 1
11:54:50 
11:54:50         elif response and response.get_redirect_location():
11:54:50             # Redirect retry?
11:54:50 if redirect is not None: 11:54:50 redirect -= 1 11:54:50 cause = "too many redirects" 11:54:50 response_redirect_location = response.get_redirect_location() 11:54:50 if response_redirect_location: 11:54:50 redirect_location = response_redirect_location 11:54:50 status = response.status 11:54:50 11:54:50 else: 11:54:50 # Incrementing because of a server error like a 500 in 11:54:50 # status_forcelist and the given method is in the allowed_methods 11:54:50 cause = ResponseError.GENERIC_ERROR 11:54:50 if response and response.status: 11:54:50 if status_count is not None: 11:54:50 status_count -= 1 11:54:50 cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status) 11:54:50 status = response.status 11:54:50 11:54:50 history = self.history + ( 11:54:50 RequestHistory(method, url, error, status, redirect_location), 11:54:50 ) 11:54:50 11:54:50 new_retry = self.new( 11:54:50 total=total, 11:54:50 connect=connect, 11:54:50 read=read, 11:54:50 redirect=redirect, 11:54:50 status=status_count, 11:54:50 other=other, 11:54:50 history=history, 11:54:50 ) 11:54:50 11:54:50 if new_retry.is_exhausted(): 11:54:50 reason = error or ResponseError(cause) 11:54:50 > raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type] 11:54:50 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 11:54:50 E urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=8191): Max retries exceeded with url: /rests/data/network-topology:network-topology/topology=topology-netconf/node=ROADMA01 (Caused by NewConnectionError("HTTPConnection(host='localhost', port=8191): Failed to establish a new connection: [Errno 111] Connection refused")) 11:54:50 11:54:50 ../.tox/tests121/lib/python3.11/site-packages/urllib3/util/retry.py:535: MaxRetryError 11:54:50 11:54:50 During handling of the above exception, another exception occurred: 11:54:50 11:54:50 self = 11:54:50 11:54:50 def test_19_rdm_device_disconnection(self): 11:54:50 > response = 
test_utils.unmount_device("ROADMA01") 11:54:50 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 11:54:50 11:54:50 transportpce_tests/1.2.1/test01_portmapping.py:213: 11:54:50 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 11:54:50 transportpce_tests/common/test_utils.py:398: in unmount_device 11:54:50 response = delete_request(url[RESTCONF_VERSION].format('{}', node)) 11:54:50 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 11:54:50 transportpce_tests/common/test_utils.py:134: in delete_request 11:54:50 return requests.request( 11:54:50 ../.tox/tests121/lib/python3.11/site-packages/requests/api.py:59: in request 11:54:50 return session.request(method=method, url=url, **kwargs) 11:54:50 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 11:54:50 ../.tox/tests121/lib/python3.11/site-packages/requests/sessions.py:589: in request 11:54:50 resp = self.send(prep, **send_kwargs) 11:54:50 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 11:54:50 ../.tox/tests121/lib/python3.11/site-packages/requests/sessions.py:703: in send 11:54:50 r = adapter.send(request, **kwargs) 11:54:50 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 11:54:50 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 11:54:50 11:54:50 self = 11:54:50 request = , stream = False 11:54:50 timeout = Timeout(connect=30, read=30, total=None), verify = True, cert = None 11:54:50 proxies = OrderedDict() 11:54:50 11:54:50 def send( 11:54:50 self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 11:54:50 ): 11:54:50 """Sends PreparedRequest object. Returns Response object. 11:54:50 11:54:50 :param request: The :class:`PreparedRequest ` being sent. 11:54:50 :param stream: (optional) Whether to stream the request content. 11:54:50 :param timeout: (optional) How long to wait for the server to send 11:54:50 data before giving up, as a float, or a :ref:`(connect timeout, 11:54:50 read timeout) ` tuple. 
11:54:50         :type timeout: float or tuple or urllib3 Timeout object
11:54:50         :param verify: (optional) Either a boolean, in which case it controls whether
11:54:50             we verify the server's TLS certificate, or a string, in which case it
11:54:50             must be a path to a CA bundle to use
11:54:50         :param cert: (optional) Any user-provided SSL certificate to be trusted.
11:54:50         :param proxies: (optional) The proxies dictionary to apply to the request.
11:54:50         :rtype: requests.Response
11:54:50         """
11:54:50 
11:54:50         try:
11:54:50             conn = self.get_connection_with_tls_context(
11:54:50                 request, verify, proxies=proxies, cert=cert
11:54:50             )
11:54:50         except LocationValueError as e:
11:54:50             raise InvalidURL(e, request=request)
11:54:50 
11:54:50         self.cert_verify(conn, request.url, verify, cert)
11:54:50         url = self.request_url(request, proxies)
11:54:50         self.add_headers(
11:54:50             request,
11:54:50             stream=stream,
11:54:50             timeout=timeout,
11:54:50             verify=verify,
11:54:50             cert=cert,
11:54:50             proxies=proxies,
11:54:50         )
11:54:50 
11:54:50         chunked = not (request.body is None or "Content-Length" in request.headers)
11:54:50 
11:54:50         if isinstance(timeout, tuple):
11:54:50             try:
11:54:50                 connect, read = timeout
11:54:50                 timeout = TimeoutSauce(connect=connect, read=read)
11:54:50             except ValueError:
11:54:50                 raise ValueError(
11:54:50                     f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, "
11:54:50                     f"or a single float to set both timeouts to the same value."
11:54:50                 )
11:54:50         elif isinstance(timeout, TimeoutSauce):
11:54:50             pass
11:54:50         else:
11:54:50             timeout = TimeoutSauce(connect=timeout, read=timeout)
11:54:50 
11:54:50         try:
11:54:50             resp = conn.urlopen(
11:54:50                 method=request.method,
11:54:50                 url=url,
11:54:50                 body=request.body,
11:54:50                 headers=request.headers,
11:54:50                 redirect=False,
11:54:50                 assert_same_host=False,
11:54:50                 preload_content=False,
11:54:50                 decode_content=False,
11:54:50                 retries=self.max_retries,
11:54:50                 timeout=timeout,
11:54:50                 chunked=chunked,
11:54:50             )
11:54:50 
11:54:50         except (ProtocolError, OSError) as err:
11:54:50             raise ConnectionError(err, request=request)
11:54:50 
11:54:50         except MaxRetryError as e:
11:54:50             if isinstance(e.reason, ConnectTimeoutError):
11:54:50                 # TODO: Remove this in 3.0.0: see #2811
11:54:50                 if not isinstance(e.reason, NewConnectionError):
11:54:50                     raise ConnectTimeout(e, request=request)
11:54:50 
11:54:50             if isinstance(e.reason, ResponseError):
11:54:50                 raise RetryError(e, request=request)
11:54:50 
11:54:50             if isinstance(e.reason, _ProxyError):
11:54:50                 raise ProxyError(e, request=request)
11:54:50 
11:54:50             if isinstance(e.reason, _SSLError):
11:54:50                 # This branch is for urllib3 v1.22 and later.
11:54:50                 raise SSLError(e, request=request)
11:54:50 
11:54:50 >           raise ConnectionError(e, request=request)
11:54:50 E           requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=8191): Max retries exceeded with url: /rests/data/network-topology:network-topology/topology=topology-netconf/node=ROADMA01 (Caused by NewConnectionError("HTTPConnection(host='localhost', port=8191): Failed to establish a new connection: [Errno 111] Connection refused"))
11:54:50 
11:54:50 ../.tox/tests121/lib/python3.11/site-packages/requests/adapters.py:677: ConnectionError
11:54:50 ----------------------------- Captured stdout call -----------------------------
11:54:50 execution of test_19_rdm_device_disconnection
11:54:50 _________ TestTransportPCEPortmapping.test_20_rdm_device_disconnected __________
11:54:50 
11:54:50 self = 
11:54:50 
11:54:50     def _new_conn(self) -> socket.socket:
11:54:50         """Establish a socket connection and set nodelay settings on it.
11:54:50 
11:54:50         :return: New socket connection.
11:54:50 """ 11:54:50 try: 11:54:50 > sock = connection.create_connection( 11:54:50 (self._dns_host, self.port), 11:54:50 self.timeout, 11:54:50 source_address=self.source_address, 11:54:50 socket_options=self.socket_options, 11:54:50 ) 11:54:50 11:54:50 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:204: 11:54:50 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 11:54:50 ../.tox/tests121/lib/python3.11/site-packages/urllib3/util/connection.py:85: in create_connection 11:54:50 raise err 11:54:50 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 11:54:50 11:54:50 address = ('localhost', 8191), timeout = 30, source_address = None 11:54:50 socket_options = [(6, 1, 1)] 11:54:50 11:54:50 def create_connection( 11:54:50 address: tuple[str, int], 11:54:50 timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 11:54:50 source_address: tuple[str, int] | None = None, 11:54:50 socket_options: _TYPE_SOCKET_OPTIONS | None = None, 11:54:50 ) -> socket.socket: 11:54:50 """Connect to *address* and return the socket object. 11:54:50 11:54:50 Convenience function. Connect to *address* (a 2-tuple ``(host, 11:54:50 port)``) and return the socket object. Passing the optional 11:54:50 *timeout* parameter will set the timeout on the socket instance 11:54:50 before attempting to connect. If no *timeout* is supplied, the 11:54:50 global default timeout setting returned by :func:`socket.getdefaulttimeout` 11:54:50 is used. If *source_address* is set it must be a tuple of (host, port) 11:54:50 for the socket to bind as a source address before making the connection. 11:54:50 An host of '' or port 0 tells the OS to use the default. 
11:54:50 """ 11:54:50 11:54:50 host, port = address 11:54:50 if host.startswith("["): 11:54:50 host = host.strip("[]") 11:54:50 err = None 11:54:50 11:54:50 # Using the value from allowed_gai_family() in the context of getaddrinfo lets 11:54:50 # us select whether to work with IPv4 DNS records, IPv6 records, or both. 11:54:50 # The original create_connection function always returns all records. 11:54:50 family = allowed_gai_family() 11:54:50 11:54:50 try: 11:54:50 host.encode("idna") 11:54:50 except UnicodeError: 11:54:50 raise LocationParseError(f"'{host}', label empty or too long") from None 11:54:50 11:54:50 for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM): 11:54:50 af, socktype, proto, canonname, sa = res 11:54:50 sock = None 11:54:50 try: 11:54:50 sock = socket.socket(af, socktype, proto) 11:54:50 11:54:50 # If provided, set socket level options before connecting. 11:54:50 _set_socket_options(sock, socket_options) 11:54:50 11:54:50 if timeout is not _DEFAULT_TIMEOUT: 11:54:50 sock.settimeout(timeout) 11:54:50 if source_address: 11:54:50 sock.bind(source_address) 11:54:50 > sock.connect(sa) 11:54:50 E ConnectionRefusedError: [Errno 111] Connection refused 11:54:50 11:54:50 ../.tox/tests121/lib/python3.11/site-packages/urllib3/util/connection.py:73: ConnectionRefusedError 11:54:50 11:54:50 The above exception was the direct cause of the following exception: 11:54:50 11:54:50 self = 11:54:50 method = 'GET' 11:54:50 url = '/rests/data/network-topology:network-topology/topology=topology-netconf/node=ROADMA01?content=nonconfig' 11:54:50 body = None 11:54:50 headers = {'User-Agent': 'python-requests/2.32.5', 'Accept-Encoding': 'gzip, deflate', 'Accept': 'application/json', 'Connection': 'keep-alive', 'Content-Type': 'application/json', 'Authorization': 'Basic YWRtaW46YWRtaW4='} 11:54:50 retries = Retry(total=0, connect=None, read=False, redirect=None, status=None) 11:54:50 redirect = False, assert_same_host = False 11:54:50 timeout = 
Timeout(connect=30, read=30, total=None), pool_timeout = None 11:54:50 release_conn = False, chunked = False, body_pos = None, preload_content = False 11:54:50 decode_content = False, response_kw = {} 11:54:50 parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/rests/data/network-topology:network-topology/topology=topology-netconf/node=ROADMA01', query='content=nonconfig', fragment=None) 11:54:50 destination_scheme = None, conn = None, release_this_conn = True 11:54:50 http_tunnel_required = False, err = None, clean_exit = False 11:54:50 11:54:50 def urlopen( # type: ignore[override] 11:54:50 self, 11:54:50 method: str, 11:54:50 url: str, 11:54:50 body: _TYPE_BODY | None = None, 11:54:50 headers: typing.Mapping[str, str] | None = None, 11:54:50 retries: Retry | bool | int | None = None, 11:54:50 redirect: bool = True, 11:54:50 assert_same_host: bool = True, 11:54:50 timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 11:54:50 pool_timeout: int | None = None, 11:54:50 release_conn: bool | None = None, 11:54:50 chunked: bool = False, 11:54:50 body_pos: _TYPE_BODY_POSITION | None = None, 11:54:50 preload_content: bool = True, 11:54:50 decode_content: bool = True, 11:54:50 **response_kw: typing.Any, 11:54:50 ) -> BaseHTTPResponse: 11:54:50 """ 11:54:50 Get a connection from the pool and perform an HTTP request. This is the 11:54:50 lowest level call for making a request, so you'll need to specify all 11:54:50 the raw details. 11:54:50 11:54:50 .. note:: 11:54:50 11:54:50 More commonly, it's appropriate to use a convenience method 11:54:50 such as :meth:`request`. 11:54:50 11:54:50 .. note:: 11:54:50 11:54:50 `release_conn` will only behave as expected if 11:54:50 `preload_content=False` because we want to make 11:54:50 `preload_content=False` the default behaviour someday soon without 11:54:50 breaking backwards compatibility. 11:54:50 11:54:50 :param method: 11:54:50 HTTP request method (such as GET, POST, PUT, etc.) 
11:54:50 
11:54:50         :param url:
11:54:50             The URL to perform the request on.
11:54:50 
11:54:50         :param body:
11:54:50             Data to send in the request body, either :class:`str`, :class:`bytes`,
11:54:50             an iterable of :class:`str`/:class:`bytes`, or a file-like object.
11:54:50 
11:54:50         :param headers:
11:54:50             Dictionary of custom headers to send, such as User-Agent,
11:54:50             If-None-Match, etc. If None, pool headers are used. If provided,
11:54:50             these headers completely replace any pool-specific headers.
11:54:50 
11:54:50         :param retries:
11:54:50             Configure the number of retries to allow before raising a
11:54:50             :class:`~urllib3.exceptions.MaxRetryError` exception.
11:54:50 
11:54:50             If ``None`` (default) will retry 3 times, see ``Retry.DEFAULT``. Pass a
11:54:50             :class:`~urllib3.util.retry.Retry` object for fine-grained control
11:54:50             over different types of retries.
11:54:50             Pass an integer number to retry connection errors that many times,
11:54:50             but no other types of errors. Pass zero to never retry.
11:54:50 
11:54:50             If ``False``, then retries are disabled and any exception is raised
11:54:50             immediately. Also, instead of raising a MaxRetryError on redirects,
11:54:50             the redirect response will be returned.
11:54:50 
11:54:50         :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int.
11:54:50 
11:54:50         :param redirect:
11:54:50             If True, automatically handle redirects (status codes 301, 302,
11:54:50             303, 307, 308). Each redirect counts as a retry. Disabling retries
11:54:50             will disable redirect, too.
11:54:50 
11:54:50         :param assert_same_host:
11:54:50             If ``True``, will make sure that the host of the pool requests is
11:54:50             consistent else will raise HostChangedError. When ``False``, you can
11:54:50             use the pool on an HTTP proxy and request foreign hosts.
11:54:50 
11:54:50         :param timeout:
11:54:50             If specified, overrides the default timeout for this one
11:54:50             request. It may be a float (in seconds) or an instance of
11:54:50             :class:`urllib3.util.Timeout`.
11:54:50 
11:54:50         :param pool_timeout:
11:54:50             If set and the pool is set to block=True, then this method will
11:54:50             block for ``pool_timeout`` seconds and raise EmptyPoolError if no
11:54:50             connection is available within the time period.
11:54:50 
11:54:50         :param bool preload_content:
11:54:50             If True, the response's body will be preloaded into memory.
11:54:50 
11:54:50         :param bool decode_content:
11:54:50             If True, will attempt to decode the body based on the
11:54:50             'content-encoding' header.
11:54:50 
11:54:50         :param release_conn:
11:54:50             If False, then the urlopen call will not release the connection
11:54:50             back into the pool once a response is received (but will release if
11:54:50             you read the entire contents of the response such as when
11:54:50             `preload_content=True`). This is useful if you're not preloading
11:54:50             the response's content immediately. You will need to call
11:54:50             ``r.release_conn()`` on the response ``r`` to return the connection
11:54:50             back into the pool. If None, it takes the value of ``preload_content``
11:54:50             which defaults to ``True``.
11:54:50 
11:54:50         :param bool chunked:
11:54:50             If True, urllib3 will send the body using chunked transfer
11:54:50             encoding. Otherwise, urllib3 will send the body using the standard
11:54:50             content-length form. Defaults to False.
11:54:50 
11:54:50         :param int body_pos:
11:54:50             Position to seek to in file-like body in the event of a retry or
11:54:50             redirect. Typically this won't need to be set because urllib3 will
11:54:50             auto-populate the value when needed.
11:54:50 """ 11:54:50 parsed_url = parse_url(url) 11:54:50 destination_scheme = parsed_url.scheme 11:54:50 11:54:50 if headers is None: 11:54:50 headers = self.headers 11:54:50 11:54:50 if not isinstance(retries, Retry): 11:54:50 retries = Retry.from_int(retries, redirect=redirect, default=self.retries) 11:54:50 11:54:50 if release_conn is None: 11:54:50 release_conn = preload_content 11:54:50 11:54:50 # Check host 11:54:50 if assert_same_host and not self.is_same_host(url): 11:54:50 raise HostChangedError(self, url, retries) 11:54:50 11:54:50 # Ensure that the URL we're connecting to is properly encoded 11:54:50 if url.startswith("/"): 11:54:50 url = to_str(_encode_target(url)) 11:54:50 else: 11:54:50 url = to_str(parsed_url.url) 11:54:50 11:54:50 conn = None 11:54:50 11:54:50 # Track whether `conn` needs to be released before 11:54:50 # returning/raising/recursing. Update this variable if necessary, and 11:54:50 # leave `release_conn` constant throughout the function. That way, if 11:54:50 # the function recurses, the original value of `release_conn` will be 11:54:50 # passed down into the recursive call, and its value will be respected. 11:54:50 # 11:54:50 # See issue #651 [1] for details. 11:54:50 # 11:54:50 # [1] 11:54:50 release_this_conn = release_conn 11:54:50 11:54:50 http_tunnel_required = connection_requires_http_tunnel( 11:54:50 self.proxy, self.proxy_config, destination_scheme 11:54:50 ) 11:54:50 11:54:50 # Merge the proxy headers. Only done when not using HTTP CONNECT. We 11:54:50 # have to copy the headers dict so we can safely change it without those 11:54:50 # changes being reflected in anyone else's copy. 11:54:50 if not http_tunnel_required: 11:54:50 headers = headers.copy() # type: ignore[attr-defined] 11:54:50 headers.update(self.proxy_headers) # type: ignore[union-attr] 11:54:50 11:54:50 # Must keep the exception bound to a separate variable or else Python 3 11:54:50 # complains about UnboundLocalError. 
11:54:50 err = None 11:54:50 11:54:50 # Keep track of whether we cleanly exited the except block. This 11:54:50 # ensures we do proper cleanup in finally. 11:54:50 clean_exit = False 11:54:50 11:54:50 # Rewind body position, if needed. Record current position 11:54:50 # for future rewinds in the event of a redirect/retry. 11:54:50 body_pos = set_file_position(body, body_pos) 11:54:50 11:54:50 try: 11:54:50 # Request a connection from the queue. 11:54:50 timeout_obj = self._get_timeout(timeout) 11:54:50 conn = self._get_conn(timeout=pool_timeout) 11:54:50 11:54:50 conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] 11:54:50 11:54:50 # Is this a closed/new connection that requires CONNECT tunnelling? 11:54:50 if self.proxy is not None and http_tunnel_required and conn.is_closed: 11:54:50 try: 11:54:50 self._prepare_proxy(conn) 11:54:50 except (BaseSSLError, OSError, SocketTimeout) as e: 11:54:50 self._raise_timeout( 11:54:50 err=e, url=self.proxy.url, timeout_value=conn.timeout 11:54:50 ) 11:54:50 raise 11:54:50 11:54:50 # If we're going to release the connection in ``finally:``, then 11:54:50 # the response doesn't need to know about the connection. Otherwise 11:54:50 # it will also try to release it and we'll have a double-release 11:54:50 # mess. 
11:54:50 response_conn = conn if not release_conn else None 11:54:50 11:54:50 # Make the request on the HTTPConnection object 11:54:50 > response = self._make_request( 11:54:50 conn, 11:54:50 method, 11:54:50 url, 11:54:50 timeout=timeout_obj, 11:54:50 body=body, 11:54:50 headers=headers, 11:54:50 chunked=chunked, 11:54:50 retries=retries, 11:54:50 response_conn=response_conn, 11:54:50 preload_content=preload_content, 11:54:50 decode_content=decode_content, 11:54:50 **response_kw, 11:54:50 ) 11:54:50 11:54:50 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connectionpool.py:787: 11:54:50 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 11:54:50 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connectionpool.py:493: in _make_request 11:54:50 conn.request( 11:54:50 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:500: in request 11:54:50 self.endheaders() 11:54:50 /opt/pyenv/versions/3.11.10/lib/python3.11/http/client.py:1298: in endheaders 11:54:50 self._send_output(message_body, encode_chunked=encode_chunked) 11:54:50 /opt/pyenv/versions/3.11.10/lib/python3.11/http/client.py:1058: in _send_output 11:54:50 self.send(msg) 11:54:50 /opt/pyenv/versions/3.11.10/lib/python3.11/http/client.py:996: in send 11:54:50 self.connect() 11:54:50 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:331: in connect 11:54:50 self.sock = self._new_conn() 11:54:50 ^^^^^^^^^^^^^^^^ 11:54:50 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 11:54:50 11:54:50 self = 11:54:50 11:54:50 def _new_conn(self) -> socket.socket: 11:54:50 """Establish a socket connection and set nodelay settings on it. 11:54:50 11:54:50 :return: New socket connection. 
11:54:50 """ 11:54:50 try: 11:54:50 sock = connection.create_connection( 11:54:50 (self._dns_host, self.port), 11:54:50 self.timeout, 11:54:50 source_address=self.source_address, 11:54:50 socket_options=self.socket_options, 11:54:50 ) 11:54:50 except socket.gaierror as e: 11:54:50 raise NameResolutionError(self.host, self, e) from e 11:54:50 except SocketTimeout as e: 11:54:50 raise ConnectTimeoutError( 11:54:50 self, 11:54:50 f"Connection to {self.host} timed out. (connect timeout={self.timeout})", 11:54:50 ) from e 11:54:50 11:54:50 except OSError as e: 11:54:50 > raise NewConnectionError( 11:54:50 self, f"Failed to establish a new connection: {e}" 11:54:50 ) from e 11:54:50 E urllib3.exceptions.NewConnectionError: HTTPConnection(host='localhost', port=8191): Failed to establish a new connection: [Errno 111] Connection refused 11:54:50 11:54:50 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:219: NewConnectionError 11:54:50 11:54:50 The above exception was the direct cause of the following exception: 11:54:50 11:54:50 self = 11:54:50 request = , stream = False 11:54:50 timeout = Timeout(connect=30, read=30, total=None), verify = True, cert = None 11:54:50 proxies = OrderedDict() 11:54:50 11:54:50 def send( 11:54:50 self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 11:54:50 ): 11:54:50 """Sends PreparedRequest object. Returns Response object. 11:54:50 11:54:50 :param request: The :class:`PreparedRequest ` being sent. 11:54:50 :param stream: (optional) Whether to stream the request content. 11:54:50 :param timeout: (optional) How long to wait for the server to send 11:54:50 data before giving up, as a float, or a :ref:`(connect timeout, 11:54:50 read timeout) ` tuple. 
11:54:50 :type timeout: float or tuple or urllib3 Timeout object 11:54:50 :param verify: (optional) Either a boolean, in which case it controls whether 11:54:50 we verify the server's TLS certificate, or a string, in which case it 11:54:50 must be a path to a CA bundle to use 11:54:50 :param cert: (optional) Any user-provided SSL certificate to be trusted. 11:54:50 :param proxies: (optional) The proxies dictionary to apply to the request. 11:54:50 :rtype: requests.Response 11:54:50 """ 11:54:50 11:54:50 try: 11:54:50 conn = self.get_connection_with_tls_context( 11:54:50 request, verify, proxies=proxies, cert=cert 11:54:50 ) 11:54:50 except LocationValueError as e: 11:54:50 raise InvalidURL(e, request=request) 11:54:50 11:54:50 self.cert_verify(conn, request.url, verify, cert) 11:54:50 url = self.request_url(request, proxies) 11:54:50 self.add_headers( 11:54:50 request, 11:54:50 stream=stream, 11:54:50 timeout=timeout, 11:54:50 verify=verify, 11:54:50 cert=cert, 11:54:50 proxies=proxies, 11:54:50 ) 11:54:50 11:54:50 chunked = not (request.body is None or "Content-Length" in request.headers) 11:54:50 11:54:50 if isinstance(timeout, tuple): 11:54:50 try: 11:54:50 connect, read = timeout 11:54:50 timeout = TimeoutSauce(connect=connect, read=read) 11:54:50 except ValueError: 11:54:50 raise ValueError( 11:54:50 f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 11:54:50 f"or a single float to set both timeouts to the same value." 
11:54:50                 )
11:54:50         elif isinstance(timeout, TimeoutSauce):
11:54:50             pass
11:54:50         else:
11:54:50             timeout = TimeoutSauce(connect=timeout, read=timeout)
11:54:50 
11:54:50         try:
11:54:50 >           resp = conn.urlopen(
11:54:50                 method=request.method,
11:54:50                 url=url,
11:54:50                 body=request.body,
11:54:50                 headers=request.headers,
11:54:50                 redirect=False,
11:54:50                 assert_same_host=False,
11:54:50                 preload_content=False,
11:54:50                 decode_content=False,
11:54:50                 retries=self.max_retries,
11:54:50                 timeout=timeout,
11:54:50                 chunked=chunked,
11:54:50             )
11:54:50 
11:54:50 ../.tox/tests121/lib/python3.11/site-packages/requests/adapters.py:644: 
11:54:50 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
11:54:50 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connectionpool.py:841: in urlopen
11:54:50     retries = retries.increment(
11:54:50 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
11:54:50 
11:54:50 self = Retry(total=0, connect=None, read=False, redirect=None, status=None)
11:54:50 method = 'GET'
11:54:50 url = '/rests/data/network-topology:network-topology/topology=topology-netconf/node=ROADMA01?content=nonconfig'
11:54:50 response = None
11:54:50 error = NewConnectionError("HTTPConnection(host='localhost', port=8191): Failed to establish a new connection: [Errno 111] Connection refused")
11:54:50 _pool = 
11:54:50 _stacktrace = 
11:54:50 
11:54:50     def increment(
11:54:50         self,
11:54:50         method: str | None = None,
11:54:50         url: str | None = None,
11:54:50         response: BaseHTTPResponse | None = None,
11:54:50         error: Exception | None = None,
11:54:50         _pool: ConnectionPool | None = None,
11:54:50         _stacktrace: TracebackType | None = None,
11:54:50     ) -> Self:
11:54:50         """Return a new Retry object with incremented retry counters.
11:54:50 
11:54:50         :param response: A response object, or None, if the server did not
11:54:50             return a response.
11:54:50         :type response: :class:`~urllib3.response.BaseHTTPResponse`
11:54:50         :param Exception error: An error encountered during the request, or
11:54:50             None if the response was received successfully.
11:54:50 
11:54:50         :return: A new ``Retry`` object.
11:54:50         """
11:54:50         if self.total is False and error:
11:54:50             # Disabled, indicate to re-raise the error.
11:54:50             raise reraise(type(error), error, _stacktrace)
11:54:50 
11:54:50         total = self.total
11:54:50         if total is not None:
11:54:50             total -= 1
11:54:50 
11:54:50         connect = self.connect
11:54:50         read = self.read
11:54:50         redirect = self.redirect
11:54:50         status_count = self.status
11:54:50         other = self.other
11:54:50         cause = "unknown"
11:54:50         status = None
11:54:50         redirect_location = None
11:54:50 
11:54:50         if error and self._is_connection_error(error):
11:54:50             # Connect retry?
11:54:50             if connect is False:
11:54:50                 raise reraise(type(error), error, _stacktrace)
11:54:50             elif connect is not None:
11:54:50                 connect -= 1
11:54:50 
11:54:50         elif error and self._is_read_error(error):
11:54:50             # Read retry?
11:54:50             if read is False or method is None or not self._is_method_retryable(method):
11:54:50                 raise reraise(type(error), error, _stacktrace)
11:54:50             elif read is not None:
11:54:50                 read -= 1
11:54:50 
11:54:50         elif error:
11:54:50             # Other retry?
11:54:50             if other is not None:
11:54:50                 other -= 1
11:54:50 
11:54:50         elif response and response.get_redirect_location():
11:54:50             # Redirect retry?
11:54:50             if redirect is not None:
11:54:50                 redirect -= 1
11:54:50             cause = "too many redirects"
11:54:50             response_redirect_location = response.get_redirect_location()
11:54:50             if response_redirect_location:
11:54:50                 redirect_location = response_redirect_location
11:54:50             status = response.status
11:54:50 
11:54:50         else:
11:54:50             # Incrementing because of a server error like a 500 in
11:54:50             # status_forcelist and the given method is in the allowed_methods
11:54:50             cause = ResponseError.GENERIC_ERROR
11:54:50             if response and response.status:
11:54:50                 if status_count is not None:
11:54:50                     status_count -= 1
11:54:50                 cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status)
11:54:50                 status = response.status
11:54:50 
11:54:50         history = self.history + (
11:54:50             RequestHistory(method, url, error, status, redirect_location),
11:54:50         )
11:54:50 
11:54:50         new_retry = self.new(
11:54:50             total=total,
11:54:50             connect=connect,
11:54:50             read=read,
11:54:50             redirect=redirect,
11:54:50             status=status_count,
11:54:50             other=other,
11:54:50             history=history,
11:54:50         )
11:54:50 
11:54:50         if new_retry.is_exhausted():
11:54:50             reason = error or ResponseError(cause)
11:54:50 >           raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type]
11:54:50             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
11:54:50 E           urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=8191): Max retries exceeded with url: /rests/data/network-topology:network-topology/topology=topology-netconf/node=ROADMA01?content=nonconfig (Caused by NewConnectionError("HTTPConnection(host='localhost', port=8191): Failed to establish a new connection: [Errno 111] Connection refused"))
11:54:50 
11:54:50 ../.tox/tests121/lib/python3.11/site-packages/urllib3/util/retry.py:535: MaxRetryError
11:54:50 
11:54:50 During handling of the above exception, another exception occurred:
11:54:50 
11:54:50 self = 
11:54:50 
11:54:50     def test_20_rdm_device_disconnected(self):
11:54:50 >       response = test_utils.check_device_connection("ROADMA01")
11:54:50                    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
11:54:50 
11:54:50 transportpce_tests/1.2.1/test01_portmapping.py:217: 
11:54:50 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
11:54:50 transportpce_tests/common/test_utils.py:409: in check_device_connection
11:54:50     response = get_request(url[RESTCONF_VERSION].format('{}', node))
11:54:50                ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
11:54:50 transportpce_tests/common/test_utils.py:117: in get_request
11:54:50     return requests.request(
11:54:50 ../.tox/tests121/lib/python3.11/site-packages/requests/api.py:59: in request
11:54:50     return session.request(method=method, url=url, **kwargs)
11:54:50            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
11:54:50 ../.tox/tests121/lib/python3.11/site-packages/requests/sessions.py:589: in request
11:54:50     resp = self.send(prep, **send_kwargs)
11:54:50            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
11:54:50 ../.tox/tests121/lib/python3.11/site-packages/requests/sessions.py:703: in send
11:54:50     r = adapter.send(request, **kwargs)
11:54:50         ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
11:54:50 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
11:54:50 
11:54:50 self = 
11:54:50 request = , stream = False
11:54:50 timeout = Timeout(connect=30, read=30, total=None), verify = True, cert = None
11:54:50 proxies = OrderedDict()
11:54:50 
11:54:50     def send(
11:54:50         self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None
11:54:50     ):
11:54:50         """Sends PreparedRequest object. Returns Response object.
11:54:50 
11:54:50         :param request: The :class:`PreparedRequest ` being sent.
11:54:50         :param stream: (optional) Whether to stream the request content.
11:54:50         :param timeout: (optional) How long to wait for the server to send
11:54:50             data before giving up, as a float, or a :ref:`(connect timeout,
11:54:50             read timeout) ` tuple.
11:54:50         :type timeout: float or tuple or urllib3 Timeout object
11:54:50         :param verify: (optional) Either a boolean, in which case it controls whether
11:54:50             we verify the server's TLS certificate, or a string, in which case it
11:54:50             must be a path to a CA bundle to use
11:54:50         :param cert: (optional) Any user-provided SSL certificate to be trusted.
11:54:50         :param proxies: (optional) The proxies dictionary to apply to the request.
11:54:50         :rtype: requests.Response
11:54:50         """
11:54:50 
11:54:50         try:
11:54:50             conn = self.get_connection_with_tls_context(
11:54:50                 request, verify, proxies=proxies, cert=cert
11:54:50             )
11:54:50         except LocationValueError as e:
11:54:50             raise InvalidURL(e, request=request)
11:54:50 
11:54:50         self.cert_verify(conn, request.url, verify, cert)
11:54:50         url = self.request_url(request, proxies)
11:54:50         self.add_headers(
11:54:50             request,
11:54:50             stream=stream,
11:54:50             timeout=timeout,
11:54:50             verify=verify,
11:54:50             cert=cert,
11:54:50             proxies=proxies,
11:54:50         )
11:54:50 
11:54:50         chunked = not (request.body is None or "Content-Length" in request.headers)
11:54:50 
11:54:50         if isinstance(timeout, tuple):
11:54:50             try:
11:54:50                 connect, read = timeout
11:54:50                 timeout = TimeoutSauce(connect=connect, read=read)
11:54:50             except ValueError:
11:54:50                 raise ValueError(
11:54:50                     f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, "
11:54:50                     f"or a single float to set both timeouts to the same value."
11:54:50                 )
11:54:50         elif isinstance(timeout, TimeoutSauce):
11:54:50             pass
11:54:50         else:
11:54:50             timeout = TimeoutSauce(connect=timeout, read=timeout)
11:54:50 
11:54:50         try:
11:54:50             resp = conn.urlopen(
11:54:50                 method=request.method,
11:54:50                 url=url,
11:54:50                 body=request.body,
11:54:50                 headers=request.headers,
11:54:50                 redirect=False,
11:54:50                 assert_same_host=False,
11:54:50                 preload_content=False,
11:54:50                 decode_content=False,
11:54:50                 retries=self.max_retries,
11:54:50                 timeout=timeout,
11:54:50                 chunked=chunked,
11:54:50             )
11:54:50 
11:54:50         except (ProtocolError, OSError) as err:
11:54:50             raise ConnectionError(err, request=request)
11:54:50 
11:54:50         except MaxRetryError as e:
11:54:50             if isinstance(e.reason, ConnectTimeoutError):
11:54:50                 # TODO: Remove this in 3.0.0: see #2811
11:54:50                 if not isinstance(e.reason, NewConnectionError):
11:54:50                     raise ConnectTimeout(e, request=request)
11:54:50 
11:54:50             if isinstance(e.reason, ResponseError):
11:54:50                 raise RetryError(e, request=request)
11:54:50 
11:54:50             if isinstance(e.reason, _ProxyError):
11:54:50                 raise ProxyError(e, request=request)
11:54:50 
11:54:50             if isinstance(e.reason, _SSLError):
11:54:50                 # This branch is for urllib3 v1.22 and later.
11:54:50                 raise SSLError(e, request=request)
11:54:50 
11:54:50 >           raise ConnectionError(e, request=request)
11:54:50 E           requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=8191): Max retries exceeded with url: /rests/data/network-topology:network-topology/topology=topology-netconf/node=ROADMA01?content=nonconfig (Caused by NewConnectionError("HTTPConnection(host='localhost', port=8191): Failed to establish a new connection: [Errno 111] Connection refused"))
11:54:50 
11:54:50 ../.tox/tests121/lib/python3.11/site-packages/requests/adapters.py:677: ConnectionError
11:54:50 ----------------------------- Captured stdout call -----------------------------
11:54:50 execution of test_20_rdm_device_disconnected
11:54:50 _________ TestTransportPCEPortmapping.test_21_rdm_device_not_connected _________
11:54:50 
11:54:50 self = 
11:54:50 
11:54:50     def _new_conn(self) -> socket.socket:
11:54:50         """Establish a socket connection and set nodelay settings on it.
11:54:50 
11:54:50         :return: New socket connection.
11:54:50         """
11:54:50         try:
11:54:50 >           sock = connection.create_connection(
11:54:50                 (self._dns_host, self.port),
11:54:50                 self.timeout,
11:54:50                 source_address=self.source_address,
11:54:50                 socket_options=self.socket_options,
11:54:50             )
11:54:50 
11:54:50 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:204: 
11:54:50 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
11:54:50 ../.tox/tests121/lib/python3.11/site-packages/urllib3/util/connection.py:85: in create_connection
11:54:50     raise err
11:54:50 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
11:54:50 
11:54:50 address = ('localhost', 8191), timeout = 30, source_address = None
11:54:50 socket_options = [(6, 1, 1)]
11:54:50 
11:54:50 def create_connection(
11:54:50     address: tuple[str, int],
11:54:50     timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT,
11:54:50     source_address: tuple[str, int] | None = None,
11:54:50     socket_options: _TYPE_SOCKET_OPTIONS | None = None,
11:54:50 ) -> socket.socket:
11:54:50     """Connect to *address* and return the socket object.
11:54:50 
11:54:50     Convenience function.  Connect to *address* (a 2-tuple ``(host,
11:54:50     port)``) and return the socket object.  Passing the optional
11:54:50     *timeout* parameter will set the timeout on the socket instance
11:54:50     before attempting to connect.  If no *timeout* is supplied, the
11:54:50     global default timeout setting returned by :func:`socket.getdefaulttimeout`
11:54:50     is used.  If *source_address* is set it must be a tuple of (host, port)
11:54:50     for the socket to bind as a source address before making the connection.
11:54:50     An host of '' or port 0 tells the OS to use the default.
11:54:50     """
11:54:50 
11:54:50     host, port = address
11:54:50     if host.startswith("["):
11:54:50         host = host.strip("[]")
11:54:50     err = None
11:54:50 
11:54:50     # Using the value from allowed_gai_family() in the context of getaddrinfo lets
11:54:50     # us select whether to work with IPv4 DNS records, IPv6 records, or both.
11:54:50     # The original create_connection function always returns all records.
11:54:50     family = allowed_gai_family()
11:54:50 
11:54:50     try:
11:54:50         host.encode("idna")
11:54:50     except UnicodeError:
11:54:50         raise LocationParseError(f"'{host}', label empty or too long") from None
11:54:50 
11:54:50     for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM):
11:54:50         af, socktype, proto, canonname, sa = res
11:54:50         sock = None
11:54:50         try:
11:54:50             sock = socket.socket(af, socktype, proto)
11:54:50 
11:54:50             # If provided, set socket level options before connecting.
11:54:50             _set_socket_options(sock, socket_options)
11:54:50 
11:54:50             if timeout is not _DEFAULT_TIMEOUT:
11:54:50                 sock.settimeout(timeout)
11:54:50             if source_address:
11:54:50                 sock.bind(source_address)
11:54:50 >           sock.connect(sa)
11:54:50 E           ConnectionRefusedError: [Errno 111] Connection refused
11:54:50 
11:54:50 ../.tox/tests121/lib/python3.11/site-packages/urllib3/util/connection.py:73: ConnectionRefusedError
11:54:50 
11:54:50 The above exception was the direct cause of the following exception:
11:54:50 
11:54:50 self = 
11:54:50 method = 'GET'
11:54:50 url = '/rests/data/transportpce-portmapping:network/nodes=ROADMA01/node-info'
11:54:50 body = None
11:54:50 headers = {'User-Agent': 'python-requests/2.32.5', 'Accept-Encoding': 'gzip, deflate', 'Accept': 'application/json', 'Connection': 'keep-alive', 'Content-Type': 'application/json', 'Authorization': 'Basic YWRtaW46YWRtaW4='}
11:54:50 retries = Retry(total=0, connect=None, read=False, redirect=None, status=None)
11:54:50 redirect = False, assert_same_host = False
11:54:50 timeout = Timeout(connect=30, read=30, total=None), pool_timeout = None
11:54:50 release_conn = False, chunked = False, body_pos = None, preload_content = False
11:54:50 decode_content = False, response_kw = {}
11:54:50 parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/rests/data/transportpce-portmapping:network/nodes=ROADMA01/node-info', query=None, fragment=None)
11:54:50 destination_scheme = None, conn = None, release_this_conn = True
11:54:50 http_tunnel_required = False, err = None, clean_exit = False
11:54:50 
11:54:50     def urlopen(  # type: ignore[override]
11:54:50         self,
11:54:50         method: str,
11:54:50         url: str,
11:54:50         body: _TYPE_BODY | None = None,
11:54:50         headers: typing.Mapping[str, str] | None = None,
11:54:50         retries: Retry | bool | int | None = None,
11:54:50         redirect: bool = True,
11:54:50         assert_same_host: bool = True,
11:54:50         timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT,
11:54:50         pool_timeout: int | None = None,
11:54:50         release_conn: bool | None = None,
11:54:50         chunked: bool = False,
11:54:50         body_pos: _TYPE_BODY_POSITION | None = None,
11:54:50         preload_content: bool = True,
11:54:50         decode_content: bool = True,
11:54:50         **response_kw: typing.Any,
11:54:50     ) -> BaseHTTPResponse:
11:54:50         """
11:54:50         Get a connection from the pool and perform an HTTP request. This is the
11:54:50         lowest level call for making a request, so you'll need to specify all
11:54:50         the raw details.
11:54:50 
11:54:50         .. note::
11:54:50 
11:54:50            More commonly, it's appropriate to use a convenience method
11:54:50            such as :meth:`request`.
11:54:50 
11:54:50         .. note::
11:54:50 
11:54:50            `release_conn` will only behave as expected if
11:54:50            `preload_content=False` because we want to make
11:54:50            `preload_content=False` the default behaviour someday soon without
11:54:50            breaking backwards compatibility.
11:54:50 
11:54:50         :param method:
11:54:50             HTTP request method (such as GET, POST, PUT, etc.)
11:54:50 
11:54:50         :param url:
11:54:50             The URL to perform the request on.
11:54:50 
11:54:50         :param body:
11:54:50             Data to send in the request body, either :class:`str`, :class:`bytes`,
11:54:50             an iterable of :class:`str`/:class:`bytes`, or a file-like object.
11:54:50 
11:54:50         :param headers:
11:54:50             Dictionary of custom headers to send, such as User-Agent,
11:54:50             If-None-Match, etc. If None, pool headers are used. If provided,
11:54:50             these headers completely replace any pool-specific headers.
11:54:50 
11:54:50         :param retries:
11:54:50             Configure the number of retries to allow before raising a
11:54:50             :class:`~urllib3.exceptions.MaxRetryError` exception.
11:54:50 
11:54:50             If ``None`` (default) will retry 3 times, see ``Retry.DEFAULT``. Pass a
11:54:50             :class:`~urllib3.util.retry.Retry` object for fine-grained control
11:54:50             over different types of retries.
11:54:50             Pass an integer number to retry connection errors that many times,
11:54:50             but no other types of errors. Pass zero to never retry.
11:54:50 
11:54:50             If ``False``, then retries are disabled and any exception is raised
11:54:50             immediately. Also, instead of raising a MaxRetryError on redirects,
11:54:50             the redirect response will be returned.
11:54:50 
11:54:50         :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int.
11:54:50 
11:54:50         :param redirect:
11:54:50             If True, automatically handle redirects (status codes 301, 302,
11:54:50             303, 307, 308). Each redirect counts as a retry. Disabling retries
11:54:50             will disable redirect, too.
11:54:50 
11:54:50         :param assert_same_host:
11:54:50             If ``True``, will make sure that the host of the pool requests is
11:54:50             consistent else will raise HostChangedError. When ``False``, you can
11:54:50             use the pool on an HTTP proxy and request foreign hosts.
11:54:50 
11:54:50         :param timeout:
11:54:50             If specified, overrides the default timeout for this one
11:54:50             request. It may be a float (in seconds) or an instance of
11:54:50             :class:`urllib3.util.Timeout`.
11:54:50 
11:54:50         :param pool_timeout:
11:54:50             If set and the pool is set to block=True, then this method will
11:54:50             block for ``pool_timeout`` seconds and raise EmptyPoolError if no
11:54:50             connection is available within the time period.
11:54:50 
11:54:50         :param bool preload_content:
11:54:50             If True, the response's body will be preloaded into memory.
11:54:50 
11:54:50         :param bool decode_content:
11:54:50             If True, will attempt to decode the body based on the
11:54:50             'content-encoding' header.
11:54:50 
11:54:50         :param release_conn:
11:54:50             If False, then the urlopen call will not release the connection
11:54:50             back into the pool once a response is received (but will release if
11:54:50             you read the entire contents of the response such as when
11:54:50             `preload_content=True`). This is useful if you're not preloading
11:54:50             the response's content immediately. You will need to call
11:54:50             ``r.release_conn()`` on the response ``r`` to return the connection
11:54:50             back into the pool. If None, it takes the value of ``preload_content``
11:54:50             which defaults to ``True``.
11:54:50 
11:54:50         :param bool chunked:
11:54:50             If True, urllib3 will send the body using chunked transfer
11:54:50             encoding. Otherwise, urllib3 will send the body using the standard
11:54:50             content-length form. Defaults to False.
11:54:50 
11:54:50         :param int body_pos:
11:54:50             Position to seek to in file-like body in the event of a retry or
11:54:50             redirect. Typically this won't need to be set because urllib3 will
11:54:50             auto-populate the value when needed.
11:54:50         """
11:54:50         parsed_url = parse_url(url)
11:54:50         destination_scheme = parsed_url.scheme
11:54:50 
11:54:50         if headers is None:
11:54:50             headers = self.headers
11:54:50 
11:54:50         if not isinstance(retries, Retry):
11:54:50             retries = Retry.from_int(retries, redirect=redirect, default=self.retries)
11:54:50 
11:54:50         if release_conn is None:
11:54:50             release_conn = preload_content
11:54:50 
11:54:50         # Check host
11:54:50         if assert_same_host and not self.is_same_host(url):
11:54:50             raise HostChangedError(self, url, retries)
11:54:50 
11:54:50         # Ensure that the URL we're connecting to is properly encoded
11:54:50         if url.startswith("/"):
11:54:50             url = to_str(_encode_target(url))
11:54:50         else:
11:54:50             url = to_str(parsed_url.url)
11:54:50 
11:54:50         conn = None
11:54:50 
11:54:50         # Track whether `conn` needs to be released before
11:54:50         # returning/raising/recursing. Update this variable if necessary, and
11:54:50         # leave `release_conn` constant throughout the function. That way, if
11:54:50         # the function recurses, the original value of `release_conn` will be
11:54:50         # passed down into the recursive call, and its value will be respected.
11:54:50         #
11:54:50         # See issue #651 [1] for details.
11:54:50         #
11:54:50         # [1] 
11:54:50         release_this_conn = release_conn
11:54:50 
11:54:50         http_tunnel_required = connection_requires_http_tunnel(
11:54:50             self.proxy, self.proxy_config, destination_scheme
11:54:50         )
11:54:50 
11:54:50         # Merge the proxy headers. Only done when not using HTTP CONNECT. We
11:54:50         # have to copy the headers dict so we can safely change it without those
11:54:50         # changes being reflected in anyone else's copy.
11:54:50         if not http_tunnel_required:
11:54:50             headers = headers.copy()  # type: ignore[attr-defined]
11:54:50             headers.update(self.proxy_headers)  # type: ignore[union-attr]
11:54:50 
11:54:50         # Must keep the exception bound to a separate variable or else Python 3
11:54:50         # complains about UnboundLocalError.
11:54:50         err = None
11:54:50 
11:54:50         # Keep track of whether we cleanly exited the except block. This
11:54:50         # ensures we do proper cleanup in finally.
11:54:50         clean_exit = False
11:54:50 
11:54:50         # Rewind body position, if needed. Record current position
11:54:50         # for future rewinds in the event of a redirect/retry.
11:54:50         body_pos = set_file_position(body, body_pos)
11:54:50 
11:54:50         try:
11:54:50             # Request a connection from the queue.
11:54:50             timeout_obj = self._get_timeout(timeout)
11:54:50             conn = self._get_conn(timeout=pool_timeout)
11:54:50 
11:54:50             conn.timeout = timeout_obj.connect_timeout  # type: ignore[assignment]
11:54:50 
11:54:50             # Is this a closed/new connection that requires CONNECT tunnelling?
11:54:50             if self.proxy is not None and http_tunnel_required and conn.is_closed:
11:54:50                 try:
11:54:50                     self._prepare_proxy(conn)
11:54:50                 except (BaseSSLError, OSError, SocketTimeout) as e:
11:54:50                     self._raise_timeout(
11:54:50                         err=e, url=self.proxy.url, timeout_value=conn.timeout
11:54:50                     )
11:54:50                     raise
11:54:50 
11:54:50             # If we're going to release the connection in ``finally:``, then
11:54:50             # the response doesn't need to know about the connection. Otherwise
11:54:50             # it will also try to release it and we'll have a double-release
11:54:50             # mess.
11:54:50             response_conn = conn if not release_conn else None
11:54:50 
11:54:50             # Make the request on the HTTPConnection object
11:54:50 >           response = self._make_request(
11:54:50                 conn,
11:54:50                 method,
11:54:50                 url,
11:54:50                 timeout=timeout_obj,
11:54:50                 body=body,
11:54:50                 headers=headers,
11:54:50                 chunked=chunked,
11:54:50                 retries=retries,
11:54:50                 response_conn=response_conn,
11:54:50                 preload_content=preload_content,
11:54:50                 decode_content=decode_content,
11:54:50                 **response_kw,
11:54:50             )
11:54:50 
11:54:50 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connectionpool.py:787: 
11:54:50 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
11:54:50 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connectionpool.py:493: in _make_request
11:54:50     conn.request(
11:54:50 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:500: in request
11:54:50     self.endheaders()
11:54:50 /opt/pyenv/versions/3.11.10/lib/python3.11/http/client.py:1298: in endheaders
11:54:50     self._send_output(message_body, encode_chunked=encode_chunked)
11:54:50 /opt/pyenv/versions/3.11.10/lib/python3.11/http/client.py:1058: in _send_output
11:54:50     self.send(msg)
11:54:50 /opt/pyenv/versions/3.11.10/lib/python3.11/http/client.py:996: in send
11:54:50     self.connect()
11:54:50 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:331: in connect
11:54:50     self.sock = self._new_conn()
11:54:50                 ^^^^^^^^^^^^^^^^
11:54:50 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
11:54:50 
11:54:50 self = 
11:54:50 
11:54:50     def _new_conn(self) -> socket.socket:
11:54:50         """Establish a socket connection and set nodelay settings on it.
11:54:50 
11:54:50         :return: New socket connection.
11:54:50         """
11:54:50         try:
11:54:50             sock = connection.create_connection(
11:54:50                 (self._dns_host, self.port),
11:54:50                 self.timeout,
11:54:50                 source_address=self.source_address,
11:54:50                 socket_options=self.socket_options,
11:54:50             )
11:54:50         except socket.gaierror as e:
11:54:50             raise NameResolutionError(self.host, self, e) from e
11:54:50         except SocketTimeout as e:
11:54:50             raise ConnectTimeoutError(
11:54:50                 self,
11:54:50                 f"Connection to {self.host} timed out. (connect timeout={self.timeout})",
11:54:50             ) from e
11:54:50 
11:54:50         except OSError as e:
11:54:50 >           raise NewConnectionError(
11:54:50                 self, f"Failed to establish a new connection: {e}"
11:54:50             ) from e
11:54:50 E           urllib3.exceptions.NewConnectionError: HTTPConnection(host='localhost', port=8191): Failed to establish a new connection: [Errno 111] Connection refused
11:54:50 
11:54:50 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:219: NewConnectionError
11:54:50 
11:54:50 The above exception was the direct cause of the following exception:
11:54:50 
11:54:50 self = 
11:54:50 request = , stream = False
11:54:50 timeout = Timeout(connect=30, read=30, total=None), verify = True, cert = None
11:54:50 proxies = OrderedDict()
11:54:50 
11:54:50     def send(
11:54:50         self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None
11:54:50     ):
11:54:50         """Sends PreparedRequest object. Returns Response object.
11:54:50 
11:54:50         :param request: The :class:`PreparedRequest ` being sent.
11:54:50         :param stream: (optional) Whether to stream the request content.
11:54:50         :param timeout: (optional) How long to wait for the server to send
11:54:50             data before giving up, as a float, or a :ref:`(connect timeout,
11:54:50             read timeout) ` tuple.
11:54:50         :type timeout: float or tuple or urllib3 Timeout object
11:54:50         :param verify: (optional) Either a boolean, in which case it controls whether
11:54:50             we verify the server's TLS certificate, or a string, in which case it
11:54:50             must be a path to a CA bundle to use
11:54:50         :param cert: (optional) Any user-provided SSL certificate to be trusted.
11:54:50         :param proxies: (optional) The proxies dictionary to apply to the request.
11:54:50         :rtype: requests.Response
11:54:50         """
11:54:50 
11:54:50         try:
11:54:50             conn = self.get_connection_with_tls_context(
11:54:50                 request, verify, proxies=proxies, cert=cert
11:54:50             )
11:54:50         except LocationValueError as e:
11:54:50             raise InvalidURL(e, request=request)
11:54:50 
11:54:50         self.cert_verify(conn, request.url, verify, cert)
11:54:50         url = self.request_url(request, proxies)
11:54:50         self.add_headers(
11:54:50             request,
11:54:50             stream=stream,
11:54:50             timeout=timeout,
11:54:50             verify=verify,
11:54:50             cert=cert,
11:54:50             proxies=proxies,
11:54:50         )
11:54:50 
11:54:50         chunked = not (request.body is None or "Content-Length" in request.headers)
11:54:50 
11:54:50         if isinstance(timeout, tuple):
11:54:50             try:
11:54:50                 connect, read = timeout
11:54:50                 timeout = TimeoutSauce(connect=connect, read=read)
11:54:50             except ValueError:
11:54:50                 raise ValueError(
11:54:50                     f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, "
11:54:50                     f"or a single float to set both timeouts to the same value."
11:54:50                 )
11:54:50         elif isinstance(timeout, TimeoutSauce):
11:54:50             pass
11:54:50         else:
11:54:50             timeout = TimeoutSauce(connect=timeout, read=timeout)
11:54:50 
11:54:50         try:
11:54:50 >           resp = conn.urlopen(
11:54:50                 method=request.method,
11:54:50                 url=url,
11:54:50                 body=request.body,
11:54:50                 headers=request.headers,
11:54:50                 redirect=False,
11:54:50                 assert_same_host=False,
11:54:50                 preload_content=False,
11:54:50                 decode_content=False,
11:54:50                 retries=self.max_retries,
11:54:50                 timeout=timeout,
11:54:50                 chunked=chunked,
11:54:50             )
11:54:50 
11:54:50 ../.tox/tests121/lib/python3.11/site-packages/requests/adapters.py:644: 
11:54:50 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
11:54:50 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connectionpool.py:841: in urlopen
11:54:50     retries = retries.increment(
11:54:50 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
11:54:50 
11:54:50 self = Retry(total=0, connect=None, read=False, redirect=None, status=None)
11:54:50 method = 'GET'
11:54:50 url = '/rests/data/transportpce-portmapping:network/nodes=ROADMA01/node-info'
11:54:50 response = None
11:54:50 error = NewConnectionError("HTTPConnection(host='localhost', port=8191): Failed to establish a new connection: [Errno 111] Connection refused")
11:54:50 _pool = 
11:54:50 _stacktrace = 
11:54:50 
11:54:50     def increment(
11:54:50         self,
11:54:50         method: str | None = None,
11:54:50         url: str | None = None,
11:54:50         response: BaseHTTPResponse | None = None,
11:54:50         error: Exception | None = None,
11:54:50         _pool: ConnectionPool | None = None,
11:54:50         _stacktrace: TracebackType | None = None,
11:54:50     ) -> Self:
11:54:50         """Return a new Retry object with incremented retry counters.
11:54:50 
11:54:50         :param response: A response object, or None, if the server did not
11:54:50             return a response.
11:54:50         :type response: :class:`~urllib3.response.BaseHTTPResponse`
11:54:50         :param Exception error: An error encountered during the request, or
11:54:50             None if the response was received successfully.
11:54:50 
11:54:50         :return: A new ``Retry`` object.
11:54:50         """
11:54:50         if self.total is False and error:
11:54:50             # Disabled, indicate to re-raise the error.
11:54:50             raise reraise(type(error), error, _stacktrace)
11:54:50 
11:54:50         total = self.total
11:54:50         if total is not None:
11:54:50             total -= 1
11:54:50 
11:54:50         connect = self.connect
11:54:50         read = self.read
11:54:50         redirect = self.redirect
11:54:50         status_count = self.status
11:54:50         other = self.other
11:54:50         cause = "unknown"
11:54:50         status = None
11:54:50         redirect_location = None
11:54:50 
11:54:50         if error and self._is_connection_error(error):
11:54:50             # Connect retry?
11:54:50             if connect is False:
11:54:50                 raise reraise(type(error), error, _stacktrace)
11:54:50             elif connect is not None:
11:54:50                 connect -= 1
11:54:50 
11:54:50         elif error and self._is_read_error(error):
11:54:50             # Read retry?
11:54:50             if read is False or method is None or not self._is_method_retryable(method):
11:54:50                 raise reraise(type(error), error, _stacktrace)
11:54:50             elif read is not None:
11:54:50                 read -= 1
11:54:50 
11:54:50         elif error:
11:54:50             # Other retry?
11:54:50             if other is not None:
11:54:50                 other -= 1
11:54:50 
11:54:50         elif response and response.get_redirect_location():
11:54:50             # Redirect retry?
11:54:50             if redirect is not None:
11:54:50                 redirect -= 1
11:54:50             cause = "too many redirects"
11:54:50             response_redirect_location = response.get_redirect_location()
11:54:50             if response_redirect_location:
11:54:50                 redirect_location = response_redirect_location
11:54:50             status = response.status
11:54:50 
11:54:50         else:
11:54:50             # Incrementing because of a server error like a 500 in
11:54:50             # status_forcelist and the given method is in the allowed_methods
11:54:50             cause = ResponseError.GENERIC_ERROR
11:54:50             if response and response.status:
11:54:50                 if status_count is not None:
11:54:50                     status_count -= 1
11:54:50                 cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status)
11:54:50             status = response.status
11:54:50 
11:54:50         history = self.history + (
11:54:50             RequestHistory(method, url, error, status, redirect_location),
11:54:50         )
11:54:50 
11:54:50         new_retry = self.new(
11:54:50             total=total,
11:54:50             connect=connect,
11:54:50             read=read,
11:54:50             redirect=redirect,
11:54:50             status=status_count,
11:54:50             other=other,
11:54:50             history=history,
11:54:50         )
11:54:50 
11:54:50         if new_retry.is_exhausted():
11:54:50             reason = error or ResponseError(cause)
11:54:50 >           raise MaxRetryError(_pool, url, reason) from reason  # type: ignore[arg-type]
11:54:50             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
11:54:50 E           urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=8191): Max retries exceeded with url: /rests/data/transportpce-portmapping:network/nodes=ROADMA01/node-info (Caused by NewConnectionError("HTTPConnection(host='localhost', port=8191): Failed to establish a new connection: [Errno 111] Connection refused"))
11:54:50 
11:54:50 ../.tox/tests121/lib/python3.11/site-packages/urllib3/util/retry.py:535: MaxRetryError
11:54:50 
11:54:50 During handling of the above exception, another exception occurred:
11:54:50 
11:54:50 self = 
11:54:50 
11:54:50     def test_21_rdm_device_not_connected(self):
11:54:50 >       response = test_utils.get_portmapping_node_attr("ROADMA01", "node-info", None)
11:54:50                    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
11:54:50 
11:54:50 transportpce_tests/1.2.1/test01_portmapping.py:225: 
11:54:50 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
11:54:50 transportpce_tests/common/test_utils.py:519: in get_portmapping_node_attr
11:54:50     response = get_request(target_url)
11:54:50                ^^^^^^^^^^^^^^^^^^^^^^^
11:54:50 transportpce_tests/common/test_utils.py:117: in get_request
11:54:50     return requests.request(
11:54:50 ../.tox/tests121/lib/python3.11/site-packages/requests/api.py:59: in request
11:54:50     return session.request(method=method, url=url, **kwargs)
11:54:50            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
11:54:50 ../.tox/tests121/lib/python3.11/site-packages/requests/sessions.py:589: in request
11:54:50     resp = self.send(prep, **send_kwargs)
11:54:50            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
11:54:50 ../.tox/tests121/lib/python3.11/site-packages/requests/sessions.py:703: in send
11:54:50     r = adapter.send(request, **kwargs)
11:54:50         ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
11:54:50 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
11:54:50 
11:54:50 self = 
11:54:50 request = , stream = False
11:54:50 timeout = Timeout(connect=30, read=30, total=None), verify = True, cert = None
11:54:50 proxies = OrderedDict()
11:54:50 
11:54:50     def send(
11:54:50         self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None
11:54:50     ):
11:54:50         """Sends PreparedRequest object. Returns Response object.
11:54:50 
11:54:50         :param request: The :class:`PreparedRequest ` being sent.
11:54:50         :param stream: (optional) Whether to stream the request content.
11:54:50         :param timeout: (optional) How long to wait for the server to send
11:54:50             data before giving up, as a float, or a :ref:`(connect timeout,
11:54:50             read timeout) ` tuple.
11:54:50         :type timeout: float or tuple or urllib3 Timeout object
11:54:50         :param verify: (optional) Either a boolean, in which case it controls whether
11:54:50             we verify the server's TLS certificate, or a string, in which case it
11:54:50             must be a path to a CA bundle to use
11:54:50         :param cert: (optional) Any user-provided SSL certificate to be trusted.
11:54:50         :param proxies: (optional) The proxies dictionary to apply to the request.
11:54:50         :rtype: requests.Response
11:54:50         """
11:54:50 
11:54:50         try:
11:54:50             conn = self.get_connection_with_tls_context(
11:54:50                 request, verify, proxies=proxies, cert=cert
11:54:50             )
11:54:50         except LocationValueError as e:
11:54:50             raise InvalidURL(e, request=request)
11:54:50 
11:54:50         self.cert_verify(conn, request.url, verify, cert)
11:54:50         url = self.request_url(request, proxies)
11:54:50         self.add_headers(
11:54:50             request,
11:54:50             stream=stream,
11:54:50             timeout=timeout,
11:54:50             verify=verify,
11:54:50             cert=cert,
11:54:50             proxies=proxies,
11:54:50         )
11:54:50 
11:54:50         chunked = not (request.body is None or "Content-Length" in request.headers)
11:54:50 
11:54:50         if isinstance(timeout, tuple):
11:54:50             try:
11:54:50                 connect, read = timeout
11:54:50                 timeout = TimeoutSauce(connect=connect, read=read)
11:54:50             except ValueError:
11:54:50                 raise ValueError(
11:54:50                     f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, "
11:54:50                     f"or a single float to set both timeouts to the same value."
11:54:50                 )
11:54:50         elif isinstance(timeout, TimeoutSauce):
11:54:50             pass
11:54:50         else:
11:54:50             timeout = TimeoutSauce(connect=timeout, read=timeout)
11:54:50 
11:54:50         try:
11:54:50             resp = conn.urlopen(
11:54:50                 method=request.method,
11:54:50                 url=url,
11:54:50                 body=request.body,
11:54:50                 headers=request.headers,
11:54:50                 redirect=False,
11:54:50                 assert_same_host=False,
11:54:50                 preload_content=False,
11:54:50                 decode_content=False,
11:54:50                 retries=self.max_retries,
11:54:50                 timeout=timeout,
11:54:50                 chunked=chunked,
11:54:50             )
11:54:50 
11:54:50         except (ProtocolError, OSError) as err:
11:54:50             raise ConnectionError(err, request=request)
11:54:50 
11:54:50         except MaxRetryError as e:
11:54:50             if isinstance(e.reason, ConnectTimeoutError):
11:54:50                 # TODO: Remove this in 3.0.0: see #2811
11:54:50                 if not isinstance(e.reason, NewConnectionError):
11:54:50                     raise ConnectTimeout(e, request=request)
11:54:50 
11:54:50             if isinstance(e.reason, ResponseError):
11:54:50                 raise RetryError(e, request=request)
11:54:50 
11:54:50             if isinstance(e.reason, _ProxyError):
11:54:50                 raise ProxyError(e, request=request)
11:54:50 
11:54:50             if isinstance(e.reason, _SSLError):
11:54:50                 # This branch is for urllib3 v1.22 and later.
11:54:50                 raise SSLError(e, request=request)
11:54:50 
11:54:50 >           raise ConnectionError(e, request=request)
11:54:50 E           requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=8191): Max retries exceeded with url: /rests/data/transportpce-portmapping:network/nodes=ROADMA01/node-info (Caused by NewConnectionError("HTTPConnection(host='localhost', port=8191): Failed to establish a new connection: [Errno 111] Connection refused"))
11:54:50 
11:54:50 ../.tox/tests121/lib/python3.11/site-packages/requests/adapters.py:677: ConnectionError
11:54:50 ----------------------------- Captured stdout call -----------------------------
11:54:50 execution of test_21_rdm_device_not_connected
11:54:50 --------------------------- Captured stdout teardown ---------------------------
11:54:50 all processes killed
11:54:50 ODL log file stored
11:54:50 =========================== short test summary info ============================
11:54:50 FAILED transportpce_tests/1.2.1/test01_portmapping.py::TestTransportPCEPortmapping::test_02_rdm_device_connected
11:54:50 FAILED transportpce_tests/1.2.1/test01_portmapping.py::TestTransportPCEPortmapping::test_03_rdm_portmapping_info
11:54:50 FAILED transportpce_tests/1.2.1/test01_portmapping.py::TestTransportPCEPortmapping::test_04_rdm_portmapping_DEG1_TTP_TXRX
11:54:50 FAILED transportpce_tests/1.2.1/test01_portmapping.py::TestTransportPCEPortmapping::test_05_rdm_portmapping_SRG1_PP7_TXRX
11:54:50 FAILED transportpce_tests/1.2.1/test01_portmapping.py::TestTransportPCEPortmapping::test_06_rdm_portmapping_SRG3_PP1_TXRX
11:54:50 FAILED transportpce_tests/1.2.1/test01_portmapping.py::TestTransportPCEPortmapping::test_07_xpdr_device_connection
11:54:50 FAILED transportpce_tests/1.2.1/test01_portmapping.py::TestTransportPCEPortmapping::test_08_xpdr_device_connected
11:54:50 FAILED transportpce_tests/1.2.1/test01_portmapping.py::TestTransportPCEPortmapping::test_09_xpdr_portmapping_info
11:54:50 FAILED transportpce_tests/1.2.1/test01_portmapping.py::TestTransportPCEPortmapping::test_10_xpdr_portmapping_NETWORK1
11:54:50 FAILED transportpce_tests/1.2.1/test01_portmapping.py::TestTransportPCEPortmapping::test_11_xpdr_portmapping_NETWORK2
11:54:50 FAILED transportpce_tests/1.2.1/test01_portmapping.py::TestTransportPCEPortmapping::test_12_xpdr_portmapping_CLIENT1
11:54:50 FAILED transportpce_tests/1.2.1/test01_portmapping.py::TestTransportPCEPortmapping::test_13_xpdr_portmapping_CLIENT2
11:54:50 FAILED transportpce_tests/1.2.1/test01_portmapping.py::TestTransportPCEPortmapping::test_14_xpdr_portmapping_CLIENT3
11:54:50 FAILED transportpce_tests/1.2.1/test01_portmapping.py::TestTransportPCEPortmapping::test_15_xpdr_portmapping_CLIENT4
11:54:50 FAILED transportpce_tests/1.2.1/test01_portmapping.py::TestTransportPCEPortmapping::test_16_xpdr_device_disconnection
11:54:50 FAILED transportpce_tests/1.2.1/test01_portmapping.py::TestTransportPCEPortmapping::test_17_xpdr_device_disconnected
11:54:50 FAILED transportpce_tests/1.2.1/test01_portmapping.py::TestTransportPCEPortmapping::test_18_xpdr_device_not_connected
11:54:50 FAILED transportpce_tests/1.2.1/test01_portmapping.py::TestTransportPCEPortmapping::test_19_rdm_device_disconnection
11:54:50 FAILED transportpce_tests/1.2.1/test01_portmapping.py::TestTransportPCEPortmapping::test_20_rdm_device_disconnected
11:54:50 FAILED transportpce_tests/1.2.1/test01_portmapping.py::TestTransportPCEPortmapping::test_21_rdm_device_not_connected
11:54:50 20 failed, 1 passed in 271.84s (0:04:31)
11:54:51 tests71: OK ✔ in 7 minutes 44.85 seconds
11:54:51 tests200: OK ✔ in 3 minutes 37.76 seconds
11:54:51 tests121: exit 1 (272.14 seconds) /w/workspace/transportpce-tox-verify-transportpce-master/tests> ./launch_tests.sh 1.2.1 pid=9480
11:55:28 ............................................... [100%]
11:58:36 51 passed in 497.21s (0:08:17)
11:58:36 pytest -q transportpce_tests/tapi/test02_full_topology.py
11:59:26 .................................... [100%]
12:07:02 36 passed in 506.33s (0:08:26)
12:07:02 pytest -q transportpce_tests/tapi/test03_tapi_device_change_notifications.py
12:07:49 ....................................................................... [100%]
12:12:20 71 passed in 317.54s (0:05:17)
12:12:20 pytest -q transportpce_tests/tapi/test04_topo_extension.py
12:13:11 ................... [100%]
12:14:42 19 passed in 141.34s (0:02:21)
12:14:42 pytest -q transportpce_tests/tapi/test05_pce_tapi.py
12:16:45 ...................... [100%]
12:22:20 22 passed in 457.83s (0:07:37)
12:22:20 tests121: FAIL ✖ in 4 minutes 39.3 seconds
12:22:20 tests_tapi: OK ✔ in 32 minutes 8.84 seconds
12:22:20 tests221: install_deps> python -I -m pip install 'setuptools>=7.0' -r /w/workspace/transportpce-tox-verify-transportpce-master/tests/requirements.txt -r /w/workspace/transportpce-tox-verify-transportpce-master/tests/test-requirements.txt
12:22:27 tests221: freeze> python -m pip freeze --all
12:22:28 tests221: bcrypt==5.0.0,certifi==2026.2.25,cffi==2.0.0,charset-normalizer==3.4.4,cryptography==46.0.5,dict2xml==1.7.8,idna==3.11,iniconfig==2.3.0,invoke==2.2.1,lxml==6.0.2,netconf-client==3.5.0,packaging==26.0,paramiko==4.0.0,pip==26.0.1,pluggy==1.6.0,psutil==7.2.2,pycparser==3.0,Pygments==2.19.2,PyNaCl==1.6.2,pytest==9.0.2,requests==2.32.5,setuptools==82.0.0,urllib3==2.6.3
12:22:28 tests221: commands[0] /w/workspace/transportpce-tox-verify-transportpce-master/tests> ./launch_tests.sh 2.2.1
12:22:28 using environment variables from ./karaf221.env
12:22:28 pytest -q transportpce_tests/2.2.1/test01_portmapping.py
12:23:04 ................................... [100%]
12:23:44 35 passed in 76.08s (0:01:16)
12:23:44 pytest -q transportpce_tests/2.2.1/test02_topo_portmapping.py
12:24:15 ...... [100%]
12:24:28 6 passed in 44.29s
12:24:28 pytest -q transportpce_tests/2.2.1/test03_topology.py
12:25:11 ............................................ [100%]
12:26:46 44 passed in 137.33s (0:02:17)
12:26:46 pytest -q transportpce_tests/2.2.1/test04_otn_topology.py
12:27:21 ............ [100%]
12:27:45 12 passed in 59.00s
12:27:45 pytest -q transportpce_tests/2.2.1/test05_flex_grid.py
12:28:10 ................ [100%]
12:29:39 16 passed in 113.55s (0:01:53)
12:29:39 pytest -q transportpce_tests/2.2.1/test06_renderer_service_path_nominal.py
12:30:08 ............................... [100%]
12:30:15 31 passed in 35.34s
12:30:15 pytest -q transportpce_tests/2.2.1/test07_otn_renderer.py
12:30:50 .......................... [100%]
12:31:45 26 passed in 90.23s (0:01:30)
12:31:45 pytest -q transportpce_tests/2.2.1/test08_otn_sh_renderer.py
12:32:21 ...................... [100%]
12:33:24 22 passed in 98.24s (0:01:38)
12:33:24 pytest -q transportpce_tests/2.2.1/test09_olm.py
12:34:04 ........................................ [100%]
12:36:26 40 passed in 181.99s (0:03:01)
12:36:26 pytest -q transportpce_tests/2.2.1/test11_otn_end2end.py
12:37:09 ........................................................................ [ 74%]
12:42:46 ......................... [100%]
12:44:38 97 passed in 491.52s (0:08:11)
12:44:38 pytest -q transportpce_tests/2.2.1/test12_end2end.py
12:45:18 ...................................................... [100%]
12:52:06 54 passed in 447.57s (0:07:27)
12:52:06 pytest -q transportpce_tests/2.2.1/test14_otn_switch_end2end.py
12:53:00 ........................................................................ [ 71%]
12:58:09 ............................. [100%]
13:00:18 101 passed in 491.87s (0:08:11)
13:00:18 pytest -q transportpce_tests/2.2.1/test15_otn_end2end_with_intermediate_switch.py
13:01:12 ........................................................................ [ 67%]
13:06:59 ................................... [100%]
13:10:20 107 passed in 601.62s (0:10:01)
13:10:20 pytest -q transportpce_tests/2.2.1/test16_freq_end2end.py
13:11:02 ............................................. [100%]
13:13:40 45 passed in 199.61s (0:03:19)
13:13:40 tests221: OK ✔ in 51 minutes 19.86 seconds
13:13:40 tests_hybrid: install_deps> python -I -m pip install 'setuptools>=7.0' -r /w/workspace/transportpce-tox-verify-transportpce-master/tests/requirements.txt -r /w/workspace/transportpce-tox-verify-transportpce-master/tests/test-requirements.txt
13:13:47 tests_hybrid: freeze> python -m pip freeze --all
13:13:47 tests_hybrid: bcrypt==5.0.0,certifi==2026.2.25,cffi==2.0.0,charset-normalizer==3.4.4,cryptography==46.0.5,dict2xml==1.7.8,idna==3.11,iniconfig==2.3.0,invoke==2.2.1,lxml==6.0.2,netconf-client==3.5.0,packaging==26.0,paramiko==4.0.0,pip==26.0.1,pluggy==1.6.0,psutil==7.2.2,pycparser==3.0,Pygments==2.19.2,PyNaCl==1.6.2,pytest==9.0.2,requests==2.32.5,setuptools==82.0.0,urllib3==2.6.3
13:13:47 tests_hybrid: commands[0] /w/workspace/transportpce-tox-verify-transportpce-master/tests> ./launch_tests.sh hybrid
13:13:47 using environment variables from ./karaf221.env
13:13:47 pytest -q transportpce_tests/hybrid/test01_device_change_notifications.py
13:14:27 ................................................... [100%]
13:16:14 51 passed in 146.30s (0:02:26)
13:16:14 pytest -q transportpce_tests/hybrid/test02_B100G_end2end.py
13:16:56 ........................................................................ [ 66%]
13:21:17 ..................................... [100%]
13:23:23 109 passed in 428.55s (0:07:08)
13:23:23 pytest -q transportpce_tests/hybrid/test03_autonomous_reroute.py
13:24:09 ..................................................... [100%]
13:27:42 53 passed in 259.28s (0:04:19)
13:27:42 buildcontroller: OK (106.60=setup[8.09]+cmd[98.51] seconds)
13:27:42 sims: OK (10.60=setup[7.27]+cmd[3.33] seconds)
13:27:42 build_karaf_tests121: OK (58.74=setup[7.40]+cmd[51.33] seconds)
13:27:42 testsPCE: OK (301.78=setup[59.55]+cmd[242.23] seconds)
13:27:42 tests121: FAIL code 1 (279.30=setup[7.16]+cmd[272.14] seconds)
13:27:42 build_karaf_tests221: OK (58.00=setup[7.38]+cmd[50.62] seconds)
13:27:42 tests_tapi: OK (1928.84=setup[7.10]+cmd[1921.74] seconds)
13:27:42 tests221: OK (3079.86=setup[7.46]+cmd[3072.40] seconds)
13:27:42 build_karaf_tests71: OK (57.99=setup[7.42]+cmd[50.57] seconds)
13:27:42 tests71: OK (464.85=setup[6.92]+cmd[457.93] seconds)
13:27:42 build_karaf_tests200: OK (58.72=setup[7.37]+cmd[51.35] seconds)
13:27:42 tests200: OK (217.76=setup[7.18]+cmd[210.57] seconds)
13:27:42 tests_hybrid: OK (842.38=setup[7.33]+cmd[835.05] seconds)
13:27:42 buildlighty: OK (43.22=setup[7.37]+cmd[35.85] seconds)
13:27:42 docs: OK (29.58=setup[26.59]+cmd[2.99] seconds)
13:27:42 docs-linkcheck: OK (32.15=setup[26.83]+cmd[5.32] seconds)
13:27:42 checkbashisms: OK (3.14=setup[1.84]+cmd[0.01,0.09,1.20] seconds)
13:27:42 pre-commit: OK (49.30=setup[2.91]+cmd[0.00,0.00,38.35,8.04] seconds)
13:27:42 pylint: OK (31.56=setup[3.84]+cmd[27.71] seconds)
13:27:42 evaluation failed :( (6318.26 seconds)
13:27:42 + tox_status=1
13:27:42 + echo '---> Completed tox runs'
13:27:42 ---> Completed tox runs
13:27:42 + for i in .tox/*/log
13:27:42 ++ echo .tox/build_karaf_tests121/log
13:27:42 ++ awk -F/ '{print $2}'
13:27:42 + tox_env=build_karaf_tests121
13:27:42 + cp -r .tox/build_karaf_tests121/log /w/workspace/transportpce-tox-verify-transportpce-master/archives/tox/build_karaf_tests121
13:27:42 + for i in .tox/*/log
13:27:42 ++ echo .tox/build_karaf_tests200/log
13:27:42 ++ awk -F/ '{print $2}'
13:27:42 + tox_env=build_karaf_tests200
13:27:42 + cp -r .tox/build_karaf_tests200/log /w/workspace/transportpce-tox-verify-transportpce-master/archives/tox/build_karaf_tests200
13:27:42 + for i in .tox/*/log
13:27:42 ++ echo .tox/build_karaf_tests221/log
13:27:42 ++ awk -F/ '{print $2}'
13:27:42 + tox_env=build_karaf_tests221
13:27:42 + cp -r .tox/build_karaf_tests221/log /w/workspace/transportpce-tox-verify-transportpce-master/archives/tox/build_karaf_tests221
13:27:42 + for i in .tox/*/log
13:27:42 ++ echo .tox/build_karaf_tests71/log
13:27:42 ++ awk -F/ '{print $2}'
13:27:42 + tox_env=build_karaf_tests71
13:27:42 + cp -r .tox/build_karaf_tests71/log /w/workspace/transportpce-tox-verify-transportpce-master/archives/tox/build_karaf_tests71
13:27:42 + for i in .tox/*/log
13:27:42 ++ echo .tox/buildcontroller/log
13:27:42 ++ awk -F/ '{print $2}'
13:27:42 + tox_env=buildcontroller
13:27:42 + cp -r .tox/buildcontroller/log /w/workspace/transportpce-tox-verify-transportpce-master/archives/tox/buildcontroller
13:27:42 + for i in .tox/*/log
13:27:42 ++ echo .tox/buildlighty/log
13:27:42 ++ awk -F/ '{print $2}'
13:27:42 + tox_env=buildlighty
13:27:42 + cp -r .tox/buildlighty/log /w/workspace/transportpce-tox-verify-transportpce-master/archives/tox/buildlighty
13:27:42 + for i in .tox/*/log
13:27:42 ++ echo .tox/checkbashisms/log
13:27:42 ++ awk -F/ '{print $2}'
13:27:42 + tox_env=checkbashisms
13:27:42 + cp -r .tox/checkbashisms/log /w/workspace/transportpce-tox-verify-transportpce-master/archives/tox/checkbashisms
13:27:42 + for i in .tox/*/log
13:27:42 ++ awk -F/ '{print $2}'
13:27:42 ++ echo .tox/docs-linkcheck/log
13:27:42 + tox_env=docs-linkcheck
13:27:42 + cp -r .tox/docs-linkcheck/log /w/workspace/transportpce-tox-verify-transportpce-master/archives/tox/docs-linkcheck
13:27:42 + for i in .tox/*/log
13:27:42 ++ echo .tox/docs/log
13:27:42 ++ awk -F/ '{print $2}'
13:27:42 + tox_env=docs
13:27:42 + cp -r .tox/docs/log /w/workspace/transportpce-tox-verify-transportpce-master/archives/tox/docs
13:27:42 + for i in .tox/*/log
13:27:42 ++ echo .tox/pre-commit/log
13:27:42 ++ awk -F/ '{print $2}'
13:27:42 + tox_env=pre-commit
13:27:42 + cp -r .tox/pre-commit/log /w/workspace/transportpce-tox-verify-transportpce-master/archives/tox/pre-commit
13:27:42 + for i in .tox/*/log
13:27:42 ++ echo .tox/pylint/log
13:27:42 ++ awk -F/ '{print $2}'
13:27:42 + tox_env=pylint
13:27:42 + cp -r .tox/pylint/log /w/workspace/transportpce-tox-verify-transportpce-master/archives/tox/pylint
13:27:42 + for i in .tox/*/log
13:27:42 ++ echo .tox/sims/log
13:27:42 ++ awk -F/ '{print $2}'
13:27:42 + tox_env=sims
13:27:42 + cp -r .tox/sims/log /w/workspace/transportpce-tox-verify-transportpce-master/archives/tox/sims
13:27:42 + for i in .tox/*/log
13:27:42 ++ echo .tox/tests121/log
13:27:42 ++ awk -F/ '{print $2}'
13:27:42 + tox_env=tests121
13:27:42 + cp -r .tox/tests121/log /w/workspace/transportpce-tox-verify-transportpce-master/archives/tox/tests121
13:27:42 + for i in .tox/*/log
13:27:42 ++ echo .tox/tests200/log
13:27:42 ++ awk -F/ '{print $2}'
13:27:42 + tox_env=tests200
13:27:42 + cp -r .tox/tests200/log /w/workspace/transportpce-tox-verify-transportpce-master/archives/tox/tests200
13:27:42 + for i in .tox/*/log
13:27:42 ++ echo .tox/tests221/log
13:27:42 ++ awk -F/ '{print $2}'
13:27:42 + tox_env=tests221
13:27:42 + cp -r .tox/tests221/log /w/workspace/transportpce-tox-verify-transportpce-master/archives/tox/tests221
13:27:43 + for i in .tox/*/log
13:27:43 ++ echo .tox/tests71/log
13:27:43 ++ awk -F/ '{print $2}'
13:27:43 + tox_env=tests71
13:27:43 + cp -r .tox/tests71/log /w/workspace/transportpce-tox-verify-transportpce-master/archives/tox/tests71
13:27:43 + for i in .tox/*/log
13:27:43 ++ echo .tox/testsPCE/log
13:27:43 ++ awk -F/ '{print $2}'
13:27:43 + tox_env=testsPCE
13:27:43 + cp -r .tox/testsPCE/log /w/workspace/transportpce-tox-verify-transportpce-master/archives/tox/testsPCE
13:27:43 + for i in .tox/*/log
13:27:43 ++ echo .tox/tests_hybrid/log
13:27:43 ++ awk -F/ '{print $2}'
13:27:43 + tox_env=tests_hybrid
13:27:43 + cp -r .tox/tests_hybrid/log /w/workspace/transportpce-tox-verify-transportpce-master/archives/tox/tests_hybrid
13:27:43 + for i in .tox/*/log
13:27:43 ++ echo .tox/tests_tapi/log
13:27:43 ++ awk -F/ '{print $2}'
13:27:43 + tox_env=tests_tapi
13:27:43 + cp -r .tox/tests_tapi/log /w/workspace/transportpce-tox-verify-transportpce-master/archives/tox/tests_tapi
13:27:43 + DOC_DIR=docs/_build/html
13:27:43 + [[ -d docs/_build/html ]]
13:27:43 + echo '---> Archiving generated docs'
13:27:43 ---> Archiving generated docs
13:27:43 + mv docs/_build/html /w/workspace/transportpce-tox-verify-transportpce-master/archives/docs
13:27:43 + echo '---> tox-run.sh ends'
13:27:43 ---> tox-run.sh ends
13:27:43 + test 1 -eq 0
13:27:43 + exit 1
13:27:43 ++ '[' 1 = 1 ']'
13:27:43 ++ '[' -x /usr/bin/clear_console ']'
13:27:43 ++ /usr/bin/clear_console -q
13:27:43 Build step 'Execute shell' marked build as failure
13:27:43 $ ssh-agent -k
13:27:43 unset SSH_AUTH_SOCK;
13:27:43 unset SSH_AGENT_PID;
13:27:43 echo Agent pid 1565 killed;
13:27:43 [ssh-agent] Stopped.
13:27:43 [PostBuildScript] - [INFO] Executing post build scripts.
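All twenty tests121 failures above share a single root cause visible in the traceback: requests raised ConnectionError because nothing was listening on localhost:8191 ([Errno 111] Connection refused), i.e. the controller's RESTCONF endpoint was never up when the test suite started issuing requests. A minimal sketch of a readiness probe a harness could run first (the helper name is hypothetical; the port and failure mode are taken from the log, and the `timeout=(connect, read)` tuple corresponds to the `TimeoutSauce(connect=..., read=...)` handling shown in the adapter code above):

```python
import time

import requests


def wait_for_restconf(url: str, timeout_s: float = 60.0, interval_s: float = 2.0) -> bool:
    """Poll `url` until something answers or `timeout_s` elapses.

    While the port is still closed, requests raises ConnectionError
    (wrapping urllib3's NewConnectionError, exactly as in the traceback
    above), so we sleep and retry instead of failing the whole suite.
    """
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        try:
            # Any HTTP response at all means a server is listening;
            # a 401/404 here is fine, only a refused connection is not.
            requests.get(url, timeout=(2, 5))
            return True
        except requests.exceptions.ConnectionError:
            time.sleep(interval_s)
    return False
```

A caller would probe e.g. the portmapping URL from the log before running the tests and skip or abort early with a clear message instead of 20 identical ConnectionError failures.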
13:27:43 [transportpce-tox-verify-transportpce-master] $ /bin/bash /tmp/jenkins9295202181942976892.sh
13:27:43 ---> sysstat.sh
13:27:44 [transportpce-tox-verify-transportpce-master] $ /bin/bash /tmp/jenkins17695574935189073213.sh
13:27:44 ---> package-listing.sh
13:27:44 ++ facter osfamily
13:27:44 ++ tr '[:upper:]' '[:lower:]'
13:27:44 + OS_FAMILY=debian
13:27:44 + workspace=/w/workspace/transportpce-tox-verify-transportpce-master
13:27:44 + START_PACKAGES=/tmp/packages_start.txt
13:27:44 + END_PACKAGES=/tmp/packages_end.txt
13:27:44 + DIFF_PACKAGES=/tmp/packages_diff.txt
13:27:44 + PACKAGES=/tmp/packages_start.txt
13:27:44 + '[' /w/workspace/transportpce-tox-verify-transportpce-master ']'
13:27:44 + PACKAGES=/tmp/packages_end.txt
13:27:44 + case "${OS_FAMILY}" in
13:27:44 + dpkg -l
13:27:44 + grep '^ii'
13:27:44 + '[' -f /tmp/packages_start.txt ']'
13:27:44 + '[' -f /tmp/packages_end.txt ']'
13:27:44 + diff /tmp/packages_start.txt /tmp/packages_end.txt
13:27:44 + '[' /w/workspace/transportpce-tox-verify-transportpce-master ']'
13:27:44 + mkdir -p /w/workspace/transportpce-tox-verify-transportpce-master/archives/
13:27:44 + cp -f /tmp/packages_diff.txt /tmp/packages_end.txt /tmp/packages_start.txt /w/workspace/transportpce-tox-verify-transportpce-master/archives/
13:27:44 [transportpce-tox-verify-transportpce-master] $ /bin/bash /tmp/jenkins8017200010807127480.sh
13:27:44 ---> capture-instance-metadata.sh
13:27:44 Setup pyenv:
13:27:44   system
13:27:44   3.8.20
13:27:44   3.9.20
13:27:44   3.10.15
13:27:44 * 3.11.10 (set by /w/workspace/transportpce-tox-verify-transportpce-master/.python-version)
13:27:44 lf-activate-venv(): INFO: Reuse venv:/tmp/venv-xUmd from file:/tmp/.os_lf_venv
13:27:44 lf-activate-venv(): INFO: Installing base packages (pip, setuptools, virtualenv)
13:27:44 lf-activate-venv(): INFO: Attempting to install with network-safe options...
13:27:46 lf-activate-venv(): INFO: Base packages installed successfully
13:27:46 lf-activate-venv(): INFO: Installing additional packages: lftools
13:27:56 lf-activate-venv(): INFO: Adding /tmp/venv-xUmd/bin to PATH
13:27:56 INFO: Running in OpenStack, capturing instance metadata
13:27:56 [transportpce-tox-verify-transportpce-master] $ /bin/bash /tmp/jenkins10598463776841057599.sh
13:27:56 provisioning config files...
13:27:57 Could not find credentials [logs] for transportpce-tox-verify-transportpce-master #4512
13:27:57 copy managed file [jenkins-log-archives-settings] to file:/w/workspace/transportpce-tox-verify-transportpce-master@tmp/config8022846876219371798tmp
13:27:57 Regular expression run condition: Expression=[^.*logs-s3.*], Label=[odl-logs-s3-cloudfront-index]
13:27:57 Run condition [Regular expression match] enabling perform for step [Provide Configuration files]
13:27:57 provisioning config files...
13:27:57 copy managed file [jenkins-s3-log-ship] to file:/home/jenkins/.aws/credentials
13:27:57 [EnvInject] - Injecting environment variables from a build step.
13:27:57 [EnvInject] - Injecting as environment variables the properties content
13:27:57 SERVER_ID=logs
13:27:57 
13:27:57 [EnvInject] - Variables injected successfully.
13:27:57 [transportpce-tox-verify-transportpce-master] $ /bin/bash /tmp/jenkins17180525685568637649.sh
13:27:57 ---> create-netrc.sh
13:27:57 WARN: Log server credential not found.
13:27:57 [transportpce-tox-verify-transportpce-master] $ /bin/bash /tmp/jenkins1956005061769962591.sh
13:27:57 ---> python-tools-install.sh
13:27:57 Setup pyenv:
13:27:57   system
13:27:57   3.8.20
13:27:57   3.9.20
13:27:57   3.10.15
13:27:57 * 3.11.10 (set by /w/workspace/transportpce-tox-verify-transportpce-master/.python-version)
13:27:57 lf-activate-venv(): INFO: Reuse venv:/tmp/venv-xUmd from file:/tmp/.os_lf_venv
13:27:57 lf-activate-venv(): INFO: Installing base packages (pip, setuptools, virtualenv)
13:27:57 lf-activate-venv(): INFO: Attempting to install with network-safe options...
13:27:59 lf-activate-venv(): INFO: Base packages installed successfully
13:27:59 lf-activate-venv(): INFO: Installing additional packages: lftools
13:28:11 lf-activate-venv(): INFO: Adding /tmp/venv-xUmd/bin to PATH
13:28:11 [transportpce-tox-verify-transportpce-master] $ /bin/bash /tmp/jenkins13604166651745452845.sh
13:28:11 ---> sudo-logs.sh
13:28:11 Archiving 'sudo' log..
13:28:12 [transportpce-tox-verify-transportpce-master] $ /bin/bash /tmp/jenkins8787258230479251484.sh
13:28:12 ---> job-cost.sh
13:28:12 INFO: Activating Python virtual environment...
13:28:12 Setup pyenv:
13:28:12   system
13:28:12   3.8.20
13:28:12   3.9.20
13:28:12   3.10.15
13:28:12 * 3.11.10 (set by /w/workspace/transportpce-tox-verify-transportpce-master/.python-version)
13:28:12 lf-activate-venv(): INFO: Reuse venv:/tmp/venv-xUmd from file:/tmp/.os_lf_venv
13:28:12 lf-activate-venv(): INFO: Installing base packages (pip, setuptools, virtualenv)
13:28:12 lf-activate-venv(): INFO: Attempting to install with network-safe options...
13:28:14 lf-activate-venv(): INFO: Base packages installed successfully
13:28:14 lf-activate-venv(): INFO: Installing additional packages: zipp==1.1.0 python-openstackclient urllib3~=1.26.15
13:28:19 lf-activate-venv(): INFO: Adding /tmp/venv-xUmd/bin to PATH
13:28:19 INFO: No stack-cost file found
13:28:19 INFO: Instance uptime: 6478s
13:28:19 INFO: Fetching instance metadata (attempt 1 of 3)...
13:28:19 DEBUG: URL: http://169.254.169.254/latest/meta-data/instance-type
13:28:19 INFO: Successfully fetched instance metadata
13:28:19 INFO: Instance type: v3-standard-4
13:28:19 INFO: Retrieving pricing info for: v3-standard-4
13:28:19 INFO: Fetching Vexxhost pricing API (attempt 1 of 3)...
13:28:19 DEBUG: URL: https://pricing.vexxhost.net/v1/pricing/v3-standard-4/cost?seconds=6478
13:28:20 INFO: Successfully fetched Vexxhost pricing API
13:28:20 INFO: Retrieved cost: 0.22
13:28:20 INFO: Retrieved resource: v3-standard-4
13:28:20 INFO: Creating archive directory: /w/workspace/transportpce-tox-verify-transportpce-master/archives/cost
13:28:20 INFO: Archiving costs to: /w/workspace/transportpce-tox-verify-transportpce-master/archives/cost.csv
13:28:20 INFO: Successfully archived job cost data
13:28:20 DEBUG: Cost data: transportpce-tox-verify-transportpce-master,4512,2026-03-05 13:28:20,v3-standard-4,6478,0.22,0.00,FAILURE
13:28:20 [transportpce-tox-verify-transportpce-master] $ /bin/bash -l /tmp/jenkins12764517365452925793.sh
13:28:20 ---> logs-deploy.sh
13:28:20 Setup pyenv:
13:28:20   system
13:28:20   3.8.20
13:28:20   3.9.20
13:28:20   3.10.15
13:28:20 * 3.11.10 (set by /w/workspace/transportpce-tox-verify-transportpce-master/.python-version)
13:28:20 lf-activate-venv(): INFO: Reuse venv:/tmp/venv-xUmd from file:/tmp/.os_lf_venv
13:28:20 lf-activate-venv(): INFO: Installing base packages (pip, setuptools, virtualenv)
13:28:20 lf-activate-venv(): INFO: Attempting to install with network-safe options...
13:28:22 lf-activate-venv(): INFO: Base packages installed successfully
13:28:22 lf-activate-venv(): INFO: Installing additional packages: lftools urllib3~=1.26.15
13:28:31 lf-activate-venv(): INFO: Adding /tmp/venv-xUmd/bin to PATH
13:28:31 WARNING: Nexus logging server not set
13:28:31 INFO: S3 path logs/releng/vex-yul-odl-jenkins-1/transportpce-tox-verify-transportpce-master/4512/
13:28:31 INFO: archiving logs to S3
13:28:31 /tmp/venv-xUmd/lib/python3.11/site-packages/requests/__init__.py:113: RequestsDependencyWarning: urllib3 (1.26.20) or chardet (7.0.1)/charset_normalizer (3.4.4) doesn't match a supported version!
13:28:31 warnings.warn(
13:28:32 ---> uname -a:
13:28:32 Linux prd-ubuntu2204-docker-4c-16g-58912 5.15.0-171-generic #181-Ubuntu SMP Fri Feb 6 22:44:50 UTC 2026 x86_64 x86_64 x86_64 GNU/Linux
13:28:32 
13:28:32 
13:28:32 ---> lscpu:
13:28:32 Architecture:             x86_64
13:28:32 CPU op-mode(s):           32-bit, 64-bit
13:28:32 Address sizes:            40 bits physical, 48 bits virtual
13:28:32 Byte Order:               Little Endian
13:28:32 CPU(s):                   4
13:28:32 On-line CPU(s) list:      0-3
13:28:32 Vendor ID:                AuthenticAMD
13:28:32 Model name:               AMD EPYC-Rome Processor
13:28:32 CPU family:               23
13:28:32 Model:                    49
13:28:32 Thread(s) per core:       1
13:28:32 Core(s) per socket:       1
13:28:32 Socket(s):                4
13:28:32 Stepping:                 0
13:28:32 BogoMIPS:                 5599.99
13:28:32 Flags:                    fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm rep_good nopl cpuid extd_apicid tsc_known_freq pni pclmulqdq ssse3 fma cx16 sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm cmp_legacy svm cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw topoext perfctr_core ssbd ibrs ibpb stibp vmmcall fsgsbase tsc_adjust bmi1 avx2 smep bmi2 rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 clzero xsaveerptr wbnoinvd arat npt nrip_save umip rdpid arch_capabilities
13:28:32 Virtualization:           AMD-V
13:28:32 Hypervisor vendor:        KVM
13:28:32 Virtualization type:      full
13:28:32 L1d cache:                128 KiB (4 instances)
13:28:32 L1i cache:                128 KiB (4 instances)
13:28:32 L2 cache:                 2 MiB (4 instances)
13:28:32 L3 cache:                 64 MiB (4 instances)
13:28:32 NUMA node(s):             1
13:28:32 NUMA node0 CPU(s):        0-3
13:28:32 Vulnerability Gather data sampling:     Not affected
13:28:32 Vulnerability Indirect target selection: Not affected
13:28:32 Vulnerability Itlb multihit:            Not affected
13:28:32 Vulnerability L1tf:                     Not affected
13:28:32 Vulnerability Mds:                      Not affected
13:28:32 Vulnerability Meltdown:                 Not affected
13:28:32 Vulnerability Mmio stale data:          Not affected
13:28:32 Vulnerability Reg file data sampling:   Not affected
13:28:32 Vulnerability Retbleed:                 Mitigation; untrained return thunk; SMT disabled
13:28:32 Vulnerability Spec rstack overflow:     Mitigation; SMT disabled
13:28:32 Vulnerability Spec store bypass:        Mitigation; Speculative Store Bypass disabled via prctl and seccomp
13:28:32 Vulnerability Spectre v1:               Mitigation; usercopy/swapgs barriers and __user pointer sanitization
13:28:32 Vulnerability Spectre v2:               Mitigation; Retpolines; IBPB conditional; STIBP disabled; RSB filling; PBRSB-eIBRS Not affected; BHI Not affected
13:28:32 Vulnerability Srbds:                    Not affected
13:28:32 Vulnerability Tsa:                      Not affected
13:28:32 Vulnerability Tsx async abort:          Not affected
13:28:32 Vulnerability Vmscape:                  Not affected
13:28:32 
13:28:32 
13:28:32 ---> nproc:
13:28:32 4
13:28:32 
13:28:32 
13:28:32 ---> df -h:
13:28:32 Filesystem      Size  Used Avail Use% Mounted on
13:28:32 tmpfs           1.6G  1.1M  1.6G   1% /run
13:28:32 /dev/vda1        78G   18G   61G  23% /
13:28:32 tmpfs           7.9G     0  7.9G   0% /dev/shm
13:28:32 tmpfs           5.0M     0  5.0M   0% /run/lock
13:28:32 /dev/vda15      105M  6.1M   99M   6% /boot/efi
13:28:32 tmpfs           1.6G  4.0K  1.6G   1% /run/user/1001
13:28:32 
13:28:32 
13:28:32 ---> free -m:
13:28:32                total        used        free      shared  buff/cache   available
13:28:32 Mem:           15989         711       10660           3        4617       14927
13:28:32 Swap:           1023           1        1022
13:28:32 
13:28:32 
13:28:32 ---> ip addr:
13:28:32 1: lo: mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
13:28:32     link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
13:28:32     inet 127.0.0.1/8 scope host lo
13:28:32        valid_lft forever preferred_lft forever
13:28:32     inet6 ::1/128 scope host
13:28:32        valid_lft forever preferred_lft forever
13:28:32 2: ens3: mtu 1458 qdisc mq state UP group default qlen 1000
13:28:32     link/ether fa:16:3e:a7:b7:a9 brd ff:ff:ff:ff:ff:ff
13:28:32     altname enp0s3
13:28:32     inet 10.30.170.186/23 metric 100 brd 10.30.171.255 scope global dynamic ens3
13:28:32        valid_lft 79916sec preferred_lft 79916sec
13:28:32     inet6 fe80::f816:3eff:fea7:b7a9/64 scope link
13:28:32        valid_lft forever preferred_lft forever
13:28:32 3: docker0: mtu 1458 qdisc noqueue state DOWN group default
13:28:32     link/ether 9e:85:fc:1b:a7:37 brd ff:ff:ff:ff:ff:ff
13:28:32     inet 10.250.0.254/24 brd 10.250.0.255 scope global docker0
13:28:32        valid_lft forever preferred_lft forever
13:28:32 
13:28:32 
13:28:32 ---> sar -b -r -n DEV:
13:28:32 Linux 5.15.0-171-generic (prd-ubuntu2204-docker-4c-16g-58912)  03/05/26  _x86_64_  (4 CPU)
13:28:32 
13:28:32 11:40:31 LINUX RESTART (4 CPU)
13:28:32 
13:28:32 11:50:00      tps   rtps   wtps   dtps  bread/s  bwrtn/s  bdscd/s
13:28:32 12:00:01    37.52   3.50  31.84   2.18   126.17  3612.84  4034.54
13:28:32 12:10:20     4.60   0.05   4.37   0.18     1.47   113.92   537.52
13:28:32 12:20:00     7.39   0.05   6.95   0.39     5.20   238.74  1285.50
13:28:32 12:30:01    19.30   2.06  16.42   0.82    97.76   720.58   820.92
13:28:32 12:40:20    12.62   0.01  11.93   0.67     0.61   407.63   208.30
13:28:32 12:50:00     5.65   0.00   5.40   0.24     0.46   151.50    83.72
13:28:32 13:00:01     5.84   0.01   5.64   0.19     1.06   163.11    80.27
13:28:32 13:10:20     4.96   0.00   4.78   0.18     0.36   148.91    64.30
13:28:32 13:20:00    11.54   0.21  10.82   0.51     3.02   679.19   238.07
13:28:32 Average:    12.16   0.65  10.91   0.60    26.26   692.45   815.10
13:28:32 
13:28:32 11:50:00 kbmemfree  kbavail kbmemused %memused kbbuffers kbcached kbcommit %commit kbactive kbinact kbdirty
13:28:32 12:00:01   6887620 10313784   5613596    34.29    258236  3139380  6839556   39.26  1960084 6938476     112
13:28:32 12:10:20   6593176 10027396   5900088    36.04    259068  3146460  7018748   40.29  1975476 7214708     140
13:28:32 12:20:00   6240340  9693016   6233508    38.07    260452  3163560  6925460   39.75  1983680 7551280     732
13:28:32 12:30:01   9953624 13578096   2350896    14.36    266608  3329232  3477316   19.96  2016088 3830508   12244
13:28:32 12:40:20   7801948 11481444   4446284    27.16    268800  3382048  5175968   29.71  2020248 5973260      96
13:28:32 12:50:00   7753376 11451620   4475988    27.34    269416  3400184  5198032   29.84  2021452 6012148     148
13:28:32 13:00:01   6031368  9750876   6175940    37.72    270176  3420692  6883548   39.51  2023864 7732844      88
13:28:32 13:10:20  11538112 15300076    628556     3.84    271084  3462428  1328352    7.62  2028648 2221516   20972
13:28:32 13:20:00   7476692 11366860   4560624    27.85    275244  3581140  5152356   29.57  2066360 6243204     292
13:28:32 Average:   7808473 11440352   4487276    27.41    266565  3336125  5333260   30.61  2010656 5968660    3869
13:28:32 
13:28:32 11:50:00    IFACE  rxpck/s  txpck/s   rxkB/s   txkB/s  rxcmp/s  txcmp/s rxmcst/s  %ifutil
13:28:32 12:00:01       lo    19.32    19.32    15.51    15.51     0.00     0.00     0.00     0.00
13:28:32 12:00:01     ens3     2.44     2.36     0.65     2.38     0.00     0.00     0.00     0.00
13:28:32 12:00:01  docker0     0.00     0.00     0.00     0.00     0.00     0.00     0.00     0.00
13:28:32 12:10:20       lo    10.35    10.35     5.19     5.19     0.00     0.00     0.00     0.00
13:28:32 12:10:20     ens3     0.57     0.46     0.16     0.12     0.00     0.00     0.00     0.00
13:28:32 12:10:20  docker0     0.00     0.00     0.00     0.00     0.00     0.00     0.00     0.00
13:28:32 12:20:00       lo    12.39    12.39     7.34     7.34     0.00     0.00     0.00     0.00
13:28:32 12:20:00     ens3     0.56     0.52     0.13     0.10     0.00     0.00     0.00     0.00
13:28:32 12:20:00  docker0     0.00     0.00     0.00     0.00     0.00     0.00     0.00     0.00
13:28:32 12:30:01       lo    10.34    10.34     5.00     5.00     0.00     0.00     0.00     0.00
13:28:32 12:30:01     ens3     0.86     0.81     0.23     0.19     0.00     0.00     0.00     0.00
13:28:32 12:30:01  docker0     0.00     0.00     0.00     0.00     0.00     0.00     0.00     0.00
13:28:32 12:40:20       lo    20.62    20.62    11.01    11.01     0.00     0.00     0.00     0.00
13:28:32 12:40:20     ens3     1.12     0.74     0.29     0.21     0.00     0.00     0.00     0.00
13:28:32 12:40:20  docker0     0.00     0.00     0.00     0.00     0.00     0.00     0.00     0.00
13:28:32 12:50:00       lo    24.14    24.14     9.04     9.04     0.00     0.00     0.00     0.00
13:28:32 12:50:00     ens3     0.92     0.57     0.30     0.21     0.00     0.00     0.00     0.00
13:28:32 12:50:00  docker0     0.00     0.00     0.00     0.00     0.00     0.00     0.00     0.00
13:28:32 13:00:01       lo    25.05    25.05    11.29    11.29     0.00     0.00     0.00     0.00
13:28:32 13:00:01     ens3     1.65     0.52     0.22     0.14     0.00     0.00     0.00     0.00
13:28:32 13:00:01  docker0     0.00     0.00     0.00     0.00     0.00     0.00     0.00     0.00
13:28:32 13:10:20       lo    17.48    17.48    10.28    10.28     0.00     0.00     0.00     0.00
13:28:32 13:10:20     ens3     0.83     0.63     0.23     0.17     0.00     0.00     0.00     0.00
13:28:32 13:10:20  docker0     0.00     0.00     0.00     0.00     0.00     0.00     0.00     0.00
13:28:32 13:20:00       lo    18.79    18.79    10.39    10.39     0.00     0.00     0.00     0.00
13:28:32 13:20:00     ens3     0.95     0.96     0.25     0.22     0.00     0.00     0.00     0.00
13:28:32 13:20:00  docker0     0.00     0.00     0.00     0.00     0.00     0.00     0.00     0.00
13:28:32 Average:       lo    17.59    17.59     9.45     9.45     0.00     0.00     0.00     0.00
13:28:32 Average:     ens3     1.10     0.84     0.27     0.42     0.00     0.00     0.00     0.00
13:28:32 Average:  docker0     0.00     0.00     0.00     0.00     0.00     0.00     0.00     0.00
13:28:32 
13:28:32 
13:28:32 ---> sar -P ALL:
13:28:32 Linux 5.15.0-171-generic (prd-ubuntu2204-docker-4c-16g-58912)  03/05/26  _x86_64_  (4 CPU)
13:28:32 
13:28:32 11:40:31 LINUX RESTART (4 CPU)
13:28:32 
13:28:32 11:50:00  CPU  %user  %nice  %system  %iowait  %steal  %idle
13:28:32 12:00:01  all  34.71   0.00     1.46     0.06    0.10   63.67
13:28:32 12:00:01    0  34.57   0.00     1.45     0.03    0.10   63.85
13:28:32 12:00:01    1  35.39   0.00     1.44     0.12    0.10   62.95
13:28:32 12:00:01    2  34.13   0.00     1.51     0.02    0.10   64.23
13:28:32 12:00:01    3  34.76   0.00     1.42     0.08    0.10   63.64
13:28:32 12:10:20  all   8.27   0.00     0.41     0.02    0.08   91.23
13:28:32 12:10:20    0   8.32   0.00     0.39     0.00    0.08   91.22
13:28:32 12:10:20    1   8.30   0.00     0.47     0.03    0.09   91.12
13:28:32 12:10:20    2   8.77   0.00     0.43     0.01    0.08   90.71
13:28:32 12:10:20    3   7.68   0.00     0.36     0.03    0.07   91.86
13:28:32 12:20:00  all  16.83   0.00     0.61     0.02    0.09   82.44
13:28:32 12:20:00    0  16.74   0.00     0.68     0.01    0.08   82.48
13:28:32 12:20:00    1  16.29   0.00     0.57     0.04    0.10   83.00
13:28:32 12:20:00    2  16.78   0.00     0.56     0.03    0.10   82.53
13:28:32 12:20:00    3  17.51   0.00     0.64     0.02    0.08   81.75
13:28:32 12:30:01  all  23.41   0.00     0.86     0.07    0.10   75.56
13:28:32 12:30:01    0  23.77   0.00     0.88     0.04    0.10   75.21
13:28:32 12:30:01    1  24.54   0.00     0.88     0.08    0.09   74.41
13:28:32 12:30:01    2  22.84   0.00     0.90     0.05    0.10   76.11
13:28:32 12:30:01    3  22.49   0.00     0.79     0.12    0.09   76.51
13:28:32 12:40:20  all  21.11   0.00     0.75     0.05    0.09   78.00
13:28:32 12:40:20    0  20.90   0.00     0.73     0.06    0.09   78.21
13:28:32 12:40:20    1  21.44   0.00     0.77     0.02    0.09   77.68
13:28:32 12:40:20    2  20.63   0.00     0.77     0.04    0.09   78.48
13:28:32 12:40:20    3  21.47   0.00     0.74     0.08    0.08   77.62
13:28:32 12:50:00  all   8.38   0.00     0.39     0.02    0.08   91.14
13:28:32 12:50:00    0   8.21   0.00     0.30     0.01    0.07   91.42
13:28:32 12:50:00    1   8.63   0.00     0.47     0.04    0.08   90.77
13:28:32 12:50:00    2   8.33   0.00     0.42     0.03    0.08   91.14
13:28:32 12:50:00    3   8.34   0.00     0.35     0.01    0.07   91.23
13:28:32 13:00:01  all   9.87   0.00     0.40     0.02    0.08   89.63
13:28:32 13:00:01    0   9.88   0.00     0.38     0.01    0.08   89.65
13:28:32 13:00:01    1   9.89   0.00     0.43     0.04    0.09   89.56
13:28:32 13:00:01    2   9.76   0.00     0.44     0.00    0.08   89.71
13:28:32 13:00:01    3   9.96   0.00     0.34     0.02    0.08   89.60
13:28:32 13:10:20  all   9.39   0.00     0.35     0.02    0.09   90.16
13:28:32 13:10:20    0   9.34   0.00     0.38     0.03    0.08   90.16
13:28:32 13:10:20    1   9.25   0.00     0.31     0.01    0.09   90.33
13:28:32 13:10:20    2   9.53   0.00     0.39     0.01    0.08   89.99
13:28:32 13:10:20    3   9.43   0.00     0.32     0.01    0.09   90.15
13:28:32 13:20:00  all  19.40   0.00     0.64     0.05    0.09   79.83
13:28:32 13:20:00    0  19.23   0.00     0.56     0.01    0.10   80.11
13:28:32 13:20:00    1  18.71   0.00     0.54     0.02    0.09   80.65
13:28:32 13:20:00    2  19.43   0.00     0.73     0.10    0.09   79.64
13:28:32 13:20:00    3  20.21   0.00     0.73     0.05    0.09   78.92
13:28:32 Average:  all  16.81   0.00     0.65     0.04    0.09   82.41
13:28:32 Average:    0  16.77   0.00     0.64     0.02    0.09   82.48
13:28:32 Average:    1  16.94   0.00     0.65     0.04    0.09   82.27
13:28:32 Average:    2  16.68   0.00     0.68     0.03    0.09   82.51
13:28:32 Average:    3  16.86   0.00     0.63     0.05    0.08   82.37
13:28:32 
13:28:32 
13:28:32 
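For reference, the `for i in .tox/*/log` collection loop traced after the tox run (env name derived with `awk -F/ '{print $2}'`, then `cp -r` into `archives/tox/<env>`) can be sketched as an equivalent Python helper. The function name is ours; the path layout is the workspace layout from this log:

```python
import shutil
from pathlib import Path


def archive_tox_logs(workspace: Path) -> list[str]:
    """Copy each .tox/<env>/log directory to archives/tox/<env>.

    Mirrors the shell loop in the log: the env name is the second
    path component of .tox/<env>/log (what awk -F/ '{print $2}' yields).
    """
    archived = []
    for log_dir in sorted((workspace / ".tox").glob("*/log")):
        tox_env = log_dir.parts[-2]  # e.g. "tests121" or "build_karaf_tests200"
        dest = workspace / "archives" / "tox" / tox_env
        # dirs_exist_ok keeps the copy idempotent, like cp -r re-runs
        shutil.copytree(log_dir, dest, dirs_exist_ok=True)
        archived.append(tox_env)
    return archived
```

Unlike the unquoted shell loop, the glob-based version handles env names with unusual characters safely and reports which environments were archived.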