20:48:12 Triggered by Gerrit: https://git.opendaylight.org/gerrit/c/transportpce/+/120829
20:48:12 Running as SYSTEM
20:48:12 [EnvInject] - Loading node environment variables.
20:48:12 Building remotely on prd-ubuntu2204-docker-4c-16g-81965 (ubuntu2204-docker-4c-16g) in workspace /w/workspace/transportpce-tox-verify-transportpce-master
20:48:12 [ssh-agent] Looking for ssh-agent implementation...
20:48:12 [ssh-agent] Exec ssh-agent (binary ssh-agent on a remote machine)
20:48:12 $ ssh-agent
20:48:12 SSH_AUTH_SOCK=/tmp/ssh-XXXXXXUH61ZW/agent.1575
20:48:12 SSH_AGENT_PID=1577
20:48:12 [ssh-agent] Started.
20:48:12 Running ssh-add (command line suppressed)
20:48:12 Identity added: /w/workspace/transportpce-tox-verify-transportpce-master@tmp/private_key_9400424230164424168.key (/w/workspace/transportpce-tox-verify-transportpce-master@tmp/private_key_9400424230164424168.key)
20:48:12 [ssh-agent] Using credentials jenkins (jenkins-ssh)
20:48:13 The recommended git tool is: NONE
20:48:14 using credential jenkins-ssh
20:48:14 Wiping out workspace first.
20:48:14 Cloning the remote Git repository
20:48:14 Cloning repository git://devvexx.opendaylight.org/mirror/transportpce
20:48:15 > git init /w/workspace/transportpce-tox-verify-transportpce-master # timeout=10
20:48:15 Fetching upstream changes from git://devvexx.opendaylight.org/mirror/transportpce
20:48:15 > git --version # timeout=10
20:48:15 > git --version # 'git version 2.34.1'
20:48:15 using GIT_SSH to set credentials jenkins-ssh
20:48:15 Verifying host key using known hosts file, will automatically accept unseen keys
20:48:15 > git fetch --tags --force --progress -- git://devvexx.opendaylight.org/mirror/transportpce +refs/heads/*:refs/remotes/origin/* # timeout=10
20:48:19 > git config remote.origin.url git://devvexx.opendaylight.org/mirror/transportpce # timeout=10
20:48:19 > git config --add remote.origin.fetch +refs/heads/*:refs/remotes/origin/* # timeout=10
20:48:19 > git config remote.origin.url git://devvexx.opendaylight.org/mirror/transportpce # timeout=10
20:48:19 Fetching upstream changes from git://devvexx.opendaylight.org/mirror/transportpce
20:48:19 using GIT_SSH to set credentials jenkins-ssh
20:48:19 Verifying host key using known hosts file, will automatically accept unseen keys
20:48:19 > git fetch --tags --force --progress -- git://devvexx.opendaylight.org/mirror/transportpce refs/changes/29/120829/8 # timeout=10
20:48:19 > git rev-parse 74c22cb8d7b022f7ffbdea1e6ab542f6bcf7f45d^{commit} # timeout=10
20:48:19 JENKINS-19022: warning: possible memory leak due to Git plugin usage; see: https://plugins.jenkins.io/git/#remove-git-plugin-buildsbybranch-builddata-script
20:48:19 Checking out Revision 74c22cb8d7b022f7ffbdea1e6ab542f6bcf7f45d (refs/changes/29/120829/8)
20:48:19 > git config core.sparsecheckout # timeout=10
20:48:19 > git checkout -f 74c22cb8d7b022f7ffbdea1e6ab542f6bcf7f45d # timeout=10
20:48:20 Commit message: "Support for openconfig 2.0"
20:48:20 > git rev-parse FETCH_HEAD^{commit} # timeout=10
20:48:20 > git rev-list --no-walk 509d781065379100eb9da8d0414bc0043a05ebc0 # timeout=10
20:48:20 > git remote # timeout=10
20:48:20 > git submodule init # timeout=10
20:48:20 > git submodule sync # timeout=10
20:48:20 > git config --get remote.origin.url # timeout=10
20:48:20 > git submodule init # timeout=10
20:48:20 > git config -f .gitmodules --get-regexp ^submodule\.(.+)\.url # timeout=10
20:48:20 ERROR: No submodules found.
20:48:23 provisioning config files...
20:48:23 copy managed file [npmrc] to file:/home/jenkins/.npmrc
20:48:23 copy managed file [pipconf] to file:/home/jenkins/.config/pip/pip.conf
20:48:23 [transportpce-tox-verify-transportpce-master] $ /bin/bash /tmp/jenkins1164273248434538554.sh
20:48:23 ---> python-tools-install.sh
20:48:23 Setup pyenv:
20:48:23 * system (set by /opt/pyenv/version)
20:48:23 * 3.8.20 (set by /opt/pyenv/version)
20:48:23 * 3.9.20 (set by /opt/pyenv/version)
20:48:23 3.10.15
20:48:23 3.11.10
20:48:28 lf-activate-venv(): INFO: Creating python3 venv at /tmp/venv-dYDK
20:48:28 lf-activate-venv(): INFO: Save venv in file: /tmp/.os_lf_venv
20:48:28 lf-activate-venv(): INFO: Installing base packages (pip, setuptools, virtualenv)
20:48:28 lf-activate-venv(): INFO: Attempting to install with network-safe options...
20:48:32 lf-activate-venv(): INFO: Base packages installed successfully
20:48:32 lf-activate-venv(): INFO: Installing additional packages: lftools
20:49:01 lf-activate-venv(): INFO: Adding /tmp/venv-dYDK/bin to PATH
20:49:01 Generating Requirements File
20:49:22 ERROR: pip's dependency resolver does not currently take into account all the packages that are installed. This behaviour is the source of the following dependency conflicts.
20:49:22 httplib2 0.30.2 requires pyparsing<4,>=3.0.4, but you have pyparsing 2.4.7 which is incompatible.
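The fetch of `refs/changes/29/120829/8` above follows Gerrit's change-ref layout: the shard directory is the change number modulo 100, then the change number, then the patchset. A minimal sketch deriving that ref (the change and patchset numbers are the ones from this build):

```shell
# Gerrit change-ref layout: refs/changes/<NN>/<change>/<patchset>,
# where <NN> is the change number's last two digits, zero-padded.
CHANGE=120829
PATCHSET=8
SHARD=$(printf '%02d' $((CHANGE % 100)))
REF="refs/changes/${SHARD}/${CHANGE}/${PATCHSET}"
echo "$REF"
```

A job would then pass `$REF` to `git fetch` the same way the trace above does.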
20:49:22 Python 3.11.10
20:49:22 pip 26.0.1 from /tmp/venv-dYDK/lib/python3.11/site-packages/pip (python 3.11)
20:49:23 appdirs==1.4.4
20:49:23 argcomplete==3.6.3
20:49:23 aspy.yaml==1.3.0
20:49:23 attrs==25.4.0
20:49:23 autopage==0.6.0
20:49:23 beautifulsoup4==4.14.3
20:49:23 boto3==1.42.61
20:49:23 botocore==1.42.61
20:49:23 bs4==0.0.2
20:49:23 certifi==2026.2.25
20:49:23 cffi==2.0.0
20:49:23 cfgv==3.5.0
20:49:23 chardet==7.0.1
20:49:23 charset-normalizer==3.4.4
20:49:23 click==8.3.1
20:49:23 cliff==4.13.2
20:49:23 cmd2==3.4.0
20:49:23 cryptography==3.3.2
20:49:23 debtcollector==3.0.0
20:49:23 decorator==5.2.1
20:49:23 defusedxml==0.7.1
20:49:23 Deprecated==1.3.1
20:49:23 distlib==0.4.0
20:49:23 dnspython==2.8.0
20:49:23 docker==7.1.0
20:49:23 dogpile.cache==1.5.0
20:49:23 durationpy==0.10
20:49:23 email-validator==2.3.0
20:49:23 filelock==3.25.0
20:49:23 future==1.0.0
20:49:23 gitdb==4.0.12
20:49:23 GitPython==3.1.46
20:49:23 httplib2==0.30.2
20:49:23 identify==2.6.17
20:49:23 idna==3.11
20:49:23 importlib-resources==1.5.0
20:49:23 iso8601==2.1.0
20:49:23 Jinja2==3.1.6
20:49:23 jmespath==1.1.0
20:49:23 jsonpatch==1.33
20:49:23 jsonpointer==3.0.0
20:49:23 jsonschema==4.26.0
20:49:23 jsonschema-specifications==2025.9.1
20:49:23 keystoneauth1==5.13.1
20:49:23 kubernetes==35.0.0
20:49:23 lftools==0.37.22
20:49:23 lxml==6.0.2
20:49:23 markdown-it-py==4.0.0
20:49:23 MarkupSafe==3.0.3
20:49:23 mdurl==0.1.2
20:49:23 msgpack==1.1.2
20:49:23 multi_key_dict==2.0.3
20:49:23 munch==4.0.0
20:49:23 netaddr==1.3.0
20:49:23 niet==1.4.2
20:49:23 nodeenv==1.10.0
20:49:23 oauth2client==4.1.3
20:49:23 oauthlib==3.3.1
20:49:23 openstacksdk==4.10.0
20:49:23 os-service-types==1.8.2
20:49:23 osc-lib==4.4.0
20:49:23 oslo.config==10.3.0
20:49:23 oslo.context==6.3.0
20:49:23 oslo.i18n==6.7.2
20:49:23 oslo.log==8.1.0
20:49:23 oslo.serialization==5.9.1
20:49:23 oslo.utils==10.0.0
20:49:23 packaging==26.0
20:49:23 pbr==7.0.3
20:49:23 platformdirs==4.9.4
20:49:23 prettytable==3.17.0
20:49:23 psutil==7.2.2
20:49:23 pyasn1==0.6.2
20:49:23 pyasn1_modules==0.4.2
20:49:23 pycparser==3.0
20:49:23 pygerrit2==2.0.15
20:49:23 PyGithub==2.8.1
20:49:23 Pygments==2.19.2
20:49:23 PyJWT==2.11.0
20:49:23 PyNaCl==1.6.2
20:49:23 pyparsing==2.4.7
20:49:23 pyperclip==1.11.0
20:49:23 pyrsistent==0.20.0
20:49:23 python-cinderclient==9.9.0
20:49:23 python-dateutil==2.9.0.post0
20:49:23 python-discovery==1.1.0
20:49:23 python-heatclient==5.1.0
20:49:23 python-jenkins==1.8.3
20:49:23 python-keystoneclient==5.8.0
20:49:23 python-magnumclient==4.10.0
20:49:23 python-openstackclient==9.0.0
20:49:23 python-swiftclient==4.10.0
20:49:23 PyYAML==6.0.3
20:49:23 referencing==0.37.0
20:49:23 requests==2.32.5
20:49:23 requests-oauthlib==2.0.0
20:49:23 requestsexceptions==1.4.0
20:49:23 rfc3986==2.0.0
20:49:23 rich==14.3.3
20:49:23 rich-argparse==1.7.2
20:49:23 rpds-py==0.30.0
20:49:23 rsa==4.9.1
20:49:23 ruamel.yaml==0.19.1
20:49:23 ruamel.yaml.clib==0.2.15
20:49:23 s3transfer==0.16.0
20:49:23 simplejson==3.20.2
20:49:23 six==1.17.0
20:49:23 smmap==5.0.2
20:49:23 soupsieve==2.8.3
20:49:23 stevedore==5.7.0
20:49:23 tabulate==0.10.0
20:49:23 toml==0.10.2
20:49:23 tomlkit==0.14.0
20:49:23 tqdm==4.67.3
20:49:23 typing_extensions==4.15.0
20:49:23 urllib3==1.26.20
20:49:23 virtualenv==21.1.0
20:49:23 wcwidth==0.6.0
20:49:23 websocket-client==1.9.0
20:49:23 wrapt==2.1.1
20:49:23 xdg==6.0.0
20:49:23 xmltodict==1.0.4
20:49:23 yq==3.4.3
20:49:23 [EnvInject] - Injecting environment variables from a build step.
20:49:23 [EnvInject] - Injecting as environment variables the properties content
20:49:23 PYTHON=python3
20:49:23
20:49:23 [EnvInject] - Variables injected successfully.
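The resolver warning above means httplib2 0.30.2 declares `pyparsing<4,>=3.0.4` while pyparsing 2.4.7 is installed. A minimal sketch, not part of the job, of how such a lower-bound violation can be detected with `sort -V` (the version numbers are the ones from this log):

```shell
# Check whether the installed version satisfies a ">=" lower bound.
# sort -V orders version strings numerically; if the installed version
# sorts before the bound (and differs from it), the requirement is unmet.
installed=2.4.7   # pyparsing actually installed
lower=3.0.4       # lower bound from httplib2's requirement pyparsing<4,>=3.0.4
lowest=$(printf '%s\n%s\n' "$installed" "$lower" | sort -V | head -n 1)
if [ "$lowest" = "$installed" ] && [ "$installed" != "$lower" ]; then
  echo "conflict: pyparsing $installed < required $lower"
fi
```

In practice `pip check` reports the same class of conflict after the fact, which is what pip's warning in the log amounts to.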
20:49:23 [transportpce-tox-verify-transportpce-master] $ /bin/bash -l /tmp/jenkins11840408078294501813.sh
20:49:23 ---> tox-install.sh
20:49:23 + source /home/jenkins/lf-env.sh
20:49:23 + lf-activate-venv --venv-file /tmp/.toxenv tox virtualenv urllib3~=1.26.15
20:49:23 ++ mktemp -d /tmp/venv-XXXX
20:49:23 + lf_venv=/tmp/venv-k2SX
20:49:23 + local venv_file=/tmp/.os_lf_venv
20:49:23 + local python=python3
20:49:23 + local options
20:49:23 + local set_path=true
20:49:23 + local install_args=
20:49:23 ++ getopt -o np:v: -l no-path,system-site-packages,python:,venv-file: -n lf-activate-venv -- --venv-file /tmp/.toxenv tox virtualenv urllib3~=1.26.15
20:49:23 + options=' --venv-file '\''/tmp/.toxenv'\'' -- '\''tox'\'' '\''virtualenv'\'' '\''urllib3~=1.26.15'\'''
20:49:23 + eval set -- ' --venv-file '\''/tmp/.toxenv'\'' -- '\''tox'\'' '\''virtualenv'\'' '\''urllib3~=1.26.15'\'''
20:49:23 ++ set -- --venv-file /tmp/.toxenv -- tox virtualenv urllib3~=1.26.15
20:49:23 + true
20:49:23 + case $1 in
20:49:23 + venv_file=/tmp/.toxenv
20:49:23 + shift 2
20:49:23 + true
20:49:23 + case $1 in
20:49:23 + shift
20:49:23 + break
20:49:23 + case $python in
20:49:23 + local pkg_list=
20:49:23 + [[ -d /opt/pyenv ]]
20:49:23 + echo 'Setup pyenv:'
20:49:23 Setup pyenv:
20:49:23 + export PYENV_ROOT=/opt/pyenv
20:49:23 + PYENV_ROOT=/opt/pyenv
20:49:23 + export PATH=/opt/pyenv/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/opt/puppetlabs/bin
20:49:23 + PATH=/opt/pyenv/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/opt/puppetlabs/bin
20:49:23 + pyenv versions
20:49:23 system
20:49:23 3.8.20
20:49:23 3.9.20
20:49:23 3.10.15
20:49:23 * 3.11.10 (set by /w/workspace/transportpce-tox-verify-transportpce-master/.python-version)
20:49:23 + command -v pyenv
20:49:23 ++ pyenv init - --no-rehash
20:49:23 + eval 'PATH="$(bash --norc -ec '\''IFS=:; paths=($PATH);
20:49:23 for i in ${!paths[@]}; do
20:49:23 if [[ ${paths[i]} == "'\'''\''/opt/pyenv/shims'\'''\''" ]]; then unset '\''\'\'''\''paths[i]'\''\'\'''\'';
20:49:23 fi; done;
20:49:23 echo "${paths[*]}"'\'')"
20:49:23 export PATH="/opt/pyenv/shims:${PATH}"
20:49:23 export PYENV_SHELL=bash
20:49:23 source '\''/opt/pyenv/libexec/../completions/pyenv.bash'\''
20:49:23 pyenv() {
20:49:23 local command
20:49:23 command="${1:-}"
20:49:23 if [ "$#" -gt 0 ]; then
20:49:23 shift
20:49:23 fi
20:49:23
20:49:23 case "$command" in
20:49:23 rehash|shell)
20:49:23 eval "$(pyenv "sh-$command" "$@")"
20:49:23 ;;
20:49:23 *)
20:49:23 command pyenv "$command" "$@"
20:49:23 ;;
20:49:23 esac
20:49:23 }'
20:49:23 +++ bash --norc -ec 'IFS=:; paths=($PATH);
20:49:23 for i in ${!paths[@]}; do
20:49:23 if [[ ${paths[i]} == "/opt/pyenv/shims" ]]; then unset '\''paths[i]'\'';
20:49:23 fi; done;
20:49:23 echo "${paths[*]}"'
20:49:23 ++ PATH=/opt/pyenv/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/opt/puppetlabs/bin
20:49:23 ++ export PATH=/opt/pyenv/shims:/opt/pyenv/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/opt/puppetlabs/bin
20:49:23 ++ PATH=/opt/pyenv/shims:/opt/pyenv/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/opt/puppetlabs/bin
20:49:23 ++ export PYENV_SHELL=bash
20:49:23 ++ PYENV_SHELL=bash
20:49:23 ++ source /opt/pyenv/libexec/../completions/pyenv.bash
20:49:23 +++ complete -F _pyenv pyenv
20:49:23 ++ lf-pyver python3
20:49:23 ++ local py_version_xy=python3
20:49:23 ++ local py_version_xyz=
20:49:23 ++ pyenv versions
20:49:23 ++ sed 's/^[ *]* //'
20:49:23 ++ local command
20:49:23 ++ command=versions
20:49:23 ++ '[' 1 -gt 0 ']'
20:49:23 ++ shift
20:49:23 ++ case "$command" in
20:49:23 ++ command pyenv versions
20:49:23 ++ awk '{ print $1 }'
20:49:23 ++ grep -E '^[0-9.]*[0-9]$'
20:49:23 ++ [[ ! -s /tmp/.pyenv_versions ]]
20:49:23 +++ grep '^3' /tmp/.pyenv_versions
20:49:23 +++ sort -V
20:49:23 +++ tail -n 1
20:49:23 ++ py_version_xyz=3.11.10
20:49:23 ++ [[ -z 3.11.10 ]]
20:49:23 ++ echo 3.11.10
20:49:23 ++ return 0
20:49:23 + pyenv local 3.11.10
20:49:23 + local command
20:49:23 + command=local
20:49:23 + '[' 2 -gt 0 ']'
20:49:23 + shift
20:49:23 + case "$command" in
20:49:23 + command pyenv local 3.11.10
20:49:23 + for arg in "$@"
20:49:23 + case $arg in
20:49:23 + pkg_list+='tox '
20:49:23 + for arg in "$@"
20:49:23 + case $arg in
20:49:23 + pkg_list+='virtualenv '
20:49:23 + for arg in "$@"
20:49:23 + case $arg in
20:49:23 + pkg_list+='urllib3~=1.26.15 '
20:49:23 + [[ -f /tmp/.toxenv ]]
20:49:23 + [[ ! -f /tmp/.toxenv ]]
20:49:23 + [[ -n '' ]]
20:49:23 + python3 -m venv /tmp/venv-k2SX
20:49:27 + echo 'lf-activate-venv(): INFO: Creating python3 venv at /tmp/venv-k2SX'
20:49:27 lf-activate-venv(): INFO: Creating python3 venv at /tmp/venv-k2SX
20:49:27 + echo /tmp/venv-k2SX
20:49:27 + echo 'lf-activate-venv(): INFO: Save venv in file: /tmp/.toxenv'
20:49:27 lf-activate-venv(): INFO: Save venv in file: /tmp/.toxenv
20:49:27 + echo 'lf-activate-venv(): INFO: Installing base packages (pip, setuptools, virtualenv)'
20:49:27 lf-activate-venv(): INFO: Installing base packages (pip, setuptools, virtualenv)
20:49:27 + local 'pip_opts=--upgrade --quiet'
20:49:27 + pip_opts='--upgrade --quiet --trusted-host pypi.org'
20:49:27 + pip_opts='--upgrade --quiet --trusted-host pypi.org --trusted-host files.pythonhosted.org'
20:49:27 + pip_opts='--upgrade --quiet --trusted-host pypi.org --trusted-host files.pythonhosted.org --trusted-host pypi.python.org'
20:49:27 + [[ -n '' ]]
20:49:27 + [[ -n '' ]]
20:49:27 + echo 'lf-activate-venv(): INFO: Attempting to install with network-safe options...'
20:49:27 lf-activate-venv(): INFO: Attempting to install with network-safe options...
20:49:27 + /tmp/venv-k2SX/bin/python3 -m pip install --upgrade --quiet --trusted-host pypi.org --trusted-host files.pythonhosted.org --trusted-host pypi.python.org pip 'setuptools<66' virtualenv
20:49:32 + echo 'lf-activate-venv(): INFO: Base packages installed successfully'
20:49:32 lf-activate-venv(): INFO: Base packages installed successfully
20:49:32 + [[ -z tox virtualenv urllib3~=1.26.15 ]]
20:49:32 + echo 'lf-activate-venv(): INFO: Installing additional packages: tox virtualenv urllib3~=1.26.15 '
20:49:32 lf-activate-venv(): INFO: Installing additional packages: tox virtualenv urllib3~=1.26.15
20:49:32 + /tmp/venv-k2SX/bin/python3 -m pip install --upgrade --quiet --trusted-host pypi.org --trusted-host files.pythonhosted.org --trusted-host pypi.python.org --upgrade-strategy eager tox virtualenv urllib3~=1.26.15
20:49:34 + type python3
20:49:34 + true
20:49:34 + echo 'lf-activate-venv(): INFO: Adding /tmp/venv-k2SX/bin to PATH'
20:49:34 lf-activate-venv(): INFO: Adding /tmp/venv-k2SX/bin to PATH
20:49:34 + PATH=/tmp/venv-k2SX/bin:/opt/pyenv/shims:/opt/pyenv/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/opt/puppetlabs/bin
20:49:34 + return 0
20:49:34 + python3 --version
20:49:34 Python 3.11.10
20:49:34 + python3 -m pip --version
20:49:34 pip 26.0.1 from /tmp/venv-k2SX/lib/python3.11/site-packages/pip (python 3.11)
20:49:34 + python3 -m pip freeze
20:49:34 cachetools==7.0.2
20:49:34 colorama==0.4.6
20:49:34 distlib==0.4.0
20:49:34 filelock==3.25.0
20:49:34 packaging==26.0
20:49:34 platformdirs==4.9.4
20:49:34 pluggy==1.6.0
20:49:34 pyproject-api==1.10.0
20:49:34 python-discovery==1.1.0
20:49:34 tox==4.47.3
20:49:34 urllib3==1.26.20
20:49:34 virtualenv==21.1.0
20:49:35 [transportpce-tox-verify-transportpce-master] $ /bin/sh -xe /tmp/jenkins7030086773294983882.sh
20:49:35 [EnvInject] - Injecting environment variables from a build step.
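The `lf-activate-venv` trace above creates a venv once, records its path in a venv file (`/tmp/.toxenv`), and later invocations reuse it instead of creating a new one. A stripped-down sketch of that pattern, using a hypothetical `/tmp/.demo_venv` state file and `--without-pip` so it stays offline:

```shell
# Venv-file reuse pattern: create the venv on first run and save its
# path; on later runs, read the saved path and reuse the same venv.
venv_file=/tmp/.demo_venv
if [ -f "$venv_file" ]; then
  lf_venv=$(cat "$venv_file")
  echo "Reuse venv: $lf_venv"
else
  lf_venv=$(mktemp -d /tmp/venv-XXXX)
  python3 -m venv --without-pip "$lf_venv"   # --without-pip keeps this offline
  echo "$lf_venv" > "$venv_file"
  echo "Created venv: $lf_venv"
fi
PATH="$lf_venv/bin:$PATH"
```

This is why the second `lf-activate-venv` call later in the log prints "Reuse venv:/tmp/venv-k2SX" rather than creating /tmp/venv-pija.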
20:49:35 [EnvInject] - Injecting as environment variables the properties content
20:49:35 PARALLEL=True
20:49:35
20:49:35 [EnvInject] - Variables injected successfully.
20:49:35 [transportpce-tox-verify-transportpce-master] $ /bin/bash -l /tmp/jenkins4764951555786726546.sh
20:49:35 ---> tox-run.sh
20:49:35 + PATH=/home/jenkins/.local/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/opt/puppetlabs/bin
20:49:35 + ARCHIVE_TOX_DIR=/w/workspace/transportpce-tox-verify-transportpce-master/archives/tox
20:49:35 + ARCHIVE_DOC_DIR=/w/workspace/transportpce-tox-verify-transportpce-master/archives/docs
20:49:35 + mkdir -p /w/workspace/transportpce-tox-verify-transportpce-master/archives/tox
20:49:35 + cd /w/workspace/transportpce-tox-verify-transportpce-master/.
20:49:35 + source /home/jenkins/lf-env.sh
20:49:35 + lf-activate-venv --venv-file /tmp/.toxenv tox virtualenv urllib3~=1.26.15
20:49:35 ++ mktemp -d /tmp/venv-XXXX
20:49:35 + lf_venv=/tmp/venv-pija
20:49:35 + local venv_file=/tmp/.os_lf_venv
20:49:35 + local python=python3
20:49:35 + local options
20:49:35 + local set_path=true
20:49:35 + local install_args=
20:49:35 ++ getopt -o np:v: -l no-path,system-site-packages,python:,venv-file: -n lf-activate-venv -- --venv-file /tmp/.toxenv tox virtualenv urllib3~=1.26.15
20:49:35 + options=' --venv-file '\''/tmp/.toxenv'\'' -- '\''tox'\'' '\''virtualenv'\'' '\''urllib3~=1.26.15'\'''
20:49:35 + eval set -- ' --venv-file '\''/tmp/.toxenv'\'' -- '\''tox'\'' '\''virtualenv'\'' '\''urllib3~=1.26.15'\'''
20:49:35 ++ set -- --venv-file /tmp/.toxenv -- tox virtualenv urllib3~=1.26.15
20:49:35 + true
20:49:35 + case $1 in
20:49:35 + venv_file=/tmp/.toxenv
20:49:35 + shift 2
20:49:35 + true
20:49:35 + case $1 in
20:49:35 + shift
20:49:35 + break
20:49:35 + case $python in
20:49:35 + local pkg_list=
20:49:35 + [[ -d /opt/pyenv ]]
20:49:35 + echo 'Setup pyenv:'
20:49:35 Setup pyenv:
20:49:35 + export PYENV_ROOT=/opt/pyenv
20:49:35 + PYENV_ROOT=/opt/pyenv
20:49:35 + export PATH=/opt/pyenv/bin:/home/jenkins/.local/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/opt/puppetlabs/bin
20:49:35 + PATH=/opt/pyenv/bin:/home/jenkins/.local/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/opt/puppetlabs/bin
20:49:35 + pyenv versions
20:49:35 system
20:49:35 3.8.20
20:49:35 3.9.20
20:49:35 3.10.15
20:49:35 * 3.11.10 (set by /w/workspace/transportpce-tox-verify-transportpce-master/.python-version)
20:49:35 + command -v pyenv
20:49:35 ++ pyenv init - --no-rehash
20:49:35 + eval 'PATH="$(bash --norc -ec '\''IFS=:; paths=($PATH);
20:49:35 for i in ${!paths[@]}; do
20:49:35 if [[ ${paths[i]} == "'\'''\''/opt/pyenv/shims'\'''\''" ]]; then unset '\''\'\'''\''paths[i]'\''\'\'''\'';
20:49:35 fi; done;
20:49:35 echo "${paths[*]}"'\'')"
20:49:35 export PATH="/opt/pyenv/shims:${PATH}"
20:49:35 export PYENV_SHELL=bash
20:49:35 source '\''/opt/pyenv/libexec/../completions/pyenv.bash'\''
20:49:35 pyenv() {
20:49:35 local command
20:49:35 command="${1:-}"
20:49:35 if [ "$#" -gt 0 ]; then
20:49:35 shift
20:49:35 fi
20:49:35
20:49:35 case "$command" in
20:49:35 rehash|shell)
20:49:35 eval "$(pyenv "sh-$command" "$@")"
20:49:35 ;;
20:49:35 *)
20:49:35 command pyenv "$command" "$@"
20:49:35 ;;
20:49:35 esac
20:49:35 }'
20:49:35 +++ bash --norc -ec 'IFS=:; paths=($PATH);
20:49:35 for i in ${!paths[@]}; do
20:49:35 if [[ ${paths[i]} == "/opt/pyenv/shims" ]]; then unset '\''paths[i]'\'';
20:49:35 fi; done;
20:49:35 echo "${paths[*]}"'
20:49:35 ++ PATH=/opt/pyenv/bin:/home/jenkins/.local/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/opt/puppetlabs/bin
20:49:35 ++ export PATH=/opt/pyenv/shims:/opt/pyenv/bin:/home/jenkins/.local/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/opt/puppetlabs/bin
20:49:35 ++ PATH=/opt/pyenv/shims:/opt/pyenv/bin:/home/jenkins/.local/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/opt/puppetlabs/bin
20:49:35 ++ export PYENV_SHELL=bash
20:49:35 ++ PYENV_SHELL=bash
20:49:35 ++ source /opt/pyenv/libexec/../completions/pyenv.bash
20:49:35 +++ complete -F _pyenv pyenv
20:49:35 ++ lf-pyver python3
20:49:35 ++ local py_version_xy=python3
20:49:35 ++ local py_version_xyz=
20:49:35 ++ pyenv versions
20:49:35 ++ local command
20:49:35 ++ sed 's/^[ *]* //'
20:49:35 ++ command=versions
20:49:35 ++ '[' 1 -gt 0 ']'
20:49:35 ++ shift
20:49:35 ++ case "$command" in
20:49:35 ++ command pyenv versions
20:49:35 ++ awk '{ print $1 }'
20:49:35 ++ grep -E '^[0-9.]*[0-9]$'
20:49:35 ++ [[ ! -s /tmp/.pyenv_versions ]]
20:49:35 +++ grep '^3' /tmp/.pyenv_versions
20:49:35 +++ sort -V
20:49:35 +++ tail -n 1
20:49:35 ++ py_version_xyz=3.11.10
20:49:35 ++ [[ -z 3.11.10 ]]
20:49:35 ++ echo 3.11.10
20:49:35 ++ return 0
20:49:35 + pyenv local 3.11.10
20:49:35 + local command
20:49:35 + command=local
20:49:35 + '[' 2 -gt 0 ']'
20:49:35 + shift
20:49:35 + case "$command" in
20:49:35 + command pyenv local 3.11.10
20:49:35 + for arg in "$@"
20:49:35 + case $arg in
20:49:35 + pkg_list+='tox '
20:49:35 + for arg in "$@"
20:49:35 + case $arg in
20:49:35 + pkg_list+='virtualenv '
20:49:35 + for arg in "$@"
20:49:35 + case $arg in
20:49:35 + pkg_list+='urllib3~=1.26.15 '
20:49:35 + [[ -f /tmp/.toxenv ]]
20:49:35 ++ cat /tmp/.toxenv
20:49:35 + lf_venv=/tmp/venv-k2SX
20:49:35 + echo 'lf-activate-venv(): INFO: Reuse venv:/tmp/venv-k2SX from' file:/tmp/.toxenv
20:49:35 lf-activate-venv(): INFO: Reuse venv:/tmp/venv-k2SX from file:/tmp/.toxenv
20:49:35 + echo 'lf-activate-venv(): INFO: Installing base packages (pip, setuptools, virtualenv)'
20:49:35 lf-activate-venv(): INFO: Installing base packages (pip, setuptools, virtualenv)
20:49:35 + local 'pip_opts=--upgrade --quiet'
20:49:35 + pip_opts='--upgrade --quiet --trusted-host pypi.org'
20:49:35 + pip_opts='--upgrade --quiet --trusted-host pypi.org --trusted-host files.pythonhosted.org'
20:49:35 + pip_opts='--upgrade --quiet --trusted-host pypi.org --trusted-host files.pythonhosted.org --trusted-host pypi.python.org'
20:49:35 + [[ -n '' ]]
20:49:35 + [[ -n '' ]]
20:49:35 + echo 'lf-activate-venv(): INFO: Attempting to install with network-safe options...'
20:49:35 lf-activate-venv(): INFO: Attempting to install with network-safe options...
20:49:35 + /tmp/venv-k2SX/bin/python3 -m pip install --upgrade --quiet --trusted-host pypi.org --trusted-host files.pythonhosted.org --trusted-host pypi.python.org pip 'setuptools<66' virtualenv
20:49:36 + echo 'lf-activate-venv(): INFO: Base packages installed successfully'
20:49:36 lf-activate-venv(): INFO: Base packages installed successfully
20:49:36 + [[ -z tox virtualenv urllib3~=1.26.15 ]]
20:49:36 + echo 'lf-activate-venv(): INFO: Installing additional packages: tox virtualenv urllib3~=1.26.15 '
20:49:36 lf-activate-venv(): INFO: Installing additional packages: tox virtualenv urllib3~=1.26.15
20:49:36 + /tmp/venv-k2SX/bin/python3 -m pip install --upgrade --quiet --trusted-host pypi.org --trusted-host files.pythonhosted.org --trusted-host pypi.python.org --upgrade-strategy eager tox virtualenv urllib3~=1.26.15
20:49:37 + type python3
20:49:37 + true
20:49:37 + echo 'lf-activate-venv(): INFO: Adding /tmp/venv-k2SX/bin to PATH'
20:49:37 lf-activate-venv(): INFO: Adding /tmp/venv-k2SX/bin to PATH
20:49:37 + PATH=/tmp/venv-k2SX/bin:/opt/pyenv/shims:/opt/pyenv/bin:/home/jenkins/.local/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/opt/puppetlabs/bin
20:49:37 + return 0
20:49:37 + [[ -d /opt/pyenv ]]
20:49:37 + echo '---> Setting up pyenv'
20:49:37 ---> Setting up pyenv
20:49:37 + export PYENV_ROOT=/opt/pyenv
20:49:37 + PYENV_ROOT=/opt/pyenv
20:49:37 + export PATH=/opt/pyenv/bin:/tmp/venv-k2SX/bin:/opt/pyenv/shims:/opt/pyenv/bin:/home/jenkins/.local/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/opt/puppetlabs/bin
20:49:37 + PATH=/opt/pyenv/bin:/tmp/venv-k2SX/bin:/opt/pyenv/shims:/opt/pyenv/bin:/home/jenkins/.local/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/opt/puppetlabs/bin
20:49:37 ++ pwd
20:49:37 + PYTHONPATH=/w/workspace/transportpce-tox-verify-transportpce-master
20:49:37 + export PYTHONPATH
20:49:37 + export TOX_TESTENV_PASSENV=PYTHONPATH
20:49:37 + TOX_TESTENV_PASSENV=PYTHONPATH
20:49:37 + tox --version
20:49:37 4.47.3 from /tmp/venv-k2SX/lib/python3.11/site-packages/tox/__init__.py
20:49:37 + PARALLEL=True
20:49:37 + TOX_OPTIONS_LIST=
20:49:37 + [[ -n '' ]]
20:49:37 + case ${PARALLEL,,} in
20:49:37 + TOX_OPTIONS_LIST=' --parallel auto --parallel-live'
20:49:37 + tox --parallel auto --parallel-live
20:49:37 + tee -a /w/workspace/transportpce-tox-verify-transportpce-master/archives/tox/tox.log
20:49:39 checkbashisms: freeze> python -m pip freeze --all
20:49:39 docs: install_deps> python -I -m pip install -r docs/requirements.txt
20:49:39 docs-linkcheck: install_deps> python -I -m pip install -r docs/requirements.txt
20:49:39 buildcontroller: install_deps> python -I -m pip install 'setuptools>=7.0' -r /w/workspace/transportpce-tox-verify-transportpce-master/tests/requirements.txt -r /w/workspace/transportpce-tox-verify-transportpce-master/tests/test-requirements.txt
20:49:40 checkbashisms: pip==26.0.1,setuptools==82.0.0
20:49:40 checkbashisms: commands[0] /w/workspace/transportpce-tox-verify-transportpce-master/tests> ./fixCIcentOS8reposMirrors.sh
20:49:40 checkbashisms: commands[1] /w/workspace/transportpce-tox-verify-transportpce-master/tests> sh -c 'command checkbashisms>/dev/null || sudo yum install -y devscripts-checkbashisms || sudo yum install -y devscripts-minimal || sudo yum install -y devscripts || sudo yum install -y https://archives.fedoraproject.org/pub/archive/fedora/linux/releases/31/Everything/x86_64/os/Packages/d/devscripts-checkbashisms-2.19.6-2.fc31.x86_64.rpm || (echo "checkbashisms command not found - please install it (e.g. sudo apt-get install devscripts | yum install devscripts-minimal )" >&2 && exit 1)'
20:49:40 checkbashisms: commands[2] /w/workspace/transportpce-tox-verify-transportpce-master/tests> find . -not -path '*/\.*' -name '*.sh' -exec checkbashisms -f '{}' +
20:49:41 checkbashisms: OK ✔ in 3.34 seconds
20:49:41 pre-commit: install_deps> python -I -m pip install pre-commit
20:49:44 pre-commit: freeze> python -m pip freeze --all
20:49:44 pre-commit: cfgv==3.5.0,distlib==0.4.0,filelock==3.25.0,identify==2.6.17,nodeenv==1.10.0,pip==26.0.1,platformdirs==4.9.4,pre_commit==4.5.1,python-discovery==1.1.0,PyYAML==6.0.3,setuptools==82.0.0,virtualenv==21.1.0
20:49:44 pre-commit: commands[0] /w/workspace/transportpce-tox-verify-transportpce-master/tests> ./fixCIcentOS8reposMirrors.sh
20:49:44 pre-commit: commands[1] /w/workspace/transportpce-tox-verify-transportpce-master/tests> sh -c 'which cpan || sudo yum install -y perl-CPAN || (echo "cpan command not found - please install it (e.g. sudo apt-get install perl-modules | yum install perl-CPAN )" >&2 && exit 1)'
20:49:44 /usr/bin/cpan
20:49:44 pre-commit: commands[2] /w/workspace/transportpce-tox-verify-transportpce-master/tests> pre-commit run --all-files --show-diff-on-failure
20:49:45 [WARNING] hook id `remove-tabs` uses deprecated stage names (commit) which will be removed in a future version. run: `pre-commit migrate-config` to automatically fix this.
20:49:45 [WARNING] hook id `perltidy` uses deprecated stage names (commit) which will be removed in a future version. run: `pre-commit migrate-config` to automatically fix this.
20:49:45 [INFO] Initializing environment for https://github.com/pre-commit/pre-commit-hooks.
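The `tox --parallel auto --parallel-live` invocation above comes from tox-run.sh mapping `PARALLEL=True` to tox options through a lower-cased `case` match (`${PARALLEL,,}` in the bash trace). A portable sketch of the same mapping, using `tr` in place of the bash-only lowercase expansion:

```shell
# Map the job's PARALLEL property to tox CLI options, as tox-run.sh does.
# --parallel auto lets tox pick the worker count; --parallel-live streams
# each environment's output instead of buffering it.
PARALLEL=True
TOX_OPTIONS_LIST=
case $(printf '%s' "$PARALLEL" | tr '[:upper:]' '[:lower:]') in
  true) TOX_OPTIONS_LIST=' --parallel auto --parallel-live' ;;
esac
echo "tox$TOX_OPTIONS_LIST"
```

With any other value of PARALLEL, the options list stays empty and tox runs its environments sequentially.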
20:49:45 [WARNING] repo `https://github.com/pre-commit/pre-commit-hooks` uses deprecated stage names (commit, push) which will be removed in a future version. Hint: often `pre-commit autoupdate --repo https://github.com/pre-commit/pre-commit-hooks` will fix this. if it does not -- consider reporting an issue to that repo.
20:49:45 [INFO] Initializing environment for https://github.com/jorisroovers/gitlint.
20:49:45 [INFO] Initializing environment for https://github.com/jorisroovers/gitlint:./gitlint-core[trusted-deps].
20:49:46 [INFO] Initializing environment for https://github.com/Lucas-C/pre-commit-hooks.
20:49:46 [INFO] Initializing environment for https://github.com/pre-commit/mirrors-autopep8.
20:49:46 [INFO] Initializing environment for https://github.com/perltidy/perltidy.
20:49:46 buildcontroller: freeze> python -m pip freeze --all
20:49:47 buildcontroller: bcrypt==5.0.0,certifi==2026.2.25,cffi==2.0.0,charset-normalizer==3.4.4,cryptography==46.0.5,dict2xml==1.7.8,idna==3.11,iniconfig==2.3.0,invoke==2.2.1,lxml==6.0.2,netconf-client==3.5.0,packaging==26.0,paramiko==4.0.0,pip==26.0.1,pluggy==1.6.0,psutil==7.2.2,pycparser==3.0,Pygments==2.19.2,PyNaCl==1.6.2,pytest==9.0.2,requests==2.32.5,setuptools==82.0.0,urllib3==2.6.3
20:49:47 buildcontroller: commands[0] /w/workspace/transportpce-tox-verify-transportpce-master/tests> ./build_controller.sh
20:49:47 + update-java-alternatives -l
20:49:47 java-1.11.0-openjdk-amd64 1111 /usr/lib/jvm/java-1.11.0-openjdk-amd64
20:49:47 java-1.17.0-openjdk-amd64 1711 /usr/lib/jvm/java-1.17.0-openjdk-amd64
20:49:47 java-1.21.0-openjdk-amd64 2111 /usr/lib/jvm/java-1.21.0-openjdk-amd64
20:49:47 + sudo update-java-alternatives -s java-1.21.0-openjdk-amd64
20:49:47 update-alternatives: error: no alternatives for jaotc
20:49:47 update-alternatives: error: no alternatives for rmic
20:49:47 [INFO] Installing environment for https://github.com/pre-commit/pre-commit-hooks.
20:49:47 [INFO] Once installed this environment will be reused.
20:49:47 [INFO] This may take a few minutes...
20:49:47 + java -version
20:49:47 + sed -n ;s/.* version "\(.*\)\.\(.*\)\..*".*$/\1/p;
20:49:47 + JAVA_VER=21
20:49:47 + echo 21
20:49:47 21
20:49:47 + sed -n ;s/javac \(.*\)\.\(.*\)\..*.*$/\1/p;
20:49:47 + javac -version
20:49:47 + JAVAC_VER=21
20:49:47 + echo 21
20:49:47 + [ 21 -ge 21 ]
20:49:47 21
20:49:47 ok, java is 21 or newer
20:49:48 + [ 21 -ge 21 ]
20:49:48 + echo ok, java is 21 or newer
20:49:48 + wget -nv https://dlcdn.apache.org/maven/maven-3/3.9.12/binaries/apache-maven-3.9.12-bin.tar.gz -P /tmp
20:49:48 2026-03-05 20:49:48 URL:https://dlcdn.apache.org/maven/maven-3/3.9.12/binaries/apache-maven-3.9.12-bin.tar.gz [9233336/9233336] -> "/tmp/apache-maven-3.9.12-bin.tar.gz" [1]
20:49:48 + sudo mkdir -p /opt
20:49:48 + sudo tar xf /tmp/apache-maven-3.9.12-bin.tar.gz -C /opt
20:49:49 + sudo ln -s /opt/apache-maven-3.9.12 /opt/maven
20:49:49 + sudo ln -s /opt/maven/bin/mvn /usr/bin/mvn
20:49:49 + mvn --version
20:49:49 Apache Maven 3.9.12 (848fbb4bf2d427b72bdb2471c22fced7ebd9a7a1)
20:49:49 Maven home: /opt/maven
20:49:49 Java version: 21.0.10, vendor: Ubuntu, runtime: /usr/lib/jvm/java-21-openjdk-amd64
20:49:49 Default locale: en, platform encoding: UTF-8
20:49:49 OS name: "linux", version: "5.15.0-171-generic", arch: "amd64", family: "unix"
20:49:49 NOTE: Picked up JDK_JAVA_OPTIONS:
20:49:49 --add-opens=java.base/java.io=ALL-UNNAMED
20:49:49 --add-opens=java.base/java.lang=ALL-UNNAMED
20:49:49 --add-opens=java.base/java.lang.invoke=ALL-UNNAMED
20:49:49 --add-opens=java.base/java.lang.reflect=ALL-UNNAMED
20:49:49 --add-opens=java.base/java.net=ALL-UNNAMED
20:49:49 --add-opens=java.base/java.nio=ALL-UNNAMED
20:49:49 --add-opens=java.base/java.nio.charset=ALL-UNNAMED
20:49:49 --add-opens=java.base/java.nio.file=ALL-UNNAMED
20:49:49 --add-opens=java.base/java.util=ALL-UNNAMED
20:49:49 --add-opens=java.base/java.util.jar=ALL-UNNAMED
20:49:49 --add-opens=java.base/java.util.stream=ALL-UNNAMED
20:49:49 --add-opens=java.base/java.util.zip=ALL-UNNAMED
20:49:49 --add-opens java.base/sun.nio.ch=ALL-UNNAMED
20:49:49 --add-opens java.base/sun.nio.fs=ALL-UNNAMED
20:49:49 -Xlog:disable
20:49:52 [INFO] Installing environment for https://github.com/Lucas-C/pre-commit-hooks.
20:49:52 [INFO] Once installed this environment will be reused.
20:49:52 [INFO] This may take a few minutes...
20:49:59 [INFO] Installing environment for https://github.com/pre-commit/mirrors-autopep8.
20:49:59 [INFO] Once installed this environment will be reused.
20:49:59 [INFO] This may take a few minutes...
20:50:04 [INFO] Installing environment for https://github.com/perltidy/perltidy.
20:50:04 [INFO] Once installed this environment will be reused.
20:50:04 [INFO] This may take a few minutes...
20:50:06 docs: freeze> python -m pip freeze --all
20:50:06 docs: alabaster==1.0.0,attrs==25.4.0,babel==2.18.0,blockdiag==3.0.0,certifi==2026.2.25,charset-normalizer==3.4.4,contourpy==1.3.3,cycler==0.12.1,docutils==0.21.2,fonttools==4.61.1,funcparserlib==2.0.0a0,future==1.0.0,idna==3.11,imagesize==2.0.0,Jinja2==3.1.6,jsonschema==3.2.0,kiwisolver==1.4.9,lfdocs_conf==0.10.0,MarkupSafe==3.0.3,matplotlib==3.10.8,numpy==2.4.2,nwdiag==3.0.0,packaging==26.0,pillow==12.1.1,pip==26.0.1,Pygments==2.19.2,pyparsing==3.3.2,pyrsistent==0.20.0,python-dateutil==2.9.0.post0,PyYAML==6.0.3,requests==2.32.5,requests-file==1.5.1,roman-numerals==4.1.0,roman-numerals-py==4.1.0,seqdiag==3.0.0,setuptools==82.0.0,six==1.17.0,snowballstemmer==3.0.1,Sphinx==8.2.3,sphinx-bootstrap-theme==0.8.1,sphinx-data-viewer==0.1.5,sphinx-tabs==3.5.0,sphinx_rtd_theme==3.1.0,sphinxcontrib-applehelp==2.0.0,sphinxcontrib-blockdiag==3.0.0,sphinxcontrib-devhelp==2.0.0,sphinxcontrib-htmlhelp==2.1.0,sphinxcontrib-jquery==4.1,sphinxcontrib-jsmath==1.0.1,sphinxcontrib-needs==0.7.9,sphinxcontrib-nwdiag==2.0.0,sphinxcontrib-plantuml==0.31,sphinxcontrib-qthelp==2.0.0,sphinxcontrib-seqdiag==3.0.0,sphinxcontrib-serializinghtml==2.0.0,sphinxcontrib-swaggerdoc==0.1.7
,urllib3==2.6.3,webcolors==25.10.0 20:50:06 docs: commands[0] /w/workspace/transportpce-tox-verify-transportpce-master/tests> sphinx-build -q -W --keep-going -b html -n -d /w/workspace/transportpce-tox-verify-transportpce-master/.tox/docs/tmp/doctrees ../docs/ /w/workspace/transportpce-tox-verify-transportpce-master/docs/_build/html 20:50:07 docs-linkcheck: freeze> python -m pip freeze --all 20:50:08 docs-linkcheck: alabaster==1.0.0,attrs==25.4.0,babel==2.18.0,blockdiag==3.0.0,certifi==2026.2.25,charset-normalizer==3.4.4,contourpy==1.3.3,cycler==0.12.1,docutils==0.21.2,fonttools==4.61.1,funcparserlib==2.0.0a0,future==1.0.0,idna==3.11,imagesize==2.0.0,Jinja2==3.1.6,jsonschema==3.2.0,kiwisolver==1.4.9,lfdocs_conf==0.10.0,MarkupSafe==3.0.3,matplotlib==3.10.8,numpy==2.4.2,nwdiag==3.0.0,packaging==26.0,pillow==12.1.1,pip==26.0.1,Pygments==2.19.2,pyparsing==3.3.2,pyrsistent==0.20.0,python-dateutil==2.9.0.post0,PyYAML==6.0.3,requests==2.32.5,requests-file==1.5.1,roman-numerals==4.1.0,roman-numerals-py==4.1.0,seqdiag==3.0.0,setuptools==82.0.0,six==1.17.0,snowballstemmer==3.0.1,Sphinx==8.2.3,sphinx-bootstrap-theme==0.8.1,sphinx-data-viewer==0.1.5,sphinx-tabs==3.5.0,sphinx_rtd_theme==3.1.0,sphinxcontrib-applehelp==2.0.0,sphinxcontrib-blockdiag==3.0.0,sphinxcontrib-devhelp==2.0.0,sphinxcontrib-htmlhelp==2.1.0,sphinxcontrib-jquery==4.1,sphinxcontrib-jsmath==1.0.1,sphinxcontrib-needs==0.7.9,sphinxcontrib-nwdiag==2.0.0,sphinxcontrib-plantuml==0.31,sphinxcontrib-qthelp==2.0.0,sphinxcontrib-seqdiag==3.0.0,sphinxcontrib-serializinghtml==2.0.0,sphinxcontrib-swaggerdoc==0.1.7,urllib3==2.6.3,webcolors==25.10.0 20:50:08 docs-linkcheck: commands[0] /w/workspace/transportpce-tox-verify-transportpce-master/tests> sphinx-build -q -b linkcheck -d /w/workspace/transportpce-tox-verify-transportpce-master/.tox/docs-linkcheck/tmp/doctrees ../docs/ /w/workspace/transportpce-tox-verify-transportpce-master/docs/_build/linkcheck 20:50:11 docs: OK ✔ in 33.42 seconds 20:50:11 pylint: install_deps> 
python -I -m pip install 'pylint>=2.6.0' 20:50:17 docs-linkcheck: OK ✔ in 35.56 seconds 20:50:17 pylint: freeze> python -m pip freeze --all 20:50:17 trim trailing whitespace.................................................pylint: astroid==4.0.4,dill==0.4.1,isort==8.0.1,mccabe==0.7.0,pip==26.0.1,platformdirs==4.9.4,pylint==4.0.5,setuptools==82.0.0,tomlkit==0.14.0 20:50:17 pylint: commands[0] /w/workspace/transportpce-tox-verify-transportpce-master/tests> find transportpce_tests/ -name '*.py' -exec pylint --fail-under=10 --max-line-length=120 --disable=missing-docstring,import-error --disable=fixme --disable=duplicate-code '--module-rgx=([a-z0-9_]+$)|([0-9.]{1,30}$)' '--method-rgx=(([a-z_][a-zA-Z0-9_]{2,})|(_[a-z0-9_]*)|(__[a-zA-Z][a-zA-Z0-9_]+__))$' '--variable-rgx=[a-zA-Z_][a-zA-Z0-9_]{1,30}$' '{}' + 20:50:17 Passed 20:50:17 Tabs remover.............................................................Passed 20:50:18 autopep8.................................................................Passed 20:50:24 perltidy.................................................................Passed 20:50:25 pre-commit: commands[3] /w/workspace/transportpce-tox-verify-transportpce-master/tests> pre-commit run gitlint-ci --hook-stage manual 20:50:25 [WARNING] hook id `remove-tabs` uses deprecated stage names (commit) which will be removed in a future version. run: `pre-commit migrate-config` to automatically fix this. 20:50:25 [WARNING] hook id `perltidy` uses deprecated stage names (commit) which will be removed in a future version. run: `pre-commit migrate-config` to automatically fix this. 20:50:25 [INFO] Installing environment for https://github.com/jorisroovers/gitlint. 20:50:25 [INFO] Once installed this environment will be reused. 20:50:25 [INFO] This may take a few minutes... 
20:50:33 gitlint..................................................................Passed
20:50:43 
20:50:43 ------------------------------------
20:50:43 Your code has been rated at 10.00/10
20:50:43 
20:51:32 pre-commit: OK ✔ in 52.39 seconds
20:51:32 pylint: OK ✔ in 34.77 seconds
20:51:32 buildcontroller: OK ✔ in 1 minute 53.57 seconds
20:51:32 build_karaf_tests71: install_deps> python -I -m pip install 'setuptools>=7.0' -r /w/workspace/transportpce-tox-verify-transportpce-master/tests/requirements.txt -r /w/workspace/transportpce-tox-verify-transportpce-master/tests/test-requirements.txt
20:51:32 build_karaf_tests221: install_deps> python -I -m pip install 'setuptools>=7.0' -r /w/workspace/transportpce-tox-verify-transportpce-master/tests/requirements.txt -r /w/workspace/transportpce-tox-verify-transportpce-master/tests/test-requirements.txt
20:51:33 build_karaf_tests200: install_deps> python -I -m pip install 'setuptools>=7.0' -r /w/workspace/transportpce-tox-verify-transportpce-master/tests/requirements.txt -r /w/workspace/transportpce-tox-verify-transportpce-master/tests/test-requirements.txt
20:51:33 build_karaf_tests121: install_deps> python -I -m pip install 'setuptools>=7.0' -r /w/workspace/transportpce-tox-verify-transportpce-master/tests/requirements.txt -r /w/workspace/transportpce-tox-verify-transportpce-master/tests/test-requirements.txt
20:51:39 build_karaf_tests221: freeze> python -m pip freeze --all
20:51:39 build_karaf_tests71: freeze> python -m pip freeze --all
20:51:39 build_karaf_tests121: freeze> python -m pip freeze --all
20:51:39 build_karaf_tests200: freeze> python -m pip freeze --all
20:51:39 build_karaf_tests221: bcrypt==5.0.0,certifi==2026.2.25,cffi==2.0.0,charset-normalizer==3.4.4,cryptography==46.0.5,dict2xml==1.7.8,idna==3.11,iniconfig==2.3.0,invoke==2.2.1,lxml==6.0.2,netconf-client==3.5.0,packaging==26.0,paramiko==4.0.0,pip==26.0.1,pluggy==1.6.0,psutil==7.2.2,pycparser==3.0,Pygments==2.19.2,PyNaCl==1.6.2,pytest==9.0.2,requests==2.32.5,setuptools==82.0.0,urllib3==2.6.3
20:51:39 build_karaf_tests221: commands[0] /w/workspace/transportpce-tox-verify-transportpce-master/tests> ./build_karaf_for_tests.sh
20:51:39 build karaf in karaf221 with ./karaf221.env
20:51:40 build_karaf_tests71: bcrypt==5.0.0,certifi==2026.2.25,cffi==2.0.0,charset-normalizer==3.4.4,cryptography==46.0.5,dict2xml==1.7.8,idna==3.11,iniconfig==2.3.0,invoke==2.2.1,lxml==6.0.2,netconf-client==3.5.0,packaging==26.0,paramiko==4.0.0,pip==26.0.1,pluggy==1.6.0,psutil==7.2.2,pycparser==3.0,Pygments==2.19.2,PyNaCl==1.6.2,pytest==9.0.2,requests==2.32.5,setuptools==82.0.0,urllib3==2.6.3
20:51:40 build_karaf_tests71: commands[0] /w/workspace/transportpce-tox-verify-transportpce-master/tests> ./build_karaf_for_tests.sh
20:51:40 build_karaf_tests121: bcrypt==5.0.0,certifi==2026.2.25,cffi==2.0.0,charset-normalizer==3.4.4,cryptography==46.0.5,dict2xml==1.7.8,idna==3.11,iniconfig==2.3.0,invoke==2.2.1,lxml==6.0.2,netconf-client==3.5.0,packaging==26.0,paramiko==4.0.0,pip==26.0.1,pluggy==1.6.0,psutil==7.2.2,pycparser==3.0,Pygments==2.19.2,PyNaCl==1.6.2,pytest==9.0.2,requests==2.32.5,setuptools==82.0.0,urllib3==2.6.3
20:51:40 build_karaf_tests121: commands[0] /w/workspace/transportpce-tox-verify-transportpce-master/tests> ./build_karaf_for_tests.sh
20:51:40 build karaf in karaf121 with ./karaf121.env
20:51:40 build karaf in karaf71 with ./karaf71.env
20:51:40 build_karaf_tests200: bcrypt==5.0.0,certifi==2026.2.25,cffi==2.0.0,charset-normalizer==3.4.4,cryptography==46.0.5,dict2xml==1.7.8,idna==3.11,iniconfig==2.3.0,invoke==2.2.1,lxml==6.0.2,netconf-client==3.5.0,packaging==26.0,paramiko==4.0.0,pip==26.0.1,pluggy==1.6.0,psutil==7.2.2,pycparser==3.0,Pygments==2.19.2,PyNaCl==1.6.2,pytest==9.0.2,requests==2.32.5,setuptools==82.0.0,urllib3==2.6.3
20:51:40 build_karaf_tests200: commands[0] /w/workspace/transportpce-tox-verify-transportpce-master/tests> ./build_karaf_for_tests.sh
20:51:40 build karaf in karafoc200 with ./karafoc200.env
20:51:40 NOTE: Picked up JDK_JAVA_OPTIONS:
20:51:40 --add-opens=java.base/java.io=ALL-UNNAMED
20:51:40 --add-opens=java.base/java.lang=ALL-UNNAMED
20:51:40 --add-opens=java.base/java.lang.invoke=ALL-UNNAMED
20:51:40 --add-opens=java.base/java.lang.reflect=ALL-UNNAMED
20:51:40 --add-opens=java.base/java.net=ALL-UNNAMED
20:51:40 --add-opens=java.base/java.nio=ALL-UNNAMED
20:51:40 --add-opens=java.base/java.nio.charset=ALL-UNNAMED
20:51:40 --add-opens=java.base/java.nio.file=ALL-UNNAMED
20:51:40 --add-opens=java.base/java.util=ALL-UNNAMED
20:51:40 --add-opens=java.base/java.util.jar=ALL-UNNAMED
20:51:40 --add-opens=java.base/java.util.stream=ALL-UNNAMED
20:51:40 --add-opens=java.base/java.util.zip=ALL-UNNAMED
20:51:40 --add-opens java.base/sun.nio.ch=ALL-UNNAMED
20:51:40 --add-opens java.base/sun.nio.fs=ALL-UNNAMED
20:51:40 -Xlog:disable
20:51:40 NOTE: Picked up JDK_JAVA_OPTIONS:
20:51:40 --add-opens=java.base/java.io=ALL-UNNAMED
20:51:40 --add-opens=java.base/java.lang=ALL-UNNAMED
20:51:40 --add-opens=java.base/java.lang.invoke=ALL-UNNAMED
20:51:40 --add-opens=java.base/java.lang.reflect=ALL-UNNAMED
20:51:40 --add-opens=java.base/java.net=ALL-UNNAMED
20:51:40 --add-opens=java.base/java.nio=ALL-UNNAMED
20:51:40 --add-opens=java.base/java.nio.charset=ALL-UNNAMED
20:51:40 --add-opens=java.base/java.nio.file=ALL-UNNAMED
20:51:40 --add-opens=java.base/java.util=ALL-UNNAMED
20:51:40 --add-opens=java.base/java.util.jar=ALL-UNNAMED
20:51:40 --add-opens=java.base/java.util.stream=ALL-UNNAMED
20:51:40 --add-opens=java.base/java.util.zip=ALL-UNNAMED
20:51:40 --add-opens java.base/sun.nio.ch=ALL-UNNAMED
20:51:40 --add-opens java.base/sun.nio.fs=ALL-UNNAMED
20:51:40 -Xlog:disable
20:51:40 NOTE: Picked up JDK_JAVA_OPTIONS:
20:51:40 --add-opens=java.base/java.io=ALL-UNNAMED
20:51:40 --add-opens=java.base/java.lang=ALL-UNNAMED
20:51:40 --add-opens=java.base/java.lang.invoke=ALL-UNNAMED
20:51:40 --add-opens=java.base/java.lang.reflect=ALL-UNNAMED
20:51:40 --add-opens=java.base/java.net=ALL-UNNAMED
20:51:40 --add-opens=java.base/java.nio=ALL-UNNAMED
20:51:40 --add-opens=java.base/java.nio.charset=ALL-UNNAMED
20:51:40 --add-opens=java.base/java.nio.file=ALL-UNNAMED
20:51:40 --add-opens=java.base/java.util=ALL-UNNAMED
20:51:40 --add-opens=java.base/java.util.jar=ALL-UNNAMED
20:51:40 --add-opens=java.base/java.util.stream=ALL-UNNAMED
20:51:40 --add-opens=java.base/java.util.zip=ALL-UNNAMED
20:51:40 --add-opens java.base/sun.nio.ch=ALL-UNNAMED
20:51:40 --add-opens java.base/sun.nio.fs=ALL-UNNAMED
20:51:40 -Xlog:disable
20:51:40 NOTE: Picked up JDK_JAVA_OPTIONS:
20:51:40 --add-opens=java.base/java.io=ALL-UNNAMED
20:51:40 --add-opens=java.base/java.lang=ALL-UNNAMED
20:51:40 --add-opens=java.base/java.lang.invoke=ALL-UNNAMED
20:51:40 --add-opens=java.base/java.lang.reflect=ALL-UNNAMED
20:51:40 --add-opens=java.base/java.net=ALL-UNNAMED
20:51:40 --add-opens=java.base/java.nio=ALL-UNNAMED
20:51:40 --add-opens=java.base/java.nio.charset=ALL-UNNAMED
20:51:40 --add-opens=java.base/java.nio.file=ALL-UNNAMED
20:51:40 --add-opens=java.base/java.util=ALL-UNNAMED
20:51:40 --add-opens=java.base/java.util.jar=ALL-UNNAMED
20:51:40 --add-opens=java.base/java.util.stream=ALL-UNNAMED
20:51:40 --add-opens=java.base/java.util.zip=ALL-UNNAMED
20:51:40 --add-opens java.base/sun.nio.ch=ALL-UNNAMED
20:51:40 --add-opens java.base/sun.nio.fs=ALL-UNNAMED
20:51:40 -Xlog:disable
20:52:46 build_karaf_tests71: OK ✔ in 1 minute 14.13 seconds
20:52:46 build_karaf_tests221: OK ✔ in 1 minute 14.15 seconds
20:52:46 build_karaf_tests200: OK ✔ in 1 minute 14.16 seconds
20:52:46 buildlighty: install_deps> python -I -m pip install 'setuptools>=7.0' -r /w/workspace/transportpce-tox-verify-transportpce-master/tests/requirements.txt -r /w/workspace/transportpce-tox-verify-transportpce-master/tests/test-requirements.txt
20:52:46 sims: install_deps> python -I -m pip install 'setuptools>=7.0' -r /w/workspace/transportpce-tox-verify-transportpce-master/tests/requirements.txt -r /w/workspace/transportpce-tox-verify-transportpce-master/tests/test-requirements.txt
20:52:47 build_karaf_tests121: OK ✔ in 1 minute 15.79 seconds
20:52:47 testsPCE: install_deps> python -I -m pip install gnpy4tpce==2.4.7 'setuptools>=7.0' -r /w/workspace/transportpce-tox-verify-transportpce-master/tests/requirements.txt -r /w/workspace/transportpce-tox-verify-transportpce-master/tests/test-requirements.txt
20:53:01 buildlighty: freeze> python -m pip freeze --all
20:53:01 buildlighty: bcrypt==5.0.0,certifi==2026.2.25,cffi==2.0.0,charset-normalizer==3.4.4,cryptography==46.0.5,dict2xml==1.7.8,idna==3.11,iniconfig==2.3.0,invoke==2.2.1,lxml==6.0.2,netconf-client==3.5.0,packaging==26.0,paramiko==4.0.0,pip==26.0.1,pluggy==1.6.0,psutil==7.2.2,pycparser==3.0,Pygments==2.19.2,PyNaCl==1.6.2,pytest==9.0.2,requests==2.32.5,setuptools==82.0.0,urllib3==2.6.3
20:53:01 buildlighty: commands[0] /w/workspace/transportpce-tox-verify-transportpce-master/lighty> ./build.sh
20:53:01 sims: freeze> python -m pip freeze --all
20:53:01 NOTE: Picked up JDK_JAVA_OPTIONS: --add-opens=java.base/java.lang=ALL-UNNAMED --add-opens=java.base/java.nio=ALL-UNNAMED
20:53:02 sims: bcrypt==5.0.0,certifi==2026.2.25,cffi==2.0.0,charset-normalizer==3.4.4,cryptography==46.0.5,dict2xml==1.7.8,idna==3.11,iniconfig==2.3.0,invoke==2.2.1,lxml==6.0.2,netconf-client==3.5.0,packaging==26.0,paramiko==4.0.0,pip==26.0.1,pluggy==1.6.0,psutil==7.2.2,pycparser==3.0,Pygments==2.19.2,PyNaCl==1.6.2,pytest==9.0.2,requests==2.32.5,setuptools==82.0.0,urllib3==2.6.3
20:53:02 sims: commands[0] /w/workspace/transportpce-tox-verify-transportpce-master/tests> ./install_lightynode.sh
20:53:02 Using lighynode version 22.1.0.7
20:53:02 Installing lightynode device to ./lightynode/lightynode-openroadm-device directory
20:53:07 sims: OK ✔ in 21.22 seconds
20:53:07 tests71: install_deps> python -I -m pip install 'setuptools>=7.0' -r /w/workspace/transportpce-tox-verify-transportpce-master/tests/requirements.txt -r /w/workspace/transportpce-tox-verify-transportpce-master/tests/test-requirements.txt
20:53:14 tests71: freeze> python -m pip freeze --all
20:53:15 tests71: bcrypt==5.0.0,certifi==2026.2.25,cffi==2.0.0,charset-normalizer==3.4.4,cryptography==46.0.5,dict2xml==1.7.8,idna==3.11,iniconfig==2.3.0,invoke==2.2.1,lxml==6.0.2,netconf-client==3.5.0,packaging==26.0,paramiko==4.0.0,pip==26.0.1,pluggy==1.6.0,psutil==7.2.2,pycparser==3.0,Pygments==2.19.2,PyNaCl==1.6.2,pytest==9.0.2,requests==2.32.5,setuptools==82.0.0,urllib3==2.6.3
20:53:15 tests71: commands[0] /w/workspace/transportpce-tox-verify-transportpce-master/tests> ./launch_tests.sh 7.1
20:53:15 using environment variables from ./karaf71.env
20:53:15 pytest -q transportpce_tests/7.1/test01_portmapping.py
20:53:53 buildlighty: OK ✔ in 47.56 seconds
20:53:53 testsPCE: freeze> python -m pip freeze --all
20:53:54 .testsPCE: bcrypt==5.0.0,certifi==2026.2.25,cffi==2.0.0,charset-normalizer==3.4.4,click==8.3.1,contourpy==1.3.3,cryptography==3.3.2,cycler==0.12.1,dict2xml==1.7.8,Flask==2.1.3,Flask-Injector==0.14.0,fonttools==4.61.1,gnpy4tpce==2.4.7,idna==3.11,iniconfig==2.3.0,injector==0.24.0,invoke==2.2.1,itsdangerous==2.2.0,Jinja2==3.1.6,kiwisolver==1.4.9,lxml==6.0.2,MarkupSafe==3.0.3,matplotlib==3.10.8,netconf-client==3.5.0,networkx==2.8.8,numpy==1.26.4,packaging==26.0,pandas==1.5.3,paramiko==4.0.0,pbr==5.11.1,pillow==12.1.1,pip==26.0.1,pluggy==1.6.0,psutil==7.2.2,pycparser==3.0,Pygments==2.19.2,PyNaCl==1.6.2,pyparsing==3.3.2,pytest==9.0.2,python-dateutil==2.9.0.post0,pytz==2026.1.post1,requests==2.32.5,scipy==1.17.1,setuptools==50.3.2,six==1.17.0,urllib3==2.6.3,Werkzeug==2.0.3,xlrd==1.2.0
20:53:54 testsPCE: commands[0] /w/workspace/transportpce-tox-verify-transportpce-master/tests> ./launch_tests.sh pce
20:53:54 pytest -q transportpce_tests/pce/test01_pce.py
20:53:55 ........... [100%]
20:54:07 12 passed in 52.19s
20:54:07 pytest -q transportpce_tests/7.1/test02_otn_renderer.py
20:54:41 ....................................................... [100%]
20:55:54 20 passed in 120.26s (0:02:00)
20:55:54 pytest -q transportpce_tests/pce/test02_pce_400G.py
20:55:56 .................................. [100%]
20:56:41 12 passed in 46.10s
20:56:41 pytest -q transportpce_tests/pce/test03_gnpy.py
20:56:42 ..... [100%]
20:56:52 62 passed in 164.92s (0:02:44)
20:56:52 pytest -q transportpce_tests/7.1/test03_renderer_or_modes.py
20:56:56 F...FFF. [100%]
20:57:12 =================================== FAILURES ===================================
20:57:12 _________________ TestTransportGnpy.test_00_load_port_mapping __________________
20:57:12 
20:57:12 self = 
20:57:12 
20:57:12     def test_00_load_port_mapping(self):
20:57:12         response = test_utils.post_portmapping(self.port_mapping_data)
20:57:12 >       self.assertIn(response['status_code'], (requests.codes.created, requests.codes.no_content))
20:57:12 E       AssertionError: 404 not found in (201, 204)
20:57:12 
20:57:12 transportpce_tests/pce/test03_gnpy.py:119: AssertionError
20:57:12 ---------------------------- Captured stdout setup -----------------------------
20:57:12 sample files content loaded
20:57:12 starting GNPy REST server...
20:57:12 starting OpenDaylight...
20:57:12 starting KARAF (karaf) TransportPCE build...
20:57:12 Searching for patterns in karaf.log... Pattern found! OpenDaylight started !
20:57:12 __________ TestTransportGnpy.test_04_path_computation_FeasibleWithPCE __________
20:57:12 
20:57:12 self = 
20:57:12 
20:57:12     def test_04_path_computation_FeasibleWithPCE(self):
20:57:12         response = test_utils.transportpce_api_rpc_request('transportpce-pce',
20:57:12                                                            'path-computation-request',
20:57:12                                                            self.path_computation_input_data)
20:57:12         self.assertEqual(response['status_code'], requests.codes.ok)
20:57:12 >       self.assertEqual(response['output']['configuration-response-common']['response-code'], '200')
20:57:12 E       AssertionError: '500' != '200'
20:57:12 E       - 500
20:57:12 E       ?  ^
20:57:12 E       + 200
20:57:12 E       ?  ^
20:57:12 
20:57:12 transportpce_tests/pce/test03_gnpy.py:144: AssertionError
20:57:12 ___ TestTransportGnpy.test_05_path_computation_FoundByPCE_NotFeasibleByGnpy ____
20:57:12 
20:57:12 self = 
20:57:12 
20:57:12     def test_05_path_computation_FoundByPCE_NotFeasibleByGnpy(self):
20:57:12         self.path_computation_input_data["service-name"] = "service-2"
20:57:12         self.path_computation_input_data["service-handler-header"]["request-id"] = "request-2"
20:57:12         self.path_computation_input_data["hard-constraints"] =\
20:57:12             {"include": {"node-id": ["OpenROADM-2", "OpenROADM-3", "OpenROADM-4"]}}
20:57:12         response = test_utils.transportpce_api_rpc_request('transportpce-pce',
20:57:12                                                            'path-computation-request',
20:57:12                                                            self.path_computation_input_data)
20:57:12         self.assertEqual(response['status_code'], requests.codes.ok)
20:57:12         self.assertEqual(response['output']['configuration-response-common'][
20:57:12             'response-code'], '500')
20:57:12         self.assertEqual(response['output']['configuration-response-common'][
20:57:12             'response-message'],
20:57:12             'No path available by PCE and GNPy ')
20:57:12         self.assertIn('A-to-Z',
20:57:12 >                     [response['output']['gnpy-response'][0]['path-dir'],
20:57:12                        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
20:57:12                       response['output']['gnpy-response'][1]['path-dir']])
20:57:12 E       KeyError: 'gnpy-response'
20:57:12 
20:57:12 transportpce_tests/pce/test03_gnpy.py:174: KeyError
20:57:12 ______ TestTransportGnpy.test_06_path_computation_FoundByPCE_FoundByGNPy _______
20:57:12 
20:57:12 self = 
20:57:12 
20:57:12     def test_06_path_computation_FoundByPCE_FoundByGNPy(self):
20:57:12         self.path_computation_input_data["service-name"] = "service-3"
20:57:12         self.path_computation_input_data["service-handler-header"]["request-id"] = "request-3"
20:57:12         self.path_computation_input_data["service-z-end"]["node-id"] = "XPONDER-4"
20:57:12         self.path_computation_input_data["hard-constraints"] =\
20:57:12             {"include": {"node-id": ["OpenROADM-2", "OpenROADM-3"]}}
20:57:12         response = test_utils.transportpce_api_rpc_request('transportpce-pce',
20:57:12                                                            'path-computation-request',
20:57:12                                                            self.path_computation_input_data)
20:57:12         self.assertEqual(response['status_code'], requests.codes.ok)
20:57:12 >       self.assertEqual(response['output']['configuration-response-common'][
20:57:12             'response-code'], '200')
20:57:12 E       AssertionError: '500' != '200'
20:57:12 E       - 500
20:57:12 E       ?  ^
20:57:12 E       + 200
20:57:12 E       ?  ^
20:57:12 
20:57:12 transportpce_tests/pce/test03_gnpy.py:196: AssertionError
20:57:12 =========================== short test summary info ============================
20:57:12 FAILED transportpce_tests/pce/test03_gnpy.py::TestTransportGnpy::test_00_load_port_mapping
20:57:12 FAILED transportpce_tests/pce/test03_gnpy.py::TestTransportGnpy::test_04_path_computation_FeasibleWithPCE
20:57:12 FAILED transportpce_tests/pce/test03_gnpy.py::TestTransportGnpy::test_05_path_computation_FoundByPCE_NotFeasibleByGnpy
20:57:12 FAILED transportpce_tests/pce/test03_gnpy.py::TestTransportGnpy::test_06_path_computation_FoundByPCE_FoundByGNPy
20:57:12 4 failed, 4 passed in 30.96s
20:57:12 testsPCE: exit 1 (198.25 seconds) /w/workspace/transportpce-tox-verify-transportpce-master/tests> ./launch_tests.sh pce pid=5087
20:57:13 testsPCE: FAIL ✖ in 4 minutes 25.14 seconds
20:57:13 tests200: install_deps> python -I -m pip install 'setuptools>=7.0' -r /w/workspace/transportpce-tox-verify-transportpce-master/tests/requirements.txt -r /w/workspace/transportpce-tox-verify-transportpce-master/tests/test-requirements.txt
20:57:13 tests_tapi: install_deps> python -I -m pip install 'setuptools>=7.0' -r /w/workspace/transportpce-tox-verify-transportpce-master/tests/requirements.txt -r /w/workspace/transportpce-tox-verify-transportpce-master/tests/test-requirements.txt
20:57:13 tests121: install_deps> python -I -m pip install 'setuptools>=7.0' -r /w/workspace/transportpce-tox-verify-transportpce-master/tests/requirements.txt -r /w/workspace/transportpce-tox-verify-transportpce-master/tests/test-requirements.txt
20:57:21 tests_tapi: freeze> python -m pip freeze --all
20:57:21 tests200: freeze> python -m pip freeze --all
20:57:21 tests_tapi: bcrypt==5.0.0,certifi==2026.2.25,cffi==2.0.0,charset-normalizer==3.4.4,cryptography==46.0.5,dict2xml==1.7.8,idna==3.11,iniconfig==2.3.0,invoke==2.2.1,lxml==6.0.2,netconf-client==3.5.0,packaging==26.0,paramiko==4.0.0,pip==26.0.1,pluggy==1.6.0,psutil==7.2.2,pycparser==3.0,Pygments==2.19.2,PyNaCl==1.6.2,pytest==9.0.2,requests==2.32.5,setuptools==82.0.0,urllib3==2.6.3
20:57:21 tests_tapi: commands[0] /w/workspace/transportpce-tox-verify-transportpce-master/tests> ./launch_tests.sh tapi
20:57:21 using environment variables from ./karaf221.env
20:57:21 pytest -q transportpce_tests/tapi/test01_abstracted_topology.py
20:57:21 tests121: freeze> python -m pip freeze --all
20:57:21 tests200: bcrypt==5.0.0,certifi==2026.2.25,cffi==2.0.0,charset-normalizer==3.4.4,cryptography==46.0.5,dict2xml==1.7.8,idna==3.11,iniconfig==2.3.0,invoke==2.2.1,lxml==6.0.2,netconf-client==3.5.0,packaging==26.0,paramiko==4.0.0,pip==26.0.1,pluggy==1.6.0,psutil==7.2.2,pycparser==3.0,Pygments==2.19.2,PyNaCl==1.6.2,pytest==9.0.2,requests==2.32.5,setuptools==82.0.0,urllib3==2.6.3
20:57:21 tests200: commands[0] /w/workspace/transportpce-tox-verify-transportpce-master/tests> ./launch_tests.sh oc200
20:57:21 using environment variables from ./karafoc200.env
20:57:21 pytest -q transportpce_tests/oc200/test01_portmapping.py
20:57:21 tests121: bcrypt==5.0.0,certifi==2026.2.25,cffi==2.0.0,charset-normalizer==3.4.4,cryptography==46.0.5,dict2xml==1.7.8,idna==3.11,iniconfig==2.3.0,invoke==2.2.1,lxml==6.0.2,netconf-client==3.5.0,packaging==26.0,paramiko==4.0.0,pip==26.0.1,pluggy==1.6.0,psutil==7.2.2,pycparser==3.0,Pygments==2.19.2,PyNaCl==1.6.2,pytest==9.0.2,requests==2.32.5,setuptools==82.0.0,urllib3==2.6.3
20:57:21 tests121: commands[0] /w/workspace/transportpce-tox-verify-transportpce-master/tests> ./launch_tests.sh 1.2.1
20:57:21 using environment variables from ./karaf121.env
20:57:22 pytest -q transportpce_tests/1.2.1/test01_portmapping.py
20:57:35 .......................................F.... [100%]
20:58:53 =================================== FAILURES ===================================
20:58:53 ___________ TestTransportPCEPortmapping.test_08_mpdr_switching_pool ____________
20:58:53 
20:58:53 self = 
20:58:53 
20:58:53     def test_08_mpdr_switching_pool(self):
20:58:53         response = test_utils.get_portmapping_node_attr("XPDR-OC", "switching-pool-lcp", "2")
20:58:53 >       self.assertEqual(response['status_code'], requests.codes.ok)
20:58:53 E       AssertionError: 409 != 200
20:58:53 
20:58:53 transportpce_tests/oc200/test01_portmapping.py:127: AssertionError
20:58:53 ----------------------------- Captured stdout call -----------------------------
20:58:53 execution of test_08_mpdr_switching_pool
20:58:53 =========================== short test summary info ============================
20:58:53 FAILED transportpce_tests/oc200/test01_portmapping.py::TestTransportPCEPortmapping::test_08_mpdr_switching_pool
20:58:53 1 failed, 9 passed in 91.32s (0:01:31)
20:58:53 tests200: exit 1 (91.77 seconds) /w/workspace/transportpce-tox-verify-transportpce-master/tests> ./launch_tests.sh oc200 pid=8463
20:58:54 ..................... [100%]
20:59:24 48 passed in 151.62s (0:02:31)
20:59:25 pytest -q transportpce_tests/7.1/test04_renderer_regen_mode.py
20:59:27 .................................................... [100%]
21:00:48 22 passed in 82.47s (0:01:22)
21:00:57 ...........FFFFFFFFFFFFFFFFFFFF [100%]
21:02:04 =================================== FAILURES ===================================
21:02:04 ___________ TestTransportPCEPortmapping.test_02_rdm_device_connected ___________
21:02:04 
21:02:04 self = 
21:02:04 
21:02:04     def _new_conn(self) -> socket.socket:
21:02:04         """Establish a socket connection and set nodelay settings on it.
21:02:04 
21:02:04         :return: New socket connection.
21:02:04         """
21:02:04         try:
21:02:04 >           sock = connection.create_connection(
21:02:04                 (self._dns_host, self.port),
21:02:04                 self.timeout,
21:02:04                 source_address=self.source_address,
21:02:04                 socket_options=self.socket_options,
21:02:04             )
21:02:04 
21:02:04 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:204: 
21:02:04 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
21:02:04 ../.tox/tests121/lib/python3.11/site-packages/urllib3/util/connection.py:85: in create_connection
21:02:04     raise err
21:02:04 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
21:02:04 
21:02:04 address = ('localhost', 8191), timeout = 30, source_address = None
21:02:04 socket_options = [(6, 1, 1)]
21:02:04 
21:02:04     def create_connection(
21:02:04         address: tuple[str, int],
21:02:04         timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT,
21:02:04         source_address: tuple[str, int] | None = None,
21:02:04         socket_options: _TYPE_SOCKET_OPTIONS | None = None,
21:02:04     ) -> socket.socket:
21:02:04         """Connect to *address* and return the socket object.
21:02:04 
21:02:04         Convenience function. Connect to *address* (a 2-tuple ``(host,
21:02:04         port)``) and return the socket object. Passing the optional
21:02:04         *timeout* parameter will set the timeout on the socket instance
21:02:04         before attempting to connect. If no *timeout* is supplied, the
21:02:04         global default timeout setting returned by :func:`socket.getdefaulttimeout`
21:02:04         is used. If *source_address* is set it must be a tuple of (host, port)
21:02:04         for the socket to bind as a source address before making the connection.
21:02:04         An host of '' or port 0 tells the OS to use the default.
21:02:04         """
21:02:04 
21:02:04         host, port = address
21:02:04         if host.startswith("["):
21:02:04             host = host.strip("[]")
21:02:04         err = None
21:02:04 
21:02:04         # Using the value from allowed_gai_family() in the context of getaddrinfo lets
21:02:04         # us select whether to work with IPv4 DNS records, IPv6 records, or both.
21:02:04         # The original create_connection function always returns all records.
21:02:04         family = allowed_gai_family()
21:02:04 
21:02:04         try:
21:02:04             host.encode("idna")
21:02:04         except UnicodeError:
21:02:04             raise LocationParseError(f"'{host}', label empty or too long") from None
21:02:04 
21:02:04         for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM):
21:02:04             af, socktype, proto, canonname, sa = res
21:02:04             sock = None
21:02:04             try:
21:02:04                 sock = socket.socket(af, socktype, proto)
21:02:04 
21:02:04                 # If provided, set socket level options before connecting.
21:02:04                 _set_socket_options(sock, socket_options)
21:02:04 
21:02:04                 if timeout is not _DEFAULT_TIMEOUT:
21:02:04                     sock.settimeout(timeout)
21:02:04                 if source_address:
21:02:04                     sock.bind(source_address)
21:02:04 >               sock.connect(sa)
21:02:04 E               ConnectionRefusedError: [Errno 111] Connection refused
21:02:04 
21:02:04 ../.tox/tests121/lib/python3.11/site-packages/urllib3/util/connection.py:73: ConnectionRefusedError
21:02:04 
21:02:04 The above exception was the direct cause of the following exception:
21:02:04 
21:02:04 self = 
21:02:04 method = 'GET'
21:02:04 url = '/rests/data/network-topology:network-topology/topology=topology-netconf/node=ROADMA01?content=nonconfig'
21:02:04 body = None
21:02:04 headers = {'User-Agent': 'python-requests/2.32.5', 'Accept-Encoding': 'gzip, deflate', 'Accept': 'application/json', 'Connection': 'keep-alive', 'Content-Type': 'application/json', 'Authorization': 'Basic YWRtaW46YWRtaW4='}
21:02:04 retries = Retry(total=0, connect=None, read=False, redirect=None, status=None)
21:02:04 redirect = False, assert_same_host = False
21:02:04 timeout = Timeout(connect=30, read=30, total=None), pool_timeout = None
21:02:04 release_conn = False, chunked = False, body_pos = None, preload_content = False
21:02:04 decode_content = False, response_kw = {}
21:02:04 parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/rests/data/network-topology:network-topology/topology=topology-netconf/node=ROADMA01', query='content=nonconfig', fragment=None)
21:02:04 destination_scheme = None, conn = None, release_this_conn = True
21:02:04 http_tunnel_required = False, err = None, clean_exit = False
21:02:04 
21:02:04     def urlopen(  # type: ignore[override]
21:02:04         self,
21:02:04         method: str,
21:02:04         url: str,
21:02:04         body: _TYPE_BODY | None = None,
21:02:04         headers: typing.Mapping[str, str] | None = None,
21:02:04         retries: Retry | bool | int | None = None,
21:02:04         redirect: bool = True,
21:02:04         assert_same_host: bool = True,
21:02:04         timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT,
21:02:04         pool_timeout: int | None = None,
21:02:04         release_conn: bool | None = None,
21:02:04         chunked: bool = False,
21:02:04         body_pos: _TYPE_BODY_POSITION | None = None,
21:02:04         preload_content: bool = True,
21:02:04         decode_content: bool = True,
21:02:04         **response_kw: typing.Any,
21:02:04     ) -> BaseHTTPResponse:
21:02:04         """
21:02:04         Get a connection from the pool and perform an HTTP request. This is the
21:02:04         lowest level call for making a request, so you'll need to specify all
21:02:04         the raw details.
21:02:04 
21:02:04         .. note::
21:02:04 
21:02:04            More commonly, it's appropriate to use a convenience method
21:02:04            such as :meth:`request`.
21:02:04 
21:02:04         .. note::
21:02:04 
21:02:04            `release_conn` will only behave as expected if
21:02:04            `preload_content=False` because we want to make
21:02:04            `preload_content=False` the default behaviour someday soon without
21:02:04            breaking backwards compatibility.
21:02:04 
21:02:04         :param method:
21:02:04             HTTP request method (such as GET, POST, PUT, etc.)
21:02:04 
21:02:04         :param url:
21:02:04             The URL to perform the request on.
21:02:04 
21:02:04         :param body:
21:02:04             Data to send in the request body, either :class:`str`, :class:`bytes`,
21:02:04             an iterable of :class:`str`/:class:`bytes`, or a file-like object.
21:02:04 
21:02:04         :param headers:
21:02:04             Dictionary of custom headers to send, such as User-Agent,
21:02:04             If-None-Match, etc. If None, pool headers are used. If provided,
21:02:04             these headers completely replace any pool-specific headers.
21:02:04 
21:02:04         :param retries:
21:02:04             Configure the number of retries to allow before raising a
21:02:04             :class:`~urllib3.exceptions.MaxRetryError` exception.
21:02:04 
21:02:04             If ``None`` (default) will retry 3 times, see ``Retry.DEFAULT``. Pass a
21:02:04             :class:`~urllib3.util.retry.Retry` object for fine-grained control
21:02:04             over different types of retries.
21:02:04             Pass an integer number to retry connection errors that many times,
21:02:04             but no other types of errors. Pass zero to never retry.
21:02:04 
21:02:04             If ``False``, then retries are disabled and any exception is raised
21:02:04             immediately. Also, instead of raising a MaxRetryError on redirects,
21:02:04             the redirect response will be returned.
21:02:04 
21:02:04         :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int.
21:02:04 
21:02:04         :param redirect:
21:02:04             If True, automatically handle redirects (status codes 301, 302,
21:02:04             303, 307, 308). Each redirect counts as a retry. Disabling retries
21:02:04             will disable redirect, too.
21:02:04 
21:02:04         :param assert_same_host:
21:02:04             If ``True``, will make sure that the host of the pool requests is
21:02:04             consistent else will raise HostChangedError. When ``False``, you can
21:02:04             use the pool on an HTTP proxy and request foreign hosts.
21:02:04 
21:02:04         :param timeout:
21:02:04             If specified, overrides the default timeout for this one
21:02:04             request. It may be a float (in seconds) or an instance of
21:02:04             :class:`urllib3.util.Timeout`.
21:02:04 
21:02:04         :param pool_timeout:
21:02:04             If set and the pool is set to block=True, then this method will
21:02:04             block for ``pool_timeout`` seconds and raise EmptyPoolError if no
21:02:04             connection is available within the time period.
21:02:04 
21:02:04         :param bool preload_content:
21:02:04             If True, the response's body will be preloaded into memory.
21:02:04 
21:02:04         :param bool decode_content:
21:02:04             If True, will attempt to decode the body based on the
21:02:04             'content-encoding' header.
21:02:04 
21:02:04         :param release_conn:
21:02:04             If False, then the urlopen call will not release the connection
21:02:04             back into the pool once a response is received (but will release if
21:02:04             you read the entire contents of the response such as when
21:02:04             `preload_content=True`). This is useful if you're not preloading
21:02:04             the response's content immediately. You will need to call
21:02:04             ``r.release_conn()`` on the response ``r`` to return the connection
21:02:04             back into the pool. If None, it takes the value of ``preload_content``
21:02:04             which defaults to ``True``.
21:02:04 
21:02:04         :param bool chunked:
21:02:04             If True, urllib3 will send the body using chunked transfer
21:02:04             encoding. Otherwise, urllib3 will send the body using the standard
21:02:04             content-length form. Defaults to False.
21:02:04 
21:02:04         :param int body_pos:
21:02:04             Position to seek to in file-like body in the event of a retry or
21:02:04             redirect. Typically this won't need to be set because urllib3 will
21:02:04             auto-populate the value when needed.
21:02:04         """
21:02:04         parsed_url = parse_url(url)
21:02:04         destination_scheme = parsed_url.scheme
21:02:04 
21:02:04         if headers is None:
21:02:04             headers = self.headers
21:02:04 
21:02:04         if not isinstance(retries, Retry):
21:02:04             retries = Retry.from_int(retries, redirect=redirect, default=self.retries)
21:02:04 
21:02:04         if release_conn is None:
21:02:04             release_conn = preload_content
21:02:04 
21:02:04         # Check host
21:02:04         if assert_same_host and not self.is_same_host(url):
21:02:04             raise HostChangedError(self, url, retries)
21:02:04 
21:02:04         # Ensure that the URL we're connecting to is properly encoded
21:02:04         if url.startswith("/"):
21:02:04             url = to_str(_encode_target(url))
21:02:04         else:
21:02:04             url = to_str(parsed_url.url)
21:02:04 
21:02:04         conn = None
21:02:04 
21:02:04         # Track whether `conn` needs to be released before
21:02:04         # returning/raising/recursing. Update this variable if necessary, and
21:02:04         # leave `release_conn` constant throughout the function. That way, if
21:02:04         # the function recurses, the original value of `release_conn` will be
21:02:04         # passed down into the recursive call, and its value will be respected.
21:02:04         #
21:02:04         # See issue #651 [1] for details.
21:02:04         #
21:02:04         # [1] 
21:02:04         release_this_conn = release_conn
21:02:04 
21:02:04         http_tunnel_required = connection_requires_http_tunnel(
21:02:04             self.proxy, self.proxy_config, destination_scheme
21:02:04         )
21:02:04 
21:02:04         # Merge the proxy headers. Only done when not using HTTP CONNECT. We
21:02:04         # have to copy the headers dict so we can safely change it without those
21:02:04         # changes being reflected in anyone else's copy.
21:02:04         if not http_tunnel_required:
21:02:04             headers = headers.copy()  # type: ignore[attr-defined]
21:02:04             headers.update(self.proxy_headers)  # type: ignore[union-attr]
21:02:04 
21:02:04         # Must keep the exception bound to a separate variable or else Python 3
21:02:04         # complains about UnboundLocalError.
21:02:04         err = None
21:02:04 
21:02:04         # Keep track of whether we cleanly exited the except block. This
21:02:04         # ensures we do proper cleanup in finally.
21:02:04         clean_exit = False
21:02:04 
21:02:04         # Rewind body position, if needed. Record current position
21:02:04         # for future rewinds in the event of a redirect/retry.
21:02:04         body_pos = set_file_position(body, body_pos)
21:02:04 
21:02:04         try:
21:02:04             # Request a connection from the queue.
21:02:04             timeout_obj = self._get_timeout(timeout)
21:02:04             conn = self._get_conn(timeout=pool_timeout)
21:02:04 
21:02:04             conn.timeout = timeout_obj.connect_timeout  # type: ignore[assignment]
21:02:04 
21:02:04             # Is this a closed/new connection that requires CONNECT tunnelling?
21:02:04             if self.proxy is not None and http_tunnel_required and conn.is_closed:
21:02:04                 try:
21:02:04                     self._prepare_proxy(conn)
21:02:04                 except (BaseSSLError, OSError, SocketTimeout) as e:
21:02:04                     self._raise_timeout(
21:02:04                         err=e, url=self.proxy.url, timeout_value=conn.timeout
21:02:04                     )
21:02:04                     raise
21:02:04 
21:02:04             # If we're going to release the connection in ``finally:``, then
21:02:04             # the response doesn't need to know about the connection. Otherwise
21:02:04             # it will also try to release it and we'll have a double-release
21:02:04             # mess.
21:02:04             response_conn = conn if not release_conn else None
21:02:04 
21:02:04             # Make the request on the HTTPConnection object
21:02:04 >           response = self._make_request(
21:02:04                 conn,
21:02:04                 method,
21:02:04                 url,
21:02:04                 timeout=timeout_obj,
21:02:04                 body=body,
21:02:04                 headers=headers,
21:02:04                 chunked=chunked,
21:02:04                 retries=retries,
21:02:04                 response_conn=response_conn,
21:02:04                 preload_content=preload_content,
21:02:04                 decode_content=decode_content,
21:02:04                 **response_kw,
21:02:04             )
21:02:04 
21:02:04 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connectionpool.py:787: 
21:02:04 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
21:02:04 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connectionpool.py:493: in _make_request
21:02:04     conn.request(
21:02:04 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:500: in request
21:02:04     self.endheaders()
21:02:04 /opt/pyenv/versions/3.11.10/lib/python3.11/http/client.py:1298: in endheaders
21:02:04     self._send_output(message_body, encode_chunked=encode_chunked)
21:02:04 /opt/pyenv/versions/3.11.10/lib/python3.11/http/client.py:1058: in _send_output
21:02:04     self.send(msg)
21:02:04 /opt/pyenv/versions/3.11.10/lib/python3.11/http/client.py:996: in send
21:02:04     self.connect()
21:02:04 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:331: in connect
21:02:04     self.sock = self._new_conn()
21:02:04                 ^^^^^^^^^^^^^^^^
21:02:04 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
21:02:04 
21:02:04 self = 
21:02:04 
21:02:04     def _new_conn(self) -> socket.socket:
21:02:04         """Establish a socket connection and set nodelay settings on it.
21:02:04 
21:02:04         :return: New socket connection.
21:02:04         """
21:02:04         try:
21:02:04             sock = connection.create_connection(
21:02:04                 (self._dns_host, self.port),
21:02:04                 self.timeout,
21:02:04                 source_address=self.source_address,
21:02:04                 socket_options=self.socket_options,
21:02:04             )
21:02:04         except socket.gaierror as e:
21:02:04             raise NameResolutionError(self.host, self, e) from e
21:02:04         except SocketTimeout as e:
21:02:04             raise ConnectTimeoutError(
21:02:04                 self,
21:02:04                 f"Connection to {self.host} timed out. (connect timeout={self.timeout})",
21:02:04             ) from e
21:02:04 
21:02:04         except OSError as e:
21:02:04 >           raise NewConnectionError(
21:02:04                 self, f"Failed to establish a new connection: {e}"
21:02:04             ) from e
21:02:04 E           urllib3.exceptions.NewConnectionError: HTTPConnection(host='localhost', port=8191): Failed to establish a new connection: [Errno 111] Connection refused
21:02:04 
21:02:04 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:219: NewConnectionError
21:02:04 
21:02:04 The above exception was the direct cause of the following exception:
21:02:04 
21:02:04 self = 
21:02:04 request = , stream = False
21:02:04 timeout = Timeout(connect=30, read=30, total=None), verify = True, cert = None
21:02:04 proxies = OrderedDict()
21:02:04 
21:02:04     def send(
21:02:04         self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None
21:02:04     ):
21:02:04         """Sends PreparedRequest object. Returns Response object.
21:02:04 
21:02:04         :param request: The :class:`PreparedRequest ` being sent.
21:02:04         :param stream: (optional) Whether to stream the request content.
21:02:04         :param timeout: (optional) How long to wait for the server to send
21:02:04             data before giving up, as a float, or a :ref:`(connect timeout,
21:02:04             read timeout) ` tuple.
21:02:04         :type timeout: float or tuple or urllib3 Timeout object
21:02:04         :param verify: (optional) Either a boolean, in which case it controls whether
21:02:04             we verify the server's TLS certificate, or a string, in which case it
21:02:04             must be a path to a CA bundle to use
21:02:04         :param cert: (optional) Any user-provided SSL certificate to be trusted.
21:02:04         :param proxies: (optional) The proxies dictionary to apply to the request.
21:02:04         :rtype: requests.Response
21:02:04         """
21:02:04 
21:02:04         try:
21:02:04             conn = self.get_connection_with_tls_context(
21:02:04                 request, verify, proxies=proxies, cert=cert
21:02:04             )
21:02:04         except LocationValueError as e:
21:02:04             raise InvalidURL(e, request=request)
21:02:04 
21:02:04         self.cert_verify(conn, request.url, verify, cert)
21:02:04         url = self.request_url(request, proxies)
21:02:04         self.add_headers(
21:02:04             request,
21:02:04             stream=stream,
21:02:04             timeout=timeout,
21:02:04             verify=verify,
21:02:04             cert=cert,
21:02:04             proxies=proxies,
21:02:04         )
21:02:04 
21:02:04         chunked = not (request.body is None or "Content-Length" in request.headers)
21:02:04 
21:02:04         if isinstance(timeout, tuple):
21:02:04             try:
21:02:04                 connect, read = timeout
21:02:04                 timeout = TimeoutSauce(connect=connect, read=read)
21:02:04             except ValueError:
21:02:04                 raise ValueError(
21:02:04                     f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, "
21:02:04                     f"or a single float to set both timeouts to the same value."
21:02:04                 )
21:02:04         elif isinstance(timeout, TimeoutSauce):
21:02:04             pass
21:02:04         else:
21:02:04             timeout = TimeoutSauce(connect=timeout, read=timeout)
21:02:04 
21:02:04         try:
21:02:04 >           resp = conn.urlopen(
21:02:04                 method=request.method,
21:02:04                 url=url,
21:02:04                 body=request.body,
21:02:04                 headers=request.headers,
21:02:04                 redirect=False,
21:02:04                 assert_same_host=False,
21:02:04                 preload_content=False,
21:02:04                 decode_content=False,
21:02:04                 retries=self.max_retries,
21:02:04                 timeout=timeout,
21:02:04                 chunked=chunked,
21:02:04             )
21:02:04 
21:02:04 ../.tox/tests121/lib/python3.11/site-packages/requests/adapters.py:644: 
21:02:04 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
21:02:04 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connectionpool.py:841: in urlopen
21:02:04     retries = retries.increment(
21:02:04 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
21:02:04 
21:02:04 self = Retry(total=0, connect=None, read=False, redirect=None, status=None)
21:02:04 method = 'GET'
21:02:04 url = '/rests/data/network-topology:network-topology/topology=topology-netconf/node=ROADMA01?content=nonconfig'
21:02:04 response = None
21:02:04 error = NewConnectionError("HTTPConnection(host='localhost', port=8191): Failed to establish a new connection: [Errno 111] Connection refused")
21:02:04 _pool = 
21:02:04 _stacktrace = 
21:02:04 
21:02:04     def increment(
21:02:04         self,
21:02:04         method: str | None = None,
21:02:04         url: str | None = None,
21:02:04         response: BaseHTTPResponse | None = None,
21:02:04         error: Exception | None = None,
21:02:04         _pool: ConnectionPool | None = None,
21:02:04         _stacktrace: TracebackType | None = None,
21:02:04     ) -> Self:
21:02:04         """Return a new Retry object with incremented retry counters.
21:02:04 
21:02:04         :param response: A response object, or None, if the server did not
21:02:04             return a response.
21:02:04         :type response: :class:`~urllib3.response.BaseHTTPResponse`
21:02:04         :param Exception error: An error encountered during the request, or
21:02:04             None if the response was received successfully.
21:02:04 
21:02:04         :return: A new ``Retry`` object.
21:02:04         """
21:02:04         if self.total is False and error:
21:02:04             # Disabled, indicate to re-raise the error.
21:02:04             raise reraise(type(error), error, _stacktrace)
21:02:04 
21:02:04         total = self.total
21:02:04         if total is not None:
21:02:04             total -= 1
21:02:04 
21:02:04         connect = self.connect
21:02:04         read = self.read
21:02:04         redirect = self.redirect
21:02:04         status_count = self.status
21:02:04         other = self.other
21:02:04         cause = "unknown"
21:02:04         status = None
21:02:04         redirect_location = None
21:02:04 
21:02:04         if error and self._is_connection_error(error):
21:02:04             # Connect retry?
21:02:04             if connect is False:
21:02:04                 raise reraise(type(error), error, _stacktrace)
21:02:04             elif connect is not None:
21:02:04                 connect -= 1
21:02:04 
21:02:04         elif error and self._is_read_error(error):
21:02:04             # Read retry?
21:02:04             if read is False or method is None or not self._is_method_retryable(method):
21:02:04                 raise reraise(type(error), error, _stacktrace)
21:02:04             elif read is not None:
21:02:04                 read -= 1
21:02:04 
21:02:04         elif error:
21:02:04             # Other retry?
21:02:04             if other is not None:
21:02:04                 other -= 1
21:02:04 
21:02:04         elif response and response.get_redirect_location():
21:02:04             # Redirect retry?
21:02:04             if redirect is not None:
21:02:04                 redirect -= 1
21:02:04             cause = "too many redirects"
21:02:04             response_redirect_location = response.get_redirect_location()
21:02:04             if response_redirect_location:
21:02:04                 redirect_location = response_redirect_location
21:02:04             status = response.status
21:02:04 
21:02:04         else:
21:02:04             # Incrementing because of a server error like a 500 in
21:02:04             # status_forcelist and the given method is in the allowed_methods
21:02:04             cause = ResponseError.GENERIC_ERROR
21:02:04             if response and response.status:
21:02:04                 if status_count is not None:
21:02:04                     status_count -= 1
21:02:04                 cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status)
21:02:04                 status = response.status
21:02:04 
21:02:04         history = self.history + (
21:02:04             RequestHistory(method, url, error, status, redirect_location),
21:02:04         )
21:02:04 
21:02:04         new_retry = self.new(
21:02:04             total=total,
21:02:04             connect=connect,
21:02:04             read=read,
21:02:04             redirect=redirect,
21:02:04             status=status_count,
21:02:04             other=other,
21:02:04             history=history,
21:02:04         )
21:02:04 
21:02:04         if new_retry.is_exhausted():
21:02:04             reason = error or ResponseError(cause)
21:02:04 >           raise MaxRetryError(_pool, url, reason) from reason  # type: ignore[arg-type]
21:02:04             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
21:02:04 E           urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=8191): Max retries exceeded with url: /rests/data/network-topology:network-topology/topology=topology-netconf/node=ROADMA01?content=nonconfig (Caused by NewConnectionError("HTTPConnection(host='localhost', port=8191): Failed to establish a new connection: [Errno 111] Connection refused"))
21:02:04 
21:02:04 ../.tox/tests121/lib/python3.11/site-packages/urllib3/util/retry.py:535: MaxRetryError
21:02:04 
21:02:04 During handling of the above exception, another exception occurred:
21:02:04 
21:02:04 self = 
21:02:04 
21:02:04     def test_02_rdm_device_connected(self):
21:02:04 >       response = test_utils.check_device_connection("ROADMA01")
21:02:04                    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
21:02:04 
21:02:04 transportpce_tests/1.2.1/test01_portmapping.py:54: 
21:02:04 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
21:02:04 transportpce_tests/common/test_utils.py:409: in check_device_connection
21:02:04     response = get_request(url[RESTCONF_VERSION].format('{}', node))
21:02:04                ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
21:02:04 transportpce_tests/common/test_utils.py:117: in get_request
21:02:04     return requests.request(
21:02:04 ../.tox/tests121/lib/python3.11/site-packages/requests/api.py:59: in request
21:02:04     return session.request(method=method, url=url, **kwargs)
21:02:04            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
21:02:04 ../.tox/tests121/lib/python3.11/site-packages/requests/sessions.py:589: in request
21:02:04     resp = self.send(prep, **send_kwargs)
21:02:04            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
21:02:04 ../.tox/tests121/lib/python3.11/site-packages/requests/sessions.py:703: in send
21:02:04     r = adapter.send(request, **kwargs)
21:02:04         ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
21:02:04 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
21:02:04 
21:02:04 self = 
21:02:04 request = , stream = False
21:02:04 timeout = Timeout(connect=30, read=30, total=None), verify = True, cert = None
21:02:04 proxies = OrderedDict()
21:02:04 
21:02:04     def send(
21:02:04         self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None
21:02:04     ):
21:02:04         """Sends PreparedRequest object. Returns Response object.
21:02:04 
21:02:04         :param request: The :class:`PreparedRequest ` being sent.
21:02:04         :param stream: (optional) Whether to stream the request content.
21:02:04         :param timeout: (optional) How long to wait for the server to send
21:02:04             data before giving up, as a float, or a :ref:`(connect timeout,
21:02:04             read timeout) ` tuple.
21:02:04         :type timeout: float or tuple or urllib3 Timeout object
21:02:04         :param verify: (optional) Either a boolean, in which case it controls whether
21:02:04             we verify the server's TLS certificate, or a string, in which case it
21:02:04             must be a path to a CA bundle to use
21:02:04         :param cert: (optional) Any user-provided SSL certificate to be trusted.
21:02:04         :param proxies: (optional) The proxies dictionary to apply to the request.
21:02:04         :rtype: requests.Response
21:02:04         """
21:02:04 
21:02:04         try:
21:02:04             conn = self.get_connection_with_tls_context(
21:02:04                 request, verify, proxies=proxies, cert=cert
21:02:04             )
21:02:04         except LocationValueError as e:
21:02:04             raise InvalidURL(e, request=request)
21:02:04 
21:02:04         self.cert_verify(conn, request.url, verify, cert)
21:02:04         url = self.request_url(request, proxies)
21:02:04         self.add_headers(
21:02:04             request,
21:02:04             stream=stream,
21:02:04             timeout=timeout,
21:02:04             verify=verify,
21:02:04             cert=cert,
21:02:04             proxies=proxies,
21:02:04         )
21:02:04 
21:02:04         chunked = not (request.body is None or "Content-Length" in request.headers)
21:02:04 
21:02:04         if isinstance(timeout, tuple):
21:02:04             try:
21:02:04                 connect, read = timeout
21:02:04                 timeout = TimeoutSauce(connect=connect, read=read)
21:02:04             except ValueError:
21:02:04                 raise ValueError(
21:02:04                     f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, "
21:02:04                     f"or a single float to set both timeouts to the same value."
21:02:04                 )
21:02:04         elif isinstance(timeout, TimeoutSauce):
21:02:04             pass
21:02:04         else:
21:02:04             timeout = TimeoutSauce(connect=timeout, read=timeout)
21:02:04 
21:02:04         try:
21:02:04             resp = conn.urlopen(
21:02:04                 method=request.method,
21:02:04                 url=url,
21:02:04                 body=request.body,
21:02:04                 headers=request.headers,
21:02:04                 redirect=False,
21:02:04                 assert_same_host=False,
21:02:04                 preload_content=False,
21:02:04                 decode_content=False,
21:02:04                 retries=self.max_retries,
21:02:04                 timeout=timeout,
21:02:04                 chunked=chunked,
21:02:04             )
21:02:04 
21:02:04         except (ProtocolError, OSError) as err:
21:02:04             raise ConnectionError(err, request=request)
21:02:04 
21:02:04         except MaxRetryError as e:
21:02:04             if isinstance(e.reason, ConnectTimeoutError):
21:02:04                 # TODO: Remove this in 3.0.0: see #2811
21:02:04                 if not isinstance(e.reason, NewConnectionError):
21:02:04                     raise ConnectTimeout(e, request=request)
21:02:04 
21:02:04             if isinstance(e.reason, ResponseError):
21:02:04                 raise RetryError(e, request=request)
21:02:04 
21:02:04             if isinstance(e.reason, _ProxyError):
21:02:04                 raise ProxyError(e, request=request)
21:02:04 
21:02:04             if isinstance(e.reason, _SSLError):
21:02:04                 # This branch is for urllib3 v1.22 and later.
21:02:04                 raise SSLError(e, request=request)
21:02:04 
21:02:04 >       raise ConnectionError(e, request=request)
21:02:04 E       requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=8191): Max retries exceeded with url: /rests/data/network-topology:network-topology/topology=topology-netconf/node=ROADMA01?content=nonconfig (Caused by NewConnectionError("HTTPConnection(host='localhost', port=8191): Failed to establish a new connection: [Errno 111] Connection refused"))
21:02:04 
21:02:04 ../.tox/tests121/lib/python3.11/site-packages/requests/adapters.py:677: ConnectionError
21:02:04 ----------------------------- Captured stdout call -----------------------------
21:02:04 execution of test_02_rdm_device_connected
21:02:04 ___________ TestTransportPCEPortmapping.test_03_rdm_portmapping_info ___________
21:02:04 
21:02:04 self = 
21:02:04 
21:02:04     def _new_conn(self) -> socket.socket:
21:02:04         """Establish a socket connection and set nodelay settings on it.
21:02:04 
21:02:04         :return: New socket connection.
21:02:04         """
21:02:04         try:
21:02:04 >           sock = connection.create_connection(
21:02:04                 (self._dns_host, self.port),
21:02:04                 self.timeout,
21:02:04                 source_address=self.source_address,
21:02:04                 socket_options=self.socket_options,
21:02:04             )
21:02:04 
21:02:04 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:204: 
21:02:04 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
21:02:04 ../.tox/tests121/lib/python3.11/site-packages/urllib3/util/connection.py:85: in create_connection
21:02:04     raise err
21:02:04 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
21:02:04 
21:02:04 address = ('localhost', 8191), timeout = 30, source_address = None
21:02:04 socket_options = [(6, 1, 1)]
21:02:04 
21:02:04 def create_connection(
21:02:04     address: tuple[str, int],
21:02:04     timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT,
21:02:04     source_address: tuple[str, int] | None = None,
21:02:04     socket_options: _TYPE_SOCKET_OPTIONS | None = None,
21:02:04 ) -> socket.socket:
21:02:04     """Connect to *address* and return the socket object.
21:02:04 
21:02:04     Convenience function. Connect to *address* (a 2-tuple ``(host,
21:02:04     port)``) and return the socket object. Passing the optional
21:02:04     *timeout* parameter will set the timeout on the socket instance
21:02:04     before attempting to connect. If no *timeout* is supplied, the
21:02:04     global default timeout setting returned by :func:`socket.getdefaulttimeout`
21:02:04     is used. If *source_address* is set it must be a tuple of (host, port)
21:02:04     for the socket to bind as a source address before making the connection.
21:02:04     An host of '' or port 0 tells the OS to use the default.
21:02:04     """
21:02:04 
21:02:04     host, port = address
21:02:04     if host.startswith("["):
21:02:04         host = host.strip("[]")
21:02:04     err = None
21:02:04 
21:02:04     # Using the value from allowed_gai_family() in the context of getaddrinfo lets
21:02:04     # us select whether to work with IPv4 DNS records, IPv6 records, or both.
21:02:04     # The original create_connection function always returns all records.
21:02:04     family = allowed_gai_family()
21:02:04 
21:02:04     try:
21:02:04         host.encode("idna")
21:02:04     except UnicodeError:
21:02:04         raise LocationParseError(f"'{host}', label empty or too long") from None
21:02:04 
21:02:04     for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM):
21:02:04         af, socktype, proto, canonname, sa = res
21:02:04         sock = None
21:02:04         try:
21:02:04             sock = socket.socket(af, socktype, proto)
21:02:04 
21:02:04             # If provided, set socket level options before connecting.
21:02:04             _set_socket_options(sock, socket_options)
21:02:04 
21:02:04             if timeout is not _DEFAULT_TIMEOUT:
21:02:04                 sock.settimeout(timeout)
21:02:04             if source_address:
21:02:04                 sock.bind(source_address)
21:02:04 >           sock.connect(sa)
21:02:04 E           ConnectionRefusedError: [Errno 111] Connection refused
21:02:04 
21:02:04 ../.tox/tests121/lib/python3.11/site-packages/urllib3/util/connection.py:73: ConnectionRefusedError
21:02:04 
21:02:04 The above exception was the direct cause of the following exception:
21:02:04 
21:02:04 self = 
21:02:04 method = 'GET'
21:02:04 url = '/rests/data/transportpce-portmapping:network/nodes=ROADMA01/node-info'
21:02:04 body = None
21:02:04 headers = {'User-Agent': 'python-requests/2.32.5', 'Accept-Encoding': 'gzip, deflate', 'Accept': 'application/json', 'Connection': 'keep-alive', 'Content-Type': 'application/json', 'Authorization': 'Basic YWRtaW46YWRtaW4='}
21:02:04 retries = Retry(total=0, connect=None, read=False, redirect=None, status=None)
21:02:04 redirect = False, assert_same_host = False
21:02:04 timeout = Timeout(connect=30, read=30, total=None), pool_timeout = None
21:02:04 release_conn = False, chunked = False, body_pos = None, preload_content = False
21:02:04 decode_content = False, response_kw = {}
21:02:04 parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/rests/data/transportpce-portmapping:network/nodes=ROADMA01/node-info', query=None, fragment=None)
21:02:04 destination_scheme = None, conn = None, release_this_conn = True
21:02:04 http_tunnel_required = False, err = None, clean_exit = False
21:02:04 
21:02:04     def urlopen(  # type: ignore[override]
21:02:04         self,
21:02:04         method: str,
21:02:04         url: str,
21:02:04         body: _TYPE_BODY | None = None,
21:02:04         headers: typing.Mapping[str, str] | None = None,
21:02:04         retries: Retry | bool | int | None = None,
21:02:04         redirect: bool = True,
21:02:04         assert_same_host: bool = True,
21:02:04         timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT,
21:02:04         pool_timeout: int | None = None,
21:02:04         release_conn: bool | None = None,
21:02:04         chunked: bool = False,
21:02:04         body_pos: _TYPE_BODY_POSITION | None = None,
21:02:04         preload_content: bool = True,
21:02:04         decode_content: bool = True,
21:02:04         **response_kw: typing.Any,
21:02:04     ) -> BaseHTTPResponse:
21:02:04         """
21:02:04         Get a connection from the pool and perform an HTTP request. This is the
21:02:04         lowest level call for making a request, so you'll need to specify all
21:02:04         the raw details.
21:02:04 
21:02:04         .. note::
21:02:04 
21:02:04            More commonly, it's appropriate to use a convenience method
21:02:04            such as :meth:`request`.
21:02:04 
21:02:04         .. note::
21:02:04 
21:02:04            `release_conn` will only behave as expected if
21:02:04            `preload_content=False` because we want to make
21:02:04            `preload_content=False` the default behaviour someday soon without
21:02:04            breaking backwards compatibility.
21:02:04 
21:02:04         :param method:
21:02:04             HTTP request method (such as GET, POST, PUT, etc.)
21:02:04 
21:02:04         :param url:
21:02:04             The URL to perform the request on.
21:02:04 
21:02:04         :param body:
21:02:04             Data to send in the request body, either :class:`str`, :class:`bytes`,
21:02:04             an iterable of :class:`str`/:class:`bytes`, or a file-like object.
21:02:04 
21:02:04         :param headers:
21:02:04             Dictionary of custom headers to send, such as User-Agent,
21:02:04             If-None-Match, etc. If None, pool headers are used. If provided,
21:02:04             these headers completely replace any pool-specific headers.
21:02:04 
21:02:04         :param retries:
21:02:04             Configure the number of retries to allow before raising a
21:02:04             :class:`~urllib3.exceptions.MaxRetryError` exception.
21:02:04 
21:02:04             If ``None`` (default) will retry 3 times, see ``Retry.DEFAULT``. Pass a
21:02:04             :class:`~urllib3.util.retry.Retry` object for fine-grained control
21:02:04             over different types of retries.
21:02:04             Pass an integer number to retry connection errors that many times,
21:02:04             but no other types of errors. Pass zero to never retry.
21:02:04 
21:02:04             If ``False``, then retries are disabled and any exception is raised
21:02:04             immediately. Also, instead of raising a MaxRetryError on redirects,
21:02:04             the redirect response will be returned.
21:02:04 
21:02:04         :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int.
21:02:04 
21:02:04         :param redirect:
21:02:04             If True, automatically handle redirects (status codes 301, 302,
21:02:04             303, 307, 308). Each redirect counts as a retry. Disabling retries
21:02:04             will disable redirect, too.
21:02:04 
21:02:04         :param assert_same_host:
21:02:04             If ``True``, will make sure that the host of the pool requests is
21:02:04             consistent else will raise HostChangedError. When ``False``, you can
21:02:04             use the pool on an HTTP proxy and request foreign hosts.
21:02:04 
21:02:04         :param timeout:
21:02:04             If specified, overrides the default timeout for this one
21:02:04             request. It may be a float (in seconds) or an instance of
21:02:04             :class:`urllib3.util.Timeout`.
21:02:04 
21:02:04         :param pool_timeout:
21:02:04             If set and the pool is set to block=True, then this method will
21:02:04             block for ``pool_timeout`` seconds and raise EmptyPoolError if no
21:02:04             connection is available within the time period.
21:02:04 
21:02:04         :param bool preload_content:
21:02:04             If True, the response's body will be preloaded into memory.
21:02:04 
21:02:04         :param bool decode_content:
21:02:04             If True, will attempt to decode the body based on the
21:02:04             'content-encoding' header.
21:02:04 
21:02:04         :param release_conn:
21:02:04             If False, then the urlopen call will not release the connection
21:02:04             back into the pool once a response is received (but will release if
21:02:04             you read the entire contents of the response such as when
21:02:04             `preload_content=True`). This is useful if you're not preloading
21:02:04             the response's content immediately. You will need to call
21:02:04             ``r.release_conn()`` on the response ``r`` to return the connection
21:02:04             back into the pool. If None, it takes the value of ``preload_content``
21:02:04             which defaults to ``True``.
21:02:04 
21:02:04         :param bool chunked:
21:02:04             If True, urllib3 will send the body using chunked transfer
21:02:04             encoding. Otherwise, urllib3 will send the body using the standard
21:02:04             content-length form. Defaults to False.
21:02:04 
21:02:04         :param int body_pos:
21:02:04             Position to seek to in file-like body in the event of a retry or
21:02:04             redirect. Typically this won't need to be set because urllib3 will
21:02:04             auto-populate the value when needed.
21:02:04         """
21:02:04         parsed_url = parse_url(url)
21:02:04         destination_scheme = parsed_url.scheme
21:02:04 
21:02:04         if headers is None:
21:02:04             headers = self.headers
21:02:04 
21:02:04         if not isinstance(retries, Retry):
21:02:04             retries = Retry.from_int(retries, redirect=redirect, default=self.retries)
21:02:04 
21:02:04         if release_conn is None:
21:02:04             release_conn = preload_content
21:02:04 
21:02:04         # Check host
21:02:04         if assert_same_host and not self.is_same_host(url):
21:02:04             raise HostChangedError(self, url, retries)
21:02:04 
21:02:04         # Ensure that the URL we're connecting to is properly encoded
21:02:04         if url.startswith("/"):
21:02:04             url = to_str(_encode_target(url))
21:02:04         else:
21:02:04             url = to_str(parsed_url.url)
21:02:04 
21:02:04         conn = None
21:02:04 
21:02:04         # Track whether `conn` needs to be released before
21:02:04         # returning/raising/recursing. Update this variable if necessary, and
21:02:04         # leave `release_conn` constant throughout the function. That way, if
21:02:04         # the function recurses, the original value of `release_conn` will be
21:02:04         # passed down into the recursive call, and its value will be respected.
21:02:04         #
21:02:04         # See issue #651 [1] for details.
21:02:04         #
21:02:04         # [1] 
21:02:04         release_this_conn = release_conn
21:02:04 
21:02:04         http_tunnel_required = connection_requires_http_tunnel(
21:02:04             self.proxy, self.proxy_config, destination_scheme
21:02:04         )
21:02:04 
21:02:04         # Merge the proxy headers. Only done when not using HTTP CONNECT. We
21:02:04         # have to copy the headers dict so we can safely change it without those
21:02:04         # changes being reflected in anyone else's copy.
21:02:04         if not http_tunnel_required:
21:02:04             headers = headers.copy()  # type: ignore[attr-defined]
21:02:04             headers.update(self.proxy_headers)  # type: ignore[union-attr]
21:02:04 
21:02:04         # Must keep the exception bound to a separate variable or else Python 3
21:02:04         # complains about UnboundLocalError.
21:02:04 err = None
21:02:04
21:02:04 # Keep track of whether we cleanly exited the except block. This
21:02:04 # ensures we do proper cleanup in finally.
21:02:04 clean_exit = False
21:02:04
21:02:04 # Rewind body position, if needed. Record current position
21:02:04 # for future rewinds in the event of a redirect/retry.
21:02:04 body_pos = set_file_position(body, body_pos)
21:02:04
21:02:04 try:
21:02:04 # Request a connection from the queue.
21:02:04 timeout_obj = self._get_timeout(timeout)
21:02:04 conn = self._get_conn(timeout=pool_timeout)
21:02:04
21:02:04 conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment]
21:02:04
21:02:04 # Is this a closed/new connection that requires CONNECT tunnelling?
21:02:04 if self.proxy is not None and http_tunnel_required and conn.is_closed:
21:02:04 try:
21:02:04 self._prepare_proxy(conn)
21:02:04 except (BaseSSLError, OSError, SocketTimeout) as e:
21:02:04 self._raise_timeout(
21:02:04 err=e, url=self.proxy.url, timeout_value=conn.timeout
21:02:04 )
21:02:04 raise
21:02:04
21:02:04 # If we're going to release the connection in ``finally:``, then
21:02:04 # the response doesn't need to know about the connection. Otherwise
21:02:04 # it will also try to release it and we'll have a double-release
21:02:04 # mess.
21:02:04 response_conn = conn if not release_conn else None
21:02:04
21:02:04 # Make the request on the HTTPConnection object
21:02:04 > response = self._make_request(
21:02:04 conn,
21:02:04 method,
21:02:04 url,
21:02:04 timeout=timeout_obj,
21:02:04 body=body,
21:02:04 headers=headers,
21:02:04 chunked=chunked,
21:02:04 retries=retries,
21:02:04 response_conn=response_conn,
21:02:04 preload_content=preload_content,
21:02:04 decode_content=decode_content,
21:02:04 **response_kw,
21:02:04 )
21:02:04
21:02:04 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connectionpool.py:787:
21:02:04 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
21:02:04 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connectionpool.py:493: in _make_request
21:02:04 conn.request(
21:02:04 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:500: in request
21:02:04 self.endheaders()
21:02:04 /opt/pyenv/versions/3.11.10/lib/python3.11/http/client.py:1298: in endheaders
21:02:04 self._send_output(message_body, encode_chunked=encode_chunked)
21:02:04 /opt/pyenv/versions/3.11.10/lib/python3.11/http/client.py:1058: in _send_output
21:02:04 self.send(msg)
21:02:04 /opt/pyenv/versions/3.11.10/lib/python3.11/http/client.py:996: in send
21:02:04 self.connect()
21:02:04 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:331: in connect
21:02:04 self.sock = self._new_conn()
21:02:04 ^^^^^^^^^^^^^^^^
21:02:04 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
21:02:04
21:02:04 self =
21:02:04
21:02:04 def _new_conn(self) -> socket.socket:
21:02:04 """Establish a socket connection and set nodelay settings on it.
21:02:04
21:02:04 :return: New socket connection.
21:02:04 """
21:02:04 try:
21:02:04 sock = connection.create_connection(
21:02:04 (self._dns_host, self.port),
21:02:04 self.timeout,
21:02:04 source_address=self.source_address,
21:02:04 socket_options=self.socket_options,
21:02:04 )
21:02:04 except socket.gaierror as e:
21:02:04 raise NameResolutionError(self.host, self, e) from e
21:02:04 except SocketTimeout as e:
21:02:04 raise ConnectTimeoutError(
21:02:04 self,
21:02:04 f"Connection to {self.host} timed out. (connect timeout={self.timeout})",
21:02:04 ) from e
21:02:04
21:02:04 except OSError as e:
21:02:04 > raise NewConnectionError(
21:02:04 self, f"Failed to establish a new connection: {e}"
21:02:04 ) from e
21:02:04 E urllib3.exceptions.NewConnectionError: HTTPConnection(host='localhost', port=8191): Failed to establish a new connection: [Errno 111] Connection refused
21:02:04
21:02:04 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:219: NewConnectionError
21:02:04
21:02:04 The above exception was the direct cause of the following exception:
21:02:04
21:02:04 self =
21:02:04 request = , stream = False
21:02:04 timeout = Timeout(connect=30, read=30, total=None), verify = True, cert = None
21:02:04 proxies = OrderedDict()
21:02:04
21:02:04 def send(
21:02:04 self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None
21:02:04 ):
21:02:04 """Sends PreparedRequest object. Returns Response object.
21:02:04
21:02:04 :param request: The :class:`PreparedRequest ` being sent.
21:02:04 :param stream: (optional) Whether to stream the request content.
21:02:04 :param timeout: (optional) How long to wait for the server to send
21:02:04 data before giving up, as a float, or a :ref:`(connect timeout,
21:02:04 read timeout) ` tuple.
21:02:04 :type timeout: float or tuple or urllib3 Timeout object
21:02:04 :param verify: (optional) Either a boolean, in which case it controls whether
21:02:04 we verify the server's TLS certificate, or a string, in which case it
21:02:04 must be a path to a CA bundle to use
21:02:04 :param cert: (optional) Any user-provided SSL certificate to be trusted.
21:02:04 :param proxies: (optional) The proxies dictionary to apply to the request.
21:02:04 :rtype: requests.Response
21:02:04 """
21:02:04
21:02:04 try:
21:02:04 conn = self.get_connection_with_tls_context(
21:02:04 request, verify, proxies=proxies, cert=cert
21:02:04 )
21:02:04 except LocationValueError as e:
21:02:04 raise InvalidURL(e, request=request)
21:02:04
21:02:04 self.cert_verify(conn, request.url, verify, cert)
21:02:04 url = self.request_url(request, proxies)
21:02:04 self.add_headers(
21:02:04 request,
21:02:04 stream=stream,
21:02:04 timeout=timeout,
21:02:04 verify=verify,
21:02:04 cert=cert,
21:02:04 proxies=proxies,
21:02:04 )
21:02:04
21:02:04 chunked = not (request.body is None or "Content-Length" in request.headers)
21:02:04
21:02:04 if isinstance(timeout, tuple):
21:02:04 try:
21:02:04 connect, read = timeout
21:02:04 timeout = TimeoutSauce(connect=connect, read=read)
21:02:04 except ValueError:
21:02:04 raise ValueError(
21:02:04 f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, "
21:02:04 f"or a single float to set both timeouts to the same value."
21:02:04 )
21:02:04 elif isinstance(timeout, TimeoutSauce):
21:02:04 pass
21:02:04 else:
21:02:04 timeout = TimeoutSauce(connect=timeout, read=timeout)
21:02:04
21:02:04 try:
21:02:04 > resp = conn.urlopen(
21:02:04 method=request.method,
21:02:04 url=url,
21:02:04 body=request.body,
21:02:04 headers=request.headers,
21:02:04 redirect=False,
21:02:04 assert_same_host=False,
21:02:04 preload_content=False,
21:02:04 decode_content=False,
21:02:04 retries=self.max_retries,
21:02:04 timeout=timeout,
21:02:04 chunked=chunked,
21:02:04 )
21:02:04
21:02:04 ../.tox/tests121/lib/python3.11/site-packages/requests/adapters.py:644:
21:02:04 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
21:02:04 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connectionpool.py:841: in urlopen
21:02:04 retries = retries.increment(
21:02:04 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
21:02:04
21:02:04 self = Retry(total=0, connect=None, read=False, redirect=None, status=None)
21:02:04 method = 'GET'
21:02:04 url = '/rests/data/transportpce-portmapping:network/nodes=ROADMA01/node-info'
21:02:04 response = None
21:02:04 error = NewConnectionError("HTTPConnection(host='localhost', port=8191): Failed to establish a new connection: [Errno 111] Connection refused")
21:02:04 _pool =
21:02:04 _stacktrace =
21:02:04
21:02:04 def increment(
21:02:04 self,
21:02:04 method: str | None = None,
21:02:04 url: str | None = None,
21:02:04 response: BaseHTTPResponse | None = None,
21:02:04 error: Exception | None = None,
21:02:04 _pool: ConnectionPool | None = None,
21:02:04 _stacktrace: TracebackType | None = None,
21:02:04 ) -> Self:
21:02:04 """Return a new Retry object with incremented retry counters.
21:02:04
21:02:04 :param response: A response object, or None, if the server did not
21:02:04 return a response.
21:02:04 :type response: :class:`~urllib3.response.BaseHTTPResponse`
21:02:04 :param Exception error: An error encountered during the request, or
21:02:04 None if the response was received successfully.
21:02:04
21:02:04 :return: A new ``Retry`` object.
21:02:04 """
21:02:04 if self.total is False and error:
21:02:04 # Disabled, indicate to re-raise the error.
21:02:04 raise reraise(type(error), error, _stacktrace)
21:02:04
21:02:04 total = self.total
21:02:04 if total is not None:
21:02:04 total -= 1
21:02:04
21:02:04 connect = self.connect
21:02:04 read = self.read
21:02:04 redirect = self.redirect
21:02:04 status_count = self.status
21:02:04 other = self.other
21:02:04 cause = "unknown"
21:02:04 status = None
21:02:04 redirect_location = None
21:02:04
21:02:04 if error and self._is_connection_error(error):
21:02:04 # Connect retry?
21:02:04 if connect is False:
21:02:04 raise reraise(type(error), error, _stacktrace)
21:02:04 elif connect is not None:
21:02:04 connect -= 1
21:02:04
21:02:04 elif error and self._is_read_error(error):
21:02:04 # Read retry?
21:02:04 if read is False or method is None or not self._is_method_retryable(method):
21:02:04 raise reraise(type(error), error, _stacktrace)
21:02:04 elif read is not None:
21:02:04 read -= 1
21:02:04
21:02:04 elif error:
21:02:04 # Other retry?
21:02:04 if other is not None:
21:02:04 other -= 1
21:02:04
21:02:04 elif response and response.get_redirect_location():
21:02:04 # Redirect retry?
21:02:04 if redirect is not None:
21:02:04 redirect -= 1
21:02:04 cause = "too many redirects"
21:02:04 response_redirect_location = response.get_redirect_location()
21:02:04 if response_redirect_location:
21:02:04 redirect_location = response_redirect_location
21:02:04 status = response.status
21:02:04
21:02:04 else:
21:02:04 # Incrementing because of a server error like a 500 in
21:02:04 # status_forcelist and the given method is in the allowed_methods
21:02:04 cause = ResponseError.GENERIC_ERROR
21:02:04 if response and response.status:
21:02:04 if status_count is not None:
21:02:04 status_count -= 1
21:02:04 cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status)
21:02:04 status = response.status
21:02:04
21:02:04 history = self.history + (
21:02:04 RequestHistory(method, url, error, status, redirect_location),
21:02:04 )
21:02:04
21:02:04 new_retry = self.new(
21:02:04 total=total,
21:02:04 connect=connect,
21:02:04 read=read,
21:02:04 redirect=redirect,
21:02:04 status=status_count,
21:02:04 other=other,
21:02:04 history=history,
21:02:04 )
21:02:04
21:02:04 if new_retry.is_exhausted():
21:02:04 reason = error or ResponseError(cause)
21:02:04 > raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type]
21:02:04 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
21:02:04 E urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=8191): Max retries exceeded with url: /rests/data/transportpce-portmapping:network/nodes=ROADMA01/node-info (Caused by NewConnectionError("HTTPConnection(host='localhost', port=8191): Failed to establish a new connection: [Errno 111] Connection refused"))
21:02:04
21:02:04 ../.tox/tests121/lib/python3.11/site-packages/urllib3/util/retry.py:535: MaxRetryError
21:02:04
21:02:04 During handling of the above exception, another exception occurred:
21:02:04
21:02:04 self =
21:02:04
21:02:04 def test_03_rdm_portmapping_info(self):
21:02:04 > response = test_utils.get_portmapping_node_attr("ROADMA01", "node-info", None)
21:02:04 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
21:02:04
21:02:04 transportpce_tests/1.2.1/test01_portmapping.py:60:
21:02:04 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
21:02:04 transportpce_tests/common/test_utils.py:519: in get_portmapping_node_attr
21:02:04 response = get_request(target_url)
21:02:04 ^^^^^^^^^^^^^^^^^^^^^^^
21:02:04 transportpce_tests/common/test_utils.py:117: in get_request
21:02:04 return requests.request(
21:02:04 ../.tox/tests121/lib/python3.11/site-packages/requests/api.py:59: in request
21:02:04 return session.request(method=method, url=url, **kwargs)
21:02:04 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
21:02:04 ../.tox/tests121/lib/python3.11/site-packages/requests/sessions.py:589: in request
21:02:04 resp = self.send(prep, **send_kwargs)
21:02:04 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
21:02:04 ../.tox/tests121/lib/python3.11/site-packages/requests/sessions.py:703: in send
21:02:04 r = adapter.send(request, **kwargs)
21:02:04 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
21:02:04 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
21:02:04
21:02:04 self =
21:02:04 request = , stream = False
21:02:04 timeout = Timeout(connect=30, read=30, total=None), verify = True, cert = None
21:02:04 proxies = OrderedDict()
21:02:04
21:02:04 def send(
21:02:04 self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None
21:02:04 ):
21:02:04 """Sends PreparedRequest object. Returns Response object.
21:02:04
21:02:04 :param request: The :class:`PreparedRequest ` being sent.
21:02:04 :param stream: (optional) Whether to stream the request content.
21:02:04 :param timeout: (optional) How long to wait for the server to send
21:02:04 data before giving up, as a float, or a :ref:`(connect timeout,
21:02:04 read timeout) ` tuple.
21:02:04 :type timeout: float or tuple or urllib3 Timeout object
21:02:04 :param verify: (optional) Either a boolean, in which case it controls whether
21:02:04 we verify the server's TLS certificate, or a string, in which case it
21:02:04 must be a path to a CA bundle to use
21:02:04 :param cert: (optional) Any user-provided SSL certificate to be trusted.
21:02:04 :param proxies: (optional) The proxies dictionary to apply to the request.
21:02:04 :rtype: requests.Response
21:02:04 """
21:02:04
21:02:04 try:
21:02:04 conn = self.get_connection_with_tls_context(
21:02:04 request, verify, proxies=proxies, cert=cert
21:02:04 )
21:02:04 except LocationValueError as e:
21:02:04 raise InvalidURL(e, request=request)
21:02:04
21:02:04 self.cert_verify(conn, request.url, verify, cert)
21:02:04 url = self.request_url(request, proxies)
21:02:04 self.add_headers(
21:02:04 request,
21:02:04 stream=stream,
21:02:04 timeout=timeout,
21:02:04 verify=verify,
21:02:04 cert=cert,
21:02:04 proxies=proxies,
21:02:04 )
21:02:04
21:02:04 chunked = not (request.body is None or "Content-Length" in request.headers)
21:02:04
21:02:04 if isinstance(timeout, tuple):
21:02:04 try:
21:02:04 connect, read = timeout
21:02:04 timeout = TimeoutSauce(connect=connect, read=read)
21:02:04 except ValueError:
21:02:04 raise ValueError(
21:02:04 f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, "
21:02:04 f"or a single float to set both timeouts to the same value."
21:02:04 )
21:02:04 elif isinstance(timeout, TimeoutSauce):
21:02:04 pass
21:02:04 else:
21:02:04 timeout = TimeoutSauce(connect=timeout, read=timeout)
21:02:04
21:02:04 try:
21:02:04 resp = conn.urlopen(
21:02:04 method=request.method,
21:02:04 url=url,
21:02:04 body=request.body,
21:02:04 headers=request.headers,
21:02:04 redirect=False,
21:02:04 assert_same_host=False,
21:02:04 preload_content=False,
21:02:04 decode_content=False,
21:02:04 retries=self.max_retries,
21:02:04 timeout=timeout,
21:02:04 chunked=chunked,
21:02:04 )
21:02:04
21:02:04 except (ProtocolError, OSError) as err:
21:02:04 raise ConnectionError(err, request=request)
21:02:04
21:02:04 except MaxRetryError as e:
21:02:04 if isinstance(e.reason, ConnectTimeoutError):
21:02:04 # TODO: Remove this in 3.0.0: see #2811
21:02:04 if not isinstance(e.reason, NewConnectionError):
21:02:04 raise ConnectTimeout(e, request=request)
21:02:04
21:02:04 if isinstance(e.reason, ResponseError):
21:02:04 raise RetryError(e, request=request)
21:02:04
21:02:04 if isinstance(e.reason, _ProxyError):
21:02:04 raise ProxyError(e, request=request)
21:02:04
21:02:04 if isinstance(e.reason, _SSLError):
21:02:04 # This branch is for urllib3 v1.22 and later.
21:02:04 raise SSLError(e, request=request)
21:02:04
21:02:04 > raise ConnectionError(e, request=request)
21:02:04 E requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=8191): Max retries exceeded with url: /rests/data/transportpce-portmapping:network/nodes=ROADMA01/node-info (Caused by NewConnectionError("HTTPConnection(host='localhost', port=8191): Failed to establish a new connection: [Errno 111] Connection refused"))
21:02:04
21:02:04 ../.tox/tests121/lib/python3.11/site-packages/requests/adapters.py:677: ConnectionError
21:02:04 ----------------------------- Captured stdout call -----------------------------
21:02:04 execution of test_03_rdm_portmapping_info
21:02:04 ______ TestTransportPCEPortmapping.test_04_rdm_portmapping_DEG1_TTP_TXRX _______
21:02:04
21:02:04 self =
21:02:04
21:02:04 def _new_conn(self) -> socket.socket:
21:02:04 """Establish a socket connection and set nodelay settings on it.
21:02:04
21:02:04 :return: New socket connection.
21:02:04 """
21:02:04 try:
21:02:04 > sock = connection.create_connection(
21:02:04 (self._dns_host, self.port),
21:02:04 self.timeout,
21:02:04 source_address=self.source_address,
21:02:04 socket_options=self.socket_options,
21:02:04 )
21:02:04
21:02:04 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:204:
21:02:04 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
21:02:04 ../.tox/tests121/lib/python3.11/site-packages/urllib3/util/connection.py:85: in create_connection
21:02:04 raise err
21:02:04 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
21:02:04
21:02:04 address = ('localhost', 8191), timeout = 30, source_address = None
21:02:04 socket_options = [(6, 1, 1)]
21:02:04
21:02:04 def create_connection(
21:02:04 address: tuple[str, int],
21:02:04 timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT,
21:02:04 source_address: tuple[str, int] | None = None,
21:02:04 socket_options: _TYPE_SOCKET_OPTIONS | None = None,
21:02:04 ) -> socket.socket:
21:02:04 """Connect to *address* and return the socket object.
21:02:04
21:02:04 Convenience function. Connect to *address* (a 2-tuple ``(host,
21:02:04 port)``) and return the socket object. Passing the optional
21:02:04 *timeout* parameter will set the timeout on the socket instance
21:02:04 before attempting to connect. If no *timeout* is supplied, the
21:02:04 global default timeout setting returned by :func:`socket.getdefaulttimeout`
21:02:04 is used. If *source_address* is set it must be a tuple of (host, port)
21:02:04 for the socket to bind as a source address before making the connection.
21:02:04 An host of '' or port 0 tells the OS to use the default.
21:02:04 """
21:02:04
21:02:04 host, port = address
21:02:04 if host.startswith("["):
21:02:04 host = host.strip("[]")
21:02:04 err = None
21:02:04
21:02:04 # Using the value from allowed_gai_family() in the context of getaddrinfo lets
21:02:04 # us select whether to work with IPv4 DNS records, IPv6 records, or both.
21:02:04 # The original create_connection function always returns all records.
21:02:04 family = allowed_gai_family()
21:02:04
21:02:04 try:
21:02:04 host.encode("idna")
21:02:04 except UnicodeError:
21:02:04 raise LocationParseError(f"'{host}', label empty or too long") from None
21:02:04
21:02:04 for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM):
21:02:04 af, socktype, proto, canonname, sa = res
21:02:04 sock = None
21:02:04 try:
21:02:04 sock = socket.socket(af, socktype, proto)
21:02:04
21:02:04 # If provided, set socket level options before connecting.
21:02:04 _set_socket_options(sock, socket_options)
21:02:04
21:02:04 if timeout is not _DEFAULT_TIMEOUT:
21:02:04 sock.settimeout(timeout)
21:02:04 if source_address:
21:02:04 sock.bind(source_address)
21:02:04 > sock.connect(sa)
21:02:04 E ConnectionRefusedError: [Errno 111] Connection refused
21:02:04
21:02:04 ../.tox/tests121/lib/python3.11/site-packages/urllib3/util/connection.py:73: ConnectionRefusedError
21:02:04
21:02:04 The above exception was the direct cause of the following exception:
21:02:04
21:02:04 self =
21:02:04 method = 'GET'
21:02:04 url = '/rests/data/transportpce-portmapping:network/nodes=ROADMA01/mapping=DEG1-TTP-TXRX'
21:02:04 body = None
21:02:04 headers = {'User-Agent': 'python-requests/2.32.5', 'Accept-Encoding': 'gzip, deflate', 'Accept': 'application/json', 'Connection': 'keep-alive', 'Content-Type': 'application/json', 'Authorization': 'Basic YWRtaW46YWRtaW4='}
21:02:04 retries = Retry(total=0, connect=None, read=False, redirect=None, status=None)
21:02:04 redirect = False, assert_same_host = False
21:02:04 timeout = Timeout(connect=30, read=30, total=None), pool_timeout = None
21:02:04 release_conn = False, chunked = False, body_pos = None, preload_content = False
21:02:04 decode_content = False, response_kw = {}
21:02:04 parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/rests/data/transportpce-portmapping:network/nodes=ROADMA01/mapping=DEG1-TTP-TXRX', query=None, fragment=None)
21:02:04 destination_scheme = None, conn = None, release_this_conn = True
21:02:04 http_tunnel_required = False, err = None, clean_exit = False
21:02:04
21:02:04 def urlopen( # type: ignore[override]
21:02:04 self,
21:02:04 method: str,
21:02:04 url: str,
21:02:04 body: _TYPE_BODY | None = None,
21:02:04 headers: typing.Mapping[str, str] | None = None,
21:02:04 retries: Retry | bool | int | None = None,
21:02:04 redirect: bool = True,
21:02:04 assert_same_host: bool = True,
21:02:04 timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT,
21:02:04 pool_timeout: int | None = None,
21:02:04 release_conn: bool | None = None,
21:02:04 chunked: bool = False,
21:02:04 body_pos: _TYPE_BODY_POSITION | None = None,
21:02:04 preload_content: bool = True,
21:02:04 decode_content: bool = True,
21:02:04 **response_kw: typing.Any,
21:02:04 ) -> BaseHTTPResponse:
21:02:04 """
21:02:04 Get a connection from the pool and perform an HTTP request. This is the
21:02:04 lowest level call for making a request, so you'll need to specify all
21:02:04 the raw details.
21:02:04
21:02:04 .. note::
21:02:04
21:02:04 More commonly, it's appropriate to use a convenience method
21:02:04 such as :meth:`request`.
21:02:04
21:02:04 .. note::
21:02:04
21:02:04 `release_conn` will only behave as expected if
21:02:04 `preload_content=False` because we want to make
21:02:04 `preload_content=False` the default behaviour someday soon without
21:02:04 breaking backwards compatibility.
21:02:04
21:02:04 :param method:
21:02:04 HTTP request method (such as GET, POST, PUT, etc.)
21:02:04
21:02:04 :param url:
21:02:04 The URL to perform the request on.
21:02:04
21:02:04 :param body:
21:02:04 Data to send in the request body, either :class:`str`, :class:`bytes`,
21:02:04 an iterable of :class:`str`/:class:`bytes`, or a file-like object.
21:02:04
21:02:04 :param headers:
21:02:04 Dictionary of custom headers to send, such as User-Agent,
21:02:04 If-None-Match, etc. If None, pool headers are used. If provided,
21:02:04 these headers completely replace any pool-specific headers.
21:02:04
21:02:04 :param retries:
21:02:04 Configure the number of retries to allow before raising a
21:02:04 :class:`~urllib3.exceptions.MaxRetryError` exception.
21:02:04
21:02:04 If ``None`` (default) will retry 3 times, see ``Retry.DEFAULT``. Pass a
21:02:04 :class:`~urllib3.util.retry.Retry` object for fine-grained control
21:02:04 over different types of retries.
21:02:04 Pass an integer number to retry connection errors that many times,
21:02:04 but no other types of errors. Pass zero to never retry.
21:02:04
21:02:04 If ``False``, then retries are disabled and any exception is raised
21:02:04 immediately. Also, instead of raising a MaxRetryError on redirects,
21:02:04 the redirect response will be returned.
21:02:04
21:02:04 :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int.
21:02:04
21:02:04 :param redirect:
21:02:04 If True, automatically handle redirects (status codes 301, 302,
21:02:04 303, 307, 308). Each redirect counts as a retry. Disabling retries
21:02:04 will disable redirect, too.
21:02:04
21:02:04 :param assert_same_host:
21:02:04 If ``True``, will make sure that the host of the pool requests is
21:02:04 consistent else will raise HostChangedError. When ``False``, you can
21:02:04 use the pool on an HTTP proxy and request foreign hosts.
21:02:04
21:02:04 :param timeout:
21:02:04 If specified, overrides the default timeout for this one
21:02:04 request. It may be a float (in seconds) or an instance of
21:02:04 :class:`urllib3.util.Timeout`.
21:02:04
21:02:04 :param pool_timeout:
21:02:04 If set and the pool is set to block=True, then this method will
21:02:04 block for ``pool_timeout`` seconds and raise EmptyPoolError if no
21:02:04 connection is available within the time period.
21:02:04
21:02:04 :param bool preload_content:
21:02:04 If True, the response's body will be preloaded into memory.
21:02:04
21:02:04 :param bool decode_content:
21:02:04 If True, will attempt to decode the body based on the
21:02:04 'content-encoding' header.
21:02:04
21:02:04 :param release_conn:
21:02:04 If False, then the urlopen call will not release the connection
21:02:04 back into the pool once a response is received (but will release if
21:02:04 you read the entire contents of the response such as when
21:02:04 `preload_content=True`). This is useful if you're not preloading
21:02:04 the response's content immediately. You will need to call
21:02:04 ``r.release_conn()`` on the response ``r`` to return the connection
21:02:04 back into the pool. If None, it takes the value of ``preload_content``
21:02:04 which defaults to ``True``.
21:02:04
21:02:04 :param bool chunked:
21:02:04 If True, urllib3 will send the body using chunked transfer
21:02:04 encoding. Otherwise, urllib3 will send the body using the standard
21:02:04 content-length form. Defaults to False.
21:02:04
21:02:04 :param int body_pos:
21:02:04 Position to seek to in file-like body in the event of a retry or
21:02:04 redirect. Typically this won't need to be set because urllib3 will
21:02:04 auto-populate the value when needed.
21:02:04 """
21:02:04 parsed_url = parse_url(url)
21:02:04 destination_scheme = parsed_url.scheme
21:02:04
21:02:04 if headers is None:
21:02:04 headers = self.headers
21:02:04
21:02:04 if not isinstance(retries, Retry):
21:02:04 retries = Retry.from_int(retries, redirect=redirect, default=self.retries)
21:02:04
21:02:04 if release_conn is None:
21:02:04 release_conn = preload_content
21:02:04
21:02:04 # Check host
21:02:04 if assert_same_host and not self.is_same_host(url):
21:02:04 raise HostChangedError(self, url, retries)
21:02:04
21:02:04 # Ensure that the URL we're connecting to is properly encoded
21:02:04 if url.startswith("/"):
21:02:04 url = to_str(_encode_target(url))
21:02:04 else:
21:02:04 url = to_str(parsed_url.url)
21:02:04
21:02:04 conn = None
21:02:04
21:02:04 # Track whether `conn` needs to be released before
21:02:04 # returning/raising/recursing. Update this variable if necessary, and
21:02:04 # leave `release_conn` constant throughout the function. That way, if
21:02:04 # the function recurses, the original value of `release_conn` will be
21:02:04 # passed down into the recursive call, and its value will be respected.
21:02:04 #
21:02:04 # See issue #651
21:02:04 #
21:02:04 # [1]
21:02:04 release_this_conn = release_conn
21:02:04
21:02:04 http_tunnel_required = connection_requires_http_tunnel(
21:02:04 self.proxy, self.proxy_config, destination_scheme
21:02:04 )
21:02:04
21:02:04 # Merge the proxy headers. Only done when not using HTTP CONNECT. We
21:02:04 # have to copy the headers dict so we can safely change it without those
21:02:04 # changes being reflected in anyone else's copy.
21:02:04 if not http_tunnel_required:
21:02:04 headers = headers.copy() # type: ignore[attr-defined]
21:02:04 headers.update(self.proxy_headers) # type: ignore[union-attr]
21:02:04
21:02:04 # Must keep the exception bound to a separate variable or else Python 3
21:02:04 # complains about UnboundLocalError.
21:02:04 err = None
21:02:04
21:02:04 # Keep track of whether we cleanly exited the except block. This
21:02:04 # ensures we do proper cleanup in finally.
21:02:04 clean_exit = False
21:02:04
21:02:04 # Rewind body position, if needed. Record current position
21:02:04 # for future rewinds in the event of a redirect/retry.
21:02:04 body_pos = set_file_position(body, body_pos)
21:02:04
21:02:04 try:
21:02:04 # Request a connection from the queue.
21:02:04 timeout_obj = self._get_timeout(timeout)
21:02:04 conn = self._get_conn(timeout=pool_timeout)
21:02:04
21:02:04 conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment]
21:02:04
21:02:04 # Is this a closed/new connection that requires CONNECT tunnelling?
21:02:04 if self.proxy is not None and http_tunnel_required and conn.is_closed:
21:02:04 try:
21:02:04 self._prepare_proxy(conn)
21:02:04 except (BaseSSLError, OSError, SocketTimeout) as e:
21:02:04 self._raise_timeout(
21:02:04 err=e, url=self.proxy.url, timeout_value=conn.timeout
21:02:04 )
21:02:04 raise
21:02:04
21:02:04 # If we're going to release the connection in ``finally:``, then
21:02:04 # the response doesn't need to know about the connection. Otherwise
21:02:04 # it will also try to release it and we'll have a double-release
21:02:04 # mess.
21:02:04 response_conn = conn if not release_conn else None
21:02:04
21:02:04 # Make the request on the HTTPConnection object
21:02:04 > response = self._make_request(
21:02:04 conn,
21:02:04 method,
21:02:04 url,
21:02:04 timeout=timeout_obj,
21:02:04 body=body,
21:02:04 headers=headers,
21:02:04 chunked=chunked,
21:02:04 retries=retries,
21:02:04 response_conn=response_conn,
21:02:04 preload_content=preload_content,
21:02:04 decode_content=decode_content,
21:02:04 **response_kw,
21:02:04 )
21:02:04
21:02:04 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connectionpool.py:787:
21:02:04 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
21:02:04 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connectionpool.py:493: in _make_request
21:02:04 conn.request(
21:02:04 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:500: in request
21:02:04 self.endheaders()
21:02:04 /opt/pyenv/versions/3.11.10/lib/python3.11/http/client.py:1298: in endheaders
21:02:04 self._send_output(message_body, encode_chunked=encode_chunked)
21:02:04 /opt/pyenv/versions/3.11.10/lib/python3.11/http/client.py:1058: in _send_output
21:02:04 self.send(msg)
21:02:04 /opt/pyenv/versions/3.11.10/lib/python3.11/http/client.py:996: in send
21:02:04 self.connect()
21:02:04 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:331: in connect
21:02:04 self.sock = self._new_conn()
21:02:04 ^^^^^^^^^^^^^^^^
21:02:04 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
21:02:04
21:02:04 self =
21:02:04
21:02:04 def _new_conn(self) -> socket.socket:
21:02:04 """Establish a socket connection and set nodelay settings on it.
21:02:04
21:02:04 :return: New socket connection.
21:02:04 """
21:02:04 try:
21:02:04 sock = connection.create_connection(
21:02:04 (self._dns_host, self.port),
21:02:04 self.timeout,
21:02:04 source_address=self.source_address,
21:02:04 socket_options=self.socket_options,
21:02:04 )
21:02:04 except socket.gaierror as e:
21:02:04 raise NameResolutionError(self.host, self, e) from e
21:02:04 except SocketTimeout as e:
21:02:04 raise ConnectTimeoutError(
21:02:04 self,
21:02:04 f"Connection to {self.host} timed out. (connect timeout={self.timeout})",
21:02:04 ) from e
21:02:04
21:02:04 except OSError as e:
21:02:04 > raise NewConnectionError(
21:02:04 self, f"Failed to establish a new connection: {e}"
21:02:04 ) from e
21:02:04 E urllib3.exceptions.NewConnectionError: HTTPConnection(host='localhost', port=8191): Failed to establish a new connection: [Errno 111] Connection refused
21:02:04
21:02:04 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:219: NewConnectionError
21:02:04
21:02:04 The above exception was the direct cause of the following exception:
21:02:04
21:02:04 self =
21:02:04 request = , stream = False
21:02:04 timeout = Timeout(connect=30, read=30, total=None), verify = True, cert = None
21:02:04 proxies = OrderedDict()
21:02:04
21:02:04 def send(
21:02:04 self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None
21:02:04 ):
21:02:04 """Sends PreparedRequest object. Returns Response object.
21:02:04
21:02:04 :param request: The :class:`PreparedRequest ` being sent.
21:02:04 :param stream: (optional) Whether to stream the request content.
21:02:04 :param timeout: (optional) How long to wait for the server to send
21:02:04 data before giving up, as a float, or a :ref:`(connect timeout,
21:02:04 read timeout) ` tuple.
21:02:04 :type timeout: float or tuple or urllib3 Timeout object
21:02:04 :param verify: (optional) Either a boolean, in which case it controls whether
21:02:04 we verify the server's TLS certificate, or a string, in which case it
21:02:04 must be a path to a CA bundle to use
21:02:04 :param cert: (optional) Any user-provided SSL certificate to be trusted.
21:02:04 :param proxies: (optional) The proxies dictionary to apply to the request.
21:02:04 :rtype: requests.Response
21:02:04 """
21:02:04
21:02:04 try:
21:02:04 conn = self.get_connection_with_tls_context(
21:02:04 request, verify, proxies=proxies, cert=cert
21:02:04 )
21:02:04 except LocationValueError as e:
21:02:04 raise InvalidURL(e, request=request)
21:02:04
21:02:04 self.cert_verify(conn, request.url, verify, cert)
21:02:04 url = self.request_url(request, proxies)
21:02:04 self.add_headers(
21:02:04 request,
21:02:04 stream=stream,
21:02:04 timeout=timeout,
21:02:04 verify=verify,
21:02:04 cert=cert,
21:02:04 proxies=proxies,
21:02:04 )
21:02:04
21:02:04 chunked = not (request.body is None or "Content-Length" in request.headers)
21:02:04
21:02:04 if isinstance(timeout, tuple):
21:02:04 try:
21:02:04 connect, read = timeout
21:02:04 timeout = TimeoutSauce(connect=connect, read=read)
21:02:04 except ValueError:
21:02:04 raise ValueError(
21:02:04 f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, "
21:02:04 f"or a single float to set both timeouts to the same value."
21:02:04 )
21:02:04 elif isinstance(timeout, TimeoutSauce):
21:02:04 pass
21:02:04 else:
21:02:04 timeout = TimeoutSauce(connect=timeout, read=timeout)
21:02:04
21:02:04 try:
21:02:04 > resp = conn.urlopen(
21:02:04 method=request.method,
21:02:04 url=url,
21:02:04 body=request.body,
21:02:04 headers=request.headers,
21:02:04 redirect=False,
21:02:04 assert_same_host=False,
21:02:04 preload_content=False,
21:02:04 decode_content=False,
21:02:04 retries=self.max_retries,
21:02:04 timeout=timeout,
21:02:04 chunked=chunked,
21:02:04 )
21:02:04
21:02:04 ../.tox/tests121/lib/python3.11/site-packages/requests/adapters.py:644:
21:02:04 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
21:02:04 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connectionpool.py:841: in urlopen
21:02:04 retries = retries.increment(
21:02:04 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
21:02:04
21:02:04 self = Retry(total=0, connect=None, read=False, redirect=None, status=None)
21:02:04 method = 'GET'
21:02:04 url = '/rests/data/transportpce-portmapping:network/nodes=ROADMA01/mapping=DEG1-TTP-TXRX'
21:02:04 response = None
21:02:04 error = NewConnectionError("HTTPConnection(host='localhost', port=8191): Failed to establish a new connection: [Errno 111] Connection refused")
21:02:04 _pool =
21:02:04 _stacktrace =
21:02:04
21:02:04 def increment(
21:02:04 self,
21:02:04 method: str | None = None,
21:02:04 url: str | None = None,
21:02:04 response: BaseHTTPResponse | None = None,
21:02:04 error: Exception | None = None,
21:02:04 _pool: ConnectionPool | None = None,
21:02:04 _stacktrace: TracebackType | None = None,
21:02:04 ) -> Self:
21:02:04 """Return a new Retry object with incremented retry counters.
21:02:04
21:02:04 :param response: A response object, or None, if the server did not
21:02:04 return a response.
21:02:04 :type response: :class:`~urllib3.response.BaseHTTPResponse`
21:02:04 :param Exception error: An error encountered during the request, or
21:02:04 None if the response was received successfully.
21:02:04
21:02:04 :return: A new ``Retry`` object.
21:02:04 """
21:02:04 if self.total is False and error:
21:02:04 # Disabled, indicate to re-raise the error.
21:02:04 raise reraise(type(error), error, _stacktrace)
21:02:04
21:02:04 total = self.total
21:02:04 if total is not None:
21:02:04 total -= 1
21:02:04
21:02:04 connect = self.connect
21:02:04 read = self.read
21:02:04 redirect = self.redirect
21:02:04 status_count = self.status
21:02:04 other = self.other
21:02:04 cause = "unknown"
21:02:04 status = None
21:02:04 redirect_location = None
21:02:04
21:02:04 if error and self._is_connection_error(error):
21:02:04 # Connect retry?
21:02:04 if connect is False:
21:02:04 raise reraise(type(error), error, _stacktrace)
21:02:04 elif connect is not None:
21:02:04 connect -= 1
21:02:04
21:02:04 elif error and self._is_read_error(error):
21:02:04 # Read retry?
21:02:04 if read is False or method is None or not self._is_method_retryable(method):
21:02:04 raise reraise(type(error), error, _stacktrace)
21:02:04 elif read is not None:
21:02:04 read -= 1
21:02:04
21:02:04 elif error:
21:02:04 # Other retry?
21:02:04 if other is not None:
21:02:04 other -= 1
21:02:04
21:02:04 elif response and response.get_redirect_location():
21:02:04 # Redirect retry?
21:02:04 if redirect is not None:
21:02:04 redirect -= 1
21:02:04 cause = "too many redirects"
21:02:04 response_redirect_location = response.get_redirect_location()
21:02:04 if response_redirect_location:
21:02:04 redirect_location = response_redirect_location
21:02:04 status = response.status
21:02:04
21:02:04 else:
21:02:04 # Incrementing because of a server error like a 500 in
21:02:04 # status_forcelist and the given method is in the allowed_methods
21:02:04 cause = ResponseError.GENERIC_ERROR
21:02:04 if response and response.status:
21:02:04 if status_count is not None:
21:02:04 status_count -= 1
21:02:04 cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status)
21:02:04 status = response.status
21:02:04
21:02:04 history = self.history + (
21:02:04 RequestHistory(method, url, error, status, redirect_location),
21:02:04 )
21:02:04
21:02:04 new_retry = self.new(
21:02:04 total=total,
21:02:04 connect=connect,
21:02:04 read=read,
21:02:04 redirect=redirect,
21:02:04 status=status_count,
21:02:04 other=other,
21:02:04 history=history,
21:02:04 )
21:02:04
21:02:04 if new_retry.is_exhausted():
21:02:04 reason = error or ResponseError(cause)
21:02:04 > raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type]
21:02:04 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
21:02:04 E urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=8191): Max retries exceeded with url: /rests/data/transportpce-portmapping:network/nodes=ROADMA01/mapping=DEG1-TTP-TXRX (Caused by NewConnectionError("HTTPConnection(host='localhost', port=8191): Failed to establish a new connection: [Errno 111] Connection refused"))
21:02:04
21:02:04 ../.tox/tests121/lib/python3.11/site-packages/urllib3/util/retry.py:535: MaxRetryError
21:02:04
21:02:04 During handling of the above exception, another exception occurred:
21:02:04
21:02:04 self =
21:02:04
21:02:04 def test_04_rdm_portmapping_DEG1_TTP_TXRX(self):
21:02:04 > response = test_utils.get_portmapping_node_attr("ROADMA01", "mapping", "DEG1-TTP-TXRX")
21:02:04 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
21:02:04
21:02:04 transportpce_tests/1.2.1/test01_portmapping.py:73:
21:02:04 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
21:02:04 transportpce_tests/common/test_utils.py:519: in get_portmapping_node_attr
21:02:04 response = get_request(target_url)
21:02:04 ^^^^^^^^^^^^^^^^^^^^^^^
21:02:04 transportpce_tests/common/test_utils.py:117: in get_request
21:02:04 return requests.request(
21:02:04 ../.tox/tests121/lib/python3.11/site-packages/requests/api.py:59: in request
21:02:04 return session.request(method=method, url=url, **kwargs)
21:02:04 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
21:02:04 ../.tox/tests121/lib/python3.11/site-packages/requests/sessions.py:589: in request
21:02:04 resp = self.send(prep, **send_kwargs)
21:02:04 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
21:02:04 ../.tox/tests121/lib/python3.11/site-packages/requests/sessions.py:703: in send
21:02:04 r = adapter.send(request, **kwargs)
21:02:04 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
21:02:04 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
21:02:04
21:02:04 self =
21:02:04 request = , stream = False
21:02:04 timeout = Timeout(connect=30, read=30, total=None), verify = True, cert = None
21:02:04 proxies = OrderedDict()
21:02:04
21:02:04 def send(
21:02:04 self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None
21:02:04 ):
21:02:04 """Sends PreparedRequest object. Returns Response object.
21:02:04
21:02:04 :param request: The :class:`PreparedRequest ` being sent.
21:02:04 :param stream: (optional) Whether to stream the request content.
21:02:04 :param timeout: (optional) How long to wait for the server to send
21:02:04 data before giving up, as a float, or a :ref:`(connect timeout,
21:02:04 read timeout) ` tuple.
21:02:04 :type timeout: float or tuple or urllib3 Timeout object
21:02:04 :param verify: (optional) Either a boolean, in which case it controls whether
21:02:04 we verify the server's TLS certificate, or a string, in which case it
21:02:04 must be a path to a CA bundle to use
21:02:04 :param cert: (optional) Any user-provided SSL certificate to be trusted.
21:02:04 :param proxies: (optional) The proxies dictionary to apply to the request.
21:02:04 :rtype: requests.Response
21:02:04 """
21:02:04
21:02:04 try:
21:02:04 conn = self.get_connection_with_tls_context(
21:02:04 request, verify, proxies=proxies, cert=cert
21:02:04 )
21:02:04 except LocationValueError as e:
21:02:04 raise InvalidURL(e, request=request)
21:02:04
21:02:04 self.cert_verify(conn, request.url, verify, cert)
21:02:04 url = self.request_url(request, proxies)
21:02:04 self.add_headers(
21:02:04 request,
21:02:04 stream=stream,
21:02:04 timeout=timeout,
21:02:04 verify=verify,
21:02:04 cert=cert,
21:02:04 proxies=proxies,
21:02:04 )
21:02:04
21:02:04 chunked = not (request.body is None or "Content-Length" in request.headers)
21:02:04
21:02:04 if isinstance(timeout, tuple):
21:02:04 try:
21:02:04 connect, read = timeout
21:02:04 timeout = TimeoutSauce(connect=connect, read=read)
21:02:04 except ValueError:
21:02:04 raise ValueError(
21:02:04 f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, "
21:02:04 f"or a single float to set both timeouts to the same value."
21:02:04 )
21:02:04 elif isinstance(timeout, TimeoutSauce):
21:02:04 pass
21:02:04 else:
21:02:04 timeout = TimeoutSauce(connect=timeout, read=timeout)
21:02:04
21:02:04 try:
21:02:04 resp = conn.urlopen(
21:02:04 method=request.method,
21:02:04 url=url,
21:02:04 body=request.body,
21:02:04 headers=request.headers,
21:02:04 redirect=False,
21:02:04 assert_same_host=False,
21:02:04 preload_content=False,
21:02:04 decode_content=False,
21:02:04 retries=self.max_retries,
21:02:04 timeout=timeout,
21:02:04 chunked=chunked,
21:02:04 )
21:02:04
21:02:04 except (ProtocolError, OSError) as err:
21:02:04 raise ConnectionError(err, request=request)
21:02:04
21:02:04 except MaxRetryError as e:
21:02:04 if isinstance(e.reason, ConnectTimeoutError):
21:02:04 # TODO: Remove this in 3.0.0: see #2811
21:02:04 if not isinstance(e.reason, NewConnectionError):
21:02:04 raise ConnectTimeout(e, request=request)
21:02:04
21:02:04 if isinstance(e.reason, ResponseError):
21:02:04 raise RetryError(e, request=request)
21:02:04
21:02:04 if isinstance(e.reason, _ProxyError):
21:02:04 raise ProxyError(e, request=request)
21:02:04
21:02:04 if isinstance(e.reason, _SSLError):
21:02:04 # This branch is for urllib3 v1.22 and later.
21:02:04 raise SSLError(e, request=request)
21:02:04
21:02:04 > raise ConnectionError(e, request=request)
21:02:04 E requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=8191): Max retries exceeded with url: /rests/data/transportpce-portmapping:network/nodes=ROADMA01/mapping=DEG1-TTP-TXRX (Caused by NewConnectionError("HTTPConnection(host='localhost', port=8191): Failed to establish a new connection: [Errno 111] Connection refused"))
21:02:04
21:02:04 ../.tox/tests121/lib/python3.11/site-packages/requests/adapters.py:677: ConnectionError
21:02:04 ----------------------------- Captured stdout call -----------------------------
21:02:04 execution of test_04_rdm_portmapping_DEG1_TTP_TXRX
21:02:04 ______ TestTransportPCEPortmapping.test_05_rdm_portmapping_SRG1_PP7_TXRX _______
21:02:04
21:02:04 self =
21:02:04
21:02:04 def _new_conn(self) -> socket.socket:
21:02:04 """Establish a socket connection and set nodelay settings on it.
21:02:04
21:02:04 :return: New socket connection.
21:02:04 """
21:02:04 try:
21:02:04 > sock = connection.create_connection(
21:02:04 (self._dns_host, self.port),
21:02:04 self.timeout,
21:02:04 source_address=self.source_address,
21:02:04 socket_options=self.socket_options,
21:02:04 )
21:02:04
21:02:04 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:204:
21:02:04 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
21:02:04 ../.tox/tests121/lib/python3.11/site-packages/urllib3/util/connection.py:85: in create_connection
21:02:04 raise err
21:02:04 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
21:02:04
21:02:04 address = ('localhost', 8191), timeout = 30, source_address = None
21:02:04 socket_options = [(6, 1, 1)]
21:02:04
21:02:04 def create_connection(
21:02:04 address: tuple[str, int],
21:02:04 timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT,
21:02:04 source_address: tuple[str, int] | None = None,
21:02:04 socket_options: _TYPE_SOCKET_OPTIONS | None = None,
21:02:04 ) -> socket.socket:
21:02:04 """Connect to *address* and return the socket object.
21:02:04
21:02:04 Convenience function. Connect to *address* (a 2-tuple ``(host,
21:02:04 port)``) and return the socket object. Passing the optional
21:02:04 *timeout* parameter will set the timeout on the socket instance
21:02:04 before attempting to connect. If no *timeout* is supplied, the
21:02:04 global default timeout setting returned by :func:`socket.getdefaulttimeout`
21:02:04 is used. If *source_address* is set it must be a tuple of (host, port)
21:02:04 for the socket to bind as a source address before making the connection.
21:02:04 An host of '' or port 0 tells the OS to use the default.
21:02:04 """
21:02:04
21:02:04 host, port = address
21:02:04 if host.startswith("["):
21:02:04 host = host.strip("[]")
21:02:04 err = None
21:02:04
21:02:04 # Using the value from allowed_gai_family() in the context of getaddrinfo lets
21:02:04 # us select whether to work with IPv4 DNS records, IPv6 records, or both.
21:02:04 # The original create_connection function always returns all records.
21:02:04 family = allowed_gai_family()
21:02:04
21:02:04 try:
21:02:04 host.encode("idna")
21:02:04 except UnicodeError:
21:02:04 raise LocationParseError(f"'{host}', label empty or too long") from None
21:02:04
21:02:04 for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM):
21:02:04 af, socktype, proto, canonname, sa = res
21:02:04 sock = None
21:02:04 try:
21:02:04 sock = socket.socket(af, socktype, proto)
21:02:04
21:02:04 # If provided, set socket level options before connecting.
21:02:04 _set_socket_options(sock, socket_options)
21:02:04
21:02:04 if timeout is not _DEFAULT_TIMEOUT:
21:02:04 sock.settimeout(timeout)
21:02:04 if source_address:
21:02:04 sock.bind(source_address)
21:02:04 > sock.connect(sa)
21:02:04 E ConnectionRefusedError: [Errno 111] Connection refused
21:02:04
21:02:04 ../.tox/tests121/lib/python3.11/site-packages/urllib3/util/connection.py:73: ConnectionRefusedError
21:02:04
21:02:04 The above exception was the direct cause of the following exception:
21:02:04
21:02:04 self =
21:02:04 method = 'GET'
21:02:04 url = '/rests/data/transportpce-portmapping:network/nodes=ROADMA01/mapping=SRG1-PP7-TXRX'
21:02:04 body = None
21:02:04 headers = {'User-Agent': 'python-requests/2.32.5', 'Accept-Encoding': 'gzip, deflate', 'Accept': 'application/json', 'Connection': 'keep-alive', 'Content-Type': 'application/json', 'Authorization': 'Basic YWRtaW46YWRtaW4='}
21:02:04 retries = Retry(total=0, connect=None, read=False, redirect=None, status=None)
21:02:04 redirect = False, assert_same_host = False
21:02:04 timeout = Timeout(connect=30, read=30, total=None), pool_timeout = None
21:02:04 release_conn = False, chunked = False, body_pos = None, preload_content = False
21:02:04 decode_content = False, response_kw = {}
21:02:04 parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/rests/data/transportpce-portmapping:network/nodes=ROADMA01/mapping=SRG1-PP7-TXRX', query=None, fragment=None)
21:02:04 destination_scheme = None, conn = None, release_this_conn = True
21:02:04 http_tunnel_required = False, err = None, clean_exit = False
21:02:04
21:02:04 def urlopen( # type: ignore[override]
21:02:04 self,
21:02:04 method: str,
21:02:04 url: str,
21:02:04 body: _TYPE_BODY | None = None,
21:02:04 headers: typing.Mapping[str, str] | None = None,
21:02:04 retries: Retry | bool | int | None = None,
21:02:04 redirect: bool = True,
21:02:04 assert_same_host: bool = True,
21:02:04 timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT,
21:02:04 pool_timeout: int | None = None,
21:02:04 release_conn: bool | None = None,
21:02:04 chunked: bool = False,
21:02:04 body_pos: _TYPE_BODY_POSITION | None = None,
21:02:04 preload_content: bool = True,
21:02:04 decode_content: bool = True,
21:02:04 **response_kw: typing.Any,
21:02:04 ) -> BaseHTTPResponse:
21:02:04 """
21:02:04 Get a connection from the pool and perform an HTTP request. This is the
21:02:04 lowest level call for making a request, so you'll need to specify all
21:02:04 the raw details.
21:02:04
21:02:04 .. note::
21:02:04
21:02:04 More commonly, it's appropriate to use a convenience method
21:02:04 such as :meth:`request`.
21:02:04
21:02:04 .. note::
21:02:04
21:02:04 `release_conn` will only behave as expected if
21:02:04 `preload_content=False` because we want to make
21:02:04 `preload_content=False` the default behaviour someday soon without
21:02:04 breaking backwards compatibility.
21:02:04
21:02:04 :param method:
21:02:04 HTTP request method (such as GET, POST, PUT, etc.)
21:02:04
21:02:04 :param url:
21:02:04 The URL to perform the request on.
21:02:04
21:02:04 :param body:
21:02:04 Data to send in the request body, either :class:`str`, :class:`bytes`,
21:02:04 an iterable of :class:`str`/:class:`bytes`, or a file-like object.
21:02:04
21:02:04 :param headers:
21:02:04 Dictionary of custom headers to send, such as User-Agent,
21:02:04 If-None-Match, etc. If None, pool headers are used. If provided,
21:02:04 these headers completely replace any pool-specific headers.
21:02:04
21:02:04 :param retries:
21:02:04 Configure the number of retries to allow before raising a
21:02:04 :class:`~urllib3.exceptions.MaxRetryError` exception.
21:02:04
21:02:04 If ``None`` (default) will retry 3 times, see ``Retry.DEFAULT``. Pass a
21:02:04 :class:`~urllib3.util.retry.Retry` object for fine-grained control
21:02:04 over different types of retries.
21:02:04 Pass an integer number to retry connection errors that many times,
21:02:04 but no other types of errors. Pass zero to never retry.
21:02:04
21:02:04 If ``False``, then retries are disabled and any exception is raised
21:02:04 immediately. Also, instead of raising a MaxRetryError on redirects,
21:02:04 the redirect response will be returned.
21:02:04
21:02:04 :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int.
21:02:04
21:02:04 :param redirect:
21:02:04 If True, automatically handle redirects (status codes 301, 302,
21:02:04 303, 307, 308). Each redirect counts as a retry. Disabling retries
21:02:04 will disable redirect, too.
21:02:04
21:02:04 :param assert_same_host:
21:02:04 If ``True``, will make sure that the host of the pool requests is
21:02:04 consistent else will raise HostChangedError. When ``False``, you can
21:02:04 use the pool on an HTTP proxy and request foreign hosts.
21:02:04
21:02:04 :param timeout:
21:02:04 If specified, overrides the default timeout for this one
21:02:04 request. It may be a float (in seconds) or an instance of
21:02:04 :class:`urllib3.util.Timeout`.
21:02:04
21:02:04 :param pool_timeout:
21:02:04 If set and the pool is set to block=True, then this method will
21:02:04 block for ``pool_timeout`` seconds and raise EmptyPoolError if no
21:02:04 connection is available within the time period.
21:02:04
21:02:04 :param bool preload_content:
21:02:04 If True, the response's body will be preloaded into memory.
21:02:04
21:02:04 :param bool decode_content:
21:02:04 If True, will attempt to decode the body based on the
21:02:04 'content-encoding' header.
21:02:04
21:02:04 :param release_conn:
21:02:04 If False, then the urlopen call will not release the connection
21:02:04 back into the pool once a response is received (but will release if
21:02:04 you read the entire contents of the response such as when
21:02:04 `preload_content=True`). This is useful if you're not preloading
21:02:04 the response's content immediately. You will need to call
21:02:04 ``r.release_conn()`` on the response ``r`` to return the connection
21:02:04 back into the pool. If None, it takes the value of ``preload_content``
21:02:04 which defaults to ``True``.
21:02:04
21:02:04 :param bool chunked:
21:02:04 If True, urllib3 will send the body using chunked transfer
21:02:04 encoding. Otherwise, urllib3 will send the body using the standard
21:02:04 content-length form. Defaults to False.
21:02:04
21:02:04 :param int body_pos:
21:02:04 Position to seek to in file-like body in the event of a retry or
21:02:04 redirect. Typically this won't need to be set because urllib3 will
21:02:04 auto-populate the value when needed.
21:02:04 """
21:02:04 parsed_url = parse_url(url)
21:02:04 destination_scheme = parsed_url.scheme
21:02:04
21:02:04 if headers is None:
21:02:04 headers = self.headers
21:02:04
21:02:04 if not isinstance(retries, Retry):
21:02:04 retries = Retry.from_int(retries, redirect=redirect, default=self.retries)
21:02:04
21:02:04 if release_conn is None:
21:02:04 release_conn = preload_content
21:02:04
21:02:04 # Check host
21:02:04 if assert_same_host and not self.is_same_host(url):
21:02:04 raise HostChangedError(self, url, retries)
21:02:04
21:02:04 # Ensure that the URL we're connecting to is properly encoded
21:02:04 if url.startswith("/"):
21:02:04 url = to_str(_encode_target(url))
21:02:04 else:
21:02:04 url = to_str(parsed_url.url)
21:02:04
21:02:04 conn = None
21:02:04
21:02:04 # Track whether `conn` needs to be released before
21:02:04 # returning/raising/recursing. Update this variable if necessary, and
21:02:04 # leave `release_conn` constant throughout the function. That way, if
21:02:04 # the function recurses, the original value of `release_conn` will be
21:02:04 # passed down into the recursive call, and its value will be respected.
21:02:04 #
21:02:04 # See issue #651 [1] for details.
21:02:04 #
21:02:04 # [1]
21:02:04 release_this_conn = release_conn
21:02:04
21:02:04 http_tunnel_required = connection_requires_http_tunnel(
21:02:04 self.proxy, self.proxy_config, destination_scheme
21:02:04 )
21:02:04
21:02:04 # Merge the proxy headers. Only done when not using HTTP CONNECT. We
21:02:04 # have to copy the headers dict so we can safely change it without those
21:02:04 # changes being reflected in anyone else's copy.
21:02:04 if not http_tunnel_required:
21:02:04 headers = headers.copy() # type: ignore[attr-defined]
21:02:04 headers.update(self.proxy_headers) # type: ignore[union-attr]
21:02:04
21:02:04 # Must keep the exception bound to a separate variable or else Python 3
21:02:04 # complains about UnboundLocalError.
21:02:04 err = None
21:02:04
21:02:04 # Keep track of whether we cleanly exited the except block. This
21:02:04 # ensures we do proper cleanup in finally.
21:02:04 clean_exit = False
21:02:04
21:02:04 # Rewind body position, if needed. Record current position
21:02:04 # for future rewinds in the event of a redirect/retry.
21:02:04 body_pos = set_file_position(body, body_pos)
21:02:04
21:02:04 try:
21:02:04 # Request a connection from the queue.
21:02:04 timeout_obj = self._get_timeout(timeout)
21:02:04 conn = self._get_conn(timeout=pool_timeout)
21:02:04
21:02:04 conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment]
21:02:04
21:02:04 # Is this a closed/new connection that requires CONNECT tunnelling?
21:02:04 if self.proxy is not None and http_tunnel_required and conn.is_closed:
21:02:04 try:
21:02:04 self._prepare_proxy(conn)
21:02:04 except (BaseSSLError, OSError, SocketTimeout) as e:
21:02:04 self._raise_timeout(
21:02:04 err=e, url=self.proxy.url, timeout_value=conn.timeout
21:02:04 )
21:02:04 raise
21:02:04
21:02:04 # If we're going to release the connection in ``finally:``, then
21:02:04 # the response doesn't need to know about the connection. Otherwise
21:02:04 # it will also try to release it and we'll have a double-release
21:02:04 # mess.
21:02:04 response_conn = conn if not release_conn else None
21:02:04
21:02:04 # Make the request on the HTTPConnection object
21:02:04 > response = self._make_request(
21:02:04 conn,
21:02:04 method,
21:02:04 url,
21:02:04 timeout=timeout_obj,
21:02:04 body=body,
21:02:04 headers=headers,
21:02:04 chunked=chunked,
21:02:04 retries=retries,
21:02:04 response_conn=response_conn,
21:02:04 preload_content=preload_content,
21:02:04 decode_content=decode_content,
21:02:04 **response_kw,
21:02:04 )
21:02:04
21:02:04 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connectionpool.py:787:
21:02:04 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
21:02:04 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connectionpool.py:493: in _make_request
21:02:04 conn.request(
21:02:04 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:500: in request
21:02:04 self.endheaders()
21:02:04 /opt/pyenv/versions/3.11.10/lib/python3.11/http/client.py:1298: in endheaders
21:02:04 self._send_output(message_body, encode_chunked=encode_chunked)
21:02:04 /opt/pyenv/versions/3.11.10/lib/python3.11/http/client.py:1058: in _send_output
21:02:04 self.send(msg)
21:02:04 /opt/pyenv/versions/3.11.10/lib/python3.11/http/client.py:996: in send
21:02:04 self.connect()
21:02:04 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:331: in connect
21:02:04 self.sock = self._new_conn()
21:02:04 ^^^^^^^^^^^^^^^^
21:02:04 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
21:02:04
21:02:04 self =
21:02:04
21:02:04 def _new_conn(self) -> socket.socket:
21:02:04 """Establish a socket connection and set nodelay settings on it.
21:02:04
21:02:04 :return: New socket connection.
21:02:04 """
21:02:04 try:
21:02:04 sock = connection.create_connection(
21:02:04 (self._dns_host, self.port),
21:02:04 self.timeout,
21:02:04 source_address=self.source_address,
21:02:04 socket_options=self.socket_options,
21:02:04 )
21:02:04 except socket.gaierror as e:
21:02:04 raise NameResolutionError(self.host, self, e) from e
21:02:04 except SocketTimeout as e:
21:02:04 raise ConnectTimeoutError(
21:02:04 self,
21:02:04 f"Connection to {self.host} timed out. (connect timeout={self.timeout})",
21:02:04 ) from e
21:02:04
21:02:04 except OSError as e:
21:02:04 > raise NewConnectionError(
21:02:04 self, f"Failed to establish a new connection: {e}"
21:02:04 ) from e
21:02:04 E urllib3.exceptions.NewConnectionError: HTTPConnection(host='localhost', port=8191): Failed to establish a new connection: [Errno 111] Connection refused
21:02:04
21:02:04 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:219: NewConnectionError
21:02:04
21:02:04 The above exception was the direct cause of the following exception:
21:02:04
21:02:04 self =
21:02:04 request = , stream = False
21:02:04 timeout = Timeout(connect=30, read=30, total=None), verify = True, cert = None
21:02:04 proxies = OrderedDict()
21:02:04
21:02:04 def send(
21:02:04 self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None
21:02:04 ):
21:02:04 """Sends PreparedRequest object. Returns Response object.
21:02:04
21:02:04 :param request: The :class:`PreparedRequest ` being sent.
21:02:04 :param stream: (optional) Whether to stream the request content.
21:02:04 :param timeout: (optional) How long to wait for the server to send
21:02:04 data before giving up, as a float, or a :ref:`(connect timeout,
21:02:04 read timeout) ` tuple.
21:02:04 :type timeout: float or tuple or urllib3 Timeout object
21:02:04 :param verify: (optional) Either a boolean, in which case it controls whether
21:02:04 we verify the server's TLS certificate, or a string, in which case it
21:02:04 must be a path to a CA bundle to use
21:02:04 :param cert: (optional) Any user-provided SSL certificate to be trusted.
21:02:04 :param proxies: (optional) The proxies dictionary to apply to the request.
21:02:04 :rtype: requests.Response
21:02:04 """
21:02:04
21:02:04 try:
21:02:04 conn = self.get_connection_with_tls_context(
21:02:04 request, verify, proxies=proxies, cert=cert
21:02:04 )
21:02:04 except LocationValueError as e:
21:02:04 raise InvalidURL(e, request=request)
21:02:04
21:02:04 self.cert_verify(conn, request.url, verify, cert)
21:02:04 url = self.request_url(request, proxies)
21:02:04 self.add_headers(
21:02:04 request,
21:02:04 stream=stream,
21:02:04 timeout=timeout,
21:02:04 verify=verify,
21:02:04 cert=cert,
21:02:04 proxies=proxies,
21:02:04 )
21:02:04
21:02:04 chunked = not (request.body is None or "Content-Length" in request.headers)
21:02:04
21:02:04 if isinstance(timeout, tuple):
21:02:04 try:
21:02:04 connect, read = timeout
21:02:04 timeout = TimeoutSauce(connect=connect, read=read)
21:02:04 except ValueError:
21:02:04 raise ValueError(
21:02:04 f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, "
21:02:04 f"or a single float to set both timeouts to the same value."
21:02:04 )
21:02:04 elif isinstance(timeout, TimeoutSauce):
21:02:04 pass
21:02:04 else:
21:02:04 timeout = TimeoutSauce(connect=timeout, read=timeout)
21:02:04
21:02:04 try:
21:02:04 > resp = conn.urlopen(
21:02:04 method=request.method,
21:02:04 url=url,
21:02:04 body=request.body,
21:02:04 headers=request.headers,
21:02:04 redirect=False,
21:02:04 assert_same_host=False,
21:02:04 preload_content=False,
21:02:04 decode_content=False,
21:02:04 retries=self.max_retries,
21:02:04 timeout=timeout,
21:02:04 chunked=chunked,
21:02:04 )
21:02:04
21:02:04 ../.tox/tests121/lib/python3.11/site-packages/requests/adapters.py:644:
21:02:04 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
21:02:04 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connectionpool.py:841: in urlopen
21:02:04 retries = retries.increment(
21:02:04 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
21:02:04
21:02:04 self = Retry(total=0, connect=None, read=False, redirect=None, status=None)
21:02:04 method = 'GET'
21:02:04 url = '/rests/data/transportpce-portmapping:network/nodes=ROADMA01/mapping=SRG1-PP7-TXRX'
21:02:04 response = None
21:02:04 error = NewConnectionError("HTTPConnection(host='localhost', port=8191): Failed to establish a new connection: [Errno 111] Connection refused")
21:02:04 _pool =
21:02:04 _stacktrace =
21:02:04
21:02:04 def increment(
21:02:04 self,
21:02:04 method: str | None = None,
21:02:04 url: str | None = None,
21:02:04 response: BaseHTTPResponse | None = None,
21:02:04 error: Exception | None = None,
21:02:04 _pool: ConnectionPool | None = None,
21:02:04 _stacktrace: TracebackType | None = None,
21:02:04 ) -> Self:
21:02:04 """Return a new Retry object with incremented retry counters.
21:02:04
21:02:04 :param response: A response object, or None, if the server did not
21:02:04 return a response.
21:02:04 :type response: :class:`~urllib3.response.BaseHTTPResponse`
21:02:04 :param Exception error: An error encountered during the request, or
21:02:04 None if the response was received successfully.
21:02:04
21:02:04 :return: A new ``Retry`` object.
21:02:04 """
21:02:04 if self.total is False and error:
21:02:04 # Disabled, indicate to re-raise the error.
21:02:04 raise reraise(type(error), error, _stacktrace)
21:02:04
21:02:04 total = self.total
21:02:04 if total is not None:
21:02:04 total -= 1
21:02:04
21:02:04 connect = self.connect
21:02:04 read = self.read
21:02:04 redirect = self.redirect
21:02:04 status_count = self.status
21:02:04 other = self.other
21:02:04 cause = "unknown"
21:02:04 status = None
21:02:04 redirect_location = None
21:02:04
21:02:04 if error and self._is_connection_error(error):
21:02:04 # Connect retry?
21:02:04 if connect is False:
21:02:04 raise reraise(type(error), error, _stacktrace)
21:02:04 elif connect is not None:
21:02:04 connect -= 1
21:02:04
21:02:04 elif error and self._is_read_error(error):
21:02:04 # Read retry?
21:02:04 if read is False or method is None or not self._is_method_retryable(method):
21:02:04 raise reraise(type(error), error, _stacktrace)
21:02:04 elif read is not None:
21:02:04 read -= 1
21:02:04
21:02:04 elif error:
21:02:04 # Other retry?
21:02:04 if other is not None:
21:02:04 other -= 1
21:02:04
21:02:04 elif response and response.get_redirect_location():
21:02:04 # Redirect retry?
21:02:04 if redirect is not None:
21:02:04 redirect -= 1
21:02:04 cause = "too many redirects"
21:02:04 response_redirect_location = response.get_redirect_location()
21:02:04 if response_redirect_location:
21:02:04 redirect_location = response_redirect_location
21:02:04 status = response.status
21:02:04
21:02:04 else:
21:02:04 # Incrementing because of a server error like a 500 in
21:02:04 # status_forcelist and the given method is in the allowed_methods
21:02:04 cause = ResponseError.GENERIC_ERROR
21:02:04 if response and response.status:
21:02:04 if status_count is not None:
21:02:04 status_count -= 1
21:02:04 cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status)
21:02:04 status = response.status
21:02:04
21:02:04 history = self.history + (
21:02:04 RequestHistory(method, url, error, status, redirect_location),
21:02:04 )
21:02:04
21:02:04 new_retry = self.new(
21:02:04 total=total,
21:02:04 connect=connect,
21:02:04 read=read,
21:02:04 redirect=redirect,
21:02:04 status=status_count,
21:02:04 other=other,
21:02:04 history=history,
21:02:04 )
21:02:04
21:02:04 if new_retry.is_exhausted():
21:02:04 reason = error or ResponseError(cause)
21:02:04 > raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type]
21:02:04 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
21:02:04 E urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=8191): Max retries exceeded with url: /rests/data/transportpce-portmapping:network/nodes=ROADMA01/mapping=SRG1-PP7-TXRX (Caused by NewConnectionError("HTTPConnection(host='localhost', port=8191): Failed to establish a new connection: [Errno 111] Connection refused"))
21:02:04
21:02:04 ../.tox/tests121/lib/python3.11/site-packages/urllib3/util/retry.py:535: MaxRetryError
21:02:04
21:02:04 During handling of the above exception, another exception occurred:
21:02:04
21:02:04 self = 
21:02:04
21:02:04 def test_05_rdm_portmapping_SRG1_PP7_TXRX(self):
21:02:04 > response = test_utils.get_portmapping_node_attr("ROADMA01", "mapping", "SRG1-PP7-TXRX")
21:02:04 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
21:02:04
21:02:04 transportpce_tests/1.2.1/test01_portmapping.py:82:
21:02:04 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
21:02:04 transportpce_tests/common/test_utils.py:519: in get_portmapping_node_attr
21:02:04 response = get_request(target_url)
21:02:04 ^^^^^^^^^^^^^^^^^^^^^^^
21:02:04 transportpce_tests/common/test_utils.py:117: in get_request
21:02:04 return requests.request(
21:02:04 ../.tox/tests121/lib/python3.11/site-packages/requests/api.py:59: in request
21:02:04 return session.request(method=method, url=url, **kwargs)
21:02:04 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
21:02:04 ../.tox/tests121/lib/python3.11/site-packages/requests/sessions.py:589: in request
21:02:04 resp = self.send(prep, **send_kwargs)
21:02:04 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
21:02:04 ../.tox/tests121/lib/python3.11/site-packages/requests/sessions.py:703: in send
21:02:04 r = adapter.send(request, **kwargs)
21:02:04 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
21:02:04 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
21:02:04
21:02:04 self = 
21:02:04 request = , stream = False
21:02:04 timeout = Timeout(connect=30, read=30, total=None), verify = True, cert = None
21:02:04 proxies = OrderedDict()
21:02:04
21:02:04 def send(
21:02:04 self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None
21:02:04 ):
21:02:04 """Sends PreparedRequest object. Returns Response object.
21:02:04
21:02:04 :param request: The :class:`PreparedRequest ` being sent.
21:02:04 :param stream: (optional) Whether to stream the request content.
21:02:04 :param timeout: (optional) How long to wait for the server to send
21:02:04 data before giving up, as a float, or a :ref:`(connect timeout,
21:02:04 read timeout) ` tuple.
21:02:04 :type timeout: float or tuple or urllib3 Timeout object
21:02:04 :param verify: (optional) Either a boolean, in which case it controls whether
21:02:04 we verify the server's TLS certificate, or a string, in which case it
21:02:04 must be a path to a CA bundle to use
21:02:04 :param cert: (optional) Any user-provided SSL certificate to be trusted.
21:02:04 :param proxies: (optional) The proxies dictionary to apply to the request.
21:02:04 :rtype: requests.Response
21:02:04 """
21:02:04
21:02:04 try:
21:02:04 conn = self.get_connection_with_tls_context(
21:02:04 request, verify, proxies=proxies, cert=cert
21:02:04 )
21:02:04 except LocationValueError as e:
21:02:04 raise InvalidURL(e, request=request)
21:02:04
21:02:04 self.cert_verify(conn, request.url, verify, cert)
21:02:04 url = self.request_url(request, proxies)
21:02:04 self.add_headers(
21:02:04 request,
21:02:04 stream=stream,
21:02:04 timeout=timeout,
21:02:04 verify=verify,
21:02:04 cert=cert,
21:02:04 proxies=proxies,
21:02:04 )
21:02:04
21:02:04 chunked = not (request.body is None or "Content-Length" in request.headers)
21:02:04
21:02:04 if isinstance(timeout, tuple):
21:02:04 try:
21:02:04 connect, read = timeout
21:02:04 timeout = TimeoutSauce(connect=connect, read=read)
21:02:04 except ValueError:
21:02:04 raise ValueError(
21:02:04 f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, "
21:02:04 f"or a single float to set both timeouts to the same value."
21:02:04 )
21:02:04 elif isinstance(timeout, TimeoutSauce):
21:02:04 pass
21:02:04 else:
21:02:04 timeout = TimeoutSauce(connect=timeout, read=timeout)
21:02:04
21:02:04 try:
21:02:04 resp = conn.urlopen(
21:02:04 method=request.method,
21:02:04 url=url,
21:02:04 body=request.body,
21:02:04 headers=request.headers,
21:02:04 redirect=False,
21:02:04 assert_same_host=False,
21:02:04 preload_content=False,
21:02:04 decode_content=False,
21:02:04 retries=self.max_retries,
21:02:04 timeout=timeout,
21:02:04 chunked=chunked,
21:02:04 )
21:02:04
21:02:04 except (ProtocolError, OSError) as err:
21:02:04 raise ConnectionError(err, request=request)
21:02:04
21:02:04 except MaxRetryError as e:
21:02:04 if isinstance(e.reason, ConnectTimeoutError):
21:02:04 # TODO: Remove this in 3.0.0: see #2811
21:02:04 if not isinstance(e.reason, NewConnectionError):
21:02:04 raise ConnectTimeout(e, request=request)
21:02:04
21:02:04 if isinstance(e.reason, ResponseError):
21:02:04 raise RetryError(e, request=request)
21:02:04
21:02:04 if isinstance(e.reason, _ProxyError):
21:02:04 raise ProxyError(e, request=request)
21:02:04
21:02:04 if isinstance(e.reason, _SSLError):
21:02:04 # This branch is for urllib3 v1.22 and later.
21:02:04 raise SSLError(e, request=request)
21:02:04
21:02:04 > raise ConnectionError(e, request=request)
21:02:04 E requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=8191): Max retries exceeded with url: /rests/data/transportpce-portmapping:network/nodes=ROADMA01/mapping=SRG1-PP7-TXRX (Caused by NewConnectionError("HTTPConnection(host='localhost', port=8191): Failed to establish a new connection: [Errno 111] Connection refused"))
21:02:04
21:02:04 ../.tox/tests121/lib/python3.11/site-packages/requests/adapters.py:677: ConnectionError
21:02:04 ----------------------------- Captured stdout call -----------------------------
21:02:04 execution of test_05_rdm_portmapping_SRG1_PP7_TXRX
21:02:04 ______ TestTransportPCEPortmapping.test_06_rdm_portmapping_SRG3_PP1_TXRX _______
21:02:04
21:02:04 self = 
21:02:04
21:02:04 def _new_conn(self) -> socket.socket:
21:02:04 """Establish a socket connection and set nodelay settings on it.
21:02:04
21:02:04 :return: New socket connection.
21:02:04 """
21:02:04 try:
21:02:04 > sock = connection.create_connection(
21:02:04 (self._dns_host, self.port),
21:02:04 self.timeout,
21:02:04 source_address=self.source_address,
21:02:04 socket_options=self.socket_options,
21:02:04 )
21:02:04
21:02:04 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:204:
21:02:04 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
21:02:04 ../.tox/tests121/lib/python3.11/site-packages/urllib3/util/connection.py:85: in create_connection
21:02:04 raise err
21:02:04 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
21:02:04
21:02:04 address = ('localhost', 8191), timeout = 30, source_address = None
21:02:04 socket_options = [(6, 1, 1)]
21:02:04
21:02:04 def create_connection(
21:02:04 address: tuple[str, int],
21:02:04 timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT,
21:02:04 source_address: tuple[str, int] | None = None,
21:02:04 socket_options: _TYPE_SOCKET_OPTIONS | None = None,
21:02:04 ) -> socket.socket:
21:02:04 """Connect to *address* and return the socket object.
21:02:04
21:02:04 Convenience function. Connect to *address* (a 2-tuple ``(host,
21:02:04 port)``) and return the socket object. Passing the optional
21:02:04 *timeout* parameter will set the timeout on the socket instance
21:02:04 before attempting to connect. If no *timeout* is supplied, the
21:02:04 global default timeout setting returned by :func:`socket.getdefaulttimeout`
21:02:04 is used. If *source_address* is set it must be a tuple of (host, port)
21:02:04 for the socket to bind as a source address before making the connection.
21:02:04 An host of '' or port 0 tells the OS to use the default.
21:02:04 """
21:02:04
21:02:04 host, port = address
21:02:04 if host.startswith("["):
21:02:04 host = host.strip("[]")
21:02:04 err = None
21:02:04
21:02:04 # Using the value from allowed_gai_family() in the context of getaddrinfo lets
21:02:04 # us select whether to work with IPv4 DNS records, IPv6 records, or both.
21:02:04 # The original create_connection function always returns all records.
21:02:04 family = allowed_gai_family()
21:02:04
21:02:04 try:
21:02:04 host.encode("idna")
21:02:04 except UnicodeError:
21:02:04 raise LocationParseError(f"'{host}', label empty or too long") from None
21:02:04
21:02:04 for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM):
21:02:04 af, socktype, proto, canonname, sa = res
21:02:04 sock = None
21:02:04 try:
21:02:04 sock = socket.socket(af, socktype, proto)
21:02:04
21:02:04 # If provided, set socket level options before connecting.
21:02:04 _set_socket_options(sock, socket_options)
21:02:04
21:02:04 if timeout is not _DEFAULT_TIMEOUT:
21:02:04 sock.settimeout(timeout)
21:02:04 if source_address:
21:02:04 sock.bind(source_address)
21:02:04 > sock.connect(sa)
21:02:04 E ConnectionRefusedError: [Errno 111] Connection refused
21:02:04
21:02:04 ../.tox/tests121/lib/python3.11/site-packages/urllib3/util/connection.py:73: ConnectionRefusedError
21:02:04
21:02:04 The above exception was the direct cause of the following exception:
21:02:04
21:02:04 self = 
21:02:04 method = 'GET'
21:02:04 url = '/rests/data/transportpce-portmapping:network/nodes=ROADMA01/mapping=SRG3-PP1-TXRX'
21:02:04 body = None
21:02:04 headers = {'User-Agent': 'python-requests/2.32.5', 'Accept-Encoding': 'gzip, deflate', 'Accept': 'application/json', 'Connection': 'keep-alive', 'Content-Type': 'application/json', 'Authorization': 'Basic YWRtaW46YWRtaW4='}
21:02:04 retries = Retry(total=0, connect=None, read=False, redirect=None, status=None)
21:02:04 redirect = False, assert_same_host = False
21:02:04 timeout = Timeout(connect=30, read=30, total=None), pool_timeout = None
21:02:04 release_conn = False, chunked = False, body_pos = None, preload_content = False
21:02:04 decode_content = False, response_kw = {}
21:02:04 parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/rests/data/transportpce-portmapping:network/nodes=ROADMA01/mapping=SRG3-PP1-TXRX', query=None, fragment=None)
21:02:04 destination_scheme = None, conn = None, release_this_conn = True
21:02:04 http_tunnel_required = False, err = None, clean_exit = False
21:02:04
21:02:04 def urlopen( # type: ignore[override]
21:02:04 self,
21:02:04 method: str,
21:02:04 url: str,
21:02:04 body: _TYPE_BODY | None = None,
21:02:04 headers: typing.Mapping[str, str] | None = None,
21:02:04 retries: Retry | bool | int | None = None,
21:02:04 redirect: bool = True,
21:02:04 assert_same_host: bool = True,
21:02:04 timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT,
21:02:04 pool_timeout: int | None = None,
21:02:04 release_conn: bool | None = None,
21:02:04 chunked: bool = False,
21:02:04 body_pos: _TYPE_BODY_POSITION | None = None,
21:02:04 preload_content: bool = True,
21:02:04 decode_content: bool = True,
21:02:04 **response_kw: typing.Any,
21:02:04 ) -> BaseHTTPResponse:
21:02:04 """
21:02:04 Get a connection from the pool and perform an HTTP request. This is the
21:02:04 lowest level call for making a request, so you'll need to specify all
21:02:04 the raw details.
21:02:04
21:02:04 .. note::
21:02:04
21:02:04 More commonly, it's appropriate to use a convenience method
21:02:04 such as :meth:`request`.
21:02:04
21:02:04 .. note::
21:02:04
21:02:04 `release_conn` will only behave as expected if
21:02:04 `preload_content=False` because we want to make
21:02:04 `preload_content=False` the default behaviour someday soon without
21:02:04 breaking backwards compatibility.
21:02:04
21:02:04 :param method:
21:02:04 HTTP request method (such as GET, POST, PUT, etc.)
21:02:04
21:02:04 :param url:
21:02:04 The URL to perform the request on.
21:02:04
21:02:04 :param body:
21:02:04 Data to send in the request body, either :class:`str`, :class:`bytes`,
21:02:04 an iterable of :class:`str`/:class:`bytes`, or a file-like object.
21:02:04
21:02:04 :param headers:
21:02:04 Dictionary of custom headers to send, such as User-Agent,
21:02:04 If-None-Match, etc. If None, pool headers are used. If provided,
21:02:04 these headers completely replace any pool-specific headers.
21:02:04
21:02:04 :param retries:
21:02:04 Configure the number of retries to allow before raising a
21:02:04 :class:`~urllib3.exceptions.MaxRetryError` exception.
21:02:04
21:02:04 If ``None`` (default) will retry 3 times, see ``Retry.DEFAULT``. Pass a
21:02:04 :class:`~urllib3.util.retry.Retry` object for fine-grained control
21:02:04 over different types of retries.
21:02:04 Pass an integer number to retry connection errors that many times,
21:02:04 but no other types of errors. Pass zero to never retry.
21:02:04
21:02:04 If ``False``, then retries are disabled and any exception is raised
21:02:04 immediately. Also, instead of raising a MaxRetryError on redirects,
21:02:04 the redirect response will be returned.
21:02:04
21:02:04 :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int.
21:02:04
21:02:04 :param redirect:
21:02:04 If True, automatically handle redirects (status codes 301, 302,
21:02:04 303, 307, 308). Each redirect counts as a retry. Disabling retries
21:02:04 will disable redirect, too.
21:02:04
21:02:04 :param assert_same_host:
21:02:04 If ``True``, will make sure that the host of the pool requests is
21:02:04 consistent else will raise HostChangedError. When ``False``, you can
21:02:04 use the pool on an HTTP proxy and request foreign hosts.
21:02:04
21:02:04 :param timeout:
21:02:04 If specified, overrides the default timeout for this one
21:02:04 request. It may be a float (in seconds) or an instance of
21:02:04 :class:`urllib3.util.Timeout`.
21:02:04
21:02:04 :param pool_timeout:
21:02:04 If set and the pool is set to block=True, then this method will
21:02:04 block for ``pool_timeout`` seconds and raise EmptyPoolError if no
21:02:04 connection is available within the time period.
21:02:04
21:02:04 :param bool preload_content:
21:02:04 If True, the response's body will be preloaded into memory.
21:02:04
21:02:04 :param bool decode_content:
21:02:04 If True, will attempt to decode the body based on the
21:02:04 'content-encoding' header.
21:02:04
21:02:04 :param release_conn:
21:02:04 If False, then the urlopen call will not release the connection
21:02:04 back into the pool once a response is received (but will release if
21:02:04 you read the entire contents of the response such as when
21:02:04 `preload_content=True`). This is useful if you're not preloading
21:02:04 the response's content immediately. You will need to call
21:02:04 ``r.release_conn()`` on the response ``r`` to return the connection
21:02:04 back into the pool. If None, it takes the value of ``preload_content``
21:02:04 which defaults to ``True``.
21:02:04
21:02:04 :param bool chunked:
21:02:04 If True, urllib3 will send the body using chunked transfer
21:02:04 encoding. Otherwise, urllib3 will send the body using the standard
21:02:04 content-length form. Defaults to False.
21:02:04
21:02:04 :param int body_pos:
21:02:04 Position to seek to in file-like body in the event of a retry or
21:02:04 redirect. Typically this won't need to be set because urllib3 will
21:02:04 auto-populate the value when needed.
21:02:04 """
21:02:04 parsed_url = parse_url(url)
21:02:04 destination_scheme = parsed_url.scheme
21:02:04
21:02:04 if headers is None:
21:02:04 headers = self.headers
21:02:04
21:02:04 if not isinstance(retries, Retry):
21:02:04 retries = Retry.from_int(retries, redirect=redirect, default=self.retries)
21:02:04
21:02:04 if release_conn is None:
21:02:04 release_conn = preload_content
21:02:04
21:02:04 # Check host
21:02:04 if assert_same_host and not self.is_same_host(url):
21:02:04 raise HostChangedError(self, url, retries)
21:02:04
21:02:04 # Ensure that the URL we're connecting to is properly encoded
21:02:04 if url.startswith("/"):
21:02:04 url = to_str(_encode_target(url))
21:02:04 else:
21:02:04 url = to_str(parsed_url.url)
21:02:04
21:02:04 conn = None
21:02:04
21:02:04 # Track whether `conn` needs to be released before
21:02:04 # returning/raising/recursing. Update this variable if necessary, and
21:02:04 # leave `release_conn` constant throughout the function. That way, if
21:02:04 # the function recurses, the original value of `release_conn` will be
21:02:04 # passed down into the recursive call, and its value will be respected.
21:02:04 #
21:02:04 # See issue #651 [1] for details.
21:02:04 #
21:02:04 # [1]
21:02:04 release_this_conn = release_conn
21:02:04
21:02:04 http_tunnel_required = connection_requires_http_tunnel(
21:02:04 self.proxy, self.proxy_config, destination_scheme
21:02:04 )
21:02:04
21:02:04 # Merge the proxy headers. Only done when not using HTTP CONNECT. We
21:02:04 # have to copy the headers dict so we can safely change it without those
21:02:04 # changes being reflected in anyone else's copy.
21:02:04 if not http_tunnel_required:
21:02:04 headers = headers.copy() # type: ignore[attr-defined]
21:02:04 headers.update(self.proxy_headers) # type: ignore[union-attr]
21:02:04
21:02:04 # Must keep the exception bound to a separate variable or else Python 3
21:02:04 # complains about UnboundLocalError.
21:02:04 err = None
21:02:04
21:02:04 # Keep track of whether we cleanly exited the except block. This
21:02:04 # ensures we do proper cleanup in finally.
21:02:04 clean_exit = False
21:02:04
21:02:04 # Rewind body position, if needed. Record current position
21:02:04 # for future rewinds in the event of a redirect/retry.
21:02:04 body_pos = set_file_position(body, body_pos)
21:02:04
21:02:04 try:
21:02:04 # Request a connection from the queue.
21:02:04 timeout_obj = self._get_timeout(timeout)
21:02:04 conn = self._get_conn(timeout=pool_timeout)
21:02:04
21:02:04 conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment]
21:02:04
21:02:04 # Is this a closed/new connection that requires CONNECT tunnelling?
21:02:04 if self.proxy is not None and http_tunnel_required and conn.is_closed:
21:02:04 try:
21:02:04 self._prepare_proxy(conn)
21:02:04 except (BaseSSLError, OSError, SocketTimeout) as e:
21:02:04 self._raise_timeout(
21:02:04 err=e, url=self.proxy.url, timeout_value=conn.timeout
21:02:04 )
21:02:04 raise
21:02:04
21:02:04 # If we're going to release the connection in ``finally:``, then
21:02:04 # the response doesn't need to know about the connection. Otherwise
21:02:04 # it will also try to release it and we'll have a double-release
21:02:04 # mess.
21:02:04 response_conn = conn if not release_conn else None
21:02:04
21:02:04 # Make the request on the HTTPConnection object
21:02:04 > response = self._make_request(
21:02:04 conn,
21:02:04 method,
21:02:04 url,
21:02:04 timeout=timeout_obj,
21:02:04 body=body,
21:02:04 headers=headers,
21:02:04 chunked=chunked,
21:02:04 retries=retries,
21:02:04 response_conn=response_conn,
21:02:04 preload_content=preload_content,
21:02:04 decode_content=decode_content,
21:02:04 **response_kw,
21:02:04 )
21:02:04
21:02:04 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connectionpool.py:787:
21:02:04 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
21:02:04 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connectionpool.py:493: in _make_request
21:02:04 conn.request(
21:02:04 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:500: in request
21:02:04 self.endheaders()
21:02:04 /opt/pyenv/versions/3.11.10/lib/python3.11/http/client.py:1298: in endheaders
21:02:04 self._send_output(message_body, encode_chunked=encode_chunked)
21:02:04 /opt/pyenv/versions/3.11.10/lib/python3.11/http/client.py:1058: in _send_output
21:02:04 self.send(msg)
21:02:04 /opt/pyenv/versions/3.11.10/lib/python3.11/http/client.py:996: in send
21:02:04 self.connect()
21:02:04 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:331: in connect
21:02:04 self.sock = self._new_conn()
21:02:04 ^^^^^^^^^^^^^^^^
21:02:04 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
21:02:04
21:02:04 self = 
21:02:04
21:02:04 def _new_conn(self) -> socket.socket:
21:02:04 """Establish a socket connection and set nodelay settings on it.
21:02:04
21:02:04 :return: New socket connection.
21:02:04 """
21:02:04 try:
21:02:04 sock = connection.create_connection(
21:02:04 (self._dns_host, self.port),
21:02:04 self.timeout,
21:02:04 source_address=self.source_address,
21:02:04 socket_options=self.socket_options,
21:02:04 )
21:02:04 except socket.gaierror as e:
21:02:04 raise NameResolutionError(self.host, self, e) from e
21:02:04 except SocketTimeout as e:
21:02:04 raise ConnectTimeoutError(
21:02:04 self,
21:02:04 f"Connection to {self.host} timed out. (connect timeout={self.timeout})",
21:02:04 ) from e
21:02:04
21:02:04 except OSError as e:
21:02:04 > raise NewConnectionError(
21:02:04 self, f"Failed to establish a new connection: {e}"
21:02:04 ) from e
21:02:04 E urllib3.exceptions.NewConnectionError: HTTPConnection(host='localhost', port=8191): Failed to establish a new connection: [Errno 111] Connection refused
21:02:04
21:02:04 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:219: NewConnectionError
21:02:04
21:02:04 The above exception was the direct cause of the following exception:
21:02:04
21:02:04 self = 
21:02:04 request = , stream = False
21:02:04 timeout = Timeout(connect=30, read=30, total=None), verify = True, cert = None
21:02:04 proxies = OrderedDict()
21:02:04
21:02:04 def send(
21:02:04 self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None
21:02:04 ):
21:02:04 """Sends PreparedRequest object. Returns Response object.
21:02:04
21:02:04 :param request: The :class:`PreparedRequest ` being sent.
21:02:04 :param stream: (optional) Whether to stream the request content.
21:02:04 :param timeout: (optional) How long to wait for the server to send
21:02:04 data before giving up, as a float, or a :ref:`(connect timeout,
21:02:04 read timeout) ` tuple.
21:02:04 :type timeout: float or tuple or urllib3 Timeout object
21:02:04 :param verify: (optional) Either a boolean, in which case it controls whether
21:02:04 we verify the server's TLS certificate, or a string, in which case it
21:02:04 must be a path to a CA bundle to use
21:02:04 :param cert: (optional) Any user-provided SSL certificate to be trusted.
21:02:04 :param proxies: (optional) The proxies dictionary to apply to the request.
21:02:04 :rtype: requests.Response
21:02:04 """
21:02:04
21:02:04 try:
21:02:04 conn = self.get_connection_with_tls_context(
21:02:04 request, verify, proxies=proxies, cert=cert
21:02:04 )
21:02:04 except LocationValueError as e:
21:02:04 raise InvalidURL(e, request=request)
21:02:04
21:02:04 self.cert_verify(conn, request.url, verify, cert)
21:02:04 url = self.request_url(request, proxies)
21:02:04 self.add_headers(
21:02:04 request,
21:02:04 stream=stream,
21:02:04 timeout=timeout,
21:02:04 verify=verify,
21:02:04 cert=cert,
21:02:04 proxies=proxies,
21:02:04 )
21:02:04
21:02:04 chunked = not (request.body is None or "Content-Length" in request.headers)
21:02:04
21:02:04 if isinstance(timeout, tuple):
21:02:04 try:
21:02:04 connect, read = timeout
21:02:04 timeout = TimeoutSauce(connect=connect, read=read)
21:02:04 except ValueError:
21:02:04 raise ValueError(
21:02:04 f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, "
21:02:04 f"or a single float to set both timeouts to the same value."
21:02:04 )
21:02:04 elif isinstance(timeout, TimeoutSauce):
21:02:04 pass
21:02:04 else:
21:02:04 timeout = TimeoutSauce(connect=timeout, read=timeout)
21:02:04
21:02:04 try:
21:02:04 > resp = conn.urlopen(
21:02:04 method=request.method,
21:02:04 url=url,
21:02:04 body=request.body,
21:02:04 headers=request.headers,
21:02:04 redirect=False,
21:02:04 assert_same_host=False,
21:02:04 preload_content=False,
21:02:04 decode_content=False,
21:02:04 retries=self.max_retries,
21:02:04 timeout=timeout,
21:02:04 chunked=chunked,
21:02:04 )
21:02:04
21:02:04 ../.tox/tests121/lib/python3.11/site-packages/requests/adapters.py:644:
21:02:04 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
21:02:04 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connectionpool.py:841: in urlopen
21:02:04 retries = retries.increment(
21:02:04 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
21:02:04
21:02:04 self = Retry(total=0, connect=None, read=False, redirect=None, status=None)
21:02:04 method = 'GET'
21:02:04 url = '/rests/data/transportpce-portmapping:network/nodes=ROADMA01/mapping=SRG3-PP1-TXRX'
21:02:04 response = None
21:02:04 error = NewConnectionError("HTTPConnection(host='localhost', port=8191): Failed to establish a new connection: [Errno 111] Connection refused")
21:02:04 _pool = 
21:02:04 _stacktrace = 
21:02:04
21:02:04 def increment(
21:02:04 self,
21:02:04 method: str | None = None,
21:02:04 url: str | None = None,
21:02:04 response: BaseHTTPResponse | None = None,
21:02:04 error: Exception | None = None,
21:02:04 _pool: ConnectionPool | None = None,
21:02:04 _stacktrace: TracebackType | None = None,
21:02:04 ) -> Self:
21:02:04 """Return a new Retry object with incremented retry counters.
21:02:04
21:02:04 :param response: A response object, or None, if the server did not
21:02:04 return a response.
21:02:04 :type response: :class:`~urllib3.response.BaseHTTPResponse`
21:02:04 :param Exception error: An error encountered during the request, or
21:02:04 None if the response was received successfully.
21:02:04
21:02:04 :return: A new ``Retry`` object.
21:02:04 """
21:02:04 if self.total is False and error:
21:02:04 # Disabled, indicate to re-raise the error.
21:02:04 raise reraise(type(error), error, _stacktrace)
21:02:04
21:02:04 total = self.total
21:02:04 if total is not None:
21:02:04 total -= 1
21:02:04
21:02:04 connect = self.connect
21:02:04 read = self.read
21:02:04 redirect = self.redirect
21:02:04 status_count = self.status
21:02:04 other = self.other
21:02:04 cause = "unknown"
21:02:04 status = None
21:02:04 redirect_location = None
21:02:04
21:02:04 if error and self._is_connection_error(error):
21:02:04 # Connect retry?
21:02:04 if connect is False:
21:02:04 raise reraise(type(error), error, _stacktrace)
21:02:04 elif connect is not None:
21:02:04 connect -= 1
21:02:04
21:02:04 elif error and self._is_read_error(error):
21:02:04 # Read retry?
21:02:04 if read is False or method is None or not self._is_method_retryable(method):
21:02:04 raise reraise(type(error), error, _stacktrace)
21:02:04 elif read is not None:
21:02:04 read -= 1
21:02:04
21:02:04 elif error:
21:02:04 # Other retry?
21:02:04 if other is not None:
21:02:04 other -= 1
21:02:04
21:02:04 elif response and response.get_redirect_location():
21:02:04 # Redirect retry?
21:02:04 if redirect is not None:
21:02:04 redirect -= 1
21:02:04 cause = "too many redirects"
21:02:04 response_redirect_location = response.get_redirect_location()
21:02:04 if response_redirect_location:
21:02:04 redirect_location = response_redirect_location
21:02:04 status = response.status
21:02:04
21:02:04 else:
21:02:04 # Incrementing because of a server error like a 500 in
21:02:04 # status_forcelist and the given method is in the allowed_methods
21:02:04 cause = ResponseError.GENERIC_ERROR
21:02:04 if response and response.status:
21:02:04 if status_count is not None:
21:02:04 status_count -= 1
21:02:04 cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status)
21:02:04 status = response.status
21:02:04
21:02:04 history = self.history + (
21:02:04 RequestHistory(method, url, error, status, redirect_location),
21:02:04 )
21:02:04
21:02:04 new_retry = self.new(
21:02:04 total=total,
21:02:04 connect=connect,
21:02:04 read=read,
21:02:04 redirect=redirect,
21:02:04 status=status_count,
21:02:04 other=other,
21:02:04 history=history,
21:02:04 )
21:02:04
21:02:04 if new_retry.is_exhausted():
21:02:04 reason = error or ResponseError(cause)
21:02:04 > raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type]
21:02:04 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
21:02:04 E urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=8191): Max retries exceeded with url: /rests/data/transportpce-portmapping:network/nodes=ROADMA01/mapping=SRG3-PP1-TXRX (Caused by NewConnectionError("HTTPConnection(host='localhost', port=8191): Failed to establish a new connection: [Errno 111] Connection refused"))
21:02:04
21:02:04 ../.tox/tests121/lib/python3.11/site-packages/urllib3/util/retry.py:535: MaxRetryError
21:02:04
21:02:04 During handling of the above exception, another exception occurred:
21:02:04
21:02:04 self = 
21:02:04
21:02:04 def test_06_rdm_portmapping_SRG3_PP1_TXRX(self):
21:02:04 > response = test_utils.get_portmapping_node_attr("ROADMA01", "mapping", "SRG3-PP1-TXRX")
21:02:04 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
21:02:04
21:02:04 transportpce_tests/1.2.1/test01_portmapping.py:91:
21:02:04 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
21:02:04 transportpce_tests/common/test_utils.py:519: in get_portmapping_node_attr
21:02:04 response = get_request(target_url)
21:02:04 ^^^^^^^^^^^^^^^^^^^^^^^
21:02:04 transportpce_tests/common/test_utils.py:117: in get_request
21:02:04 return requests.request(
21:02:04 ../.tox/tests121/lib/python3.11/site-packages/requests/api.py:59: in request
21:02:04 return session.request(method=method, url=url, **kwargs)
21:02:04 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
21:02:04 ../.tox/tests121/lib/python3.11/site-packages/requests/sessions.py:589: in request
21:02:04 resp = self.send(prep, **send_kwargs)
21:02:04 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
21:02:04 ../.tox/tests121/lib/python3.11/site-packages/requests/sessions.py:703: in send
21:02:04 r = adapter.send(request, **kwargs)
21:02:04 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
21:02:04 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
21:02:04
21:02:04 self = 
21:02:04 request = , stream = False
21:02:04 timeout = Timeout(connect=30, read=30, total=None), verify = True, cert = None
21:02:04 proxies = OrderedDict()
21:02:04
21:02:04 def send(
21:02:04 self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None
21:02:04 ):
21:02:04 """Sends PreparedRequest object. Returns Response object.
21:02:04
21:02:04 :param request: The :class:`PreparedRequest ` being sent.
21:02:04 :param stream: (optional) Whether to stream the request content.
21:02:04 :param timeout: (optional) How long to wait for the server to send
21:02:04 data before giving up, as a float, or a :ref:`(connect timeout,
21:02:04 read timeout) ` tuple.
21:02:04         :type timeout: float or tuple or urllib3 Timeout object
21:02:04         :param verify: (optional) Either a boolean, in which case it controls whether
21:02:04             we verify the server's TLS certificate, or a string, in which case it
21:02:04             must be a path to a CA bundle to use
21:02:04         :param cert: (optional) Any user-provided SSL certificate to be trusted.
21:02:04         :param proxies: (optional) The proxies dictionary to apply to the request.
21:02:04         :rtype: requests.Response
21:02:04         """
21:02:04
21:02:04         try:
21:02:04             conn = self.get_connection_with_tls_context(
21:02:04                 request, verify, proxies=proxies, cert=cert
21:02:04             )
21:02:04         except LocationValueError as e:
21:02:04             raise InvalidURL(e, request=request)
21:02:04
21:02:04         self.cert_verify(conn, request.url, verify, cert)
21:02:04         url = self.request_url(request, proxies)
21:02:04         self.add_headers(
21:02:04             request,
21:02:04             stream=stream,
21:02:04             timeout=timeout,
21:02:04             verify=verify,
21:02:04             cert=cert,
21:02:04             proxies=proxies,
21:02:04         )
21:02:04
21:02:04         chunked = not (request.body is None or "Content-Length" in request.headers)
21:02:04
21:02:04         if isinstance(timeout, tuple):
21:02:04             try:
21:02:04                 connect, read = timeout
21:02:04                 timeout = TimeoutSauce(connect=connect, read=read)
21:02:04             except ValueError:
21:02:04                 raise ValueError(
21:02:04                     f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, "
21:02:04                     f"or a single float to set both timeouts to the same value."
21:02:04                 )
21:02:04         elif isinstance(timeout, TimeoutSauce):
21:02:04             pass
21:02:04         else:
21:02:04             timeout = TimeoutSauce(connect=timeout, read=timeout)
21:02:04
21:02:04         try:
21:02:04             resp = conn.urlopen(
21:02:04                 method=request.method,
21:02:04                 url=url,
21:02:04                 body=request.body,
21:02:04                 headers=request.headers,
21:02:04                 redirect=False,
21:02:04                 assert_same_host=False,
21:02:04                 preload_content=False,
21:02:04                 decode_content=False,
21:02:04                 retries=self.max_retries,
21:02:04                 timeout=timeout,
21:02:04                 chunked=chunked,
21:02:04             )
21:02:04
21:02:04         except (ProtocolError, OSError) as err:
21:02:04             raise ConnectionError(err, request=request)
21:02:04
21:02:04         except MaxRetryError as e:
21:02:04             if isinstance(e.reason, ConnectTimeoutError):
21:02:04                 # TODO: Remove this in 3.0.0: see #2811
21:02:04                 if not isinstance(e.reason, NewConnectionError):
21:02:04                     raise ConnectTimeout(e, request=request)
21:02:04
21:02:04             if isinstance(e.reason, ResponseError):
21:02:04                 raise RetryError(e, request=request)
21:02:04
21:02:04             if isinstance(e.reason, _ProxyError):
21:02:04                 raise ProxyError(e, request=request)
21:02:04
21:02:04             if isinstance(e.reason, _SSLError):
21:02:04                 # This branch is for urllib3 v1.22 and later.
21:02:04                 raise SSLError(e, request=request)
21:02:04
21:02:04 >           raise ConnectionError(e, request=request)
21:02:04 E           requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=8191): Max retries exceeded with url: /rests/data/transportpce-portmapping:network/nodes=ROADMA01/mapping=SRG3-PP1-TXRX (Caused by NewConnectionError("HTTPConnection(host='localhost', port=8191): Failed to establish a new connection: [Errno 111] Connection refused"))
21:02:04
21:02:04 ../.tox/tests121/lib/python3.11/site-packages/requests/adapters.py:677: ConnectionError
21:02:04 ----------------------------- Captured stdout call -----------------------------
21:02:04 execution of test_06_rdm_portmapping_SRG3_PP1_TXRX
21:02:04 __________ TestTransportPCEPortmapping.test_07_xpdr_device_connection __________
21:02:04
21:02:04 self =
21:02:04
21:02:04     def _new_conn(self) -> socket.socket:
21:02:04         """Establish a socket connection and set nodelay settings on it.
21:02:04
21:02:04         :return: New socket connection.
21:02:04         """
21:02:04         try:
21:02:04 >           sock = connection.create_connection(
21:02:04                 (self._dns_host, self.port),
21:02:04                 self.timeout,
21:02:04                 source_address=self.source_address,
21:02:04                 socket_options=self.socket_options,
21:02:04             )
21:02:04
21:02:04 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:204:
21:02:04 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
21:02:04 ../.tox/tests121/lib/python3.11/site-packages/urllib3/util/connection.py:85: in create_connection
21:02:04     raise err
21:02:04 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
21:02:04
21:02:04 address = ('localhost', 8191), timeout = 30, source_address = None
21:02:04 socket_options = [(6, 1, 1)]
21:02:04
21:02:04 def create_connection(
21:02:04     address: tuple[str, int],
21:02:04     timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT,
21:02:04     source_address: tuple[str, int] | None = None,
21:02:04     socket_options: _TYPE_SOCKET_OPTIONS | None = None,
21:02:04 ) -> socket.socket:
21:02:04     """Connect to *address* and return the socket object.
21:02:04
21:02:04     Convenience function. Connect to *address* (a 2-tuple ``(host,
21:02:04     port)``) and return the socket object. Passing the optional
21:02:04     *timeout* parameter will set the timeout on the socket instance
21:02:04     before attempting to connect. If no *timeout* is supplied, the
21:02:04     global default timeout setting returned by :func:`socket.getdefaulttimeout`
21:02:04     is used. If *source_address* is set it must be a tuple of (host, port)
21:02:04     for the socket to bind as a source address before making the connection.
21:02:04     An host of '' or port 0 tells the OS to use the default.
21:02:04     """
21:02:04
21:02:04     host, port = address
21:02:04     if host.startswith("["):
21:02:04         host = host.strip("[]")
21:02:04     err = None
21:02:04
21:02:04     # Using the value from allowed_gai_family() in the context of getaddrinfo lets
21:02:04     # us select whether to work with IPv4 DNS records, IPv6 records, or both.
21:02:04     # The original create_connection function always returns all records.
21:02:04     family = allowed_gai_family()
21:02:04
21:02:04     try:
21:02:04         host.encode("idna")
21:02:04     except UnicodeError:
21:02:04         raise LocationParseError(f"'{host}', label empty or too long") from None
21:02:04
21:02:04     for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM):
21:02:04         af, socktype, proto, canonname, sa = res
21:02:04         sock = None
21:02:04         try:
21:02:04             sock = socket.socket(af, socktype, proto)
21:02:04
21:02:04             # If provided, set socket level options before connecting.
21:02:04             _set_socket_options(sock, socket_options)
21:02:04
21:02:04             if timeout is not _DEFAULT_TIMEOUT:
21:02:04                 sock.settimeout(timeout)
21:02:04             if source_address:
21:02:04                 sock.bind(source_address)
21:02:04 >           sock.connect(sa)
21:02:04 E           ConnectionRefusedError: [Errno 111] Connection refused
21:02:04
21:02:04 ../.tox/tests121/lib/python3.11/site-packages/urllib3/util/connection.py:73: ConnectionRefusedError
21:02:04
21:02:04 The above exception was the direct cause of the following exception:
21:02:04
21:02:04 self =
21:02:04 method = 'PUT'
21:02:04 url = '/rests/data/network-topology:network-topology/topology=topology-netconf/node=XPDRA01'
21:02:04 body = '{"node": [{"node-id": "XPDRA01", "netconf-node-topology:netconf-node": {"netconf-node-topology:host": "127.0.0.1", "n...ff-millis": 1800000, "netconf-node-topology:backoff-multiplier": 1.5, "netconf-node-topology:keepalive-delay": 120}}]}'
21:02:04 headers = {'User-Agent': 'python-requests/2.32.5', 'Accept-Encoding': 'gzip, deflate', 'Accept': 'application/json', 'Connection': 'keep-alive', 'Content-Type': 'application/json', 'Content-Length': '709', 'Authorization': 'Basic YWRtaW46YWRtaW4='}
21:02:04 retries = Retry(total=0, connect=None, read=False, redirect=None, status=None)
21:02:04 redirect = False, assert_same_host = False
21:02:04 timeout = Timeout(connect=30, read=30, total=None), pool_timeout = None
21:02:04 release_conn = False, chunked = False, body_pos = None, preload_content = False
21:02:04 decode_content = False, response_kw = {}
21:02:04 parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/rests/data/network-topology:network-topology/topology=topology-netconf/node=XPDRA01', query=None, fragment=None)
21:02:04 destination_scheme = None, conn = None, release_this_conn = True
21:02:04 http_tunnel_required = False, err = None, clean_exit = False
21:02:04
21:02:04     def urlopen(  # type: ignore[override]
21:02:04         self,
21:02:04         method: str,
21:02:04         url: str,
21:02:04         body: _TYPE_BODY | None = None,
21:02:04         headers: typing.Mapping[str, str] | None = None,
21:02:04         retries: Retry | bool | int | None = None,
21:02:04         redirect: bool = True,
21:02:04         assert_same_host: bool = True,
21:02:04         timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT,
21:02:04         pool_timeout: int | None = None,
21:02:04         release_conn: bool | None = None,
21:02:04         chunked: bool = False,
21:02:04         body_pos: _TYPE_BODY_POSITION | None = None,
21:02:04         preload_content: bool = True,
21:02:04         decode_content: bool = True,
21:02:04         **response_kw: typing.Any,
21:02:04     ) -> BaseHTTPResponse:
21:02:04         """
21:02:04         Get a connection from the pool and perform an HTTP request. This is the
21:02:04         lowest level call for making a request, so you'll need to specify all
21:02:04         the raw details.
21:02:04
21:02:04         .. note::
21:02:04
21:02:04             More commonly, it's appropriate to use a convenience method
21:02:04             such as :meth:`request`.
21:02:04
21:02:04         .. note::
21:02:04
21:02:04             `release_conn` will only behave as expected if
21:02:04             `preload_content=False` because we want to make
21:02:04             `preload_content=False` the default behaviour someday soon without
21:02:04             breaking backwards compatibility.
21:02:04
21:02:04         :param method:
21:02:04             HTTP request method (such as GET, POST, PUT, etc.)
21:02:04
21:02:04         :param url:
21:02:04             The URL to perform the request on.
21:02:04
21:02:04         :param body:
21:02:04             Data to send in the request body, either :class:`str`, :class:`bytes`,
21:02:04             an iterable of :class:`str`/:class:`bytes`, or a file-like object.
21:02:04
21:02:04         :param headers:
21:02:04             Dictionary of custom headers to send, such as User-Agent,
21:02:04             If-None-Match, etc. If None, pool headers are used. If provided,
21:02:04             these headers completely replace any pool-specific headers.
21:02:04
21:02:04         :param retries:
21:02:04             Configure the number of retries to allow before raising a
21:02:04             :class:`~urllib3.exceptions.MaxRetryError` exception.
21:02:04
21:02:04             If ``None`` (default) will retry 3 times, see ``Retry.DEFAULT``. Pass a
21:02:04             :class:`~urllib3.util.retry.Retry` object for fine-grained control
21:02:04             over different types of retries.
21:02:04             Pass an integer number to retry connection errors that many times,
21:02:04             but no other types of errors. Pass zero to never retry.
21:02:04
21:02:04             If ``False``, then retries are disabled and any exception is raised
21:02:04             immediately. Also, instead of raising a MaxRetryError on redirects,
21:02:04             the redirect response will be returned.
21:02:04
21:02:04         :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int.
21:02:04
21:02:04         :param redirect:
21:02:04             If True, automatically handle redirects (status codes 301, 302,
21:02:04             303, 307, 308). Each redirect counts as a retry. Disabling retries
21:02:04             will disable redirect, too.
21:02:04
21:02:04         :param assert_same_host:
21:02:04             If ``True``, will make sure that the host of the pool requests is
21:02:04             consistent else will raise HostChangedError. When ``False``, you can
21:02:04             use the pool on an HTTP proxy and request foreign hosts.
21:02:04
21:02:04         :param timeout:
21:02:04             If specified, overrides the default timeout for this one
21:02:04             request. It may be a float (in seconds) or an instance of
21:02:04             :class:`urllib3.util.Timeout`.
21:02:04
21:02:04         :param pool_timeout:
21:02:04             If set and the pool is set to block=True, then this method will
21:02:04             block for ``pool_timeout`` seconds and raise EmptyPoolError if no
21:02:04             connection is available within the time period.
21:02:04
21:02:04         :param bool preload_content:
21:02:04             If True, the response's body will be preloaded into memory.
21:02:04
21:02:04         :param bool decode_content:
21:02:04             If True, will attempt to decode the body based on the
21:02:04             'content-encoding' header.
21:02:04
21:02:04         :param release_conn:
21:02:04             If False, then the urlopen call will not release the connection
21:02:04             back into the pool once a response is received (but will release if
21:02:04             you read the entire contents of the response such as when
21:02:04             `preload_content=True`). This is useful if you're not preloading
21:02:04             the response's content immediately. You will need to call
21:02:04             ``r.release_conn()`` on the response ``r`` to return the connection
21:02:04             back into the pool. If None, it takes the value of ``preload_content``
21:02:04             which defaults to ``True``.
21:02:04
21:02:04         :param bool chunked:
21:02:04             If True, urllib3 will send the body using chunked transfer
21:02:04             encoding. Otherwise, urllib3 will send the body using the standard
21:02:04             content-length form. Defaults to False.
21:02:04
21:02:04         :param int body_pos:
21:02:04             Position to seek to in file-like body in the event of a retry or
21:02:04             redirect. Typically this won't need to be set because urllib3 will
21:02:04             auto-populate the value when needed.
21:02:04         """
21:02:04         parsed_url = parse_url(url)
21:02:04         destination_scheme = parsed_url.scheme
21:02:04
21:02:04         if headers is None:
21:02:04             headers = self.headers
21:02:04
21:02:04         if not isinstance(retries, Retry):
21:02:04             retries = Retry.from_int(retries, redirect=redirect, default=self.retries)
21:02:04
21:02:04         if release_conn is None:
21:02:04             release_conn = preload_content
21:02:04
21:02:04         # Check host
21:02:04         if assert_same_host and not self.is_same_host(url):
21:02:04             raise HostChangedError(self, url, retries)
21:02:04
21:02:04         # Ensure that the URL we're connecting to is properly encoded
21:02:04         if url.startswith("/"):
21:02:04             url = to_str(_encode_target(url))
21:02:04         else:
21:02:04             url = to_str(parsed_url.url)
21:02:04
21:02:04         conn = None
21:02:04
21:02:04         # Track whether `conn` needs to be released before
21:02:04         # returning/raising/recursing. Update this variable if necessary, and
21:02:04         # leave `release_conn` constant throughout the function. That way, if
21:02:04         # the function recurses, the original value of `release_conn` will be
21:02:04         # passed down into the recursive call, and its value will be respected.
21:02:04         #
21:02:04         # See issue #651 [1] for details.
21:02:04         #
21:02:04         # [1]
21:02:04         release_this_conn = release_conn
21:02:04
21:02:04         http_tunnel_required = connection_requires_http_tunnel(
21:02:04             self.proxy, self.proxy_config, destination_scheme
21:02:04         )
21:02:04
21:02:04         # Merge the proxy headers. Only done when not using HTTP CONNECT. We
21:02:04         # have to copy the headers dict so we can safely change it without those
21:02:04         # changes being reflected in anyone else's copy.
21:02:04         if not http_tunnel_required:
21:02:04             headers = headers.copy()  # type: ignore[attr-defined]
21:02:04             headers.update(self.proxy_headers)  # type: ignore[union-attr]
21:02:04
21:02:04         # Must keep the exception bound to a separate variable or else Python 3
21:02:04         # complains about UnboundLocalError.
21:02:04         err = None
21:02:04
21:02:04         # Keep track of whether we cleanly exited the except block. This
21:02:04         # ensures we do proper cleanup in finally.
21:02:04         clean_exit = False
21:02:04
21:02:04         # Rewind body position, if needed. Record current position
21:02:04         # for future rewinds in the event of a redirect/retry.
21:02:04         body_pos = set_file_position(body, body_pos)
21:02:04
21:02:04         try:
21:02:04             # Request a connection from the queue.
21:02:04             timeout_obj = self._get_timeout(timeout)
21:02:04             conn = self._get_conn(timeout=pool_timeout)
21:02:04
21:02:04             conn.timeout = timeout_obj.connect_timeout  # type: ignore[assignment]
21:02:04
21:02:04             # Is this a closed/new connection that requires CONNECT tunnelling?
21:02:04             if self.proxy is not None and http_tunnel_required and conn.is_closed:
21:02:04                 try:
21:02:04                     self._prepare_proxy(conn)
21:02:04                 except (BaseSSLError, OSError, SocketTimeout) as e:
21:02:04                     self._raise_timeout(
21:02:04                         err=e, url=self.proxy.url, timeout_value=conn.timeout
21:02:04                     )
21:02:04                     raise
21:02:04
21:02:04             # If we're going to release the connection in ``finally:``, then
21:02:04             # the response doesn't need to know about the connection. Otherwise
21:02:04             # it will also try to release it and we'll have a double-release
21:02:04             # mess.
21:02:04             response_conn = conn if not release_conn else None
21:02:04
21:02:04             # Make the request on the HTTPConnection object
21:02:04 >           response = self._make_request(
21:02:04                 conn,
21:02:04                 method,
21:02:04                 url,
21:02:04                 timeout=timeout_obj,
21:02:04                 body=body,
21:02:04                 headers=headers,
21:02:04                 chunked=chunked,
21:02:04                 retries=retries,
21:02:04                 response_conn=response_conn,
21:02:04                 preload_content=preload_content,
21:02:04                 decode_content=decode_content,
21:02:04                 **response_kw,
21:02:04             )
21:02:04
21:02:04 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connectionpool.py:787:
21:02:04 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
21:02:04 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connectionpool.py:493: in _make_request
21:02:04     conn.request(
21:02:04 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:500: in request
21:02:04     self.endheaders()
21:02:04 /opt/pyenv/versions/3.11.10/lib/python3.11/http/client.py:1298: in endheaders
21:02:04     self._send_output(message_body, encode_chunked=encode_chunked)
21:02:04 /opt/pyenv/versions/3.11.10/lib/python3.11/http/client.py:1058: in _send_output
21:02:04     self.send(msg)
21:02:04 /opt/pyenv/versions/3.11.10/lib/python3.11/http/client.py:996: in send
21:02:04     self.connect()
21:02:04 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:331: in connect
21:02:04     self.sock = self._new_conn()
21:02:04                 ^^^^^^^^^^^^^^^^
21:02:04 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
21:02:04
21:02:04 self =
21:02:04
21:02:04     def _new_conn(self) -> socket.socket:
21:02:04         """Establish a socket connection and set nodelay settings on it.
21:02:04
21:02:04         :return: New socket connection.
21:02:04         """
21:02:04         try:
21:02:04             sock = connection.create_connection(
21:02:04                 (self._dns_host, self.port),
21:02:04                 self.timeout,
21:02:04                 source_address=self.source_address,
21:02:04                 socket_options=self.socket_options,
21:02:04             )
21:02:04         except socket.gaierror as e:
21:02:04             raise NameResolutionError(self.host, self, e) from e
21:02:04         except SocketTimeout as e:
21:02:04             raise ConnectTimeoutError(
21:02:04                 self,
21:02:04                 f"Connection to {self.host} timed out. (connect timeout={self.timeout})",
21:02:04             ) from e
21:02:04
21:02:04         except OSError as e:
21:02:04 >           raise NewConnectionError(
21:02:04                 self, f"Failed to establish a new connection: {e}"
21:02:04             ) from e
21:02:04 E           urllib3.exceptions.NewConnectionError: HTTPConnection(host='localhost', port=8191): Failed to establish a new connection: [Errno 111] Connection refused
21:02:04
21:02:04 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:219: NewConnectionError
21:02:04
21:02:04 The above exception was the direct cause of the following exception:
21:02:04
21:02:04 self =
21:02:04 request = , stream = False
21:02:04 timeout = Timeout(connect=30, read=30, total=None), verify = True, cert = None
21:02:04 proxies = OrderedDict()
21:02:04
21:02:04         try:
21:02:04 >           resp = conn.urlopen(
21:02:04                 method=request.method,
21:02:04                 url=url,
21:02:04                 body=request.body,
21:02:04                 headers=request.headers,
21:02:04                 redirect=False,
21:02:04                 assert_same_host=False,
21:02:04                 preload_content=False,
21:02:04                 decode_content=False,
21:02:04                 retries=self.max_retries,
21:02:04                 timeout=timeout,
21:02:04                 chunked=chunked,
21:02:04             )
21:02:04
21:02:04 ../.tox/tests121/lib/python3.11/site-packages/requests/adapters.py:644:
21:02:04 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
21:02:04 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connectionpool.py:841: in urlopen
21:02:04     retries = retries.increment(
21:02:04 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
21:02:04
21:02:04 self = Retry(total=0, connect=None, read=False, redirect=None, status=None)
21:02:04 method = 'PUT'
21:02:04 url = '/rests/data/network-topology:network-topology/topology=topology-netconf/node=XPDRA01'
21:02:04 response = None
21:02:04 error = NewConnectionError("HTTPConnection(host='localhost', port=8191): Failed to establish a new connection: [Errno 111] Connection refused")
21:02:04 _pool =
21:02:04 _stacktrace =
21:02:04
21:02:04     def increment(
21:02:04         self,
21:02:04         method: str | None = None,
21:02:04         url: str | None = None,
21:02:04         response: BaseHTTPResponse | None = None,
21:02:04         error: Exception | None = None,
21:02:04         _pool: ConnectionPool | None = None,
21:02:04         _stacktrace: TracebackType | None = None,
21:02:04     ) -> Self:
21:02:04         """Return a new Retry object with incremented retry counters.
21:02:04
21:02:04         :param response: A response object, or None, if the server did not
21:02:04             return a response.
21:02:04         :type response: :class:`~urllib3.response.BaseHTTPResponse`
21:02:04         :param Exception error: An error encountered during the request, or
21:02:04             None if the response was received successfully.
21:02:04
21:02:04         :return: A new ``Retry`` object.
21:02:04         """
21:02:04         if self.total is False and error:
21:02:04             # Disabled, indicate to re-raise the error.
21:02:04             raise reraise(type(error), error, _stacktrace)
21:02:04
21:02:04         total = self.total
21:02:04         if total is not None:
21:02:04             total -= 1
21:02:04
21:02:04         connect = self.connect
21:02:04         read = self.read
21:02:04         redirect = self.redirect
21:02:04         status_count = self.status
21:02:04         other = self.other
21:02:04         cause = "unknown"
21:02:04         status = None
21:02:04         redirect_location = None
21:02:04
21:02:04         if error and self._is_connection_error(error):
21:02:04             # Connect retry?
21:02:04             if connect is False:
21:02:04                 raise reraise(type(error), error, _stacktrace)
21:02:04             elif connect is not None:
21:02:04                 connect -= 1
21:02:04
21:02:04         elif error and self._is_read_error(error):
21:02:04             # Read retry?
21:02:04             if read is False or method is None or not self._is_method_retryable(method):
21:02:04                 raise reraise(type(error), error, _stacktrace)
21:02:04             elif read is not None:
21:02:04                 read -= 1
21:02:04
21:02:04         elif error:
21:02:04             # Other retry?
21:02:04             if other is not None:
21:02:04                 other -= 1
21:02:04
21:02:04         elif response and response.get_redirect_location():
21:02:04             # Redirect retry?
21:02:04         if new_retry.is_exhausted():
21:02:04             reason = error or ResponseError(cause)
21:02:04 >           raise MaxRetryError(_pool, url, reason) from reason  # type: ignore[arg-type]
21:02:04             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
21:02:04 E           urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=8191): Max retries exceeded with url: /rests/data/network-topology:network-topology/topology=topology-netconf/node=XPDRA01 (Caused by NewConnectionError("HTTPConnection(host='localhost', port=8191): Failed to establish a new connection: [Errno 111] Connection refused"))
21:02:04
21:02:04 ../.tox/tests121/lib/python3.11/site-packages/urllib3/util/retry.py:535: MaxRetryError
21:02:04
21:02:04 During handling of the above exception, another exception occurred:
21:02:04
21:02:04 self =
21:02:04
21:02:04     def test_07_xpdr_device_connection(self):
21:02:04 >       response = test_utils.mount_device("XPDRA01", ('xpdra', self.NODE_VERSION))
21:02:04         ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
21:02:04
21:02:04 transportpce_tests/1.2.1/test01_portmapping.py:100:
21:02:04 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
21:02:04 transportpce_tests/common/test_utils.py:381: in mount_device
21:02:04     response = put_request(url[RESTCONF_VERSION].format('{}', node), body)
21:02:04                ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
21:02:04 transportpce_tests/common/test_utils.py:125: in put_request
21:02:04     return requests.request(
21:02:04 ../.tox/tests121/lib/python3.11/site-packages/requests/api.py:59: in request
21:02:04     return session.request(method=method, url=url, **kwargs)
21:02:04            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
21:02:04 ../.tox/tests121/lib/python3.11/site-packages/requests/sessions.py:589: in request
21:02:04     resp = self.send(prep, **send_kwargs)
21:02:04            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
21:02:04 ../.tox/tests121/lib/python3.11/site-packages/requests/sessions.py:703: in send
21:02:04     r = adapter.send(request, **kwargs)
21:02:04         ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
21:02:04 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
21:02:04
21:02:04 self =
21:02:04 request = , stream = False
21:02:04 timeout = Timeout(connect=30, read=30, total=None), verify = True, cert = None
21:02:04 proxies = OrderedDict()
21:02:04                 raise SSLError(e, request=request)
21:02:04
21:02:04 >           raise ConnectionError(e, request=request)
21:02:04 E           requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=8191): Max retries exceeded with url: /rests/data/network-topology:network-topology/topology=topology-netconf/node=XPDRA01 (Caused by NewConnectionError("HTTPConnection(host='localhost', port=8191): Failed to establish a new connection: [Errno 111] Connection refused"))
21:02:04
21:02:04 ../.tox/tests121/lib/python3.11/site-packages/requests/adapters.py:677: ConnectionError
21:02:04 ----------------------------- Captured stdout call -----------------------------
21:02:04 execution of test_07_xpdr_device_connection
21:02:04 __________ TestTransportPCEPortmapping.test_08_xpdr_device_connected ___________
21:02:04
21:02:04 self =
21:02:04
21:02:04     def _new_conn(self) -> socket.socket:
21:02:04         """Establish a socket connection and set nodelay settings on it.
21:02:04
21:02:04         :return: New socket connection.
21:02:04         """
21:02:04         try:
21:02:04 >           sock = connection.create_connection(
21:02:04                 (self._dns_host, self.port),
21:02:04                 self.timeout,
21:02:04                 source_address=self.source_address,
21:02:04                 socket_options=self.socket_options,
21:02:04             )
21:02:04 
21:02:04 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:204: 
21:02:04 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
21:02:04 ../.tox/tests121/lib/python3.11/site-packages/urllib3/util/connection.py:85: in create_connection
21:02:04     raise err
21:02:04 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
21:02:04 
21:02:04 address = ('localhost', 8191), timeout = 30, source_address = None
21:02:04 socket_options = [(6, 1, 1)]
21:02:04 
21:02:04     def create_connection(
21:02:04         address: tuple[str, int],
21:02:04         timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT,
21:02:04         source_address: tuple[str, int] | None = None,
21:02:04         socket_options: _TYPE_SOCKET_OPTIONS | None = None,
21:02:04     ) -> socket.socket:
21:02:04         """Connect to *address* and return the socket object.
21:02:04 
21:02:04         Convenience function. Connect to *address* (a 2-tuple ``(host,
21:02:04         port)``) and return the socket object. Passing the optional
21:02:04         *timeout* parameter will set the timeout on the socket instance
21:02:04         before attempting to connect. If no *timeout* is supplied, the
21:02:04         global default timeout setting returned by :func:`socket.getdefaulttimeout`
21:02:04         is used. If *source_address* is set it must be a tuple of (host, port)
21:02:04         for the socket to bind as a source address before making the connection.
21:02:04         An host of '' or port 0 tells the OS to use the default.
21:02:04         """
21:02:04 
21:02:04         host, port = address
21:02:04         if host.startswith("["):
21:02:04             host = host.strip("[]")
21:02:04         err = None
21:02:04 
21:02:04         # Using the value from allowed_gai_family() in the context of getaddrinfo lets
21:02:04         # us select whether to work with IPv4 DNS records, IPv6 records, or both.
21:02:04         # The original create_connection function always returns all records.
21:02:04         family = allowed_gai_family()
21:02:04 
21:02:04         try:
21:02:04             host.encode("idna")
21:02:04         except UnicodeError:
21:02:04             raise LocationParseError(f"'{host}', label empty or too long") from None
21:02:04 
21:02:04         for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM):
21:02:04             af, socktype, proto, canonname, sa = res
21:02:04             sock = None
21:02:04             try:
21:02:04                 sock = socket.socket(af, socktype, proto)
21:02:04 
21:02:04                 # If provided, set socket level options before connecting.
21:02:04                 _set_socket_options(sock, socket_options)
21:02:04 
21:02:04                 if timeout is not _DEFAULT_TIMEOUT:
21:02:04                     sock.settimeout(timeout)
21:02:04                 if source_address:
21:02:04                     sock.bind(source_address)
21:02:04 >               sock.connect(sa)
21:02:04 E               ConnectionRefusedError: [Errno 111] Connection refused
21:02:04 
21:02:04 ../.tox/tests121/lib/python3.11/site-packages/urllib3/util/connection.py:73: ConnectionRefusedError
21:02:04 
21:02:04 The above exception was the direct cause of the following exception:
21:02:04 
21:02:04 self =
21:02:04 method = 'GET'
21:02:04 url = '/rests/data/network-topology:network-topology/topology=topology-netconf/node=XPDRA01?content=nonconfig'
21:02:04 body = None
21:02:04 headers = {'User-Agent': 'python-requests/2.32.5', 'Accept-Encoding': 'gzip, deflate', 'Accept': 'application/json', 'Connection': 'keep-alive', 'Content-Type': 'application/json', 'Authorization': 'Basic YWRtaW46YWRtaW4='}
21:02:04 retries = Retry(total=0, connect=None, read=False, redirect=None, status=None)
21:02:04 redirect = False, assert_same_host = False
21:02:04 timeout = Timeout(connect=30, read=30, total=None), pool_timeout = None
21:02:04 release_conn = False, chunked = False, body_pos = None, preload_content = False
21:02:04 decode_content = False, response_kw = {}
21:02:04 parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/rests/data/network-topology:network-topology/topology=topology-netconf/node=XPDRA01', query='content=nonconfig', fragment=None)
21:02:04 destination_scheme = None, conn = None, release_this_conn = True
21:02:04 http_tunnel_required = False, err = None, clean_exit = False
21:02:04 
21:02:04     def urlopen(  # type: ignore[override]
21:02:04         self,
21:02:04         method: str,
21:02:04         url: str,
21:02:04         body: _TYPE_BODY | None = None,
21:02:04         headers: typing.Mapping[str, str] | None = None,
21:02:04         retries: Retry | bool | int | None = None,
21:02:04         redirect: bool = True,
21:02:04         assert_same_host: bool = True,
21:02:04         timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT,
21:02:04         pool_timeout: int | None = None,
21:02:04         release_conn: bool | None = None,
21:02:04         chunked: bool = False,
21:02:04         body_pos: _TYPE_BODY_POSITION | None = None,
21:02:04         preload_content: bool = True,
21:02:04         decode_content: bool = True,
21:02:04         **response_kw: typing.Any,
21:02:04     ) -> BaseHTTPResponse:
21:02:04         """
21:02:04         Get a connection from the pool and perform an HTTP request. This is the
21:02:04         lowest level call for making a request, so you'll need to specify all
21:02:04         the raw details.
21:02:04 
21:02:04         .. note::
21:02:04 
21:02:04            More commonly, it's appropriate to use a convenience method
21:02:04            such as :meth:`request`.
21:02:04 
21:02:04         .. note::
21:02:04 
21:02:04            `release_conn` will only behave as expected if
21:02:04            `preload_content=False` because we want to make
21:02:04            `preload_content=False` the default behaviour someday soon without
21:02:04            breaking backwards compatibility.
21:02:04 
21:02:04         :param method:
21:02:04             HTTP request method (such as GET, POST, PUT, etc.)
21:02:04 
21:02:04         :param url:
21:02:04             The URL to perform the request on.
21:02:04 
21:02:04         :param body:
21:02:04             Data to send in the request body, either :class:`str`, :class:`bytes`,
21:02:04             an iterable of :class:`str`/:class:`bytes`, or a file-like object.
21:02:04 
21:02:04         :param headers:
21:02:04             Dictionary of custom headers to send, such as User-Agent,
21:02:04             If-None-Match, etc. If None, pool headers are used. If provided,
21:02:04             these headers completely replace any pool-specific headers.
21:02:04 
21:02:04         :param retries:
21:02:04             Configure the number of retries to allow before raising a
21:02:04             :class:`~urllib3.exceptions.MaxRetryError` exception.
21:02:04 
21:02:04             If ``None`` (default) will retry 3 times, see ``Retry.DEFAULT``. Pass a
21:02:04             :class:`~urllib3.util.retry.Retry` object for fine-grained control
21:02:04             over different types of retries.
21:02:04             Pass an integer number to retry connection errors that many times,
21:02:04             but no other types of errors. Pass zero to never retry.
21:02:04 
21:02:04             If ``False``, then retries are disabled and any exception is raised
21:02:04             immediately. Also, instead of raising a MaxRetryError on redirects,
21:02:04             the redirect response will be returned.
21:02:04 
21:02:04         :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int.
21:02:04 
21:02:04         :param redirect:
21:02:04             If True, automatically handle redirects (status codes 301, 302,
21:02:04             303, 307, 308). Each redirect counts as a retry. Disabling retries
21:02:04             will disable redirect, too.
21:02:04 
21:02:04         :param assert_same_host:
21:02:04             If ``True``, will make sure that the host of the pool requests is
21:02:04             consistent else will raise HostChangedError. When ``False``, you can
21:02:04             use the pool on an HTTP proxy and request foreign hosts.
21:02:04 
21:02:04         :param timeout:
21:02:04             If specified, overrides the default timeout for this one
21:02:04             request. It may be a float (in seconds) or an instance of
21:02:04             :class:`urllib3.util.Timeout`.
21:02:04 
21:02:04         :param pool_timeout:
21:02:04             If set and the pool is set to block=True, then this method will
21:02:04             block for ``pool_timeout`` seconds and raise EmptyPoolError if no
21:02:04             connection is available within the time period.
21:02:04 
21:02:04         :param bool preload_content:
21:02:04             If True, the response's body will be preloaded into memory.
21:02:04 
21:02:04         :param bool decode_content:
21:02:04             If True, will attempt to decode the body based on the
21:02:04             'content-encoding' header.
21:02:04 
21:02:04         :param release_conn:
21:02:04             If False, then the urlopen call will not release the connection
21:02:04             back into the pool once a response is received (but will release if
21:02:04             you read the entire contents of the response such as when
21:02:04             `preload_content=True`). This is useful if you're not preloading
21:02:04             the response's content immediately. You will need to call
21:02:04             ``r.release_conn()`` on the response ``r`` to return the connection
21:02:04             back into the pool. If None, it takes the value of ``preload_content``
21:02:04             which defaults to ``True``.
21:02:04 
21:02:04         :param bool chunked:
21:02:04             If True, urllib3 will send the body using chunked transfer
21:02:04             encoding. Otherwise, urllib3 will send the body using the standard
21:02:04             content-length form. Defaults to False.
21:02:04 
21:02:04         :param int body_pos:
21:02:04             Position to seek to in file-like body in the event of a retry or
21:02:04             redirect. Typically this won't need to be set because urllib3 will
21:02:04             auto-populate the value when needed.
21:02:04         """
21:02:04         parsed_url = parse_url(url)
21:02:04         destination_scheme = parsed_url.scheme
21:02:04 
21:02:04         if headers is None:
21:02:04             headers = self.headers
21:02:04 
21:02:04         if not isinstance(retries, Retry):
21:02:04             retries = Retry.from_int(retries, redirect=redirect, default=self.retries)
21:02:04 
21:02:04         if release_conn is None:
21:02:04             release_conn = preload_content
21:02:04 
21:02:04         # Check host
21:02:04         if assert_same_host and not self.is_same_host(url):
21:02:04             raise HostChangedError(self, url, retries)
21:02:04 
21:02:04         # Ensure that the URL we're connecting to is properly encoded
21:02:04         if url.startswith("/"):
21:02:04             url = to_str(_encode_target(url))
21:02:04         else:
21:02:04             url = to_str(parsed_url.url)
21:02:04 
21:02:04         conn = None
21:02:04 
21:02:04         # Track whether `conn` needs to be released before
21:02:04         # returning/raising/recursing. Update this variable if necessary, and
21:02:04         # leave `release_conn` constant throughout the function. That way, if
21:02:04         # the function recurses, the original value of `release_conn` will be
21:02:04         # passed down into the recursive call, and its value will be respected.
21:02:04         #
21:02:04         # See issue #651 [1] for details.
21:02:04         #
21:02:04         # [1]
21:02:04         release_this_conn = release_conn
21:02:04 
21:02:04         http_tunnel_required = connection_requires_http_tunnel(
21:02:04             self.proxy, self.proxy_config, destination_scheme
21:02:04         )
21:02:04 
21:02:04         # Merge the proxy headers. Only done when not using HTTP CONNECT. We
21:02:04         # have to copy the headers dict so we can safely change it without those
21:02:04         # changes being reflected in anyone else's copy.
21:02:04         if not http_tunnel_required:
21:02:04             headers = headers.copy()  # type: ignore[attr-defined]
21:02:04             headers.update(self.proxy_headers)  # type: ignore[union-attr]
21:02:04 
21:02:04         # Must keep the exception bound to a separate variable or else Python 3
21:02:04         # complains about UnboundLocalError.
21:02:04         err = None
21:02:04 
21:02:04         # Keep track of whether we cleanly exited the except block. This
21:02:04         # ensures we do proper cleanup in finally.
21:02:04         clean_exit = False
21:02:04 
21:02:04         # Rewind body position, if needed. Record current position
21:02:04         # for future rewinds in the event of a redirect/retry.
21:02:04         body_pos = set_file_position(body, body_pos)
21:02:04 
21:02:04         try:
21:02:04             # Request a connection from the queue.
21:02:04             timeout_obj = self._get_timeout(timeout)
21:02:04             conn = self._get_conn(timeout=pool_timeout)
21:02:04 
21:02:04             conn.timeout = timeout_obj.connect_timeout  # type: ignore[assignment]
21:02:04 
21:02:04             # Is this a closed/new connection that requires CONNECT tunnelling?
21:02:04             if self.proxy is not None and http_tunnel_required and conn.is_closed:
21:02:04                 try:
21:02:04                     self._prepare_proxy(conn)
21:02:04                 except (BaseSSLError, OSError, SocketTimeout) as e:
21:02:04                     self._raise_timeout(
21:02:04                         err=e, url=self.proxy.url, timeout_value=conn.timeout
21:02:04                     )
21:02:04                     raise
21:02:04 
21:02:04             # If we're going to release the connection in ``finally:``, then
21:02:04             # the response doesn't need to know about the connection. Otherwise
21:02:04             # it will also try to release it and we'll have a double-release
21:02:04             # mess.
21:02:04             response_conn = conn if not release_conn else None
21:02:04 
21:02:04             # Make the request on the HTTPConnection object
21:02:04 >           response = self._make_request(
21:02:04                 conn,
21:02:04                 method,
21:02:04                 url,
21:02:04                 timeout=timeout_obj,
21:02:04                 body=body,
21:02:04                 headers=headers,
21:02:04                 chunked=chunked,
21:02:04                 retries=retries,
21:02:04                 response_conn=response_conn,
21:02:04                 preload_content=preload_content,
21:02:04                 decode_content=decode_content,
21:02:04                 **response_kw,
21:02:04             )
21:02:04 
21:02:04 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connectionpool.py:787: 
21:02:04 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
21:02:04 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connectionpool.py:493: in _make_request
21:02:04     conn.request(
21:02:04 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:500: in request
21:02:04     self.endheaders()
21:02:04 /opt/pyenv/versions/3.11.10/lib/python3.11/http/client.py:1298: in endheaders
21:02:04     self._send_output(message_body, encode_chunked=encode_chunked)
21:02:04 /opt/pyenv/versions/3.11.10/lib/python3.11/http/client.py:1058: in _send_output
21:02:04     self.send(msg)
21:02:04 /opt/pyenv/versions/3.11.10/lib/python3.11/http/client.py:996: in send
21:02:04     self.connect()
21:02:04 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:331: in connect
21:02:04     self.sock = self._new_conn()
21:02:04                 ^^^^^^^^^^^^^^^^
21:02:04 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
21:02:04 
21:02:04 self =
21:02:04 
21:02:04     def _new_conn(self) -> socket.socket:
21:02:04         """Establish a socket connection and set nodelay settings on it.
21:02:04 
21:02:04         :return: New socket connection.
21:02:04         """
21:02:04         try:
21:02:04             sock = connection.create_connection(
21:02:04                 (self._dns_host, self.port),
21:02:04                 self.timeout,
21:02:04                 source_address=self.source_address,
21:02:04                 socket_options=self.socket_options,
21:02:04             )
21:02:04         except socket.gaierror as e:
21:02:04             raise NameResolutionError(self.host, self, e) from e
21:02:04         except SocketTimeout as e:
21:02:04             raise ConnectTimeoutError(
21:02:04                 self,
21:02:04                 f"Connection to {self.host} timed out. (connect timeout={self.timeout})",
21:02:04             ) from e
21:02:04 
21:02:04         except OSError as e:
21:02:04 >           raise NewConnectionError(
21:02:04                 self, f"Failed to establish a new connection: {e}"
21:02:04             ) from e
21:02:04 E           urllib3.exceptions.NewConnectionError: HTTPConnection(host='localhost', port=8191): Failed to establish a new connection: [Errno 111] Connection refused
21:02:04 
21:02:04 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:219: NewConnectionError
21:02:04 
21:02:04 The above exception was the direct cause of the following exception:
21:02:04 
21:02:04 self =
21:02:04 request = , stream = False
21:02:04 timeout = Timeout(connect=30, read=30, total=None), verify = True, cert = None
21:02:04 proxies = OrderedDict()
21:02:04 
21:02:04     def send(
21:02:04         self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None
21:02:04     ):
21:02:04         """Sends PreparedRequest object. Returns Response object.
21:02:04 
21:02:04         :param request: The :class:`PreparedRequest ` being sent.
21:02:04         :param stream: (optional) Whether to stream the request content.
21:02:04         :param timeout: (optional) How long to wait for the server to send
21:02:04             data before giving up, as a float, or a :ref:`(connect timeout,
21:02:04             read timeout) ` tuple.
21:02:04         :type timeout: float or tuple or urllib3 Timeout object
21:02:04         :param verify: (optional) Either a boolean, in which case it controls whether
21:02:04             we verify the server's TLS certificate, or a string, in which case it
21:02:04             must be a path to a CA bundle to use
21:02:04         :param cert: (optional) Any user-provided SSL certificate to be trusted.
21:02:04         :param proxies: (optional) The proxies dictionary to apply to the request.
21:02:04         :rtype: requests.Response
21:02:04         """
21:02:04 
21:02:04         try:
21:02:04             conn = self.get_connection_with_tls_context(
21:02:04                 request, verify, proxies=proxies, cert=cert
21:02:04             )
21:02:04         except LocationValueError as e:
21:02:04             raise InvalidURL(e, request=request)
21:02:04 
21:02:04         self.cert_verify(conn, request.url, verify, cert)
21:02:04         url = self.request_url(request, proxies)
21:02:04         self.add_headers(
21:02:04             request,
21:02:04             stream=stream,
21:02:04             timeout=timeout,
21:02:04             verify=verify,
21:02:04             cert=cert,
21:02:04             proxies=proxies,
21:02:04         )
21:02:04 
21:02:04         chunked = not (request.body is None or "Content-Length" in request.headers)
21:02:04 
21:02:04         if isinstance(timeout, tuple):
21:02:04             try:
21:02:04                 connect, read = timeout
21:02:04                 timeout = TimeoutSauce(connect=connect, read=read)
21:02:04             except ValueError:
21:02:04                 raise ValueError(
21:02:04                     f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, "
21:02:04                     f"or a single float to set both timeouts to the same value."
21:02:04                 )
21:02:04         elif isinstance(timeout, TimeoutSauce):
21:02:04             pass
21:02:04         else:
21:02:04             timeout = TimeoutSauce(connect=timeout, read=timeout)
21:02:04 
21:02:04         try:
21:02:04 >           resp = conn.urlopen(
21:02:04                 method=request.method,
21:02:04                 url=url,
21:02:04                 body=request.body,
21:02:04                 headers=request.headers,
21:02:04                 redirect=False,
21:02:04                 assert_same_host=False,
21:02:04                 preload_content=False,
21:02:04                 decode_content=False,
21:02:04                 retries=self.max_retries,
21:02:04                 timeout=timeout,
21:02:04                 chunked=chunked,
21:02:04             )
21:02:04 
21:02:04 ../.tox/tests121/lib/python3.11/site-packages/requests/adapters.py:644: 
21:02:04 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
21:02:04 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connectionpool.py:841: in urlopen
21:02:04     retries = retries.increment(
21:02:04 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
21:02:04 
21:02:04 self = Retry(total=0, connect=None, read=False, redirect=None, status=None)
21:02:04 method = 'GET'
21:02:04 url = '/rests/data/network-topology:network-topology/topology=topology-netconf/node=XPDRA01?content=nonconfig'
21:02:04 response = None
21:02:04 error = NewConnectionError("HTTPConnection(host='localhost', port=8191): Failed to establish a new connection: [Errno 111] Connection refused")
21:02:04 _pool =
21:02:04 _stacktrace =
21:02:04 
21:02:04     def increment(
21:02:04         self,
21:02:04         method: str | None = None,
21:02:04         url: str | None = None,
21:02:04         response: BaseHTTPResponse | None = None,
21:02:04         error: Exception | None = None,
21:02:04         _pool: ConnectionPool | None = None,
21:02:04         _stacktrace: TracebackType | None = None,
21:02:04     ) -> Self:
21:02:04         """Return a new Retry object with incremented retry counters.
21:02:04 
21:02:04         :param response: A response object, or None, if the server did not
21:02:04             return a response.
21:02:04         :type response: :class:`~urllib3.response.BaseHTTPResponse`
21:02:04         :param Exception error: An error encountered during the request, or
21:02:04             None if the response was received successfully.
21:02:04 
21:02:04         :return: A new ``Retry`` object.
21:02:04         """
21:02:04         if self.total is False and error:
21:02:04             # Disabled, indicate to re-raise the error.
21:02:04             raise reraise(type(error), error, _stacktrace)
21:02:04 
21:02:04         total = self.total
21:02:04         if total is not None:
21:02:04             total -= 1
21:02:04 
21:02:04         connect = self.connect
21:02:04         read = self.read
21:02:04         redirect = self.redirect
21:02:04         status_count = self.status
21:02:04         other = self.other
21:02:04         cause = "unknown"
21:02:04         status = None
21:02:04         redirect_location = None
21:02:04 
21:02:04         if error and self._is_connection_error(error):
21:02:04             # Connect retry?
21:02:04             if connect is False:
21:02:04                 raise reraise(type(error), error, _stacktrace)
21:02:04             elif connect is not None:
21:02:04                 connect -= 1
21:02:04 
21:02:04         elif error and self._is_read_error(error):
21:02:04             # Read retry?
21:02:04             if read is False or method is None or not self._is_method_retryable(method):
21:02:04                 raise reraise(type(error), error, _stacktrace)
21:02:04             elif read is not None:
21:02:04                 read -= 1
21:02:04 
21:02:04         elif error:
21:02:04             # Other retry?
21:02:04             if other is not None:
21:02:04                 other -= 1
21:02:04 
21:02:04         elif response and response.get_redirect_location():
21:02:04             # Redirect retry?
21:02:04             if redirect is not None:
21:02:04                 redirect -= 1
21:02:04             cause = "too many redirects"
21:02:04             response_redirect_location = response.get_redirect_location()
21:02:04             if response_redirect_location:
21:02:04                 redirect_location = response_redirect_location
21:02:04             status = response.status
21:02:04 
21:02:04         else:
21:02:04             # Incrementing because of a server error like a 500 in
21:02:04             # status_forcelist and the given method is in the allowed_methods
21:02:04             cause = ResponseError.GENERIC_ERROR
21:02:04             if response and response.status:
21:02:04                 if status_count is not None:
21:02:04                     status_count -= 1
21:02:04                 cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status)
21:02:04                 status = response.status
21:02:04 
21:02:04         history = self.history + (
21:02:04             RequestHistory(method, url, error, status, redirect_location),
21:02:04         )
21:02:04 
21:02:04         new_retry = self.new(
21:02:04             total=total,
21:02:04             connect=connect,
21:02:04             read=read,
21:02:04             redirect=redirect,
21:02:04             status=status_count,
21:02:04             other=other,
21:02:04             history=history,
21:02:04         )
21:02:04 
21:02:04         if new_retry.is_exhausted():
21:02:04             reason = error or ResponseError(cause)
21:02:04 >           raise MaxRetryError(_pool, url, reason) from reason  # type: ignore[arg-type]
21:02:04             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
21:02:04 E           urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=8191): Max retries exceeded with url: /rests/data/network-topology:network-topology/topology=topology-netconf/node=XPDRA01?content=nonconfig (Caused by NewConnectionError("HTTPConnection(host='localhost', port=8191): Failed to establish a new connection: [Errno 111] Connection refused"))
21:02:04 
21:02:04 ../.tox/tests121/lib/python3.11/site-packages/urllib3/util/retry.py:535: MaxRetryError
21:02:04 
21:02:04 During handling of the above exception, another exception occurred:
21:02:04 
21:02:04 self =
21:02:04 
21:02:04     def test_08_xpdr_device_connected(self):
21:02:04 >       response = test_utils.check_device_connection("XPDRA01")
21:02:04                    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
21:02:04 
21:02:04 transportpce_tests/1.2.1/test01_portmapping.py:104: 
21:02:04 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
21:02:04 transportpce_tests/common/test_utils.py:409: in check_device_connection
21:02:04     response = get_request(url[RESTCONF_VERSION].format('{}', node))
21:02:04                ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
21:02:04 transportpce_tests/common/test_utils.py:117: in get_request
21:02:04     return requests.request(
21:02:04 ../.tox/tests121/lib/python3.11/site-packages/requests/api.py:59: in request
21:02:04     return session.request(method=method, url=url, **kwargs)
21:02:04            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
21:02:04 ../.tox/tests121/lib/python3.11/site-packages/requests/sessions.py:589: in request
21:02:04     resp = self.send(prep, **send_kwargs)
21:02:04            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
21:02:04 ../.tox/tests121/lib/python3.11/site-packages/requests/sessions.py:703: in send
21:02:04     r = adapter.send(request, **kwargs)
21:02:04         ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
21:02:04 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
21:02:04 
21:02:04 self =
21:02:04 request = , stream = False
21:02:04 timeout = Timeout(connect=30, read=30, total=None), verify = True, cert = None
21:02:04 proxies = OrderedDict()
21:02:04 
21:02:04     def send(
21:02:04         self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None
21:02:04     ):
21:02:04         """Sends PreparedRequest object. Returns Response object.
21:02:04 
21:02:04         :param request: The :class:`PreparedRequest ` being sent.
21:02:04         :param stream: (optional) Whether to stream the request content.
21:02:04         :param timeout: (optional) How long to wait for the server to send
21:02:04             data before giving up, as a float, or a :ref:`(connect timeout,
21:02:04             read timeout) ` tuple.
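The Retry.increment source shown in the trace above counts total down by one per failure and raises MaxRetryError once the budget is exhausted; with Retry(total=0), as these tests use, the very first connection failure exhausts it. A toy stdlib mimic of just that countdown (not urllib3's actual class; TinyRetry and the RuntimeError wrapper are illustrative stand-ins):

```python
class TinyRetry:
    """Toy mimic of the countdown in urllib3's Retry.increment (illustration only)."""

    def __init__(self, total):
        self.total = total

    def increment(self, error):
        # total=False means retries are disabled: re-raise immediately.
        if self.total is False:
            raise error
        remaining = self.total - 1
        if remaining < 0:
            # Exhausted: wrap the last error, as MaxRetryError wraps its `reason`.
            raise RuntimeError(f"max retries exceeded: {error}") from error
        return TinyRetry(remaining)


# With total=0 (the value in the trace), the first failure already exhausts it.
retry = TinyRetry(0)
try:
    retry.increment(ConnectionRefusedError("[Errno 111] Connection refused"))
    exhausted = False
except RuntimeError:
    exhausted = True
```

This mirrors why the log shows exactly one connection attempt per request before `Max retries exceeded` is reported.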
21:02:04         :type timeout: float or tuple or urllib3 Timeout object
21:02:04         :param verify: (optional) Either a boolean, in which case it controls whether
21:02:04             we verify the server's TLS certificate, or a string, in which case it
21:02:04             must be a path to a CA bundle to use
21:02:04         :param cert: (optional) Any user-provided SSL certificate to be trusted.
21:02:04         :param proxies: (optional) The proxies dictionary to apply to the request.
21:02:04         :rtype: requests.Response
21:02:04         """
21:02:04 
21:02:04         try:
21:02:04             conn = self.get_connection_with_tls_context(
21:02:04                 request, verify, proxies=proxies, cert=cert
21:02:04             )
21:02:04         except LocationValueError as e:
21:02:04             raise InvalidURL(e, request=request)
21:02:04 
21:02:04         self.cert_verify(conn, request.url, verify, cert)
21:02:04         url = self.request_url(request, proxies)
21:02:04         self.add_headers(
21:02:04             request,
21:02:04             stream=stream,
21:02:04             timeout=timeout,
21:02:04             verify=verify,
21:02:04             cert=cert,
21:02:04             proxies=proxies,
21:02:04         )
21:02:04 
21:02:04         chunked = not (request.body is None or "Content-Length" in request.headers)
21:02:04 
21:02:04         if isinstance(timeout, tuple):
21:02:04             try:
21:02:04                 connect, read = timeout
21:02:04                 timeout = TimeoutSauce(connect=connect, read=read)
21:02:04             except ValueError:
21:02:04                 raise ValueError(
21:02:04                     f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, "
21:02:04                     f"or a single float to set both timeouts to the same value."
21:02:04                 )
21:02:04         elif isinstance(timeout, TimeoutSauce):
21:02:04             pass
21:02:04         else:
21:02:04             timeout = TimeoutSauce(connect=timeout, read=timeout)
21:02:04 
21:02:04         try:
21:02:04             resp = conn.urlopen(
21:02:04                 method=request.method,
21:02:04                 url=url,
21:02:04                 body=request.body,
21:02:04                 headers=request.headers,
21:02:04                 redirect=False,
21:02:04                 assert_same_host=False,
21:02:04                 preload_content=False,
21:02:04                 decode_content=False,
21:02:04                 retries=self.max_retries,
21:02:04                 timeout=timeout,
21:02:04                 chunked=chunked,
21:02:04             )
21:02:04 
21:02:04         except (ProtocolError, OSError) as err:
21:02:04             raise ConnectionError(err, request=request)
21:02:04 
21:02:04         except MaxRetryError as e:
21:02:04             if isinstance(e.reason, ConnectTimeoutError):
21:02:04                 # TODO: Remove this in 3.0.0: see #2811
21:02:04                 if not isinstance(e.reason, NewConnectionError):
21:02:04                     raise ConnectTimeout(e, request=request)
21:02:04 
21:02:04             if isinstance(e.reason, ResponseError):
21:02:04                 raise RetryError(e, request=request)
21:02:04 
21:02:04             if isinstance(e.reason, _ProxyError):
21:02:04                 raise ProxyError(e, request=request)
21:02:04 
21:02:04             if isinstance(e.reason, _SSLError):
21:02:04                 # This branch is for urllib3 v1.22 and later.
21:02:04                 raise SSLError(e, request=request)
21:02:04 
21:02:04 >       raise ConnectionError(e, request=request)
21:02:04 E       requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=8191): Max retries exceeded with url: /rests/data/network-topology:network-topology/topology=topology-netconf/node=XPDRA01?content=nonconfig (Caused by NewConnectionError("HTTPConnection(host='localhost', port=8191): Failed to establish a new connection: [Errno 111] Connection refused"))
21:02:04 
21:02:04 ../.tox/tests121/lib/python3.11/site-packages/requests/adapters.py:677: ConnectionError
21:02:04 ----------------------------- Captured stdout call -----------------------------
21:02:04 execution of test_08_xpdr_device_connected
21:02:04 __________ TestTransportPCEPortmapping.test_09_xpdr_portmapping_info ___________
21:02:04 
21:02:04 self =
21:02:04 
21:02:04     def _new_conn(self) -> socket.socket:
21:02:04         """Establish a socket connection and set nodelay settings on it.
21:02:04 
21:02:04         :return: New socket connection.
21:02:04         """
21:02:04         try:
21:02:04 >           sock = connection.create_connection(
21:02:04                 (self._dns_host, self.port),
21:02:04                 self.timeout,
21:02:04                 source_address=self.source_address,
21:02:04                 socket_options=self.socket_options,
21:02:04             )
21:02:04 
21:02:04 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:204: 
21:02:04 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
21:02:04 ../.tox/tests121/lib/python3.11/site-packages/urllib3/util/connection.py:85: in create_connection
21:02:04     raise err
21:02:04 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
21:02:04 
21:02:04 address = ('localhost', 8191), timeout = 30, source_address = None
21:02:04 socket_options = [(6, 1, 1)]
21:02:04 
21:02:04     def create_connection(
21:02:04         address: tuple[str, int],
21:02:04         timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT,
21:02:04         source_address: tuple[str, int] | None = None,
21:02:04         socket_options: _TYPE_SOCKET_OPTIONS | None = None,
21:02:04     ) -> socket.socket:
21:02:04         """Connect to *address* and return the socket object.
21:02:04 
21:02:04         Convenience function. Connect to *address* (a 2-tuple ``(host,
21:02:04         port)``) and return the socket object. Passing the optional
21:02:04         *timeout* parameter will set the timeout on the socket instance
21:02:04         before attempting to connect. If no *timeout* is supplied, the
21:02:04         global default timeout setting returned by :func:`socket.getdefaulttimeout`
21:02:04         is used. If *source_address* is set it must be a tuple of (host, port)
21:02:04         for the socket to bind as a source address before making the connection.
21:02:04         An host of '' or port 0 tells the OS to use the default.
21:02:04         """
21:02:04 
21:02:04         host, port = address
21:02:04         if host.startswith("["):
21:02:04             host = host.strip("[]")
21:02:04         err = None
21:02:04 
21:02:04         # Using the value from allowed_gai_family() in the context of getaddrinfo lets
21:02:04         # us select whether to work with IPv4 DNS records, IPv6 records, or both.
21:02:04         # The original create_connection function always returns all records.
21:02:04         family = allowed_gai_family()
21:02:04 
21:02:04         try:
21:02:04             host.encode("idna")
21:02:04         except UnicodeError:
21:02:04             raise LocationParseError(f"'{host}', label empty or too long") from None
21:02:04 
21:02:04         for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM):
21:02:04             af, socktype, proto, canonname, sa = res
21:02:04             sock = None
21:02:04             try:
21:02:04                 sock = socket.socket(af, socktype, proto)
21:02:04 
21:02:04                 # If provided, set socket level options before connecting.
21:02:04                 _set_socket_options(sock, socket_options)
21:02:04 
21:02:04                 if timeout is not _DEFAULT_TIMEOUT:
21:02:04                     sock.settimeout(timeout)
21:02:04                 if source_address:
21:02:04                     sock.bind(source_address)
21:02:04 >               sock.connect(sa)
21:02:04 E               ConnectionRefusedError: [Errno 111] Connection refused
21:02:04 
21:02:04 ../.tox/tests121/lib/python3.11/site-packages/urllib3/util/connection.py:73: ConnectionRefusedError
21:02:04 
21:02:04 The above exception was the direct cause of the following exception:
21:02:04 
21:02:04 self =
21:02:04 method = 'GET'
21:02:04 url = '/rests/data/transportpce-portmapping:network/nodes=XPDRA01/node-info'
21:02:04 body = None
21:02:04 headers = {'User-Agent': 'python-requests/2.32.5', 'Accept-Encoding': 'gzip, deflate', 'Accept': 'application/json', 'Connection': 'keep-alive', 'Content-Type': 'application/json', 'Authorization': 'Basic YWRtaW46YWRtaW4='}
21:02:04 retries = Retry(total=0, connect=None, read=False, redirect=None, status=None)
21:02:04 redirect = False, assert_same_host = False
21:02:04 timeout = Timeout(connect=30, read=30, total=None), pool_timeout = None
21:02:04 release_conn = False, chunked = False, body_pos = None, preload_content = False
21:02:04 decode_content = False, response_kw = {}
21:02:04 parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/rests/data/transportpce-portmapping:network/nodes=XPDRA01/node-info', query=None, fragment=None)
21:02:04 destination_scheme = None, conn = None, release_this_conn = True
21:02:04 http_tunnel_required = False, err = None, clean_exit = False
21:02:04 
21:02:04     def urlopen(  # type: ignore[override]
21:02:04         self,
21:02:04         method: str,
21:02:04         url: str,
21:02:04         body: _TYPE_BODY | None = None,
21:02:04         headers: typing.Mapping[str, str] | None = None,
21:02:04         retries: Retry | bool | int | None = None,
21:02:04         redirect: bool = True,
21:02:04         assert_same_host: bool = True,
21:02:04         timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT,
21:02:04         pool_timeout: int | None = None,
21:02:04         release_conn: bool | None = None,
21:02:04         chunked: bool = False,
21:02:04         body_pos: _TYPE_BODY_POSITION | None = None,
21:02:04         preload_content: bool = True,
21:02:04         decode_content: bool = True,
21:02:04         **response_kw: typing.Any,
21:02:04     ) -> BaseHTTPResponse:
21:02:04         """
21:02:04         Get a connection from the pool and perform an HTTP request. This is the
21:02:04         lowest level call for making a request, so you'll need to specify all
21:02:04         the raw details.
21:02:04 
21:02:04         .. note::
21:02:04 
21:02:04            More commonly, it's appropriate to use a convenience method
21:02:04            such as :meth:`request`.
21:02:04 
21:02:04         .. note::
21:02:04 
21:02:04            `release_conn` will only behave as expected if
21:02:04            `preload_content=False` because we want to make
21:02:04            `preload_content=False` the default behaviour someday soon without
21:02:04            breaking backwards compatibility.
21:02:04 
21:02:04         :param method:
21:02:04             HTTP request method (such as GET, POST, PUT, etc.)
21:02:04 
21:02:04         :param url:
21:02:04             The URL to perform the request on.
21:02:04 
21:02:04         :param body:
21:02:04             Data to send in the request body, either :class:`str`, :class:`bytes`,
21:02:04             an iterable of :class:`str`/:class:`bytes`, or a file-like object.
21:02:04 
21:02:04         :param headers:
21:02:04             Dictionary of custom headers to send, such as User-Agent,
21:02:04             If-None-Match, etc. If None, pool headers are used. If provided,
21:02:04             these headers completely replace any pool-specific headers.
21:02:04 
21:02:04         :param retries:
21:02:04             Configure the number of retries to allow before raising a
21:02:04             :class:`~urllib3.exceptions.MaxRetryError` exception.
21:02:04 
21:02:04             If ``None`` (default) will retry 3 times, see ``Retry.DEFAULT``. Pass a
21:02:04             :class:`~urllib3.util.retry.Retry` object for fine-grained control
21:02:04             over different types of retries.
21:02:04             Pass an integer number to retry connection errors that many times,
21:02:04             but no other types of errors. Pass zero to never retry.
21:02:04 
21:02:04             If ``False``, then retries are disabled and any exception is raised
21:02:04             immediately. Also, instead of raising a MaxRetryError on redirects,
21:02:04             the redirect response will be returned.
21:02:04 
21:02:04         :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int.
21:02:04 
21:02:04         :param redirect:
21:02:04             If True, automatically handle redirects (status codes 301, 302,
21:02:04             303, 307, 308). Each redirect counts as a retry. Disabling retries
21:02:04             will disable redirect, too.
21:02:04 
21:02:04         :param assert_same_host:
21:02:04             If ``True``, will make sure that the host of the pool requests is
21:02:04             consistent else will raise HostChangedError. When ``False``, you can
21:02:04             use the pool on an HTTP proxy and request foreign hosts.
21:02:04 
21:02:04         :param timeout:
21:02:04             If specified, overrides the default timeout for this one
21:02:04             request. It may be a float (in seconds) or an instance of
21:02:04             :class:`urllib3.util.Timeout`.
21:02:04 
21:02:04         :param pool_timeout:
21:02:04             If set and the pool is set to block=True, then this method will
21:02:04             block for ``pool_timeout`` seconds and raise EmptyPoolError if no
21:02:04             connection is available within the time period.
21:02:04 
21:02:04         :param bool preload_content:
21:02:04             If True, the response's body will be preloaded into memory.
21:02:04 
21:02:04         :param bool decode_content:
21:02:04             If True, will attempt to decode the body based on the
21:02:04             'content-encoding' header.
21:02:04 
21:02:04         :param release_conn:
21:02:04             If False, then the urlopen call will not release the connection
21:02:04             back into the pool once a response is received (but will release if
21:02:04             you read the entire contents of the response such as when
21:02:04             `preload_content=True`). This is useful if you're not preloading
21:02:04             the response's content immediately. You will need to call
21:02:04             ``r.release_conn()`` on the response ``r`` to return the connection
21:02:04             back into the pool. If None, it takes the value of ``preload_content``
21:02:04             which defaults to ``True``.
21:02:04 
21:02:04         :param bool chunked:
21:02:04             If True, urllib3 will send the body using chunked transfer
21:02:04             encoding. Otherwise, urllib3 will send the body using the standard
21:02:04             content-length form. Defaults to False.
21:02:04 
21:02:04         :param int body_pos:
21:02:04             Position to seek to in file-like body in the event of a retry or
21:02:04             redirect. Typically this won't need to be set because urllib3 will
21:02:04             auto-populate the value when needed.
21:02:04         """
21:02:04         parsed_url = parse_url(url)
21:02:04         destination_scheme = parsed_url.scheme
21:02:04 
21:02:04         if headers is None:
21:02:04             headers = self.headers
21:02:04 
21:02:04         if not isinstance(retries, Retry):
21:02:04             retries = Retry.from_int(retries, redirect=redirect, default=self.retries)
21:02:04 
21:02:04         if release_conn is None:
21:02:04             release_conn = preload_content
21:02:04 
21:02:04         # Check host
21:02:04         if assert_same_host and not self.is_same_host(url):
21:02:04             raise HostChangedError(self, url, retries)
21:02:04 
21:02:04         # Ensure that the URL we're connecting to is properly encoded
21:02:04         if url.startswith("/"):
21:02:04             url = to_str(_encode_target(url))
21:02:04         else:
21:02:04             url = to_str(parsed_url.url)
21:02:04 
21:02:04         conn = None
21:02:04 
21:02:04         # Track whether `conn` needs to be released before
21:02:04         # returning/raising/recursing. Update this variable if necessary, and
21:02:04         # leave `release_conn` constant throughout the function. That way, if
21:02:04         # the function recurses, the original value of `release_conn` will be
21:02:04         # passed down into the recursive call, and its value will be respected.
21:02:04         #
21:02:04         # See issue #651 [1] for details.
21:02:04         #
21:02:04         # [1]
21:02:04         release_this_conn = release_conn
21:02:04 
21:02:04         http_tunnel_required = connection_requires_http_tunnel(
21:02:04             self.proxy, self.proxy_config, destination_scheme
21:02:04         )
21:02:04 
21:02:04         # Merge the proxy headers. Only done when not using HTTP CONNECT. We
21:02:04         # have to copy the headers dict so we can safely change it without those
21:02:04         # changes being reflected in anyone else's copy.
21:02:04         if not http_tunnel_required:
21:02:04             headers = headers.copy() # type: ignore[attr-defined]
21:02:04             headers.update(self.proxy_headers) # type: ignore[union-attr]
21:02:04 
21:02:04         # Must keep the exception bound to a separate variable or else Python 3
21:02:04         # complains about UnboundLocalError.
21:02:04         err = None
21:02:04 
21:02:04         # Keep track of whether we cleanly exited the except block. This
21:02:04         # ensures we do proper cleanup in finally.
21:02:04         clean_exit = False
21:02:04 
21:02:04         # Rewind body position, if needed. Record current position
21:02:04         # for future rewinds in the event of a redirect/retry.
21:02:04         body_pos = set_file_position(body, body_pos)
21:02:04 
21:02:04         try:
21:02:04             # Request a connection from the queue.
21:02:04             timeout_obj = self._get_timeout(timeout)
21:02:04             conn = self._get_conn(timeout=pool_timeout)
21:02:04 
21:02:04             conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment]
21:02:04 
21:02:04             # Is this a closed/new connection that requires CONNECT tunnelling?
21:02:04             if self.proxy is not None and http_tunnel_required and conn.is_closed:
21:02:04                 try:
21:02:04                     self._prepare_proxy(conn)
21:02:04                 except (BaseSSLError, OSError, SocketTimeout) as e:
21:02:04                     self._raise_timeout(
21:02:04                         err=e, url=self.proxy.url, timeout_value=conn.timeout
21:02:04                     )
21:02:04                     raise
21:02:04 
21:02:04             # If we're going to release the connection in ``finally:``, then
21:02:04             # the response doesn't need to know about the connection. Otherwise
21:02:04             # it will also try to release it and we'll have a double-release
21:02:04             # mess.
21:02:04             response_conn = conn if not release_conn else None
21:02:04 
21:02:04             # Make the request on the HTTPConnection object
21:02:04 >           response = self._make_request(
21:02:04                 conn,
21:02:04                 method,
21:02:04                 url,
21:02:04                 timeout=timeout_obj,
21:02:04                 body=body,
21:02:04                 headers=headers,
21:02:04                 chunked=chunked,
21:02:04                 retries=retries,
21:02:04                 response_conn=response_conn,
21:02:04                 preload_content=preload_content,
21:02:04                 decode_content=decode_content,
21:02:04                 **response_kw,
21:02:04             )
21:02:04 
21:02:04 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connectionpool.py:787: 
21:02:04 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
21:02:04 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connectionpool.py:493: in _make_request
21:02:04     conn.request(
21:02:04 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:500: in request
21:02:04     self.endheaders()
21:02:04 /opt/pyenv/versions/3.11.10/lib/python3.11/http/client.py:1298: in endheaders
21:02:04     self._send_output(message_body, encode_chunked=encode_chunked)
21:02:04 /opt/pyenv/versions/3.11.10/lib/python3.11/http/client.py:1058: in _send_output
21:02:04     self.send(msg)
21:02:04 /opt/pyenv/versions/3.11.10/lib/python3.11/http/client.py:996: in send
21:02:04     self.connect()
21:02:04 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:331: in connect
21:02:04     self.sock = self._new_conn()
21:02:04     ^^^^^^^^^^^^^^^^
21:02:04 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
21:02:04 
21:02:04 self = 
21:02:04 
21:02:04     def _new_conn(self) -> socket.socket:
21:02:04         """Establish a socket connection and set nodelay settings on it.
21:02:04 
21:02:04         :return: New socket connection.
21:02:04         """
21:02:04         try:
21:02:04             sock = connection.create_connection(
21:02:04                 (self._dns_host, self.port),
21:02:04                 self.timeout,
21:02:04                 source_address=self.source_address,
21:02:04                 socket_options=self.socket_options,
21:02:04             )
21:02:04         except socket.gaierror as e:
21:02:04             raise NameResolutionError(self.host, self, e) from e
21:02:04         except SocketTimeout as e:
21:02:04             raise ConnectTimeoutError(
21:02:04                 self,
21:02:04                 f"Connection to {self.host} timed out. (connect timeout={self.timeout})",
21:02:04             ) from e
21:02:04 
21:02:04         except OSError as e:
21:02:04 >           raise NewConnectionError(
21:02:04                 self, f"Failed to establish a new connection: {e}"
21:02:04             ) from e
21:02:04 E           urllib3.exceptions.NewConnectionError: HTTPConnection(host='localhost', port=8191): Failed to establish a new connection: [Errno 111] Connection refused
21:02:04 
21:02:04 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:219: NewConnectionError
21:02:04 
21:02:04 The above exception was the direct cause of the following exception:
21:02:04 
21:02:04 self = 
21:02:04 request = , stream = False
21:02:04 timeout = Timeout(connect=30, read=30, total=None), verify = True, cert = None
21:02:04 proxies = OrderedDict()
21:02:04 
21:02:04     def send(
21:02:04         self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None
21:02:04     ):
21:02:04         """Sends PreparedRequest object. Returns Response object.
21:02:04 
21:02:04         :param request: The :class:`PreparedRequest ` being sent.
21:02:04         :param stream: (optional) Whether to stream the request content.
21:02:04         :param timeout: (optional) How long to wait for the server to send
21:02:04             data before giving up, as a float, or a :ref:`(connect timeout,
21:02:04             read timeout) ` tuple.
21:02:04         :type timeout: float or tuple or urllib3 Timeout object
21:02:04         :param verify: (optional) Either a boolean, in which case it controls whether
21:02:04             we verify the server's TLS certificate, or a string, in which case it
21:02:04             must be a path to a CA bundle to use
21:02:04         :param cert: (optional) Any user-provided SSL certificate to be trusted.
21:02:04         :param proxies: (optional) The proxies dictionary to apply to the request.
21:02:04         :rtype: requests.Response
21:02:04         """
21:02:04 
21:02:04         try:
21:02:04             conn = self.get_connection_with_tls_context(
21:02:04                 request, verify, proxies=proxies, cert=cert
21:02:04             )
21:02:04         except LocationValueError as e:
21:02:04             raise InvalidURL(e, request=request)
21:02:04 
21:02:04         self.cert_verify(conn, request.url, verify, cert)
21:02:04         url = self.request_url(request, proxies)
21:02:04         self.add_headers(
21:02:04             request,
21:02:04             stream=stream,
21:02:04             timeout=timeout,
21:02:04             verify=verify,
21:02:04             cert=cert,
21:02:04             proxies=proxies,
21:02:04         )
21:02:04 
21:02:04         chunked = not (request.body is None or "Content-Length" in request.headers)
21:02:04 
21:02:04         if isinstance(timeout, tuple):
21:02:04             try:
21:02:04                 connect, read = timeout
21:02:04                 timeout = TimeoutSauce(connect=connect, read=read)
21:02:04             except ValueError:
21:02:04                 raise ValueError(
21:02:04                     f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, "
21:02:04                     f"or a single float to set both timeouts to the same value."
21:02:04                 )
21:02:04         elif isinstance(timeout, TimeoutSauce):
21:02:04             pass
21:02:04         else:
21:02:04             timeout = TimeoutSauce(connect=timeout, read=timeout)
21:02:04 
21:02:04         try:
21:02:04 >           resp = conn.urlopen(
21:02:04                 method=request.method,
21:02:04                 url=url,
21:02:04                 body=request.body,
21:02:04                 headers=request.headers,
21:02:04                 redirect=False,
21:02:04                 assert_same_host=False,
21:02:04                 preload_content=False,
21:02:04                 decode_content=False,
21:02:04                 retries=self.max_retries,
21:02:04                 timeout=timeout,
21:02:04                 chunked=chunked,
21:02:04             )
21:02:04 
21:02:04 ../.tox/tests121/lib/python3.11/site-packages/requests/adapters.py:644: 
21:02:04 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
21:02:04 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connectionpool.py:841: in urlopen
21:02:04     retries = retries.increment(
21:02:04 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
21:02:04 
21:02:04 self = Retry(total=0, connect=None, read=False, redirect=None, status=None)
21:02:04 method = 'GET'
21:02:04 url = '/rests/data/transportpce-portmapping:network/nodes=XPDRA01/node-info'
21:02:04 response = None
21:02:04 error = NewConnectionError("HTTPConnection(host='localhost', port=8191): Failed to establish a new connection: [Errno 111] Connection refused")
21:02:04 _pool = 
21:02:04 _stacktrace = 
21:02:04 
21:02:04     def increment(
21:02:04         self,
21:02:04         method: str | None = None,
21:02:04         url: str | None = None,
21:02:04         response: BaseHTTPResponse | None = None,
21:02:04         error: Exception | None = None,
21:02:04         _pool: ConnectionPool | None = None,
21:02:04         _stacktrace: TracebackType | None = None,
21:02:04     ) -> Self:
21:02:04         """Return a new Retry object with incremented retry counters.
21:02:04 
21:02:04         :param response: A response object, or None, if the server did not
21:02:04             return a response.
21:02:04         :type response: :class:`~urllib3.response.BaseHTTPResponse`
21:02:04         :param Exception error: An error encountered during the request, or
21:02:04             None if the response was received successfully.
21:02:04 
21:02:04         :return: A new ``Retry`` object.
21:02:04         """
21:02:04         if self.total is False and error:
21:02:04             # Disabled, indicate to re-raise the error.
21:02:04             raise reraise(type(error), error, _stacktrace)
21:02:04 
21:02:04         total = self.total
21:02:04         if total is not None:
21:02:04             total -= 1
21:02:04 
21:02:04         connect = self.connect
21:02:04         read = self.read
21:02:04         redirect = self.redirect
21:02:04         status_count = self.status
21:02:04         other = self.other
21:02:04         cause = "unknown"
21:02:04         status = None
21:02:04         redirect_location = None
21:02:04 
21:02:04         if error and self._is_connection_error(error):
21:02:04             # Connect retry?
21:02:04             if connect is False:
21:02:04                 raise reraise(type(error), error, _stacktrace)
21:02:04             elif connect is not None:
21:02:04                 connect -= 1
21:02:04 
21:02:04         elif error and self._is_read_error(error):
21:02:04             # Read retry?
21:02:04             if read is False or method is None or not self._is_method_retryable(method):
21:02:04                 raise reraise(type(error), error, _stacktrace)
21:02:04             elif read is not None:
21:02:04                 read -= 1
21:02:04 
21:02:04         elif error:
21:02:04             # Other retry?
21:02:04             if other is not None:
21:02:04                 other -= 1
21:02:04 
21:02:04         elif response and response.get_redirect_location():
21:02:04             # Redirect retry?
21:02:04             if redirect is not None:
21:02:04                 redirect -= 1
21:02:04             cause = "too many redirects"
21:02:04             response_redirect_location = response.get_redirect_location()
21:02:04             if response_redirect_location:
21:02:04                 redirect_location = response_redirect_location
21:02:04             status = response.status
21:02:04 
21:02:04         else:
21:02:04             # Incrementing because of a server error like a 500 in
21:02:04             # status_forcelist and the given method is in the allowed_methods
21:02:04             cause = ResponseError.GENERIC_ERROR
21:02:04             if response and response.status:
21:02:04                 if status_count is not None:
21:02:04                     status_count -= 1
21:02:04                 cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status)
21:02:04                 status = response.status
21:02:04 
21:02:04         history = self.history + (
21:02:04             RequestHistory(method, url, error, status, redirect_location),
21:02:04         )
21:02:04 
21:02:04         new_retry = self.new(
21:02:04             total=total,
21:02:04             connect=connect,
21:02:04             read=read,
21:02:04             redirect=redirect,
21:02:04             status=status_count,
21:02:04             other=other,
21:02:04             history=history,
21:02:04         )
21:02:04 
21:02:04         if new_retry.is_exhausted():
21:02:04             reason = error or ResponseError(cause)
21:02:04 >           raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type]
21:02:04             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
21:02:04 E           urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=8191): Max retries exceeded with url: /rests/data/transportpce-portmapping:network/nodes=XPDRA01/node-info (Caused by NewConnectionError("HTTPConnection(host='localhost', port=8191): Failed to establish a new connection: [Errno 111] Connection refused"))
21:02:04 
21:02:04 ../.tox/tests121/lib/python3.11/site-packages/urllib3/util/retry.py:535: MaxRetryError
21:02:04 
21:02:04 During handling of the above exception, another exception occurred:
21:02:04 
21:02:04 self = 
21:02:04 
21:02:04     def test_09_xpdr_portmapping_info(self):
21:02:04 >       response = test_utils.get_portmapping_node_attr("XPDRA01", "node-info", None)
21:02:04         ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
21:02:04 
21:02:04 transportpce_tests/1.2.1/test01_portmapping.py:110: 
21:02:04 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
21:02:04 transportpce_tests/common/test_utils.py:519: in get_portmapping_node_attr
21:02:04     response = get_request(target_url)
21:02:04                ^^^^^^^^^^^^^^^^^^^^^^^
21:02:04 transportpce_tests/common/test_utils.py:117: in get_request
21:02:04     return requests.request(
21:02:04 ../.tox/tests121/lib/python3.11/site-packages/requests/api.py:59: in request
21:02:04     return session.request(method=method, url=url, **kwargs)
21:02:04            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
21:02:04 ../.tox/tests121/lib/python3.11/site-packages/requests/sessions.py:589: in request
21:02:04     resp = self.send(prep, **send_kwargs)
21:02:04            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
21:02:04 ../.tox/tests121/lib/python3.11/site-packages/requests/sessions.py:703: in send
21:02:04     r = adapter.send(request, **kwargs)
21:02:04         ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
21:02:04 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
21:02:04 
21:02:04 self = 
21:02:04 request = , stream = False
21:02:04 timeout = Timeout(connect=30, read=30, total=None), verify = True, cert = None
21:02:04 proxies = OrderedDict()
21:02:04 
21:02:04     def send(
21:02:04         self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None
21:02:04     ):
21:02:04         """Sends PreparedRequest object. Returns Response object.
21:02:04 
21:02:04         :param request: The :class:`PreparedRequest ` being sent.
21:02:04         :param stream: (optional) Whether to stream the request content.
21:02:04         :param timeout: (optional) How long to wait for the server to send
21:02:04             data before giving up, as a float, or a :ref:`(connect timeout,
21:02:04             read timeout) ` tuple.
21:02:04         :type timeout: float or tuple or urllib3 Timeout object
21:02:04         :param verify: (optional) Either a boolean, in which case it controls whether
21:02:04             we verify the server's TLS certificate, or a string, in which case it
21:02:04             must be a path to a CA bundle to use
21:02:04         :param cert: (optional) Any user-provided SSL certificate to be trusted.
21:02:04         :param proxies: (optional) The proxies dictionary to apply to the request.
21:02:04         :rtype: requests.Response
21:02:04         """
21:02:04 
21:02:04         try:
21:02:04             conn = self.get_connection_with_tls_context(
21:02:04                 request, verify, proxies=proxies, cert=cert
21:02:04             )
21:02:04         except LocationValueError as e:
21:02:04             raise InvalidURL(e, request=request)
21:02:04 
21:02:04         self.cert_verify(conn, request.url, verify, cert)
21:02:04         url = self.request_url(request, proxies)
21:02:04         self.add_headers(
21:02:04             request,
21:02:04             stream=stream,
21:02:04             timeout=timeout,
21:02:04             verify=verify,
21:02:04             cert=cert,
21:02:04             proxies=proxies,
21:02:04         )
21:02:04 
21:02:04         chunked = not (request.body is None or "Content-Length" in request.headers)
21:02:04 
21:02:04         if isinstance(timeout, tuple):
21:02:04             try:
21:02:04                 connect, read = timeout
21:02:04                 timeout = TimeoutSauce(connect=connect, read=read)
21:02:04             except ValueError:
21:02:04                 raise ValueError(
21:02:04                     f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, "
21:02:04                     f"or a single float to set both timeouts to the same value."
21:02:04                 )
21:02:04         elif isinstance(timeout, TimeoutSauce):
21:02:04             pass
21:02:04         else:
21:02:04             timeout = TimeoutSauce(connect=timeout, read=timeout)
21:02:04 
21:02:04         try:
21:02:04             resp = conn.urlopen(
21:02:04                 method=request.method,
21:02:04                 url=url,
21:02:04                 body=request.body,
21:02:04                 headers=request.headers,
21:02:04                 redirect=False,
21:02:04                 assert_same_host=False,
21:02:04                 preload_content=False,
21:02:04                 decode_content=False,
21:02:04                 retries=self.max_retries,
21:02:04                 timeout=timeout,
21:02:04                 chunked=chunked,
21:02:04             )
21:02:04 
21:02:04         except (ProtocolError, OSError) as err:
21:02:04             raise ConnectionError(err, request=request)
21:02:04 
21:02:04         except MaxRetryError as e:
21:02:04             if isinstance(e.reason, ConnectTimeoutError):
21:02:04                 # TODO: Remove this in 3.0.0: see #2811
21:02:04                 if not isinstance(e.reason, NewConnectionError):
21:02:04                     raise ConnectTimeout(e, request=request)
21:02:04 
21:02:04             if isinstance(e.reason, ResponseError):
21:02:04                 raise RetryError(e, request=request)
21:02:04 
21:02:04             if isinstance(e.reason, _ProxyError):
21:02:04                 raise ProxyError(e, request=request)
21:02:04 
21:02:04             if isinstance(e.reason, _SSLError):
21:02:04                 # This branch is for urllib3 v1.22 and later.
21:02:04                 raise SSLError(e, request=request)
21:02:04 
21:02:04 >           raise ConnectionError(e, request=request)
21:02:04 E           requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=8191): Max retries exceeded with url: /rests/data/transportpce-portmapping:network/nodes=XPDRA01/node-info (Caused by NewConnectionError("HTTPConnection(host='localhost', port=8191): Failed to establish a new connection: [Errno 111] Connection refused"))
21:02:04 
21:02:04 ../.tox/tests121/lib/python3.11/site-packages/requests/adapters.py:677: ConnectionError
21:02:04 ----------------------------- Captured stdout call -----------------------------
21:02:04 execution of test_09_xpdr_portmapping_info
21:02:04 ________ TestTransportPCEPortmapping.test_10_xpdr_portmapping_NETWORK1 _________
21:02:04 
21:02:04 self = 
21:02:04 
21:02:04     def _new_conn(self) -> socket.socket:
21:02:04         """Establish a socket connection and set nodelay settings on it.
21:02:04 
21:02:04         :return: New socket connection.
21:02:04         """
21:02:04         try:
21:02:04 >           sock = connection.create_connection(
21:02:04                 (self._dns_host, self.port),
21:02:04                 self.timeout,
21:02:04                 source_address=self.source_address,
21:02:04                 socket_options=self.socket_options,
21:02:04             )
21:02:04 
21:02:04 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:204: 
21:02:04 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
21:02:04 ../.tox/tests121/lib/python3.11/site-packages/urllib3/util/connection.py:85: in create_connection
21:02:04     raise err
21:02:04 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
21:02:04 
21:02:04 address = ('localhost', 8191), timeout = 30, source_address = None
21:02:04 socket_options = [(6, 1, 1)]
21:02:04 
21:02:04     def create_connection(
21:02:04         address: tuple[str, int],
21:02:04         timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT,
21:02:04         source_address: tuple[str, int] | None = None,
21:02:04         socket_options: _TYPE_SOCKET_OPTIONS | None = None,
21:02:04     ) -> socket.socket:
21:02:04         """Connect to *address* and return the socket object.
21:02:04 
21:02:04         Convenience function. Connect to *address* (a 2-tuple ``(host,
21:02:04         port)``) and return the socket object. Passing the optional
21:02:04         *timeout* parameter will set the timeout on the socket instance
21:02:04         before attempting to connect. If no *timeout* is supplied, the
21:02:04         global default timeout setting returned by :func:`socket.getdefaulttimeout`
21:02:04         is used. If *source_address* is set it must be a tuple of (host, port)
21:02:04         for the socket to bind as a source address before making the connection.
21:02:04         An host of '' or port 0 tells the OS to use the default.
21:02:04         """
21:02:04 
21:02:04         host, port = address
21:02:04         if host.startswith("["):
21:02:04             host = host.strip("[]")
21:02:04         err = None
21:02:04 
21:02:04         # Using the value from allowed_gai_family() in the context of getaddrinfo lets
21:02:04         # us select whether to work with IPv4 DNS records, IPv6 records, or both.
21:02:04         # The original create_connection function always returns all records.
21:02:04         family = allowed_gai_family()
21:02:04 
21:02:04         try:
21:02:04             host.encode("idna")
21:02:04         except UnicodeError:
21:02:04             raise LocationParseError(f"'{host}', label empty or too long") from None
21:02:04 
21:02:04         for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM):
21:02:04             af, socktype, proto, canonname, sa = res
21:02:04             sock = None
21:02:04             try:
21:02:04                 sock = socket.socket(af, socktype, proto)
21:02:04 
21:02:04                 # If provided, set socket level options before connecting.
21:02:04 _set_socket_options(sock, socket_options) 21:02:04 21:02:04 if timeout is not _DEFAULT_TIMEOUT: 21:02:04 sock.settimeout(timeout) 21:02:04 if source_address: 21:02:04 sock.bind(source_address) 21:02:04 > sock.connect(sa) 21:02:04 E ConnectionRefusedError: [Errno 111] Connection refused 21:02:04 21:02:04 ../.tox/tests121/lib/python3.11/site-packages/urllib3/util/connection.py:73: ConnectionRefusedError 21:02:04 21:02:04 The above exception was the direct cause of the following exception: 21:02:04 21:02:04 self = 21:02:04 method = 'GET' 21:02:04 url = '/rests/data/transportpce-portmapping:network/nodes=XPDRA01/mapping=XPDR1-NETWORK1' 21:02:04 body = None 21:02:04 headers = {'User-Agent': 'python-requests/2.32.5', 'Accept-Encoding': 'gzip, deflate', 'Accept': 'application/json', 'Connection': 'keep-alive', 'Content-Type': 'application/json', 'Authorization': 'Basic YWRtaW46YWRtaW4='} 21:02:04 retries = Retry(total=0, connect=None, read=False, redirect=None, status=None) 21:02:04 redirect = False, assert_same_host = False 21:02:04 timeout = Timeout(connect=30, read=30, total=None), pool_timeout = None 21:02:04 release_conn = False, chunked = False, body_pos = None, preload_content = False 21:02:04 decode_content = False, response_kw = {} 21:02:04 parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/rests/data/transportpce-portmapping:network/nodes=XPDRA01/mapping=XPDR1-NETWORK1', query=None, fragment=None) 21:02:04 destination_scheme = None, conn = None, release_this_conn = True 21:02:04 http_tunnel_required = False, err = None, clean_exit = False 21:02:04 21:02:04 def urlopen( # type: ignore[override] 21:02:04 self, 21:02:04 method: str, 21:02:04 url: str, 21:02:04 body: _TYPE_BODY | None = None, 21:02:04 headers: typing.Mapping[str, str] | None = None, 21:02:04 retries: Retry | bool | int | None = None, 21:02:04 redirect: bool = True, 21:02:04 assert_same_host: bool = True, 21:02:04 timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 21:02:04 
pool_timeout: int | None = None, 21:02:04 release_conn: bool | None = None, 21:02:04 chunked: bool = False, 21:02:04 body_pos: _TYPE_BODY_POSITION | None = None, 21:02:04 preload_content: bool = True, 21:02:04 decode_content: bool = True, 21:02:04 **response_kw: typing.Any, 21:02:04 ) -> BaseHTTPResponse: 21:02:04 """ 21:02:04 Get a connection from the pool and perform an HTTP request. This is the 21:02:04 lowest level call for making a request, so you'll need to specify all 21:02:04 the raw details. 21:02:04 21:02:04 .. note:: 21:02:04 21:02:04 More commonly, it's appropriate to use a convenience method 21:02:04 such as :meth:`request`. 21:02:04 21:02:04 .. note:: 21:02:04 21:02:04 `release_conn` will only behave as expected if 21:02:04 `preload_content=False` because we want to make 21:02:04 `preload_content=False` the default behaviour someday soon without 21:02:04 breaking backwards compatibility. 21:02:04 21:02:04 :param method: 21:02:04 HTTP request method (such as GET, POST, PUT, etc.) 21:02:04 21:02:04 :param url: 21:02:04 The URL to perform the request on. 21:02:04 21:02:04 :param body: 21:02:04 Data to send in the request body, either :class:`str`, :class:`bytes`, 21:02:04 an iterable of :class:`str`/:class:`bytes`, or a file-like object. 21:02:04 21:02:04 :param headers: 21:02:04 Dictionary of custom headers to send, such as User-Agent, 21:02:04 If-None-Match, etc. If None, pool headers are used. If provided, 21:02:04 these headers completely replace any pool-specific headers. 21:02:04 21:02:04 :param retries: 21:02:04 Configure the number of retries to allow before raising a 21:02:04 :class:`~urllib3.exceptions.MaxRetryError` exception. 21:02:04 21:02:04 If ``None`` (default) will retry 3 times, see ``Retry.DEFAULT``. Pass a 21:02:04 :class:`~urllib3.util.retry.Retry` object for fine-grained control 21:02:04 over different types of retries. 21:02:04 Pass an integer number to retry connection errors that many times, 21:02:04 but no other types of errors. 
Pass zero to never retry. 21:02:04 21:02:04 If ``False``, then retries are disabled and any exception is raised 21:02:04 immediately. Also, instead of raising a MaxRetryError on redirects, 21:02:04 the redirect response will be returned. 21:02:04 21:02:04 :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. 21:02:04 21:02:04 :param redirect: 21:02:04 If True, automatically handle redirects (status codes 301, 302, 21:02:04 303, 307, 308). Each redirect counts as a retry. Disabling retries 21:02:04 will disable redirect, too. 21:02:04 21:02:04 :param assert_same_host: 21:02:04 If ``True``, will make sure that the host of the pool requests is 21:02:04 consistent else will raise HostChangedError. When ``False``, you can 21:02:04 use the pool on an HTTP proxy and request foreign hosts. 21:02:04 21:02:04 :param timeout: 21:02:04 If specified, overrides the default timeout for this one 21:02:04 request. It may be a float (in seconds) or an instance of 21:02:04 :class:`urllib3.util.Timeout`. 21:02:04 21:02:04 :param pool_timeout: 21:02:04 If set and the pool is set to block=True, then this method will 21:02:04 block for ``pool_timeout`` seconds and raise EmptyPoolError if no 21:02:04 connection is available within the time period. 21:02:04 21:02:04 :param bool preload_content: 21:02:04 If True, the response's body will be preloaded into memory. 21:02:04 21:02:04 :param bool decode_content: 21:02:04 If True, will attempt to decode the body based on the 21:02:04 'content-encoding' header. 21:02:04 21:02:04 :param release_conn: 21:02:04 If False, then the urlopen call will not release the connection 21:02:04 back into the pool once a response is received (but will release if 21:02:04 you read the entire contents of the response such as when 21:02:04 `preload_content=True`). This is useful if you're not preloading 21:02:04 the response's content immediately. 
            You will need to call
            ``r.release_conn()`` on the response ``r`` to return the connection
            back into the pool. If None, it takes the value of ``preload_content``
            which defaults to ``True``.

        :param bool chunked:
            If True, urllib3 will send the body using chunked transfer
            encoding. Otherwise, urllib3 will send the body using the standard
            content-length form. Defaults to False.

        :param int body_pos:
            Position to seek to in file-like body in the event of a retry or
            redirect. Typically this won't need to be set because urllib3 will
            auto-populate the value when needed.
        """
        parsed_url = parse_url(url)
        destination_scheme = parsed_url.scheme

        if headers is None:
            headers = self.headers

        if not isinstance(retries, Retry):
            retries = Retry.from_int(retries, redirect=redirect, default=self.retries)

        if release_conn is None:
            release_conn = preload_content

        # Check host
        if assert_same_host and not self.is_same_host(url):
            raise HostChangedError(self, url, retries)

        # Ensure that the URL we're connecting to is properly encoded
        if url.startswith("/"):
            url = to_str(_encode_target(url))
        else:
            url = to_str(parsed_url.url)

        conn = None

        # Track whether `conn` needs to be released before
        # returning/raising/recursing. Update this variable if necessary, and
        # leave `release_conn` constant throughout the function. That way, if
        # the function recurses, the original value of `release_conn` will be
        # passed down into the recursive call, and its value will be respected.
        #
        # See issue #651 [1] for details.
        #
        # [1]
        release_this_conn = release_conn

        http_tunnel_required = connection_requires_http_tunnel(
            self.proxy, self.proxy_config, destination_scheme
        )

        # Merge the proxy headers. Only done when not using HTTP CONNECT. We
        # have to copy the headers dict so we can safely change it without those
        # changes being reflected in anyone else's copy.
        if not http_tunnel_required:
            headers = headers.copy()  # type: ignore[attr-defined]
            headers.update(self.proxy_headers)  # type: ignore[union-attr]

        # Must keep the exception bound to a separate variable or else Python 3
        # complains about UnboundLocalError.
        err = None

        # Keep track of whether we cleanly exited the except block. This
        # ensures we do proper cleanup in finally.
        clean_exit = False

        # Rewind body position, if needed. Record current position
        # for future rewinds in the event of a redirect/retry.
        body_pos = set_file_position(body, body_pos)

        try:
            # Request a connection from the queue.
            timeout_obj = self._get_timeout(timeout)
            conn = self._get_conn(timeout=pool_timeout)

            conn.timeout = timeout_obj.connect_timeout  # type: ignore[assignment]

            # Is this a closed/new connection that requires CONNECT tunnelling?
            if self.proxy is not None and http_tunnel_required and conn.is_closed:
                try:
                    self._prepare_proxy(conn)
                except (BaseSSLError, OSError, SocketTimeout) as e:
                    self._raise_timeout(
                        err=e, url=self.proxy.url, timeout_value=conn.timeout
                    )
                    raise

            # If we're going to release the connection in ``finally:``, then
            # the response doesn't need to know about the connection. Otherwise
            # it will also try to release it and we'll have a double-release
            # mess.
            response_conn = conn if not release_conn else None

            # Make the request on the HTTPConnection object
>           response = self._make_request(
                conn,
                method,
                url,
                timeout=timeout_obj,
                body=body,
                headers=headers,
                chunked=chunked,
                retries=retries,
                response_conn=response_conn,
                preload_content=preload_content,
                decode_content=decode_content,
                **response_kw,
            )

../.tox/tests121/lib/python3.11/site-packages/urllib3/connectionpool.py:787:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
../.tox/tests121/lib/python3.11/site-packages/urllib3/connectionpool.py:493: in _make_request
    conn.request(
../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:500: in request
    self.endheaders()
/opt/pyenv/versions/3.11.10/lib/python3.11/http/client.py:1298: in endheaders
    self._send_output(message_body, encode_chunked=encode_chunked)
/opt/pyenv/versions/3.11.10/lib/python3.11/http/client.py:1058: in _send_output
    self.send(msg)
/opt/pyenv/versions/3.11.10/lib/python3.11/http/client.py:996: in send
    self.connect()
../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:331: in connect
    self.sock = self._new_conn()
                ^^^^^^^^^^^^^^^^
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self =

    def _new_conn(self) -> socket.socket:
        """Establish a socket connection and set nodelay settings on it.

        :return: New socket connection.
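The urlopen docstring in the traceback above spells out the retry contract: an integer ``retries`` only covers connection errors, zero never retries, and exhausting the budget raises ``MaxRetryError``. A minimal pure-Python sketch of that bookkeeping follows; ``RetryBudget`` and ``fetch_with_retries`` are hypothetical stand-ins for illustration, not urllib3's real ``Retry`` class, and a plain ``RuntimeError`` stands in for ``MaxRetryError``:

```python
# Hypothetical stand-in for urllib3's Retry accounting, illustration only.
class RetryBudget:
    def __init__(self, total: int):
        self.total = total

    def increment(self) -> "RetryBudget":
        # Spend one retry; once the budget is exhausted, raise
        # (urllib3 raises MaxRetryError at the equivalent point).
        if self.total <= 0:
            raise RuntimeError("max retries exceeded")
        return RetryBudget(self.total - 1)


def fetch_with_retries(attempt, retries: int):
    """Call `attempt` until it succeeds or `retries` extra tries are spent."""
    budget = RetryBudget(retries)
    while True:
        try:
            return attempt()
        except ConnectionRefusedError:
            budget = budget.increment()
```

With ``retries=0`` the first refused connection propagates immediately, matching "Pass zero to never retry"; the ``Retry(total=0, ...)`` objects visible later in this log behave the same way.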
        """
        try:
            sock = connection.create_connection(
                (self._dns_host, self.port),
                self.timeout,
                source_address=self.source_address,
                socket_options=self.socket_options,
            )
        except socket.gaierror as e:
            raise NameResolutionError(self.host, self, e) from e
        except SocketTimeout as e:
            raise ConnectTimeoutError(
                self,
                f"Connection to {self.host} timed out. (connect timeout={self.timeout})",
            ) from e

        except OSError as e:
>           raise NewConnectionError(
                self, f"Failed to establish a new connection: {e}"
            ) from e
E           urllib3.exceptions.NewConnectionError: HTTPConnection(host='localhost', port=8191): Failed to establish a new connection: [Errno 111] Connection refused

../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:219: NewConnectionError

The above exception was the direct cause of the following exception:

self =
request = , stream = False
timeout = Timeout(connect=30, read=30, total=None), verify = True, cert = None
proxies = OrderedDict()

    def send(
        self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None
    ):
        """Sends PreparedRequest object. Returns Response object.

        :param request: The :class:`PreparedRequest ` being sent.
        :param stream: (optional) Whether to stream the request content.
        :param timeout: (optional) How long to wait for the server to send
            data before giving up, as a float, or a :ref:`(connect timeout,
            read timeout) ` tuple.
        :type timeout: float or tuple or urllib3 Timeout object
        :param verify: (optional) Either a boolean, in which case it controls whether
            we verify the server's TLS certificate, or a string, in which case it
            must be a path to a CA bundle to use
        :param cert: (optional) Any user-provided SSL certificate to be trusted.
        :param proxies: (optional) The proxies dictionary to apply to the request.
        :rtype: requests.Response
        """

        try:
            conn = self.get_connection_with_tls_context(
                request, verify, proxies=proxies, cert=cert
            )
        except LocationValueError as e:
            raise InvalidURL(e, request=request)

        self.cert_verify(conn, request.url, verify, cert)
        url = self.request_url(request, proxies)
        self.add_headers(
            request,
            stream=stream,
            timeout=timeout,
            verify=verify,
            cert=cert,
            proxies=proxies,
        )

        chunked = not (request.body is None or "Content-Length" in request.headers)

        if isinstance(timeout, tuple):
            try:
                connect, read = timeout
                timeout = TimeoutSauce(connect=connect, read=read)
            except ValueError:
                raise ValueError(
                    f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, "
                    f"or a single float to set both timeouts to the same value."
                )
        elif isinstance(timeout, TimeoutSauce):
            pass
        else:
            timeout = TimeoutSauce(connect=timeout, read=timeout)

        try:
>           resp = conn.urlopen(
                method=request.method,
                url=url,
                body=request.body,
                headers=request.headers,
                redirect=False,
                assert_same_host=False,
                preload_content=False,
                decode_content=False,
                retries=self.max_retries,
                timeout=timeout,
                chunked=chunked,
            )

../.tox/tests121/lib/python3.11/site-packages/requests/adapters.py:644:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
../.tox/tests121/lib/python3.11/site-packages/urllib3/connectionpool.py:841: in urlopen
    retries = retries.increment(
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self = Retry(total=0, connect=None, read=False, redirect=None, status=None)
method = 'GET'
url = '/rests/data/transportpce-portmapping:network/nodes=XPDRA01/mapping=XPDR1-NETWORK1'
response = None
error = NewConnectionError("HTTPConnection(host='localhost', port=8191): Failed to establish a new connection: [Errno 111] Connection refused")
_pool =
_stacktrace =

    def increment(
        self,
        method: str | None = None,
        url: str | None = None,
        response: BaseHTTPResponse | None = None,
        error: Exception | None = None,
        _pool: ConnectionPool | None = None,
        _stacktrace: TracebackType | None = None,
    ) -> Self:
        """Return a new Retry object with incremented retry counters.

        :param response: A response object, or None, if the server did not
            return a response.
        :type response: :class:`~urllib3.response.BaseHTTPResponse`
        :param Exception error: An error encountered during the request, or
            None if the response was received successfully.

        :return: A new ``Retry`` object.
        """
        if self.total is False and error:
            # Disabled, indicate to re-raise the error.
            raise reraise(type(error), error, _stacktrace)

        total = self.total
        if total is not None:
            total -= 1

        connect = self.connect
        read = self.read
        redirect = self.redirect
        status_count = self.status
        other = self.other
        cause = "unknown"
        status = None
        redirect_location = None

        if error and self._is_connection_error(error):
            # Connect retry?
            if connect is False:
                raise reraise(type(error), error, _stacktrace)
            elif connect is not None:
                connect -= 1

        elif error and self._is_read_error(error):
            # Read retry?
            if read is False or method is None or not self._is_method_retryable(method):
                raise reraise(type(error), error, _stacktrace)
            elif read is not None:
                read -= 1

        elif error:
            # Other retry?
            if other is not None:
                other -= 1

        elif response and response.get_redirect_location():
            # Redirect retry?
            if redirect is not None:
                redirect -= 1
            cause = "too many redirects"
            response_redirect_location = response.get_redirect_location()
            if response_redirect_location:
                redirect_location = response_redirect_location
            status = response.status

        else:
            # Incrementing because of a server error like a 500 in
            # status_forcelist and the given method is in the allowed_methods
            cause = ResponseError.GENERIC_ERROR
            if response and response.status:
                if status_count is not None:
                    status_count -= 1
                cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status)
                status = response.status

        history = self.history + (
            RequestHistory(method, url, error, status, redirect_location),
        )

        new_retry = self.new(
            total=total,
            connect=connect,
            read=read,
            redirect=redirect,
            status=status_count,
            other=other,
            history=history,
        )

        if new_retry.is_exhausted():
            reason = error or ResponseError(cause)
>           raise MaxRetryError(_pool, url, reason) from reason  # type: ignore[arg-type]
            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
E           urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=8191): Max retries exceeded with url: /rests/data/transportpce-portmapping:network/nodes=XPDRA01/mapping=XPDR1-NETWORK1 (Caused by NewConnectionError("HTTPConnection(host='localhost', port=8191): Failed to establish a new connection: [Errno 111] Connection refused"))

../.tox/tests121/lib/python3.11/site-packages/urllib3/util/retry.py:535: MaxRetryError

During handling of the above exception, another exception occurred:

self =

    def test_10_xpdr_portmapping_NETWORK1(self):
>       response =
            test_utils.get_portmapping_node_attr("XPDRA01", "mapping", "XPDR1-NETWORK1")
            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

transportpce_tests/1.2.1/test01_portmapping.py:123:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
transportpce_tests/common/test_utils.py:519: in get_portmapping_node_attr
    response = get_request(target_url)
               ^^^^^^^^^^^^^^^^^^^^^^^
transportpce_tests/common/test_utils.py:117: in get_request
    return requests.request(
../.tox/tests121/lib/python3.11/site-packages/requests/api.py:59: in request
    return session.request(method=method, url=url, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
../.tox/tests121/lib/python3.11/site-packages/requests/sessions.py:589: in request
    resp = self.send(prep, **send_kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
../.tox/tests121/lib/python3.11/site-packages/requests/sessions.py:703: in send
    r = adapter.send(request, **kwargs)
        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self =
request = , stream = False
timeout = Timeout(connect=30, read=30, total=None), verify = True, cert = None
proxies = OrderedDict()

    def send(
        self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None
    ):
        """Sends PreparedRequest object. Returns Response object.

        :param request: The :class:`PreparedRequest ` being sent.
        :param stream: (optional) Whether to stream the request content.
        :param timeout: (optional) How long to wait for the server to send
            data before giving up, as a float, or a :ref:`(connect timeout,
            read timeout) ` tuple.
        :type timeout: float or tuple or urllib3 Timeout object
        :param verify: (optional) Either a boolean, in which case it controls whether
            we verify the server's TLS certificate, or a string, in which case it
            must be a path to a CA bundle to use
        :param cert: (optional) Any user-provided SSL certificate to be trusted.
        :param proxies: (optional) The proxies dictionary to apply to the request.
        :rtype: requests.Response
        """

        try:
            conn = self.get_connection_with_tls_context(
                request, verify, proxies=proxies, cert=cert
            )
        except LocationValueError as e:
            raise InvalidURL(e, request=request)

        self.cert_verify(conn, request.url, verify, cert)
        url = self.request_url(request, proxies)
        self.add_headers(
            request,
            stream=stream,
            timeout=timeout,
            verify=verify,
            cert=cert,
            proxies=proxies,
        )

        chunked = not (request.body is None or "Content-Length" in request.headers)

        if isinstance(timeout, tuple):
            try:
                connect, read = timeout
                timeout = TimeoutSauce(connect=connect, read=read)
            except ValueError:
                raise ValueError(
                    f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, "
                    f"or a single float to set both timeouts to the same value."
                )
        elif isinstance(timeout, TimeoutSauce):
            pass
        else:
            timeout = TimeoutSauce(connect=timeout, read=timeout)

        try:
            resp = conn.urlopen(
                method=request.method,
                url=url,
                body=request.body,
                headers=request.headers,
                redirect=False,
                assert_same_host=False,
                preload_content=False,
                decode_content=False,
                retries=self.max_retries,
                timeout=timeout,
                chunked=chunked,
            )

        except (ProtocolError, OSError) as err:
            raise ConnectionError(err, request=request)

        except MaxRetryError as e:
            if isinstance(e.reason, ConnectTimeoutError):
                # TODO: Remove this in 3.0.0: see #2811
                if not isinstance(e.reason, NewConnectionError):
                    raise ConnectTimeout(e, request=request)

            if isinstance(e.reason, ResponseError):
                raise RetryError(e, request=request)

            if isinstance(e.reason, _ProxyError):
                raise ProxyError(e, request=request)

            if isinstance(e.reason, _SSLError):
                # This branch is for urllib3 v1.22 and later.
                raise SSLError(e, request=request)

>           raise ConnectionError(e, request=request)
E           requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=8191): Max retries exceeded with url: /rests/data/transportpce-portmapping:network/nodes=XPDRA01/mapping=XPDR1-NETWORK1 (Caused by NewConnectionError("HTTPConnection(host='localhost', port=8191): Failed to establish a new connection: [Errno 111] Connection refused"))

../.tox/tests121/lib/python3.11/site-packages/requests/adapters.py:677: ConnectionError
----------------------------- Captured stdout call -----------------------------
execution of test_10_xpdr_portmapping_NETWORK1
________ TestTransportPCEPortmapping.test_11_xpdr_portmapping_NETWORK2 _________

self =

    def _new_conn(self) -> socket.socket:
        """Establish a socket connection and set nodelay settings on it.

        :return: New socket connection.
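Every failure in this run shows the same root cause: nothing is listening on localhost:8191 (the controller the tests expect), so each request dies with ECONNREFUSED once the Retry(total=0) budget is spent, and each test pays the full 30-second connect machinery and prints a deep traceback. A small stdlib preflight check along these lines, a hypothetical helper not present in transportpce's test_utils, would fail the suite fast with one clear message:

```python
import socket


def port_is_open(host: str, port: int, timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to (host, port) can be established."""
    try:
        # create_connection resolves the host and attempts a TCP connect;
        # ECONNREFUSED surfaces as an OSError subclass.
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False
```

Used from a setUpClass hook, e.g. ``assert port_is_open("localhost", 8191), "controller is not listening"``, it would replace dozens of identical tracebacks with a single assertion message.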
        """
        try:
>           sock = connection.create_connection(
                (self._dns_host, self.port),
                self.timeout,
                source_address=self.source_address,
                socket_options=self.socket_options,
            )

../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:204:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
../.tox/tests121/lib/python3.11/site-packages/urllib3/util/connection.py:85: in create_connection
    raise err
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

address = ('localhost', 8191), timeout = 30, source_address = None
socket_options = [(6, 1, 1)]

    def create_connection(
        address: tuple[str, int],
        timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT,
        source_address: tuple[str, int] | None = None,
        socket_options: _TYPE_SOCKET_OPTIONS | None = None,
    ) -> socket.socket:
        """Connect to *address* and return the socket object.

        Convenience function. Connect to *address* (a 2-tuple ``(host,
        port)``) and return the socket object. Passing the optional
        *timeout* parameter will set the timeout on the socket instance
        before attempting to connect. If no *timeout* is supplied, the
        global default timeout setting returned by :func:`socket.getdefaulttimeout`
        is used. If *source_address* is set it must be a tuple of (host, port)
        for the socket to bind as a source address before making the connection.
        An host of '' or port 0 tells the OS to use the default.
        """

        host, port = address
        if host.startswith("["):
            host = host.strip("[]")
        err = None

        # Using the value from allowed_gai_family() in the context of getaddrinfo lets
        # us select whether to work with IPv4 DNS records, IPv6 records, or both.
        # The original create_connection function always returns all records.
        family = allowed_gai_family()

        try:
            host.encode("idna")
        except UnicodeError:
            raise LocationParseError(f"'{host}', label empty or too long") from None

        for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM):
            af, socktype, proto, canonname, sa = res
            sock = None
            try:
                sock = socket.socket(af, socktype, proto)

                # If provided, set socket level options before connecting.
                _set_socket_options(sock, socket_options)

                if timeout is not _DEFAULT_TIMEOUT:
                    sock.settimeout(timeout)
                if source_address:
                    sock.bind(source_address)
>               sock.connect(sa)
E               ConnectionRefusedError: [Errno 111] Connection refused

../.tox/tests121/lib/python3.11/site-packages/urllib3/util/connection.py:73: ConnectionRefusedError

The above exception was the direct cause of the following exception:

self =
method = 'GET'
url = '/rests/data/transportpce-portmapping:network/nodes=XPDRA01/mapping=XPDR1-NETWORK2'
body = None
headers = {'User-Agent': 'python-requests/2.32.5', 'Accept-Encoding': 'gzip, deflate', 'Accept': 'application/json', 'Connection': 'keep-alive', 'Content-Type': 'application/json', 'Authorization': 'Basic YWRtaW46YWRtaW4='}
retries = Retry(total=0, connect=None, read=False, redirect=None, status=None)
redirect = False, assert_same_host = False
timeout = Timeout(connect=30, read=30, total=None), pool_timeout = None
release_conn = False, chunked = False, body_pos = None, preload_content = False
decode_content = False, response_kw = {}
parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/rests/data/transportpce-portmapping:network/nodes=XPDRA01/mapping=XPDR1-NETWORK2', query=None, fragment=None)
destination_scheme = None, conn = None, release_this_conn = True
http_tunnel_required = False, err = None, clean_exit = False

    def urlopen(  # type: ignore[override]
        self,
        method: str,
        url: str,
        body: _TYPE_BODY | None = None,
        headers: typing.Mapping[str, str] | None = None,
        retries: Retry | bool | int | None = None,
        redirect: bool = True,
        assert_same_host: bool = True,
        timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT,
        pool_timeout: int | None = None,
        release_conn: bool | None = None,
        chunked: bool = False,
        body_pos: _TYPE_BODY_POSITION | None = None,
        preload_content: bool = True,
        decode_content: bool = True,
        **response_kw: typing.Any,
    ) -> BaseHTTPResponse:
        """
        Get a connection from the pool and perform an HTTP request. This is the
        lowest level call for making a request, so you'll need to specify all
        the raw details.

        .. note::

           More commonly, it's appropriate to use a convenience method
           such as :meth:`request`.

        .. note::

           `release_conn` will only behave as expected if
           `preload_content=False` because we want to make
           `preload_content=False` the default behaviour someday soon without
           breaking backwards compatibility.

        :param method:
            HTTP request method (such as GET, POST, PUT, etc.)

        :param url:
            The URL to perform the request on.
        :param body:
            Data to send in the request body, either :class:`str`, :class:`bytes`,
            an iterable of :class:`str`/:class:`bytes`, or a file-like object.

        :param headers:
            Dictionary of custom headers to send, such as User-Agent,
            If-None-Match, etc. If None, pool headers are used. If provided,
            these headers completely replace any pool-specific headers.

        :param retries:
            Configure the number of retries to allow before raising a
            :class:`~urllib3.exceptions.MaxRetryError` exception.

            If ``None`` (default) will retry 3 times, see ``Retry.DEFAULT``. Pass a
            :class:`~urllib3.util.retry.Retry` object for fine-grained control
            over different types of retries.
            Pass an integer number to retry connection errors that many times,
            but no other types of errors. Pass zero to never retry.

            If ``False``, then retries are disabled and any exception is raised
            immediately. Also, instead of raising a MaxRetryError on redirects,
            the redirect response will be returned.

        :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int.

        :param redirect:
            If True, automatically handle redirects (status codes 301, 302,
            303, 307, 308). Each redirect counts as a retry. Disabling retries
            will disable redirect, too.

        :param assert_same_host:
            If ``True``, will make sure that the host of the pool requests is
            consistent else will raise HostChangedError. When ``False``, you can
            use the pool on an HTTP proxy and request foreign hosts.

        :param timeout:
            If specified, overrides the default timeout for this one
            request. It may be a float (in seconds) or an instance of
            :class:`urllib3.util.Timeout`.

        :param pool_timeout:
            If set and the pool is set to block=True, then this method will
            block for ``pool_timeout`` seconds and raise EmptyPoolError if no
            connection is available within the time period.

        :param bool preload_content:
            If True, the response's body will be preloaded into memory.

        :param bool decode_content:
            If True, will attempt to decode the body based on the
            'content-encoding' header.

        :param release_conn:
            If False, then the urlopen call will not release the connection
            back into the pool once a response is received (but will release if
            you read the entire contents of the response such as when
            `preload_content=True`). This is useful if you're not preloading
            the response's content immediately. You will need to call
            ``r.release_conn()`` on the response ``r`` to return the connection
            back into the pool. If None, it takes the value of ``preload_content``
            which defaults to ``True``.

        :param bool chunked:
            If True, urllib3 will send the body using chunked transfer
            encoding. Otherwise, urllib3 will send the body using the standard
            content-length form. Defaults to False.

        :param int body_pos:
            Position to seek to in file-like body in the event of a retry or
            redirect. Typically this won't need to be set because urllib3 will
            auto-populate the value when needed.
        """
        parsed_url = parse_url(url)
        destination_scheme = parsed_url.scheme

        if headers is None:
            headers = self.headers

        if not isinstance(retries, Retry):
            retries = Retry.from_int(retries, redirect=redirect, default=self.retries)

        if release_conn is None:
            release_conn = preload_content

        # Check host
        if assert_same_host and not self.is_same_host(url):
            raise HostChangedError(self, url, retries)

        # Ensure that the URL we're connecting to is properly encoded
        if url.startswith("/"):
            url = to_str(_encode_target(url))
        else:
            url = to_str(parsed_url.url)

        conn = None

        # Track whether `conn` needs to be released before
        # returning/raising/recursing. Update this variable if necessary, and
        # leave `release_conn` constant throughout the function. That way, if
        # the function recurses, the original value of `release_conn` will be
        # passed down into the recursive call, and its value will be respected.
        #
        # See issue #651 [1] for details.
        #
        # [1]
        release_this_conn = release_conn

        http_tunnel_required = connection_requires_http_tunnel(
            self.proxy, self.proxy_config, destination_scheme
        )

        # Merge the proxy headers. Only done when not using HTTP CONNECT. We
        # have to copy the headers dict so we can safely change it without those
        # changes being reflected in anyone else's copy.
        if not http_tunnel_required:
            headers = headers.copy()  # type: ignore[attr-defined]
            headers.update(self.proxy_headers)  # type: ignore[union-attr]

        # Must keep the exception bound to a separate variable or else Python 3
        # complains about UnboundLocalError.
        err = None

        # Keep track of whether we cleanly exited the except block. This
        # ensures we do proper cleanup in finally.
        clean_exit = False

        # Rewind body position, if needed. Record current position
        # for future rewinds in the event of a redirect/retry.
        body_pos = set_file_position(body, body_pos)

        try:
            # Request a connection from the queue.
            timeout_obj = self._get_timeout(timeout)
            conn = self._get_conn(timeout=pool_timeout)

            conn.timeout = timeout_obj.connect_timeout  # type: ignore[assignment]

            # Is this a closed/new connection that requires CONNECT tunnelling?
            if self.proxy is not None and http_tunnel_required and conn.is_closed:
                try:
                    self._prepare_proxy(conn)
                except (BaseSSLError, OSError, SocketTimeout) as e:
                    self._raise_timeout(
                        err=e, url=self.proxy.url, timeout_value=conn.timeout
                    )
                    raise

            # If we're going to release the connection in ``finally:``, then
            # the response doesn't need to know about the connection. Otherwise
            # it will also try to release it and we'll have a double-release
            # mess.
            response_conn = conn if not release_conn else None

            # Make the request on the HTTPConnection object
>           response = self._make_request(
                conn,
                method,
                url,
                timeout=timeout_obj,
                body=body,
                headers=headers,
                chunked=chunked,
                retries=retries,
                response_conn=response_conn,
                preload_content=preload_content,
                decode_content=decode_content,
                **response_kw,
            )

../.tox/tests121/lib/python3.11/site-packages/urllib3/connectionpool.py:787:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
../.tox/tests121/lib/python3.11/site-packages/urllib3/connectionpool.py:493: in _make_request
    conn.request(
../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:500: in request
    self.endheaders()
/opt/pyenv/versions/3.11.10/lib/python3.11/http/client.py:1298: in endheaders
    self._send_output(message_body, encode_chunked=encode_chunked)
/opt/pyenv/versions/3.11.10/lib/python3.11/http/client.py:1058: in _send_output
    self.send(msg)
/opt/pyenv/versions/3.11.10/lib/python3.11/http/client.py:996: in send
    self.connect()
../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:331: in connect
    self.sock = self._new_conn()
                ^^^^^^^^^^^^^^^^
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self =

    def _new_conn(self) -> socket.socket:
        """Establish a socket connection and set nodelay settings on it.

        :return: New socket connection.
        """
        try:
            sock = connection.create_connection(
                (self._dns_host, self.port),
                self.timeout,
                source_address=self.source_address,
                socket_options=self.socket_options,
            )
        except socket.gaierror as e:
            raise NameResolutionError(self.host, self, e) from e
        except SocketTimeout as e:
            raise ConnectTimeoutError(
                self,
                f"Connection to {self.host} timed out. (connect timeout={self.timeout})",
            ) from e

        except OSError as e:
>           raise NewConnectionError(
                self, f"Failed to establish a new connection: {e}"
            ) from e
E           urllib3.exceptions.NewConnectionError: HTTPConnection(host='localhost', port=8191): Failed to establish a new connection: [Errno 111] Connection refused

../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:219: NewConnectionError

The above exception was the direct cause of the following exception:

self =
request = , stream = False
timeout = Timeout(connect=30, read=30, total=None), verify = True, cert = None
proxies = OrderedDict()

    def send(
        self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None
    ):
        """Sends PreparedRequest object. Returns Response object.

        :param request: The :class:`PreparedRequest ` being sent.
        :param stream: (optional) Whether to stream the request content.
        :param timeout: (optional) How long to wait for the server to send
            data before giving up, as a float, or a :ref:`(connect timeout,
            read timeout) ` tuple.
21:02:04 :type timeout: float or tuple or urllib3 Timeout object 21:02:04 :param verify: (optional) Either a boolean, in which case it controls whether 21:02:04 we verify the server's TLS certificate, or a string, in which case it 21:02:04 must be a path to a CA bundle to use 21:02:04 :param cert: (optional) Any user-provided SSL certificate to be trusted. 21:02:04 :param proxies: (optional) The proxies dictionary to apply to the request. 21:02:04 :rtype: requests.Response 21:02:04 """ 21:02:04 21:02:04 try: 21:02:04 conn = self.get_connection_with_tls_context( 21:02:04 request, verify, proxies=proxies, cert=cert 21:02:04 ) 21:02:04 except LocationValueError as e: 21:02:04 raise InvalidURL(e, request=request) 21:02:04 21:02:04 self.cert_verify(conn, request.url, verify, cert) 21:02:04 url = self.request_url(request, proxies) 21:02:04 self.add_headers( 21:02:04 request, 21:02:04 stream=stream, 21:02:04 timeout=timeout, 21:02:04 verify=verify, 21:02:04 cert=cert, 21:02:04 proxies=proxies, 21:02:04 ) 21:02:04 21:02:04 chunked = not (request.body is None or "Content-Length" in request.headers) 21:02:04 21:02:04 if isinstance(timeout, tuple): 21:02:04 try: 21:02:04 connect, read = timeout 21:02:04 timeout = TimeoutSauce(connect=connect, read=read) 21:02:04 except ValueError: 21:02:04 raise ValueError( 21:02:04 f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 21:02:04 f"or a single float to set both timeouts to the same value." 
21:02:04 ) 21:02:04 elif isinstance(timeout, TimeoutSauce): 21:02:04 pass 21:02:04 else: 21:02:04 timeout = TimeoutSauce(connect=timeout, read=timeout) 21:02:04 21:02:04 try: 21:02:04 > resp = conn.urlopen( 21:02:04 method=request.method, 21:02:04 url=url, 21:02:04 body=request.body, 21:02:04 headers=request.headers, 21:02:04 redirect=False, 21:02:04 assert_same_host=False, 21:02:04 preload_content=False, 21:02:04 decode_content=False, 21:02:04 retries=self.max_retries, 21:02:04 timeout=timeout, 21:02:04 chunked=chunked, 21:02:04 ) 21:02:04 21:02:04 ../.tox/tests121/lib/python3.11/site-packages/requests/adapters.py:644: 21:02:04 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 21:02:04 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connectionpool.py:841: in urlopen 21:02:04 retries = retries.increment( 21:02:04 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 21:02:04 21:02:04 self = Retry(total=0, connect=None, read=False, redirect=None, status=None) 21:02:04 method = 'GET' 21:02:04 url = '/rests/data/transportpce-portmapping:network/nodes=XPDRA01/mapping=XPDR1-NETWORK2' 21:02:04 response = None 21:02:04 error = NewConnectionError("HTTPConnection(host='localhost', port=8191): Failed to establish a new connection: [Errno 111] Connection refused") 21:02:04 _pool = 21:02:04 _stacktrace = 21:02:04 21:02:04 def increment( 21:02:04 self, 21:02:04 method: str | None = None, 21:02:04 url: str | None = None, 21:02:04 response: BaseHTTPResponse | None = None, 21:02:04 error: Exception | None = None, 21:02:04 _pool: ConnectionPool | None = None, 21:02:04 _stacktrace: TracebackType | None = None, 21:02:04 ) -> Self: 21:02:04 """Return a new Retry object with incremented retry counters. 21:02:04 21:02:04 :param response: A response object, or None, if the server did not 21:02:04 return a response. 
21:02:04 :type response: :class:`~urllib3.response.BaseHTTPResponse` 21:02:04 :param Exception error: An error encountered during the request, or 21:02:04 None if the response was received successfully. 21:02:04 21:02:04 :return: A new ``Retry`` object. 21:02:04 """ 21:02:04 if self.total is False and error: 21:02:04 # Disabled, indicate to re-raise the error. 21:02:04 raise reraise(type(error), error, _stacktrace) 21:02:04 21:02:04 total = self.total 21:02:04 if total is not None: 21:02:04 total -= 1 21:02:04 21:02:04 connect = self.connect 21:02:04 read = self.read 21:02:04 redirect = self.redirect 21:02:04 status_count = self.status 21:02:04 other = self.other 21:02:04 cause = "unknown" 21:02:04 status = None 21:02:04 redirect_location = None 21:02:04 21:02:04 if error and self._is_connection_error(error): 21:02:04 # Connect retry? 21:02:04 if connect is False: 21:02:04 raise reraise(type(error), error, _stacktrace) 21:02:04 elif connect is not None: 21:02:04 connect -= 1 21:02:04 21:02:04 elif error and self._is_read_error(error): 21:02:04 # Read retry? 21:02:04 if read is False or method is None or not self._is_method_retryable(method): 21:02:04 raise reraise(type(error), error, _stacktrace) 21:02:04 elif read is not None: 21:02:04 read -= 1 21:02:04 21:02:04 elif error: 21:02:04 # Other retry? 21:02:04 if other is not None: 21:02:04 other -= 1 21:02:04 21:02:04 elif response and response.get_redirect_location(): 21:02:04 # Redirect retry? 
21:02:04 if redirect is not None: 21:02:04 redirect -= 1 21:02:04 cause = "too many redirects" 21:02:04 response_redirect_location = response.get_redirect_location() 21:02:04 if response_redirect_location: 21:02:04 redirect_location = response_redirect_location 21:02:04 status = response.status 21:02:04 21:02:04 else: 21:02:04 # Incrementing because of a server error like a 500 in 21:02:04 # status_forcelist and the given method is in the allowed_methods 21:02:04 cause = ResponseError.GENERIC_ERROR 21:02:04 if response and response.status: 21:02:04 if status_count is not None: 21:02:04 status_count -= 1 21:02:04 cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status) 21:02:04 status = response.status 21:02:04 21:02:04 history = self.history + ( 21:02:04 RequestHistory(method, url, error, status, redirect_location), 21:02:04 ) 21:02:04 21:02:04 new_retry = self.new( 21:02:04 total=total, 21:02:04 connect=connect, 21:02:04 read=read, 21:02:04 redirect=redirect, 21:02:04 status=status_count, 21:02:04 other=other, 21:02:04 history=history, 21:02:04 ) 21:02:04 21:02:04 if new_retry.is_exhausted(): 21:02:04 reason = error or ResponseError(cause) 21:02:04 > raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type] 21:02:04 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 21:02:04 E urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=8191): Max retries exceeded with url: /rests/data/transportpce-portmapping:network/nodes=XPDRA01/mapping=XPDR1-NETWORK2 (Caused by NewConnectionError("HTTPConnection(host='localhost', port=8191): Failed to establish a new connection: [Errno 111] Connection refused")) 21:02:04 21:02:04 ../.tox/tests121/lib/python3.11/site-packages/urllib3/util/retry.py:535: MaxRetryError 21:02:04 21:02:04 During handling of the above exception, another exception occurred: 21:02:04 21:02:04 self = 21:02:04 21:02:04 def test_11_xpdr_portmapping_NETWORK2(self): 21:02:04 > response = 
test_utils.get_portmapping_node_attr("XPDRA01", "mapping", "XPDR1-NETWORK2") 21:02:04 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 21:02:04 21:02:04 transportpce_tests/1.2.1/test01_portmapping.py:135: 21:02:04 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 21:02:04 transportpce_tests/common/test_utils.py:519: in get_portmapping_node_attr 21:02:04 response = get_request(target_url) 21:02:04 ^^^^^^^^^^^^^^^^^^^^^^^ 21:02:04 transportpce_tests/common/test_utils.py:117: in get_request 21:02:04 return requests.request( 21:02:04 ../.tox/tests121/lib/python3.11/site-packages/requests/api.py:59: in request 21:02:04 return session.request(method=method, url=url, **kwargs) 21:02:04 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 21:02:04 ../.tox/tests121/lib/python3.11/site-packages/requests/sessions.py:589: in request 21:02:04 resp = self.send(prep, **send_kwargs) 21:02:04 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 21:02:04 ../.tox/tests121/lib/python3.11/site-packages/requests/sessions.py:703: in send 21:02:04 r = adapter.send(request, **kwargs) 21:02:04 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 21:02:04 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 21:02:04 21:02:04 self = 21:02:04 request = , stream = False 21:02:04 timeout = Timeout(connect=30, read=30, total=None), verify = True, cert = None 21:02:04 proxies = OrderedDict() 21:02:04 21:02:04 def send( 21:02:04 self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 21:02:04 ): 21:02:04 """Sends PreparedRequest object. Returns Response object. 21:02:04 21:02:04 :param request: The :class:`PreparedRequest ` being sent. 21:02:04 :param stream: (optional) Whether to stream the request content. 21:02:04 :param timeout: (optional) How long to wait for the server to send 21:02:04 data before giving up, as a float, or a :ref:`(connect timeout, 21:02:04 read timeout) ` tuple. 
21:02:04         :type timeout: float or tuple or urllib3 Timeout object
21:02:04         :param verify: (optional) Either a boolean, in which case it controls whether
21:02:04             we verify the server's TLS certificate, or a string, in which case it
21:02:04             must be a path to a CA bundle to use
21:02:04         :param cert: (optional) Any user-provided SSL certificate to be trusted.
21:02:04         :param proxies: (optional) The proxies dictionary to apply to the request.
21:02:04         :rtype: requests.Response
21:02:04         """
21:02:04
21:02:04         try:
21:02:04             conn = self.get_connection_with_tls_context(
21:02:04                 request, verify, proxies=proxies, cert=cert
21:02:04             )
21:02:04         except LocationValueError as e:
21:02:04             raise InvalidURL(e, request=request)
21:02:04
21:02:04         self.cert_verify(conn, request.url, verify, cert)
21:02:04         url = self.request_url(request, proxies)
21:02:04         self.add_headers(
21:02:04             request,
21:02:04             stream=stream,
21:02:04             timeout=timeout,
21:02:04             verify=verify,
21:02:04             cert=cert,
21:02:04             proxies=proxies,
21:02:04         )
21:02:04
21:02:04         chunked = not (request.body is None or "Content-Length" in request.headers)
21:02:04
21:02:04         if isinstance(timeout, tuple):
21:02:04             try:
21:02:04                 connect, read = timeout
21:02:04                 timeout = TimeoutSauce(connect=connect, read=read)
21:02:04             except ValueError:
21:02:04                 raise ValueError(
21:02:04                     f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, "
21:02:04                     f"or a single float to set both timeouts to the same value."
21:02:04                 )
21:02:04         elif isinstance(timeout, TimeoutSauce):
21:02:04             pass
21:02:04         else:
21:02:04             timeout = TimeoutSauce(connect=timeout, read=timeout)
21:02:04
21:02:04         try:
21:02:04             resp = conn.urlopen(
21:02:04                 method=request.method,
21:02:04                 url=url,
21:02:04                 body=request.body,
21:02:04                 headers=request.headers,
21:02:04                 redirect=False,
21:02:04                 assert_same_host=False,
21:02:04                 preload_content=False,
21:02:04                 decode_content=False,
21:02:04                 retries=self.max_retries,
21:02:04                 timeout=timeout,
21:02:04                 chunked=chunked,
21:02:04             )
21:02:04
21:02:04         except (ProtocolError, OSError) as err:
21:02:04             raise ConnectionError(err, request=request)
21:02:04
21:02:04         except MaxRetryError as e:
21:02:04             if isinstance(e.reason, ConnectTimeoutError):
21:02:04                 # TODO: Remove this in 3.0.0: see #2811
21:02:04                 if not isinstance(e.reason, NewConnectionError):
21:02:04                     raise ConnectTimeout(e, request=request)
21:02:04
21:02:04             if isinstance(e.reason, ResponseError):
21:02:04                 raise RetryError(e, request=request)
21:02:04
21:02:04             if isinstance(e.reason, _ProxyError):
21:02:04                 raise ProxyError(e, request=request)
21:02:04
21:02:04             if isinstance(e.reason, _SSLError):
21:02:04                 # This branch is for urllib3 v1.22 and later.
21:02:04                 raise SSLError(e, request=request)
21:02:04
21:02:04 >           raise ConnectionError(e, request=request)
21:02:04 E           requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=8191): Max retries exceeded with url: /rests/data/transportpce-portmapping:network/nodes=XPDRA01/mapping=XPDR1-NETWORK2 (Caused by NewConnectionError("HTTPConnection(host='localhost', port=8191): Failed to establish a new connection: [Errno 111] Connection refused"))
21:02:04
21:02:04 ../.tox/tests121/lib/python3.11/site-packages/requests/adapters.py:677: ConnectionError
21:02:04 ----------------------------- Captured stdout call -----------------------------
21:02:04 execution of test_11_xpdr_portmapping_NETWORK2
21:02:04 _________ TestTransportPCEPortmapping.test_12_xpdr_portmapping_CLIENT1 _________
21:02:04
21:02:04 self =
21:02:04
21:02:04     def _new_conn(self) -> socket.socket:
21:02:04         """Establish a socket connection and set nodelay settings on it.
21:02:04
21:02:04         :return: New socket connection.
21:02:04         """
21:02:04         try:
21:02:04 >           sock = connection.create_connection(
21:02:04                 (self._dns_host, self.port),
21:02:04                 self.timeout,
21:02:04                 source_address=self.source_address,
21:02:04                 socket_options=self.socket_options,
21:02:04             )
21:02:04
21:02:04 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:204:
21:02:04 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
21:02:04 ../.tox/tests121/lib/python3.11/site-packages/urllib3/util/connection.py:85: in create_connection
21:02:04     raise err
21:02:04 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
21:02:04
21:02:04 address = ('localhost', 8191), timeout = 30, source_address = None
21:02:04 socket_options = [(6, 1, 1)]
21:02:04
21:02:04 def create_connection(
21:02:04     address: tuple[str, int],
21:02:04     timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT,
21:02:04     source_address: tuple[str, int] | None = None,
21:02:04     socket_options: _TYPE_SOCKET_OPTIONS | None = None,
21:02:04 ) -> socket.socket:
21:02:04     """Connect to *address* and return the socket object.
21:02:04
21:02:04     Convenience function. Connect to *address* (a 2-tuple ``(host,
21:02:04     port)``) and return the socket object. Passing the optional
21:02:04     *timeout* parameter will set the timeout on the socket instance
21:02:04     before attempting to connect. If no *timeout* is supplied, the
21:02:04     global default timeout setting returned by :func:`socket.getdefaulttimeout`
21:02:04     is used. If *source_address* is set it must be a tuple of (host, port)
21:02:04     for the socket to bind as a source address before making the connection.
21:02:04     An host of '' or port 0 tells the OS to use the default.
21:02:04     """
21:02:04
21:02:04     host, port = address
21:02:04     if host.startswith("["):
21:02:04         host = host.strip("[]")
21:02:04     err = None
21:02:04
21:02:04     # Using the value from allowed_gai_family() in the context of getaddrinfo lets
21:02:04     # us select whether to work with IPv4 DNS records, IPv6 records, or both.
21:02:04     # The original create_connection function always returns all records.
21:02:04     family = allowed_gai_family()
21:02:04
21:02:04     try:
21:02:04         host.encode("idna")
21:02:04     except UnicodeError:
21:02:04         raise LocationParseError(f"'{host}', label empty or too long") from None
21:02:04
21:02:04     for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM):
21:02:04         af, socktype, proto, canonname, sa = res
21:02:04         sock = None
21:02:04         try:
21:02:04             sock = socket.socket(af, socktype, proto)
21:02:04
21:02:04             # If provided, set socket level options before connecting.
21:02:04             _set_socket_options(sock, socket_options)
21:02:04
21:02:04             if timeout is not _DEFAULT_TIMEOUT:
21:02:04                 sock.settimeout(timeout)
21:02:04             if source_address:
21:02:04                 sock.bind(source_address)
21:02:04 >           sock.connect(sa)
21:02:04 E           ConnectionRefusedError: [Errno 111] Connection refused
21:02:04
21:02:04 ../.tox/tests121/lib/python3.11/site-packages/urllib3/util/connection.py:73: ConnectionRefusedError
21:02:04
21:02:04 The above exception was the direct cause of the following exception:
21:02:04
21:02:04 self =
21:02:04 method = 'GET'
21:02:04 url = '/rests/data/transportpce-portmapping:network/nodes=XPDRA01/mapping=XPDR1-CLIENT1'
21:02:04 body = None
21:02:04 headers = {'User-Agent': 'python-requests/2.32.5', 'Accept-Encoding': 'gzip, deflate', 'Accept': 'application/json', 'Connection': 'keep-alive', 'Content-Type': 'application/json', 'Authorization': 'Basic YWRtaW46YWRtaW4='}
21:02:04 retries = Retry(total=0, connect=None, read=False, redirect=None, status=None)
21:02:04 redirect = False, assert_same_host = False
21:02:04 timeout = Timeout(connect=30, read=30, total=None), pool_timeout = None
21:02:04 release_conn = False, chunked = False, body_pos = None, preload_content = False
21:02:04 decode_content = False, response_kw = {}
21:02:04 parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/rests/data/transportpce-portmapping:network/nodes=XPDRA01/mapping=XPDR1-CLIENT1', query=None, fragment=None)
21:02:04 destination_scheme = None, conn = None, release_this_conn = True
21:02:04 http_tunnel_required = False, err = None, clean_exit = False
21:02:04
21:02:04     def urlopen(  # type: ignore[override]
21:02:04         self,
21:02:04         method: str,
21:02:04         url: str,
21:02:04         body: _TYPE_BODY | None = None,
21:02:04         headers: typing.Mapping[str, str] | None = None,
21:02:04         retries: Retry | bool | int | None = None,
21:02:04         redirect: bool = True,
21:02:04         assert_same_host: bool = True,
21:02:04         timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT,
21:02:04         pool_timeout: int | None = None,
21:02:04         release_conn: bool | None = None,
21:02:04         chunked: bool = False,
21:02:04         body_pos: _TYPE_BODY_POSITION | None = None,
21:02:04         preload_content: bool = True,
21:02:04         decode_content: bool = True,
21:02:04         **response_kw: typing.Any,
21:02:04     ) -> BaseHTTPResponse:
21:02:04         """
21:02:04         Get a connection from the pool and perform an HTTP request. This is the
21:02:04         lowest level call for making a request, so you'll need to specify all
21:02:04         the raw details.
21:02:04
21:02:04         .. note::
21:02:04
21:02:04             More commonly, it's appropriate to use a convenience method
21:02:04             such as :meth:`request`.
21:02:04
21:02:04         .. note::
21:02:04
21:02:04             `release_conn` will only behave as expected if
21:02:04             `preload_content=False` because we want to make
21:02:04             `preload_content=False` the default behaviour someday soon without
21:02:04             breaking backwards compatibility.
21:02:04
21:02:04         :param method:
21:02:04             HTTP request method (such as GET, POST, PUT, etc.)
21:02:04
21:02:04         :param url:
21:02:04             The URL to perform the request on.
21:02:04
21:02:04         :param body:
21:02:04             Data to send in the request body, either :class:`str`, :class:`bytes`,
21:02:04             an iterable of :class:`str`/:class:`bytes`, or a file-like object.
21:02:04
21:02:04         :param headers:
21:02:04             Dictionary of custom headers to send, such as User-Agent,
21:02:04             If-None-Match, etc. If None, pool headers are used. If provided,
21:02:04             these headers completely replace any pool-specific headers.
21:02:04
21:02:04         :param retries:
21:02:04             Configure the number of retries to allow before raising a
21:02:04             :class:`~urllib3.exceptions.MaxRetryError` exception.
21:02:04
21:02:04             If ``None`` (default) will retry 3 times, see ``Retry.DEFAULT``. Pass a
21:02:04             :class:`~urllib3.util.retry.Retry` object for fine-grained control
21:02:04             over different types of retries.
21:02:04             Pass an integer number to retry connection errors that many times,
21:02:04             but no other types of errors. Pass zero to never retry.
21:02:04
21:02:04             If ``False``, then retries are disabled and any exception is raised
21:02:04             immediately. Also, instead of raising a MaxRetryError on redirects,
21:02:04             the redirect response will be returned.
21:02:04
21:02:04         :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int.
21:02:04
21:02:04         :param redirect:
21:02:04             If True, automatically handle redirects (status codes 301, 302,
21:02:04             303, 307, 308). Each redirect counts as a retry. Disabling retries
21:02:04             will disable redirect, too.
21:02:04
21:02:04         :param assert_same_host:
21:02:04             If ``True``, will make sure that the host of the pool requests is
21:02:04             consistent else will raise HostChangedError. When ``False``, you can
21:02:04             use the pool on an HTTP proxy and request foreign hosts.
21:02:04
21:02:04         :param timeout:
21:02:04             If specified, overrides the default timeout for this one
21:02:04             request. It may be a float (in seconds) or an instance of
21:02:04             :class:`urllib3.util.Timeout`.
21:02:04
21:02:04         :param pool_timeout:
21:02:04             If set and the pool is set to block=True, then this method will
21:02:04             block for ``pool_timeout`` seconds and raise EmptyPoolError if no
21:02:04             connection is available within the time period.
21:02:04
21:02:04         :param bool preload_content:
21:02:04             If True, the response's body will be preloaded into memory.
21:02:04
21:02:04         :param bool decode_content:
21:02:04             If True, will attempt to decode the body based on the
21:02:04             'content-encoding' header.
21:02:04
21:02:04         :param release_conn:
21:02:04             If False, then the urlopen call will not release the connection
21:02:04             back into the pool once a response is received (but will release if
21:02:04             you read the entire contents of the response such as when
21:02:04             `preload_content=True`). This is useful if you're not preloading
21:02:04             the response's content immediately. You will need to call
21:02:04             ``r.release_conn()`` on the response ``r`` to return the connection
21:02:04             back into the pool. If None, it takes the value of ``preload_content``
21:02:04             which defaults to ``True``.
21:02:04
21:02:04         :param bool chunked:
21:02:04             If True, urllib3 will send the body using chunked transfer
21:02:04             encoding. Otherwise, urllib3 will send the body using the standard
21:02:04             content-length form. Defaults to False.
21:02:04
21:02:04         :param int body_pos:
21:02:04             Position to seek to in file-like body in the event of a retry or
21:02:04             redirect. Typically this won't need to be set because urllib3 will
21:02:04             auto-populate the value when needed.
21:02:04         """
21:02:04         parsed_url = parse_url(url)
21:02:04         destination_scheme = parsed_url.scheme
21:02:04
21:02:04         if headers is None:
21:02:04             headers = self.headers
21:02:04
21:02:04         if not isinstance(retries, Retry):
21:02:04             retries = Retry.from_int(retries, redirect=redirect, default=self.retries)
21:02:04
21:02:04         if release_conn is None:
21:02:04             release_conn = preload_content
21:02:04
21:02:04         # Check host
21:02:04         if assert_same_host and not self.is_same_host(url):
21:02:04             raise HostChangedError(self, url, retries)
21:02:04
21:02:04         # Ensure that the URL we're connecting to is properly encoded
21:02:04         if url.startswith("/"):
21:02:04             url = to_str(_encode_target(url))
21:02:04         else:
21:02:04             url = to_str(parsed_url.url)
21:02:04
21:02:04         conn = None
21:02:04
21:02:04         # Track whether `conn` needs to be released before
21:02:04         # returning/raising/recursing. Update this variable if necessary, and
21:02:04         # leave `release_conn` constant throughout the function. That way, if
21:02:04         # the function recurses, the original value of `release_conn` will be
21:02:04         # passed down into the recursive call, and its value will be respected.
21:02:04         #
21:02:04         # See issue #651 [1] for details.
21:02:04         #
21:02:04         # [1]
21:02:04         release_this_conn = release_conn
21:02:04
21:02:04         http_tunnel_required = connection_requires_http_tunnel(
21:02:04             self.proxy, self.proxy_config, destination_scheme
21:02:04         )
21:02:04
21:02:04         # Merge the proxy headers. Only done when not using HTTP CONNECT. We
21:02:04         # have to copy the headers dict so we can safely change it without those
21:02:04         # changes being reflected in anyone else's copy.
21:02:04         if not http_tunnel_required:
21:02:04             headers = headers.copy()  # type: ignore[attr-defined]
21:02:04             headers.update(self.proxy_headers)  # type: ignore[union-attr]
21:02:04
21:02:04         # Must keep the exception bound to a separate variable or else Python 3
21:02:04         # complains about UnboundLocalError.
21:02:04         err = None
21:02:04
21:02:04         # Keep track of whether we cleanly exited the except block. This
21:02:04         # ensures we do proper cleanup in finally.
21:02:04         clean_exit = False
21:02:04
21:02:04         # Rewind body position, if needed. Record current position
21:02:04         # for future rewinds in the event of a redirect/retry.
21:02:04         body_pos = set_file_position(body, body_pos)
21:02:04
21:02:04         try:
21:02:04             # Request a connection from the queue.
21:02:04             timeout_obj = self._get_timeout(timeout)
21:02:04             conn = self._get_conn(timeout=pool_timeout)
21:02:04
21:02:04             conn.timeout = timeout_obj.connect_timeout  # type: ignore[assignment]
21:02:04
21:02:04             # Is this a closed/new connection that requires CONNECT tunnelling?
21:02:04             if self.proxy is not None and http_tunnel_required and conn.is_closed:
21:02:04                 try:
21:02:04                     self._prepare_proxy(conn)
21:02:04                 except (BaseSSLError, OSError, SocketTimeout) as e:
21:02:04                     self._raise_timeout(
21:02:04                         err=e, url=self.proxy.url, timeout_value=conn.timeout
21:02:04                     )
21:02:04                     raise
21:02:04
21:02:04             # If we're going to release the connection in ``finally:``, then
21:02:04             # the response doesn't need to know about the connection. Otherwise
21:02:04             # it will also try to release it and we'll have a double-release
21:02:04             # mess.
21:02:04             response_conn = conn if not release_conn else None
21:02:04
21:02:04             # Make the request on the HTTPConnection object
21:02:04 >           response = self._make_request(
21:02:04                 conn,
21:02:04                 method,
21:02:04                 url,
21:02:04                 timeout=timeout_obj,
21:02:04                 body=body,
21:02:04                 headers=headers,
21:02:04                 chunked=chunked,
21:02:04                 retries=retries,
21:02:04                 response_conn=response_conn,
21:02:04                 preload_content=preload_content,
21:02:04                 decode_content=decode_content,
21:02:04                 **response_kw,
21:02:04             )
21:02:04
21:02:04 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connectionpool.py:787:
21:02:04 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
21:02:04 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connectionpool.py:493: in _make_request
21:02:04     conn.request(
21:02:04 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:500: in request
21:02:04     self.endheaders()
21:02:04 /opt/pyenv/versions/3.11.10/lib/python3.11/http/client.py:1298: in endheaders
21:02:04     self._send_output(message_body, encode_chunked=encode_chunked)
21:02:04 /opt/pyenv/versions/3.11.10/lib/python3.11/http/client.py:1058: in _send_output
21:02:04     self.send(msg)
21:02:04 /opt/pyenv/versions/3.11.10/lib/python3.11/http/client.py:996: in send
21:02:04     self.connect()
21:02:04 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:331: in connect
21:02:04     self.sock = self._new_conn()
21:02:04                 ^^^^^^^^^^^^^^^^
21:02:04 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
21:02:04
21:02:04 self =
21:02:04
21:02:04     def _new_conn(self) -> socket.socket:
21:02:04         """Establish a socket connection and set nodelay settings on it.
21:02:04
21:02:04         :return: New socket connection.
21:02:04         """
21:02:04         try:
21:02:04             sock = connection.create_connection(
21:02:04                 (self._dns_host, self.port),
21:02:04                 self.timeout,
21:02:04                 source_address=self.source_address,
21:02:04                 socket_options=self.socket_options,
21:02:04             )
21:02:04         except socket.gaierror as e:
21:02:04             raise NameResolutionError(self.host, self, e) from e
21:02:04         except SocketTimeout as e:
21:02:04             raise ConnectTimeoutError(
21:02:04                 self,
21:02:04                 f"Connection to {self.host} timed out. (connect timeout={self.timeout})",
21:02:04             ) from e
21:02:04
21:02:04         except OSError as e:
21:02:04 >           raise NewConnectionError(
21:02:04                 self, f"Failed to establish a new connection: {e}"
21:02:04             ) from e
21:02:04 E           urllib3.exceptions.NewConnectionError: HTTPConnection(host='localhost', port=8191): Failed to establish a new connection: [Errno 111] Connection refused
21:02:04
21:02:04 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:219: NewConnectionError
21:02:04
21:02:04 The above exception was the direct cause of the following exception:
21:02:04
21:02:04 self =
21:02:04 request = , stream = False
21:02:04 timeout = Timeout(connect=30, read=30, total=None), verify = True, cert = None
21:02:04 proxies = OrderedDict()
21:02:04
21:02:04     def send(
21:02:04         self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None
21:02:04     ):
21:02:04         """Sends PreparedRequest object. Returns Response object.
21:02:04
21:02:04         :param request: The :class:`PreparedRequest ` being sent.
21:02:04         :param stream: (optional) Whether to stream the request content.
21:02:04         :param timeout: (optional) How long to wait for the server to send
21:02:04             data before giving up, as a float, or a :ref:`(connect timeout,
21:02:04             read timeout) ` tuple.
21:02:04         :type timeout: float or tuple or urllib3 Timeout object
21:02:04         :param verify: (optional) Either a boolean, in which case it controls whether
21:02:04             we verify the server's TLS certificate, or a string, in which case it
21:02:04             must be a path to a CA bundle to use
21:02:04         :param cert: (optional) Any user-provided SSL certificate to be trusted.
21:02:04         :param proxies: (optional) The proxies dictionary to apply to the request.
21:02:04         :rtype: requests.Response
21:02:04         """
21:02:04
21:02:04         try:
21:02:04             conn = self.get_connection_with_tls_context(
21:02:04                 request, verify, proxies=proxies, cert=cert
21:02:04             )
21:02:04         except LocationValueError as e:
21:02:04             raise InvalidURL(e, request=request)
21:02:04
21:02:04         self.cert_verify(conn, request.url, verify, cert)
21:02:04         url = self.request_url(request, proxies)
21:02:04         self.add_headers(
21:02:04             request,
21:02:04             stream=stream,
21:02:04             timeout=timeout,
21:02:04             verify=verify,
21:02:04             cert=cert,
21:02:04             proxies=proxies,
21:02:04         )
21:02:04
21:02:04         chunked = not (request.body is None or "Content-Length" in request.headers)
21:02:04
21:02:04         if isinstance(timeout, tuple):
21:02:04             try:
21:02:04                 connect, read = timeout
21:02:04                 timeout = TimeoutSauce(connect=connect, read=read)
21:02:04             except ValueError:
21:02:04                 raise ValueError(
21:02:04                     f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, "
21:02:04                     f"or a single float to set both timeouts to the same value."
21:02:04                 )
21:02:04         elif isinstance(timeout, TimeoutSauce):
21:02:04             pass
21:02:04         else:
21:02:04             timeout = TimeoutSauce(connect=timeout, read=timeout)
21:02:04
21:02:04         try:
21:02:04 >           resp = conn.urlopen(
21:02:04                 method=request.method,
21:02:04                 url=url,
21:02:04                 body=request.body,
21:02:04                 headers=request.headers,
21:02:04                 redirect=False,
21:02:04                 assert_same_host=False,
21:02:04                 preload_content=False,
21:02:04                 decode_content=False,
21:02:04                 retries=self.max_retries,
21:02:04                 timeout=timeout,
21:02:04                 chunked=chunked,
21:02:04             )
21:02:04
21:02:04 ../.tox/tests121/lib/python3.11/site-packages/requests/adapters.py:644:
21:02:04 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
21:02:04 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connectionpool.py:841: in urlopen
21:02:04     retries = retries.increment(
21:02:04 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
21:02:04
21:02:04 self = Retry(total=0, connect=None, read=False, redirect=None, status=None)
21:02:04 method = 'GET'
21:02:04 url = '/rests/data/transportpce-portmapping:network/nodes=XPDRA01/mapping=XPDR1-CLIENT1'
21:02:04 response = None
21:02:04 error = NewConnectionError("HTTPConnection(host='localhost', port=8191): Failed to establish a new connection: [Errno 111] Connection refused")
21:02:04 _pool =
21:02:04 _stacktrace =
21:02:04
21:02:04     def increment(
21:02:04         self,
21:02:04         method: str | None = None,
21:02:04         url: str | None = None,
21:02:04         response: BaseHTTPResponse | None = None,
21:02:04         error: Exception | None = None,
21:02:04         _pool: ConnectionPool | None = None,
21:02:04         _stacktrace: TracebackType | None = None,
21:02:04     ) -> Self:
21:02:04         """Return a new Retry object with incremented retry counters.
21:02:04
21:02:04         :param response: A response object, or None, if the server did not
21:02:04             return a response.
21:02:04         :type response: :class:`~urllib3.response.BaseHTTPResponse`
21:02:04         :param Exception error: An error encountered during the request, or
21:02:04             None if the response was received successfully.
21:02:04
21:02:04         :return: A new ``Retry`` object.
21:02:04         """
21:02:04         if self.total is False and error:
21:02:04             # Disabled, indicate to re-raise the error.
21:02:04             raise reraise(type(error), error, _stacktrace)
21:02:04
21:02:04         total = self.total
21:02:04         if total is not None:
21:02:04             total -= 1
21:02:04
21:02:04         connect = self.connect
21:02:04         read = self.read
21:02:04         redirect = self.redirect
21:02:04         status_count = self.status
21:02:04         other = self.other
21:02:04         cause = "unknown"
21:02:04         status = None
21:02:04         redirect_location = None
21:02:04
21:02:04         if error and self._is_connection_error(error):
21:02:04             # Connect retry?
21:02:04             if connect is False:
21:02:04                 raise reraise(type(error), error, _stacktrace)
21:02:04             elif connect is not None:
21:02:04                 connect -= 1
21:02:04
21:02:04         elif error and self._is_read_error(error):
21:02:04             # Read retry?
21:02:04             if read is False or method is None or not self._is_method_retryable(method):
21:02:04                 raise reraise(type(error), error, _stacktrace)
21:02:04             elif read is not None:
21:02:04                 read -= 1
21:02:04
21:02:04         elif error:
21:02:04             # Other retry?
21:02:04             if other is not None:
21:02:04                 other -= 1
21:02:04
21:02:04         elif response and response.get_redirect_location():
21:02:04             # Redirect retry?
21:02:04 if redirect is not None: 21:02:04 redirect -= 1 21:02:04 cause = "too many redirects" 21:02:04 response_redirect_location = response.get_redirect_location() 21:02:04 if response_redirect_location: 21:02:04 redirect_location = response_redirect_location 21:02:04 status = response.status 21:02:04 21:02:04 else: 21:02:04 # Incrementing because of a server error like a 500 in 21:02:04 # status_forcelist and the given method is in the allowed_methods 21:02:04 cause = ResponseError.GENERIC_ERROR 21:02:04 if response and response.status: 21:02:04 if status_count is not None: 21:02:04 status_count -= 1 21:02:04 cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status) 21:02:04 status = response.status 21:02:04 21:02:04 history = self.history + ( 21:02:04 RequestHistory(method, url, error, status, redirect_location), 21:02:04 ) 21:02:04 21:02:04 new_retry = self.new( 21:02:04 total=total, 21:02:04 connect=connect, 21:02:04 read=read, 21:02:04 redirect=redirect, 21:02:04 status=status_count, 21:02:04 other=other, 21:02:04 history=history, 21:02:04 ) 21:02:04 21:02:04 if new_retry.is_exhausted(): 21:02:04 reason = error or ResponseError(cause) 21:02:04 > raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type] 21:02:04 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 21:02:04 E urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=8191): Max retries exceeded with url: /rests/data/transportpce-portmapping:network/nodes=XPDRA01/mapping=XPDR1-CLIENT1 (Caused by NewConnectionError("HTTPConnection(host='localhost', port=8191): Failed to establish a new connection: [Errno 111] Connection refused")) 21:02:04 21:02:04 ../.tox/tests121/lib/python3.11/site-packages/urllib3/util/retry.py:535: MaxRetryError 21:02:04 21:02:04 During handling of the above exception, another exception occurred: 21:02:04 21:02:04 self = 21:02:04 21:02:04 def test_12_xpdr_portmapping_CLIENT1(self): 21:02:04 > response = 
test_utils.get_portmapping_node_attr("XPDRA01", "mapping", "XPDR1-CLIENT1") 21:02:04 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 21:02:04 21:02:04 transportpce_tests/1.2.1/test01_portmapping.py:147: 21:02:04 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 21:02:04 transportpce_tests/common/test_utils.py:519: in get_portmapping_node_attr 21:02:04 response = get_request(target_url) 21:02:04 ^^^^^^^^^^^^^^^^^^^^^^^ 21:02:04 transportpce_tests/common/test_utils.py:117: in get_request 21:02:04 return requests.request( 21:02:04 ../.tox/tests121/lib/python3.11/site-packages/requests/api.py:59: in request 21:02:04 return session.request(method=method, url=url, **kwargs) 21:02:04 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 21:02:04 ../.tox/tests121/lib/python3.11/site-packages/requests/sessions.py:589: in request 21:02:04 resp = self.send(prep, **send_kwargs) 21:02:04 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 21:02:04 ../.tox/tests121/lib/python3.11/site-packages/requests/sessions.py:703: in send 21:02:04 r = adapter.send(request, **kwargs) 21:02:04 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 21:02:04 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 21:02:04 21:02:04 self = 21:02:04 request = , stream = False 21:02:04 timeout = Timeout(connect=30, read=30, total=None), verify = True, cert = None 21:02:04 proxies = OrderedDict() 21:02:04 21:02:04 def send( 21:02:04 self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 21:02:04 ): 21:02:04 """Sends PreparedRequest object. Returns Response object. 21:02:04 21:02:04 :param request: The :class:`PreparedRequest ` being sent. 21:02:04 :param stream: (optional) Whether to stream the request content. 21:02:04 :param timeout: (optional) How long to wait for the server to send 21:02:04 data before giving up, as a float, or a :ref:`(connect timeout, 21:02:04 read timeout) ` tuple. 
21:02:04         :type timeout: float or tuple or urllib3 Timeout object
21:02:04         :param verify: (optional) Either a boolean, in which case it controls whether
21:02:04             we verify the server's TLS certificate, or a string, in which case it
21:02:04             must be a path to a CA bundle to use
21:02:04         :param cert: (optional) Any user-provided SSL certificate to be trusted.
21:02:04         :param proxies: (optional) The proxies dictionary to apply to the request.
21:02:04         :rtype: requests.Response
21:02:04         """
21:02:04 
21:02:04         try:
21:02:04             conn = self.get_connection_with_tls_context(
21:02:04                 request, verify, proxies=proxies, cert=cert
21:02:04             )
21:02:04         except LocationValueError as e:
21:02:04             raise InvalidURL(e, request=request)
21:02:04 
21:02:04         self.cert_verify(conn, request.url, verify, cert)
21:02:04         url = self.request_url(request, proxies)
21:02:04         self.add_headers(
21:02:04             request,
21:02:04             stream=stream,
21:02:04             timeout=timeout,
21:02:04             verify=verify,
21:02:04             cert=cert,
21:02:04             proxies=proxies,
21:02:04         )
21:02:04 
21:02:04         chunked = not (request.body is None or "Content-Length" in request.headers)
21:02:04 
21:02:04         if isinstance(timeout, tuple):
21:02:04             try:
21:02:04                 connect, read = timeout
21:02:04                 timeout = TimeoutSauce(connect=connect, read=read)
21:02:04             except ValueError:
21:02:04                 raise ValueError(
21:02:04                     f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, "
21:02:04                     f"or a single float to set both timeouts to the same value."
21:02:04                 )
21:02:04         elif isinstance(timeout, TimeoutSauce):
21:02:04             pass
21:02:04         else:
21:02:04             timeout = TimeoutSauce(connect=timeout, read=timeout)
21:02:04 
21:02:04         try:
21:02:04             resp = conn.urlopen(
21:02:04                 method=request.method,
21:02:04                 url=url,
21:02:04                 body=request.body,
21:02:04                 headers=request.headers,
21:02:04                 redirect=False,
21:02:04                 assert_same_host=False,
21:02:04                 preload_content=False,
21:02:04                 decode_content=False,
21:02:04                 retries=self.max_retries,
21:02:04                 timeout=timeout,
21:02:04                 chunked=chunked,
21:02:04             )
21:02:04 
21:02:04         except (ProtocolError, OSError) as err:
21:02:04             raise ConnectionError(err, request=request)
21:02:04 
21:02:04         except MaxRetryError as e:
21:02:04             if isinstance(e.reason, ConnectTimeoutError):
21:02:04                 # TODO: Remove this in 3.0.0: see #2811
21:02:04                 if not isinstance(e.reason, NewConnectionError):
21:02:04                     raise ConnectTimeout(e, request=request)
21:02:04 
21:02:04             if isinstance(e.reason, ResponseError):
21:02:04                 raise RetryError(e, request=request)
21:02:04 
21:02:04             if isinstance(e.reason, _ProxyError):
21:02:04                 raise ProxyError(e, request=request)
21:02:04 
21:02:04             if isinstance(e.reason, _SSLError):
21:02:04                 # This branch is for urllib3 v1.22 and later.
21:02:04                 raise SSLError(e, request=request)
21:02:04 
21:02:04 >           raise ConnectionError(e, request=request)
21:02:04 E           requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=8191): Max retries exceeded with url: /rests/data/transportpce-portmapping:network/nodes=XPDRA01/mapping=XPDR1-CLIENT1 (Caused by NewConnectionError("HTTPConnection(host='localhost', port=8191): Failed to establish a new connection: [Errno 111] Connection refused"))
21:02:04 
21:02:04 ../.tox/tests121/lib/python3.11/site-packages/requests/adapters.py:677: ConnectionError
21:02:04 ----------------------------- Captured stdout call -----------------------------
21:02:04 execution of test_12_xpdr_portmapping_CLIENT1
21:02:04 _________ TestTransportPCEPortmapping.test_13_xpdr_portmapping_CLIENT2 _________
21:02:04 
21:02:04 self = 
21:02:04 
21:02:04     def _new_conn(self) -> socket.socket:
21:02:04         """Establish a socket connection and set nodelay settings on it.
21:02:04 
21:02:04         :return: New socket connection.
21:02:04         """
21:02:04         try:
21:02:04 >           sock = connection.create_connection(
21:02:04                 (self._dns_host, self.port),
21:02:04                 self.timeout,
21:02:04                 source_address=self.source_address,
21:02:04                 socket_options=self.socket_options,
21:02:04             )
21:02:04 
21:02:04 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:204: 
21:02:04 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
21:02:04 ../.tox/tests121/lib/python3.11/site-packages/urllib3/util/connection.py:85: in create_connection
21:02:04     raise err
21:02:04 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
21:02:04 
21:02:04 address = ('localhost', 8191), timeout = 30, source_address = None
21:02:04 socket_options = [(6, 1, 1)]
21:02:04 
21:02:04     def create_connection(
21:02:04         address: tuple[str, int],
21:02:04         timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT,
21:02:04         source_address: tuple[str, int] | None = None,
21:02:04         socket_options: _TYPE_SOCKET_OPTIONS | None = None,
21:02:04     ) -> socket.socket:
21:02:04         """Connect to *address* and return the socket object.
21:02:04 
21:02:04         Convenience function. Connect to *address* (a 2-tuple ``(host,
21:02:04         port)``) and return the socket object. Passing the optional
21:02:04         *timeout* parameter will set the timeout on the socket instance
21:02:04         before attempting to connect. If no *timeout* is supplied, the
21:02:04         global default timeout setting returned by :func:`socket.getdefaulttimeout`
21:02:04         is used. If *source_address* is set it must be a tuple of (host, port)
21:02:04         for the socket to bind as a source address before making the connection.
21:02:04         An host of '' or port 0 tells the OS to use the default.
21:02:04         """
21:02:04 
21:02:04         host, port = address
21:02:04         if host.startswith("["):
21:02:04             host = host.strip("[]")
21:02:04         err = None
21:02:04 
21:02:04         # Using the value from allowed_gai_family() in the context of getaddrinfo lets
21:02:04         # us select whether to work with IPv4 DNS records, IPv6 records, or both.
21:02:04         # The original create_connection function always returns all records.
21:02:04         family = allowed_gai_family()
21:02:04 
21:02:04         try:
21:02:04             host.encode("idna")
21:02:04         except UnicodeError:
21:02:04             raise LocationParseError(f"'{host}', label empty or too long") from None
21:02:04 
21:02:04         for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM):
21:02:04             af, socktype, proto, canonname, sa = res
21:02:04             sock = None
21:02:04             try:
21:02:04                 sock = socket.socket(af, socktype, proto)
21:02:04 
21:02:04                 # If provided, set socket level options before connecting.
21:02:04                 _set_socket_options(sock, socket_options)
21:02:04 
21:02:04                 if timeout is not _DEFAULT_TIMEOUT:
21:02:04                     sock.settimeout(timeout)
21:02:04                 if source_address:
21:02:04                     sock.bind(source_address)
21:02:04 >               sock.connect(sa)
21:02:04 E               ConnectionRefusedError: [Errno 111] Connection refused
21:02:04 
21:02:04 ../.tox/tests121/lib/python3.11/site-packages/urllib3/util/connection.py:73: ConnectionRefusedError
21:02:04 
21:02:04 The above exception was the direct cause of the following exception:
21:02:04 
21:02:04 self = 
21:02:04 method = 'GET'
21:02:04 url = '/rests/data/transportpce-portmapping:network/nodes=XPDRA01/mapping=XPDR1-CLIENT2'
21:02:04 body = None
21:02:04 headers = {'User-Agent': 'python-requests/2.32.5', 'Accept-Encoding': 'gzip, deflate', 'Accept': 'application/json', 'Connection': 'keep-alive', 'Content-Type': 'application/json', 'Authorization': 'Basic YWRtaW46YWRtaW4='}
21:02:04 retries = Retry(total=0, connect=None, read=False, redirect=None, status=None)
21:02:04 redirect = False, assert_same_host = False
21:02:04 timeout = Timeout(connect=30, read=30, total=None), pool_timeout = None
21:02:04 release_conn = False, chunked = False, body_pos = None, preload_content = False
21:02:04 decode_content = False, response_kw = {}
21:02:04 parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/rests/data/transportpce-portmapping:network/nodes=XPDRA01/mapping=XPDR1-CLIENT2', query=None, fragment=None)
21:02:04 destination_scheme = None, conn = None, release_this_conn = True
21:02:04 http_tunnel_required = False, err = None, clean_exit = False
21:02:04 
21:02:04     def urlopen(  # type: ignore[override]
21:02:04         self,
21:02:04         method: str,
21:02:04         url: str,
21:02:04         body: _TYPE_BODY | None = None,
21:02:04         headers: typing.Mapping[str, str] | None = None,
21:02:04         retries: Retry | bool | int | None = None,
21:02:04         redirect: bool = True,
21:02:04         assert_same_host: bool = True,
21:02:04         timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT,
21:02:04         pool_timeout: int | None = None,
21:02:04         release_conn: bool | None = None,
21:02:04         chunked: bool = False,
21:02:04         body_pos: _TYPE_BODY_POSITION | None = None,
21:02:04         preload_content: bool = True,
21:02:04         decode_content: bool = True,
21:02:04         **response_kw: typing.Any,
21:02:04     ) -> BaseHTTPResponse:
21:02:04         """
21:02:04         Get a connection from the pool and perform an HTTP request. This is the
21:02:04         lowest level call for making a request, so you'll need to specify all
21:02:04         the raw details.
21:02:04 
21:02:04         .. note::
21:02:04 
21:02:04            More commonly, it's appropriate to use a convenience method
21:02:04            such as :meth:`request`.
21:02:04 
21:02:04         .. note::
21:02:04 
21:02:04            `release_conn` will only behave as expected if
21:02:04            `preload_content=False` because we want to make
21:02:04            `preload_content=False` the default behaviour someday soon without
21:02:04            breaking backwards compatibility.
21:02:04 
21:02:04         :param method:
21:02:04             HTTP request method (such as GET, POST, PUT, etc.)
21:02:04 
21:02:04         :param url:
21:02:04             The URL to perform the request on.
21:02:04 
21:02:04         :param body:
21:02:04             Data to send in the request body, either :class:`str`, :class:`bytes`,
21:02:04             an iterable of :class:`str`/:class:`bytes`, or a file-like object.
21:02:04 
21:02:04         :param headers:
21:02:04             Dictionary of custom headers to send, such as User-Agent,
21:02:04             If-None-Match, etc. If None, pool headers are used. If provided,
21:02:04             these headers completely replace any pool-specific headers.
21:02:04 
21:02:04         :param retries:
21:02:04             Configure the number of retries to allow before raising a
21:02:04             :class:`~urllib3.exceptions.MaxRetryError` exception.
21:02:04 
21:02:04             If ``None`` (default) will retry 3 times, see ``Retry.DEFAULT``. Pass a
21:02:04             :class:`~urllib3.util.retry.Retry` object for fine-grained control
21:02:04             over different types of retries.
21:02:04             Pass an integer number to retry connection errors that many times,
21:02:04             but no other types of errors. Pass zero to never retry.
21:02:04 
21:02:04             If ``False``, then retries are disabled and any exception is raised
21:02:04             immediately. Also, instead of raising a MaxRetryError on redirects,
21:02:04             the redirect response will be returned.
21:02:04 
21:02:04         :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int.
21:02:04 
21:02:04         :param redirect:
21:02:04             If True, automatically handle redirects (status codes 301, 302,
21:02:04             303, 307, 308). Each redirect counts as a retry. Disabling retries
21:02:04             will disable redirect, too.
21:02:04 
21:02:04         :param assert_same_host:
21:02:04             If ``True``, will make sure that the host of the pool requests is
21:02:04             consistent else will raise HostChangedError. When ``False``, you can
21:02:04             use the pool on an HTTP proxy and request foreign hosts.
21:02:04 
21:02:04         :param timeout:
21:02:04             If specified, overrides the default timeout for this one
21:02:04             request. It may be a float (in seconds) or an instance of
21:02:04             :class:`urllib3.util.Timeout`.
21:02:04 
21:02:04         :param pool_timeout:
21:02:04             If set and the pool is set to block=True, then this method will
21:02:04             block for ``pool_timeout`` seconds and raise EmptyPoolError if no
21:02:04             connection is available within the time period.
21:02:04 
21:02:04         :param bool preload_content:
21:02:04             If True, the response's body will be preloaded into memory.
21:02:04 
21:02:04         :param bool decode_content:
21:02:04             If True, will attempt to decode the body based on the
21:02:04             'content-encoding' header.
21:02:04 
21:02:04         :param release_conn:
21:02:04             If False, then the urlopen call will not release the connection
21:02:04             back into the pool once a response is received (but will release if
21:02:04             you read the entire contents of the response such as when
21:02:04             `preload_content=True`). This is useful if you're not preloading
21:02:04             the response's content immediately. You will need to call
21:02:04             ``r.release_conn()`` on the response ``r`` to return the connection
21:02:04             back into the pool. If None, it takes the value of ``preload_content``
21:02:04             which defaults to ``True``.
21:02:04 
21:02:04         :param bool chunked:
21:02:04             If True, urllib3 will send the body using chunked transfer
21:02:04             encoding. Otherwise, urllib3 will send the body using the standard
21:02:04             content-length form. Defaults to False.
21:02:04 
21:02:04         :param int body_pos:
21:02:04             Position to seek to in file-like body in the event of a retry or
21:02:04             redirect. Typically this won't need to be set because urllib3 will
21:02:04             auto-populate the value when needed.
21:02:04         """
21:02:04         parsed_url = parse_url(url)
21:02:04         destination_scheme = parsed_url.scheme
21:02:04 
21:02:04         if headers is None:
21:02:04             headers = self.headers
21:02:04 
21:02:04         if not isinstance(retries, Retry):
21:02:04             retries = Retry.from_int(retries, redirect=redirect, default=self.retries)
21:02:04 
21:02:04         if release_conn is None:
21:02:04             release_conn = preload_content
21:02:04 
21:02:04         # Check host
21:02:04         if assert_same_host and not self.is_same_host(url):
21:02:04             raise HostChangedError(self, url, retries)
21:02:04 
21:02:04         # Ensure that the URL we're connecting to is properly encoded
21:02:04         if url.startswith("/"):
21:02:04             url = to_str(_encode_target(url))
21:02:04         else:
21:02:04             url = to_str(parsed_url.url)
21:02:04 
21:02:04         conn = None
21:02:04 
21:02:04         # Track whether `conn` needs to be released before
21:02:04         # returning/raising/recursing. Update this variable if necessary, and
21:02:04         # leave `release_conn` constant throughout the function. That way, if
21:02:04         # the function recurses, the original value of `release_conn` will be
21:02:04         # passed down into the recursive call, and its value will be respected.
21:02:04         #
21:02:04         # See issue #651 [1] for details.
21:02:04         #
21:02:04         # [1] 
21:02:04         release_this_conn = release_conn
21:02:04 
21:02:04         http_tunnel_required = connection_requires_http_tunnel(
21:02:04             self.proxy, self.proxy_config, destination_scheme
21:02:04         )
21:02:04 
21:02:04         # Merge the proxy headers. Only done when not using HTTP CONNECT. We
21:02:04         # have to copy the headers dict so we can safely change it without those
21:02:04         # changes being reflected in anyone else's copy.
21:02:04         if not http_tunnel_required:
21:02:04             headers = headers.copy()  # type: ignore[attr-defined]
21:02:04             headers.update(self.proxy_headers)  # type: ignore[union-attr]
21:02:04 
21:02:04         # Must keep the exception bound to a separate variable or else Python 3
21:02:04         # complains about UnboundLocalError.
21:02:04         err = None
21:02:04 
21:02:04         # Keep track of whether we cleanly exited the except block. This
21:02:04         # ensures we do proper cleanup in finally.
21:02:04         clean_exit = False
21:02:04 
21:02:04         # Rewind body position, if needed. Record current position
21:02:04         # for future rewinds in the event of a redirect/retry.
21:02:04         body_pos = set_file_position(body, body_pos)
21:02:04 
21:02:04         try:
21:02:04             # Request a connection from the queue.
21:02:04             timeout_obj = self._get_timeout(timeout)
21:02:04             conn = self._get_conn(timeout=pool_timeout)
21:02:04 
21:02:04             conn.timeout = timeout_obj.connect_timeout  # type: ignore[assignment]
21:02:04 
21:02:04             # Is this a closed/new connection that requires CONNECT tunnelling?
21:02:04             if self.proxy is not None and http_tunnel_required and conn.is_closed:
21:02:04                 try:
21:02:04                     self._prepare_proxy(conn)
21:02:04                 except (BaseSSLError, OSError, SocketTimeout) as e:
21:02:04                     self._raise_timeout(
21:02:04                         err=e, url=self.proxy.url, timeout_value=conn.timeout
21:02:04                     )
21:02:04                     raise
21:02:04 
21:02:04             # If we're going to release the connection in ``finally:``, then
21:02:04             # the response doesn't need to know about the connection. Otherwise
21:02:04             # it will also try to release it and we'll have a double-release
21:02:04             # mess.
21:02:04             response_conn = conn if not release_conn else None
21:02:04 
21:02:04             # Make the request on the HTTPConnection object
21:02:04 >           response = self._make_request(
21:02:04                 conn,
21:02:04                 method,
21:02:04                 url,
21:02:04                 timeout=timeout_obj,
21:02:04                 body=body,
21:02:04                 headers=headers,
21:02:04                 chunked=chunked,
21:02:04                 retries=retries,
21:02:04                 response_conn=response_conn,
21:02:04                 preload_content=preload_content,
21:02:04                 decode_content=decode_content,
21:02:04                 **response_kw,
21:02:04             )
21:02:04 
21:02:04 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connectionpool.py:787: 
21:02:04 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
21:02:04 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connectionpool.py:493: in _make_request
21:02:04     conn.request(
21:02:04 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:500: in request
21:02:04     self.endheaders()
21:02:04 /opt/pyenv/versions/3.11.10/lib/python3.11/http/client.py:1298: in endheaders
21:02:04     self._send_output(message_body, encode_chunked=encode_chunked)
21:02:04 /opt/pyenv/versions/3.11.10/lib/python3.11/http/client.py:1058: in _send_output
21:02:04     self.send(msg)
21:02:04 /opt/pyenv/versions/3.11.10/lib/python3.11/http/client.py:996: in send
21:02:04     self.connect()
21:02:04 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:331: in connect
21:02:04     self.sock = self._new_conn()
21:02:04     ^^^^^^^^^^^^^^^^
21:02:04 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
21:02:04 
21:02:04 self = 
21:02:04 
21:02:04     def _new_conn(self) -> socket.socket:
21:02:04         """Establish a socket connection and set nodelay settings on it.
21:02:04 
21:02:04         :return: New socket connection.
21:02:04 """ 21:02:04 try: 21:02:04 sock = connection.create_connection( 21:02:04 (self._dns_host, self.port), 21:02:04 self.timeout, 21:02:04 source_address=self.source_address, 21:02:04 socket_options=self.socket_options, 21:02:04 ) 21:02:04 except socket.gaierror as e: 21:02:04 raise NameResolutionError(self.host, self, e) from e 21:02:04 except SocketTimeout as e: 21:02:04 raise ConnectTimeoutError( 21:02:04 self, 21:02:04 f"Connection to {self.host} timed out. (connect timeout={self.timeout})", 21:02:04 ) from e 21:02:04 21:02:04 except OSError as e: 21:02:04 > raise NewConnectionError( 21:02:04 self, f"Failed to establish a new connection: {e}" 21:02:04 ) from e 21:02:04 E urllib3.exceptions.NewConnectionError: HTTPConnection(host='localhost', port=8191): Failed to establish a new connection: [Errno 111] Connection refused 21:02:04 21:02:04 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:219: NewConnectionError 21:02:04 21:02:04 The above exception was the direct cause of the following exception: 21:02:04 21:02:04 self = 21:02:04 request = , stream = False 21:02:04 timeout = Timeout(connect=30, read=30, total=None), verify = True, cert = None 21:02:04 proxies = OrderedDict() 21:02:04 21:02:04 def send( 21:02:04 self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 21:02:04 ): 21:02:04 """Sends PreparedRequest object. Returns Response object. 21:02:04 21:02:04 :param request: The :class:`PreparedRequest ` being sent. 21:02:04 :param stream: (optional) Whether to stream the request content. 21:02:04 :param timeout: (optional) How long to wait for the server to send 21:02:04 data before giving up, as a float, or a :ref:`(connect timeout, 21:02:04 read timeout) ` tuple. 
21:02:04 :type timeout: float or tuple or urllib3 Timeout object 21:02:04 :param verify: (optional) Either a boolean, in which case it controls whether 21:02:04 we verify the server's TLS certificate, or a string, in which case it 21:02:04 must be a path to a CA bundle to use 21:02:04 :param cert: (optional) Any user-provided SSL certificate to be trusted. 21:02:04 :param proxies: (optional) The proxies dictionary to apply to the request. 21:02:04 :rtype: requests.Response 21:02:04 """ 21:02:04 21:02:04 try: 21:02:04 conn = self.get_connection_with_tls_context( 21:02:04 request, verify, proxies=proxies, cert=cert 21:02:04 ) 21:02:04 except LocationValueError as e: 21:02:04 raise InvalidURL(e, request=request) 21:02:04 21:02:04 self.cert_verify(conn, request.url, verify, cert) 21:02:04 url = self.request_url(request, proxies) 21:02:04 self.add_headers( 21:02:04 request, 21:02:04 stream=stream, 21:02:04 timeout=timeout, 21:02:04 verify=verify, 21:02:04 cert=cert, 21:02:04 proxies=proxies, 21:02:04 ) 21:02:04 21:02:04 chunked = not (request.body is None or "Content-Length" in request.headers) 21:02:04 21:02:04 if isinstance(timeout, tuple): 21:02:04 try: 21:02:04 connect, read = timeout 21:02:04 timeout = TimeoutSauce(connect=connect, read=read) 21:02:04 except ValueError: 21:02:04 raise ValueError( 21:02:04 f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 21:02:04 f"or a single float to set both timeouts to the same value." 
21:02:04 ) 21:02:04 elif isinstance(timeout, TimeoutSauce): 21:02:04 pass 21:02:04 else: 21:02:04 timeout = TimeoutSauce(connect=timeout, read=timeout) 21:02:04 21:02:04 try: 21:02:04 > resp = conn.urlopen( 21:02:04 method=request.method, 21:02:04 url=url, 21:02:04 body=request.body, 21:02:04 headers=request.headers, 21:02:04 redirect=False, 21:02:04 assert_same_host=False, 21:02:04 preload_content=False, 21:02:04 decode_content=False, 21:02:04 retries=self.max_retries, 21:02:04 timeout=timeout, 21:02:04 chunked=chunked, 21:02:04 ) 21:02:04 21:02:04 ../.tox/tests121/lib/python3.11/site-packages/requests/adapters.py:644: 21:02:04 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 21:02:04 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connectionpool.py:841: in urlopen 21:02:04 retries = retries.increment( 21:02:04 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 21:02:04 21:02:04 self = Retry(total=0, connect=None, read=False, redirect=None, status=None) 21:02:04 method = 'GET' 21:02:04 url = '/rests/data/transportpce-portmapping:network/nodes=XPDRA01/mapping=XPDR1-CLIENT2' 21:02:04 response = None 21:02:04 error = NewConnectionError("HTTPConnection(host='localhost', port=8191): Failed to establish a new connection: [Errno 111] Connection refused") 21:02:04 _pool = 21:02:04 _stacktrace = 21:02:04 21:02:04 def increment( 21:02:04 self, 21:02:04 method: str | None = None, 21:02:04 url: str | None = None, 21:02:04 response: BaseHTTPResponse | None = None, 21:02:04 error: Exception | None = None, 21:02:04 _pool: ConnectionPool | None = None, 21:02:04 _stacktrace: TracebackType | None = None, 21:02:04 ) -> Self: 21:02:04 """Return a new Retry object with incremented retry counters. 21:02:04 21:02:04 :param response: A response object, or None, if the server did not 21:02:04 return a response. 
21:02:04 :type response: :class:`~urllib3.response.BaseHTTPResponse` 21:02:04 :param Exception error: An error encountered during the request, or 21:02:04 None if the response was received successfully. 21:02:04 21:02:04 :return: A new ``Retry`` object. 21:02:04 """ 21:02:04 if self.total is False and error: 21:02:04 # Disabled, indicate to re-raise the error. 21:02:04 raise reraise(type(error), error, _stacktrace) 21:02:04 21:02:04 total = self.total 21:02:04 if total is not None: 21:02:04 total -= 1 21:02:04 21:02:04 connect = self.connect 21:02:04 read = self.read 21:02:04 redirect = self.redirect 21:02:04 status_count = self.status 21:02:04 other = self.other 21:02:04 cause = "unknown" 21:02:04 status = None 21:02:04 redirect_location = None 21:02:04 21:02:04 if error and self._is_connection_error(error): 21:02:04 # Connect retry? 21:02:04 if connect is False: 21:02:04 raise reraise(type(error), error, _stacktrace) 21:02:04 elif connect is not None: 21:02:04 connect -= 1 21:02:04 21:02:04 elif error and self._is_read_error(error): 21:02:04 # Read retry? 21:02:04 if read is False or method is None or not self._is_method_retryable(method): 21:02:04 raise reraise(type(error), error, _stacktrace) 21:02:04 elif read is not None: 21:02:04 read -= 1 21:02:04 21:02:04 elif error: 21:02:04 # Other retry? 21:02:04 if other is not None: 21:02:04 other -= 1 21:02:04 21:02:04 elif response and response.get_redirect_location(): 21:02:04 # Redirect retry? 
21:02:04             if redirect is not None:
21:02:04                 redirect -= 1
21:02:04             cause = "too many redirects"
21:02:04             response_redirect_location = response.get_redirect_location()
21:02:04             if response_redirect_location:
21:02:04                 redirect_location = response_redirect_location
21:02:04             status = response.status
21:02:04 
21:02:04         else:
21:02:04             # Incrementing because of a server error like a 500 in
21:02:04             # status_forcelist and the given method is in the allowed_methods
21:02:04             cause = ResponseError.GENERIC_ERROR
21:02:04             if response and response.status:
21:02:04                 if status_count is not None:
21:02:04                     status_count -= 1
21:02:04                 cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status)
21:02:04                 status = response.status
21:02:04 
21:02:04         history = self.history + (
21:02:04             RequestHistory(method, url, error, status, redirect_location),
21:02:04         )
21:02:04 
21:02:04         new_retry = self.new(
21:02:04             total=total,
21:02:04             connect=connect,
21:02:04             read=read,
21:02:04             redirect=redirect,
21:02:04             status=status_count,
21:02:04             other=other,
21:02:04             history=history,
21:02:04         )
21:02:04 
21:02:04         if new_retry.is_exhausted():
21:02:04             reason = error or ResponseError(cause)
21:02:04 >           raise MaxRetryError(_pool, url, reason) from reason  # type: ignore[arg-type]
21:02:04             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
21:02:04 E   urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=8191): Max retries exceeded with url: /rests/data/transportpce-portmapping:network/nodes=XPDRA01/mapping=XPDR1-CLIENT2 (Caused by NewConnectionError("HTTPConnection(host='localhost', port=8191): Failed to establish a new connection: [Errno 111] Connection refused"))
21:02:04 
21:02:04 ../.tox/tests121/lib/python3.11/site-packages/urllib3/util/retry.py:535: MaxRetryError
21:02:04 
21:02:04 During handling of the above exception, another exception occurred:
21:02:04 
21:02:04 self = 
21:02:04 
21:02:04     def test_13_xpdr_portmapping_CLIENT2(self):
21:02:04 >       response = test_utils.get_portmapping_node_attr("XPDRA01", "mapping", "XPDR1-CLIENT2")
21:02:04         ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
21:02:04 
21:02:04 transportpce_tests/1.2.1/test01_portmapping.py:159:
21:02:04 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
21:02:04 transportpce_tests/common/test_utils.py:519: in get_portmapping_node_attr
21:02:04     response = get_request(target_url)
21:02:04     ^^^^^^^^^^^^^^^^^^^^^^^
21:02:04 transportpce_tests/common/test_utils.py:117: in get_request
21:02:04     return requests.request(
21:02:04 ../.tox/tests121/lib/python3.11/site-packages/requests/api.py:59: in request
21:02:04     return session.request(method=method, url=url, **kwargs)
21:02:04     ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
21:02:04 ../.tox/tests121/lib/python3.11/site-packages/requests/sessions.py:589: in request
21:02:04     resp = self.send(prep, **send_kwargs)
21:02:04     ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
21:02:04 ../.tox/tests121/lib/python3.11/site-packages/requests/sessions.py:703: in send
21:02:04     r = adapter.send(request, **kwargs)
21:02:04     ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
21:02:04 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
21:02:04 
21:02:04 self = 
21:02:04 request = , stream = False
21:02:04 timeout = Timeout(connect=30, read=30, total=None), verify = True, cert = None
21:02:04 proxies = OrderedDict()
21:02:04 
21:02:04     def send(
21:02:04         self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None
21:02:04     ):
21:02:04         """Sends PreparedRequest object. Returns Response object.
21:02:04 
21:02:04         :param request: The :class:`PreparedRequest ` being sent.
21:02:04         :param stream: (optional) Whether to stream the request content.
21:02:04         :param timeout: (optional) How long to wait for the server to send
21:02:04             data before giving up, as a float, or a :ref:`(connect timeout,
21:02:04             read timeout) ` tuple.
21:02:04         :type timeout: float or tuple or urllib3 Timeout object
21:02:04         :param verify: (optional) Either a boolean, in which case it controls whether
21:02:04             we verify the server's TLS certificate, or a string, in which case it
21:02:04             must be a path to a CA bundle to use
21:02:04         :param cert: (optional) Any user-provided SSL certificate to be trusted.
21:02:04         :param proxies: (optional) The proxies dictionary to apply to the request.
21:02:04         :rtype: requests.Response
21:02:04         """
21:02:04 
21:02:04         try:
21:02:04             conn = self.get_connection_with_tls_context(
21:02:04                 request, verify, proxies=proxies, cert=cert
21:02:04             )
21:02:04         except LocationValueError as e:
21:02:04             raise InvalidURL(e, request=request)
21:02:04 
21:02:04         self.cert_verify(conn, request.url, verify, cert)
21:02:04         url = self.request_url(request, proxies)
21:02:04         self.add_headers(
21:02:04             request,
21:02:04             stream=stream,
21:02:04             timeout=timeout,
21:02:04             verify=verify,
21:02:04             cert=cert,
21:02:04             proxies=proxies,
21:02:04         )
21:02:04 
21:02:04         chunked = not (request.body is None or "Content-Length" in request.headers)
21:02:04 
21:02:04         if isinstance(timeout, tuple):
21:02:04             try:
21:02:04                 connect, read = timeout
21:02:04                 timeout = TimeoutSauce(connect=connect, read=read)
21:02:04             except ValueError:
21:02:04                 raise ValueError(
21:02:04                     f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, "
21:02:04                     f"or a single float to set both timeouts to the same value."
21:02:04                 )
21:02:04         elif isinstance(timeout, TimeoutSauce):
21:02:04             pass
21:02:04         else:
21:02:04             timeout = TimeoutSauce(connect=timeout, read=timeout)
21:02:04 
21:02:04         try:
21:02:04             resp = conn.urlopen(
21:02:04                 method=request.method,
21:02:04                 url=url,
21:02:04                 body=request.body,
21:02:04                 headers=request.headers,
21:02:04                 redirect=False,
21:02:04                 assert_same_host=False,
21:02:04                 preload_content=False,
21:02:04                 decode_content=False,
21:02:04                 retries=self.max_retries,
21:02:04                 timeout=timeout,
21:02:04                 chunked=chunked,
21:02:04             )
21:02:04 
21:02:04         except (ProtocolError, OSError) as err:
21:02:04             raise ConnectionError(err, request=request)
21:02:04 
21:02:04         except MaxRetryError as e:
21:02:04             if isinstance(e.reason, ConnectTimeoutError):
21:02:04                 # TODO: Remove this in 3.0.0: see #2811
21:02:04                 if not isinstance(e.reason, NewConnectionError):
21:02:04                     raise ConnectTimeout(e, request=request)
21:02:04 
21:02:04             if isinstance(e.reason, ResponseError):
21:02:04                 raise RetryError(e, request=request)
21:02:04 
21:02:04             if isinstance(e.reason, _ProxyError):
21:02:04                 raise ProxyError(e, request=request)
21:02:04 
21:02:04             if isinstance(e.reason, _SSLError):
21:02:04                 # This branch is for urllib3 v1.22 and later.
21:02:04                 raise SSLError(e, request=request)
21:02:04 
21:02:04 >       raise ConnectionError(e, request=request)
21:02:04 E       requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=8191): Max retries exceeded with url: /rests/data/transportpce-portmapping:network/nodes=XPDRA01/mapping=XPDR1-CLIENT2 (Caused by NewConnectionError("HTTPConnection(host='localhost', port=8191): Failed to establish a new connection: [Errno 111] Connection refused"))
21:02:04 
21:02:04 ../.tox/tests121/lib/python3.11/site-packages/requests/adapters.py:677: ConnectionError
21:02:04 ----------------------------- Captured stdout call -----------------------------
21:02:04 execution of test_13_xpdr_portmapping_CLIENT2
21:02:04 _________ TestTransportPCEPortmapping.test_14_xpdr_portmapping_CLIENT3 _________
21:02:04 
21:02:04 self = 
21:02:04 
21:02:04     def _new_conn(self) -> socket.socket:
21:02:04         """Establish a socket connection and set nodelay settings on it.
21:02:04 
21:02:04         :return: New socket connection.
21:02:04         """
21:02:04         try:
21:02:04 >           sock = connection.create_connection(
21:02:04                 (self._dns_host, self.port),
21:02:04                 self.timeout,
21:02:04                 source_address=self.source_address,
21:02:04                 socket_options=self.socket_options,
21:02:04             )
21:02:04 
21:02:04 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:204:
21:02:04 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
21:02:04 ../.tox/tests121/lib/python3.11/site-packages/urllib3/util/connection.py:85: in create_connection
21:02:04     raise err
21:02:04 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
21:02:04 
21:02:04 address = ('localhost', 8191), timeout = 30, source_address = None
21:02:04 socket_options = [(6, 1, 1)]
21:02:04 
21:02:04 def create_connection(
21:02:04     address: tuple[str, int],
21:02:04     timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT,
21:02:04     source_address: tuple[str, int] | None = None,
21:02:04     socket_options: _TYPE_SOCKET_OPTIONS | None = None,
21:02:04 ) -> socket.socket:
21:02:04     """Connect to *address* and return the socket object.
21:02:04 
21:02:04     Convenience function. Connect to *address* (a 2-tuple ``(host,
21:02:04     port)``) and return the socket object. Passing the optional
21:02:04     *timeout* parameter will set the timeout on the socket instance
21:02:04     before attempting to connect. If no *timeout* is supplied, the
21:02:04     global default timeout setting returned by :func:`socket.getdefaulttimeout`
21:02:04     is used. If *source_address* is set it must be a tuple of (host, port)
21:02:04     for the socket to bind as a source address before making the connection.
21:02:04     An host of '' or port 0 tells the OS to use the default.
21:02:04     """
21:02:04 
21:02:04     host, port = address
21:02:04     if host.startswith("["):
21:02:04         host = host.strip("[]")
21:02:04     err = None
21:02:04 
21:02:04     # Using the value from allowed_gai_family() in the context of getaddrinfo lets
21:02:04     # us select whether to work with IPv4 DNS records, IPv6 records, or both.
21:02:04     # The original create_connection function always returns all records.
21:02:04     family = allowed_gai_family()
21:02:04 
21:02:04     try:
21:02:04         host.encode("idna")
21:02:04     except UnicodeError:
21:02:04         raise LocationParseError(f"'{host}', label empty or too long") from None
21:02:04 
21:02:04     for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM):
21:02:04         af, socktype, proto, canonname, sa = res
21:02:04         sock = None
21:02:04         try:
21:02:04             sock = socket.socket(af, socktype, proto)
21:02:04 
21:02:04             # If provided, set socket level options before connecting.
21:02:04             _set_socket_options(sock, socket_options)
21:02:04 
21:02:04             if timeout is not _DEFAULT_TIMEOUT:
21:02:04                 sock.settimeout(timeout)
21:02:04             if source_address:
21:02:04                 sock.bind(source_address)
21:02:04 >           sock.connect(sa)
21:02:04 E           ConnectionRefusedError: [Errno 111] Connection refused
21:02:04 
21:02:04 ../.tox/tests121/lib/python3.11/site-packages/urllib3/util/connection.py:73: ConnectionRefusedError
21:02:04 
21:02:04 The above exception was the direct cause of the following exception:
21:02:04 
21:02:04 self = 
21:02:04 method = 'GET'
21:02:04 url = '/rests/data/transportpce-portmapping:network/nodes=XPDRA01/mapping=XPDR1-CLIENT3'
21:02:04 body = None
21:02:04 headers = {'User-Agent': 'python-requests/2.32.5', 'Accept-Encoding': 'gzip, deflate', 'Accept': 'application/json', 'Connection': 'keep-alive', 'Content-Type': 'application/json', 'Authorization': 'Basic YWRtaW46YWRtaW4='}
21:02:04 retries = Retry(total=0, connect=None, read=False, redirect=None, status=None)
21:02:04 redirect = False, assert_same_host = False
21:02:04 timeout = Timeout(connect=30, read=30, total=None), pool_timeout = None
21:02:04 release_conn = False, chunked = False, body_pos = None, preload_content = False
21:02:04 decode_content = False, response_kw = {}
21:02:04 parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/rests/data/transportpce-portmapping:network/nodes=XPDRA01/mapping=XPDR1-CLIENT3', query=None, fragment=None)
21:02:04 destination_scheme = None, conn = None, release_this_conn = True
21:02:04 http_tunnel_required = False, err = None, clean_exit = False
21:02:04 
21:02:04     def urlopen(  # type: ignore[override]
21:02:04         self,
21:02:04         method: str,
21:02:04         url: str,
21:02:04         body: _TYPE_BODY | None = None,
21:02:04         headers: typing.Mapping[str, str] | None = None,
21:02:04         retries: Retry | bool | int | None = None,
21:02:04         redirect: bool = True,
21:02:04         assert_same_host: bool = True,
21:02:04         timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT,
21:02:04         pool_timeout: int | None = None,
21:02:04         release_conn: bool | None = None,
21:02:04         chunked: bool = False,
21:02:04         body_pos: _TYPE_BODY_POSITION | None = None,
21:02:04         preload_content: bool = True,
21:02:04         decode_content: bool = True,
21:02:04         **response_kw: typing.Any,
21:02:04     ) -> BaseHTTPResponse:
21:02:04         """
21:02:04         Get a connection from the pool and perform an HTTP request. This is the
21:02:04         lowest level call for making a request, so you'll need to specify all
21:02:04         the raw details.
21:02:04 
21:02:04         .. note::
21:02:04 
21:02:04            More commonly, it's appropriate to use a convenience method
21:02:04            such as :meth:`request`.
21:02:04 
21:02:04         .. note::
21:02:04 
21:02:04            `release_conn` will only behave as expected if
21:02:04            `preload_content=False` because we want to make
21:02:04            `preload_content=False` the default behaviour someday soon without
21:02:04            breaking backwards compatibility.
21:02:04 
21:02:04         :param method:
21:02:04             HTTP request method (such as GET, POST, PUT, etc.)
21:02:04 
21:02:04         :param url:
21:02:04             The URL to perform the request on.
21:02:04 
21:02:04         :param body:
21:02:04             Data to send in the request body, either :class:`str`, :class:`bytes`,
21:02:04             an iterable of :class:`str`/:class:`bytes`, or a file-like object.
21:02:04 
21:02:04         :param headers:
21:02:04             Dictionary of custom headers to send, such as User-Agent,
21:02:04             If-None-Match, etc. If None, pool headers are used. If provided,
21:02:04             these headers completely replace any pool-specific headers.
21:02:04 
21:02:04         :param retries:
21:02:04             Configure the number of retries to allow before raising a
21:02:04             :class:`~urllib3.exceptions.MaxRetryError` exception.
21:02:04 
21:02:04             If ``None`` (default) will retry 3 times, see ``Retry.DEFAULT``. Pass a
21:02:04             :class:`~urllib3.util.retry.Retry` object for fine-grained control
21:02:04             over different types of retries.
21:02:04             Pass an integer number to retry connection errors that many times,
21:02:04             but no other types of errors. Pass zero to never retry.
21:02:04 
21:02:04             If ``False``, then retries are disabled and any exception is raised
21:02:04             immediately. Also, instead of raising a MaxRetryError on redirects,
21:02:04             the redirect response will be returned.
21:02:04 
21:02:04         :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int.
21:02:04 
21:02:04         :param redirect:
21:02:04             If True, automatically handle redirects (status codes 301, 302,
21:02:04             303, 307, 308). Each redirect counts as a retry. Disabling retries
21:02:04             will disable redirect, too.
21:02:04 
21:02:04         :param assert_same_host:
21:02:04             If ``True``, will make sure that the host of the pool requests is
21:02:04             consistent else will raise HostChangedError. When ``False``, you can
21:02:04             use the pool on an HTTP proxy and request foreign hosts.
21:02:04 
21:02:04         :param timeout:
21:02:04             If specified, overrides the default timeout for this one
21:02:04             request. It may be a float (in seconds) or an instance of
21:02:04             :class:`urllib3.util.Timeout`.
21:02:04 
21:02:04         :param pool_timeout:
21:02:04             If set and the pool is set to block=True, then this method will
21:02:04             block for ``pool_timeout`` seconds and raise EmptyPoolError if no
21:02:04             connection is available within the time period.
21:02:04 
21:02:04         :param bool preload_content:
21:02:04             If True, the response's body will be preloaded into memory.
21:02:04 
21:02:04         :param bool decode_content:
21:02:04             If True, will attempt to decode the body based on the
21:02:04             'content-encoding' header.
21:02:04 
21:02:04         :param release_conn:
21:02:04             If False, then the urlopen call will not release the connection
21:02:04             back into the pool once a response is received (but will release if
21:02:04             you read the entire contents of the response such as when
21:02:04             `preload_content=True`). This is useful if you're not preloading
21:02:04             the response's content immediately. You will need to call
21:02:04             ``r.release_conn()`` on the response ``r`` to return the connection
21:02:04             back into the pool. If None, it takes the value of ``preload_content``
21:02:04             which defaults to ``True``.
21:02:04 
21:02:04         :param bool chunked:
21:02:04             If True, urllib3 will send the body using chunked transfer
21:02:04             encoding. Otherwise, urllib3 will send the body using the standard
21:02:04             content-length form. Defaults to False.
21:02:04 
21:02:04         :param int body_pos:
21:02:04             Position to seek to in file-like body in the event of a retry or
21:02:04             redirect. Typically this won't need to be set because urllib3 will
21:02:04             auto-populate the value when needed.
21:02:04         """
21:02:04         parsed_url = parse_url(url)
21:02:04         destination_scheme = parsed_url.scheme
21:02:04 
21:02:04         if headers is None:
21:02:04             headers = self.headers
21:02:04 
21:02:04         if not isinstance(retries, Retry):
21:02:04             retries = Retry.from_int(retries, redirect=redirect, default=self.retries)
21:02:04 
21:02:04         if release_conn is None:
21:02:04             release_conn = preload_content
21:02:04 
21:02:04         # Check host
21:02:04         if assert_same_host and not self.is_same_host(url):
21:02:04             raise HostChangedError(self, url, retries)
21:02:04 
21:02:04         # Ensure that the URL we're connecting to is properly encoded
21:02:04         if url.startswith("/"):
21:02:04             url = to_str(_encode_target(url))
21:02:04         else:
21:02:04             url = to_str(parsed_url.url)
21:02:04 
21:02:04         conn = None
21:02:04 
21:02:04         # Track whether `conn` needs to be released before
21:02:04         # returning/raising/recursing. Update this variable if necessary, and
21:02:04         # leave `release_conn` constant throughout the function. That way, if
21:02:04         # the function recurses, the original value of `release_conn` will be
21:02:04         # passed down into the recursive call, and its value will be respected.
21:02:04         #
21:02:04         # See issue #651 [1] for details.
21:02:04         #
21:02:04         # [1] 
21:02:04         release_this_conn = release_conn
21:02:04 
21:02:04         http_tunnel_required = connection_requires_http_tunnel(
21:02:04             self.proxy, self.proxy_config, destination_scheme
21:02:04         )
21:02:04 
21:02:04         # Merge the proxy headers. Only done when not using HTTP CONNECT. We
21:02:04         # have to copy the headers dict so we can safely change it without those
21:02:04         # changes being reflected in anyone else's copy.
21:02:04         if not http_tunnel_required:
21:02:04             headers = headers.copy()  # type: ignore[attr-defined]
21:02:04             headers.update(self.proxy_headers)  # type: ignore[union-attr]
21:02:04 
21:02:04         # Must keep the exception bound to a separate variable or else Python 3
21:02:04         # complains about UnboundLocalError.
21:02:04         err = None
21:02:04 
21:02:04         # Keep track of whether we cleanly exited the except block. This
21:02:04         # ensures we do proper cleanup in finally.
21:02:04         clean_exit = False
21:02:04 
21:02:04         # Rewind body position, if needed. Record current position
21:02:04         # for future rewinds in the event of a redirect/retry.
21:02:04         body_pos = set_file_position(body, body_pos)
21:02:04 
21:02:04         try:
21:02:04             # Request a connection from the queue.
21:02:04             timeout_obj = self._get_timeout(timeout)
21:02:04             conn = self._get_conn(timeout=pool_timeout)
21:02:04 
21:02:04             conn.timeout = timeout_obj.connect_timeout  # type: ignore[assignment]
21:02:04 
21:02:04             # Is this a closed/new connection that requires CONNECT tunnelling?
21:02:04             if self.proxy is not None and http_tunnel_required and conn.is_closed:
21:02:04                 try:
21:02:04                     self._prepare_proxy(conn)
21:02:04                 except (BaseSSLError, OSError, SocketTimeout) as e:
21:02:04                     self._raise_timeout(
21:02:04                         err=e, url=self.proxy.url, timeout_value=conn.timeout
21:02:04                     )
21:02:04                     raise
21:02:04 
21:02:04             # If we're going to release the connection in ``finally:``, then
21:02:04             # the response doesn't need to know about the connection. Otherwise
21:02:04             # it will also try to release it and we'll have a double-release
21:02:04             # mess.
21:02:04             response_conn = conn if not release_conn else None
21:02:04 
21:02:04             # Make the request on the HTTPConnection object
21:02:04 >           response = self._make_request(
21:02:04                 conn,
21:02:04                 method,
21:02:04                 url,
21:02:04                 timeout=timeout_obj,
21:02:04                 body=body,
21:02:04                 headers=headers,
21:02:04                 chunked=chunked,
21:02:04                 retries=retries,
21:02:04                 response_conn=response_conn,
21:02:04                 preload_content=preload_content,
21:02:04                 decode_content=decode_content,
21:02:04                 **response_kw,
21:02:04             )
21:02:04 
21:02:04 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connectionpool.py:787:
21:02:04 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
21:02:04 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connectionpool.py:493: in _make_request
21:02:04     conn.request(
21:02:04 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:500: in request
21:02:04     self.endheaders()
21:02:04 /opt/pyenv/versions/3.11.10/lib/python3.11/http/client.py:1298: in endheaders
21:02:04     self._send_output(message_body, encode_chunked=encode_chunked)
21:02:04 /opt/pyenv/versions/3.11.10/lib/python3.11/http/client.py:1058: in _send_output
21:02:04     self.send(msg)
21:02:04 /opt/pyenv/versions/3.11.10/lib/python3.11/http/client.py:996: in send
21:02:04     self.connect()
21:02:04 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:331: in connect
21:02:04     self.sock = self._new_conn()
21:02:04                 ^^^^^^^^^^^^^^^^
21:02:04 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
21:02:04 
21:02:04 self = 
21:02:04 
21:02:04     def _new_conn(self) -> socket.socket:
21:02:04         """Establish a socket connection and set nodelay settings on it.
21:02:04 
21:02:04         :return: New socket connection.
21:02:04         """
21:02:04         try:
21:02:04             sock = connection.create_connection(
21:02:04                 (self._dns_host, self.port),
21:02:04                 self.timeout,
21:02:04                 source_address=self.source_address,
21:02:04                 socket_options=self.socket_options,
21:02:04             )
21:02:04         except socket.gaierror as e:
21:02:04             raise NameResolutionError(self.host, self, e) from e
21:02:04         except SocketTimeout as e:
21:02:04             raise ConnectTimeoutError(
21:02:04                 self,
21:02:04                 f"Connection to {self.host} timed out. (connect timeout={self.timeout})",
21:02:04             ) from e
21:02:04 
21:02:04         except OSError as e:
21:02:04 >           raise NewConnectionError(
21:02:04                 self, f"Failed to establish a new connection: {e}"
21:02:04             ) from e
21:02:04 E           urllib3.exceptions.NewConnectionError: HTTPConnection(host='localhost', port=8191): Failed to establish a new connection: [Errno 111] Connection refused
21:02:04 
21:02:04 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:219: NewConnectionError
21:02:04 
21:02:04 The above exception was the direct cause of the following exception:
21:02:04 
21:02:04 self = 
21:02:04 request = , stream = False
21:02:04 timeout = Timeout(connect=30, read=30, total=None), verify = True, cert = None
21:02:04 proxies = OrderedDict()
21:02:04 
21:02:04     def send(
21:02:04         self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None
21:02:04     ):
21:02:04         """Sends PreparedRequest object. Returns Response object.
21:02:04 
21:02:04         :param request: The :class:`PreparedRequest ` being sent.
21:02:04         :param stream: (optional) Whether to stream the request content.
21:02:04         :param timeout: (optional) How long to wait for the server to send
21:02:04             data before giving up, as a float, or a :ref:`(connect timeout,
21:02:04             read timeout) ` tuple.
21:02:04         :type timeout: float or tuple or urllib3 Timeout object
21:02:04         :param verify: (optional) Either a boolean, in which case it controls whether
21:02:04             we verify the server's TLS certificate, or a string, in which case it
21:02:04             must be a path to a CA bundle to use
21:02:04         :param cert: (optional) Any user-provided SSL certificate to be trusted.
21:02:04         :param proxies: (optional) The proxies dictionary to apply to the request.
21:02:04         :rtype: requests.Response
21:02:04         """
21:02:04 
21:02:04         try:
21:02:04             conn = self.get_connection_with_tls_context(
21:02:04                 request, verify, proxies=proxies, cert=cert
21:02:04             )
21:02:04         except LocationValueError as e:
21:02:04             raise InvalidURL(e, request=request)
21:02:04 
21:02:04         self.cert_verify(conn, request.url, verify, cert)
21:02:04         url = self.request_url(request, proxies)
21:02:04         self.add_headers(
21:02:04             request,
21:02:04             stream=stream,
21:02:04             timeout=timeout,
21:02:04             verify=verify,
21:02:04             cert=cert,
21:02:04             proxies=proxies,
21:02:04         )
21:02:04 
21:02:04         chunked = not (request.body is None or "Content-Length" in request.headers)
21:02:04 
21:02:04         if isinstance(timeout, tuple):
21:02:04             try:
21:02:04                 connect, read = timeout
21:02:04                 timeout = TimeoutSauce(connect=connect, read=read)
21:02:04             except ValueError:
21:02:04                 raise ValueError(
21:02:04                     f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, "
21:02:04                     f"or a single float to set both timeouts to the same value."
21:02:04                 )
21:02:04         elif isinstance(timeout, TimeoutSauce):
21:02:04             pass
21:02:04         else:
21:02:04             timeout = TimeoutSauce(connect=timeout, read=timeout)
21:02:04 
21:02:04         try:
21:02:04 >           resp = conn.urlopen(
21:02:04                 method=request.method,
21:02:04                 url=url,
21:02:04                 body=request.body,
21:02:04                 headers=request.headers,
21:02:04                 redirect=False,
21:02:04                 assert_same_host=False,
21:02:04                 preload_content=False,
21:02:04                 decode_content=False,
21:02:04                 retries=self.max_retries,
21:02:04                 timeout=timeout,
21:02:04                 chunked=chunked,
21:02:04             )
21:02:04 
21:02:04 ../.tox/tests121/lib/python3.11/site-packages/requests/adapters.py:644:
21:02:04 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
21:02:04 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connectionpool.py:841: in urlopen
21:02:04     retries = retries.increment(
21:02:04 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
21:02:04 
21:02:04 self = Retry(total=0, connect=None, read=False, redirect=None, status=None)
21:02:04 method = 'GET'
21:02:04 url = '/rests/data/transportpce-portmapping:network/nodes=XPDRA01/mapping=XPDR1-CLIENT3'
21:02:04 response = None
21:02:04 error = NewConnectionError("HTTPConnection(host='localhost', port=8191): Failed to establish a new connection: [Errno 111] Connection refused")
21:02:04 _pool = 
21:02:04 _stacktrace = 
21:02:04 
21:02:04     def increment(
21:02:04         self,
21:02:04         method: str | None = None,
21:02:04         url: str | None = None,
21:02:04         response: BaseHTTPResponse | None = None,
21:02:04         error: Exception | None = None,
21:02:04         _pool: ConnectionPool | None = None,
21:02:04         _stacktrace: TracebackType | None = None,
21:02:04     ) -> Self:
21:02:04         """Return a new Retry object with incremented retry counters.
21:02:04 
21:02:04         :param response: A response object, or None, if the server did not
21:02:04             return a response.
21:02:04         :type response: :class:`~urllib3.response.BaseHTTPResponse`
21:02:04         :param Exception error: An error encountered during the request, or
21:02:04             None if the response was received successfully.
21:02:04 
21:02:04         :return: A new ``Retry`` object.
21:02:04         """
21:02:04         if self.total is False and error:
21:02:04             # Disabled, indicate to re-raise the error.
21:02:04             raise reraise(type(error), error, _stacktrace)
21:02:04 
21:02:04         total = self.total
21:02:04         if total is not None:
21:02:04             total -= 1
21:02:04 
21:02:04         connect = self.connect
21:02:04         read = self.read
21:02:04         redirect = self.redirect
21:02:04         status_count = self.status
21:02:04         other = self.other
21:02:04         cause = "unknown"
21:02:04         status = None
21:02:04         redirect_location = None
21:02:04 
21:02:04         if error and self._is_connection_error(error):
21:02:04             # Connect retry?
21:02:04             if connect is False:
21:02:04                 raise reraise(type(error), error, _stacktrace)
21:02:04             elif connect is not None:
21:02:04                 connect -= 1
21:02:04 
21:02:04         elif error and self._is_read_error(error):
21:02:04             # Read retry?
21:02:04             if read is False or method is None or not self._is_method_retryable(method):
21:02:04                 raise reraise(type(error), error, _stacktrace)
21:02:04             elif read is not None:
21:02:04                 read -= 1
21:02:04 
21:02:04         elif error:
21:02:04             # Other retry?
21:02:04             if other is not None:
21:02:04                 other -= 1
21:02:04 
21:02:04         elif response and response.get_redirect_location():
21:02:04             # Redirect retry?
21:02:04             if redirect is not None:
21:02:04                 redirect -= 1
21:02:04             cause = "too many redirects"
21:02:04             response_redirect_location = response.get_redirect_location()
21:02:04             if response_redirect_location:
21:02:04                 redirect_location = response_redirect_location
21:02:04             status = response.status
21:02:04 
21:02:04         else:
21:02:04             # Incrementing because of a server error like a 500 in
21:02:04             # status_forcelist and the given method is in the allowed_methods
21:02:04             cause = ResponseError.GENERIC_ERROR
21:02:04             if response and response.status:
21:02:04                 if status_count is not None:
21:02:04                     status_count -= 1
21:02:04                 cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status)
21:02:04                 status = response.status
21:02:04 
21:02:04         history = self.history + (
21:02:04             RequestHistory(method, url, error, status, redirect_location),
21:02:04         )
21:02:04 
21:02:04         new_retry = self.new(
21:02:04             total=total,
21:02:04             connect=connect,
21:02:04             read=read,
21:02:04             redirect=redirect,
21:02:04             status=status_count,
21:02:04             other=other,
21:02:04             history=history,
21:02:04         )
21:02:04 
21:02:04         if new_retry.is_exhausted():
21:02:04             reason = error or ResponseError(cause)
21:02:04 >           raise MaxRetryError(_pool, url, reason) from reason  # type: ignore[arg-type]
21:02:04             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
21:02:04 E   urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=8191): Max retries exceeded with url: /rests/data/transportpce-portmapping:network/nodes=XPDRA01/mapping=XPDR1-CLIENT3 (Caused by NewConnectionError("HTTPConnection(host='localhost', port=8191): Failed to establish a new connection: [Errno 111] Connection refused"))
21:02:04 
21:02:04 ../.tox/tests121/lib/python3.11/site-packages/urllib3/util/retry.py:535: MaxRetryError
21:02:04 
21:02:04 During handling of the above exception, another exception occurred:
21:02:04 
21:02:04 self = 
21:02:04 
21:02:04     def test_14_xpdr_portmapping_CLIENT3(self):
21:02:04 >       response = test_utils.get_portmapping_node_attr("XPDRA01", "mapping", "XPDR1-CLIENT3")
21:02:04         ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
21:02:04 
21:02:04 transportpce_tests/1.2.1/test01_portmapping.py:170:
21:02:04 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
21:02:04 transportpce_tests/common/test_utils.py:519: in get_portmapping_node_attr
21:02:04     response = get_request(target_url)
21:02:04     ^^^^^^^^^^^^^^^^^^^^^^^
21:02:04 transportpce_tests/common/test_utils.py:117: in get_request
21:02:04     return requests.request(
21:02:04 ../.tox/tests121/lib/python3.11/site-packages/requests/api.py:59: in request
21:02:04     return session.request(method=method, url=url, **kwargs)
21:02:04     ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
21:02:04 ../.tox/tests121/lib/python3.11/site-packages/requests/sessions.py:589: in request
21:02:04     resp = self.send(prep, **send_kwargs)
21:02:04     ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
21:02:04 ../.tox/tests121/lib/python3.11/site-packages/requests/sessions.py:703: in send
21:02:04     r = adapter.send(request, **kwargs)
21:02:04     ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
21:02:04 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
21:02:04 
21:02:04 self = 
21:02:04 request = , stream = False
21:02:04 timeout = Timeout(connect=30, read=30, total=None), verify = True, cert = None
21:02:04 proxies = OrderedDict()
21:02:04 
21:02:04     def send(
21:02:04         self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None
21:02:04     ):
21:02:04         """Sends PreparedRequest object. Returns Response object.
21:02:04 
21:02:04         :param request: The :class:`PreparedRequest ` being sent.
21:02:04         :param stream: (optional) Whether to stream the request content.
21:02:04         :param timeout: (optional) How long to wait for the server to send
21:02:04             data before giving up, as a float, or a :ref:`(connect timeout,
21:02:04             read timeout) ` tuple.
21:02:04         :type timeout: float or tuple or urllib3 Timeout object
21:02:04         :param verify: (optional) Either a boolean, in which case it controls whether
21:02:04             we verify the server's TLS certificate, or a string, in which case it
21:02:04             must be a path to a CA bundle to use
21:02:04         :param cert: (optional) Any user-provided SSL certificate to be trusted.
21:02:04         :param proxies: (optional) The proxies dictionary to apply to the request.
21:02:04         :rtype: requests.Response
21:02:04         """
21:02:04 
21:02:04         try:
21:02:04             conn = self.get_connection_with_tls_context(
21:02:04                 request, verify, proxies=proxies, cert=cert
21:02:04             )
21:02:04         except LocationValueError as e:
21:02:04             raise InvalidURL(e, request=request)
21:02:04 
21:02:04         self.cert_verify(conn, request.url, verify, cert)
21:02:04         url = self.request_url(request, proxies)
21:02:04         self.add_headers(
21:02:04             request,
21:02:04             stream=stream,
21:02:04             timeout=timeout,
21:02:04             verify=verify,
21:02:04             cert=cert,
21:02:04             proxies=proxies,
21:02:04         )
21:02:04 
21:02:04         chunked = not (request.body is None or "Content-Length" in request.headers)
21:02:04 
21:02:04         if isinstance(timeout, tuple):
21:02:04             try:
21:02:04                 connect, read = timeout
21:02:04                 timeout = TimeoutSauce(connect=connect, read=read)
21:02:04             except ValueError:
21:02:04                 raise ValueError(
21:02:04                     f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, "
21:02:04                     f"or a single float to set both timeouts to the same value."
21:02:04                 )
21:02:04         elif isinstance(timeout, TimeoutSauce):
21:02:04             pass
21:02:04         else:
21:02:04             timeout = TimeoutSauce(connect=timeout, read=timeout)
21:02:04 
21:02:04         try:
21:02:04             resp = conn.urlopen(
21:02:04                 method=request.method,
21:02:04                 url=url,
21:02:04                 body=request.body,
21:02:04                 headers=request.headers,
21:02:04                 redirect=False,
21:02:04                 assert_same_host=False,
21:02:04                 preload_content=False,
21:02:04                 decode_content=False,
21:02:04                 retries=self.max_retries,
21:02:04                 timeout=timeout,
21:02:04                 chunked=chunked,
21:02:04             )
21:02:04 
21:02:04         except (ProtocolError, OSError) as err:
21:02:04             raise ConnectionError(err, request=request)
21:02:04 
21:02:04         except MaxRetryError as e:
21:02:04             if isinstance(e.reason, ConnectTimeoutError):
21:02:04                 # TODO: Remove this in 3.0.0: see #2811
21:02:04                 if not isinstance(e.reason, NewConnectionError):
21:02:04                     raise ConnectTimeout(e, request=request)
21:02:04 
21:02:04             if isinstance(e.reason, ResponseError):
21:02:04                 raise RetryError(e, request=request)
21:02:04 
21:02:04             if isinstance(e.reason, _ProxyError):
21:02:04                 raise ProxyError(e, request=request)
21:02:04 
21:02:04             if isinstance(e.reason, _SSLError):
21:02:04                 # This branch is for urllib3 v1.22 and later.
21:02:04 raise SSLError(e, request=request) 21:02:04 21:02:04 > raise ConnectionError(e, request=request) 21:02:04 E requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=8191): Max retries exceeded with url: /rests/data/transportpce-portmapping:network/nodes=XPDRA01/mapping=XPDR1-CLIENT3 (Caused by NewConnectionError("HTTPConnection(host='localhost', port=8191): Failed to establish a new connection: [Errno 111] Connection refused")) 21:02:04 21:02:04 ../.tox/tests121/lib/python3.11/site-packages/requests/adapters.py:677: ConnectionError 21:02:04 ----------------------------- Captured stdout call ----------------------------- 21:02:04 execution of test_14_xpdr_portmapping_CLIENT3 21:02:04 _________ TestTransportPCEPortmapping.test_15_xpdr_portmapping_CLIENT4 _________ 21:02:04 21:02:04 self = 21:02:04 21:02:04 def _new_conn(self) -> socket.socket: 21:02:04 """Establish a socket connection and set nodelay settings on it. 21:02:04 21:02:04 :return: New socket connection. 
21:02:04 """ 21:02:04 try: 21:02:04 > sock = connection.create_connection( 21:02:04 (self._dns_host, self.port), 21:02:04 self.timeout, 21:02:04 source_address=self.source_address, 21:02:04 socket_options=self.socket_options, 21:02:04 ) 21:02:04 21:02:04 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:204: 21:02:04 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 21:02:04 ../.tox/tests121/lib/python3.11/site-packages/urllib3/util/connection.py:85: in create_connection 21:02:04 raise err 21:02:04 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 21:02:04 21:02:04 address = ('localhost', 8191), timeout = 30, source_address = None 21:02:04 socket_options = [(6, 1, 1)] 21:02:04 21:02:04 def create_connection( 21:02:04 address: tuple[str, int], 21:02:04 timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 21:02:04 source_address: tuple[str, int] | None = None, 21:02:04 socket_options: _TYPE_SOCKET_OPTIONS | None = None, 21:02:04 ) -> socket.socket: 21:02:04 """Connect to *address* and return the socket object. 21:02:04 21:02:04 Convenience function. Connect to *address* (a 2-tuple ``(host, 21:02:04 port)``) and return the socket object. Passing the optional 21:02:04 *timeout* parameter will set the timeout on the socket instance 21:02:04 before attempting to connect. If no *timeout* is supplied, the 21:02:04 global default timeout setting returned by :func:`socket.getdefaulttimeout` 21:02:04 is used. If *source_address* is set it must be a tuple of (host, port) 21:02:04 for the socket to bind as a source address before making the connection. 21:02:04 An host of '' or port 0 tells the OS to use the default. 
21:02:04 """ 21:02:04 21:02:04 host, port = address 21:02:04 if host.startswith("["): 21:02:04 host = host.strip("[]") 21:02:04 err = None 21:02:04 21:02:04 # Using the value from allowed_gai_family() in the context of getaddrinfo lets 21:02:04 # us select whether to work with IPv4 DNS records, IPv6 records, or both. 21:02:04 # The original create_connection function always returns all records. 21:02:04 family = allowed_gai_family() 21:02:04 21:02:04 try: 21:02:04 host.encode("idna") 21:02:04 except UnicodeError: 21:02:04 raise LocationParseError(f"'{host}', label empty or too long") from None 21:02:04 21:02:04 for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM): 21:02:04 af, socktype, proto, canonname, sa = res 21:02:04 sock = None 21:02:04 try: 21:02:04 sock = socket.socket(af, socktype, proto) 21:02:04 21:02:04 # If provided, set socket level options before connecting. 21:02:04 _set_socket_options(sock, socket_options) 21:02:04 21:02:04 if timeout is not _DEFAULT_TIMEOUT: 21:02:04 sock.settimeout(timeout) 21:02:04 if source_address: 21:02:04 sock.bind(source_address) 21:02:04 > sock.connect(sa) 21:02:04 E ConnectionRefusedError: [Errno 111] Connection refused 21:02:04 21:02:04 ../.tox/tests121/lib/python3.11/site-packages/urllib3/util/connection.py:73: ConnectionRefusedError 21:02:04 21:02:04 The above exception was the direct cause of the following exception: 21:02:04 21:02:04 self = 21:02:04 method = 'GET' 21:02:04 url = '/rests/data/transportpce-portmapping:network/nodes=XPDRA01/mapping=XPDR1-CLIENT4' 21:02:04 body = None 21:02:04 headers = {'User-Agent': 'python-requests/2.32.5', 'Accept-Encoding': 'gzip, deflate', 'Accept': 'application/json', 'Connection': 'keep-alive', 'Content-Type': 'application/json', 'Authorization': 'Basic YWRtaW46YWRtaW4='} 21:02:04 retries = Retry(total=0, connect=None, read=False, redirect=None, status=None) 21:02:04 redirect = False, assert_same_host = False 21:02:04 timeout = Timeout(connect=30, read=30, 
total=None), pool_timeout = None 21:02:04 release_conn = False, chunked = False, body_pos = None, preload_content = False 21:02:04 decode_content = False, response_kw = {} 21:02:04 parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/rests/data/transportpce-portmapping:network/nodes=XPDRA01/mapping=XPDR1-CLIENT4', query=None, fragment=None) 21:02:04 destination_scheme = None, conn = None, release_this_conn = True 21:02:04 http_tunnel_required = False, err = None, clean_exit = False 21:02:04 21:02:04 def urlopen( # type: ignore[override] 21:02:04 self, 21:02:04 method: str, 21:02:04 url: str, 21:02:04 body: _TYPE_BODY | None = None, 21:02:04 headers: typing.Mapping[str, str] | None = None, 21:02:04 retries: Retry | bool | int | None = None, 21:02:04 redirect: bool = True, 21:02:04 assert_same_host: bool = True, 21:02:04 timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 21:02:04 pool_timeout: int | None = None, 21:02:04 release_conn: bool | None = None, 21:02:04 chunked: bool = False, 21:02:04 body_pos: _TYPE_BODY_POSITION | None = None, 21:02:04 preload_content: bool = True, 21:02:04 decode_content: bool = True, 21:02:04 **response_kw: typing.Any, 21:02:04 ) -> BaseHTTPResponse: 21:02:04 """ 21:02:04 Get a connection from the pool and perform an HTTP request. This is the 21:02:04 lowest level call for making a request, so you'll need to specify all 21:02:04 the raw details. 21:02:04 21:02:04 .. note:: 21:02:04 21:02:04 More commonly, it's appropriate to use a convenience method 21:02:04 such as :meth:`request`. 21:02:04 21:02:04 .. note:: 21:02:04 21:02:04 `release_conn` will only behave as expected if 21:02:04 `preload_content=False` because we want to make 21:02:04 `preload_content=False` the default behaviour someday soon without 21:02:04 breaking backwards compatibility. 21:02:04 21:02:04 :param method: 21:02:04 HTTP request method (such as GET, POST, PUT, etc.) 21:02:04 21:02:04 :param url: 21:02:04 The URL to perform the request on. 
21:02:04 21:02:04 :param body: 21:02:04 Data to send in the request body, either :class:`str`, :class:`bytes`, 21:02:04 an iterable of :class:`str`/:class:`bytes`, or a file-like object. 21:02:04 21:02:04 :param headers: 21:02:04 Dictionary of custom headers to send, such as User-Agent, 21:02:04 If-None-Match, etc. If None, pool headers are used. If provided, 21:02:04 these headers completely replace any pool-specific headers. 21:02:04 21:02:04 :param retries: 21:02:04 Configure the number of retries to allow before raising a 21:02:04 :class:`~urllib3.exceptions.MaxRetryError` exception. 21:02:04 21:02:04 If ``None`` (default) will retry 3 times, see ``Retry.DEFAULT``. Pass a 21:02:04 :class:`~urllib3.util.retry.Retry` object for fine-grained control 21:02:04 over different types of retries. 21:02:04 Pass an integer number to retry connection errors that many times, 21:02:04 but no other types of errors. Pass zero to never retry. 21:02:04 21:02:04 If ``False``, then retries are disabled and any exception is raised 21:02:04 immediately. Also, instead of raising a MaxRetryError on redirects, 21:02:04 the redirect response will be returned. 21:02:04 21:02:04 :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. 21:02:04 21:02:04 :param redirect: 21:02:04 If True, automatically handle redirects (status codes 301, 302, 21:02:04 303, 307, 308). Each redirect counts as a retry. Disabling retries 21:02:04 will disable redirect, too. 21:02:04 21:02:04 :param assert_same_host: 21:02:04 If ``True``, will make sure that the host of the pool requests is 21:02:04 consistent else will raise HostChangedError. When ``False``, you can 21:02:04 use the pool on an HTTP proxy and request foreign hosts. 21:02:04 21:02:04 :param timeout: 21:02:04 If specified, overrides the default timeout for this one 21:02:04 request. It may be a float (in seconds) or an instance of 21:02:04 :class:`urllib3.util.Timeout`. 
21:02:04 21:02:04 :param pool_timeout: 21:02:04 If set and the pool is set to block=True, then this method will 21:02:04 block for ``pool_timeout`` seconds and raise EmptyPoolError if no 21:02:04 connection is available within the time period. 21:02:04 21:02:04 :param bool preload_content: 21:02:04 If True, the response's body will be preloaded into memory. 21:02:04 21:02:04 :param bool decode_content: 21:02:04 If True, will attempt to decode the body based on the 21:02:04 'content-encoding' header. 21:02:04 21:02:04 :param release_conn: 21:02:04 If False, then the urlopen call will not release the connection 21:02:04 back into the pool once a response is received (but will release if 21:02:04 you read the entire contents of the response such as when 21:02:04 `preload_content=True`). This is useful if you're not preloading 21:02:04 the response's content immediately. You will need to call 21:02:04 ``r.release_conn()`` on the response ``r`` to return the connection 21:02:04 back into the pool. If None, it takes the value of ``preload_content`` 21:02:04 which defaults to ``True``. 21:02:04 21:02:04 :param bool chunked: 21:02:04 If True, urllib3 will send the body using chunked transfer 21:02:04 encoding. Otherwise, urllib3 will send the body using the standard 21:02:04 content-length form. Defaults to False. 21:02:04 21:02:04 :param int body_pos: 21:02:04 Position to seek to in file-like body in the event of a retry or 21:02:04 redirect. Typically this won't need to be set because urllib3 will 21:02:04 auto-populate the value when needed. 
21:02:04 """ 21:02:04 parsed_url = parse_url(url) 21:02:04 destination_scheme = parsed_url.scheme 21:02:04 21:02:04 if headers is None: 21:02:04 headers = self.headers 21:02:04 21:02:04 if not isinstance(retries, Retry): 21:02:04 retries = Retry.from_int(retries, redirect=redirect, default=self.retries) 21:02:04 21:02:04 if release_conn is None: 21:02:04 release_conn = preload_content 21:02:04 21:02:04 # Check host 21:02:04 if assert_same_host and not self.is_same_host(url): 21:02:04 raise HostChangedError(self, url, retries) 21:02:04 21:02:04 # Ensure that the URL we're connecting to is properly encoded 21:02:04 if url.startswith("/"): 21:02:04 url = to_str(_encode_target(url)) 21:02:04 else: 21:02:04 url = to_str(parsed_url.url) 21:02:04 21:02:04 conn = None 21:02:04 21:02:04 # Track whether `conn` needs to be released before 21:02:04 # returning/raising/recursing. Update this variable if necessary, and 21:02:04 # leave `release_conn` constant throughout the function. That way, if 21:02:04 # the function recurses, the original value of `release_conn` will be 21:02:04 # passed down into the recursive call, and its value will be respected. 21:02:04 # 21:02:04 # See issue #651 [1] for details. 21:02:04 # 21:02:04 # [1] 21:02:04 release_this_conn = release_conn 21:02:04 21:02:04 http_tunnel_required = connection_requires_http_tunnel( 21:02:04 self.proxy, self.proxy_config, destination_scheme 21:02:04 ) 21:02:04 21:02:04 # Merge the proxy headers. Only done when not using HTTP CONNECT. We 21:02:04 # have to copy the headers dict so we can safely change it without those 21:02:04 # changes being reflected in anyone else's copy. 21:02:04 if not http_tunnel_required: 21:02:04 headers = headers.copy() # type: ignore[attr-defined] 21:02:04 headers.update(self.proxy_headers) # type: ignore[union-attr] 21:02:04 21:02:04 # Must keep the exception bound to a separate variable or else Python 3 21:02:04 # complains about UnboundLocalError. 
21:02:04 err = None 21:02:04 21:02:04 # Keep track of whether we cleanly exited the except block. This 21:02:04 # ensures we do proper cleanup in finally. 21:02:04 clean_exit = False 21:02:04 21:02:04 # Rewind body position, if needed. Record current position 21:02:04 # for future rewinds in the event of a redirect/retry. 21:02:04 body_pos = set_file_position(body, body_pos) 21:02:04 21:02:04 try: 21:02:04 # Request a connection from the queue. 21:02:04 timeout_obj = self._get_timeout(timeout) 21:02:04 conn = self._get_conn(timeout=pool_timeout) 21:02:04 21:02:04 conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] 21:02:04 21:02:04 # Is this a closed/new connection that requires CONNECT tunnelling? 21:02:04 if self.proxy is not None and http_tunnel_required and conn.is_closed: 21:02:04 try: 21:02:04 self._prepare_proxy(conn) 21:02:04 except (BaseSSLError, OSError, SocketTimeout) as e: 21:02:04 self._raise_timeout( 21:02:04 err=e, url=self.proxy.url, timeout_value=conn.timeout 21:02:04 ) 21:02:04 raise 21:02:04 21:02:04 # If we're going to release the connection in ``finally:``, then 21:02:04 # the response doesn't need to know about the connection. Otherwise 21:02:04 # it will also try to release it and we'll have a double-release 21:02:04 # mess. 
21:02:04 response_conn = conn if not release_conn else None 21:02:04 21:02:04 # Make the request on the HTTPConnection object 21:02:04 > response = self._make_request( 21:02:04 conn, 21:02:04 method, 21:02:04 url, 21:02:04 timeout=timeout_obj, 21:02:04 body=body, 21:02:04 headers=headers, 21:02:04 chunked=chunked, 21:02:04 retries=retries, 21:02:04 response_conn=response_conn, 21:02:04 preload_content=preload_content, 21:02:04 decode_content=decode_content, 21:02:04 **response_kw, 21:02:04 ) 21:02:04 21:02:04 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connectionpool.py:787: 21:02:04 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 21:02:04 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connectionpool.py:493: in _make_request 21:02:04 conn.request( 21:02:04 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:500: in request 21:02:04 self.endheaders() 21:02:04 /opt/pyenv/versions/3.11.10/lib/python3.11/http/client.py:1298: in endheaders 21:02:04 self._send_output(message_body, encode_chunked=encode_chunked) 21:02:04 /opt/pyenv/versions/3.11.10/lib/python3.11/http/client.py:1058: in _send_output 21:02:04 self.send(msg) 21:02:04 /opt/pyenv/versions/3.11.10/lib/python3.11/http/client.py:996: in send 21:02:04 self.connect() 21:02:04 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:331: in connect 21:02:04 self.sock = self._new_conn() 21:02:04 ^^^^^^^^^^^^^^^^ 21:02:04 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 21:02:04 21:02:04 self = 21:02:04 21:02:04 def _new_conn(self) -> socket.socket: 21:02:04 """Establish a socket connection and set nodelay settings on it. 21:02:04 21:02:04 :return: New socket connection. 
21:02:04 """ 21:02:04 try: 21:02:04 sock = connection.create_connection( 21:02:04 (self._dns_host, self.port), 21:02:04 self.timeout, 21:02:04 source_address=self.source_address, 21:02:04 socket_options=self.socket_options, 21:02:04 ) 21:02:04 except socket.gaierror as e: 21:02:04 raise NameResolutionError(self.host, self, e) from e 21:02:04 except SocketTimeout as e: 21:02:04 raise ConnectTimeoutError( 21:02:04 self, 21:02:04 f"Connection to {self.host} timed out. (connect timeout={self.timeout})", 21:02:04 ) from e 21:02:04 21:02:04 except OSError as e: 21:02:04 > raise NewConnectionError( 21:02:04 self, f"Failed to establish a new connection: {e}" 21:02:04 ) from e 21:02:04 E urllib3.exceptions.NewConnectionError: HTTPConnection(host='localhost', port=8191): Failed to establish a new connection: [Errno 111] Connection refused 21:02:04 21:02:04 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:219: NewConnectionError 21:02:04 21:02:04 The above exception was the direct cause of the following exception: 21:02:04 21:02:04 self = 21:02:04 request = , stream = False 21:02:04 timeout = Timeout(connect=30, read=30, total=None), verify = True, cert = None 21:02:04 proxies = OrderedDict() 21:02:04 21:02:04 def send( 21:02:04 self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 21:02:04 ): 21:02:04 """Sends PreparedRequest object. Returns Response object. 21:02:04 21:02:04 :param request: The :class:`PreparedRequest ` being sent. 21:02:04 :param stream: (optional) Whether to stream the request content. 21:02:04 :param timeout: (optional) How long to wait for the server to send 21:02:04 data before giving up, as a float, or a :ref:`(connect timeout, 21:02:04 read timeout) ` tuple. 
21:02:04 :type timeout: float or tuple or urllib3 Timeout object 21:02:04 :param verify: (optional) Either a boolean, in which case it controls whether 21:02:04 we verify the server's TLS certificate, or a string, in which case it 21:02:04 must be a path to a CA bundle to use 21:02:04 :param cert: (optional) Any user-provided SSL certificate to be trusted. 21:02:04 :param proxies: (optional) The proxies dictionary to apply to the request. 21:02:04 :rtype: requests.Response 21:02:04 """ 21:02:04 21:02:04 try: 21:02:04 conn = self.get_connection_with_tls_context( 21:02:04 request, verify, proxies=proxies, cert=cert 21:02:04 ) 21:02:04 except LocationValueError as e: 21:02:04 raise InvalidURL(e, request=request) 21:02:04 21:02:04 self.cert_verify(conn, request.url, verify, cert) 21:02:04 url = self.request_url(request, proxies) 21:02:04 self.add_headers( 21:02:04 request, 21:02:04 stream=stream, 21:02:04 timeout=timeout, 21:02:04 verify=verify, 21:02:04 cert=cert, 21:02:04 proxies=proxies, 21:02:04 ) 21:02:04 21:02:04 chunked = not (request.body is None or "Content-Length" in request.headers) 21:02:04 21:02:04 if isinstance(timeout, tuple): 21:02:04 try: 21:02:04 connect, read = timeout 21:02:04 timeout = TimeoutSauce(connect=connect, read=read) 21:02:04 except ValueError: 21:02:04 raise ValueError( 21:02:04 f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 21:02:04 f"or a single float to set both timeouts to the same value." 
21:02:04 ) 21:02:04 elif isinstance(timeout, TimeoutSauce): 21:02:04 pass 21:02:04 else: 21:02:04 timeout = TimeoutSauce(connect=timeout, read=timeout) 21:02:04 21:02:04 try: 21:02:04 > resp = conn.urlopen( 21:02:04 method=request.method, 21:02:04 url=url, 21:02:04 body=request.body, 21:02:04 headers=request.headers, 21:02:04 redirect=False, 21:02:04 assert_same_host=False, 21:02:04 preload_content=False, 21:02:04 decode_content=False, 21:02:04 retries=self.max_retries, 21:02:04 timeout=timeout, 21:02:04 chunked=chunked, 21:02:04 ) 21:02:04 21:02:04 ../.tox/tests121/lib/python3.11/site-packages/requests/adapters.py:644: 21:02:04 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 21:02:04 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connectionpool.py:841: in urlopen 21:02:04 retries = retries.increment( 21:02:04 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 21:02:04 21:02:04 self = Retry(total=0, connect=None, read=False, redirect=None, status=None) 21:02:04 method = 'GET' 21:02:04 url = '/rests/data/transportpce-portmapping:network/nodes=XPDRA01/mapping=XPDR1-CLIENT4' 21:02:04 response = None 21:02:04 error = NewConnectionError("HTTPConnection(host='localhost', port=8191): Failed to establish a new connection: [Errno 111] Connection refused") 21:02:04 _pool = 21:02:04 _stacktrace = 21:02:04 21:02:04 def increment( 21:02:04 self, 21:02:04 method: str | None = None, 21:02:04 url: str | None = None, 21:02:04 response: BaseHTTPResponse | None = None, 21:02:04 error: Exception | None = None, 21:02:04 _pool: ConnectionPool | None = None, 21:02:04 _stacktrace: TracebackType | None = None, 21:02:04 ) -> Self: 21:02:04 """Return a new Retry object with incremented retry counters. 21:02:04 21:02:04 :param response: A response object, or None, if the server did not 21:02:04 return a response. 
21:02:04 :type response: :class:`~urllib3.response.BaseHTTPResponse` 21:02:04 :param Exception error: An error encountered during the request, or 21:02:04 None if the response was received successfully. 21:02:04 21:02:04 :return: A new ``Retry`` object. 21:02:04 """ 21:02:04 if self.total is False and error: 21:02:04 # Disabled, indicate to re-raise the error. 21:02:04 raise reraise(type(error), error, _stacktrace) 21:02:04 21:02:04 total = self.total 21:02:04 if total is not None: 21:02:04 total -= 1 21:02:04 21:02:04 connect = self.connect 21:02:04 read = self.read 21:02:04 redirect = self.redirect 21:02:04 status_count = self.status 21:02:04 other = self.other 21:02:04 cause = "unknown" 21:02:04 status = None 21:02:04 redirect_location = None 21:02:04 21:02:04 if error and self._is_connection_error(error): 21:02:04 # Connect retry? 21:02:04 if connect is False: 21:02:04 raise reraise(type(error), error, _stacktrace) 21:02:04 elif connect is not None: 21:02:04 connect -= 1 21:02:04 21:02:04 elif error and self._is_read_error(error): 21:02:04 # Read retry? 21:02:04 if read is False or method is None or not self._is_method_retryable(method): 21:02:04 raise reraise(type(error), error, _stacktrace) 21:02:04 elif read is not None: 21:02:04 read -= 1 21:02:04 21:02:04 elif error: 21:02:04 # Other retry? 21:02:04 if other is not None: 21:02:04 other -= 1 21:02:04 21:02:04 elif response and response.get_redirect_location(): 21:02:04 # Redirect retry? 
21:02:04 if redirect is not None: 21:02:04 redirect -= 1 21:02:04 cause = "too many redirects" 21:02:04 response_redirect_location = response.get_redirect_location() 21:02:04 if response_redirect_location: 21:02:04 redirect_location = response_redirect_location 21:02:04 status = response.status 21:02:04 21:02:04 else: 21:02:04 # Incrementing because of a server error like a 500 in 21:02:04 # status_forcelist and the given method is in the allowed_methods 21:02:04 cause = ResponseError.GENERIC_ERROR 21:02:04 if response and response.status: 21:02:04 if status_count is not None: 21:02:04 status_count -= 1 21:02:04 cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status) 21:02:04 status = response.status 21:02:04 21:02:04 history = self.history + ( 21:02:04 RequestHistory(method, url, error, status, redirect_location), 21:02:04 ) 21:02:04 21:02:04 new_retry = self.new( 21:02:04 total=total, 21:02:04 connect=connect, 21:02:04 read=read, 21:02:04 redirect=redirect, 21:02:04 status=status_count, 21:02:04 other=other, 21:02:04 history=history, 21:02:04 ) 21:02:04 21:02:04 if new_retry.is_exhausted(): 21:02:04 reason = error or ResponseError(cause) 21:02:04 > raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type] 21:02:04 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 21:02:04 E urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=8191): Max retries exceeded with url: /rests/data/transportpce-portmapping:network/nodes=XPDRA01/mapping=XPDR1-CLIENT4 (Caused by NewConnectionError("HTTPConnection(host='localhost', port=8191): Failed to establish a new connection: [Errno 111] Connection refused")) 21:02:04 21:02:04 ../.tox/tests121/lib/python3.11/site-packages/urllib3/util/retry.py:535: MaxRetryError 21:02:04 21:02:04 During handling of the above exception, another exception occurred: 21:02:04 21:02:04 self = 21:02:04 21:02:04 def test_15_xpdr_portmapping_CLIENT4(self): 21:02:04 > response = 
test_utils.get_portmapping_node_attr("XPDRA01", "mapping", "XPDR1-CLIENT4") 21:02:04 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 21:02:04 21:02:04 transportpce_tests/1.2.1/test01_portmapping.py:182: 21:02:04 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 21:02:04 transportpce_tests/common/test_utils.py:519: in get_portmapping_node_attr 21:02:04 response = get_request(target_url) 21:02:04 ^^^^^^^^^^^^^^^^^^^^^^^ 21:02:04 transportpce_tests/common/test_utils.py:117: in get_request 21:02:04 return requests.request( 21:02:04 ../.tox/tests121/lib/python3.11/site-packages/requests/api.py:59: in request 21:02:04 return session.request(method=method, url=url, **kwargs) 21:02:04 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 21:02:04 ../.tox/tests121/lib/python3.11/site-packages/requests/sessions.py:589: in request 21:02:04 resp = self.send(prep, **send_kwargs) 21:02:04 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 21:02:04 ../.tox/tests121/lib/python3.11/site-packages/requests/sessions.py:703: in send 21:02:04 r = adapter.send(request, **kwargs) 21:02:04 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 21:02:04 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 21:02:04 21:02:04 self = 21:02:04 request = , stream = False 21:02:04 timeout = Timeout(connect=30, read=30, total=None), verify = True, cert = None 21:02:04 proxies = OrderedDict() 21:02:04 21:02:04 def send( 21:02:04 self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 21:02:04 ): 21:02:04 """Sends PreparedRequest object. Returns Response object. 21:02:04 21:02:04 :param request: The :class:`PreparedRequest ` being sent. 21:02:04 :param stream: (optional) Whether to stream the request content. 21:02:04 :param timeout: (optional) How long to wait for the server to send 21:02:04 data before giving up, as a float, or a :ref:`(connect timeout, 21:02:04 read timeout) ` tuple. 
21:02:04 :type timeout: float or tuple or urllib3 Timeout object 21:02:04 :param verify: (optional) Either a boolean, in which case it controls whether 21:02:04 we verify the server's TLS certificate, or a string, in which case it 21:02:04 must be a path to a CA bundle to use 21:02:04 :param cert: (optional) Any user-provided SSL certificate to be trusted. 21:02:04 :param proxies: (optional) The proxies dictionary to apply to the request. 21:02:04 :rtype: requests.Response 21:02:04 """ 21:02:04 21:02:04 try: 21:02:04 conn = self.get_connection_with_tls_context( 21:02:04 request, verify, proxies=proxies, cert=cert 21:02:04 ) 21:02:04 except LocationValueError as e: 21:02:04 raise InvalidURL(e, request=request) 21:02:04 21:02:04 self.cert_verify(conn, request.url, verify, cert) 21:02:04 url = self.request_url(request, proxies) 21:02:04 self.add_headers( 21:02:04 request, 21:02:04 stream=stream, 21:02:04 timeout=timeout, 21:02:04 verify=verify, 21:02:04 cert=cert, 21:02:04 proxies=proxies, 21:02:04 ) 21:02:04 21:02:04 chunked = not (request.body is None or "Content-Length" in request.headers) 21:02:04 21:02:04 if isinstance(timeout, tuple): 21:02:04 try: 21:02:04 connect, read = timeout 21:02:04 timeout = TimeoutSauce(connect=connect, read=read) 21:02:04 except ValueError: 21:02:04 raise ValueError( 21:02:04 f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 21:02:04 f"or a single float to set both timeouts to the same value." 
21:02:04 ) 21:02:04 elif isinstance(timeout, TimeoutSauce): 21:02:04 pass 21:02:04 else: 21:02:04 timeout = TimeoutSauce(connect=timeout, read=timeout) 21:02:04 21:02:04 try: 21:02:04 resp = conn.urlopen( 21:02:04 method=request.method, 21:02:04 url=url, 21:02:04 body=request.body, 21:02:04 headers=request.headers, 21:02:04 redirect=False, 21:02:04 assert_same_host=False, 21:02:04 preload_content=False, 21:02:04 decode_content=False, 21:02:04 retries=self.max_retries, 21:02:04 timeout=timeout, 21:02:04 chunked=chunked, 21:02:04 ) 21:02:04 21:02:04 except (ProtocolError, OSError) as err: 21:02:04 raise ConnectionError(err, request=request) 21:02:04 21:02:04 except MaxRetryError as e: 21:02:04 if isinstance(e.reason, ConnectTimeoutError): 21:02:04 # TODO: Remove this in 3.0.0: see #2811 21:02:04 if not isinstance(e.reason, NewConnectionError): 21:02:04 raise ConnectTimeout(e, request=request) 21:02:04 21:02:04 if isinstance(e.reason, ResponseError): 21:02:04 raise RetryError(e, request=request) 21:02:04 21:02:04 if isinstance(e.reason, _ProxyError): 21:02:04 raise ProxyError(e, request=request) 21:02:04 21:02:04 if isinstance(e.reason, _SSLError): 21:02:04 # This branch is for urllib3 v1.22 and later. 
21:02:04 raise SSLError(e, request=request) 21:02:04 21:02:04 > raise ConnectionError(e, request=request) 21:02:04 E requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=8191): Max retries exceeded with url: /rests/data/transportpce-portmapping:network/nodes=XPDRA01/mapping=XPDR1-CLIENT4 (Caused by NewConnectionError("HTTPConnection(host='localhost', port=8191): Failed to establish a new connection: [Errno 111] Connection refused")) 21:02:04 21:02:04 ../.tox/tests121/lib/python3.11/site-packages/requests/adapters.py:677: ConnectionError 21:02:04 ----------------------------- Captured stdout call ----------------------------- 21:02:04 execution of test_15_xpdr_portmapping_CLIENT4 21:02:04 ________ TestTransportPCEPortmapping.test_16_xpdr_device_disconnection _________ 21:02:04 21:02:04 self = 21:02:04 21:02:04 def _new_conn(self) -> socket.socket: 21:02:04 """Establish a socket connection and set nodelay settings on it. 21:02:04 21:02:04 :return: New socket connection. 
21:02:04         """
21:02:04         try:
21:02:04 >           sock = connection.create_connection(
21:02:04                 (self._dns_host, self.port),
21:02:04                 self.timeout,
21:02:04                 source_address=self.source_address,
21:02:04                 socket_options=self.socket_options,
21:02:04             )
21:02:04 
21:02:04 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:204: 
21:02:04 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
21:02:04 ../.tox/tests121/lib/python3.11/site-packages/urllib3/util/connection.py:85: in create_connection
21:02:04     raise err
21:02:04 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
21:02:04 
21:02:04 address = ('localhost', 8191), timeout = 30, source_address = None
21:02:04 socket_options = [(6, 1, 1)]
21:02:04 
21:02:04     def create_connection(
21:02:04         address: tuple[str, int],
21:02:04         timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT,
21:02:04         source_address: tuple[str, int] | None = None,
21:02:04         socket_options: _TYPE_SOCKET_OPTIONS | None = None,
21:02:04     ) -> socket.socket:
21:02:04         """Connect to *address* and return the socket object.
21:02:04 
21:02:04         Convenience function. Connect to *address* (a 2-tuple ``(host,
21:02:04         port)``) and return the socket object. Passing the optional
21:02:04         *timeout* parameter will set the timeout on the socket instance
21:02:04         before attempting to connect. If no *timeout* is supplied, the
21:02:04         global default timeout setting returned by :func:`socket.getdefaulttimeout`
21:02:04         is used. If *source_address* is set it must be a tuple of (host, port)
21:02:04         for the socket to bind as a source address before making the connection.
21:02:04         An host of '' or port 0 tells the OS to use the default.
21:02:04         """
21:02:04 
21:02:04         host, port = address
21:02:04         if host.startswith("["):
21:02:04             host = host.strip("[]")
21:02:04         err = None
21:02:04 
21:02:04         # Using the value from allowed_gai_family() in the context of getaddrinfo lets
21:02:04         # us select whether to work with IPv4 DNS records, IPv6 records, or both.
21:02:04         # The original create_connection function always returns all records.
21:02:04         family = allowed_gai_family()
21:02:04 
21:02:04         try:
21:02:04             host.encode("idna")
21:02:04         except UnicodeError:
21:02:04             raise LocationParseError(f"'{host}', label empty or too long") from None
21:02:04 
21:02:04         for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM):
21:02:04             af, socktype, proto, canonname, sa = res
21:02:04             sock = None
21:02:04             try:
21:02:04                 sock = socket.socket(af, socktype, proto)
21:02:04 
21:02:04                 # If provided, set socket level options before connecting.
21:02:04                 _set_socket_options(sock, socket_options)
21:02:04 
21:02:04                 if timeout is not _DEFAULT_TIMEOUT:
21:02:04                     sock.settimeout(timeout)
21:02:04                 if source_address:
21:02:04                     sock.bind(source_address)
21:02:04 >               sock.connect(sa)
21:02:04 E               ConnectionRefusedError: [Errno 111] Connection refused
21:02:04 
21:02:04 ../.tox/tests121/lib/python3.11/site-packages/urllib3/util/connection.py:73: ConnectionRefusedError
21:02:04 
21:02:04 The above exception was the direct cause of the following exception:
21:02:04 
21:02:04 self = 
21:02:04 method = 'DELETE'
21:02:04 url = '/rests/data/network-topology:network-topology/topology=topology-netconf/node=XPDRA01'
21:02:04 body = None
21:02:04 headers = {'User-Agent': 'python-requests/2.32.5', 'Accept-Encoding': 'gzip, deflate', 'Accept': 'application/json', 'Connection': 'keep-alive', 'Content-Type': 'application/json', 'Content-Length': '0', 'Authorization': 'Basic YWRtaW46YWRtaW4='}
21:02:04 retries = Retry(total=0, connect=None, read=False, redirect=None, status=None)
21:02:04 redirect = False, assert_same_host = False
21:02:04 timeout = Timeout(connect=30, read=30, total=None), pool_timeout = None
21:02:04 release_conn = False, chunked = False, body_pos = None, preload_content = False
21:02:04 decode_content = False, response_kw = {}
21:02:04 parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/rests/data/network-topology:network-topology/topology=topology-netconf/node=XPDRA01', query=None, fragment=None)
21:02:04 destination_scheme = None, conn = None, release_this_conn = True
21:02:04 http_tunnel_required = False, err = None, clean_exit = False
21:02:04 
21:02:04     def urlopen(  # type: ignore[override]
21:02:04         self,
21:02:04         method: str,
21:02:04         url: str,
21:02:04         body: _TYPE_BODY | None = None,
21:02:04         headers: typing.Mapping[str, str] | None = None,
21:02:04         retries: Retry | bool | int | None = None,
21:02:04         redirect: bool = True,
21:02:04         assert_same_host: bool = True,
21:02:04         timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT,
21:02:04         pool_timeout: int | None = None,
21:02:04         release_conn: bool | None = None,
21:02:04         chunked: bool = False,
21:02:04         body_pos: _TYPE_BODY_POSITION | None = None,
21:02:04         preload_content: bool = True,
21:02:04         decode_content: bool = True,
21:02:04         **response_kw: typing.Any,
21:02:04     ) -> BaseHTTPResponse:
21:02:04         """
21:02:04         Get a connection from the pool and perform an HTTP request. This is the
21:02:04         lowest level call for making a request, so you'll need to specify all
21:02:04         the raw details.
21:02:04 
21:02:04         .. note::
21:02:04 
21:02:04            More commonly, it's appropriate to use a convenience method
21:02:04            such as :meth:`request`.
21:02:04 
21:02:04         .. note::
21:02:04 
21:02:04            `release_conn` will only behave as expected if
21:02:04            `preload_content=False` because we want to make
21:02:04            `preload_content=False` the default behaviour someday soon without
21:02:04            breaking backwards compatibility.
21:02:04 
21:02:04         :param method:
21:02:04             HTTP request method (such as GET, POST, PUT, etc.)
21:02:04 
21:02:04         :param url:
21:02:04             The URL to perform the request on.
21:02:04 
21:02:04         :param body:
21:02:04             Data to send in the request body, either :class:`str`, :class:`bytes`,
21:02:04             an iterable of :class:`str`/:class:`bytes`, or a file-like object.
21:02:04 
21:02:04         :param headers:
21:02:04             Dictionary of custom headers to send, such as User-Agent,
21:02:04             If-None-Match, etc. If None, pool headers are used. If provided,
21:02:04             these headers completely replace any pool-specific headers.
21:02:04 
21:02:04         :param retries:
21:02:04             Configure the number of retries to allow before raising a
21:02:04             :class:`~urllib3.exceptions.MaxRetryError` exception.
21:02:04 
21:02:04             If ``None`` (default) will retry 3 times, see ``Retry.DEFAULT``. Pass a
21:02:04             :class:`~urllib3.util.retry.Retry` object for fine-grained control
21:02:04             over different types of retries.
21:02:04             Pass an integer number to retry connection errors that many times,
21:02:04             but no other types of errors. Pass zero to never retry.
21:02:04 
21:02:04             If ``False``, then retries are disabled and any exception is raised
21:02:04             immediately. Also, instead of raising a MaxRetryError on redirects,
21:02:04             the redirect response will be returned.
21:02:04 
21:02:04         :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int.
21:02:04 
21:02:04         :param redirect:
21:02:04             If True, automatically handle redirects (status codes 301, 302,
21:02:04             303, 307, 308). Each redirect counts as a retry. Disabling retries
21:02:04             will disable redirect, too.
21:02:04 
21:02:04         :param assert_same_host:
21:02:04             If ``True``, will make sure that the host of the pool requests is
21:02:04             consistent else will raise HostChangedError. When ``False``, you can
21:02:04             use the pool on an HTTP proxy and request foreign hosts.
21:02:04 
21:02:04         :param timeout:
21:02:04             If specified, overrides the default timeout for this one
21:02:04             request. It may be a float (in seconds) or an instance of
21:02:04             :class:`urllib3.util.Timeout`.
21:02:04 
21:02:04         :param pool_timeout:
21:02:04             If set and the pool is set to block=True, then this method will
21:02:04             block for ``pool_timeout`` seconds and raise EmptyPoolError if no
21:02:04             connection is available within the time period.
21:02:04 
21:02:04         :param bool preload_content:
21:02:04             If True, the response's body will be preloaded into memory.
21:02:04 
21:02:04         :param bool decode_content:
21:02:04             If True, will attempt to decode the body based on the
21:02:04             'content-encoding' header.
21:02:04 
21:02:04         :param release_conn:
21:02:04             If False, then the urlopen call will not release the connection
21:02:04             back into the pool once a response is received (but will release if
21:02:04             you read the entire contents of the response such as when
21:02:04             `preload_content=True`). This is useful if you're not preloading
21:02:04             the response's content immediately. You will need to call
21:02:04             ``r.release_conn()`` on the response ``r`` to return the connection
21:02:04             back into the pool. If None, it takes the value of ``preload_content``
21:02:04             which defaults to ``True``.
21:02:04 
21:02:04         :param bool chunked:
21:02:04             If True, urllib3 will send the body using chunked transfer
21:02:04             encoding. Otherwise, urllib3 will send the body using the standard
21:02:04             content-length form. Defaults to False.
21:02:04 
21:02:04         :param int body_pos:
21:02:04             Position to seek to in file-like body in the event of a retry or
21:02:04             redirect. Typically this won't need to be set because urllib3 will
21:02:04             auto-populate the value when needed.
21:02:04         """
21:02:04         parsed_url = parse_url(url)
21:02:04         destination_scheme = parsed_url.scheme
21:02:04 
21:02:04         if headers is None:
21:02:04             headers = self.headers
21:02:04 
21:02:04         if not isinstance(retries, Retry):
21:02:04             retries = Retry.from_int(retries, redirect=redirect, default=self.retries)
21:02:04 
21:02:04         if release_conn is None:
21:02:04             release_conn = preload_content
21:02:04 
21:02:04         # Check host
21:02:04         if assert_same_host and not self.is_same_host(url):
21:02:04             raise HostChangedError(self, url, retries)
21:02:04 
21:02:04         # Ensure that the URL we're connecting to is properly encoded
21:02:04         if url.startswith("/"):
21:02:04             url = to_str(_encode_target(url))
21:02:04         else:
21:02:04             url = to_str(parsed_url.url)
21:02:04 
21:02:04         conn = None
21:02:04 
21:02:04         # Track whether `conn` needs to be released before
21:02:04         # returning/raising/recursing. Update this variable if necessary, and
21:02:04         # leave `release_conn` constant throughout the function. That way, if
21:02:04         # the function recurses, the original value of `release_conn` will be
21:02:04         # passed down into the recursive call, and its value will be respected.
21:02:04         #
21:02:04         # See issue #651 [1] for details.
21:02:04         #
21:02:04         # [1] 
21:02:04         release_this_conn = release_conn
21:02:04 
21:02:04         http_tunnel_required = connection_requires_http_tunnel(
21:02:04             self.proxy, self.proxy_config, destination_scheme
21:02:04         )
21:02:04 
21:02:04         # Merge the proxy headers. Only done when not using HTTP CONNECT. We
21:02:04         # have to copy the headers dict so we can safely change it without those
21:02:04         # changes being reflected in anyone else's copy.
21:02:04         if not http_tunnel_required:
21:02:04             headers = headers.copy()  # type: ignore[attr-defined]
21:02:04             headers.update(self.proxy_headers)  # type: ignore[union-attr]
21:02:04 
21:02:04         # Must keep the exception bound to a separate variable or else Python 3
21:02:04         # complains about UnboundLocalError.
21:02:04         err = None
21:02:04 
21:02:04         # Keep track of whether we cleanly exited the except block. This
21:02:04         # ensures we do proper cleanup in finally.
21:02:04         clean_exit = False
21:02:04 
21:02:04         # Rewind body position, if needed. Record current position
21:02:04         # for future rewinds in the event of a redirect/retry.
21:02:04         body_pos = set_file_position(body, body_pos)
21:02:04 
21:02:04         try:
21:02:04             # Request a connection from the queue.
21:02:04             timeout_obj = self._get_timeout(timeout)
21:02:04             conn = self._get_conn(timeout=pool_timeout)
21:02:04 
21:02:04             conn.timeout = timeout_obj.connect_timeout  # type: ignore[assignment]
21:02:04 
21:02:04             # Is this a closed/new connection that requires CONNECT tunnelling?
21:02:04             if self.proxy is not None and http_tunnel_required and conn.is_closed:
21:02:04                 try:
21:02:04                     self._prepare_proxy(conn)
21:02:04                 except (BaseSSLError, OSError, SocketTimeout) as e:
21:02:04                     self._raise_timeout(
21:02:04                         err=e, url=self.proxy.url, timeout_value=conn.timeout
21:02:04                     )
21:02:04                     raise
21:02:04 
21:02:04             # If we're going to release the connection in ``finally:``, then
21:02:04             # the response doesn't need to know about the connection. Otherwise
21:02:04             # it will also try to release it and we'll have a double-release
21:02:04             # mess.
21:02:04             response_conn = conn if not release_conn else None
21:02:04 
21:02:04             # Make the request on the HTTPConnection object
21:02:04 >           response = self._make_request(
21:02:04                 conn,
21:02:04                 method,
21:02:04                 url,
21:02:04                 timeout=timeout_obj,
21:02:04                 body=body,
21:02:04                 headers=headers,
21:02:04                 chunked=chunked,
21:02:04                 retries=retries,
21:02:04                 response_conn=response_conn,
21:02:04                 preload_content=preload_content,
21:02:04                 decode_content=decode_content,
21:02:04                 **response_kw,
21:02:04             )
21:02:04 
21:02:04 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connectionpool.py:787: 
21:02:04 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
21:02:04 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connectionpool.py:493: in _make_request
21:02:04     conn.request(
21:02:04 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:500: in request
21:02:04     self.endheaders()
21:02:04 /opt/pyenv/versions/3.11.10/lib/python3.11/http/client.py:1298: in endheaders
21:02:04     self._send_output(message_body, encode_chunked=encode_chunked)
21:02:04 /opt/pyenv/versions/3.11.10/lib/python3.11/http/client.py:1058: in _send_output
21:02:04     self.send(msg)
21:02:04 /opt/pyenv/versions/3.11.10/lib/python3.11/http/client.py:996: in send
21:02:04     self.connect()
21:02:04 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:331: in connect
21:02:04     self.sock = self._new_conn()
21:02:04                 ^^^^^^^^^^^^^^^^
21:02:04 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
21:02:04 
21:02:04 self = 
21:02:04 
21:02:04     def _new_conn(self) -> socket.socket:
21:02:04         """Establish a socket connection and set nodelay settings on it.
21:02:04 
21:02:04         :return: New socket connection.
21:02:04         """
21:02:04         try:
21:02:04             sock = connection.create_connection(
21:02:04                 (self._dns_host, self.port),
21:02:04                 self.timeout,
21:02:04                 source_address=self.source_address,
21:02:04                 socket_options=self.socket_options,
21:02:04             )
21:02:04         except socket.gaierror as e:
21:02:04             raise NameResolutionError(self.host, self, e) from e
21:02:04         except SocketTimeout as e:
21:02:04             raise ConnectTimeoutError(
21:02:04                 self,
21:02:04                 f"Connection to {self.host} timed out. (connect timeout={self.timeout})",
21:02:04             ) from e
21:02:04 
21:02:04         except OSError as e:
21:02:04 >           raise NewConnectionError(
21:02:04                 self, f"Failed to establish a new connection: {e}"
21:02:04             ) from e
21:02:04 E           urllib3.exceptions.NewConnectionError: HTTPConnection(host='localhost', port=8191): Failed to establish a new connection: [Errno 111] Connection refused
21:02:04 
21:02:04 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:219: NewConnectionError
21:02:04 
21:02:04 The above exception was the direct cause of the following exception:
21:02:04 
21:02:04 self = 
21:02:04 request = , stream = False
21:02:04 timeout = Timeout(connect=30, read=30, total=None), verify = True, cert = None
21:02:04 proxies = OrderedDict()
21:02:04 
21:02:04     def send(
21:02:04         self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None
21:02:04     ):
21:02:04         """Sends PreparedRequest object. Returns Response object.
21:02:04 
21:02:04         :param request: The :class:`PreparedRequest ` being sent.
21:02:04         :param stream: (optional) Whether to stream the request content.
21:02:04         :param timeout: (optional) How long to wait for the server to send
21:02:04             data before giving up, as a float, or a :ref:`(connect timeout,
21:02:04             read timeout) ` tuple.
21:02:04         :type timeout: float or tuple or urllib3 Timeout object
21:02:04         :param verify: (optional) Either a boolean, in which case it controls whether
21:02:04             we verify the server's TLS certificate, or a string, in which case it
21:02:04             must be a path to a CA bundle to use
21:02:04         :param cert: (optional) Any user-provided SSL certificate to be trusted.
21:02:04         :param proxies: (optional) The proxies dictionary to apply to the request.
21:02:04         :rtype: requests.Response
21:02:04         """
21:02:04 
21:02:04         try:
21:02:04             conn = self.get_connection_with_tls_context(
21:02:04                 request, verify, proxies=proxies, cert=cert
21:02:04             )
21:02:04         except LocationValueError as e:
21:02:04             raise InvalidURL(e, request=request)
21:02:04 
21:02:04         self.cert_verify(conn, request.url, verify, cert)
21:02:04         url = self.request_url(request, proxies)
21:02:04         self.add_headers(
21:02:04             request,
21:02:04             stream=stream,
21:02:04             timeout=timeout,
21:02:04             verify=verify,
21:02:04             cert=cert,
21:02:04             proxies=proxies,
21:02:04         )
21:02:04 
21:02:04         chunked = not (request.body is None or "Content-Length" in request.headers)
21:02:04 
21:02:04         if isinstance(timeout, tuple):
21:02:04             try:
21:02:04                 connect, read = timeout
21:02:04                 timeout = TimeoutSauce(connect=connect, read=read)
21:02:04             except ValueError:
21:02:04                 raise ValueError(
21:02:04                     f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, "
21:02:04                     f"or a single float to set both timeouts to the same value."
21:02:04                 )
21:02:04         elif isinstance(timeout, TimeoutSauce):
21:02:04             pass
21:02:04         else:
21:02:04             timeout = TimeoutSauce(connect=timeout, read=timeout)
21:02:04 
21:02:04         try:
21:02:04 >           resp = conn.urlopen(
21:02:04                 method=request.method,
21:02:04                 url=url,
21:02:04                 body=request.body,
21:02:04                 headers=request.headers,
21:02:04                 redirect=False,
21:02:04                 assert_same_host=False,
21:02:04                 preload_content=False,
21:02:04                 decode_content=False,
21:02:04                 retries=self.max_retries,
21:02:04                 timeout=timeout,
21:02:04                 chunked=chunked,
21:02:04             )
21:02:04 
21:02:04 ../.tox/tests121/lib/python3.11/site-packages/requests/adapters.py:644: 
21:02:04 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
21:02:04 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connectionpool.py:841: in urlopen
21:02:04     retries = retries.increment(
21:02:04 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
21:02:04 
21:02:04 self = Retry(total=0, connect=None, read=False, redirect=None, status=None)
21:02:04 method = 'DELETE'
21:02:04 url = '/rests/data/network-topology:network-topology/topology=topology-netconf/node=XPDRA01'
21:02:04 response = None
21:02:04 error = NewConnectionError("HTTPConnection(host='localhost', port=8191): Failed to establish a new connection: [Errno 111] Connection refused")
21:02:04 _pool = 
21:02:04 _stacktrace = 
21:02:04 
21:02:04     def increment(
21:02:04         self,
21:02:04         method: str | None = None,
21:02:04         url: str | None = None,
21:02:04         response: BaseHTTPResponse | None = None,
21:02:04         error: Exception | None = None,
21:02:04         _pool: ConnectionPool | None = None,
21:02:04         _stacktrace: TracebackType | None = None,
21:02:04     ) -> Self:
21:02:04         """Return a new Retry object with incremented retry counters.
21:02:04 
21:02:04         :param response: A response object, or None, if the server did not
21:02:04             return a response.
21:02:04         :type response: :class:`~urllib3.response.BaseHTTPResponse`
21:02:04         :param Exception error: An error encountered during the request, or
21:02:04             None if the response was received successfully.
21:02:04 
21:02:04         :return: A new ``Retry`` object.
21:02:04         """
21:02:04         if self.total is False and error:
21:02:04             # Disabled, indicate to re-raise the error.
21:02:04             raise reraise(type(error), error, _stacktrace)
21:02:04 
21:02:04         total = self.total
21:02:04         if total is not None:
21:02:04             total -= 1
21:02:04 
21:02:04         connect = self.connect
21:02:04         read = self.read
21:02:04         redirect = self.redirect
21:02:04         status_count = self.status
21:02:04         other = self.other
21:02:04         cause = "unknown"
21:02:04         status = None
21:02:04         redirect_location = None
21:02:04 
21:02:04         if error and self._is_connection_error(error):
21:02:04             # Connect retry?
21:02:04             if connect is False:
21:02:04                 raise reraise(type(error), error, _stacktrace)
21:02:04             elif connect is not None:
21:02:04                 connect -= 1
21:02:04 
21:02:04         elif error and self._is_read_error(error):
21:02:04             # Read retry?
21:02:04             if read is False or method is None or not self._is_method_retryable(method):
21:02:04                 raise reraise(type(error), error, _stacktrace)
21:02:04             elif read is not None:
21:02:04                 read -= 1
21:02:04 
21:02:04         elif error:
21:02:04             # Other retry?
21:02:04             if other is not None:
21:02:04                 other -= 1
21:02:04 
21:02:04         elif response and response.get_redirect_location():
21:02:04             # Redirect retry?
21:02:04             if redirect is not None:
21:02:04                 redirect -= 1
21:02:04             cause = "too many redirects"
21:02:04             response_redirect_location = response.get_redirect_location()
21:02:04             if response_redirect_location:
21:02:04                 redirect_location = response_redirect_location
21:02:04             status = response.status
21:02:04 
21:02:04         else:
21:02:04             # Incrementing because of a server error like a 500 in
21:02:04             # status_forcelist and the given method is in the allowed_methods
21:02:04             cause = ResponseError.GENERIC_ERROR
21:02:04             if response and response.status:
21:02:04                 if status_count is not None:
21:02:04                     status_count -= 1
21:02:04                 cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status)
21:02:04                 status = response.status
21:02:04 
21:02:04         history = self.history + (
21:02:04             RequestHistory(method, url, error, status, redirect_location),
21:02:04         )
21:02:04 
21:02:04         new_retry = self.new(
21:02:04             total=total,
21:02:04             connect=connect,
21:02:04             read=read,
21:02:04             redirect=redirect,
21:02:04             status=status_count,
21:02:04             other=other,
21:02:04             history=history,
21:02:04         )
21:02:04 
21:02:04         if new_retry.is_exhausted():
21:02:04             reason = error or ResponseError(cause)
21:02:04 >           raise MaxRetryError(_pool, url, reason) from reason  # type: ignore[arg-type]
21:02:04             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
21:02:04 E           urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=8191): Max retries exceeded with url: /rests/data/network-topology:network-topology/topology=topology-netconf/node=XPDRA01 (Caused by NewConnectionError("HTTPConnection(host='localhost', port=8191): Failed to establish a new connection: [Errno 111] Connection refused"))
21:02:04 
21:02:04 ../.tox/tests121/lib/python3.11/site-packages/urllib3/util/retry.py:535: MaxRetryError
21:02:04 
21:02:04 During handling of the above exception, another exception occurred:
21:02:04 
21:02:04 self = 
21:02:04 
21:02:04     def test_16_xpdr_device_disconnection(self):
21:02:04 >       response = test_utils.unmount_device("XPDRA01")
21:02:04                    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
21:02:04 
21:02:04 transportpce_tests/1.2.1/test01_portmapping.py:193: 
21:02:04 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
21:02:04 transportpce_tests/common/test_utils.py:398: in unmount_device
21:02:04     response = delete_request(url[RESTCONF_VERSION].format('{}', node))
21:02:04                ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
21:02:04 transportpce_tests/common/test_utils.py:134: in delete_request
21:02:04     return requests.request(
21:02:04 ../.tox/tests121/lib/python3.11/site-packages/requests/api.py:59: in request
21:02:04     return session.request(method=method, url=url, **kwargs)
21:02:04            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
21:02:04 ../.tox/tests121/lib/python3.11/site-packages/requests/sessions.py:589: in request
21:02:04     resp = self.send(prep, **send_kwargs)
21:02:04            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
21:02:04 ../.tox/tests121/lib/python3.11/site-packages/requests/sessions.py:703: in send
21:02:04     r = adapter.send(request, **kwargs)
21:02:04         ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
21:02:04 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
21:02:04 
21:02:04 self = 
21:02:04 request = , stream = False
21:02:04 timeout = Timeout(connect=30, read=30, total=None), verify = True, cert = None
21:02:04 proxies = OrderedDict()
21:02:04 
21:02:04     def send(
21:02:04         self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None
21:02:04     ):
21:02:04         """Sends PreparedRequest object. Returns Response object.
21:02:04 
21:02:04         :param request: The :class:`PreparedRequest ` being sent.
21:02:04         :param stream: (optional) Whether to stream the request content.
21:02:04         :param timeout: (optional) How long to wait for the server to send
21:02:04             data before giving up, as a float, or a :ref:`(connect timeout,
21:02:04             read timeout) ` tuple.
21:02:04         :type timeout: float or tuple or urllib3 Timeout object
21:02:04         :param verify: (optional) Either a boolean, in which case it controls whether
21:02:04             we verify the server's TLS certificate, or a string, in which case it
21:02:04             must be a path to a CA bundle to use
21:02:04         :param cert: (optional) Any user-provided SSL certificate to be trusted.
21:02:04         :param proxies: (optional) The proxies dictionary to apply to the request.
21:02:04         :rtype: requests.Response
21:02:04         """
21:02:04 
21:02:04         try:
21:02:04             conn = self.get_connection_with_tls_context(
21:02:04                 request, verify, proxies=proxies, cert=cert
21:02:04             )
21:02:04         except LocationValueError as e:
21:02:04             raise InvalidURL(e, request=request)
21:02:04 
21:02:04         self.cert_verify(conn, request.url, verify, cert)
21:02:04         url = self.request_url(request, proxies)
21:02:04         self.add_headers(
21:02:04             request,
21:02:04             stream=stream,
21:02:04             timeout=timeout,
21:02:04             verify=verify,
21:02:04             cert=cert,
21:02:04             proxies=proxies,
21:02:04         )
21:02:04 
21:02:04         chunked = not (request.body is None or "Content-Length" in request.headers)
21:02:04 
21:02:04         if isinstance(timeout, tuple):
21:02:04             try:
21:02:04                 connect, read = timeout
21:02:04                 timeout = TimeoutSauce(connect=connect, read=read)
21:02:04             except ValueError:
21:02:04                 raise ValueError(
21:02:04                     f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, "
21:02:04                     f"or a single float to set both timeouts to the same value."
21:02:04                 )
21:02:04         elif isinstance(timeout, TimeoutSauce):
21:02:04             pass
21:02:04         else:
21:02:04             timeout = TimeoutSauce(connect=timeout, read=timeout)
21:02:04 
21:02:04         try:
21:02:04             resp = conn.urlopen(
21:02:04                 method=request.method,
21:02:04                 url=url,
21:02:04                 body=request.body,
21:02:04                 headers=request.headers,
21:02:04                 redirect=False,
21:02:04                 assert_same_host=False,
21:02:04                 preload_content=False,
21:02:04                 decode_content=False,
21:02:04                 retries=self.max_retries,
21:02:04                 timeout=timeout,
21:02:04                 chunked=chunked,
21:02:04             )
21:02:04 
21:02:04         except (ProtocolError, OSError) as err:
21:02:04             raise ConnectionError(err, request=request)
21:02:04 
21:02:04         except MaxRetryError as e:
21:02:04             if isinstance(e.reason, ConnectTimeoutError):
21:02:04                 # TODO: Remove this in 3.0.0: see #2811
21:02:04                 if not isinstance(e.reason, NewConnectionError):
21:02:04                     raise ConnectTimeout(e, request=request)
21:02:04 
21:02:04             if isinstance(e.reason, ResponseError):
21:02:04                 raise RetryError(e, request=request)
21:02:04 
21:02:04             if isinstance(e.reason, _ProxyError):
21:02:04                 raise ProxyError(e, request=request)
21:02:04 
21:02:04             if isinstance(e.reason, _SSLError):
21:02:04                 # This branch is for urllib3 v1.22 and later.
21:02:04                 raise SSLError(e, request=request)
21:02:04 
21:02:04 >           raise ConnectionError(e, request=request)
21:02:04 E           requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=8191): Max retries exceeded with url: /rests/data/network-topology:network-topology/topology=topology-netconf/node=XPDRA01 (Caused by NewConnectionError("HTTPConnection(host='localhost', port=8191): Failed to establish a new connection: [Errno 111] Connection refused"))
21:02:04 
21:02:04 ../.tox/tests121/lib/python3.11/site-packages/requests/adapters.py:677: ConnectionError
21:02:04 ----------------------------- Captured stdout call -----------------------------
21:02:04 execution of test_16_xpdr_device_disconnection
21:02:04 _________ TestTransportPCEPortmapping.test_17_xpdr_device_disconnected _________
21:02:04 
21:02:04 self = 
21:02:04 
21:02:04     def _new_conn(self) -> socket.socket:
21:02:04         """Establish a socket connection and set nodelay settings on it.
21:02:04 
21:02:04         :return: New socket connection.
21:02:04         """
21:02:04         try:
21:02:04 >           sock = connection.create_connection(
21:02:04                 (self._dns_host, self.port),
21:02:04                 self.timeout,
21:02:04                 source_address=self.source_address,
21:02:04                 socket_options=self.socket_options,
21:02:04             )
21:02:04 
21:02:04 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:204: 
21:02:04 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
21:02:04 ../.tox/tests121/lib/python3.11/site-packages/urllib3/util/connection.py:85: in create_connection
21:02:04     raise err
21:02:04 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
21:02:04 
21:02:04 address = ('localhost', 8191), timeout = 30, source_address = None
21:02:04 socket_options = [(6, 1, 1)]
21:02:04 
21:02:04     def create_connection(
21:02:04         address: tuple[str, int],
21:02:04         timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT,
21:02:04         source_address: tuple[str, int] | None = None,
21:02:04         socket_options: _TYPE_SOCKET_OPTIONS | None = None,
21:02:04     ) -> socket.socket:
21:02:04         """Connect to *address* and return the socket object.
21:02:04 
21:02:04         Convenience function. Connect to *address* (a 2-tuple ``(host,
21:02:04         port)``) and return the socket object. Passing the optional
21:02:04         *timeout* parameter will set the timeout on the socket instance
21:02:04         before attempting to connect. If no *timeout* is supplied, the
21:02:04         global default timeout setting returned by :func:`socket.getdefaulttimeout`
21:02:04         is used. If *source_address* is set it must be a tuple of (host, port)
21:02:04         for the socket to bind as a source address before making the connection.
21:02:04         An host of '' or port 0 tells the OS to use the default.
21:02:04         """
21:02:04 
21:02:04         host, port = address
21:02:04         if host.startswith("["):
21:02:04             host = host.strip("[]")
21:02:04         err = None
21:02:04 
21:02:04         # Using the value from allowed_gai_family() in the context of getaddrinfo lets
21:02:04         # us select whether to work with IPv4 DNS records, IPv6 records, or both.
21:02:04         # The original create_connection function always returns all records.
21:02:04         family = allowed_gai_family()
21:02:04 
21:02:04         try:
21:02:04             host.encode("idna")
21:02:04         except UnicodeError:
21:02:04             raise LocationParseError(f"'{host}', label empty or too long") from None
21:02:04 
21:02:04         for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM):
21:02:04             af, socktype, proto, canonname, sa = res
21:02:04             sock = None
21:02:04             try:
21:02:04                 sock = socket.socket(af, socktype, proto)
21:02:04 
21:02:04                 # If provided, set socket level options before connecting.
21:02:04                 _set_socket_options(sock, socket_options)
21:02:04 
21:02:04                 if timeout is not _DEFAULT_TIMEOUT:
21:02:04                     sock.settimeout(timeout)
21:02:04                 if source_address:
21:02:04                     sock.bind(source_address)
21:02:04 >               sock.connect(sa)
21:02:04 E               ConnectionRefusedError: [Errno 111] Connection refused
21:02:04 
21:02:04 ../.tox/tests121/lib/python3.11/site-packages/urllib3/util/connection.py:73: ConnectionRefusedError
21:02:04 
21:02:04 The above exception was the direct cause of the following exception:
21:02:04 
21:02:04 self = 
21:02:04 method = 'GET'
21:02:04 url = '/rests/data/network-topology:network-topology/topology=topology-netconf/node=XPDRA01?content=nonconfig'
21:02:04 body = None
21:02:04 headers = {'User-Agent': 'python-requests/2.32.5', 'Accept-Encoding': 'gzip, deflate', 'Accept': 'application/json', 'Connection': 'keep-alive', 'Content-Type': 'application/json', 'Authorization': 'Basic YWRtaW46YWRtaW4='}
21:02:04 retries = Retry(total=0, connect=None, read=False, redirect=None, status=None)
21:02:04 redirect = False, assert_same_host = False
21:02:04 timeout = Timeout(connect=30, read=30, total=None), pool_timeout = None
21:02:04 release_conn = False, chunked = False, body_pos = None, preload_content = False
21:02:04 decode_content = False, response_kw = {}
21:02:04 parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/rests/data/network-topology:network-topology/topology=topology-netconf/node=XPDRA01', query='content=nonconfig', fragment=None)
21:02:04 destination_scheme = None, conn = None, release_this_conn = True
21:02:04 http_tunnel_required = False, err = None, clean_exit = False
21:02:04 
21:02:04     def urlopen(  # type: ignore[override]
21:02:04         self,
21:02:04         method: str,
21:02:04         url: str,
21:02:04         body: _TYPE_BODY | None = None,
21:02:04         headers: typing.Mapping[str, str] | None = None,
21:02:04         retries: Retry | bool | int | None = None,
21:02:04         redirect: bool = True,
21:02:04         assert_same_host: bool = True,
21:02:04         timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT,
21:02:04         pool_timeout: int | None = None,
21:02:04         release_conn: bool | None = None,
21:02:04         chunked: bool = False,
21:02:04         body_pos: _TYPE_BODY_POSITION | None = None,
21:02:04         preload_content: bool = True,
21:02:04         decode_content: bool = True,
21:02:04         **response_kw: typing.Any,
21:02:04     ) -> BaseHTTPResponse:
21:02:04         """
21:02:04         Get a connection from the pool and perform an HTTP request. This is the
21:02:04         lowest level call for making a request, so you'll need to specify all
21:02:04         the raw details.
21:02:04 
21:02:04         .. note::
21:02:04 
21:02:04            More commonly, it's appropriate to use a convenience method
21:02:04            such as :meth:`request`.
21:02:04 
21:02:04         .. note::
21:02:04 
21:02:04            `release_conn` will only behave as expected if
21:02:04            `preload_content=False` because we want to make
21:02:04            `preload_content=False` the default behaviour someday soon without
21:02:04            breaking backwards compatibility.
21:02:04 
21:02:04         :param method:
21:02:04             HTTP request method (such as GET, POST, PUT, etc.)
21:02:04 21:02:04 :param url: 21:02:04 The URL to perform the request on. 21:02:04 21:02:04 :param body: 21:02:04 Data to send in the request body, either :class:`str`, :class:`bytes`, 21:02:04 an iterable of :class:`str`/:class:`bytes`, or a file-like object. 21:02:04 21:02:04 :param headers: 21:02:04 Dictionary of custom headers to send, such as User-Agent, 21:02:04 If-None-Match, etc. If None, pool headers are used. If provided, 21:02:04 these headers completely replace any pool-specific headers. 21:02:04 21:02:04 :param retries: 21:02:04 Configure the number of retries to allow before raising a 21:02:04 :class:`~urllib3.exceptions.MaxRetryError` exception. 21:02:04 21:02:04 If ``None`` (default) will retry 3 times, see ``Retry.DEFAULT``. Pass a 21:02:04 :class:`~urllib3.util.retry.Retry` object for fine-grained control 21:02:04 over different types of retries. 21:02:04 Pass an integer number to retry connection errors that many times, 21:02:04 but no other types of errors. Pass zero to never retry. 21:02:04 21:02:04 If ``False``, then retries are disabled and any exception is raised 21:02:04 immediately. Also, instead of raising a MaxRetryError on redirects, 21:02:04 the redirect response will be returned. 21:02:04 21:02:04 :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. 21:02:04 21:02:04 :param redirect: 21:02:04 If True, automatically handle redirects (status codes 301, 302, 21:02:04 303, 307, 308). Each redirect counts as a retry. Disabling retries 21:02:04 will disable redirect, too. 21:02:04 21:02:04 :param assert_same_host: 21:02:04 If ``True``, will make sure that the host of the pool requests is 21:02:04 consistent else will raise HostChangedError. When ``False``, you can 21:02:04 use the pool on an HTTP proxy and request foreign hosts. 21:02:04 21:02:04 :param timeout: 21:02:04 If specified, overrides the default timeout for this one 21:02:04 request. 
It may be a float (in seconds) or an instance of 21:02:04 :class:`urllib3.util.Timeout`. 21:02:04 21:02:04 :param pool_timeout: 21:02:04 If set and the pool is set to block=True, then this method will 21:02:04 block for ``pool_timeout`` seconds and raise EmptyPoolError if no 21:02:04 connection is available within the time period. 21:02:04 21:02:04 :param bool preload_content: 21:02:04 If True, the response's body will be preloaded into memory. 21:02:04 21:02:04 :param bool decode_content: 21:02:04 If True, will attempt to decode the body based on the 21:02:04 'content-encoding' header. 21:02:04 21:02:04 :param release_conn: 21:02:04 If False, then the urlopen call will not release the connection 21:02:04 back into the pool once a response is received (but will release if 21:02:04 you read the entire contents of the response such as when 21:02:04 `preload_content=True`). This is useful if you're not preloading 21:02:04 the response's content immediately. You will need to call 21:02:04 ``r.release_conn()`` on the response ``r`` to return the connection 21:02:04 back into the pool. If None, it takes the value of ``preload_content`` 21:02:04 which defaults to ``True``. 21:02:04 21:02:04 :param bool chunked: 21:02:04 If True, urllib3 will send the body using chunked transfer 21:02:04 encoding. Otherwise, urllib3 will send the body using the standard 21:02:04 content-length form. Defaults to False. 21:02:04 21:02:04 :param int body_pos: 21:02:04 Position to seek to in file-like body in the event of a retry or 21:02:04 redirect. Typically this won't need to be set because urllib3 will 21:02:04 auto-populate the value when needed. 
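The `retries` contract documented above (``None`` means a default budget, an int means that many connection retries, ``False`` means raise immediately, zero means never retry) can be sketched as a tiny stand-alone model. This is a hypothetical illustration of the documented counter semantics only, not urllib3's actual `Retry` class:

```python
# Minimal, hypothetical model of urllib3-style retry accounting.
# Not the real urllib3 Retry class; it only mirrors the documented
# semantics: None -> default budget of 3, False -> re-raise at once,
# an int -> that many attempts, 0 -> never retry.
class RetryBudget:
    def __init__(self, total):
        if total is None:
            total = 3          # documented default budget
        self.total = total     # False means "retries disabled"

    def increment(self, error):
        if self.total is False:
            raise error        # retries disabled: surface the error as-is
        if self.total <= 0:
            # budget exhausted: wrap the last error, like MaxRetryError
            raise RuntimeError("max retries exceeded") from error
        return RetryBudget(self.total - 1)

budget = RetryBudget(0)        # mirrors the Retry(total=0) seen later in this log
try:
    budget.increment(ConnectionRefusedError(111, "Connection refused"))
except RuntimeError as exc:
    print(type(exc.__cause__).__name__)   # ConnectionRefusedError
```

With a total of 0, as the test client configures it here, the very first connection error exhausts the budget, which is why a single refused connect surfaces as `MaxRetryError` immediately instead of being retried.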
21:02:04         """
21:02:04         parsed_url = parse_url(url)
21:02:04         destination_scheme = parsed_url.scheme
21:02:04 
21:02:04         if headers is None:
21:02:04             headers = self.headers
21:02:04 
21:02:04         if not isinstance(retries, Retry):
21:02:04             retries = Retry.from_int(retries, redirect=redirect, default=self.retries)
21:02:04 
21:02:04         if release_conn is None:
21:02:04             release_conn = preload_content
21:02:04 
21:02:04         # Check host
21:02:04         if assert_same_host and not self.is_same_host(url):
21:02:04             raise HostChangedError(self, url, retries)
21:02:04 
21:02:04         # Ensure that the URL we're connecting to is properly encoded
21:02:04         if url.startswith("/"):
21:02:04             url = to_str(_encode_target(url))
21:02:04         else:
21:02:04             url = to_str(parsed_url.url)
21:02:04 
21:02:04         conn = None
21:02:04 
21:02:04         # Track whether `conn` needs to be released before
21:02:04         # returning/raising/recursing. Update this variable if necessary, and
21:02:04         # leave `release_conn` constant throughout the function. That way, if
21:02:04         # the function recurses, the original value of `release_conn` will be
21:02:04         # passed down into the recursive call, and its value will be respected.
21:02:04         #
21:02:04         # See issue #651 [1] for details.
21:02:04         #
21:02:04         # [1]
21:02:04         release_this_conn = release_conn
21:02:04 
21:02:04         http_tunnel_required = connection_requires_http_tunnel(
21:02:04             self.proxy, self.proxy_config, destination_scheme
21:02:04         )
21:02:04 
21:02:04         # Merge the proxy headers. Only done when not using HTTP CONNECT. We
21:02:04         # have to copy the headers dict so we can safely change it without those
21:02:04         # changes being reflected in anyone else's copy.
21:02:04         if not http_tunnel_required:
21:02:04             headers = headers.copy()  # type: ignore[attr-defined]
21:02:04             headers.update(self.proxy_headers)  # type: ignore[union-attr]
21:02:04 
21:02:04         # Must keep the exception bound to a separate variable or else Python 3
21:02:04         # complains about UnboundLocalError.
21:02:04         err = None
21:02:04 
21:02:04         # Keep track of whether we cleanly exited the except block. This
21:02:04         # ensures we do proper cleanup in finally.
21:02:04         clean_exit = False
21:02:04 
21:02:04         # Rewind body position, if needed. Record current position
21:02:04         # for future rewinds in the event of a redirect/retry.
21:02:04         body_pos = set_file_position(body, body_pos)
21:02:04 
21:02:04         try:
21:02:04             # Request a connection from the queue.
21:02:04             timeout_obj = self._get_timeout(timeout)
21:02:04             conn = self._get_conn(timeout=pool_timeout)
21:02:04 
21:02:04             conn.timeout = timeout_obj.connect_timeout  # type: ignore[assignment]
21:02:04 
21:02:04             # Is this a closed/new connection that requires CONNECT tunnelling?
21:02:04             if self.proxy is not None and http_tunnel_required and conn.is_closed:
21:02:04                 try:
21:02:04                     self._prepare_proxy(conn)
21:02:04                 except (BaseSSLError, OSError, SocketTimeout) as e:
21:02:04                     self._raise_timeout(
21:02:04                         err=e, url=self.proxy.url, timeout_value=conn.timeout
21:02:04                     )
21:02:04                     raise
21:02:04 
21:02:04             # If we're going to release the connection in ``finally:``, then
21:02:04             # the response doesn't need to know about the connection. Otherwise
21:02:04             # it will also try to release it and we'll have a double-release
21:02:04             # mess.
21:02:04             response_conn = conn if not release_conn else None
21:02:04 
21:02:04             # Make the request on the HTTPConnection object
21:02:04 >           response = self._make_request(
21:02:04                 conn,
21:02:04                 method,
21:02:04                 url,
21:02:04                 timeout=timeout_obj,
21:02:04                 body=body,
21:02:04                 headers=headers,
21:02:04                 chunked=chunked,
21:02:04                 retries=retries,
21:02:04                 response_conn=response_conn,
21:02:04                 preload_content=preload_content,
21:02:04                 decode_content=decode_content,
21:02:04                 **response_kw,
21:02:04             )
21:02:04 
21:02:04 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connectionpool.py:787: 
21:02:04 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
21:02:04 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connectionpool.py:493: in _make_request
21:02:04     conn.request(
21:02:04 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:500: in request
21:02:04     self.endheaders()
21:02:04 /opt/pyenv/versions/3.11.10/lib/python3.11/http/client.py:1298: in endheaders
21:02:04     self._send_output(message_body, encode_chunked=encode_chunked)
21:02:04 /opt/pyenv/versions/3.11.10/lib/python3.11/http/client.py:1058: in _send_output
21:02:04     self.send(msg)
21:02:04 /opt/pyenv/versions/3.11.10/lib/python3.11/http/client.py:996: in send
21:02:04     self.connect()
21:02:04 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:331: in connect
21:02:04     self.sock = self._new_conn()
21:02:04                 ^^^^^^^^^^^^^^^^
21:02:04 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
21:02:04 
21:02:04 self = 
21:02:04 
21:02:04     def _new_conn(self) -> socket.socket:
21:02:04         """Establish a socket connection and set nodelay settings on it.
21:02:04 
21:02:04         :return: New socket connection.
21:02:04         """
21:02:04         try:
21:02:04             sock = connection.create_connection(
21:02:04                 (self._dns_host, self.port),
21:02:04                 self.timeout,
21:02:04                 source_address=self.source_address,
21:02:04                 socket_options=self.socket_options,
21:02:04             )
21:02:04         except socket.gaierror as e:
21:02:04             raise NameResolutionError(self.host, self, e) from e
21:02:04         except SocketTimeout as e:
21:02:04             raise ConnectTimeoutError(
21:02:04                 self,
21:02:04                 f"Connection to {self.host} timed out. (connect timeout={self.timeout})",
21:02:04             ) from e
21:02:04 
21:02:04         except OSError as e:
21:02:04 >           raise NewConnectionError(
21:02:04                 self, f"Failed to establish a new connection: {e}"
21:02:04             ) from e
21:02:04 E           urllib3.exceptions.NewConnectionError: HTTPConnection(host='localhost', port=8191): Failed to establish a new connection: [Errno 111] Connection refused
21:02:04 
21:02:04 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:219: NewConnectionError
21:02:04 
21:02:04 The above exception was the direct cause of the following exception:
21:02:04 
21:02:04 self = 
21:02:04 request = , stream = False
21:02:04 timeout = Timeout(connect=30, read=30, total=None), verify = True, cert = None
21:02:04 proxies = OrderedDict()
21:02:04 
21:02:04     def send(
21:02:04         self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None
21:02:04     ):
21:02:04         """Sends PreparedRequest object. Returns Response object.
21:02:04 
21:02:04         :param request: The :class:`PreparedRequest ` being sent.
21:02:04         :param stream: (optional) Whether to stream the request content.
21:02:04         :param timeout: (optional) How long to wait for the server to send
21:02:04             data before giving up, as a float, or a :ref:`(connect timeout,
21:02:04             read timeout) ` tuple.
21:02:04         :type timeout: float or tuple or urllib3 Timeout object
21:02:04         :param verify: (optional) Either a boolean, in which case it controls whether
21:02:04             we verify the server's TLS certificate, or a string, in which case it
21:02:04             must be a path to a CA bundle to use
21:02:04         :param cert: (optional) Any user-provided SSL certificate to be trusted.
21:02:04         :param proxies: (optional) The proxies dictionary to apply to the request.
21:02:04         :rtype: requests.Response
21:02:04         """
21:02:04 
21:02:04         try:
21:02:04             conn = self.get_connection_with_tls_context(
21:02:04                 request, verify, proxies=proxies, cert=cert
21:02:04             )
21:02:04         except LocationValueError as e:
21:02:04             raise InvalidURL(e, request=request)
21:02:04 
21:02:04         self.cert_verify(conn, request.url, verify, cert)
21:02:04         url = self.request_url(request, proxies)
21:02:04         self.add_headers(
21:02:04             request,
21:02:04             stream=stream,
21:02:04             timeout=timeout,
21:02:04             verify=verify,
21:02:04             cert=cert,
21:02:04             proxies=proxies,
21:02:04         )
21:02:04 
21:02:04         chunked = not (request.body is None or "Content-Length" in request.headers)
21:02:04 
21:02:04         if isinstance(timeout, tuple):
21:02:04             try:
21:02:04                 connect, read = timeout
21:02:04                 timeout = TimeoutSauce(connect=connect, read=read)
21:02:04             except ValueError:
21:02:04                 raise ValueError(
21:02:04                     f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, "
21:02:04                     f"or a single float to set both timeouts to the same value."
21:02:04                 )
21:02:04         elif isinstance(timeout, TimeoutSauce):
21:02:04             pass
21:02:04         else:
21:02:04             timeout = TimeoutSauce(connect=timeout, read=timeout)
21:02:04 
21:02:04         try:
21:02:04 >           resp = conn.urlopen(
21:02:04                 method=request.method,
21:02:04                 url=url,
21:02:04                 body=request.body,
21:02:04                 headers=request.headers,
21:02:04                 redirect=False,
21:02:04                 assert_same_host=False,
21:02:04                 preload_content=False,
21:02:04                 decode_content=False,
21:02:04                 retries=self.max_retries,
21:02:04                 timeout=timeout,
21:02:04                 chunked=chunked,
21:02:04             )
21:02:04 
21:02:04 ../.tox/tests121/lib/python3.11/site-packages/requests/adapters.py:644: 
21:02:04 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
21:02:04 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connectionpool.py:841: in urlopen
21:02:04     retries = retries.increment(
21:02:04 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
21:02:04 
21:02:04 self = Retry(total=0, connect=None, read=False, redirect=None, status=None)
21:02:04 method = 'GET'
21:02:04 url = '/rests/data/network-topology:network-topology/topology=topology-netconf/node=XPDRA01?content=nonconfig'
21:02:04 response = None
21:02:04 error = NewConnectionError("HTTPConnection(host='localhost', port=8191): Failed to establish a new connection: [Errno 111] Connection refused")
21:02:04 _pool = 
21:02:04 _stacktrace = 
21:02:04 
21:02:04     def increment(
21:02:04         self,
21:02:04         method: str | None = None,
21:02:04         url: str | None = None,
21:02:04         response: BaseHTTPResponse | None = None,
21:02:04         error: Exception | None = None,
21:02:04         _pool: ConnectionPool | None = None,
21:02:04         _stacktrace: TracebackType | None = None,
21:02:04     ) -> Self:
21:02:04         """Return a new Retry object with incremented retry counters.
21:02:04 
21:02:04         :param response: A response object, or None, if the server did not
21:02:04             return a response.
21:02:04         :type response: :class:`~urllib3.response.BaseHTTPResponse`
21:02:04         :param Exception error: An error encountered during the request, or
21:02:04             None if the response was received successfully.
21:02:04 
21:02:04         :return: A new ``Retry`` object.
21:02:04         """
21:02:04         if self.total is False and error:
21:02:04             # Disabled, indicate to re-raise the error.
21:02:04             raise reraise(type(error), error, _stacktrace)
21:02:04 
21:02:04         total = self.total
21:02:04         if total is not None:
21:02:04             total -= 1
21:02:04 
21:02:04         connect = self.connect
21:02:04         read = self.read
21:02:04         redirect = self.redirect
21:02:04         status_count = self.status
21:02:04         other = self.other
21:02:04         cause = "unknown"
21:02:04         status = None
21:02:04         redirect_location = None
21:02:04 
21:02:04         if error and self._is_connection_error(error):
21:02:04             # Connect retry?
21:02:04             if connect is False:
21:02:04                 raise reraise(type(error), error, _stacktrace)
21:02:04             elif connect is not None:
21:02:04                 connect -= 1
21:02:04 
21:02:04         elif error and self._is_read_error(error):
21:02:04             # Read retry?
21:02:04             if read is False or method is None or not self._is_method_retryable(method):
21:02:04                 raise reraise(type(error), error, _stacktrace)
21:02:04             elif read is not None:
21:02:04                 read -= 1
21:02:04 
21:02:04         elif error:
21:02:04             # Other retry?
21:02:04             if other is not None:
21:02:04                 other -= 1
21:02:04 
21:02:04         elif response and response.get_redirect_location():
21:02:04             # Redirect retry?
21:02:04             if redirect is not None:
21:02:04                 redirect -= 1
21:02:04             cause = "too many redirects"
21:02:04             response_redirect_location = response.get_redirect_location()
21:02:04             if response_redirect_location:
21:02:04                 redirect_location = response_redirect_location
21:02:04             status = response.status
21:02:04 
21:02:04         else:
21:02:04             # Incrementing because of a server error like a 500 in
21:02:04             # status_forcelist and the given method is in the allowed_methods
21:02:04             cause = ResponseError.GENERIC_ERROR
21:02:04             if response and response.status:
21:02:04                 if status_count is not None:
21:02:04                     status_count -= 1
21:02:04                 cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status)
21:02:04                 status = response.status
21:02:04 
21:02:04         history = self.history + (
21:02:04             RequestHistory(method, url, error, status, redirect_location),
21:02:04         )
21:02:04 
21:02:04         new_retry = self.new(
21:02:04             total=total,
21:02:04             connect=connect,
21:02:04             read=read,
21:02:04             redirect=redirect,
21:02:04             status=status_count,
21:02:04             other=other,
21:02:04             history=history,
21:02:04         )
21:02:04 
21:02:04         if new_retry.is_exhausted():
21:02:04             reason = error or ResponseError(cause)
21:02:04 >           raise MaxRetryError(_pool, url, reason) from reason  # type: ignore[arg-type]
21:02:04             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
21:02:04 E           urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=8191): Max retries exceeded with url: /rests/data/network-topology:network-topology/topology=topology-netconf/node=XPDRA01?content=nonconfig (Caused by NewConnectionError("HTTPConnection(host='localhost', port=8191): Failed to establish a new connection: [Errno 111] Connection refused"))
21:02:04 
21:02:04 ../.tox/tests121/lib/python3.11/site-packages/urllib3/util/retry.py:535: MaxRetryError
21:02:04 
21:02:04 During handling of the above exception, another exception occurred:
21:02:04 
21:02:04 self = 
21:02:04 
21:02:04     def test_17_xpdr_device_disconnected(self):
21:02:04 >       response = test_utils.check_device_connection("XPDRA01")
21:02:04                    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
21:02:04 
21:02:04 transportpce_tests/1.2.1/test01_portmapping.py:197: 
21:02:04 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
21:02:04 transportpce_tests/common/test_utils.py:409: in check_device_connection
21:02:04     response = get_request(url[RESTCONF_VERSION].format('{}', node))
21:02:04                ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
21:02:04 transportpce_tests/common/test_utils.py:117: in get_request
21:02:04     return requests.request(
21:02:04 ../.tox/tests121/lib/python3.11/site-packages/requests/api.py:59: in request
21:02:04     return session.request(method=method, url=url, **kwargs)
21:02:04            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
21:02:04 ../.tox/tests121/lib/python3.11/site-packages/requests/sessions.py:589: in request
21:02:04     resp = self.send(prep, **send_kwargs)
21:02:04            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
21:02:04 ../.tox/tests121/lib/python3.11/site-packages/requests/sessions.py:703: in send
21:02:04     r = adapter.send(request, **kwargs)
21:02:04         ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
21:02:04 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
21:02:04 
21:02:04 self = 
21:02:04 request = , stream = False
21:02:04 timeout = Timeout(connect=30, read=30, total=None), verify = True, cert = None
21:02:04 proxies = OrderedDict()
21:02:04 
21:02:04     def send(
21:02:04         self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None
21:02:04     ):
21:02:04         """Sends PreparedRequest object. Returns Response object.
21:02:04 
21:02:04         :param request: The :class:`PreparedRequest ` being sent.
21:02:04         :param stream: (optional) Whether to stream the request content.
21:02:04         :param timeout: (optional) How long to wait for the server to send
21:02:04             data before giving up, as a float, or a :ref:`(connect timeout,
21:02:04             read timeout) ` tuple.
21:02:04         :type timeout: float or tuple or urllib3 Timeout object
21:02:04         :param verify: (optional) Either a boolean, in which case it controls whether
21:02:04             we verify the server's TLS certificate, or a string, in which case it
21:02:04             must be a path to a CA bundle to use
21:02:04         :param cert: (optional) Any user-provided SSL certificate to be trusted.
21:02:04         :param proxies: (optional) The proxies dictionary to apply to the request.
21:02:04         :rtype: requests.Response
21:02:04         """
21:02:04 
21:02:04         try:
21:02:04             conn = self.get_connection_with_tls_context(
21:02:04                 request, verify, proxies=proxies, cert=cert
21:02:04             )
21:02:04         except LocationValueError as e:
21:02:04             raise InvalidURL(e, request=request)
21:02:04 
21:02:04         self.cert_verify(conn, request.url, verify, cert)
21:02:04         url = self.request_url(request, proxies)
21:02:04         self.add_headers(
21:02:04             request,
21:02:04             stream=stream,
21:02:04             timeout=timeout,
21:02:04             verify=verify,
21:02:04             cert=cert,
21:02:04             proxies=proxies,
21:02:04         )
21:02:04 
21:02:04         chunked = not (request.body is None or "Content-Length" in request.headers)
21:02:04 
21:02:04         if isinstance(timeout, tuple):
21:02:04             try:
21:02:04                 connect, read = timeout
21:02:04                 timeout = TimeoutSauce(connect=connect, read=read)
21:02:04             except ValueError:
21:02:04                 raise ValueError(
21:02:04                     f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, "
21:02:04                     f"or a single float to set both timeouts to the same value."
21:02:04                 )
21:02:04         elif isinstance(timeout, TimeoutSauce):
21:02:04             pass
21:02:04         else:
21:02:04             timeout = TimeoutSauce(connect=timeout, read=timeout)
21:02:04 
21:02:04         try:
21:02:04             resp = conn.urlopen(
21:02:04                 method=request.method,
21:02:04                 url=url,
21:02:04                 body=request.body,
21:02:04                 headers=request.headers,
21:02:04                 redirect=False,
21:02:04                 assert_same_host=False,
21:02:04                 preload_content=False,
21:02:04                 decode_content=False,
21:02:04                 retries=self.max_retries,
21:02:04                 timeout=timeout,
21:02:04                 chunked=chunked,
21:02:04             )
21:02:04 
21:02:04         except (ProtocolError, OSError) as err:
21:02:04             raise ConnectionError(err, request=request)
21:02:04 
21:02:04         except MaxRetryError as e:
21:02:04             if isinstance(e.reason, ConnectTimeoutError):
21:02:04                 # TODO: Remove this in 3.0.0: see #2811
21:02:04                 if not isinstance(e.reason, NewConnectionError):
21:02:04                     raise ConnectTimeout(e, request=request)
21:02:04 
21:02:04             if isinstance(e.reason, ResponseError):
21:02:04                 raise RetryError(e, request=request)
21:02:04 
21:02:04             if isinstance(e.reason, _ProxyError):
21:02:04                 raise ProxyError(e, request=request)
21:02:04 
21:02:04             if isinstance(e.reason, _SSLError):
21:02:04                 # This branch is for urllib3 v1.22 and later.
21:02:04                 raise SSLError(e, request=request)
21:02:04 
21:02:04 >           raise ConnectionError(e, request=request)
21:02:04 E           requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=8191): Max retries exceeded with url: /rests/data/network-topology:network-topology/topology=topology-netconf/node=XPDRA01?content=nonconfig (Caused by NewConnectionError("HTTPConnection(host='localhost', port=8191): Failed to establish a new connection: [Errno 111] Connection refused"))
21:02:04 
21:02:04 ../.tox/tests121/lib/python3.11/site-packages/requests/adapters.py:677: ConnectionError
21:02:04 ----------------------------- Captured stdout call -----------------------------
21:02:04 execution of test_17_xpdr_device_disconnected
21:02:04 ________ TestTransportPCEPortmapping.test_18_xpdr_device_not_connected _________
21:02:04 
21:02:04 self = 
21:02:04 
21:02:04     def _new_conn(self) -> socket.socket:
21:02:04         """Establish a socket connection and set nodelay settings on it.
21:02:04 
21:02:04         :return: New socket connection.
21:02:04         """
21:02:04         try:
21:02:04 >           sock = connection.create_connection(
21:02:04                 (self._dns_host, self.port),
21:02:04                 self.timeout,
21:02:04                 source_address=self.source_address,
21:02:04                 socket_options=self.socket_options,
21:02:04             )
21:02:04 
21:02:04 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:204: 
21:02:04 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
21:02:04 ../.tox/tests121/lib/python3.11/site-packages/urllib3/util/connection.py:85: in create_connection
21:02:04     raise err
21:02:04 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
21:02:04 
21:02:04 address = ('localhost', 8191), timeout = 30, source_address = None
21:02:04 socket_options = [(6, 1, 1)]
21:02:04 
21:02:04     def create_connection(
21:02:04         address: tuple[str, int],
21:02:04         timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT,
21:02:04         source_address: tuple[str, int] | None = None,
21:02:04         socket_options: _TYPE_SOCKET_OPTIONS | None = None,
21:02:04     ) -> socket.socket:
21:02:04         """Connect to *address* and return the socket object.
21:02:04 
21:02:04         Convenience function. Connect to *address* (a 2-tuple ``(host,
21:02:04         port)``) and return the socket object. Passing the optional
21:02:04         *timeout* parameter will set the timeout on the socket instance
21:02:04         before attempting to connect. If no *timeout* is supplied, the
21:02:04         global default timeout setting returned by :func:`socket.getdefaulttimeout`
21:02:04         is used. If *source_address* is set it must be a tuple of (host, port)
21:02:04         for the socket to bind as a source address before making the connection.
21:02:04         An host of '' or port 0 tells the OS to use the default.
21:02:04         """
21:02:04 
21:02:04         host, port = address
21:02:04         if host.startswith("["):
21:02:04             host = host.strip("[]")
21:02:04         err = None
21:02:04 
21:02:04         # Using the value from allowed_gai_family() in the context of getaddrinfo lets
21:02:04         # us select whether to work with IPv4 DNS records, IPv6 records, or both.
21:02:04         # The original create_connection function always returns all records.
21:02:04         family = allowed_gai_family()
21:02:04 
21:02:04         try:
21:02:04             host.encode("idna")
21:02:04         except UnicodeError:
21:02:04             raise LocationParseError(f"'{host}', label empty or too long") from None
21:02:04 
21:02:04         for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM):
21:02:04             af, socktype, proto, canonname, sa = res
21:02:04             sock = None
21:02:04             try:
21:02:04                 sock = socket.socket(af, socktype, proto)
21:02:04 
21:02:04                 # If provided, set socket level options before connecting.
21:02:04                 _set_socket_options(sock, socket_options)
21:02:04 
21:02:04                 if timeout is not _DEFAULT_TIMEOUT:
21:02:04                     sock.settimeout(timeout)
21:02:04                 if source_address:
21:02:04                     sock.bind(source_address)
21:02:04 >               sock.connect(sa)
21:02:04 E               ConnectionRefusedError: [Errno 111] Connection refused
21:02:04 
21:02:04 ../.tox/tests121/lib/python3.11/site-packages/urllib3/util/connection.py:73: ConnectionRefusedError
21:02:04 
21:02:04 The above exception was the direct cause of the following exception:
21:02:04 
21:02:04 self = 
21:02:04 method = 'GET'
21:02:04 url = '/rests/data/transportpce-portmapping:network/nodes=XPDRA01/node-info'
21:02:04 body = None
21:02:04 headers = {'User-Agent': 'python-requests/2.32.5', 'Accept-Encoding': 'gzip, deflate', 'Accept': 'application/json', 'Connection': 'keep-alive', 'Content-Type': 'application/json', 'Authorization': 'Basic YWRtaW46YWRtaW4='}
21:02:04 retries = Retry(total=0, connect=None, read=False, redirect=None, status=None)
21:02:04 redirect = False, assert_same_host = False
21:02:04 timeout = Timeout(connect=30, read=30, total=None), pool_timeout = None
21:02:04 release_conn = False, chunked = False, body_pos = None, preload_content = False
21:02:04 decode_content = False, response_kw = {}
21:02:04 parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/rests/data/transportpce-portmapping:network/nodes=XPDRA01/node-info', query=None, fragment=None)
21:02:04 destination_scheme = None, conn = None, release_this_conn = True
21:02:04 http_tunnel_required = False, err = None, clean_exit = False
21:02:04 
21:02:04     def urlopen(  # type: ignore[override]
21:02:04         self,
21:02:04         method: str,
21:02:04         url: str,
21:02:04         body: _TYPE_BODY | None = None,
21:02:04         headers: typing.Mapping[str, str] | None = None,
21:02:04         retries: Retry | bool | int | None = None,
21:02:04         redirect: bool = True,
21:02:04         assert_same_host: bool = True,
21:02:04         timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT,
21:02:04         pool_timeout: int | None = None,
21:02:04         release_conn: bool | None = None,
21:02:04         chunked: bool = False,
21:02:04         body_pos: _TYPE_BODY_POSITION | None = None,
21:02:04         preload_content: bool = True,
21:02:04         decode_content: bool = True,
21:02:04         **response_kw: typing.Any,
21:02:04     ) -> BaseHTTPResponse:
21:02:04         """
21:02:04         Get a connection from the pool and perform an HTTP request. This is the
21:02:04         lowest level call for making a request, so you'll need to specify all
21:02:04         the raw details.
21:02:04 
21:02:04         .. note::
21:02:04 
21:02:04            More commonly, it's appropriate to use a convenience method
21:02:04            such as :meth:`request`.
21:02:04 
21:02:04         .. note::
21:02:04 
21:02:04            `release_conn` will only behave as expected if
21:02:04            `preload_content=False` because we want to make
21:02:04            `preload_content=False` the default behaviour someday soon without
21:02:04            breaking backwards compatibility.
21:02:04 
21:02:04         :param method:
21:02:04             HTTP request method (such as GET, POST, PUT, etc.)
21:02:04 
21:02:04         :param url:
21:02:04             The URL to perform the request on.
21:02:04 
21:02:04         :param body:
21:02:04             Data to send in the request body, either :class:`str`, :class:`bytes`,
21:02:04             an iterable of :class:`str`/:class:`bytes`, or a file-like object.
21:02:04 
21:02:04         :param headers:
21:02:04             Dictionary of custom headers to send, such as User-Agent,
21:02:04             If-None-Match, etc. If None, pool headers are used. If provided,
21:02:04             these headers completely replace any pool-specific headers.
21:02:04 
21:02:04         :param retries:
21:02:04             Configure the number of retries to allow before raising a
21:02:04             :class:`~urllib3.exceptions.MaxRetryError` exception.
21:02:04 
21:02:04             If ``None`` (default) will retry 3 times, see ``Retry.DEFAULT``. Pass a
21:02:04             :class:`~urllib3.util.retry.Retry` object for fine-grained control
21:02:04             over different types of retries.
21:02:04             Pass an integer number to retry connection errors that many times,
21:02:04             but no other types of errors. Pass zero to never retry.
21:02:04 
21:02:04             If ``False``, then retries are disabled and any exception is raised
21:02:04             immediately. Also, instead of raising a MaxRetryError on redirects,
21:02:04             the redirect response will be returned.
21:02:04 
21:02:04         :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int.
21:02:04 
21:02:04         :param redirect:
21:02:04             If True, automatically handle redirects (status codes 301, 302,
21:02:04             303, 307, 308). Each redirect counts as a retry. Disabling retries
21:02:04             will disable redirect, too.
21:02:04 
21:02:04         :param assert_same_host:
21:02:04             If ``True``, will make sure that the host of the pool requests is
21:02:04             consistent else will raise HostChangedError. When ``False``, you can
21:02:04             use the pool on an HTTP proxy and request foreign hosts.
21:02:04 
21:02:04         :param timeout:
21:02:04             If specified, overrides the default timeout for this one
21:02:04             request. It may be a float (in seconds) or an instance of
21:02:04             :class:`urllib3.util.Timeout`.
21:02:04 
21:02:04         :param pool_timeout:
21:02:04             If set and the pool is set to block=True, then this method will
21:02:04             block for ``pool_timeout`` seconds and raise EmptyPoolError if no
21:02:04             connection is available within the time period.
21:02:04 
21:02:04         :param bool preload_content:
21:02:04             If True, the response's body will be preloaded into memory.
21:02:04 
21:02:04         :param bool decode_content:
21:02:04             If True, will attempt to decode the body based on the
21:02:04             'content-encoding' header.
21:02:04 
21:02:04         :param release_conn:
21:02:04             If False, then the urlopen call will not release the connection
21:02:04             back into the pool once a response is received (but will release if
21:02:04             you read the entire contents of the response such as when
21:02:04             `preload_content=True`). This is useful if you're not preloading
21:02:04             the response's content immediately. You will need to call
21:02:04             ``r.release_conn()`` on the response ``r`` to return the connection
21:02:04             back into the pool. If None, it takes the value of ``preload_content``
21:02:04             which defaults to ``True``.
21:02:04 
21:02:04         :param bool chunked:
21:02:04             If True, urllib3 will send the body using chunked transfer
21:02:04             encoding. Otherwise, urllib3 will send the body using the standard
21:02:04             content-length form. Defaults to False.
21:02:04 
21:02:04         :param int body_pos:
21:02:04             Position to seek to in file-like body in the event of a retry or
21:02:04             redirect. Typically this won't need to be set because urllib3 will
21:02:04             auto-populate the value when needed.
21:02:04         """
21:02:04         parsed_url = parse_url(url)
21:02:04         destination_scheme = parsed_url.scheme
21:02:04 
21:02:04         if headers is None:
21:02:04             headers = self.headers
21:02:04 
21:02:04         if not isinstance(retries, Retry):
21:02:04             retries = Retry.from_int(retries, redirect=redirect, default=self.retries)
21:02:04 
21:02:04         if release_conn is None:
21:02:04             release_conn = preload_content
21:02:04 
21:02:04         # Check host
21:02:04         if assert_same_host and not self.is_same_host(url):
21:02:04             raise HostChangedError(self, url, retries)
21:02:04 
21:02:04         # Ensure that the URL we're connecting to is properly encoded
21:02:04         if url.startswith("/"):
21:02:04             url = to_str(_encode_target(url))
21:02:04         else:
21:02:04             url = to_str(parsed_url.url)
21:02:04 
21:02:04         conn = None
21:02:04 
21:02:04         # Track whether `conn` needs to be released before
21:02:04         # returning/raising/recursing. Update this variable if necessary, and
21:02:04         # leave `release_conn` constant throughout the function. That way, if
21:02:04         # the function recurses, the original value of `release_conn` will be
21:02:04         # passed down into the recursive call, and its value will be respected.
21:02:04         #
21:02:04         # See issue #651 [1] for details.
21:02:04         #
21:02:04         # [1]
21:02:04         release_this_conn = release_conn
21:02:04 
21:02:04         http_tunnel_required = connection_requires_http_tunnel(
21:02:04             self.proxy, self.proxy_config, destination_scheme
21:02:04         )
21:02:04 
21:02:04         # Merge the proxy headers. Only done when not using HTTP CONNECT. We
21:02:04         # have to copy the headers dict so we can safely change it without those
21:02:04         # changes being reflected in anyone else's copy.
21:02:04         if not http_tunnel_required:
21:02:04             headers = headers.copy()  # type: ignore[attr-defined]
21:02:04             headers.update(self.proxy_headers)  # type: ignore[union-attr]
21:02:04 
21:02:04         # Must keep the exception bound to a separate variable or else Python 3
21:02:04         # complains about UnboundLocalError.
21:02:04         err = None
21:02:04 
21:02:04         # Keep track of whether we cleanly exited the except block. This
21:02:04         # ensures we do proper cleanup in finally.
21:02:04         clean_exit = False
21:02:04 
21:02:04         # Rewind body position, if needed. Record current position
21:02:04         # for future rewinds in the event of a redirect/retry.
21:02:04         body_pos = set_file_position(body, body_pos)
21:02:04 
21:02:04         try:
21:02:04             # Request a connection from the queue.
21:02:04             timeout_obj = self._get_timeout(timeout)
21:02:04             conn = self._get_conn(timeout=pool_timeout)
21:02:04 
21:02:04             conn.timeout = timeout_obj.connect_timeout  # type: ignore[assignment]
21:02:04 
21:02:04             # Is this a closed/new connection that requires CONNECT tunnelling?
21:02:04             if self.proxy is not None and http_tunnel_required and conn.is_closed:
21:02:04                 try:
21:02:04                     self._prepare_proxy(conn)
21:02:04                 except (BaseSSLError, OSError, SocketTimeout) as e:
21:02:04                     self._raise_timeout(
21:02:04                         err=e, url=self.proxy.url, timeout_value=conn.timeout
21:02:04                     )
21:02:04                     raise
21:02:04 
21:02:04             # If we're going to release the connection in ``finally:``, then
21:02:04             # the response doesn't need to know about the connection. Otherwise
21:02:04             # it will also try to release it and we'll have a double-release
21:02:04             # mess.
21:02:04             response_conn = conn if not release_conn else None
21:02:04 
21:02:04             # Make the request on the HTTPConnection object
21:02:04 >           response = self._make_request(
21:02:04                 conn,
21:02:04                 method,
21:02:04                 url,
21:02:04                 timeout=timeout_obj,
21:02:04                 body=body,
21:02:04                 headers=headers,
21:02:04                 chunked=chunked,
21:02:04                 retries=retries,
21:02:04                 response_conn=response_conn,
21:02:04                 preload_content=preload_content,
21:02:04                 decode_content=decode_content,
21:02:04                 **response_kw,
21:02:04             )
21:02:04 
21:02:04 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connectionpool.py:787: 
21:02:04 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
21:02:04 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connectionpool.py:493: in _make_request
21:02:04     conn.request(
21:02:04 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:500: in request
21:02:04     self.endheaders()
21:02:04 /opt/pyenv/versions/3.11.10/lib/python3.11/http/client.py:1298: in endheaders
21:02:04     self._send_output(message_body, encode_chunked=encode_chunked)
21:02:04 /opt/pyenv/versions/3.11.10/lib/python3.11/http/client.py:1058: in _send_output
21:02:04     self.send(msg)
21:02:04 /opt/pyenv/versions/3.11.10/lib/python3.11/http/client.py:996: in send
21:02:04     self.connect()
21:02:04 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:331: in connect
21:02:04     self.sock = self._new_conn()
21:02:04                 ^^^^^^^^^^^^^^^^
21:02:04 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
21:02:04 
21:02:04 self = 
21:02:04 
21:02:04     def _new_conn(self) -> socket.socket:
21:02:04         """Establish a socket connection and set nodelay settings on it.
21:02:04 
21:02:04         :return: New socket connection.
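The `_new_conn` docstring above ends where the interesting part begins: nothing is listening on the target port (no controller on localhost:8191), so the OS refuses the TCP connection and the socket layer raises `ConnectionRefusedError` with errno 111, which urllib3 then wraps in `NewConnectionError`. A standalone, stdlib-only sketch of the underlying failure (the port is picked dynamically here, not 8191, so nothing real is contacted):

```python
import errno
import socket

# Grab a port that was free a moment ago, then close the listener so that
# nothing is accepting connections on it.
probe = socket.socket()
probe.bind(("127.0.0.1", 0))
port = probe.getsockname()[1]
probe.close()

try:
    # The same stdlib call urllib3's create_connection() ultimately relies on.
    socket.create_connection(("127.0.0.1", port), timeout=2)
    refused = None
except ConnectionRefusedError as exc:
    refused = exc

# On Linux this surfaces as [Errno 111] Connection refused,
# exactly as in the traceback above.
assert refused is not None and refused.errno == errno.ECONNREFUSED
```

The TransportPCE functional tests hit this path deliberately: test_18 queries the portmapping of a device after the controller/simulator is gone, so a refused connection is the expected outcome being asserted around.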
21:02:04         """
21:02:04         try:
21:02:04             sock = connection.create_connection(
21:02:04                 (self._dns_host, self.port),
21:02:04                 self.timeout,
21:02:04                 source_address=self.source_address,
21:02:04                 socket_options=self.socket_options,
21:02:04             )
21:02:04         except socket.gaierror as e:
21:02:04             raise NameResolutionError(self.host, self, e) from e
21:02:04         except SocketTimeout as e:
21:02:04             raise ConnectTimeoutError(
21:02:04                 self,
21:02:04                 f"Connection to {self.host} timed out. (connect timeout={self.timeout})",
21:02:04             ) from e
21:02:04 
21:02:04         except OSError as e:
21:02:04 >           raise NewConnectionError(
21:02:04                 self, f"Failed to establish a new connection: {e}"
21:02:04             ) from e
21:02:04 E           urllib3.exceptions.NewConnectionError: HTTPConnection(host='localhost', port=8191): Failed to establish a new connection: [Errno 111] Connection refused
21:02:04 
21:02:04 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:219: NewConnectionError
21:02:04 
21:02:04 The above exception was the direct cause of the following exception:
21:02:04 
21:02:04 self = 
21:02:04 request = , stream = False
21:02:04 timeout = Timeout(connect=30, read=30, total=None), verify = True, cert = None
21:02:04 proxies = OrderedDict()
21:02:04 
21:02:04     def send(
21:02:04         self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None
21:02:04     ):
21:02:04         """Sends PreparedRequest object. Returns Response object.
21:02:04 
21:02:04         :param request: The :class:`PreparedRequest ` being sent.
21:02:04         :param stream: (optional) Whether to stream the request content.
21:02:04         :param timeout: (optional) How long to wait for the server to send
21:02:04             data before giving up, as a float, or a :ref:`(connect timeout,
21:02:04             read timeout) ` tuple.
21:02:04         :type timeout: float or tuple or urllib3 Timeout object
21:02:04         :param verify: (optional) Either a boolean, in which case it controls whether
21:02:04             we verify the server's TLS certificate, or a string, in which case it
21:02:04             must be a path to a CA bundle to use
21:02:04         :param cert: (optional) Any user-provided SSL certificate to be trusted.
21:02:04         :param proxies: (optional) The proxies dictionary to apply to the request.
21:02:04         :rtype: requests.Response
21:02:04         """
21:02:04 
21:02:04         try:
21:02:04             conn = self.get_connection_with_tls_context(
21:02:04                 request, verify, proxies=proxies, cert=cert
21:02:04             )
21:02:04         except LocationValueError as e:
21:02:04             raise InvalidURL(e, request=request)
21:02:04 
21:02:04         self.cert_verify(conn, request.url, verify, cert)
21:02:04         url = self.request_url(request, proxies)
21:02:04         self.add_headers(
21:02:04             request,
21:02:04             stream=stream,
21:02:04             timeout=timeout,
21:02:04             verify=verify,
21:02:04             cert=cert,
21:02:04             proxies=proxies,
21:02:04         )
21:02:04 
21:02:04         chunked = not (request.body is None or "Content-Length" in request.headers)
21:02:04 
21:02:04         if isinstance(timeout, tuple):
21:02:04             try:
21:02:04                 connect, read = timeout
21:02:04                 timeout = TimeoutSauce(connect=connect, read=read)
21:02:04             except ValueError:
21:02:04                 raise ValueError(
21:02:04                     f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, "
21:02:04                     f"or a single float to set both timeouts to the same value."
21:02:04                 )
21:02:04         elif isinstance(timeout, TimeoutSauce):
21:02:04             pass
21:02:04         else:
21:02:04             timeout = TimeoutSauce(connect=timeout, read=timeout)
21:02:04 
21:02:04         try:
21:02:04 >           resp = conn.urlopen(
21:02:04                 method=request.method,
21:02:04                 url=url,
21:02:04                 body=request.body,
21:02:04                 headers=request.headers,
21:02:04                 redirect=False,
21:02:04                 assert_same_host=False,
21:02:04                 preload_content=False,
21:02:04                 decode_content=False,
21:02:04                 retries=self.max_retries,
21:02:04                 timeout=timeout,
21:02:04                 chunked=chunked,
21:02:04             )
21:02:04 
21:02:04 ../.tox/tests121/lib/python3.11/site-packages/requests/adapters.py:644: 
21:02:04 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
21:02:04 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connectionpool.py:841: in urlopen
21:02:04     retries = retries.increment(
21:02:04 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
21:02:04 
21:02:04 self = Retry(total=0, connect=None, read=False, redirect=None, status=None)
21:02:04 method = 'GET'
21:02:04 url = '/rests/data/transportpce-portmapping:network/nodes=XPDRA01/node-info'
21:02:04 response = None
21:02:04 error = NewConnectionError("HTTPConnection(host='localhost', port=8191): Failed to establish a new connection: [Errno 111] Connection refused")
21:02:04 _pool = 
21:02:04 _stacktrace = 
21:02:04 
21:02:04     def increment(
21:02:04         self,
21:02:04         method: str | None = None,
21:02:04         url: str | None = None,
21:02:04         response: BaseHTTPResponse | None = None,
21:02:04         error: Exception | None = None,
21:02:04         _pool: ConnectionPool | None = None,
21:02:04         _stacktrace: TracebackType | None = None,
21:02:04     ) -> Self:
21:02:04         """Return a new Retry object with incremented retry counters.
21:02:04 
21:02:04         :param response: A response object, or None, if the server did not
21:02:04             return a response.
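`Retry.increment`, whose source follows in this traceback, is what turns the refused connection into the `MaxRetryError` seen below: with `total=0` the decremented counter goes negative, `is_exhausted()` becomes true, and the original error is attached as `reason`. A minimal reproduction, assuming urllib3 is installed (the `url` value is illustrative, and a plain `OSError` stands in for the `NewConnectionError` above):

```python
from urllib3.exceptions import MaxRetryError
from urllib3.util.retry import Retry

# The same single-attempt policy shown in this log.
retries = Retry(total=0, connect=None, read=False, redirect=None, status=None)

original = OSError("connection refused")  # stand-in for NewConnectionError

try:
    # total goes 0 -> -1, the new Retry object is exhausted, so
    # increment() raises MaxRetryError wrapping the original error.
    retries.increment(method="GET", url="/node-info", error=original)
    caught = None
except MaxRetryError as exc:
    caught = exc

assert caught is not None and caught.reason is original
```

This is why requests reports "Max retries exceeded" even though no retry was ever attempted: exhausting a `total=0` policy on the first failure goes through the same code path.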
21:02:04         :type response: :class:`~urllib3.response.BaseHTTPResponse`
21:02:04         :param Exception error: An error encountered during the request, or
21:02:04             None if the response was received successfully.
21:02:04 
21:02:04         :return: A new ``Retry`` object.
21:02:04         """
21:02:04         if self.total is False and error:
21:02:04             # Disabled, indicate to re-raise the error.
21:02:04             raise reraise(type(error), error, _stacktrace)
21:02:04 
21:02:04         total = self.total
21:02:04         if total is not None:
21:02:04             total -= 1
21:02:04 
21:02:04         connect = self.connect
21:02:04         read = self.read
21:02:04         redirect = self.redirect
21:02:04         status_count = self.status
21:02:04         other = self.other
21:02:04         cause = "unknown"
21:02:04         status = None
21:02:04         redirect_location = None
21:02:04 
21:02:04         if error and self._is_connection_error(error):
21:02:04             # Connect retry?
21:02:04             if connect is False:
21:02:04                 raise reraise(type(error), error, _stacktrace)
21:02:04             elif connect is not None:
21:02:04                 connect -= 1
21:02:04 
21:02:04         elif error and self._is_read_error(error):
21:02:04             # Read retry?
21:02:04             if read is False or method is None or not self._is_method_retryable(method):
21:02:04                 raise reraise(type(error), error, _stacktrace)
21:02:04             elif read is not None:
21:02:04                 read -= 1
21:02:04 
21:02:04         elif error:
21:02:04             # Other retry?
21:02:04             if other is not None:
21:02:04                 other -= 1
21:02:04 
21:02:04         elif response and response.get_redirect_location():
21:02:04             # Redirect retry?
21:02:04             if redirect is not None:
21:02:04                 redirect -= 1
21:02:04             cause = "too many redirects"
21:02:04             response_redirect_location = response.get_redirect_location()
21:02:04             if response_redirect_location:
21:02:04                 redirect_location = response_redirect_location
21:02:04             status = response.status
21:02:04 
21:02:04         else:
21:02:04             # Incrementing because of a server error like a 500 in
21:02:04             # status_forcelist and the given method is in the allowed_methods
21:02:04             cause = ResponseError.GENERIC_ERROR
21:02:04             if response and response.status:
21:02:04                 if status_count is not None:
21:02:04                     status_count -= 1
21:02:04                 cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status)
21:02:04             status = response.status
21:02:04 
21:02:04         history = self.history + (
21:02:04             RequestHistory(method, url, error, status, redirect_location),
21:02:04         )
21:02:04 
21:02:04         new_retry = self.new(
21:02:04             total=total,
21:02:04             connect=connect,
21:02:04             read=read,
21:02:04             redirect=redirect,
21:02:04             status=status_count,
21:02:04             other=other,
21:02:04             history=history,
21:02:04         )
21:02:04 
21:02:04         if new_retry.is_exhausted():
21:02:04             reason = error or ResponseError(cause)
21:02:04 >           raise MaxRetryError(_pool, url, reason) from reason  # type: ignore[arg-type]
21:02:04             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
21:02:04 E           urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=8191): Max retries exceeded with url: /rests/data/transportpce-portmapping:network/nodes=XPDRA01/node-info (Caused by NewConnectionError("HTTPConnection(host='localhost', port=8191): Failed to establish a new connection: [Errno 111] Connection refused"))
21:02:04 
21:02:04 ../.tox/tests121/lib/python3.11/site-packages/urllib3/util/retry.py:535: MaxRetryError
21:02:04 
21:02:04 During handling of the above exception, another exception occurred:
21:02:04 
21:02:04 self = 
21:02:04 
21:02:04     def test_18_xpdr_device_not_connected(self):
21:02:04 >       response = test_utils.get_portmapping_node_attr("XPDRA01", "node-info", None)
21:02:04                    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
21:02:04 
21:02:04 transportpce_tests/1.2.1/test01_portmapping.py:205: 
21:02:04 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
21:02:04 transportpce_tests/common/test_utils.py:519: in get_portmapping_node_attr
21:02:04     response = get_request(target_url)
21:02:04                ^^^^^^^^^^^^^^^^^^^^^^^
21:02:04 transportpce_tests/common/test_utils.py:117: in get_request
21:02:04     return requests.request(
21:02:04 ../.tox/tests121/lib/python3.11/site-packages/requests/api.py:59: in request
21:02:04     return session.request(method=method, url=url, **kwargs)
21:02:04            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
21:02:04 ../.tox/tests121/lib/python3.11/site-packages/requests/sessions.py:589: in request
21:02:04     resp = self.send(prep, **send_kwargs)
21:02:04            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
21:02:04 ../.tox/tests121/lib/python3.11/site-packages/requests/sessions.py:703: in send
21:02:04     r = adapter.send(request, **kwargs)
21:02:04         ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
21:02:04 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
21:02:04 
21:02:04 self = 
21:02:04 request = , stream = False
21:02:04 timeout = Timeout(connect=30, read=30, total=None), verify = True, cert = None
21:02:04 proxies = OrderedDict()
21:02:04 
21:02:04     def send(
21:02:04         self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None
21:02:04     ):
21:02:04         """Sends PreparedRequest object. Returns Response object.
21:02:04 
21:02:04         :param request: The :class:`PreparedRequest ` being sent.
21:02:04         :param stream: (optional) Whether to stream the request content.
21:02:04         :param timeout: (optional) How long to wait for the server to send
21:02:04             data before giving up, as a float, or a :ref:`(connect timeout,
21:02:04             read timeout) ` tuple.
21:02:04         :type timeout: float or tuple or urllib3 Timeout object
21:02:04         :param verify: (optional) Either a boolean, in which case it controls whether
21:02:04             we verify the server's TLS certificate, or a string, in which case it
21:02:04             must be a path to a CA bundle to use
21:02:04         :param cert: (optional) Any user-provided SSL certificate to be trusted.
21:02:04         :param proxies: (optional) The proxies dictionary to apply to the request.
21:02:04         :rtype: requests.Response
21:02:04         """
21:02:04 
21:02:04         try:
21:02:04             conn = self.get_connection_with_tls_context(
21:02:04                 request, verify, proxies=proxies, cert=cert
21:02:04             )
21:02:04         except LocationValueError as e:
21:02:04             raise InvalidURL(e, request=request)
21:02:04 
21:02:04         self.cert_verify(conn, request.url, verify, cert)
21:02:04         url = self.request_url(request, proxies)
21:02:04         self.add_headers(
21:02:04             request,
21:02:04             stream=stream,
21:02:04             timeout=timeout,
21:02:04             verify=verify,
21:02:04             cert=cert,
21:02:04             proxies=proxies,
21:02:04         )
21:02:04 
21:02:04         chunked = not (request.body is None or "Content-Length" in request.headers)
21:02:04 
21:02:04         if isinstance(timeout, tuple):
21:02:04             try:
21:02:04                 connect, read = timeout
21:02:04                 timeout = TimeoutSauce(connect=connect, read=read)
21:02:04             except ValueError:
21:02:04                 raise ValueError(
21:02:04                     f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, "
21:02:04                     f"or a single float to set both timeouts to the same value."
21:02:04                 )
21:02:04         elif isinstance(timeout, TimeoutSauce):
21:02:04             pass
21:02:04         else:
21:02:04             timeout = TimeoutSauce(connect=timeout, read=timeout)
21:02:04 
21:02:04         try:
21:02:04             resp = conn.urlopen(
21:02:04                 method=request.method,
21:02:04                 url=url,
21:02:04                 body=request.body,
21:02:04                 headers=request.headers,
21:02:04                 redirect=False,
21:02:04                 assert_same_host=False,
21:02:04                 preload_content=False,
21:02:04                 decode_content=False,
21:02:04                 retries=self.max_retries,
21:02:04                 timeout=timeout,
21:02:04                 chunked=chunked,
21:02:04             )
21:02:04 
21:02:04         except (ProtocolError, OSError) as err:
21:02:04             raise ConnectionError(err, request=request)
21:02:04 
21:02:04         except MaxRetryError as e:
21:02:04             if isinstance(e.reason, ConnectTimeoutError):
21:02:04                 # TODO: Remove this in 3.0.0: see #2811
21:02:04                 if not isinstance(e.reason, NewConnectionError):
21:02:04                     raise ConnectTimeout(e, request=request)
21:02:04 
21:02:04             if isinstance(e.reason, ResponseError):
21:02:04                 raise RetryError(e, request=request)
21:02:04 
21:02:04             if isinstance(e.reason, _ProxyError):
21:02:04                 raise ProxyError(e, request=request)
21:02:04 
21:02:04             if isinstance(e.reason, _SSLError):
21:02:04                 # This branch is for urllib3 v1.22 and later.
21:02:04                 raise SSLError(e, request=request)
21:02:04 
21:02:04 >           raise ConnectionError(e, request=request)
21:02:04 E           requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=8191): Max retries exceeded with url: /rests/data/transportpce-portmapping:network/nodes=XPDRA01/node-info (Caused by NewConnectionError("HTTPConnection(host='localhost', port=8191): Failed to establish a new connection: [Errno 111] Connection refused"))
21:02:04 
21:02:04 ../.tox/tests121/lib/python3.11/site-packages/requests/adapters.py:677: ConnectionError
21:02:04 ----------------------------- Captured stdout call -----------------------------
21:02:04 execution of test_18_xpdr_device_not_connected
21:02:04 _________ TestTransportPCEPortmapping.test_19_rdm_device_disconnection _________
21:02:04 
21:02:04 self = 
21:02:04 
21:02:04     def _new_conn(self) -> socket.socket:
21:02:04         """Establish a socket connection and set nodelay settings on it.
21:02:04 
21:02:04         :return: New socket connection.
21:02:04         """
21:02:04         try:
21:02:04 >           sock = connection.create_connection(
21:02:04                 (self._dns_host, self.port),
21:02:04                 self.timeout,
21:02:04                 source_address=self.source_address,
21:02:04                 socket_options=self.socket_options,
21:02:04             )
21:02:04 
21:02:04 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:204: 
21:02:04 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
21:02:04 ../.tox/tests121/lib/python3.11/site-packages/urllib3/util/connection.py:85: in create_connection
21:02:04     raise err
21:02:04 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
21:02:04 
21:02:04 address = ('localhost', 8191), timeout = 30, source_address = None
21:02:04 socket_options = [(6, 1, 1)]
21:02:04 
21:02:04 def create_connection(
21:02:04     address: tuple[str, int],
21:02:04     timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT,
21:02:04     source_address: tuple[str, int] | None = None,
21:02:04     socket_options: _TYPE_SOCKET_OPTIONS | None = None,
21:02:04 ) -> socket.socket:
21:02:04     """Connect to *address* and return the socket object.
21:02:04 
21:02:04     Convenience function. Connect to *address* (a 2-tuple ``(host,
21:02:04     port)``) and return the socket object. Passing the optional
21:02:04     *timeout* parameter will set the timeout on the socket instance
21:02:04     before attempting to connect. If no *timeout* is supplied, the
21:02:04     global default timeout setting returned by :func:`socket.getdefaulttimeout`
21:02:04     is used. If *source_address* is set it must be a tuple of (host, port)
21:02:04     for the socket to bind as a source address before making the connection.
21:02:04     An host of '' or port 0 tells the OS to use the default.
21:02:04     """
21:02:04 
21:02:04     host, port = address
21:02:04     if host.startswith("["):
21:02:04         host = host.strip("[]")
21:02:04     err = None
21:02:04 
21:02:04     # Using the value from allowed_gai_family() in the context of getaddrinfo lets
21:02:04     # us select whether to work with IPv4 DNS records, IPv6 records, or both.
21:02:04     # The original create_connection function always returns all records.
21:02:04     family = allowed_gai_family()
21:02:04 
21:02:04     try:
21:02:04         host.encode("idna")
21:02:04     except UnicodeError:
21:02:04         raise LocationParseError(f"'{host}', label empty or too long") from None
21:02:04 
21:02:04     for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM):
21:02:04         af, socktype, proto, canonname, sa = res
21:02:04         sock = None
21:02:04         try:
21:02:04             sock = socket.socket(af, socktype, proto)
21:02:04 
21:02:04             # If provided, set socket level options before connecting.
21:02:04             _set_socket_options(sock, socket_options)
21:02:04 
21:02:04             if timeout is not _DEFAULT_TIMEOUT:
21:02:04                 sock.settimeout(timeout)
21:02:04             if source_address:
21:02:04                 sock.bind(source_address)
21:02:04 >           sock.connect(sa)
21:02:04 E           ConnectionRefusedError: [Errno 111] Connection refused
21:02:04 
21:02:04 ../.tox/tests121/lib/python3.11/site-packages/urllib3/util/connection.py:73: ConnectionRefusedError
21:02:04 
21:02:04 The above exception was the direct cause of the following exception:
21:02:04 
21:02:04 self = 
21:02:04 method = 'DELETE'
21:02:04 url = '/rests/data/network-topology:network-topology/topology=topology-netconf/node=ROADMA01'
21:02:04 body = None
21:02:04 headers = {'User-Agent': 'python-requests/2.32.5', 'Accept-Encoding': 'gzip, deflate', 'Accept': 'application/json', 'Connection': 'keep-alive', 'Content-Type': 'application/json', 'Content-Length': '0', 'Authorization': 'Basic YWRtaW46YWRtaW4='}
21:02:04 retries = Retry(total=0, connect=None, read=False, redirect=None, status=None)
21:02:04 redirect = False, assert_same_host = False
21:02:04 timeout = Timeout(connect=30, read=30, total=None), pool_timeout = None
21:02:04 release_conn = False, chunked = False, body_pos = None, preload_content = False
21:02:04 decode_content = False, response_kw = {}
21:02:04 parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/rests/data/network-topology:network-topology/topology=topology-netconf/node=ROADMA01', query=None, fragment=None)
21:02:04 destination_scheme = None, conn = None, release_this_conn = True
21:02:04 http_tunnel_required = False, err = None, clean_exit = False
21:02:04 
21:02:04     def urlopen(  # type: ignore[override]
21:02:04         self,
21:02:04         method: str,
21:02:04         url: str,
21:02:04         body: _TYPE_BODY | None = None,
21:02:04         headers: typing.Mapping[str, str] | None = None,
21:02:04         retries: Retry | bool | int | None = None,
21:02:04         redirect: bool = True,
21:02:04         assert_same_host: bool = True,
21:02:04         timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT,
21:02:04         pool_timeout: int | None = None,
21:02:04         release_conn: bool | None = None,
21:02:04         chunked: bool = False,
21:02:04         body_pos: _TYPE_BODY_POSITION | None = None,
21:02:04         preload_content: bool = True,
21:02:04         decode_content: bool = True,
21:02:04         **response_kw: typing.Any,
21:02:04     ) -> BaseHTTPResponse:
21:02:04         """
21:02:04         Get a connection from the pool and perform an HTTP request. This is the
21:02:04         lowest level call for making a request, so you'll need to specify all
21:02:04         the raw details.
21:02:04 
21:02:04         .. note::
21:02:04 
21:02:04             More commonly, it's appropriate to use a convenience method
21:02:04             such as :meth:`request`.
21:02:04 
21:02:04         .. note::
21:02:04 
21:02:04             `release_conn` will only behave as expected if
21:02:04             `preload_content=False` because we want to make
21:02:04             `preload_content=False` the default behaviour someday soon without
21:02:04             breaking backwards compatibility.
21:02:04 
21:02:04         :param method:
21:02:04             HTTP request method (such as GET, POST, PUT, etc.)
21:02:04 
21:02:04         :param url:
21:02:04             The URL to perform the request on.
21:02:04 
21:02:04         :param body:
21:02:04             Data to send in the request body, either :class:`str`, :class:`bytes`,
21:02:04             an iterable of :class:`str`/:class:`bytes`, or a file-like object.
21:02:04 
21:02:04         :param headers:
21:02:04             Dictionary of custom headers to send, such as User-Agent,
21:02:04             If-None-Match, etc. If None, pool headers are used. If provided,
21:02:04             these headers completely replace any pool-specific headers.
21:02:04 
21:02:04         :param retries:
21:02:04             Configure the number of retries to allow before raising a
21:02:04             :class:`~urllib3.exceptions.MaxRetryError` exception.
21:02:04 
21:02:04             If ``None`` (default) will retry 3 times, see ``Retry.DEFAULT``. Pass a
21:02:04             :class:`~urllib3.util.retry.Retry` object for fine-grained control
21:02:04             over different types of retries.
21:02:04             Pass an integer number to retry connection errors that many times,
21:02:04             but no other types of errors. Pass zero to never retry.
21:02:04 
21:02:04             If ``False``, then retries are disabled and any exception is raised
21:02:04             immediately. Also, instead of raising a MaxRetryError on redirects,
21:02:04             the redirect response will be returned.
21:02:04 
21:02:04         :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int.
21:02:04 
21:02:04         :param redirect:
21:02:04             If True, automatically handle redirects (status codes 301, 302,
21:02:04             303, 307, 308). Each redirect counts as a retry. Disabling retries
21:02:04             will disable redirect, too.
21:02:04 
21:02:04         :param assert_same_host:
21:02:04             If ``True``, will make sure that the host of the pool requests is
21:02:04             consistent else will raise HostChangedError. When ``False``, you can
21:02:04             use the pool on an HTTP proxy and request foreign hosts.
21:02:04 
21:02:04         :param timeout:
21:02:04             If specified, overrides the default timeout for this one
21:02:04             request. It may be a float (in seconds) or an instance of
21:02:04             :class:`urllib3.util.Timeout`.
21:02:04 
21:02:04         :param pool_timeout:
21:02:04             If set and the pool is set to block=True, then this method will
21:02:04             block for ``pool_timeout`` seconds and raise EmptyPoolError if no
21:02:04             connection is available within the time period.
21:02:04 
21:02:04         :param bool preload_content:
21:02:04             If True, the response's body will be preloaded into memory.
21:02:04 
21:02:04         :param bool decode_content:
21:02:04             If True, will attempt to decode the body based on the
21:02:04             'content-encoding' header.
21:02:04 
21:02:04         :param release_conn:
21:02:04             If False, then the urlopen call will not release the connection
21:02:04             back into the pool once a response is received (but will release if
21:02:04             you read the entire contents of the response such as when
21:02:04             `preload_content=True`). This is useful if you're not preloading
21:02:04             the response's content immediately. You will need to call
21:02:04             ``r.release_conn()`` on the response ``r`` to return the connection
21:02:04             back into the pool. If None, it takes the value of ``preload_content``
21:02:04             which defaults to ``True``.
21:02:04 
21:02:04         :param bool chunked:
21:02:04             If True, urllib3 will send the body using chunked transfer
21:02:04             encoding. Otherwise, urllib3 will send the body using the standard
21:02:04             content-length form. Defaults to False.
21:02:04 
21:02:04         :param int body_pos:
21:02:04             Position to seek to in file-like body in the event of a retry or
21:02:04             redirect. Typically this won't need to be set because urllib3 will
21:02:04             auto-populate the value when needed.
21:02:04         """
21:02:04         parsed_url = parse_url(url)
21:02:04         destination_scheme = parsed_url.scheme
21:02:04 
21:02:04         if headers is None:
21:02:04             headers = self.headers
21:02:04 
21:02:04         if not isinstance(retries, Retry):
21:02:04             retries = Retry.from_int(retries, redirect=redirect, default=self.retries)
21:02:04 
21:02:04         if release_conn is None:
21:02:04             release_conn = preload_content
21:02:04 
21:02:04         # Check host
21:02:04         if assert_same_host and not self.is_same_host(url):
21:02:04             raise HostChangedError(self, url, retries)
21:02:04 
21:02:04         # Ensure that the URL we're connecting to is properly encoded
21:02:04         if url.startswith("/"):
21:02:04             url = to_str(_encode_target(url))
21:02:04         else:
21:02:04             url = to_str(parsed_url.url)
21:02:04 
21:02:04         conn = None
21:02:04 
21:02:04         # Track whether `conn` needs to be released before
21:02:04         # returning/raising/recursing. Update this variable if necessary, and
21:02:04         # leave `release_conn` constant throughout the function. That way, if
21:02:04         # the function recurses, the original value of `release_conn` will be
21:02:04         # passed down into the recursive call, and its value will be respected.
21:02:04         #
21:02:04         # See issue #651 [1] for details.
21:02:04         #
21:02:04         # [1] 
21:02:04         release_this_conn = release_conn
21:02:04 
21:02:04         http_tunnel_required = connection_requires_http_tunnel(
21:02:04             self.proxy, self.proxy_config, destination_scheme
21:02:04         )
21:02:04 
21:02:04         # Merge the proxy headers. Only done when not using HTTP CONNECT. We
21:02:04         # have to copy the headers dict so we can safely change it without those
21:02:04         # changes being reflected in anyone else's copy.
21:02:04         if not http_tunnel_required:
21:02:04             headers = headers.copy()  # type: ignore[attr-defined]
21:02:04             headers.update(self.proxy_headers)  # type: ignore[union-attr]
21:02:04 
21:02:04         # Must keep the exception bound to a separate variable or else Python 3
21:02:04         # complains about UnboundLocalError.
21:02:04         err = None
21:02:04 
21:02:04         # Keep track of whether we cleanly exited the except block. This
21:02:04         # ensures we do proper cleanup in finally.
21:02:04         clean_exit = False
21:02:04 
21:02:04         # Rewind body position, if needed. Record current position
21:02:04         # for future rewinds in the event of a redirect/retry.
21:02:04         body_pos = set_file_position(body, body_pos)
21:02:04 
21:02:04         try:
21:02:04             # Request a connection from the queue.
21:02:04             timeout_obj = self._get_timeout(timeout)
21:02:04             conn = self._get_conn(timeout=pool_timeout)
21:02:04 
21:02:04             conn.timeout = timeout_obj.connect_timeout  # type: ignore[assignment]
21:02:04 
21:02:04             # Is this a closed/new connection that requires CONNECT tunnelling?
21:02:04             if self.proxy is not None and http_tunnel_required and conn.is_closed:
21:02:04                 try:
21:02:04                     self._prepare_proxy(conn)
21:02:04                 except (BaseSSLError, OSError, SocketTimeout) as e:
21:02:04                     self._raise_timeout(
21:02:04                         err=e, url=self.proxy.url, timeout_value=conn.timeout
21:02:04                     )
21:02:04                     raise
21:02:04 
21:02:04             # If we're going to release the connection in ``finally:``, then
21:02:04             # the response doesn't need to know about the connection. Otherwise
21:02:04             # it will also try to release it and we'll have a double-release
21:02:04             # mess.
21:02:04             response_conn = conn if not release_conn else None
21:02:04 
21:02:04             # Make the request on the HTTPConnection object
21:02:04 >           response = self._make_request(
21:02:04                 conn,
21:02:04                 method,
21:02:04                 url,
21:02:04                 timeout=timeout_obj,
21:02:04                 body=body,
21:02:04                 headers=headers,
21:02:04                 chunked=chunked,
21:02:04                 retries=retries,
21:02:04                 response_conn=response_conn,
21:02:04                 preload_content=preload_content,
21:02:04                 decode_content=decode_content,
21:02:04                 **response_kw,
21:02:04             )
21:02:04 
21:02:04 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connectionpool.py:787: 
21:02:04 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
21:02:04 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connectionpool.py:493: in _make_request
21:02:04     conn.request(
21:02:04 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:500: in request
21:02:04     self.endheaders()
21:02:04 /opt/pyenv/versions/3.11.10/lib/python3.11/http/client.py:1298: in endheaders
21:02:04     self._send_output(message_body, encode_chunked=encode_chunked)
21:02:04 /opt/pyenv/versions/3.11.10/lib/python3.11/http/client.py:1058: in _send_output
21:02:04     self.send(msg)
21:02:04 /opt/pyenv/versions/3.11.10/lib/python3.11/http/client.py:996: in send
21:02:04     self.connect()
21:02:04 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:331: in connect
21:02:04     self.sock = self._new_conn()
21:02:04                 ^^^^^^^^^^^^^^^^
21:02:04 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
21:02:04 
21:02:04 self = 
21:02:04 
21:02:04     def _new_conn(self) -> socket.socket:
21:02:04         """Establish a socket connection and set nodelay settings on it.
21:02:04 
21:02:04         :return: New socket connection.
21:02:04         """
21:02:04         try:
21:02:04             sock = connection.create_connection(
21:02:04                 (self._dns_host, self.port),
21:02:04                 self.timeout,
21:02:04                 source_address=self.source_address,
21:02:04                 socket_options=self.socket_options,
21:02:04             )
21:02:04         except socket.gaierror as e:
21:02:04             raise NameResolutionError(self.host, self, e) from e
21:02:04         except SocketTimeout as e:
21:02:04             raise ConnectTimeoutError(
21:02:04                 self,
21:02:04                 f"Connection to {self.host} timed out. (connect timeout={self.timeout})",
21:02:04             ) from e
21:02:04 
21:02:04         except OSError as e:
21:02:04 >           raise NewConnectionError(
21:02:04                 self, f"Failed to establish a new connection: {e}"
21:02:04             ) from e
21:02:04 E           urllib3.exceptions.NewConnectionError: HTTPConnection(host='localhost', port=8191): Failed to establish a new connection: [Errno 111] Connection refused
21:02:04 
21:02:04 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:219: NewConnectionError
21:02:04 
21:02:04 The above exception was the direct cause of the following exception:
21:02:04 
21:02:04 self = 
21:02:04 request = , stream = False
21:02:04 timeout = Timeout(connect=30, read=30, total=None), verify = True, cert = None
21:02:04 proxies = OrderedDict()
21:02:04 
21:02:04     def send(
21:02:04         self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None
21:02:04     ):
21:02:04         """Sends PreparedRequest object. Returns Response object.
21:02:04 
21:02:04         :param request: The :class:`PreparedRequest ` being sent.
21:02:04         :param stream: (optional) Whether to stream the request content.
21:02:04         :param timeout: (optional) How long to wait for the server to send
21:02:04             data before giving up, as a float, or a :ref:`(connect timeout,
21:02:04             read timeout) ` tuple.
21:02:04 :type timeout: float or tuple or urllib3 Timeout object 21:02:04 :param verify: (optional) Either a boolean, in which case it controls whether 21:02:04 we verify the server's TLS certificate, or a string, in which case it 21:02:04 must be a path to a CA bundle to use 21:02:04 :param cert: (optional) Any user-provided SSL certificate to be trusted. 21:02:04 :param proxies: (optional) The proxies dictionary to apply to the request. 21:02:04 :rtype: requests.Response 21:02:04 """ 21:02:04 21:02:04 try: 21:02:04 conn = self.get_connection_with_tls_context( 21:02:04 request, verify, proxies=proxies, cert=cert 21:02:04 ) 21:02:04 except LocationValueError as e: 21:02:04 raise InvalidURL(e, request=request) 21:02:04 21:02:04 self.cert_verify(conn, request.url, verify, cert) 21:02:04 url = self.request_url(request, proxies) 21:02:04 self.add_headers( 21:02:04 request, 21:02:04 stream=stream, 21:02:04 timeout=timeout, 21:02:04 verify=verify, 21:02:04 cert=cert, 21:02:04 proxies=proxies, 21:02:04 ) 21:02:04 21:02:04 chunked = not (request.body is None or "Content-Length" in request.headers) 21:02:04 21:02:04 if isinstance(timeout, tuple): 21:02:04 try: 21:02:04 connect, read = timeout 21:02:04 timeout = TimeoutSauce(connect=connect, read=read) 21:02:04 except ValueError: 21:02:04 raise ValueError( 21:02:04 f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 21:02:04 f"or a single float to set both timeouts to the same value." 
21:02:04 ) 21:02:04 elif isinstance(timeout, TimeoutSauce): 21:02:04 pass 21:02:04 else: 21:02:04 timeout = TimeoutSauce(connect=timeout, read=timeout) 21:02:04 21:02:04 try: 21:02:04 > resp = conn.urlopen( 21:02:04 method=request.method, 21:02:04 url=url, 21:02:04 body=request.body, 21:02:04 headers=request.headers, 21:02:04 redirect=False, 21:02:04 assert_same_host=False, 21:02:04 preload_content=False, 21:02:04 decode_content=False, 21:02:04 retries=self.max_retries, 21:02:04 timeout=timeout, 21:02:04 chunked=chunked, 21:02:04 ) 21:02:04 21:02:04 ../.tox/tests121/lib/python3.11/site-packages/requests/adapters.py:644: 21:02:04 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 21:02:04 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connectionpool.py:841: in urlopen 21:02:04 retries = retries.increment( 21:02:04 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 21:02:04 21:02:04 self = Retry(total=0, connect=None, read=False, redirect=None, status=None) 21:02:04 method = 'DELETE' 21:02:04 url = '/rests/data/network-topology:network-topology/topology=topology-netconf/node=ROADMA01' 21:02:04 response = None 21:02:04 error = NewConnectionError("HTTPConnection(host='localhost', port=8191): Failed to establish a new connection: [Errno 111] Connection refused") 21:02:04 _pool = 21:02:04 _stacktrace = 21:02:04 21:02:04 def increment( 21:02:04 self, 21:02:04 method: str | None = None, 21:02:04 url: str | None = None, 21:02:04 response: BaseHTTPResponse | None = None, 21:02:04 error: Exception | None = None, 21:02:04 _pool: ConnectionPool | None = None, 21:02:04 _stacktrace: TracebackType | None = None, 21:02:04 ) -> Self: 21:02:04 """Return a new Retry object with incremented retry counters. 21:02:04 21:02:04 :param response: A response object, or None, if the server did not 21:02:04 return a response. 
21:02:04 :type response: :class:`~urllib3.response.BaseHTTPResponse` 21:02:04 :param Exception error: An error encountered during the request, or 21:02:04 None if the response was received successfully. 21:02:04 21:02:04 :return: A new ``Retry`` object. 21:02:04 """ 21:02:04 if self.total is False and error: 21:02:04 # Disabled, indicate to re-raise the error. 21:02:04 raise reraise(type(error), error, _stacktrace) 21:02:04 21:02:04 total = self.total 21:02:04 if total is not None: 21:02:04 total -= 1 21:02:04 21:02:04 connect = self.connect 21:02:04 read = self.read 21:02:04 redirect = self.redirect 21:02:04 status_count = self.status 21:02:04 other = self.other 21:02:04 cause = "unknown" 21:02:04 status = None 21:02:04 redirect_location = None 21:02:04 21:02:04 if error and self._is_connection_error(error): 21:02:04 # Connect retry? 21:02:04 if connect is False: 21:02:04 raise reraise(type(error), error, _stacktrace) 21:02:04 elif connect is not None: 21:02:04 connect -= 1 21:02:04 21:02:04 elif error and self._is_read_error(error): 21:02:04 # Read retry? 21:02:04 if read is False or method is None or not self._is_method_retryable(method): 21:02:04 raise reraise(type(error), error, _stacktrace) 21:02:04 elif read is not None: 21:02:04 read -= 1 21:02:04 21:02:04 elif error: 21:02:04 # Other retry? 21:02:04 if other is not None: 21:02:04 other -= 1 21:02:04 21:02:04 elif response and response.get_redirect_location(): 21:02:04 # Redirect retry? 
21:02:04 if redirect is not None: 21:02:04 redirect -= 1 21:02:04 cause = "too many redirects" 21:02:04 response_redirect_location = response.get_redirect_location() 21:02:04 if response_redirect_location: 21:02:04 redirect_location = response_redirect_location 21:02:04 status = response.status 21:02:04 21:02:04 else: 21:02:04 # Incrementing because of a server error like a 500 in 21:02:04 # status_forcelist and the given method is in the allowed_methods 21:02:04 cause = ResponseError.GENERIC_ERROR 21:02:04 if response and response.status: 21:02:04 if status_count is not None: 21:02:04 status_count -= 1 21:02:04 cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status) 21:02:04 status = response.status 21:02:04 21:02:04 history = self.history + ( 21:02:04 RequestHistory(method, url, error, status, redirect_location), 21:02:04 ) 21:02:04 21:02:04 new_retry = self.new( 21:02:04 total=total, 21:02:04 connect=connect, 21:02:04 read=read, 21:02:04 redirect=redirect, 21:02:04 status=status_count, 21:02:04 other=other, 21:02:04 history=history, 21:02:04 ) 21:02:04 21:02:04 if new_retry.is_exhausted(): 21:02:04 reason = error or ResponseError(cause) 21:02:04 > raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type] 21:02:04 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 21:02:04 E urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=8191): Max retries exceeded with url: /rests/data/network-topology:network-topology/topology=topology-netconf/node=ROADMA01 (Caused by NewConnectionError("HTTPConnection(host='localhost', port=8191): Failed to establish a new connection: [Errno 111] Connection refused")) 21:02:04 21:02:04 ../.tox/tests121/lib/python3.11/site-packages/urllib3/util/retry.py:535: MaxRetryError 21:02:04 21:02:04 During handling of the above exception, another exception occurred: 21:02:04 21:02:04 self = 21:02:04 21:02:04 def test_19_rdm_device_disconnection(self): 21:02:04 > response = 
test_utils.unmount_device("ROADMA01") 21:02:04 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 21:02:04 21:02:04 transportpce_tests/1.2.1/test01_portmapping.py:213: 21:02:04 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 21:02:04 transportpce_tests/common/test_utils.py:398: in unmount_device 21:02:04 response = delete_request(url[RESTCONF_VERSION].format('{}', node)) 21:02:04 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 21:02:04 transportpce_tests/common/test_utils.py:134: in delete_request 21:02:04 return requests.request( 21:02:04 ../.tox/tests121/lib/python3.11/site-packages/requests/api.py:59: in request 21:02:04 return session.request(method=method, url=url, **kwargs) 21:02:04 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 21:02:04 ../.tox/tests121/lib/python3.11/site-packages/requests/sessions.py:589: in request 21:02:04 resp = self.send(prep, **send_kwargs) 21:02:04 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 21:02:04 ../.tox/tests121/lib/python3.11/site-packages/requests/sessions.py:703: in send 21:02:04 r = adapter.send(request, **kwargs) 21:02:04 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 21:02:04 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 21:02:04 21:02:04 self = 21:02:04 request = , stream = False 21:02:04 timeout = Timeout(connect=30, read=30, total=None), verify = True, cert = None 21:02:04 proxies = OrderedDict() 21:02:04 21:02:04 def send( 21:02:04 self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 21:02:04 ): 21:02:04 """Sends PreparedRequest object. Returns Response object. 21:02:04 21:02:04 :param request: The :class:`PreparedRequest ` being sent. 21:02:04 :param stream: (optional) Whether to stream the request content. 21:02:04 :param timeout: (optional) How long to wait for the server to send 21:02:04 data before giving up, as a float, or a :ref:`(connect timeout, 21:02:04 read timeout) ` tuple. 
21:02:04         :type timeout: float or tuple or urllib3 Timeout object
21:02:04         :param verify: (optional) Either a boolean, in which case it controls whether
21:02:04             we verify the server's TLS certificate, or a string, in which case it
21:02:04             must be a path to a CA bundle to use
21:02:04         :param cert: (optional) Any user-provided SSL certificate to be trusted.
21:02:04         :param proxies: (optional) The proxies dictionary to apply to the request.
21:02:04         :rtype: requests.Response
21:02:04         """
21:02:04 
21:02:04         try:
21:02:04             conn = self.get_connection_with_tls_context(
21:02:04                 request, verify, proxies=proxies, cert=cert
21:02:04             )
21:02:04         except LocationValueError as e:
21:02:04             raise InvalidURL(e, request=request)
21:02:04 
21:02:04         self.cert_verify(conn, request.url, verify, cert)
21:02:04         url = self.request_url(request, proxies)
21:02:04         self.add_headers(
21:02:04             request,
21:02:04             stream=stream,
21:02:04             timeout=timeout,
21:02:04             verify=verify,
21:02:04             cert=cert,
21:02:04             proxies=proxies,
21:02:04         )
21:02:04 
21:02:04         chunked = not (request.body is None or "Content-Length" in request.headers)
21:02:04 
21:02:04         if isinstance(timeout, tuple):
21:02:04             try:
21:02:04                 connect, read = timeout
21:02:04                 timeout = TimeoutSauce(connect=connect, read=read)
21:02:04             except ValueError:
21:02:04                 raise ValueError(
21:02:04                     f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, "
21:02:04                     f"or a single float to set both timeouts to the same value."
21:02:04                 )
21:02:04         elif isinstance(timeout, TimeoutSauce):
21:02:04             pass
21:02:04         else:
21:02:04             timeout = TimeoutSauce(connect=timeout, read=timeout)
21:02:04 
21:02:04         try:
21:02:04             resp = conn.urlopen(
21:02:04                 method=request.method,
21:02:04                 url=url,
21:02:04                 body=request.body,
21:02:04                 headers=request.headers,
21:02:04                 redirect=False,
21:02:04                 assert_same_host=False,
21:02:04                 preload_content=False,
21:02:04                 decode_content=False,
21:02:04                 retries=self.max_retries,
21:02:04                 timeout=timeout,
21:02:04                 chunked=chunked,
21:02:04             )
21:02:04 
21:02:04         except (ProtocolError, OSError) as err:
21:02:04             raise ConnectionError(err, request=request)
21:02:04 
21:02:04         except MaxRetryError as e:
21:02:04             if isinstance(e.reason, ConnectTimeoutError):
21:02:04                 # TODO: Remove this in 3.0.0: see #2811
21:02:04                 if not isinstance(e.reason, NewConnectionError):
21:02:04                     raise ConnectTimeout(e, request=request)
21:02:04 
21:02:04             if isinstance(e.reason, ResponseError):
21:02:04                 raise RetryError(e, request=request)
21:02:04 
21:02:04             if isinstance(e.reason, _ProxyError):
21:02:04                 raise ProxyError(e, request=request)
21:02:04 
21:02:04             if isinstance(e.reason, _SSLError):
21:02:04                 # This branch is for urllib3 v1.22 and later.
21:02:04                 raise SSLError(e, request=request)
21:02:04 
21:02:04 >           raise ConnectionError(e, request=request)
21:02:04 E           requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=8191): Max retries exceeded with url: /rests/data/network-topology:network-topology/topology=topology-netconf/node=ROADMA01 (Caused by NewConnectionError("HTTPConnection(host='localhost', port=8191): Failed to establish a new connection: [Errno 111] Connection refused"))
21:02:04 
21:02:04 ../.tox/tests121/lib/python3.11/site-packages/requests/adapters.py:677: ConnectionError
21:02:04 ----------------------------- Captured stdout call -----------------------------
21:02:04 execution of test_19_rdm_device_disconnection
21:02:04 _________ TestTransportPCEPortmapping.test_20_rdm_device_disconnected __________
21:02:04 
21:02:04 self = 
21:02:04 
21:02:04     def _new_conn(self) -> socket.socket:
21:02:04         """Establish a socket connection and set nodelay settings on it.
21:02:04 
21:02:04         :return: New socket connection.
21:02:04         """
21:02:04         try:
21:02:04 >           sock = connection.create_connection(
21:02:04                 (self._dns_host, self.port),
21:02:04                 self.timeout,
21:02:04                 source_address=self.source_address,
21:02:04                 socket_options=self.socket_options,
21:02:04             )
21:02:04 
21:02:04 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:204: 
21:02:04 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
21:02:04 ../.tox/tests121/lib/python3.11/site-packages/urllib3/util/connection.py:85: in create_connection
21:02:04     raise err
21:02:04 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
21:02:04 
21:02:04 address = ('localhost', 8191), timeout = 30, source_address = None
21:02:04 socket_options = [(6, 1, 1)]
21:02:04 
21:02:04 def create_connection(
21:02:04     address: tuple[str, int],
21:02:04     timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT,
21:02:04     source_address: tuple[str, int] | None = None,
21:02:04     socket_options: _TYPE_SOCKET_OPTIONS | None = None,
21:02:04 ) -> socket.socket:
21:02:04     """Connect to *address* and return the socket object.
21:02:04 
21:02:04     Convenience function. Connect to *address* (a 2-tuple ``(host,
21:02:04     port)``) and return the socket object. Passing the optional
21:02:04     *timeout* parameter will set the timeout on the socket instance
21:02:04     before attempting to connect. If no *timeout* is supplied, the
21:02:04     global default timeout setting returned by :func:`socket.getdefaulttimeout`
21:02:04     is used. If *source_address* is set it must be a tuple of (host, port)
21:02:04     for the socket to bind as a source address before making the connection.
21:02:04     An host of '' or port 0 tells the OS to use the default.
21:02:04     """
21:02:04 
21:02:04     host, port = address
21:02:04     if host.startswith("["):
21:02:04         host = host.strip("[]")
21:02:04     err = None
21:02:04 
21:02:04     # Using the value from allowed_gai_family() in the context of getaddrinfo lets
21:02:04     # us select whether to work with IPv4 DNS records, IPv6 records, or both.
21:02:04     # The original create_connection function always returns all records.
21:02:04     family = allowed_gai_family()
21:02:04 
21:02:04     try:
21:02:04         host.encode("idna")
21:02:04     except UnicodeError:
21:02:04         raise LocationParseError(f"'{host}', label empty or too long") from None
21:02:04 
21:02:04     for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM):
21:02:04         af, socktype, proto, canonname, sa = res
21:02:04         sock = None
21:02:04         try:
21:02:04             sock = socket.socket(af, socktype, proto)
21:02:04 
21:02:04             # If provided, set socket level options before connecting.
21:02:04             _set_socket_options(sock, socket_options)
21:02:04 
21:02:04             if timeout is not _DEFAULT_TIMEOUT:
21:02:04                 sock.settimeout(timeout)
21:02:04             if source_address:
21:02:04                 sock.bind(source_address)
21:02:04 >           sock.connect(sa)
21:02:04 E           ConnectionRefusedError: [Errno 111] Connection refused
21:02:04 
21:02:04 ../.tox/tests121/lib/python3.11/site-packages/urllib3/util/connection.py:73: ConnectionRefusedError
21:02:04 
21:02:04 The above exception was the direct cause of the following exception:
21:02:04 
21:02:04 self = 
21:02:04 method = 'GET'
21:02:04 url = '/rests/data/network-topology:network-topology/topology=topology-netconf/node=ROADMA01?content=nonconfig'
21:02:04 body = None
21:02:04 headers = {'User-Agent': 'python-requests/2.32.5', 'Accept-Encoding': 'gzip, deflate', 'Accept': 'application/json', 'Connection': 'keep-alive', 'Content-Type': 'application/json', 'Authorization': 'Basic YWRtaW46YWRtaW4='}
21:02:04 retries = Retry(total=0, connect=None, read=False, redirect=None, status=None)
21:02:04 redirect = False, assert_same_host = False
21:02:04 timeout = Timeout(connect=30, read=30, total=None), pool_timeout = None
21:02:04 release_conn = False, chunked = False, body_pos = None, preload_content = False
21:02:04 decode_content = False, response_kw = {}
21:02:04 parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/rests/data/network-topology:network-topology/topology=topology-netconf/node=ROADMA01', query='content=nonconfig', fragment=None)
21:02:04 destination_scheme = None, conn = None, release_this_conn = True
21:02:04 http_tunnel_required = False, err = None, clean_exit = False
21:02:04 
21:02:04     def urlopen(  # type: ignore[override]
21:02:04         self,
21:02:04         method: str,
21:02:04         url: str,
21:02:04         body: _TYPE_BODY | None = None,
21:02:04         headers: typing.Mapping[str, str] | None = None,
21:02:04         retries: Retry | bool | int | None = None,
21:02:04         redirect: bool = True,
21:02:04         assert_same_host: bool = True,
21:02:04         timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT,
21:02:04         pool_timeout: int | None = None,
21:02:04         release_conn: bool | None = None,
21:02:04         chunked: bool = False,
21:02:04         body_pos: _TYPE_BODY_POSITION | None = None,
21:02:04         preload_content: bool = True,
21:02:04         decode_content: bool = True,
21:02:04         **response_kw: typing.Any,
21:02:04     ) -> BaseHTTPResponse:
21:02:04         """
21:02:04         Get a connection from the pool and perform an HTTP request. This is the
21:02:04         lowest level call for making a request, so you'll need to specify all
21:02:04         the raw details.
21:02:04 
21:02:04         .. note::
21:02:04 
21:02:04            More commonly, it's appropriate to use a convenience method
21:02:04            such as :meth:`request`.
21:02:04 
21:02:04         .. note::
21:02:04 
21:02:04            `release_conn` will only behave as expected if
21:02:04            `preload_content=False` because we want to make
21:02:04            `preload_content=False` the default behaviour someday soon without
21:02:04            breaking backwards compatibility.
21:02:04 
21:02:04         :param method:
21:02:04             HTTP request method (such as GET, POST, PUT, etc.)
21:02:04 
21:02:04         :param url:
21:02:04             The URL to perform the request on.
21:02:04 
21:02:04         :param body:
21:02:04             Data to send in the request body, either :class:`str`, :class:`bytes`,
21:02:04             an iterable of :class:`str`/:class:`bytes`, or a file-like object.
21:02:04 
21:02:04         :param headers:
21:02:04             Dictionary of custom headers to send, such as User-Agent,
21:02:04             If-None-Match, etc. If None, pool headers are used. If provided,
21:02:04             these headers completely replace any pool-specific headers.
21:02:04 
21:02:04         :param retries:
21:02:04             Configure the number of retries to allow before raising a
21:02:04             :class:`~urllib3.exceptions.MaxRetryError` exception.
21:02:04 
21:02:04             If ``None`` (default) will retry 3 times, see ``Retry.DEFAULT``. Pass a
21:02:04             :class:`~urllib3.util.retry.Retry` object for fine-grained control
21:02:04             over different types of retries.
21:02:04             Pass an integer number to retry connection errors that many times,
21:02:04             but no other types of errors. Pass zero to never retry.
21:02:04 
21:02:04             If ``False``, then retries are disabled and any exception is raised
21:02:04             immediately. Also, instead of raising a MaxRetryError on redirects,
21:02:04             the redirect response will be returned.
21:02:04 
21:02:04         :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int.
21:02:04 
21:02:04         :param redirect:
21:02:04             If True, automatically handle redirects (status codes 301, 302,
21:02:04             303, 307, 308). Each redirect counts as a retry. Disabling retries
21:02:04             will disable redirect, too.
21:02:04 
21:02:04         :param assert_same_host:
21:02:04             If ``True``, will make sure that the host of the pool requests is
21:02:04             consistent else will raise HostChangedError. When ``False``, you can
21:02:04             use the pool on an HTTP proxy and request foreign hosts.
21:02:04 
21:02:04         :param timeout:
21:02:04             If specified, overrides the default timeout for this one
21:02:04             request. It may be a float (in seconds) or an instance of
21:02:04             :class:`urllib3.util.Timeout`.
21:02:04 
21:02:04         :param pool_timeout:
21:02:04             If set and the pool is set to block=True, then this method will
21:02:04             block for ``pool_timeout`` seconds and raise EmptyPoolError if no
21:02:04             connection is available within the time period.
21:02:04 
21:02:04         :param bool preload_content:
21:02:04             If True, the response's body will be preloaded into memory.
21:02:04 
21:02:04         :param bool decode_content:
21:02:04             If True, will attempt to decode the body based on the
21:02:04             'content-encoding' header.
21:02:04 
21:02:04         :param release_conn:
21:02:04             If False, then the urlopen call will not release the connection
21:02:04             back into the pool once a response is received (but will release if
21:02:04             you read the entire contents of the response such as when
21:02:04             `preload_content=True`). This is useful if you're not preloading
21:02:04             the response's content immediately. You will need to call
21:02:04             ``r.release_conn()`` on the response ``r`` to return the connection
21:02:04             back into the pool. If None, it takes the value of ``preload_content``
21:02:04             which defaults to ``True``.
21:02:04 
21:02:04         :param bool chunked:
21:02:04             If True, urllib3 will send the body using chunked transfer
21:02:04             encoding. Otherwise, urllib3 will send the body using the standard
21:02:04             content-length form. Defaults to False.
21:02:04 
21:02:04         :param int body_pos:
21:02:04             Position to seek to in file-like body in the event of a retry or
21:02:04             redirect. Typically this won't need to be set because urllib3 will
21:02:04             auto-populate the value when needed.
21:02:04         """
21:02:04         parsed_url = parse_url(url)
21:02:04         destination_scheme = parsed_url.scheme
21:02:04 
21:02:04         if headers is None:
21:02:04             headers = self.headers
21:02:04 
21:02:04         if not isinstance(retries, Retry):
21:02:04             retries = Retry.from_int(retries, redirect=redirect, default=self.retries)
21:02:04 
21:02:04         if release_conn is None:
21:02:04             release_conn = preload_content
21:02:04 
21:02:04         # Check host
21:02:04         if assert_same_host and not self.is_same_host(url):
21:02:04             raise HostChangedError(self, url, retries)
21:02:04 
21:02:04         # Ensure that the URL we're connecting to is properly encoded
21:02:04         if url.startswith("/"):
21:02:04             url = to_str(_encode_target(url))
21:02:04         else:
21:02:04             url = to_str(parsed_url.url)
21:02:04 
21:02:04         conn = None
21:02:04 
21:02:04         # Track whether `conn` needs to be released before
21:02:04         # returning/raising/recursing. Update this variable if necessary, and
21:02:04         # leave `release_conn` constant throughout the function. That way, if
21:02:04         # the function recurses, the original value of `release_conn` will be
21:02:04         # passed down into the recursive call, and its value will be respected.
21:02:04         #
21:02:04         # See issue #651 [1] for details.
21:02:04         #
21:02:04         # [1] 
21:02:04         release_this_conn = release_conn
21:02:04 
21:02:04         http_tunnel_required = connection_requires_http_tunnel(
21:02:04             self.proxy, self.proxy_config, destination_scheme
21:02:04         )
21:02:04 
21:02:04         # Merge the proxy headers. Only done when not using HTTP CONNECT. We
21:02:04         # have to copy the headers dict so we can safely change it without those
21:02:04         # changes being reflected in anyone else's copy.
21:02:04         if not http_tunnel_required:
21:02:04             headers = headers.copy()  # type: ignore[attr-defined]
21:02:04             headers.update(self.proxy_headers)  # type: ignore[union-attr]
21:02:04 
21:02:04         # Must keep the exception bound to a separate variable or else Python 3
21:02:04         # complains about UnboundLocalError.
21:02:04         err = None
21:02:04 
21:02:04         # Keep track of whether we cleanly exited the except block. This
21:02:04         # ensures we do proper cleanup in finally.
21:02:04         clean_exit = False
21:02:04 
21:02:04         # Rewind body position, if needed. Record current position
21:02:04         # for future rewinds in the event of a redirect/retry.
21:02:04         body_pos = set_file_position(body, body_pos)
21:02:04 
21:02:04         try:
21:02:04             # Request a connection from the queue.
21:02:04             timeout_obj = self._get_timeout(timeout)
21:02:04             conn = self._get_conn(timeout=pool_timeout)
21:02:04 
21:02:04             conn.timeout = timeout_obj.connect_timeout  # type: ignore[assignment]
21:02:04 
21:02:04             # Is this a closed/new connection that requires CONNECT tunnelling?
21:02:04             if self.proxy is not None and http_tunnel_required and conn.is_closed:
21:02:04                 try:
21:02:04                     self._prepare_proxy(conn)
21:02:04                 except (BaseSSLError, OSError, SocketTimeout) as e:
21:02:04                     self._raise_timeout(
21:02:04                         err=e, url=self.proxy.url, timeout_value=conn.timeout
21:02:04                     )
21:02:04                     raise
21:02:04 
21:02:04             # If we're going to release the connection in ``finally:``, then
21:02:04             # the response doesn't need to know about the connection. Otherwise
21:02:04             # it will also try to release it and we'll have a double-release
21:02:04             # mess.
21:02:04             response_conn = conn if not release_conn else None
21:02:04 
21:02:04             # Make the request on the HTTPConnection object
21:02:04 >           response = self._make_request(
21:02:04                 conn,
21:02:04                 method,
21:02:04                 url,
21:02:04                 timeout=timeout_obj,
21:02:04                 body=body,
21:02:04                 headers=headers,
21:02:04                 chunked=chunked,
21:02:04                 retries=retries,
21:02:04                 response_conn=response_conn,
21:02:04                 preload_content=preload_content,
21:02:04                 decode_content=decode_content,
21:02:04                 **response_kw,
21:02:04             )
21:02:04 
21:02:04 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connectionpool.py:787: 
21:02:04 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
21:02:04 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connectionpool.py:493: in _make_request
21:02:04     conn.request(
21:02:04 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:500: in request
21:02:04     self.endheaders()
21:02:04 /opt/pyenv/versions/3.11.10/lib/python3.11/http/client.py:1298: in endheaders
21:02:04     self._send_output(message_body, encode_chunked=encode_chunked)
21:02:04 /opt/pyenv/versions/3.11.10/lib/python3.11/http/client.py:1058: in _send_output
21:02:04     self.send(msg)
21:02:04 /opt/pyenv/versions/3.11.10/lib/python3.11/http/client.py:996: in send
21:02:04     self.connect()
21:02:04 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:331: in connect
21:02:04     self.sock = self._new_conn()
21:02:04                 ^^^^^^^^^^^^^^^^
21:02:04 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
21:02:04 
21:02:04 self = 
21:02:04 
21:02:04     def _new_conn(self) -> socket.socket:
21:02:04         """Establish a socket connection and set nodelay settings on it.
21:02:04 
21:02:04         :return: New socket connection.
21:02:04         """
21:02:04         try:
21:02:04             sock = connection.create_connection(
21:02:04                 (self._dns_host, self.port),
21:02:04                 self.timeout,
21:02:04                 source_address=self.source_address,
21:02:04                 socket_options=self.socket_options,
21:02:04             )
21:02:04         except socket.gaierror as e:
21:02:04             raise NameResolutionError(self.host, self, e) from e
21:02:04         except SocketTimeout as e:
21:02:04             raise ConnectTimeoutError(
21:02:04                 self,
21:02:04                 f"Connection to {self.host} timed out. (connect timeout={self.timeout})",
21:02:04             ) from e
21:02:04 
21:02:04         except OSError as e:
21:02:04 >           raise NewConnectionError(
21:02:04                 self, f"Failed to establish a new connection: {e}"
21:02:04             ) from e
21:02:04 E           urllib3.exceptions.NewConnectionError: HTTPConnection(host='localhost', port=8191): Failed to establish a new connection: [Errno 111] Connection refused
21:02:04 
21:02:04 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:219: NewConnectionError
21:02:04 
21:02:04 The above exception was the direct cause of the following exception:
21:02:04 
21:02:04 self = 
21:02:04 request = , stream = False
21:02:04 timeout = Timeout(connect=30, read=30, total=None), verify = True, cert = None
21:02:04 proxies = OrderedDict()
21:02:04 
21:02:04     def send(
21:02:04         self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None
21:02:04     ):
21:02:04         """Sends PreparedRequest object. Returns Response object.
21:02:04 
21:02:04         :param request: The :class:`PreparedRequest ` being sent.
21:02:04         :param stream: (optional) Whether to stream the request content.
21:02:04         :param timeout: (optional) How long to wait for the server to send
21:02:04             data before giving up, as a float, or a :ref:`(connect timeout,
21:02:04             read timeout) ` tuple.
21:02:04         :type timeout: float or tuple or urllib3 Timeout object
21:02:04         :param verify: (optional) Either a boolean, in which case it controls whether
21:02:04             we verify the server's TLS certificate, or a string, in which case it
21:02:04             must be a path to a CA bundle to use
21:02:04         :param cert: (optional) Any user-provided SSL certificate to be trusted.
21:02:04         :param proxies: (optional) The proxies dictionary to apply to the request.
21:02:04         :rtype: requests.Response
21:02:04         """
21:02:04 
21:02:04         try:
21:02:04             conn = self.get_connection_with_tls_context(
21:02:04                 request, verify, proxies=proxies, cert=cert
21:02:04             )
21:02:04         except LocationValueError as e:
21:02:04             raise InvalidURL(e, request=request)
21:02:04 
21:02:04         self.cert_verify(conn, request.url, verify, cert)
21:02:04         url = self.request_url(request, proxies)
21:02:04         self.add_headers(
21:02:04             request,
21:02:04             stream=stream,
21:02:04             timeout=timeout,
21:02:04             verify=verify,
21:02:04             cert=cert,
21:02:04             proxies=proxies,
21:02:04         )
21:02:04 
21:02:04         chunked = not (request.body is None or "Content-Length" in request.headers)
21:02:04 
21:02:04         if isinstance(timeout, tuple):
21:02:04             try:
21:02:04                 connect, read = timeout
21:02:04                 timeout = TimeoutSauce(connect=connect, read=read)
21:02:04             except ValueError:
21:02:04                 raise ValueError(
21:02:04                     f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, "
21:02:04                     f"or a single float to set both timeouts to the same value."
21:02:04                 )
21:02:04         elif isinstance(timeout, TimeoutSauce):
21:02:04             pass
21:02:04         else:
21:02:04             timeout = TimeoutSauce(connect=timeout, read=timeout)
21:02:04 
21:02:04         try:
21:02:04 >           resp = conn.urlopen(
21:02:04                 method=request.method,
21:02:04                 url=url,
21:02:04                 body=request.body,
21:02:04                 headers=request.headers,
21:02:04                 redirect=False,
21:02:04                 assert_same_host=False,
21:02:04                 preload_content=False,
21:02:04                 decode_content=False,
21:02:04                 retries=self.max_retries,
21:02:04                 timeout=timeout,
21:02:04                 chunked=chunked,
21:02:04             )
21:02:04 
21:02:04 ../.tox/tests121/lib/python3.11/site-packages/requests/adapters.py:644: 
21:02:04 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
21:02:04 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connectionpool.py:841: in urlopen
21:02:04     retries = retries.increment(
21:02:04 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
21:02:04 
21:02:04 self = Retry(total=0, connect=None, read=False, redirect=None, status=None)
21:02:04 method = 'GET'
21:02:04 url = '/rests/data/network-topology:network-topology/topology=topology-netconf/node=ROADMA01?content=nonconfig'
21:02:04 response = None
21:02:04 error = NewConnectionError("HTTPConnection(host='localhost', port=8191): Failed to establish a new connection: [Errno 111] Connection refused")
21:02:04 _pool = 
21:02:04 _stacktrace = 
21:02:04 
21:02:04     def increment(
21:02:04         self,
21:02:04         method: str | None = None,
21:02:04         url: str | None = None,
21:02:04         response: BaseHTTPResponse | None = None,
21:02:04         error: Exception | None = None,
21:02:04         _pool: ConnectionPool | None = None,
21:02:04         _stacktrace: TracebackType | None = None,
21:02:04     ) -> Self:
21:02:04         """Return a new Retry object with incremented retry counters.
21:02:04 
21:02:04         :param response: A response object, or None, if the server did not
21:02:04             return a response.
21:02:04         :type response: :class:`~urllib3.response.BaseHTTPResponse`
21:02:04         :param Exception error: An error encountered during the request, or
21:02:04             None if the response was received successfully.
21:02:04 
21:02:04         :return: A new ``Retry`` object.
21:02:04         """
21:02:04         if self.total is False and error:
21:02:04             # Disabled, indicate to re-raise the error.
21:02:04             raise reraise(type(error), error, _stacktrace)
21:02:04 
21:02:04         total = self.total
21:02:04         if total is not None:
21:02:04             total -= 1
21:02:04 
21:02:04         connect = self.connect
21:02:04         read = self.read
21:02:04         redirect = self.redirect
21:02:04         status_count = self.status
21:02:04         other = self.other
21:02:04         cause = "unknown"
21:02:04         status = None
21:02:04         redirect_location = None
21:02:04 
21:02:04         if error and self._is_connection_error(error):
21:02:04             # Connect retry?
21:02:04             if connect is False:
21:02:04                 raise reraise(type(error), error, _stacktrace)
21:02:04             elif connect is not None:
21:02:04                 connect -= 1
21:02:04 
21:02:04         elif error and self._is_read_error(error):
21:02:04             # Read retry?
21:02:04             if read is False or method is None or not self._is_method_retryable(method):
21:02:04                 raise reraise(type(error), error, _stacktrace)
21:02:04             elif read is not None:
21:02:04                 read -= 1
21:02:04 
21:02:04         elif error:
21:02:04             # Other retry?
21:02:04             if other is not None:
21:02:04                 other -= 1
21:02:04 
21:02:04         elif response and response.get_redirect_location():
21:02:04             # Redirect retry?
21:02:04             if redirect is not None:
21:02:04                 redirect -= 1
21:02:04             cause = "too many redirects"
21:02:04             response_redirect_location = response.get_redirect_location()
21:02:04             if response_redirect_location:
21:02:04                 redirect_location = response_redirect_location
21:02:04             status = response.status
21:02:04 
21:02:04         else:
21:02:04             # Incrementing because of a server error like a 500 in
21:02:04             # status_forcelist and the given method is in the allowed_methods
21:02:04             cause = ResponseError.GENERIC_ERROR
21:02:04             if response and response.status:
21:02:04                 if status_count is not None:
21:02:04                     status_count -= 1
21:02:04                 cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status)
21:02:04                 status = response.status
21:02:04 
21:02:04         history = self.history + (
21:02:04             RequestHistory(method, url, error, status, redirect_location),
21:02:04         )
21:02:04 
21:02:04         new_retry = self.new(
21:02:04             total=total,
21:02:04             connect=connect,
21:02:04             read=read,
21:02:04             redirect=redirect,
21:02:04             status=status_count,
21:02:04             other=other,
21:02:04             history=history,
21:02:04         )
21:02:04 
21:02:04         if new_retry.is_exhausted():
21:02:04             reason = error or ResponseError(cause)
21:02:04 >           raise MaxRetryError(_pool, url, reason) from reason  # type: ignore[arg-type]
21:02:04             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
21:02:04 E           urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=8191): Max retries exceeded with url: /rests/data/network-topology:network-topology/topology=topology-netconf/node=ROADMA01?content=nonconfig (Caused by NewConnectionError("HTTPConnection(host='localhost', port=8191): Failed to establish a new connection: [Errno 111] Connection refused"))
21:02:04 
21:02:04 ../.tox/tests121/lib/python3.11/site-packages/urllib3/util/retry.py:535: MaxRetryError
21:02:04 
21:02:04 During handling of the above exception, another exception occurred:
21:02:04 
21:02:04 self = 
21:02:04 
21:02:04     def test_20_rdm_device_disconnected(self):
21:02:04 >       response =
test_utils.check_device_connection("ROADMA01") 21:02:04 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 21:02:04 21:02:04 transportpce_tests/1.2.1/test01_portmapping.py:217: 21:02:04 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 21:02:04 transportpce_tests/common/test_utils.py:409: in check_device_connection 21:02:04 response = get_request(url[RESTCONF_VERSION].format('{}', node)) 21:02:04 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 21:02:04 transportpce_tests/common/test_utils.py:117: in get_request 21:02:04 return requests.request( 21:02:04 ../.tox/tests121/lib/python3.11/site-packages/requests/api.py:59: in request 21:02:04 return session.request(method=method, url=url, **kwargs) 21:02:04 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 21:02:04 ../.tox/tests121/lib/python3.11/site-packages/requests/sessions.py:589: in request 21:02:04 resp = self.send(prep, **send_kwargs) 21:02:04 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 21:02:04 ../.tox/tests121/lib/python3.11/site-packages/requests/sessions.py:703: in send 21:02:04 r = adapter.send(request, **kwargs) 21:02:04 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 21:02:04 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 21:02:04 21:02:04 self = 21:02:04 request = , stream = False 21:02:04 timeout = Timeout(connect=30, read=30, total=None), verify = True, cert = None 21:02:04 proxies = OrderedDict() 21:02:04 21:02:04 def send( 21:02:04 self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 21:02:04 ): 21:02:04 """Sends PreparedRequest object. Returns Response object. 21:02:04 21:02:04 :param request: The :class:`PreparedRequest ` being sent. 21:02:04 :param stream: (optional) Whether to stream the request content. 21:02:04 :param timeout: (optional) How long to wait for the server to send 21:02:04 data before giving up, as a float, or a :ref:`(connect timeout, 21:02:04 read timeout) ` tuple. 
21:02:04         :type timeout: float or tuple or urllib3 Timeout object
21:02:04         :param verify: (optional) Either a boolean, in which case it controls whether
21:02:04             we verify the server's TLS certificate, or a string, in which case it
21:02:04             must be a path to a CA bundle to use
21:02:04         :param cert: (optional) Any user-provided SSL certificate to be trusted.
21:02:04         :param proxies: (optional) The proxies dictionary to apply to the request.
21:02:04         :rtype: requests.Response
21:02:04         """
21:02:04
21:02:04         try:
21:02:04             conn = self.get_connection_with_tls_context(
21:02:04                 request, verify, proxies=proxies, cert=cert
21:02:04             )
21:02:04         except LocationValueError as e:
21:02:04             raise InvalidURL(e, request=request)
21:02:04
21:02:04         self.cert_verify(conn, request.url, verify, cert)
21:02:04         url = self.request_url(request, proxies)
21:02:04         self.add_headers(
21:02:04             request,
21:02:04             stream=stream,
21:02:04             timeout=timeout,
21:02:04             verify=verify,
21:02:04             cert=cert,
21:02:04             proxies=proxies,
21:02:04         )
21:02:04
21:02:04         chunked = not (request.body is None or "Content-Length" in request.headers)
21:02:04
21:02:04         if isinstance(timeout, tuple):
21:02:04             try:
21:02:04                 connect, read = timeout
21:02:04                 timeout = TimeoutSauce(connect=connect, read=read)
21:02:04             except ValueError:
21:02:04                 raise ValueError(
21:02:04                     f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, "
21:02:04                     f"or a single float to set both timeouts to the same value."
21:02:04                 )
21:02:04         elif isinstance(timeout, TimeoutSauce):
21:02:04             pass
21:02:04         else:
21:02:04             timeout = TimeoutSauce(connect=timeout, read=timeout)
21:02:04
21:02:04         try:
21:02:04             resp = conn.urlopen(
21:02:04                 method=request.method,
21:02:04                 url=url,
21:02:04                 body=request.body,
21:02:04                 headers=request.headers,
21:02:04                 redirect=False,
21:02:04                 assert_same_host=False,
21:02:04                 preload_content=False,
21:02:04                 decode_content=False,
21:02:04                 retries=self.max_retries,
21:02:04                 timeout=timeout,
21:02:04                 chunked=chunked,
21:02:04             )
21:02:04
21:02:04         except (ProtocolError, OSError) as err:
21:02:04             raise ConnectionError(err, request=request)
21:02:04
21:02:04         except MaxRetryError as e:
21:02:04             if isinstance(e.reason, ConnectTimeoutError):
21:02:04                 # TODO: Remove this in 3.0.0: see #2811
21:02:04                 if not isinstance(e.reason, NewConnectionError):
21:02:04                     raise ConnectTimeout(e, request=request)
21:02:04
21:02:04             if isinstance(e.reason, ResponseError):
21:02:04                 raise RetryError(e, request=request)
21:02:04
21:02:04             if isinstance(e.reason, _ProxyError):
21:02:04                 raise ProxyError(e, request=request)
21:02:04
21:02:04             if isinstance(e.reason, _SSLError):
21:02:04                 # This branch is for urllib3 v1.22 and later.
21:02:04                 raise SSLError(e, request=request)
21:02:04
21:02:04 >           raise ConnectionError(e, request=request)
21:02:04 E           requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=8191): Max retries exceeded with url: /rests/data/network-topology:network-topology/topology=topology-netconf/node=ROADMA01?content=nonconfig (Caused by NewConnectionError("HTTPConnection(host='localhost', port=8191): Failed to establish a new connection: [Errno 111] Connection refused"))
21:02:04
21:02:04 ../.tox/tests121/lib/python3.11/site-packages/requests/adapters.py:677: ConnectionError
21:02:04 ----------------------------- Captured stdout call -----------------------------
21:02:04 execution of test_20_rdm_device_disconnected
21:02:04 _________ TestTransportPCEPortmapping.test_21_rdm_device_not_connected _________
21:02:04
21:02:04 self =
21:02:04
21:02:04     def _new_conn(self) -> socket.socket:
21:02:04         """Establish a socket connection and set nodelay settings on it.
21:02:04
21:02:04         :return: New socket connection.
21:02:04         """
21:02:04         try:
21:02:04 >           sock = connection.create_connection(
21:02:04                 (self._dns_host, self.port),
21:02:04                 self.timeout,
21:02:04                 source_address=self.source_address,
21:02:04                 socket_options=self.socket_options,
21:02:04             )
21:02:04
21:02:04 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:204:
21:02:04 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
21:02:04 ../.tox/tests121/lib/python3.11/site-packages/urllib3/util/connection.py:85: in create_connection
21:02:04     raise err
21:02:04 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
21:02:04
21:02:04 address = ('localhost', 8191), timeout = 30, source_address = None
21:02:04 socket_options = [(6, 1, 1)]
21:02:04
21:02:04     def create_connection(
21:02:04         address: tuple[str, int],
21:02:04         timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT,
21:02:04         source_address: tuple[str, int] | None = None,
21:02:04         socket_options: _TYPE_SOCKET_OPTIONS | None = None,
21:02:04     ) -> socket.socket:
21:02:04         """Connect to *address* and return the socket object.
21:02:04
21:02:04         Convenience function. Connect to *address* (a 2-tuple ``(host,
21:02:04         port)``) and return the socket object. Passing the optional
21:02:04         *timeout* parameter will set the timeout on the socket instance
21:02:04         before attempting to connect. If no *timeout* is supplied, the
21:02:04         global default timeout setting returned by :func:`socket.getdefaulttimeout`
21:02:04         is used. If *source_address* is set it must be a tuple of (host, port)
21:02:04         for the socket to bind as a source address before making the connection.
21:02:04         An host of '' or port 0 tells the OS to use the default.
21:02:04         """
21:02:04
21:02:04         host, port = address
21:02:04         if host.startswith("["):
21:02:04             host = host.strip("[]")
21:02:04         err = None
21:02:04
21:02:04         # Using the value from allowed_gai_family() in the context of getaddrinfo lets
21:02:04         # us select whether to work with IPv4 DNS records, IPv6 records, or both.
21:02:04         # The original create_connection function always returns all records.
21:02:04         family = allowed_gai_family()
21:02:04
21:02:04         try:
21:02:04             host.encode("idna")
21:02:04         except UnicodeError:
21:02:04             raise LocationParseError(f"'{host}', label empty or too long") from None
21:02:04
21:02:04         for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM):
21:02:04             af, socktype, proto, canonname, sa = res
21:02:04             sock = None
21:02:04             try:
21:02:04                 sock = socket.socket(af, socktype, proto)
21:02:04
21:02:04                 # If provided, set socket level options before connecting.
21:02:04                 _set_socket_options(sock, socket_options)
21:02:04
21:02:04                 if timeout is not _DEFAULT_TIMEOUT:
21:02:04                     sock.settimeout(timeout)
21:02:04                 if source_address:
21:02:04                     sock.bind(source_address)
21:02:04 >               sock.connect(sa)
21:02:04 E               ConnectionRefusedError: [Errno 111] Connection refused
21:02:04
21:02:04 ../.tox/tests121/lib/python3.11/site-packages/urllib3/util/connection.py:73: ConnectionRefusedError
21:02:04
21:02:04 The above exception was the direct cause of the following exception:
21:02:04
21:02:04 self =
21:02:04 method = 'GET'
21:02:04 url = '/rests/data/transportpce-portmapping:network/nodes=ROADMA01/node-info'
21:02:04 body = None
21:02:04 headers = {'User-Agent': 'python-requests/2.32.5', 'Accept-Encoding': 'gzip, deflate', 'Accept': 'application/json', 'Connection': 'keep-alive', 'Content-Type': 'application/json', 'Authorization': 'Basic YWRtaW46YWRtaW4='}
21:02:04 retries = Retry(total=0, connect=None, read=False, redirect=None, status=None)
21:02:04 redirect = False, assert_same_host = False
21:02:04 timeout = Timeout(connect=30, read=30, total=None), pool_timeout = None
21:02:04 release_conn = False, chunked = False, body_pos = None, preload_content = False
21:02:04 decode_content = False, response_kw = {}
21:02:04 parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/rests/data/transportpce-portmapping:network/nodes=ROADMA01/node-info', query=None, fragment=None)
21:02:04 destination_scheme = None, conn = None, release_this_conn = True
21:02:04 http_tunnel_required = False, err = None, clean_exit = False
21:02:04
21:02:04     def urlopen(  # type: ignore[override]
21:02:04         self,
21:02:04         method: str,
21:02:04         url: str,
21:02:04         body: _TYPE_BODY | None = None,
21:02:04         headers: typing.Mapping[str, str] | None = None,
21:02:04         retries: Retry | bool | int | None = None,
21:02:04         redirect: bool = True,
21:02:04         assert_same_host: bool = True,
21:02:04         timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT,
21:02:04         pool_timeout: int | None = None,
21:02:04         release_conn: bool | None = None,
21:02:04         chunked: bool = False,
21:02:04         body_pos: _TYPE_BODY_POSITION | None = None,
21:02:04         preload_content: bool = True,
21:02:04         decode_content: bool = True,
21:02:04         **response_kw: typing.Any,
21:02:04     ) -> BaseHTTPResponse:
21:02:04         """
21:02:04         Get a connection from the pool and perform an HTTP request. This is the
21:02:04         lowest level call for making a request, so you'll need to specify all
21:02:04         the raw details.
21:02:04
21:02:04         .. note::
21:02:04
21:02:04            More commonly, it's appropriate to use a convenience method
21:02:04            such as :meth:`request`.
21:02:04
21:02:04         .. note::
21:02:04
21:02:04            `release_conn` will only behave as expected if
21:02:04            `preload_content=False` because we want to make
21:02:04            `preload_content=False` the default behaviour someday soon without
21:02:04            breaking backwards compatibility.
21:02:04
21:02:04         :param method:
21:02:04             HTTP request method (such as GET, POST, PUT, etc.)
21:02:04
21:02:04         :param url:
21:02:04             The URL to perform the request on.
21:02:04
21:02:04         :param body:
21:02:04             Data to send in the request body, either :class:`str`, :class:`bytes`,
21:02:04             an iterable of :class:`str`/:class:`bytes`, or a file-like object.
21:02:04
21:02:04         :param headers:
21:02:04             Dictionary of custom headers to send, such as User-Agent,
21:02:04             If-None-Match, etc. If None, pool headers are used. If provided,
21:02:04             these headers completely replace any pool-specific headers.
21:02:04
21:02:04         :param retries:
21:02:04             Configure the number of retries to allow before raising a
21:02:04             :class:`~urllib3.exceptions.MaxRetryError` exception.
21:02:04
21:02:04             If ``None`` (default) will retry 3 times, see ``Retry.DEFAULT``. Pass a
21:02:04             :class:`~urllib3.util.retry.Retry` object for fine-grained control
21:02:04             over different types of retries.
21:02:04             Pass an integer number to retry connection errors that many times,
21:02:04             but no other types of errors. Pass zero to never retry.
21:02:04
21:02:04             If ``False``, then retries are disabled and any exception is raised
21:02:04             immediately. Also, instead of raising a MaxRetryError on redirects,
21:02:04             the redirect response will be returned.
21:02:04
21:02:04         :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int.
21:02:04
21:02:04         :param redirect:
21:02:04             If True, automatically handle redirects (status codes 301, 302,
21:02:04             303, 307, 308). Each redirect counts as a retry. Disabling retries
21:02:04             will disable redirect, too.
21:02:04
21:02:04         :param assert_same_host:
21:02:04             If ``True``, will make sure that the host of the pool requests is
21:02:04             consistent else will raise HostChangedError. When ``False``, you can
21:02:04             use the pool on an HTTP proxy and request foreign hosts.
21:02:04
21:02:04         :param timeout:
21:02:04             If specified, overrides the default timeout for this one
21:02:04             request. It may be a float (in seconds) or an instance of
21:02:04             :class:`urllib3.util.Timeout`.
21:02:04
21:02:04         :param pool_timeout:
21:02:04             If set and the pool is set to block=True, then this method will
21:02:04             block for ``pool_timeout`` seconds and raise EmptyPoolError if no
21:02:04             connection is available within the time period.
21:02:04
21:02:04         :param bool preload_content:
21:02:04             If True, the response's body will be preloaded into memory.
21:02:04
21:02:04         :param bool decode_content:
21:02:04             If True, will attempt to decode the body based on the
21:02:04             'content-encoding' header.
21:02:04
21:02:04         :param release_conn:
21:02:04             If False, then the urlopen call will not release the connection
21:02:04             back into the pool once a response is received (but will release if
21:02:04             you read the entire contents of the response such as when
21:02:04             `preload_content=True`). This is useful if you're not preloading
21:02:04             the response's content immediately. You will need to call
21:02:04             ``r.release_conn()`` on the response ``r`` to return the connection
21:02:04             back into the pool. If None, it takes the value of ``preload_content``
21:02:04             which defaults to ``True``.
21:02:04
21:02:04         :param bool chunked:
21:02:04             If True, urllib3 will send the body using chunked transfer
21:02:04             encoding. Otherwise, urllib3 will send the body using the standard
21:02:04             content-length form. Defaults to False.
21:02:04
21:02:04         :param int body_pos:
21:02:04             Position to seek to in file-like body in the event of a retry or
21:02:04             redirect. Typically this won't need to be set because urllib3 will
21:02:04             auto-populate the value when needed.
21:02:04         """
21:02:04         parsed_url = parse_url(url)
21:02:04         destination_scheme = parsed_url.scheme
21:02:04
21:02:04         if headers is None:
21:02:04             headers = self.headers
21:02:04
21:02:04         if not isinstance(retries, Retry):
21:02:04             retries = Retry.from_int(retries, redirect=redirect, default=self.retries)
21:02:04
21:02:04         if release_conn is None:
21:02:04             release_conn = preload_content
21:02:04
21:02:04         # Check host
21:02:04         if assert_same_host and not self.is_same_host(url):
21:02:04             raise HostChangedError(self, url, retries)
21:02:04
21:02:04         # Ensure that the URL we're connecting to is properly encoded
21:02:04         if url.startswith("/"):
21:02:04             url = to_str(_encode_target(url))
21:02:04         else:
21:02:04             url = to_str(parsed_url.url)
21:02:04
21:02:04         conn = None
21:02:04
21:02:04         # Track whether `conn` needs to be released before
21:02:04         # returning/raising/recursing. Update this variable if necessary, and
21:02:04         # leave `release_conn` constant throughout the function. That way, if
21:02:04         # the function recurses, the original value of `release_conn` will be
21:02:04         # passed down into the recursive call, and its value will be respected.
21:02:04         #
21:02:04         # See issue #651 [1] for details.
21:02:04         #
21:02:04         # [1]
21:02:04         release_this_conn = release_conn
21:02:04
21:02:04         http_tunnel_required = connection_requires_http_tunnel(
21:02:04             self.proxy, self.proxy_config, destination_scheme
21:02:04         )
21:02:04
21:02:04         # Merge the proxy headers. Only done when not using HTTP CONNECT. We
21:02:04         # have to copy the headers dict so we can safely change it without those
21:02:04         # changes being reflected in anyone else's copy.
21:02:04         if not http_tunnel_required:
21:02:04             headers = headers.copy()  # type: ignore[attr-defined]
21:02:04             headers.update(self.proxy_headers)  # type: ignore[union-attr]
21:02:04
21:02:04         # Must keep the exception bound to a separate variable or else Python 3
21:02:04         # complains about UnboundLocalError.
21:02:04         err = None
21:02:04
21:02:04         # Keep track of whether we cleanly exited the except block. This
21:02:04         # ensures we do proper cleanup in finally.
21:02:04         clean_exit = False
21:02:04
21:02:04         # Rewind body position, if needed. Record current position
21:02:04         # for future rewinds in the event of a redirect/retry.
21:02:04         body_pos = set_file_position(body, body_pos)
21:02:04
21:02:04         try:
21:02:04             # Request a connection from the queue.
21:02:04             timeout_obj = self._get_timeout(timeout)
21:02:04             conn = self._get_conn(timeout=pool_timeout)
21:02:04
21:02:04             conn.timeout = timeout_obj.connect_timeout  # type: ignore[assignment]
21:02:04
21:02:04             # Is this a closed/new connection that requires CONNECT tunnelling?
21:02:04             if self.proxy is not None and http_tunnel_required and conn.is_closed:
21:02:04                 try:
21:02:04                     self._prepare_proxy(conn)
21:02:04                 except (BaseSSLError, OSError, SocketTimeout) as e:
21:02:04                     self._raise_timeout(
21:02:04                         err=e, url=self.proxy.url, timeout_value=conn.timeout
21:02:04                     )
21:02:04                     raise
21:02:04
21:02:04             # If we're going to release the connection in ``finally:``, then
21:02:04             # the response doesn't need to know about the connection. Otherwise
21:02:04             # it will also try to release it and we'll have a double-release
21:02:04             # mess.
21:02:04             response_conn = conn if not release_conn else None
21:02:04
21:02:04             # Make the request on the HTTPConnection object
21:02:04 >           response = self._make_request(
21:02:04                 conn,
21:02:04                 method,
21:02:04                 url,
21:02:04                 timeout=timeout_obj,
21:02:04                 body=body,
21:02:04                 headers=headers,
21:02:04                 chunked=chunked,
21:02:04                 retries=retries,
21:02:04                 response_conn=response_conn,
21:02:04                 preload_content=preload_content,
21:02:04                 decode_content=decode_content,
21:02:04                 **response_kw,
21:02:04             )
21:02:04
21:02:04 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connectionpool.py:787:
21:02:04 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
21:02:04 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connectionpool.py:493: in _make_request
21:02:04     conn.request(
21:02:04 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:500: in request
21:02:04     self.endheaders()
21:02:04 /opt/pyenv/versions/3.11.10/lib/python3.11/http/client.py:1298: in endheaders
21:02:04     self._send_output(message_body, encode_chunked=encode_chunked)
21:02:04 /opt/pyenv/versions/3.11.10/lib/python3.11/http/client.py:1058: in _send_output
21:02:04     self.send(msg)
21:02:04 /opt/pyenv/versions/3.11.10/lib/python3.11/http/client.py:996: in send
21:02:04     self.connect()
21:02:04 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:331: in connect
21:02:04     self.sock = self._new_conn()
21:02:04                 ^^^^^^^^^^^^^^^^
21:02:04 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
21:02:04
21:02:04 self =
21:02:04
21:02:04     def _new_conn(self) -> socket.socket:
21:02:04         """Establish a socket connection and set nodelay settings on it.
21:02:04
21:02:04         :return: New socket connection.
21:02:04         """
21:02:04         try:
21:02:04             sock = connection.create_connection(
21:02:04                 (self._dns_host, self.port),
21:02:04                 self.timeout,
21:02:04                 source_address=self.source_address,
21:02:04                 socket_options=self.socket_options,
21:02:04             )
21:02:04         except socket.gaierror as e:
21:02:04             raise NameResolutionError(self.host, self, e) from e
21:02:04         except SocketTimeout as e:
21:02:04             raise ConnectTimeoutError(
21:02:04                 self,
21:02:04                 f"Connection to {self.host} timed out. (connect timeout={self.timeout})",
21:02:04             ) from e
21:02:04
21:02:04         except OSError as e:
21:02:04 >           raise NewConnectionError(
21:02:04                 self, f"Failed to establish a new connection: {e}"
21:02:04             ) from e
21:02:04 E           urllib3.exceptions.NewConnectionError: HTTPConnection(host='localhost', port=8191): Failed to establish a new connection: [Errno 111] Connection refused
21:02:04
21:02:04 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:219: NewConnectionError
21:02:04
21:02:04 The above exception was the direct cause of the following exception:
21:02:04
21:02:04 self =
21:02:04 request = , stream = False
21:02:04 timeout = Timeout(connect=30, read=30, total=None), verify = True, cert = None
21:02:04 proxies = OrderedDict()
21:02:04
21:02:04     def send(
21:02:04         self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None
21:02:04     ):
21:02:04         """Sends PreparedRequest object. Returns Response object.
21:02:04
21:02:04         :param request: The :class:`PreparedRequest <PreparedRequest>` being sent.
21:02:04         :param stream: (optional) Whether to stream the request content.
21:02:04         :param timeout: (optional) How long to wait for the server to send
21:02:04             data before giving up, as a float, or a :ref:`(connect timeout,
21:02:04             read timeout) <timeouts>` tuple.
21:02:04         :type timeout: float or tuple or urllib3 Timeout object
21:02:04         :param verify: (optional) Either a boolean, in which case it controls whether
21:02:04             we verify the server's TLS certificate, or a string, in which case it
21:02:04             must be a path to a CA bundle to use
21:02:04         :param cert: (optional) Any user-provided SSL certificate to be trusted.
21:02:04         :param proxies: (optional) The proxies dictionary to apply to the request.
21:02:04         :rtype: requests.Response
21:02:04         """
21:02:04
21:02:04         try:
21:02:04             conn = self.get_connection_with_tls_context(
21:02:04                 request, verify, proxies=proxies, cert=cert
21:02:04             )
21:02:04         except LocationValueError as e:
21:02:04             raise InvalidURL(e, request=request)
21:02:04
21:02:04         self.cert_verify(conn, request.url, verify, cert)
21:02:04         url = self.request_url(request, proxies)
21:02:04         self.add_headers(
21:02:04             request,
21:02:04             stream=stream,
21:02:04             timeout=timeout,
21:02:04             verify=verify,
21:02:04             cert=cert,
21:02:04             proxies=proxies,
21:02:04         )
21:02:04
21:02:04         chunked = not (request.body is None or "Content-Length" in request.headers)
21:02:04
21:02:04         if isinstance(timeout, tuple):
21:02:04             try:
21:02:04                 connect, read = timeout
21:02:04                 timeout = TimeoutSauce(connect=connect, read=read)
21:02:04             except ValueError:
21:02:04                 raise ValueError(
21:02:04                     f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, "
21:02:04                     f"or a single float to set both timeouts to the same value."
21:02:04                 )
21:02:04         elif isinstance(timeout, TimeoutSauce):
21:02:04             pass
21:02:04         else:
21:02:04             timeout = TimeoutSauce(connect=timeout, read=timeout)
21:02:04
21:02:04         try:
21:02:04 >           resp = conn.urlopen(
21:02:04                 method=request.method,
21:02:04                 url=url,
21:02:04                 body=request.body,
21:02:04                 headers=request.headers,
21:02:04                 redirect=False,
21:02:04                 assert_same_host=False,
21:02:04                 preload_content=False,
21:02:04                 decode_content=False,
21:02:04                 retries=self.max_retries,
21:02:04                 timeout=timeout,
21:02:04                 chunked=chunked,
21:02:04             )
21:02:04
21:02:04 ../.tox/tests121/lib/python3.11/site-packages/requests/adapters.py:644:
21:02:04 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
21:02:04 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connectionpool.py:841: in urlopen
21:02:04     retries = retries.increment(
21:02:04 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
21:02:04
21:02:04 self = Retry(total=0, connect=None, read=False, redirect=None, status=None)
21:02:04 method = 'GET'
21:02:04 url = '/rests/data/transportpce-portmapping:network/nodes=ROADMA01/node-info'
21:02:04 response = None
21:02:04 error = NewConnectionError("HTTPConnection(host='localhost', port=8191): Failed to establish a new connection: [Errno 111] Connection refused")
21:02:04 _pool =
21:02:04 _stacktrace =
21:02:04
21:02:04     def increment(
21:02:04         self,
21:02:04         method: str | None = None,
21:02:04         url: str | None = None,
21:02:04         response: BaseHTTPResponse | None = None,
21:02:04         error: Exception | None = None,
21:02:04         _pool: ConnectionPool | None = None,
21:02:04         _stacktrace: TracebackType | None = None,
21:02:04     ) -> Self:
21:02:04         """Return a new Retry object with incremented retry counters.
21:02:04
21:02:04         :param response: A response object, or None, if the server did not
21:02:04             return a response.
21:02:04         :type response: :class:`~urllib3.response.BaseHTTPResponse`
21:02:04         :param Exception error: An error encountered during the request, or
21:02:04             None if the response was received successfully.
21:02:04
21:02:04         :return: A new ``Retry`` object.
21:02:04         """
21:02:04         if self.total is False and error:
21:02:04             # Disabled, indicate to re-raise the error.
21:02:04             raise reraise(type(error), error, _stacktrace)
21:02:04
21:02:04         total = self.total
21:02:04         if total is not None:
21:02:04             total -= 1
21:02:04
21:02:04         connect = self.connect
21:02:04         read = self.read
21:02:04         redirect = self.redirect
21:02:04         status_count = self.status
21:02:04         other = self.other
21:02:04         cause = "unknown"
21:02:04         status = None
21:02:04         redirect_location = None
21:02:04
21:02:04         if error and self._is_connection_error(error):
21:02:04             # Connect retry?
21:02:04             if connect is False:
21:02:04                 raise reraise(type(error), error, _stacktrace)
21:02:04             elif connect is not None:
21:02:04                 connect -= 1
21:02:04
21:02:04         elif error and self._is_read_error(error):
21:02:04             # Read retry?
21:02:04             if read is False or method is None or not self._is_method_retryable(method):
21:02:04                 raise reraise(type(error), error, _stacktrace)
21:02:04             elif read is not None:
21:02:04                 read -= 1
21:02:04
21:02:04         elif error:
21:02:04             # Other retry?
21:02:04             if other is not None:
21:02:04                 other -= 1
21:02:04
21:02:04         elif response and response.get_redirect_location():
21:02:04             # Redirect retry?
21:02:04             if redirect is not None:
21:02:04                 redirect -= 1
21:02:04             cause = "too many redirects"
21:02:04             response_redirect_location = response.get_redirect_location()
21:02:04             if response_redirect_location:
21:02:04                 redirect_location = response_redirect_location
21:02:04             status = response.status
21:02:04
21:02:04         else:
21:02:04             # Incrementing because of a server error like a 500 in
21:02:04             # status_forcelist and the given method is in the allowed_methods
21:02:04             cause = ResponseError.GENERIC_ERROR
21:02:04             if response and response.status:
21:02:04                 if status_count is not None:
21:02:04                     status_count -= 1
21:02:04                 cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status)
21:02:04                 status = response.status
21:02:04
21:02:04         history = self.history + (
21:02:04             RequestHistory(method, url, error, status, redirect_location),
21:02:04         )
21:02:04
21:02:04         new_retry = self.new(
21:02:04             total=total,
21:02:04             connect=connect,
21:02:04             read=read,
21:02:04             redirect=redirect,
21:02:04             status=status_count,
21:02:04             other=other,
21:02:04             history=history,
21:02:04         )
21:02:04
21:02:04         if new_retry.is_exhausted():
21:02:04             reason = error or ResponseError(cause)
21:02:04 >           raise MaxRetryError(_pool, url, reason) from reason  # type: ignore[arg-type]
21:02:04             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
21:02:04 E           urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=8191): Max retries exceeded with url: /rests/data/transportpce-portmapping:network/nodes=ROADMA01/node-info (Caused by NewConnectionError("HTTPConnection(host='localhost', port=8191): Failed to establish a new connection: [Errno 111] Connection refused"))
21:02:04
21:02:04 ../.tox/tests121/lib/python3.11/site-packages/urllib3/util/retry.py:535: MaxRetryError
21:02:04
21:02:04 During handling of the above exception, another exception occurred:
21:02:04
21:02:04 self =
21:02:04
21:02:04     def test_21_rdm_device_not_connected(self):
21:02:04 >       response = test_utils.get_portmapping_node_attr("ROADMA01", "node-info", None)
21:02:04                    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
21:02:04
21:02:04 transportpce_tests/1.2.1/test01_portmapping.py:225:
21:02:04 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
21:02:04 transportpce_tests/common/test_utils.py:519: in get_portmapping_node_attr
21:02:04     response = get_request(target_url)
21:02:04                ^^^^^^^^^^^^^^^^^^^^^^^
21:02:04 transportpce_tests/common/test_utils.py:117: in get_request
21:02:04     return requests.request(
21:02:04 ../.tox/tests121/lib/python3.11/site-packages/requests/api.py:59: in request
21:02:04     return session.request(method=method, url=url, **kwargs)
21:02:04            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
21:02:04 ../.tox/tests121/lib/python3.11/site-packages/requests/sessions.py:589: in request
21:02:04     resp = self.send(prep, **send_kwargs)
21:02:04            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
21:02:04 ../.tox/tests121/lib/python3.11/site-packages/requests/sessions.py:703: in send
21:02:04     r = adapter.send(request, **kwargs)
21:02:04         ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
21:02:04 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
21:02:04
21:02:04 self =
21:02:04 request = , stream = False
21:02:04 timeout = Timeout(connect=30, read=30, total=None), verify = True, cert = None
21:02:04 proxies = OrderedDict()
21:02:04
21:02:04     def send(
21:02:04         self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None
21:02:04     ):
21:02:04         """Sends PreparedRequest object. Returns Response object.
21:02:04
21:02:04         :param request: The :class:`PreparedRequest <PreparedRequest>` being sent.
21:02:04         :param stream: (optional) Whether to stream the request content.
21:02:04         :param timeout: (optional) How long to wait for the server to send
21:02:04             data before giving up, as a float, or a :ref:`(connect timeout,
21:02:04             read timeout) <timeouts>` tuple.
21:02:04         :type timeout: float or tuple or urllib3 Timeout object
21:02:04         :param verify: (optional) Either a boolean, in which case it controls whether
21:02:04             we verify the server's TLS certificate, or a string, in which case it
21:02:04             must be a path to a CA bundle to use
21:02:04         :param cert: (optional) Any user-provided SSL certificate to be trusted.
21:02:04         :param proxies: (optional) The proxies dictionary to apply to the request.
21:02:04         :rtype: requests.Response
21:02:04         """
21:02:04 
21:02:04         try:
21:02:04             conn = self.get_connection_with_tls_context(
21:02:04                 request, verify, proxies=proxies, cert=cert
21:02:04             )
21:02:04         except LocationValueError as e:
21:02:04             raise InvalidURL(e, request=request)
21:02:04 
21:02:04         self.cert_verify(conn, request.url, verify, cert)
21:02:04         url = self.request_url(request, proxies)
21:02:04         self.add_headers(
21:02:04             request,
21:02:04             stream=stream,
21:02:04             timeout=timeout,
21:02:04             verify=verify,
21:02:04             cert=cert,
21:02:04             proxies=proxies,
21:02:04         )
21:02:04 
21:02:04         chunked = not (request.body is None or "Content-Length" in request.headers)
21:02:04 
21:02:04         if isinstance(timeout, tuple):
21:02:04             try:
21:02:04                 connect, read = timeout
21:02:04                 timeout = TimeoutSauce(connect=connect, read=read)
21:02:04             except ValueError:
21:02:04                 raise ValueError(
21:02:04                     f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, "
21:02:04                     f"or a single float to set both timeouts to the same value."
21:02:04                 )
21:02:04         elif isinstance(timeout, TimeoutSauce):
21:02:04             pass
21:02:04         else:
21:02:04             timeout = TimeoutSauce(connect=timeout, read=timeout)
21:02:04 
21:02:04         try:
21:02:04             resp = conn.urlopen(
21:02:04                 method=request.method,
21:02:04                 url=url,
21:02:04                 body=request.body,
21:02:04                 headers=request.headers,
21:02:04                 redirect=False,
21:02:04                 assert_same_host=False,
21:02:04                 preload_content=False,
21:02:04                 decode_content=False,
21:02:04                 retries=self.max_retries,
21:02:04                 timeout=timeout,
21:02:04                 chunked=chunked,
21:02:04             )
21:02:04 
21:02:04         except (ProtocolError, OSError) as err:
21:02:04             raise ConnectionError(err, request=request)
21:02:04 
21:02:04         except MaxRetryError as e:
21:02:04             if isinstance(e.reason, ConnectTimeoutError):
21:02:04                 # TODO: Remove this in 3.0.0: see #2811
21:02:04                 if not isinstance(e.reason, NewConnectionError):
21:02:04                     raise ConnectTimeout(e, request=request)
21:02:04 
21:02:04             if isinstance(e.reason, ResponseError):
21:02:04                 raise RetryError(e, request=request)
21:02:04 
21:02:04             if isinstance(e.reason, _ProxyError):
21:02:04                 raise ProxyError(e, request=request)
21:02:04 
21:02:04             if isinstance(e.reason, _SSLError):
21:02:04                 # This branch is for urllib3 v1.22 and later.
21:02:04                 raise SSLError(e, request=request)
21:02:04 
21:02:04 >           raise ConnectionError(e, request=request)
21:02:04 E           requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=8191): Max retries exceeded with url: /rests/data/transportpce-portmapping:network/nodes=ROADMA01/node-info (Caused by NewConnectionError("HTTPConnection(host='localhost', port=8191): Failed to establish a new connection: [Errno 111] Connection refused"))
21:02:04 
21:02:04 ../.tox/tests121/lib/python3.11/site-packages/requests/adapters.py:677: ConnectionError
21:02:04 ----------------------------- Captured stdout call -----------------------------
21:02:04 execution of test_21_rdm_device_not_connected
21:02:04 --------------------------- Captured stdout teardown ---------------------------
21:02:04 all processes killed
21:02:04 ODL log file stored
21:02:04 =========================== short test summary info ============================
21:02:04 FAILED transportpce_tests/1.2.1/test01_portmapping.py::TestTransportPCEPortmapping::test_02_rdm_device_connected
21:02:04 FAILED transportpce_tests/1.2.1/test01_portmapping.py::TestTransportPCEPortmapping::test_03_rdm_portmapping_info
21:02:04 FAILED transportpce_tests/1.2.1/test01_portmapping.py::TestTransportPCEPortmapping::test_04_rdm_portmapping_DEG1_TTP_TXRX
21:02:04 FAILED transportpce_tests/1.2.1/test01_portmapping.py::TestTransportPCEPortmapping::test_05_rdm_portmapping_SRG1_PP7_TXRX
21:02:04 FAILED transportpce_tests/1.2.1/test01_portmapping.py::TestTransportPCEPortmapping::test_06_rdm_portmapping_SRG3_PP1_TXRX
21:02:04 FAILED transportpce_tests/1.2.1/test01_portmapping.py::TestTransportPCEPortmapping::test_07_xpdr_device_connection
21:02:04 FAILED transportpce_tests/1.2.1/test01_portmapping.py::TestTransportPCEPortmapping::test_08_xpdr_device_connected
21:02:04 FAILED transportpce_tests/1.2.1/test01_portmapping.py::TestTransportPCEPortmapping::test_09_xpdr_portmapping_info
21:02:04 FAILED transportpce_tests/1.2.1/test01_portmapping.py::TestTransportPCEPortmapping::test_10_xpdr_portmapping_NETWORK1
21:02:04 FAILED transportpce_tests/1.2.1/test01_portmapping.py::TestTransportPCEPortmapping::test_11_xpdr_portmapping_NETWORK2
21:02:04 FAILED transportpce_tests/1.2.1/test01_portmapping.py::TestTransportPCEPortmapping::test_12_xpdr_portmapping_CLIENT1
21:02:04 FAILED transportpce_tests/1.2.1/test01_portmapping.py::TestTransportPCEPortmapping::test_13_xpdr_portmapping_CLIENT2
21:02:04 FAILED transportpce_tests/1.2.1/test01_portmapping.py::TestTransportPCEPortmapping::test_14_xpdr_portmapping_CLIENT3
21:02:04 FAILED transportpce_tests/1.2.1/test01_portmapping.py::TestTransportPCEPortmapping::test_15_xpdr_portmapping_CLIENT4
21:02:04 FAILED transportpce_tests/1.2.1/test01_portmapping.py::TestTransportPCEPortmapping::test_16_xpdr_device_disconnection
21:02:04 FAILED transportpce_tests/1.2.1/test01_portmapping.py::TestTransportPCEPortmapping::test_17_xpdr_device_disconnected
21:02:04 FAILED transportpce_tests/1.2.1/test01_portmapping.py::TestTransportPCEPortmapping::test_18_xpdr_device_not_connected
21:02:04 FAILED transportpce_tests/1.2.1/test01_portmapping.py::TestTransportPCEPortmapping::test_19_rdm_device_disconnection
21:02:04 FAILED transportpce_tests/1.2.1/test01_portmapping.py::TestTransportPCEPortmapping::test_20_rdm_device_disconnected
21:02:04 FAILED transportpce_tests/1.2.1/test01_portmapping.py::TestTransportPCEPortmapping::test_21_rdm_device_not_connected
21:02:04 20 failed, 1 passed in 281.70s (0:04:41)
21:02:04 tests200: FAIL ✖ in 1 minute 40.98 seconds
21:02:04 tests71: OK ✔ in 7 minutes 41.02 seconds
21:02:04 tests121: exit 1 (282.27 seconds) /w/workspace/transportpce-tox-verify-transportpce-master/tests> ./launch_tests.sh 1.2.1 pid=8478
21:04:21 ....                                                                     [100%]
21:05:34 51 passed in 492.80s (0:08:12)
21:05:34 pytest -q transportpce_tests/tapi/test02_full_topology.py
21:06:30 ....................................                                     [100%]
21:11:08 36 passed in 333.23s (0:05:33)
21:11:08 pytest -q transportpce_tests/tapi/test03_tapi_device_change_notifications.py
21:12:00 ....................................................................... [100%]
21:16:32 71 passed in 323.74s (0:05:23)
21:16:32 pytest -q transportpce_tests/tapi/test04_topo_extension.py
21:17:25 ...................                                                      [100%]
21:18:57 19 passed in 144.47s (0:02:24)
21:18:57 pytest -q transportpce_tests/tapi/test05_pce_tapi.py
21:21:03 ......................                                                   [100%]
21:26:39 22 passed in 462.30s (0:07:42)
21:26:39 tests121: FAIL ✖ in 4 minutes 51.62 seconds
21:26:39 tests_tapi: OK ✔ in 29 minutes 27.14 seconds
21:26:39 tests221: install_deps> python -I -m pip install 'setuptools>=7.0' -r /w/workspace/transportpce-tox-verify-transportpce-master/tests/requirements.txt -r /w/workspace/transportpce-tox-verify-transportpce-master/tests/test-requirements.txt
21:26:46 tests221: freeze> python -m pip freeze --all
21:26:47 tests221: bcrypt==5.0.0,certifi==2026.2.25,cffi==2.0.0,charset-normalizer==3.4.4,cryptography==46.0.5,dict2xml==1.7.8,idna==3.11,iniconfig==2.3.0,invoke==2.2.1,lxml==6.0.2,netconf-client==3.5.0,packaging==26.0,paramiko==4.0.0,pip==26.0.1,pluggy==1.6.0,psutil==7.2.2,pycparser==3.0,Pygments==2.19.2,PyNaCl==1.6.2,pytest==9.0.2,requests==2.32.5,setuptools==82.0.0,urllib3==2.6.3
21:26:47 tests221: commands[0] /w/workspace/transportpce-tox-verify-transportpce-master/tests> ./launch_tests.sh 2.2.1
21:26:47 using environment variables from ./karaf221.env
21:26:47 pytest -q transportpce_tests/2.2.1/test01_portmapping.py
21:27:24 ...................................                                      [100%]
21:28:04 35 passed in 77.28s (0:01:17)
21:28:04 pytest -q transportpce_tests/2.2.1/test02_topo_portmapping.py
21:28:37 ......                                                                   [100%]
21:28:51 6 passed in 46.22s
21:28:51 pytest -q transportpce_tests/2.2.1/test03_topology.py
21:29:36 ............................................                             [100%]
21:31:12 44 passed in 141.41s (0:02:21)
21:31:13 pytest -q transportpce_tests/2.2.1/test04_otn_topology.py
21:31:51 ............                                                             [100%]
21:32:15 12 passed in 61.97s (0:01:01)
21:32:15 pytest -q transportpce_tests/2.2.1/test05_flex_grid.py
21:32:43 ................                                                         [100%]
21:34:12 16 passed in 116.67s (0:01:56)
21:34:12 pytest -q transportpce_tests/2.2.1/test06_renderer_service_path_nominal.py
21:34:43 ...............................                                          [100%]
21:34:51 31 passed in 38.44s
21:34:51 pytest -q transportpce_tests/2.2.1/test07_otn_renderer.py
21:35:28 ..........................                                               [100%]
21:36:24 26 passed in 93.12s (0:01:33)
21:36:24 pytest -q transportpce_tests/2.2.1/test08_otn_sh_renderer.py
21:37:03 ......................                                                   [100%]
21:38:07 22 passed in 102.44s (0:01:42)
21:38:07 pytest -q transportpce_tests/2.2.1/test09_olm.py
21:38:50 ........................................                                 [100%]
21:44:14 40 passed in 367.00s (0:06:06)
21:44:14 pytest -q transportpce_tests/2.2.1/test11_otn_end2end.py
21:44:59 ........................................................................ [ 74%]
21:50:37 .........................                                                [100%]
21:52:29 97 passed in 494.63s (0:08:14)
21:52:29 pytest -q transportpce_tests/2.2.1/test12_end2end.py
21:53:10 ......................................................                   [100%]
21:59:59 54 passed in 449.32s (0:07:29)
21:59:59 pytest -q transportpce_tests/2.2.1/test14_otn_switch_end2end.py
22:00:55 ........................................................................ [ 71%]
22:06:05 .............................                                            [100%]
22:11:14 101 passed in 674.96s (0:11:14)
22:11:14 pytest -q transportpce_tests/2.2.1/test15_otn_end2end_with_intermediate_switch.py
22:12:08 ........................................................................ [ 67%]
22:17:54 ...................................                                      [100%]
22:21:15 107 passed in 600.79s (0:10:00)
22:21:15 pytest -q transportpce_tests/2.2.1/test16_freq_end2end.py
22:21:57 .............................................                            [100%]
22:24:35 45 passed in 199.73s (0:03:19)
22:24:35 tests221: OK ✔ in 57 minutes 55.79 seconds
22:24:35 tests_hybrid: install_deps> python -I -m pip install 'setuptools>=7.0' -r /w/workspace/transportpce-tox-verify-transportpce-master/tests/requirements.txt -r /w/workspace/transportpce-tox-verify-transportpce-master/tests/test-requirements.txt
22:24:42 tests_hybrid: freeze> python -m pip freeze --all
22:24:42 tests_hybrid: bcrypt==5.0.0,certifi==2026.2.25,cffi==2.0.0,charset-normalizer==3.4.4,cryptography==46.0.5,dict2xml==1.7.8,idna==3.11,iniconfig==2.3.0,invoke==2.2.1,lxml==6.0.2,netconf-client==3.5.0,packaging==26.0,paramiko==4.0.0,pip==26.0.1,pluggy==1.6.0,psutil==7.2.2,pycparser==3.0,Pygments==2.19.2,PyNaCl==1.6.2,pytest==9.0.2,requests==2.32.5,setuptools==82.0.0,urllib3==2.6.3
22:24:42 tests_hybrid: commands[0] /w/workspace/transportpce-tox-verify-transportpce-master/tests> ./launch_tests.sh hybrid
22:24:42 using environment variables from ./karaf221.env
22:24:42 pytest -q transportpce_tests/hybrid/test01_device_change_notifications.py
22:25:22 ...................................................                      [100%]
22:30:09 51 passed in 326.63s (0:05:26)
22:30:09 pytest -q transportpce_tests/hybrid/test02_B100G_end2end.py
22:30:51 ........................................................................ [ 66%]
22:35:11 .....................................                                    [100%]
22:37:17 109 passed in 427.96s (0:07:07)
22:37:17 pytest -q transportpce_tests/hybrid/test03_autonomous_reroute.py
22:38:04 .....................................................                    [100%]
22:44:37 53 passed in 439.26s (0:07:19)
22:44:37 buildcontroller: OK (113.57=setup[9.06]+cmd[104.51] seconds)
22:44:37 sims: OK (21.22=setup[16.28]+cmd[4.95] seconds)
22:44:37 build_karaf_tests121: OK (75.79=setup[8.34]+cmd[67.44] seconds)
22:44:37 testsPCE: FAIL code 1 (265.14=setup[66.89]+cmd[198.25] seconds)
22:44:37 tests121: FAIL code 1 (291.62=setup[9.35]+cmd[282.27] seconds)
22:44:37 build_karaf_tests221: OK (74.15=setup[8.29]+cmd[65.86] seconds)
22:44:37 tests_tapi: OK (1767.14=setup[8.91]+cmd[1758.22] seconds)
22:44:37 tests221: OK (3475.79=setup[7.44]+cmd[3468.36] seconds)
22:44:37 build_karaf_tests71: OK (74.13=setup[8.33]+cmd[65.79] seconds)
22:44:37 tests71: OK (461.02=setup[8.05]+cmd[452.97] seconds)
22:44:37 build_karaf_tests200: OK (74.16=setup[8.36]+cmd[65.81] seconds)
22:44:37 tests200: FAIL code 1 (100.98=setup[9.21]+cmd[91.77] seconds)
22:44:37 tests_hybrid: OK (1201.98=setup[7.27]+cmd[1194.71] seconds)
22:44:37 buildlighty: OK (47.56=setup[15.87]+cmd[31.69] seconds)
22:44:37 docs: OK (33.42=setup[28.53]+cmd[4.89] seconds)
22:44:37 docs-linkcheck: OK (35.56=setup[30.11]+cmd[5.44] seconds)
22:44:37 checkbashisms: OK (3.34=setup[2.04]+cmd[0.01,0.05,1.24] seconds)
22:44:37 pre-commit: OK (52.39=setup[3.41]+cmd[0.01,0.01,40.17,8.79] seconds)
22:44:37 pylint: OK (34.77=setup[6.05]+cmd[28.72] seconds)
22:44:37 evaluation failed :( (6899.46 seconds)
22:44:37 + tox_status=1
22:44:37 + echo '---> Completed tox runs'
22:44:37 ---> Completed tox runs
22:44:37 + for i in .tox/*/log
22:44:37 ++ echo .tox/build_karaf_tests121/log
22:44:37 ++ awk -F/ '{print $2}'
22:44:37 + tox_env=build_karaf_tests121
22:44:37 + cp -r .tox/build_karaf_tests121/log /w/workspace/transportpce-tox-verify-transportpce-master/archives/tox/build_karaf_tests121
22:44:37 + for i in .tox/*/log
22:44:37 ++ echo .tox/build_karaf_tests200/log
22:44:37 ++ awk -F/ '{print $2}'
22:44:37 + tox_env=build_karaf_tests200
22:44:37 + cp -r .tox/build_karaf_tests200/log /w/workspace/transportpce-tox-verify-transportpce-master/archives/tox/build_karaf_tests200
22:44:37 + for i in .tox/*/log
22:44:37 ++ echo .tox/build_karaf_tests221/log
22:44:37 ++ awk -F/ '{print $2}'
22:44:37 + tox_env=build_karaf_tests221
22:44:37 + cp -r .tox/build_karaf_tests221/log /w/workspace/transportpce-tox-verify-transportpce-master/archives/tox/build_karaf_tests221
22:44:37 + for i in .tox/*/log
22:44:37 ++ echo .tox/build_karaf_tests71/log
22:44:37 ++ awk -F/ '{print $2}'
22:44:37 + tox_env=build_karaf_tests71
22:44:37 + cp -r .tox/build_karaf_tests71/log /w/workspace/transportpce-tox-verify-transportpce-master/archives/tox/build_karaf_tests71
22:44:37 + for i in .tox/*/log
22:44:37 ++ echo .tox/buildcontroller/log
22:44:37 ++ awk -F/ '{print $2}'
22:44:37 + tox_env=buildcontroller
22:44:37 + cp -r .tox/buildcontroller/log /w/workspace/transportpce-tox-verify-transportpce-master/archives/tox/buildcontroller
22:44:37 + for i in .tox/*/log
22:44:37 ++ echo .tox/buildlighty/log
22:44:37 ++ awk -F/ '{print $2}'
22:44:37 + tox_env=buildlighty
22:44:37 + cp -r .tox/buildlighty/log /w/workspace/transportpce-tox-verify-transportpce-master/archives/tox/buildlighty
22:44:37 + for i in .tox/*/log
22:44:37 ++ echo .tox/checkbashisms/log
22:44:37 ++ awk -F/ '{print $2}'
22:44:37 + tox_env=checkbashisms
22:44:37 + cp -r .tox/checkbashisms/log /w/workspace/transportpce-tox-verify-transportpce-master/archives/tox/checkbashisms
22:44:37 + for i in .tox/*/log
22:44:37 ++ echo .tox/docs-linkcheck/log
22:44:37 ++ awk -F/ '{print $2}'
22:44:37 + tox_env=docs-linkcheck
22:44:37 + cp -r .tox/docs-linkcheck/log /w/workspace/transportpce-tox-verify-transportpce-master/archives/tox/docs-linkcheck
22:44:37 + for i in .tox/*/log
22:44:37 ++ echo .tox/docs/log
22:44:37 ++ awk -F/ '{print $2}'
22:44:37 + tox_env=docs
22:44:37 + cp -r .tox/docs/log /w/workspace/transportpce-tox-verify-transportpce-master/archives/tox/docs
22:44:37 + for i in .tox/*/log
22:44:37 ++ echo .tox/pre-commit/log
22:44:37 ++ awk -F/ '{print $2}'
22:44:37 + tox_env=pre-commit
22:44:37 + cp -r .tox/pre-commit/log /w/workspace/transportpce-tox-verify-transportpce-master/archives/tox/pre-commit
22:44:37 + for i in .tox/*/log
22:44:37 ++ echo .tox/pylint/log
22:44:37 ++ awk -F/ '{print $2}'
22:44:37 + tox_env=pylint
22:44:37 + cp -r .tox/pylint/log /w/workspace/transportpce-tox-verify-transportpce-master/archives/tox/pylint
22:44:37 + for i in .tox/*/log
22:44:37 ++ echo .tox/sims/log
22:44:37 ++ awk -F/ '{print $2}'
22:44:37 + tox_env=sims
22:44:37 + cp -r .tox/sims/log /w/workspace/transportpce-tox-verify-transportpce-master/archives/tox/sims
22:44:37 + for i in .tox/*/log
22:44:37 ++ echo .tox/tests121/log
22:44:37 ++ awk -F/ '{print $2}'
22:44:37 + tox_env=tests121
22:44:37 + cp -r .tox/tests121/log /w/workspace/transportpce-tox-verify-transportpce-master/archives/tox/tests121
22:44:37 + for i in .tox/*/log
22:44:37 ++ echo .tox/tests200/log
22:44:37 ++ awk -F/ '{print $2}'
22:44:37 + tox_env=tests200
22:44:37 + cp -r .tox/tests200/log /w/workspace/transportpce-tox-verify-transportpce-master/archives/tox/tests200
22:44:37 + for i in .tox/*/log
22:44:37 ++ echo .tox/tests221/log
22:44:37 ++ awk -F/ '{print $2}'
22:44:37 + tox_env=tests221
22:44:37 + cp -r .tox/tests221/log /w/workspace/transportpce-tox-verify-transportpce-master/archives/tox/tests221
22:44:37 + for i in .tox/*/log
22:44:37 ++ echo .tox/tests71/log
22:44:37 ++ awk -F/ '{print $2}'
22:44:37 + tox_env=tests71
22:44:37 + cp -r .tox/tests71/log /w/workspace/transportpce-tox-verify-transportpce-master/archives/tox/tests71
22:44:37 + for i in .tox/*/log
22:44:37 ++ echo .tox/testsPCE/log
22:44:37 ++ awk -F/ '{print $2}'
22:44:37 + tox_env=testsPCE
22:44:37 + cp -r .tox/testsPCE/log /w/workspace/transportpce-tox-verify-transportpce-master/archives/tox/testsPCE
22:44:37 + for i in .tox/*/log
22:44:37 ++ awk -F/ '{print $2}'
22:44:37 ++ echo .tox/tests_hybrid/log
22:44:37 + tox_env=tests_hybrid
22:44:37 + cp -r .tox/tests_hybrid/log /w/workspace/transportpce-tox-verify-transportpce-master/archives/tox/tests_hybrid
22:44:37 + for i in .tox/*/log
22:44:37 ++ echo .tox/tests_tapi/log
22:44:37 ++ awk -F/ '{print $2}'
22:44:37 + tox_env=tests_tapi
22:44:37 + cp -r .tox/tests_tapi/log /w/workspace/transportpce-tox-verify-transportpce-master/archives/tox/tests_tapi
22:44:37 + DOC_DIR=docs/_build/html
22:44:37 + [[ -d docs/_build/html ]]
22:44:37 + echo '---> Archiving generated docs'
22:44:37 ---> Archiving generated docs
22:44:37 + mv docs/_build/html /w/workspace/transportpce-tox-verify-transportpce-master/archives/docs
22:44:37 + echo '---> tox-run.sh ends'
22:44:37 ---> tox-run.sh ends
22:44:37 + test 1 -eq 0
22:44:37 + exit 1
22:44:37 ++ '[' 1 = 1 ']'
22:44:37 ++ '[' -x /usr/bin/clear_console ']'
22:44:37 ++ /usr/bin/clear_console -q
22:44:37 Build step 'Execute shell' marked build as failure
22:44:37 $ ssh-agent -k
22:44:37 unset SSH_AUTH_SOCK;
22:44:37 unset SSH_AGENT_PID;
22:44:37 echo Agent pid 1577 killed;
22:44:38 [ssh-agent] Stopped.
22:44:38 [PostBuildScript] - [INFO] Executing post build scripts.
22:44:38 [transportpce-tox-verify-transportpce-master] $ /bin/bash /tmp/jenkins11637193937962911265.sh
22:44:38 ---> sysstat.sh
22:44:38 [transportpce-tox-verify-transportpce-master] $ /bin/bash /tmp/jenkins2383046107398331688.sh
22:44:38 ---> package-listing.sh
22:44:38 ++ facter osfamily
22:44:38 ++ tr '[:upper:]' '[:lower:]'
22:44:38 + OS_FAMILY=debian
22:44:38 + workspace=/w/workspace/transportpce-tox-verify-transportpce-master
22:44:38 + START_PACKAGES=/tmp/packages_start.txt
22:44:38 + END_PACKAGES=/tmp/packages_end.txt
22:44:38 + DIFF_PACKAGES=/tmp/packages_diff.txt
22:44:38 + PACKAGES=/tmp/packages_start.txt
22:44:38 + '[' /w/workspace/transportpce-tox-verify-transportpce-master ']'
22:44:38 + PACKAGES=/tmp/packages_end.txt
22:44:38 + case "${OS_FAMILY}" in
22:44:38 + dpkg -l
22:44:38 + grep '^ii'
22:44:38 + '[' -f /tmp/packages_start.txt ']'
22:44:38 + '[' -f /tmp/packages_end.txt ']'
22:44:38 + diff /tmp/packages_start.txt /tmp/packages_end.txt
22:44:38 + '[' /w/workspace/transportpce-tox-verify-transportpce-master ']'
22:44:38 + mkdir -p /w/workspace/transportpce-tox-verify-transportpce-master/archives/
22:44:38 + cp -f /tmp/packages_diff.txt /tmp/packages_end.txt /tmp/packages_start.txt /w/workspace/transportpce-tox-verify-transportpce-master/archives/
22:44:38 [transportpce-tox-verify-transportpce-master] $ /bin/bash /tmp/jenkins11393373638896197148.sh
22:44:38 ---> capture-instance-metadata.sh
22:44:39 Setup pyenv:
22:44:39   system
22:44:39   3.8.20
22:44:39   3.9.20
22:44:39   3.10.15
22:44:39 * 3.11.10 (set by /w/workspace/transportpce-tox-verify-transportpce-master/.python-version)
22:44:39 lf-activate-venv(): INFO: Reuse venv:/tmp/venv-dYDK from file:/tmp/.os_lf_venv
22:44:39 lf-activate-venv(): INFO: Installing base packages (pip, setuptools, virtualenv)
22:44:39 lf-activate-venv(): INFO: Attempting to install with network-safe options...
22:44:41 lf-activate-venv(): INFO: Base packages installed successfully
22:44:41 lf-activate-venv(): INFO: Installing additional packages: lftools
22:44:56 lf-activate-venv(): INFO: Adding /tmp/venv-dYDK/bin to PATH
22:44:56 INFO: Running in OpenStack, capturing instance metadata
22:44:57 [transportpce-tox-verify-transportpce-master] $ /bin/bash /tmp/jenkins12143847987672641142.sh
22:44:57 provisioning config files...
22:44:57 Could not find credentials [logs] for transportpce-tox-verify-transportpce-master #4514
22:44:57 copy managed file [jenkins-log-archives-settings] to file:/w/workspace/transportpce-tox-verify-transportpce-master@tmp/config13141367082420387480tmp
22:44:57 Regular expression run condition: Expression=[^.*logs-s3.*], Label=[odl-logs-s3-cloudfront-index]
22:44:57 Run condition [Regular expression match] enabling perform for step [Provide Configuration files]
22:44:57 provisioning config files...
22:44:57 copy managed file [jenkins-s3-log-ship] to file:/home/jenkins/.aws/credentials
22:44:57 [EnvInject] - Injecting environment variables from a build step.
22:44:57 [EnvInject] - Injecting as environment variables the properties content
22:44:57 SERVER_ID=logs
22:44:57 
22:44:57 [EnvInject] - Variables injected successfully.
22:44:57 [transportpce-tox-verify-transportpce-master] $ /bin/bash /tmp/jenkins2202014100930997968.sh
22:44:57 ---> create-netrc.sh
22:44:57 WARN: Log server credential not found.
22:44:57 [transportpce-tox-verify-transportpce-master] $ /bin/bash /tmp/jenkins600691196874140848.sh
22:44:57 ---> python-tools-install.sh
22:44:57 Setup pyenv:
22:44:57   system
22:44:57   3.8.20
22:44:57   3.9.20
22:44:57   3.10.15
22:44:57 * 3.11.10 (set by /w/workspace/transportpce-tox-verify-transportpce-master/.python-version)
22:44:57 lf-activate-venv(): INFO: Reuse venv:/tmp/venv-dYDK from file:/tmp/.os_lf_venv
22:44:57 lf-activate-venv(): INFO: Installing base packages (pip, setuptools, virtualenv)
22:44:57 lf-activate-venv(): INFO: Attempting to install with network-safe options...
22:44:59 lf-activate-venv(): INFO: Base packages installed successfully
22:44:59 lf-activate-venv(): INFO: Installing additional packages: lftools
22:45:08 lf-activate-venv(): INFO: Adding /tmp/venv-dYDK/bin to PATH
22:45:08 [transportpce-tox-verify-transportpce-master] $ /bin/bash /tmp/jenkins12923631708909589218.sh
22:45:08 ---> sudo-logs.sh
22:45:08 Archiving 'sudo' log..
22:45:08 [transportpce-tox-verify-transportpce-master] $ /bin/bash /tmp/jenkins18196029585499786216.sh
22:45:08 ---> job-cost.sh
22:45:08 INFO: Activating Python virtual environment...
22:45:08 Setup pyenv:
22:45:09   system
22:45:09   3.8.20
22:45:09   3.9.20
22:45:09   3.10.15
22:45:09 * 3.11.10 (set by /w/workspace/transportpce-tox-verify-transportpce-master/.python-version)
22:45:09 lf-activate-venv(): INFO: Reuse venv:/tmp/venv-dYDK from file:/tmp/.os_lf_venv
22:45:09 lf-activate-venv(): INFO: Installing base packages (pip, setuptools, virtualenv)
22:45:09 lf-activate-venv(): INFO: Attempting to install with network-safe options...
22:45:10 lf-activate-venv(): INFO: Base packages installed successfully
22:45:10 lf-activate-venv(): INFO: Installing additional packages: zipp==1.1.0 python-openstackclient urllib3~=1.26.15
22:45:17 lf-activate-venv(): INFO: Adding /tmp/venv-dYDK/bin to PATH
22:45:17 INFO: No stack-cost file found
22:45:17 INFO: Instance uptime: 7076s
22:45:17 INFO: Fetching instance metadata (attempt 1 of 3)...
22:45:17 DEBUG: URL: http://169.254.169.254/latest/meta-data/instance-type
22:45:17 INFO: Successfully fetched instance metadata
22:45:17 INFO: Instance type: v3-standard-4
22:45:17 INFO: Retrieving pricing info for: v3-standard-4
22:45:17 INFO: Fetching Vexxhost pricing API (attempt 1 of 3)...
22:45:17 DEBUG: URL: https://pricing.vexxhost.net/v1/pricing/v3-standard-4/cost?seconds=7076
22:45:17 INFO: Successfully fetched Vexxhost pricing API
22:45:17 INFO: Retrieved cost: 0.22
22:45:17 INFO: Retrieved resource: v3-standard-4
22:45:17 INFO: Creating archive directory: /w/workspace/transportpce-tox-verify-transportpce-master/archives/cost
22:45:17 INFO: Archiving costs to: /w/workspace/transportpce-tox-verify-transportpce-master/archives/cost.csv
22:45:17 INFO: Successfully archived job cost data
22:45:17 DEBUG: Cost data: transportpce-tox-verify-transportpce-master,4514,2026-03-05 22:45:17,v3-standard-4,7076,0.22,0.00,FAILURE
22:45:17 [transportpce-tox-verify-transportpce-master] $ /bin/bash -l /tmp/jenkins1581577421906554097.sh
22:45:17 ---> logs-deploy.sh
22:45:17 Setup pyenv:
22:45:17   system
22:45:17   3.8.20
22:45:17   3.9.20
22:45:17   3.10.15
22:45:17 * 3.11.10 (set by /w/workspace/transportpce-tox-verify-transportpce-master/.python-version)
22:45:18 lf-activate-venv(): INFO: Reuse venv:/tmp/venv-dYDK from file:/tmp/.os_lf_venv
22:45:18 lf-activate-venv(): INFO: Installing base packages (pip, setuptools, virtualenv)
22:45:18 lf-activate-venv(): INFO: Attempting to install with network-safe options...
22:45:19 lf-activate-venv(): INFO: Base packages installed successfully
22:45:19 lf-activate-venv(): INFO: Installing additional packages: lftools urllib3~=1.26.15
22:45:28 lf-activate-venv(): INFO: Adding /tmp/venv-dYDK/bin to PATH
22:45:28 WARNING: Nexus logging server not set
22:45:28 INFO: S3 path logs/releng/vex-yul-odl-jenkins-1/transportpce-tox-verify-transportpce-master/4514/
22:45:28 INFO: archiving logs to S3
22:45:29 /tmp/venv-dYDK/lib/python3.11/site-packages/requests/__init__.py:113: RequestsDependencyWarning: urllib3 (1.26.20) or chardet (7.0.1)/charset_normalizer (3.4.4) doesn't match a supported version!
22:45:29   warnings.warn(
22:45:30 ---> uname -a:
22:45:30 Linux prd-ubuntu2204-docker-4c-16g-81965 5.15.0-171-generic #181-Ubuntu SMP Fri Feb 6 22:44:50 UTC 2026 x86_64 x86_64 x86_64 GNU/Linux
22:45:30 
22:45:30 
22:45:30 ---> lscpu:
22:45:30 Architecture:        x86_64
22:45:30 CPU op-mode(s):      32-bit, 64-bit
22:45:30 Address sizes:       40 bits physical, 48 bits virtual
22:45:30 Byte Order:          Little Endian
22:45:30 CPU(s):              4
22:45:30 On-line CPU(s) list: 0-3
22:45:30 Vendor ID:           AuthenticAMD
22:45:30 Model name:          AMD EPYC-Rome Processor
22:45:30 CPU family:          23
22:45:30 Model:               49
22:45:30 Thread(s) per core:  1
22:45:30 Core(s) per socket:  1
22:45:30 Socket(s):           4
22:45:30 Stepping:            0
22:45:30 BogoMIPS:            5599.94
22:45:30 Flags:               fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm rep_good nopl cpuid extd_apicid tsc_known_freq pni pclmulqdq ssse3 fma cx16 sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm cmp_legacy svm cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw topoext perfctr_core ssbd ibrs ibpb stibp vmmcall fsgsbase tsc_adjust bmi1 avx2 smep bmi2 rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 clzero xsaveerptr wbnoinvd arat npt nrip_save umip rdpid arch_capabilities
22:45:30 Virtualization:      AMD-V
22:45:30 Hypervisor vendor:   KVM
22:45:30 Virtualization type: full
22:45:30 L1d cache:           128 KiB (4 instances)
22:45:30 L1i cache:           128 KiB (4 instances)
22:45:30 L2 cache:            2 MiB (4 instances)
22:45:30 L3 cache:            64 MiB (4 instances)
22:45:30 NUMA node(s):        1
22:45:30 NUMA node0 CPU(s):   0-3
22:45:30 Vulnerability Gather data sampling:     Not affected
22:45:30 Vulnerability Indirect target selection: Not affected
22:45:30 Vulnerability Itlb multihit:            Not affected
22:45:30 Vulnerability L1tf:                     Not affected
22:45:30 Vulnerability Mds:                      Not affected
22:45:30 Vulnerability Meltdown:                 Not affected
22:45:30 Vulnerability Mmio stale data:          Not affected
22:45:30 Vulnerability Reg file data sampling:   Not affected
22:45:30 Vulnerability Retbleed:                 Mitigation; untrained return thunk; SMT disabled
22:45:30 Vulnerability Spec rstack overflow:     Mitigation; SMT disabled
22:45:30 Vulnerability Spec store bypass:        Mitigation; Speculative Store Bypass disabled via prctl and seccomp
22:45:30 Vulnerability Spectre v1:               Mitigation; usercopy/swapgs barriers and __user pointer sanitization
22:45:30 Vulnerability Spectre v2:               Mitigation; Retpolines; IBPB conditional; STIBP disabled; RSB filling; PBRSB-eIBRS Not affected; BHI Not affected
22:45:30 Vulnerability Srbds:                    Not affected
22:45:30 Vulnerability Tsa:                      Not affected
22:45:30 Vulnerability Tsx async abort:          Not affected
22:45:30 Vulnerability Vmscape:                  Not affected
22:45:30 
22:45:30 
22:45:30 ---> nproc:
22:45:30 4
22:45:30 
22:45:30 
22:45:30 ---> df -h:
22:45:30 Filesystem      Size  Used Avail Use% Mounted on
22:45:30 tmpfs           1.6G  1.1M  1.6G   1% /run
22:45:30 /dev/vda1        78G   18G   61G  23% /
22:45:30 tmpfs           7.9G     0  7.9G   0% /dev/shm
22:45:30 tmpfs           5.0M     0  5.0M   0% /run/lock
22:45:30 /dev/vda15      105M  6.1M   99M   6% /boot/efi
22:45:30 tmpfs           1.6G  4.0K  1.6G   1% /run/user/1001
22:45:30 
22:45:30 
22:45:30 ---> free -m:
22:45:30                total        used        free      shared  buff/cache   available
22:45:30 Mem:           15989         691       10049           4        5248       14955
22:45:30 Swap:           1023           0        1023
22:45:30 
22:45:30 
22:45:30 ---> ip addr:
22:45:30 1: lo:  mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
22:45:30     link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
22:45:30     inet 127.0.0.1/8 scope host lo
22:45:30        valid_lft forever preferred_lft forever
22:45:30     inet6 ::1/128 scope host 
22:45:30        valid_lft forever preferred_lft forever
22:45:30 2: ens3:  mtu 1458 qdisc mq state UP group default qlen 1000
22:45:30     link/ether fa:16:3e:2e:e1:fb brd ff:ff:ff:ff:ff:ff
22:45:30     altname enp0s3
22:45:30     inet 10.30.171.119/23 metric 100 brd 10.30.171.255 scope global dynamic ens3
22:45:30        valid_lft 79318sec preferred_lft 79318sec
22:45:30     inet6 fe80::f816:3eff:fe2e:e1fb/64 scope link 
22:45:30        valid_lft forever preferred_lft forever
22:45:30 3: docker0:  mtu 1458 qdisc noqueue state DOWN group default 
22:45:30     link/ether e6:c1:a0:36:73:d7 brd ff:ff:ff:ff:ff:ff
22:45:30     inet 10.250.0.254/24 brd 10.250.0.255 scope global docker0
22:45:30        valid_lft forever preferred_lft forever
22:45:30 
22:45:30 
22:45:30 ---> sar -b -r -n DEV:
22:45:30 Linux 5.15.0-171-generic (prd-ubuntu2204-docker-4c-16g-81965)  03/05/26  _x86_64_  (4 CPU)
22:45:30 
22:45:30 20:47:32 LINUX RESTART (4 CPU)
22:45:30 
22:45:30 20:50:04       tps     rtps     wtps     dtps   bread/s   bwrtn/s   bdscd/s
22:45:30 21:00:07    124.61     4.47   114.89     5.25    441.39  35629.72  25878.56
22:45:30 21:10:02      7.65     1.86     5.56     0.23     19.31    139.32    937.37
22:45:30 21:20:06      9.94     0.05     9.31     0.58      1.46    307.33   1308.77
22:45:30 21:30:07     11.14     0.62    10.05     0.47     55.81    549.86    369.17
22:45:30 21:40:01     16.56     0.01    15.86     0.69      0.57    504.83    319.43
22:45:30 21:50:07      5.47     0.00     5.29     0.17      0.29    131.36     69.06
22:45:30 22:00:07      6.18     0.00     5.96     0.22      0.47    161.06    566.65
22:45:30 22:10:01      3.68     0.01     3.53     0.15      0.75    121.15     29.88
22:45:30 22:20:07     20.04     0.05     4.59    15.40      0.98    145.20  210891.66
22:45:30 22:30:07      9.40     0.05     8.94     0.41      1.44    550.60    522.77
22:45:30 22:40:01      7.95     0.04     7.60     0.32      1.49    293.80    133.17
22:45:30 Average:     20.31     0.65    17.47     2.19     47.86   3521.34  22117.70
22:45:30 
22:45:30 20:50:04  kbmemfree   kbavail  kbmemused  %memused  kbbuffers  kbcached  kbcommit  %commit  kbactive  kbinact  kbdirty
22:45:30 21:00:07     198816   4298024   11629204     71.03     266044   3737920  13728256    78.80   2058760  13453172      580
22:45:30 21:10:02    7050760  11106856    4825328     29.47     266904   3697440   5515224    31.66   2092272   6577052      240
22:45:30 21:20:06    5313112   9389796    6542176     39.96     268872   3716092   8288164    47.57   2112812   8307724      412
22:45:30 21:30:07    7523040  11727508    4205108     25.68     274540   3836968   5048160    28.98   2152436   6063152      104
22:45:30 21:40:01    7243424  11523684    4408724     26.93     276760   3910476   5172092    29.69   2158424   6321708      348
22:45:30 21:50:07    7137584  11433884    4498348     27.47     277564   3925712   5200520    29.85   2159684   6435064      168
22:45:30 22:00:07   10481176  14815044    1119128      6.84     278304   3962620   1849644    10.62   2162496   3102236    18196
22:45:30 22:10:01    5555372   9891704    6039740     36.89     279052   3964344   6766240    38.84   2163408   8009388      264
22:45:30 22:20:07    5549224   9912428    6019080     36.76     279940   3990276   6741340    38.70   2171788   8008708      568
22:45:30 22:30:07   10807836  15296972     636348      3.89     283992   4106752   1348132     7.74   2190784   2694672      148
22:45:30 22:40:01    6148308  10689712    5241696     32.01     284784   4158064   5936380    34.07   2210144   7363996      596
22:45:30 Average:    6637150  10916874    5014989     30.63     276069   3909697   5963105    34.23   2148455   6939716     1966
22:45:30 
22:45:30 20:50:04    IFACE   rxpck/s   txpck/s    rxkB/s    txkB/s   rxcmp/s   txcmp/s  rxmcst/s   %ifutil
22:45:30 21:00:07       lo     21.07     21.07     17.02     17.02      0.00      0.00      0.00      0.00
22:45:30 21:00:07     ens3     89.56     69.67   1386.28      7.40      0.00      0.00      0.00      0.00
22:45:30 21:00:07  docker0      0.00      0.00      0.00      0.00      0.00      0.00      0.00      0.00
22:45:30 21:10:02       lo     10.77     10.77      6.09      6.09      0.00      0.00      0.00      0.00
22:45:30 21:10:02     ens3      1.16      0.84      0.25      1.14      0.00      0.00      0.00      0.00
22:45:30 21:10:02  docker0      0.00      0.00      0.00      0.00      0.00      0.00      0.00      0.00
22:45:30 21:20:06       lo     12.56     12.56      7.41      7.41      0.00      0.00      0.00      0.00
22:45:30 21:20:06     ens3      0.61      0.59      0.13      0.11      0.00      0.00      0.00      0.00
22:45:30 21:20:06  docker0      0.00      0.00      0.00      0.00      0.00      0.00      0.00      0.00
22:45:30 21:30:07       lo     13.18     13.18      6.00      6.00      0.00      0.00      0.00      0.00
22:45:30 21:30:07     ens3      0.67      0.62      0.20      0.16      0.00      0.00      0.00      0.00
22:45:30 21:30:07  docker0      0.00      0.00      0.00      0.00      0.00      0.00      0.00      0.00
22:45:30 21:40:01       lo     15.01     15.01      8.00      8.00      0.00      0.00      0.00      0.00
22:45:30 21:40:01     ens3      0.76      0.69      0.17      0.14      0.00      0.00      0.00      0.00
22:45:30 21:40:01  docker0      0.00      0.00      0.00      0.00      0.00      0.00      0.00      0.00
22:45:30 21:50:07       lo     13.28     13.28      6.59      6.59      0.00      0.00      0.00      0.00
22:45:30 21:50:07     ens3      0.49      0.42      0.11      0.08      0.00      0.00      0.00      0.00
22:45:30 21:50:07  docker0      0.00      0.00      0.00      0.00      0.00      0.00      0.00      0.00
22:45:30 22:00:07       lo     28.54     28.54     10.07     10.07      0.00      0.00      0.00      0.00
22:45:30 22:00:07     ens3      0.63      0.52      0.14      0.93      0.00      0.00      0.00      0.00
22:45:30 22:00:07  docker0      0.00      0.00      0.00      0.00      0.00      0.00      0.00      0.00
22:45:30 22:10:01       lo     16.93     16.93      8.71      8.71      0.00      0.00      0.00      0.00
22:45:30 22:10:01     ens3      0.57      0.48      0.15      0.11      0.00      0.00      0.00      0.00
22:45:30 22:10:01  docker0      0.00      0.00      0.00      0.00      0.00      0.00      0.00      0.00
22:45:30 22:20:07       lo     15.85     15.85      9.63      9.63      0.00      0.00      0.00      0.00
22:45:30 22:20:07     ens3      0.61      0.53      0.15      0.12      0.00      0.00      0.00      0.00
22:45:30 22:20:07  docker0      0.00      0.00      0.00      0.00      0.00      0.00      0.00      0.00
22:45:30 22:30:07       lo     12.93     12.93      7.27      7.27      0.00      0.00      0.00      0.00
22:45:30 22:30:07     ens3      0.97      0.71      0.31      0.24      0.00      0.00      0.00      0.00
22:45:30 22:30:07  docker0      0.00      0.00      0.00      0.00      0.00      0.00      0.00      0.00
22:45:30 22:40:01       lo     27.58     27.58     11.57     11.57      0.00      0.00      0.00      0.00
22:45:30 22:40:01     ens3      0.76      0.65      0.16      0.13      0.00      0.00      0.00      0.00
22:45:30 22:40:01  docker0      0.00      0.00      0.00      0.00      0.00      0.00      0.00      0.00
22:45:30 Average:       lo     17.06     17.06      8.94      8.94      0.00      0.00      0.00      0.00
22:45:30 Average:     ens3      8.84      6.92    126.91      0.96      0.00      0.00      0.00      0.00
22:45:30 Average:  docker0      0.00      0.00      0.00      0.00      0.00      0.00      0.00      0.00
22:45:30 
22:45:30 
22:45:30 ---> sar -P ALL:
22:45:30 Linux 5.15.0-171-generic (prd-ubuntu2204-docker-4c-16g-81965)  03/05/26  _x86_64_  (4 CPU)
22:45:30 
22:45:30 20:47:32 LINUX RESTART (4 CPU)
22:45:30 
22:45:30 20:50:04   CPU     %user     %nice   %system   %iowait    %steal     %idle
22:45:30 21:00:07   all     63.10      0.00      2.85      2.66      0.10     31.30
22:45:30 21:00:07     0     64.63      0.00      2.96      2.38      0.09 
29.94 22:45:30 21:00:07 1 64.01 0.00 2.71 2.50 0.09 30.69 22:45:30 21:00:07 2 62.81 0.00 2.76 3.02 0.09 31.32 22:45:30 21:00:07 3 60.94 0.00 2.97 2.73 0.10 33.27 22:45:30 21:10:02 all 10.04 0.00 0.50 0.10 0.09 89.26 22:45:30 21:10:02 0 10.45 0.00 0.51 0.03 0.09 88.93 22:45:30 21:10:02 1 9.78 0.00 0.55 0.20 0.10 89.37 22:45:30 21:10:02 2 10.24 0.00 0.51 0.10 0.09 89.06 22:45:30 21:10:02 3 9.70 0.00 0.44 0.06 0.10 89.70 22:45:30 21:20:06 all 23.15 0.00 0.90 0.05 0.09 75.81 22:45:30 21:20:06 0 22.14 0.00 0.89 0.01 0.08 76.87 22:45:30 21:20:06 1 24.00 0.00 0.94 0.04 0.09 74.93 22:45:30 21:20:06 2 23.01 0.00 0.84 0.12 0.09 75.93 22:45:30 21:20:06 3 23.44 0.00 0.94 0.02 0.09 75.52 22:45:30 21:30:07 all 17.26 0.00 0.77 0.06 0.09 81.82 22:45:30 21:30:07 0 17.30 0.00 0.79 0.13 0.09 81.69 22:45:30 21:30:07 1 17.90 0.00 0.86 0.04 0.10 81.10 22:45:30 21:30:07 2 17.00 0.00 0.74 0.04 0.09 82.13 22:45:30 21:30:07 3 16.84 0.00 0.69 0.02 0.10 82.36 22:45:30 21:40:01 all 26.90 0.00 0.97 0.06 0.09 71.98 22:45:30 21:40:01 0 27.19 0.00 1.08 0.14 0.09 71.51 22:45:30 21:40:01 1 26.85 0.00 1.07 0.03 0.09 71.96 22:45:30 21:40:01 2 26.68 0.00 0.78 0.03 0.09 72.42 22:45:30 21:40:01 3 26.89 0.00 0.95 0.04 0.09 72.03 22:45:30 21:50:07 all 8.28 0.00 0.45 0.03 0.09 91.15 22:45:30 21:50:07 0 8.21 0.00 0.57 0.02 0.09 91.11 22:45:30 21:50:07 1 7.69 0.00 0.40 0.04 0.09 91.78 22:45:30 21:50:07 2 8.40 0.00 0.40 0.02 0.09 91.08 22:45:30 21:50:07 3 8.80 0.00 0.45 0.02 0.09 90.63 22:45:30 22:00:07 all 9.75 0.00 0.52 0.02 0.09 89.62 22:45:30 22:00:07 0 9.59 0.00 0.48 0.03 0.08 89.82 22:45:30 22:00:07 1 9.99 0.00 0.57 0.00 0.09 89.34 22:45:30 22:00:07 2 9.88 0.00 0.50 0.06 0.09 89.47 22:45:30 22:00:07 3 9.52 0.00 0.54 0.01 0.09 89.84 22:45:30 22:10:01 all 8.74 0.00 0.33 0.02 0.09 90.83 22:45:30 22:10:01 0 8.69 0.00 0.28 0.05 0.08 90.90 22:45:30 22:10:01 1 8.55 0.00 0.32 0.01 0.09 91.03 22:45:30 22:10:01 2 8.78 0.00 0.35 0.01 0.09 90.76 22:45:30 22:10:01 3 8.94 0.00 0.36 0.00 0.09 90.61 22:45:30 22:20:07 
all 9.10 0.00 0.37 0.05 0.09 90.38 22:45:30 22:20:07 0 8.95 0.00 0.43 0.09 0.09 90.44 22:45:30 22:20:07 1 9.25 0.00 0.38 0.02 0.09 90.25 22:45:30 22:20:07 2 9.12 0.00 0.33 0.02 0.09 90.44 22:45:30 22:20:07 3 9.08 0.00 0.36 0.04 0.10 90.41 22:45:30 22:30:07 all 12.78 0.00 0.60 0.04 0.09 86.49 22:45:30 22:30:07 0 12.62 0.00 0.55 0.01 0.09 86.73 22:45:30 22:30:07 1 13.00 0.00 0.64 0.03 0.10 86.23 22:45:30 22:30:07 2 12.89 0.00 0.67 0.10 0.10 86.24 22:45:30 22:30:07 3 12.60 0.00 0.55 0.01 0.09 86.76 22:45:30 22:40:01 all 14.95 0.00 0.61 0.04 0.09 84.31 22:45:30 22:40:01 0 14.61 0.00 0.63 0.03 0.10 84.63 22:45:30 22:40:01 1 14.55 0.00 0.56 0.02 0.10 84.77 22:45:30 22:40:01 2 15.63 0.00 0.71 0.09 0.09 83.48 22:45:30 22:40:01 3 14.99 0.00 0.54 0.01 0.09 84.37 22:45:30 22:45:30 Average: CPU %user %nice %system %iowait %steal %idle 22:45:30 Average: all 18.56 0.00 0.81 0.28 0.09 80.25 22:45:30 Average: 0 18.60 0.00 0.84 0.27 0.09 80.22 22:45:30 Average: 1 18.70 0.00 0.82 0.27 0.09 80.12 22:45:30 Average: 2 18.60 0.00 0.78 0.33 0.09 80.19 22:45:30 Average: 3 18.35 0.00 0.80 0.27 0.09 80.48 22:45:30 22:45:30 22:45:30
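For reference, the `---> command:` sections above come from the job's post-build system-stats step. A minimal sketch of an equivalent collector follows; the `section` helper name and the exact command list are assumptions read off this log, not the actual CI builder macro:

```shell
#!/bin/sh
# Sketch of a system-stats collector matching the sections in this log:
# each command is announced with a "---> command:" marker, then run,
# then followed by the two blank lines seen between sections above.
set -u

section() {
    printf -- '---> %s:\n' "$*"   # marker line, e.g. "---> df -h:"
    "$@"                          # run the command itself
    printf '\n\n'                 # blank separator lines
}

section nproc
section df -h
section free -m
section ip addr

# sar is provided by the sysstat package and may be absent on minimal
# hosts, so guard it rather than failing the teardown step.
if command -v sar >/dev/null 2>&1; then
    section sar -b -r -n DEV
    section sar -P ALL
fi
```

The marker format makes each section easy to locate with `grep -n '^--->' build.log` when triaging a failed run.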