11:45:13 Triggered by Gerrit: https://git.opendaylight.org/gerrit/c/transportpce/+/116914 11:45:13 Running as SYSTEM 11:45:13 [EnvInject] - Loading node environment variables. 11:45:13 Building remotely on prd-ubuntu2204-docker-4c-16g-39075 (ubuntu2204-docker-4c-16g) in workspace /w/workspace/transportpce-tox-verify-transportpce-master 11:45:13 [ssh-agent] Looking for ssh-agent implementation... 11:45:13 [ssh-agent] Exec ssh-agent (binary ssh-agent on a remote machine) 11:45:13 $ ssh-agent 11:45:13 SSH_AUTH_SOCK=/tmp/ssh-XXXXXXMt82Zf/agent.16161 11:45:13 SSH_AGENT_PID=16163 11:45:13 [ssh-agent] Started. 11:45:13 Running ssh-add (command line suppressed) 11:45:13 Identity added: /w/workspace/transportpce-tox-verify-transportpce-master@tmp/private_key_10854974553742371633.key (/w/workspace/transportpce-tox-verify-transportpce-master@tmp/private_key_10854974553742371633.key) 11:45:13 [ssh-agent] Using credentials jenkins (jenkins-ssh) 11:45:13 The recommended git tool is: NONE 11:45:15 using credential jenkins-ssh 11:45:15 Wiping out workspace first. 11:45:15 Cloning the remote Git repository 11:45:15 Cloning repository git://devvexx.opendaylight.org/mirror/transportpce 11:45:15 > git init /w/workspace/transportpce-tox-verify-transportpce-master # timeout=10 11:45:15 Fetching upstream changes from git://devvexx.opendaylight.org/mirror/transportpce 11:45:15 > git --version # timeout=10 11:45:15 > git --version # 'git version 2.34.1' 11:45:15 using GIT_SSH to set credentials jenkins-ssh 11:45:15 Verifying host key using known hosts file 11:45:15 You're using 'Known hosts file' strategy to verify ssh host keys, but your known_hosts file does not exist, please go to 'Manage Jenkins' -> 'Security' -> 'Git Host Key Verification Configuration' and configure host key verification. 11:45:15 > git fetch --tags --force --progress -- git://devvexx.opendaylight.org/mirror/transportpce +refs/heads/*:refs/remotes/origin/* # timeout=10 11:45:19 > git config remote.origin.url git://devvexx.opendaylight.org/mirror/transportpce # timeout=10 11:45:19 > git config --add remote.origin.fetch +refs/heads/*:refs/remotes/origin/* # timeout=10 11:45:19 > git config remote.origin.url git://devvexx.opendaylight.org/mirror/transportpce # timeout=10 11:45:19 Fetching upstream changes from git://devvexx.opendaylight.org/mirror/transportpce 11:45:19 using GIT_SSH to set credentials jenkins-ssh 11:45:19 Verifying host key using known hosts file 11:45:19 You're using 'Known hosts file' strategy to verify ssh host keys, but your known_hosts file does not exist, please go to 'Manage Jenkins' -> 'Security' -> 'Git Host Key Verification Configuration' and configure host key verification. 
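For reference, the checkout that follows can be reproduced outside Jenkins; a minimal sketch, assuming network access to the mirror and using the change ref and revision recorded later in this log:

    git init transportpce && cd transportpce
    # URL, change ref and commit are the ones this job fetches and checks out
    git fetch git://devvexx.opendaylight.org/mirror/transportpce refs/changes/14/116914/4
    git checkout -f FETCH_HEAD   # 0af22db89ed8196547379eaaa7c5b07ddd715ef5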
11:45:19 > git fetch --tags --force --progress -- git://devvexx.opendaylight.org/mirror/transportpce refs/changes/14/116914/4 # timeout=10 11:45:19 > git rev-parse 0af22db89ed8196547379eaaa7c5b07ddd715ef5^{commit} # timeout=10 11:45:19 JENKINS-19022: warning: possible memory leak due to Git plugin usage; see: https://plugins.jenkins.io/git/#remove-git-plugin-buildsbybranch-builddata-script 11:45:19 Checking out Revision 0af22db89ed8196547379eaaa7c5b07ddd715ef5 (refs/changes/14/116914/4) 11:45:19 > git config core.sparsecheckout # timeout=10 11:45:19 > git checkout -f 0af22db89ed8196547379eaaa7c5b07ddd715ef5 # timeout=10 11:45:23 Commit message: "Added available frequencies when occupied frequencies exists" 11:45:23 > git rev-parse FETCH_HEAD^{commit} # timeout=10 11:45:23 > git rev-list --no-walk 1693b431c5ad09ee6b8602c3261fab8f96c7c8d8 # timeout=10 11:45:23 > git remote # timeout=10 11:45:23 > git submodule init # timeout=10 11:45:23 > git submodule sync # timeout=10 11:45:23 > git config --get remote.origin.url # timeout=10 11:45:23 > git submodule init # timeout=10 11:45:23 > git config -f .gitmodules --get-regexp ^submodule\.(.+)\.url # timeout=10 11:45:23 ERROR: No submodules found. 11:45:23 provisioning config files... 11:45:23 copy managed file [npmrc] to file:/home/jenkins/.npmrc 11:45:23 copy managed file [pipconf] to file:/home/jenkins/.config/pip/pip.conf 11:45:23 [transportpce-tox-verify-transportpce-master] $ /bin/bash /tmp/jenkins3552353099997298957.sh 11:45:23 ---> python-tools-install.sh 11:45:23 Setup pyenv: 11:45:23 * system (set by /opt/pyenv/version) 11:45:23 * 3.8.20 (set by /opt/pyenv/version) 11:45:23 * 3.9.20 (set by /opt/pyenv/version) 11:45:23 * 3.10.15 (set by /opt/pyenv/version) 11:45:23 * 3.11.10 (set by /opt/pyenv/version) 11:45:28 lf-activate-venv(): INFO: Creating python3 venv at /tmp/venv-WFYk 11:45:28 lf-activate-venv(): INFO: Save venv in file: /tmp/.os_lf_venv 11:45:32 lf-activate-venv(): INFO: Installing: lftools 11:45:55 lf-activate-venv(): INFO: Adding /tmp/venv-WFYk/bin to PATH 11:45:55 Generating Requirements File 11:46:16 Python 3.11.10 11:46:16 pip 25.1.1 from /tmp/venv-WFYk/lib/python3.11/site-packages/pip (python 3.11) 11:46:17 appdirs==1.4.4 11:46:17 argcomplete==3.6.2 11:46:17 aspy.yaml==1.3.0 11:46:17 attrs==25.3.0 11:46:17 autopage==0.5.2 11:46:17 beautifulsoup4==4.13.4 11:46:17 boto3==1.39.3 11:46:17 botocore==1.39.3 11:46:17 bs4==0.0.2 11:46:17 cachetools==5.5.2 11:46:17 certifi==2025.6.15 11:46:17 cffi==1.17.1 11:46:17 cfgv==3.4.0 11:46:17 chardet==5.2.0 11:46:17 charset-normalizer==3.4.2 11:46:17 click==8.2.1 11:46:17 cliff==4.10.0 11:46:17 cmd2==2.7.0 11:46:17 cryptography==3.3.2 11:46:17 debtcollector==3.0.0 11:46:17 decorator==5.2.1 11:46:17 defusedxml==0.7.1 11:46:17 Deprecated==1.2.18 11:46:17 distlib==0.3.9 11:46:17 dnspython==2.7.0 11:46:17 docker==7.1.0 11:46:17 dogpile.cache==1.4.0 11:46:17 durationpy==0.10 11:46:17 email_validator==2.2.0 11:46:17 filelock==3.18.0 11:46:17 future==1.0.0 11:46:17 gitdb==4.0.12 11:46:17 GitPython==3.1.44 11:46:17 google-auth==2.40.3 11:46:17 httplib2==0.22.0 11:46:17 identify==2.6.12 11:46:17 idna==3.10 11:46:17 importlib-resources==1.5.0 11:46:17 iso8601==2.1.0 11:46:17 Jinja2==3.1.6 11:46:17 jmespath==1.0.1 11:46:17 jsonpatch==1.33 11:46:17 jsonpointer==3.0.0 11:46:17 jsonschema==4.24.0 11:46:17 jsonschema-specifications==2025.4.1 11:46:17 keystoneauth1==5.11.1 11:46:17 kubernetes==33.1.0 11:46:17 lftools==0.37.13 11:46:17 lxml==6.0.0 11:46:17 markdown-it-py==3.0.0 11:46:17 MarkupSafe==3.0.2 
11:46:17 mdurl==0.1.2 11:46:17 msgpack==1.1.1 11:46:17 multi_key_dict==2.0.3 11:46:17 munch==4.0.0 11:46:17 netaddr==1.3.0 11:46:17 niet==1.4.2 11:46:17 nodeenv==1.9.1 11:46:17 oauth2client==4.1.3 11:46:17 oauthlib==3.3.1 11:46:17 openstacksdk==4.6.0 11:46:17 os-client-config==2.1.0 11:46:17 os-service-types==1.7.0 11:46:17 osc-lib==4.0.2 11:46:17 oslo.config==9.8.0 11:46:17 oslo.context==6.0.0 11:46:17 oslo.i18n==6.5.1 11:46:17 oslo.log==7.1.0 11:46:17 oslo.serialization==5.7.0 11:46:17 oslo.utils==9.0.0 11:46:17 packaging==25.0 11:46:17 pbr==6.1.1 11:46:17 platformdirs==4.3.8 11:46:17 prettytable==3.16.0 11:46:17 psutil==7.0.0 11:46:17 pyasn1==0.6.1 11:46:17 pyasn1_modules==0.4.2 11:46:17 pycparser==2.22 11:46:17 pygerrit2==2.0.15 11:46:17 PyGithub==2.6.1 11:46:17 Pygments==2.19.2 11:46:17 PyJWT==2.10.1 11:46:17 PyNaCl==1.5.0 11:46:17 pyparsing==2.4.7 11:46:17 pyperclip==1.9.0 11:46:17 pyrsistent==0.20.0 11:46:17 python-cinderclient==9.7.0 11:46:17 python-dateutil==2.9.0.post0 11:46:17 python-heatclient==4.2.0 11:46:17 python-jenkins==1.8.2 11:46:17 python-keystoneclient==5.6.0 11:46:17 python-magnumclient==4.8.1 11:46:17 python-openstackclient==8.1.0 11:46:17 python-swiftclient==4.8.0 11:46:17 PyYAML==6.0.2 11:46:17 referencing==0.36.2 11:46:17 requests==2.32.4 11:46:17 requests-oauthlib==2.0.0 11:46:17 requestsexceptions==1.4.0 11:46:17 rfc3986==2.0.0 11:46:17 rich==14.0.0 11:46:17 rich-argparse==1.7.1 11:46:17 rpds-py==0.26.0 11:46:17 rsa==4.9.1 11:46:17 ruamel.yaml==0.18.14 11:46:17 ruamel.yaml.clib==0.2.12 11:46:17 s3transfer==0.13.0 11:46:17 simplejson==3.20.1 11:46:17 six==1.17.0 11:46:17 smmap==5.0.2 11:46:17 soupsieve==2.7 11:46:17 stevedore==5.4.1 11:46:17 tabulate==0.9.0 11:46:17 toml==0.10.2 11:46:17 tomlkit==0.13.3 11:46:17 tqdm==4.67.1 11:46:17 typing_extensions==4.14.0 11:46:17 tzdata==2025.2 11:46:17 urllib3==1.26.20 11:46:17 virtualenv==20.31.2 11:46:17 wcwidth==0.2.13 11:46:17 websocket-client==1.8.0 11:46:17 wrapt==1.17.2 11:46:17 xdg==6.0.0 11:46:17 xmltodict==0.14.2 11:46:17 yq==3.4.3 11:46:17 [EnvInject] - Injecting environment variables from a build step. 11:46:17 [EnvInject] - Injecting as environment variables the properties content 11:46:17 PYTHON=python3 11:46:17 11:46:17 [EnvInject] - Variables injected successfully. 
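The package listing above is simply a pip freeze of the short-lived lftools venv created by python-tools-install.sh; a rough local equivalent, with an illustrative venv path (the job used a mktemp directory, /tmp/venv-WFYk in this run):

    python3 -m venv /tmp/venv-example        # illustrative path, not the job's
    /tmp/venv-example/bin/pip install --upgrade pip lftools
    /tmp/venv-example/bin/pip freeze         # yields the "Generating Requirements File" listing above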
11:46:17 [transportpce-tox-verify-transportpce-master] $ /bin/bash -l /tmp/jenkins5087513108148867714.sh 11:46:17 ---> tox-install.sh 11:46:17 + source /home/jenkins/lf-env.sh 11:46:17 + lf-activate-venv --venv-file /tmp/.toxenv tox virtualenv urllib3~=1.26.15 11:46:17 ++ mktemp -d /tmp/venv-XXXX 11:46:17 + lf_venv=/tmp/venv-aIZ4 11:46:17 + local venv_file=/tmp/.os_lf_venv 11:46:17 + local python=python3 11:46:17 + local options 11:46:17 + local set_path=true 11:46:17 + local install_args= 11:46:17 ++ getopt -o np:v: -l no-path,system-site-packages,python:,venv-file: -n lf-activate-venv -- --venv-file /tmp/.toxenv tox virtualenv urllib3~=1.26.15 11:46:17 + options=' --venv-file '\''/tmp/.toxenv'\'' -- '\''tox'\'' '\''virtualenv'\'' '\''urllib3~=1.26.15'\''' 11:46:17 + eval set -- ' --venv-file '\''/tmp/.toxenv'\'' -- '\''tox'\'' '\''virtualenv'\'' '\''urllib3~=1.26.15'\''' 11:46:17 ++ set -- --venv-file /tmp/.toxenv -- tox virtualenv urllib3~=1.26.15 11:46:17 + true 11:46:17 + case $1 in 11:46:17 + venv_file=/tmp/.toxenv 11:46:17 + shift 2 11:46:17 + true 11:46:17 + case $1 in 11:46:17 + shift 11:46:17 + break 11:46:17 + case $python in 11:46:17 + local pkg_list= 11:46:17 + [[ -d /opt/pyenv ]] 11:46:17 + echo 'Setup pyenv:' 11:46:17 Setup pyenv: 11:46:17 + export PYENV_ROOT=/opt/pyenv 11:46:17 + PYENV_ROOT=/opt/pyenv 11:46:17 + export PATH=/opt/pyenv/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/opt/puppetlabs/bin 11:46:17 + PATH=/opt/pyenv/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/opt/puppetlabs/bin 11:46:17 + pyenv versions 11:46:17 system 11:46:17 3.8.20 11:46:17 3.9.20 11:46:17 3.10.15 11:46:17 * 3.11.10 (set by /w/workspace/transportpce-tox-verify-transportpce-master/.python-version) 11:46:17 + command -v pyenv 11:46:17 ++ pyenv init - --no-rehash 11:46:17 + eval 'PATH="$(bash --norc -ec '\''IFS=:; paths=($PATH); 11:46:17 for i in ${!paths[@]}; do 11:46:17 if [[ ${paths[i]} == "'\'''\''/opt/pyenv/shims'\'''\''" ]]; then unset '\''\'\'''\''paths[i]'\''\'\'''\''; 11:46:17 fi; done; 11:46:17 echo "${paths[*]}"'\'')" 11:46:17 export PATH="/opt/pyenv/shims:${PATH}" 11:46:17 export PYENV_SHELL=bash 11:46:17 source '\''/opt/pyenv/libexec/../completions/pyenv.bash'\'' 11:46:17 pyenv() { 11:46:17 local command 11:46:17 command="${1:-}" 11:46:17 if [ "$#" -gt 0 ]; then 11:46:17 shift 11:46:17 fi 11:46:17 11:46:17 case "$command" in 11:46:17 rehash|shell) 11:46:17 eval "$(pyenv "sh-$command" "$@")" 11:46:17 ;; 11:46:17 *) 11:46:17 command pyenv "$command" "$@" 11:46:17 ;; 11:46:17 esac 11:46:17 }' 11:46:17 +++ bash --norc -ec 'IFS=:; paths=($PATH); 11:46:17 for i in ${!paths[@]}; do 11:46:17 if [[ ${paths[i]} == "/opt/pyenv/shims" ]]; then unset '\''paths[i]'\''; 11:46:17 fi; done; 11:46:17 echo "${paths[*]}"' 11:46:17 ++ PATH=/opt/pyenv/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/opt/puppetlabs/bin 11:46:17 ++ export PATH=/opt/pyenv/shims:/opt/pyenv/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/opt/puppetlabs/bin 11:46:17 ++ PATH=/opt/pyenv/shims:/opt/pyenv/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/opt/puppetlabs/bin 11:46:17 ++ export PYENV_SHELL=bash 11:46:17 ++ PYENV_SHELL=bash 11:46:17 ++ source /opt/pyenv/libexec/../completions/pyenv.bash 11:46:17 +++ complete -F _pyenv pyenv 11:46:17 ++ 
lf-pyver python3 11:46:17 ++ local py_version_xy=python3 11:46:17 ++ local py_version_xyz= 11:46:17 ++ pyenv versions 11:46:17 ++ local command 11:46:17 ++ sed 's/^[ *]* //' 11:46:17 ++ command=versions 11:46:17 ++ '[' 1 -gt 0 ']' 11:46:17 ++ shift 11:46:17 ++ case "$command" in 11:46:17 ++ command pyenv versions 11:46:17 ++ grep -E '^[0-9.]*[0-9]$' 11:46:17 ++ awk '{ print $1 }' 11:46:17 ++ [[ ! -s /tmp/.pyenv_versions ]] 11:46:17 +++ grep '^3' /tmp/.pyenv_versions 11:46:17 +++ sort -V 11:46:17 +++ tail -n 1 11:46:17 ++ py_version_xyz=3.11.10 11:46:17 ++ [[ -z 3.11.10 ]] 11:46:17 ++ echo 3.11.10 11:46:17 ++ return 0 11:46:17 + pyenv local 3.11.10 11:46:17 + local command 11:46:17 + command=local 11:46:17 + '[' 2 -gt 0 ']' 11:46:17 + shift 11:46:17 + case "$command" in 11:46:17 + command pyenv local 3.11.10 11:46:17 + for arg in "$@" 11:46:17 + case $arg in 11:46:17 + pkg_list+='tox ' 11:46:17 + for arg in "$@" 11:46:17 + case $arg in 11:46:17 + pkg_list+='virtualenv ' 11:46:17 + for arg in "$@" 11:46:17 + case $arg in 11:46:17 + pkg_list+='urllib3~=1.26.15 ' 11:46:17 + [[ -f /tmp/.toxenv ]] 11:46:17 + [[ ! -f /tmp/.toxenv ]] 11:46:17 + [[ -n '' ]] 11:46:17 + python3 -m venv /tmp/venv-aIZ4 11:46:21 + echo 'lf-activate-venv(): INFO: Creating python3 venv at /tmp/venv-aIZ4' 11:46:21 lf-activate-venv(): INFO: Creating python3 venv at /tmp/venv-aIZ4 11:46:21 + echo /tmp/venv-aIZ4 11:46:21 + echo 'lf-activate-venv(): INFO: Save venv in file: /tmp/.toxenv' 11:46:21 lf-activate-venv(): INFO: Save venv in file: /tmp/.toxenv 11:46:21 + /tmp/venv-aIZ4/bin/python3 -m pip install --upgrade --quiet pip 'setuptools<66' virtualenv 11:46:25 + [[ -z tox virtualenv urllib3~=1.26.15 ]] 11:46:25 + echo 'lf-activate-venv(): INFO: Installing: tox virtualenv urllib3~=1.26.15 ' 11:46:25 lf-activate-venv(): INFO: Installing: tox virtualenv urllib3~=1.26.15 11:46:25 + /tmp/venv-aIZ4/bin/python3 -m pip install --upgrade --quiet --upgrade-strategy eager tox virtualenv urllib3~=1.26.15 11:46:27 + type python3 11:46:27 + true 11:46:27 + echo 'lf-activate-venv(): INFO: Adding /tmp/venv-aIZ4/bin to PATH' 11:46:27 lf-activate-venv(): INFO: Adding /tmp/venv-aIZ4/bin to PATH 11:46:27 + PATH=/tmp/venv-aIZ4/bin:/opt/pyenv/shims:/opt/pyenv/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/opt/puppetlabs/bin 11:46:27 + return 0 11:46:27 + python3 --version 11:46:27 Python 3.11.10 11:46:27 + python3 -m pip --version 11:46:27 pip 25.1.1 from /tmp/venv-aIZ4/lib/python3.11/site-packages/pip (python 3.11) 11:46:27 + python3 -m pip freeze 11:46:27 cachetools==6.1.0 11:46:27 chardet==5.2.0 11:46:27 colorama==0.4.6 11:46:27 distlib==0.3.9 11:46:27 filelock==3.18.0 11:46:27 packaging==25.0 11:46:27 platformdirs==4.3.8 11:46:27 pluggy==1.6.0 11:46:27 pyproject-api==1.9.1 11:46:27 tox==4.27.0 11:46:27 urllib3==1.26.20 11:46:27 virtualenv==20.31.2 11:46:27 [transportpce-tox-verify-transportpce-master] $ /bin/sh -xe /tmp/jenkins16608240651459886318.sh 11:46:27 [EnvInject] - Injecting environment variables from a build step. 11:46:27 [EnvInject] - Injecting as environment variables the properties content 11:46:27 PARALLEL=True 11:46:27 11:46:27 [EnvInject] - Variables injected successfully. 
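Condensed from the xtrace above: tox-install.sh (via lf-activate-venv) creates a venv, records its path in /tmp/.toxenv for later reuse, and installs the tox toolchain. A minimal sketch of the same steps, with an illustrative venv path and the venv-file write made explicit:

    python3 -m venv /tmp/venv-tox
    echo /tmp/venv-tox > /tmp/.toxenv        # tox-run.sh later reuses the venv recorded here
    /tmp/venv-tox/bin/python3 -m pip install --upgrade --quiet pip 'setuptools<66' virtualenv
    /tmp/venv-tox/bin/python3 -m pip install --upgrade --quiet --upgrade-strategy eager tox virtualenv 'urllib3~=1.26.15'
    export PATH=/tmp/venv-tox/bin:$PATH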
11:46:27 [transportpce-tox-verify-transportpce-master] $ /bin/bash -l /tmp/jenkins16766029595014517676.sh 11:46:27 ---> tox-run.sh 11:46:27 + PATH=/home/jenkins/.local/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/opt/puppetlabs/bin 11:46:27 + ARCHIVE_TOX_DIR=/w/workspace/transportpce-tox-verify-transportpce-master/archives/tox 11:46:27 + ARCHIVE_DOC_DIR=/w/workspace/transportpce-tox-verify-transportpce-master/archives/docs 11:46:27 + mkdir -p /w/workspace/transportpce-tox-verify-transportpce-master/archives/tox 11:46:27 + cd /w/workspace/transportpce-tox-verify-transportpce-master/. 11:46:27 + source /home/jenkins/lf-env.sh 11:46:27 + lf-activate-venv --venv-file /tmp/.toxenv tox virtualenv urllib3~=1.26.15 11:46:27 ++ mktemp -d /tmp/venv-XXXX 11:46:27 + lf_venv=/tmp/venv-3gbt 11:46:27 + local venv_file=/tmp/.os_lf_venv 11:46:27 + local python=python3 11:46:27 + local options 11:46:27 + local set_path=true 11:46:27 + local install_args= 11:46:27 ++ getopt -o np:v: -l no-path,system-site-packages,python:,venv-file: -n lf-activate-venv -- --venv-file /tmp/.toxenv tox virtualenv urllib3~=1.26.15 11:46:27 + options=' --venv-file '\''/tmp/.toxenv'\'' -- '\''tox'\'' '\''virtualenv'\'' '\''urllib3~=1.26.15'\''' 11:46:27 + eval set -- ' --venv-file '\''/tmp/.toxenv'\'' -- '\''tox'\'' '\''virtualenv'\'' '\''urllib3~=1.26.15'\''' 11:46:27 ++ set -- --venv-file /tmp/.toxenv -- tox virtualenv urllib3~=1.26.15 11:46:27 + true 11:46:27 + case $1 in 11:46:27 + venv_file=/tmp/.toxenv 11:46:27 + shift 2 11:46:27 + true 11:46:27 + case $1 in 11:46:27 + shift 11:46:27 + break 11:46:27 + case $python in 11:46:27 + local pkg_list= 11:46:27 + [[ -d /opt/pyenv ]] 11:46:27 + echo 'Setup pyenv:' 11:46:27 Setup pyenv: 11:46:27 + export PYENV_ROOT=/opt/pyenv 11:46:27 + PYENV_ROOT=/opt/pyenv 11:46:27 + export PATH=/opt/pyenv/bin:/home/jenkins/.local/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/opt/puppetlabs/bin 11:46:27 + PATH=/opt/pyenv/bin:/home/jenkins/.local/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/opt/puppetlabs/bin 11:46:27 + pyenv versions 11:46:27 system 11:46:27 3.8.20 11:46:27 3.9.20 11:46:27 3.10.15 11:46:27 * 3.11.10 (set by /w/workspace/transportpce-tox-verify-transportpce-master/.python-version) 11:46:27 + command -v pyenv 11:46:27 ++ pyenv init - --no-rehash 11:46:27 + eval 'PATH="$(bash --norc -ec '\''IFS=:; paths=($PATH); 11:46:27 for i in ${!paths[@]}; do 11:46:27 if [[ ${paths[i]} == "'\'''\''/opt/pyenv/shims'\'''\''" ]]; then unset '\''\'\'''\''paths[i]'\''\'\'''\''; 11:46:27 fi; done; 11:46:27 echo "${paths[*]}"'\'')" 11:46:27 export PATH="/opt/pyenv/shims:${PATH}" 11:46:27 export PYENV_SHELL=bash 11:46:27 source '\''/opt/pyenv/libexec/../completions/pyenv.bash'\'' 11:46:27 pyenv() { 11:46:27 local command 11:46:27 command="${1:-}" 11:46:27 if [ "$#" -gt 0 ]; then 11:46:27 shift 11:46:27 fi 11:46:27 11:46:27 case "$command" in 11:46:27 rehash|shell) 11:46:27 eval "$(pyenv "sh-$command" "$@")" 11:46:27 ;; 11:46:27 *) 11:46:27 command pyenv "$command" "$@" 11:46:27 ;; 11:46:27 esac 11:46:27 }' 11:46:27 +++ bash --norc -ec 'IFS=:; paths=($PATH); 11:46:27 for i in ${!paths[@]}; do 11:46:27 if [[ ${paths[i]} == "/opt/pyenv/shims" ]]; then unset '\''paths[i]'\''; 11:46:27 fi; done; 11:46:27 echo "${paths[*]}"' 11:46:27 ++ 
PATH=/opt/pyenv/bin:/home/jenkins/.local/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/opt/puppetlabs/bin 11:46:27 ++ export PATH=/opt/pyenv/shims:/opt/pyenv/bin:/home/jenkins/.local/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/opt/puppetlabs/bin 11:46:27 ++ PATH=/opt/pyenv/shims:/opt/pyenv/bin:/home/jenkins/.local/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/opt/puppetlabs/bin 11:46:27 ++ export PYENV_SHELL=bash 11:46:27 ++ PYENV_SHELL=bash 11:46:27 ++ source /opt/pyenv/libexec/../completions/pyenv.bash 11:46:27 +++ complete -F _pyenv pyenv 11:46:27 ++ lf-pyver python3 11:46:27 ++ local py_version_xy=python3 11:46:27 ++ local py_version_xyz= 11:46:27 ++ pyenv versions 11:46:27 ++ local command 11:46:27 ++ command=versions 11:46:27 ++ '[' 1 -gt 0 ']' 11:46:27 ++ shift 11:46:27 ++ case "$command" in 11:46:27 ++ command pyenv versions 11:46:27 ++ sed 's/^[ *]* //' 11:46:27 ++ awk '{ print $1 }' 11:46:27 ++ grep -E '^[0-9.]*[0-9]$' 11:46:28 ++ [[ ! -s /tmp/.pyenv_versions ]] 11:46:28 +++ grep '^3' /tmp/.pyenv_versions 11:46:28 +++ sort -V 11:46:28 +++ tail -n 1 11:46:28 ++ py_version_xyz=3.11.10 11:46:28 ++ [[ -z 3.11.10 ]] 11:46:28 ++ echo 3.11.10 11:46:28 ++ return 0 11:46:28 + pyenv local 3.11.10 11:46:28 + local command 11:46:28 + command=local 11:46:28 + '[' 2 -gt 0 ']' 11:46:28 + shift 11:46:28 + case "$command" in 11:46:28 + command pyenv local 3.11.10 11:46:28 + for arg in "$@" 11:46:28 + case $arg in 11:46:28 + pkg_list+='tox ' 11:46:28 + for arg in "$@" 11:46:28 + case $arg in 11:46:28 + pkg_list+='virtualenv ' 11:46:28 + for arg in "$@" 11:46:28 + case $arg in 11:46:28 + pkg_list+='urllib3~=1.26.15 ' 11:46:28 + [[ -f /tmp/.toxenv ]] 11:46:28 ++ cat /tmp/.toxenv 11:46:28 + lf_venv=/tmp/venv-aIZ4 11:46:28 + echo 'lf-activate-venv(): INFO: Reuse venv:/tmp/venv-aIZ4 from' file:/tmp/.toxenv 11:46:28 lf-activate-venv(): INFO: Reuse venv:/tmp/venv-aIZ4 from file:/tmp/.toxenv 11:46:28 + /tmp/venv-aIZ4/bin/python3 -m pip install --upgrade --quiet pip 'setuptools<66' virtualenv 11:46:29 + [[ -z tox virtualenv urllib3~=1.26.15 ]] 11:46:29 + echo 'lf-activate-venv(): INFO: Installing: tox virtualenv urllib3~=1.26.15 ' 11:46:29 lf-activate-venv(): INFO: Installing: tox virtualenv urllib3~=1.26.15 11:46:29 + /tmp/venv-aIZ4/bin/python3 -m pip install --upgrade --quiet --upgrade-strategy eager tox virtualenv urllib3~=1.26.15 11:46:30 + type python3 11:46:30 + true 11:46:30 + echo 'lf-activate-venv(): INFO: Adding /tmp/venv-aIZ4/bin to PATH' 11:46:30 lf-activate-venv(): INFO: Adding /tmp/venv-aIZ4/bin to PATH 11:46:30 + PATH=/tmp/venv-aIZ4/bin:/opt/pyenv/shims:/opt/pyenv/bin:/home/jenkins/.local/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/opt/puppetlabs/bin 11:46:30 + return 0 11:46:30 + [[ -d /opt/pyenv ]] 11:46:30 + echo '---> Setting up pyenv' 11:46:30 ---> Setting up pyenv 11:46:30 + export PYENV_ROOT=/opt/pyenv 11:46:30 + PYENV_ROOT=/opt/pyenv 11:46:30 + export PATH=/opt/pyenv/bin:/tmp/venv-aIZ4/bin:/opt/pyenv/shims:/opt/pyenv/bin:/home/jenkins/.local/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/opt/puppetlabs/bin 11:46:30 + 
PATH=/opt/pyenv/bin:/tmp/venv-aIZ4/bin:/opt/pyenv/shims:/opt/pyenv/bin:/home/jenkins/.local/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/opt/puppetlabs/bin 11:46:30 ++ pwd 11:46:30 + PYTHONPATH=/w/workspace/transportpce-tox-verify-transportpce-master 11:46:30 + export PYTHONPATH 11:46:30 + export TOX_TESTENV_PASSENV=PYTHONPATH 11:46:30 + TOX_TESTENV_PASSENV=PYTHONPATH 11:46:30 + tox --version 11:46:30 4.27.0 from /tmp/venv-aIZ4/lib/python3.11/site-packages/tox/__init__.py 11:46:30 + PARALLEL=True 11:46:30 + TOX_OPTIONS_LIST= 11:46:30 + [[ -n '' ]] 11:46:30 + case ${PARALLEL,,} in 11:46:30 + TOX_OPTIONS_LIST=' --parallel auto --parallel-live' 11:46:30 + tox --parallel auto --parallel-live 11:46:30 + tee -a /w/workspace/transportpce-tox-verify-transportpce-master/archives/tox/tox.log 11:46:31 checkbashisms: freeze> python -m pip freeze --all 11:46:32 docs-linkcheck: install_deps> python -I -m pip install -r docs/requirements.txt 11:46:32 docs: install_deps> python -I -m pip install -r docs/requirements.txt 11:46:32 buildcontroller: install_deps> python -I -m pip install 'setuptools>=7.0' -r /w/workspace/transportpce-tox-verify-transportpce-master/tests/requirements.txt -r /w/workspace/transportpce-tox-verify-transportpce-master/tests/test-requirements.txt 11:46:32 checkbashisms: pip==25.1.1,setuptools==80.3.1 11:46:32 checkbashisms: commands[0] /w/workspace/transportpce-tox-verify-transportpce-master/tests> ./fixCIcentOS8reposMirrors.sh 11:46:32 checkbashisms: commands[1] /w/workspace/transportpce-tox-verify-transportpce-master/tests> sh -c 'command checkbashisms>/dev/null || sudo yum install -y devscripts-checkbashisms || sudo yum install -y devscripts-minimal || sudo yum install -y devscripts || sudo yum install -y https://archives.fedoraproject.org/pub/archive/fedora/linux/releases/31/Everything/x86_64/os/Packages/d/devscripts-checkbashisms-2.19.6-2.fc31.x86_64.rpm || (echo "checkbashisms command not found - please install it (e.g. sudo apt-get install devscripts | yum install devscripts-minimal )" >&2 && exit 1)' 11:46:32 checkbashisms: commands[2] /w/workspace/transportpce-tox-verify-transportpce-master/tests> find . -not -path '*/\.*' -name '*.sh' -exec checkbashisms -f '{}' + 11:46:33 script ./reflectwarn.sh does not appear to have a #! interpreter line; 11:46:33 you may get strange results 11:46:33 checkbashisms: OK ✔ in 3.13 seconds 11:46:33 pre-commit: install_deps> python -I -m pip install pre-commit 11:46:36 pre-commit: freeze> python -m pip freeze --all 11:46:36 pre-commit: cfgv==3.4.0,distlib==0.3.9,filelock==3.18.0,identify==2.6.12,nodeenv==1.9.1,pip==25.1.1,platformdirs==4.3.8,pre_commit==4.2.0,PyYAML==6.0.2,setuptools==80.3.1,virtualenv==20.31.2 11:46:36 pre-commit: commands[0] /w/workspace/transportpce-tox-verify-transportpce-master/tests> ./fixCIcentOS8reposMirrors.sh 11:46:36 pre-commit: commands[1] /w/workspace/transportpce-tox-verify-transportpce-master/tests> sh -c 'which cpan || sudo yum install -y perl-CPAN || (echo "cpan command not found - please install it (e.g. sudo apt-get install perl-modules | yum install perl-CPAN )" >&2 && exit 1)' 11:46:36 /usr/bin/cpan 11:46:36 pre-commit: commands[2] /w/workspace/transportpce-tox-verify-transportpce-master/tests> pre-commit run --all-files --show-diff-on-failure 11:46:36 [WARNING] hook id `remove-tabs` uses deprecated stage names (commit) which will be removed in a future version. run: `pre-commit migrate-config` to automatically fix this. 
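Because PARALLEL=True, tox-run.sh launches every tox environment concurrently and tees the output into archives/tox/tox.log, which is why the environment logs below interleave. A sketch of the equivalent local invocation from the repository checkout:

    export PYTHONPATH=$PWD
    export TOX_TESTENV_PASSENV=PYTHONPATH
    tox --parallel auto --parallel-live | tee tox.log     # all environments, as in this job
    tox -e checkbashisms,pre-commit,pylint                # or only selected envs, run sequentially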
11:46:36 [WARNING] hook id `perltidy` uses deprecated stage names (commit) which will be removed in a future version. run: `pre-commit migrate-config` to automatically fix this. 11:46:36 [INFO] Initializing environment for https://github.com/pre-commit/pre-commit-hooks. 11:46:37 [WARNING] repo `https://github.com/pre-commit/pre-commit-hooks` uses deprecated stage names (commit, push) which will be removed in a future version. Hint: often `pre-commit autoupdate --repo https://github.com/pre-commit/pre-commit-hooks` will fix this. if it does not -- consider reporting an issue to that repo. 11:46:37 [INFO] Initializing environment for https://github.com/jorisroovers/gitlint. 11:46:37 [INFO] Initializing environment for https://github.com/jorisroovers/gitlint:./gitlint-core[trusted-deps]. 11:46:37 [INFO] Initializing environment for https://github.com/Lucas-C/pre-commit-hooks. 11:46:38 [INFO] Initializing environment for https://github.com/pre-commit/mirrors-autopep8. 11:46:38 buildcontroller: freeze> python -m pip freeze --all 11:46:38 [INFO] Initializing environment for https://github.com/perltidy/perltidy. 11:46:38 buildcontroller: bcrypt==4.3.0,certifi==2025.6.15,cffi==1.17.1,charset-normalizer==3.4.2,cryptography==45.0.5,dict2xml==1.7.6,idna==3.10,iniconfig==2.1.0,lxml==5.4.0,netconf-client==3.2.0,packaging==25.0,paramiko==3.5.1,pip==25.1.1,pluggy==1.6.0,psutil==7.0.0,pycparser==2.22,Pygments==2.19.2,PyNaCl==1.5.0,pytest==8.4.1,requests==2.32.4,setuptools==80.3.1,urllib3==2.5.0 11:46:38 buildcontroller: commands[0] /w/workspace/transportpce-tox-verify-transportpce-master/tests> ./build_controller.sh 11:46:38 + update-java-alternatives -l 11:46:38 java-1.11.0-openjdk-amd64 1111 /usr/lib/jvm/java-1.11.0-openjdk-amd64 11:46:38 java-1.17.0-openjdk-amd64 1711 /usr/lib/jvm/java-1.17.0-openjdk-amd64 11:46:38 + sudo update-java-alternatives -s java-1.21.0-openjdk-amd64 11:46:38 java-1.21.0-openjdk-amd64 2111 /usr/lib/jvm/java-1.21.0-openjdk-amd64 11:46:38 update-alternatives: error: no alternatives for jaotc 11:46:38 update-alternatives: error: no alternatives for rmic 11:46:38 + sed -n ;s/.* version "\(.*\)\.\(.*\)\..*".*$/\1/p; 11:46:38 + java -version 11:46:38 + JAVA_VER=21 11:46:38 + echo 21 11:46:38 21 11:46:38 + javac -version 11:46:38 + sed -n ;s/javac \(.*\)\.\(.*\)\..*.*$/\1/p; 11:46:39 21 11:46:39 ok, java is 21 or newer 11:46:39 + JAVAC_VER=21 11:46:39 + echo 21 11:46:39 + [ 21 -ge 21 ] 11:46:39 + [ 21 -ge 21 ] 11:46:39 + echo ok, java is 21 or newer 11:46:39 + wget -nv https://dlcdn.apache.org/maven/maven-3/3.9.10/binaries/apache-maven-3.9.10-bin.tar.gz -P /tmp 11:46:39 [INFO] Installing environment for https://github.com/pre-commit/pre-commit-hooks. 11:46:39 [INFO] Once installed this environment will be reused. 11:46:39 [INFO] This may take a few minutes... 
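For context on the buildcontroller output above: build_controller.sh switches the node to OpenJDK 21 and refuses to continue if the detected major version is older. Roughly (sed expression quoted here for readability; java -version writes to stderr):

    sudo update-java-alternatives -s java-1.21.0-openjdk-amd64
    JAVA_VER=$(java -version 2>&1 | sed -n 's/.* version "\(.*\)\.\(.*\)\..*".*$/\1/p')
    [ "$JAVA_VER" -ge 21 ] && echo 'ok, java is 21 or newer'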
11:46:39 2025-07-04 11:46:39 URL:https://dlcdn.apache.org/maven/maven-3/3.9.10/binaries/apache-maven-3.9.10-bin.tar.gz [8885210/8885210] -> "/tmp/apache-maven-3.9.10-bin.tar.gz" [1] 11:46:39 + sudo mkdir -p /opt 11:46:39 + sudo tar xf /tmp/apache-maven-3.9.10-bin.tar.gz -C /opt 11:46:39 + sudo ln -s /opt/apache-maven-3.9.10 /opt/maven 11:46:39 + sudo ln -s /opt/maven/bin/mvn /usr/bin/mvn 11:46:39 + mvn --version 11:46:40 Apache Maven 3.9.10 (5f519b97e944483d878815739f519b2eade0a91d) 11:46:40 Maven home: /opt/maven 11:46:40 Java version: 21.0.5, vendor: Ubuntu, runtime: /usr/lib/jvm/java-21-openjdk-amd64 11:46:40 Default locale: en, platform encoding: UTF-8 11:46:40 OS name: "linux", version: "5.15.0-131-generic", arch: "amd64", family: "unix" 11:46:40 NOTE: Picked up JDK_JAVA_OPTIONS: 11:46:40 --add-opens=java.base/java.io=ALL-UNNAMED 11:46:40 --add-opens=java.base/java.lang=ALL-UNNAMED 11:46:40 --add-opens=java.base/java.lang.invoke=ALL-UNNAMED 11:46:40 --add-opens=java.base/java.lang.reflect=ALL-UNNAMED 11:46:40 --add-opens=java.base/java.net=ALL-UNNAMED 11:46:40 --add-opens=java.base/java.nio=ALL-UNNAMED 11:46:40 --add-opens=java.base/java.nio.charset=ALL-UNNAMED 11:46:40 --add-opens=java.base/java.nio.file=ALL-UNNAMED 11:46:40 --add-opens=java.base/java.util=ALL-UNNAMED 11:46:40 --add-opens=java.base/java.util.jar=ALL-UNNAMED 11:46:40 --add-opens=java.base/java.util.stream=ALL-UNNAMED 11:46:40 --add-opens=java.base/java.util.zip=ALL-UNNAMED 11:46:40 --add-opens java.base/sun.nio.ch=ALL-UNNAMED 11:46:40 --add-opens java.base/sun.nio.fs=ALL-UNNAMED 11:46:40 -Xlog:disable 11:46:44 [INFO] Installing environment for https://github.com/Lucas-C/pre-commit-hooks. 11:46:44 [INFO] Once installed this environment will be reused. 11:46:44 [INFO] This may take a few minutes... 11:46:52 [INFO] Installing environment for https://github.com/pre-commit/mirrors-autopep8. 11:46:52 [INFO] Once installed this environment will be reused. 11:46:52 [INFO] This may take a few minutes... 11:46:58 [INFO] Installing environment for https://github.com/perltidy/perltidy. 11:46:58 [INFO] Once installed this environment will be reused. 11:46:58 [INFO] This may take a few minutes... 
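The pre-commit environments prepared here include the gitlint-ci hook that fails later in this run (11:47:23, rule T1: title 60 > 50 characters). That 50-character limit is normally set in the repository's .gitlint file; the exact contents are an assumption, but the relevant section would look like:

    [title-max-length]
    line-length=50

Running gitlint locally on the HEAD commit before pushing surfaces the same T1 violation.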
11:47:01 docs-linkcheck: freeze> python -m pip freeze --all 11:47:01 docs: freeze> python -m pip freeze --all 11:47:01 docs-linkcheck: alabaster==1.0.0,attrs==25.3.0,babel==2.17.0,blockdiag==3.0.0,certifi==2025.6.15,charset-normalizer==3.4.2,contourpy==1.3.2,cycler==0.12.1,docutils==0.21.2,fonttools==4.58.5,funcparserlib==2.0.0a0,future==1.0.0,idna==3.10,imagesize==1.4.1,Jinja2==3.1.6,jsonschema==3.2.0,kiwisolver==1.4.8,lfdocs-conf==0.9.0,MarkupSafe==3.0.2,matplotlib==3.10.3,numpy==2.3.1,nwdiag==3.0.0,packaging==25.0,pillow==11.3.0,pip==25.1.1,Pygments==2.19.2,pyparsing==3.2.3,pyrsistent==0.20.0,python-dateutil==2.9.0.post0,PyYAML==6.0.2,requests==2.32.4,requests-file==1.5.1,roman-numerals-py==3.1.0,seqdiag==3.0.0,setuptools==80.3.1,six==1.17.0,snowballstemmer==3.0.1,Sphinx==8.2.3,sphinx-bootstrap-theme==0.8.1,sphinx-data-viewer==0.1.5,sphinx-rtd-theme==3.0.2,sphinx-tabs==3.4.7,sphinxcontrib-applehelp==2.0.0,sphinxcontrib-blockdiag==3.0.0,sphinxcontrib-devhelp==2.0.0,sphinxcontrib-htmlhelp==2.1.0,sphinxcontrib-jquery==4.1,sphinxcontrib-jsmath==1.0.1,sphinxcontrib-needs==0.7.9,sphinxcontrib-nwdiag==2.0.0,sphinxcontrib-plantuml==0.30,sphinxcontrib-qthelp==2.0.0,sphinxcontrib-seqdiag==3.0.0,sphinxcontrib-serializinghtml==2.0.0,sphinxcontrib-swaggerdoc==0.1.7,urllib3==2.5.0,webcolors==24.11.1 11:47:01 docs-linkcheck: commands[0] /w/workspace/transportpce-tox-verify-transportpce-master/tests> sphinx-build -q -b linkcheck -d /w/workspace/transportpce-tox-verify-transportpce-master/.tox/docs-linkcheck/tmp/doctrees ../docs/ /w/workspace/transportpce-tox-verify-transportpce-master/docs/_build/linkcheck 11:47:01 docs: alabaster==1.0.0,attrs==25.3.0,babel==2.17.0,blockdiag==3.0.0,certifi==2025.6.15,charset-normalizer==3.4.2,contourpy==1.3.2,cycler==0.12.1,docutils==0.21.2,fonttools==4.58.5,funcparserlib==2.0.0a0,future==1.0.0,idna==3.10,imagesize==1.4.1,Jinja2==3.1.6,jsonschema==3.2.0,kiwisolver==1.4.8,lfdocs-conf==0.9.0,MarkupSafe==3.0.2,matplotlib==3.10.3,numpy==2.3.1,nwdiag==3.0.0,packaging==25.0,pillow==11.3.0,pip==25.1.1,Pygments==2.19.2,pyparsing==3.2.3,pyrsistent==0.20.0,python-dateutil==2.9.0.post0,PyYAML==6.0.2,requests==2.32.4,requests-file==1.5.1,roman-numerals-py==3.1.0,seqdiag==3.0.0,setuptools==80.3.1,six==1.17.0,snowballstemmer==3.0.1,Sphinx==8.2.3,sphinx-bootstrap-theme==0.8.1,sphinx-data-viewer==0.1.5,sphinx-rtd-theme==3.0.2,sphinx-tabs==3.4.7,sphinxcontrib-applehelp==2.0.0,sphinxcontrib-blockdiag==3.0.0,sphinxcontrib-devhelp==2.0.0,sphinxcontrib-htmlhelp==2.1.0,sphinxcontrib-jquery==4.1,sphinxcontrib-jsmath==1.0.1,sphinxcontrib-needs==0.7.9,sphinxcontrib-nwdiag==2.0.0,sphinxcontrib-plantuml==0.30,sphinxcontrib-qthelp==2.0.0,sphinxcontrib-seqdiag==3.0.0,sphinxcontrib-serializinghtml==2.0.0,sphinxcontrib-swaggerdoc==0.1.7,urllib3==2.5.0,webcolors==24.11.1 11:47:01 docs: commands[0] /w/workspace/transportpce-tox-verify-transportpce-master/tests> sphinx-build -q -W --keep-going -b html -n -d /w/workspace/transportpce-tox-verify-transportpce-master/.tox/docs/tmp/doctrees ../docs/ /w/workspace/transportpce-tox-verify-transportpce-master/docs/_build/html 11:47:04 docs: OK ✔ in 34.01 seconds 11:47:04 pylint: install_deps> python -I -m pip install 'pylint>=2.6.0' 11:47:07 docs-linkcheck: OK ✔ in 34.9 seconds 11:47:07 pylint: freeze> python -m pip freeze --all 11:47:08 pylint: astroid==3.3.10,dill==0.4.0,isort==6.0.1,mccabe==0.7.0,pip==25.1.1,platformdirs==4.3.8,pylint==3.3.7,setuptools==80.3.1,tomlkit==0.13.3 11:47:08 pylint: commands[0] 
/w/workspace/transportpce-tox-verify-transportpce-master/tests> find transportpce_tests/ -name '*.py' -exec pylint --fail-under=10 --max-line-length=120 --disable=missing-docstring,import-error --disable=fixme --disable=duplicate-code '--module-rgx=([a-z0-9_]+$)|([0-9.]{1,30}$)' '--method-rgx=(([a-z_][a-zA-Z0-9_]{2,})|(_[a-z0-9_]*)|(__[a-zA-Z][a-zA-Z0-9_]+__))$' '--variable-rgx=[a-zA-Z_][a-zA-Z0-9_]{1,30}$' '{}' + 11:47:09 trim trailing whitespace.................................................Passed 11:47:09 Tabs remover.............................................................Passed 11:47:09 autopep8.................................................................Passed 11:47:15 perltidy.................................................................Passed 11:47:16 pre-commit: commands[3] /w/workspace/transportpce-tox-verify-transportpce-master/tests> pre-commit run gitlint-ci --hook-stage manual 11:47:16 [WARNING] hook id `remove-tabs` uses deprecated stage names (commit) which will be removed in a future version. run: `pre-commit migrate-config` to automatically fix this. 11:47:16 [WARNING] hook id `perltidy` uses deprecated stage names (commit) which will be removed in a future version. run: `pre-commit migrate-config` to automatically fix this. 11:47:16 [INFO] Installing environment for https://github.com/jorisroovers/gitlint. 11:47:16 [INFO] Once installed this environment will be reused. 11:47:16 [INFO] This may take a few minutes... 11:47:23 gitlint..................................................................Failed 11:47:23 - hook id: gitlint-ci 11:47:23 - exit code: 1 11:47:23 11:47:23 1: T1 Title exceeds max length (60>50): "Added available frequencies when occupied frequencies exists" 11:47:23 11:47:23 pre-commit: exit 1 (7.78 seconds) /w/workspace/transportpce-tox-verify-transportpce-master/tests> pre-commit run gitlint-ci --hook-stage manual pid=18005 11:47:31 11:47:31 ------------------------------------ 11:47:31 Your code has been rated at 10.00/10 11:47:31 11:48:25 pre-commit: FAIL ✖ in 50.06 seconds 11:48:25 pylint: OK ✔ in 28.59 seconds 11:48:25 buildcontroller: OK ✔ in 1 minute 53.88 seconds 11:48:25 testsPCE: install_deps> python -I -m pip install gnpy4tpce==2.4.7 'setuptools>=7.0' -r /w/workspace/transportpce-tox-verify-transportpce-master/tests/requirements.txt -r /w/workspace/transportpce-tox-verify-transportpce-master/tests/test-requirements.txt 11:48:25 sims: install_deps> python -I -m pip install 'setuptools>=7.0' -r /w/workspace/transportpce-tox-verify-transportpce-master/tests/requirements.txt -r /w/workspace/transportpce-tox-verify-transportpce-master/tests/test-requirements.txt 11:48:25 build_karaf_tests221: install_deps> python -I -m pip install 'setuptools>=7.0' -r /w/workspace/transportpce-tox-verify-transportpce-master/tests/requirements.txt -r /w/workspace/transportpce-tox-verify-transportpce-master/tests/test-requirements.txt 11:48:25 build_karaf_tests121: install_deps> python -I -m pip install 'setuptools>=7.0' -r /w/workspace/transportpce-tox-verify-transportpce-master/tests/requirements.txt -r /w/workspace/transportpce-tox-verify-transportpce-master/tests/test-requirements.txt 11:48:31 build_karaf_tests221: freeze> python -m pip freeze --all 11:48:31 sims: freeze> python -m pip freeze --all 11:48:31 build_karaf_tests121: freeze> python -m pip freeze --all 11:48:32 build_karaf_tests221: 
bcrypt==4.3.0,certifi==2025.6.15,cffi==1.17.1,charset-normalizer==3.4.2,cryptography==45.0.5,dict2xml==1.7.6,idna==3.10,iniconfig==2.1.0,lxml==5.4.0,netconf-client==3.2.0,packaging==25.0,paramiko==3.5.1,pip==25.1.1,pluggy==1.6.0,psutil==7.0.0,pycparser==2.22,Pygments==2.19.2,PyNaCl==1.5.0,pytest==8.4.1,requests==2.32.4,setuptools==80.3.1,urllib3==2.5.0 11:48:32 build_karaf_tests221: commands[0] /w/workspace/transportpce-tox-verify-transportpce-master/tests> ./build_karaf_for_tests.sh 11:48:32 sims: bcrypt==4.3.0,certifi==2025.6.15,cffi==1.17.1,charset-normalizer==3.4.2,cryptography==45.0.5,dict2xml==1.7.6,idna==3.10,iniconfig==2.1.0,lxml==5.4.0,netconf-client==3.2.0,packaging==25.0,paramiko==3.5.1,pip==25.1.1,pluggy==1.6.0,psutil==7.0.0,pycparser==2.22,Pygments==2.19.2,PyNaCl==1.5.0,pytest==8.4.1,requests==2.32.4,setuptools==80.3.1,urllib3==2.5.0 11:48:32 sims: commands[0] /w/workspace/transportpce-tox-verify-transportpce-master/tests> ./install_lightynode.sh 11:48:32 Using lighynode version 20.1.0.5 11:48:32 Installing lightynode device to ./lightynode/lightynode-openroadm-device directory 11:48:32 build_karaf_tests121: bcrypt==4.3.0,certifi==2025.6.15,cffi==1.17.1,charset-normalizer==3.4.2,cryptography==45.0.5,dict2xml==1.7.6,idna==3.10,iniconfig==2.1.0,lxml==5.4.0,netconf-client==3.2.0,packaging==25.0,paramiko==3.5.1,pip==25.1.1,pluggy==1.6.0,psutil==7.0.0,pycparser==2.22,Pygments==2.19.2,PyNaCl==1.5.0,pytest==8.4.1,requests==2.32.4,setuptools==80.3.1,urllib3==2.5.0 11:48:32 build_karaf_tests121: commands[0] /w/workspace/transportpce-tox-verify-transportpce-master/tests> ./build_karaf_for_tests.sh 11:48:32 NOTE: Picked up JDK_JAVA_OPTIONS: 11:48:32 --add-opens=java.base/java.io=ALL-UNNAMED 11:48:32 --add-opens=java.base/java.lang=ALL-UNNAMED 11:48:32 --add-opens=java.base/java.lang.invoke=ALL-UNNAMED 11:48:32 --add-opens=java.base/java.lang.reflect=ALL-UNNAMED 11:48:32 --add-opens=java.base/java.net=ALL-UNNAMED 11:48:32 --add-opens=java.base/java.nio=ALL-UNNAMED 11:48:32 --add-opens=java.base/java.nio.charset=ALL-UNNAMED 11:48:32 --add-opens=java.base/java.nio.file=ALL-UNNAMED 11:48:32 --add-opens=java.base/java.util=ALL-UNNAMED 11:48:32 --add-opens=java.base/java.util.jar=ALL-UNNAMED 11:48:32 --add-opens=java.base/java.util.stream=ALL-UNNAMED 11:48:32 --add-opens=java.base/java.util.zip=ALL-UNNAMED 11:48:32 --add-opens java.base/sun.nio.ch=ALL-UNNAMED 11:48:32 --add-opens java.base/sun.nio.fs=ALL-UNNAMED 11:48:32 -Xlog:disable 11:48:32 NOTE: Picked up JDK_JAVA_OPTIONS: 11:48:32 --add-opens=java.base/java.io=ALL-UNNAMED 11:48:32 --add-opens=java.base/java.lang=ALL-UNNAMED 11:48:32 --add-opens=java.base/java.lang.invoke=ALL-UNNAMED 11:48:32 --add-opens=java.base/java.lang.reflect=ALL-UNNAMED 11:48:32 --add-opens=java.base/java.net=ALL-UNNAMED 11:48:32 --add-opens=java.base/java.nio=ALL-UNNAMED 11:48:32 --add-opens=java.base/java.nio.charset=ALL-UNNAMED 11:48:32 --add-opens=java.base/java.nio.file=ALL-UNNAMED 11:48:32 --add-opens=java.base/java.util=ALL-UNNAMED 11:48:32 --add-opens=java.base/java.util.jar=ALL-UNNAMED 11:48:32 --add-opens=java.base/java.util.stream=ALL-UNNAMED 11:48:32 --add-opens=java.base/java.util.zip=ALL-UNNAMED 11:48:32 --add-opens java.base/sun.nio.ch=ALL-UNNAMED 11:48:32 --add-opens java.base/sun.nio.fs=ALL-UNNAMED 11:48:32 -Xlog:disable 11:48:36 sims: OK ✔ in 12.09 seconds 11:48:36 build_karaf_tests71: install_deps> python -I -m pip install 'setuptools>=7.0' -r /w/workspace/transportpce-tox-verify-transportpce-master/tests/requirements.txt -r 
/w/workspace/transportpce-tox-verify-transportpce-master/tests/test-requirements.txt 11:48:49 build_karaf_tests71: freeze> python -m pip freeze --all 11:48:49 build_karaf_tests71: bcrypt==4.3.0,certifi==2025.6.15,cffi==1.17.1,charset-normalizer==3.4.2,cryptography==45.0.5,dict2xml==1.7.6,idna==3.10,iniconfig==2.1.0,lxml==5.4.0,netconf-client==3.2.0,packaging==25.0,paramiko==3.5.1,pip==25.1.1,pluggy==1.6.0,psutil==7.0.0,pycparser==2.22,Pygments==2.19.2,PyNaCl==1.5.0,pytest==8.4.1,requests==2.32.4,setuptools==80.3.1,urllib3==2.5.0 11:48:49 build_karaf_tests71: commands[0] /w/workspace/transportpce-tox-verify-transportpce-master/tests> ./build_karaf_for_tests.sh 11:48:50 NOTE: Picked up JDK_JAVA_OPTIONS: 11:48:50 --add-opens=java.base/java.io=ALL-UNNAMED 11:48:50 --add-opens=java.base/java.lang=ALL-UNNAMED 11:48:50 --add-opens=java.base/java.lang.invoke=ALL-UNNAMED 11:48:50 --add-opens=java.base/java.lang.reflect=ALL-UNNAMED 11:48:50 --add-opens=java.base/java.net=ALL-UNNAMED 11:48:50 --add-opens=java.base/java.nio=ALL-UNNAMED 11:48:50 --add-opens=java.base/java.nio.charset=ALL-UNNAMED 11:48:50 --add-opens=java.base/java.nio.file=ALL-UNNAMED 11:48:50 --add-opens=java.base/java.util=ALL-UNNAMED 11:48:50 --add-opens=java.base/java.util.jar=ALL-UNNAMED 11:48:50 --add-opens=java.base/java.util.stream=ALL-UNNAMED 11:48:50 --add-opens=java.base/java.util.zip=ALL-UNNAMED 11:48:50 --add-opens java.base/sun.nio.ch=ALL-UNNAMED 11:48:50 --add-opens java.base/sun.nio.fs=ALL-UNNAMED 11:48:50 -Xlog:disable 11:49:15 build_karaf_tests121: OK ✔ in 50.35 seconds 11:49:15 build_karaf_tests190: install_deps> python -I -m pip install 'setuptools>=7.0' -r /w/workspace/transportpce-tox-verify-transportpce-master/tests/requirements.txt -r /w/workspace/transportpce-tox-verify-transportpce-master/tests/test-requirements.txt 11:49:15 build_karaf_tests221: OK ✔ in 51.3 seconds 11:49:15 build_karaf_tests_hybrid: install_deps> python -I -m pip install 'setuptools>=7.0' -r /w/workspace/transportpce-tox-verify-transportpce-master/tests/requirements.txt -r /w/workspace/transportpce-tox-verify-transportpce-master/tests/test-requirements.txt 11:49:21 build_karaf_tests190: freeze> python -m pip freeze --all 11:49:22 build_karaf_tests190: bcrypt==4.3.0,certifi==2025.6.15,cffi==1.17.1,charset-normalizer==3.4.2,cryptography==45.0.5,dict2xml==1.7.6,idna==3.10,iniconfig==2.1.0,lxml==5.4.0,netconf-client==3.2.0,packaging==25.0,paramiko==3.5.1,pip==25.1.1,pluggy==1.6.0,psutil==7.0.0,pycparser==2.22,Pygments==2.19.2,PyNaCl==1.5.0,pytest==8.4.1,requests==2.32.4,setuptools==80.3.1,urllib3==2.5.0 11:49:22 build_karaf_tests190: commands[0] /w/workspace/transportpce-tox-verify-transportpce-master/tests> ./build_karaf_for_tests.sh 11:49:22 NOTE: Picked up JDK_JAVA_OPTIONS: 11:49:22 --add-opens=java.base/java.io=ALL-UNNAMED 11:49:22 --add-opens=java.base/java.lang=ALL-UNNAMED 11:49:22 --add-opens=java.base/java.lang.invoke=ALL-UNNAMED 11:49:22 --add-opens=java.base/java.lang.reflect=ALL-UNNAMED 11:49:22 --add-opens=java.base/java.net=ALL-UNNAMED 11:49:22 --add-opens=java.base/java.nio=ALL-UNNAMED 11:49:22 --add-opens=java.base/java.nio.charset=ALL-UNNAMED 11:49:22 --add-opens=java.base/java.nio.file=ALL-UNNAMED 11:49:22 --add-opens=java.base/java.util=ALL-UNNAMED 11:49:22 --add-opens=java.base/java.util.jar=ALL-UNNAMED 11:49:22 --add-opens=java.base/java.util.stream=ALL-UNNAMED 11:49:22 --add-opens=java.base/java.util.zip=ALL-UNNAMED 11:49:22 --add-opens java.base/sun.nio.ch=ALL-UNNAMED 11:49:22 --add-opens java.base/sun.nio.fs=ALL-UNNAMED 
11:49:22 -Xlog:disable 11:49:22 build_karaf_tests_hybrid: freeze> python -m pip freeze --all 11:49:22 build_karaf_tests_hybrid: bcrypt==4.3.0,certifi==2025.6.15,cffi==1.17.1,charset-normalizer==3.4.2,cryptography==45.0.5,dict2xml==1.7.6,idna==3.10,iniconfig==2.1.0,lxml==5.4.0,netconf-client==3.2.0,packaging==25.0,paramiko==3.5.1,pip==25.1.1,pluggy==1.6.0,psutil==7.0.0,pycparser==2.22,Pygments==2.19.2,PyNaCl==1.5.0,pytest==8.4.1,requests==2.32.4,setuptools==80.3.1,urllib3==2.5.0 11:49:22 build_karaf_tests_hybrid: commands[0] /w/workspace/transportpce-tox-verify-transportpce-master/tests> ./build_karaf_for_tests.sh 11:49:22 NOTE: Picked up JDK_JAVA_OPTIONS: 11:49:22 --add-opens=java.base/java.io=ALL-UNNAMED 11:49:22 --add-opens=java.base/java.lang=ALL-UNNAMED 11:49:22 --add-opens=java.base/java.lang.invoke=ALL-UNNAMED 11:49:22 --add-opens=java.base/java.lang.reflect=ALL-UNNAMED 11:49:22 --add-opens=java.base/java.net=ALL-UNNAMED 11:49:22 --add-opens=java.base/java.nio=ALL-UNNAMED 11:49:22 --add-opens=java.base/java.nio.charset=ALL-UNNAMED 11:49:22 --add-opens=java.base/java.nio.file=ALL-UNNAMED 11:49:22 --add-opens=java.base/java.util=ALL-UNNAMED 11:49:22 --add-opens=java.base/java.util.jar=ALL-UNNAMED 11:49:22 --add-opens=java.base/java.util.stream=ALL-UNNAMED 11:49:22 --add-opens=java.base/java.util.zip=ALL-UNNAMED 11:49:22 --add-opens java.base/sun.nio.ch=ALL-UNNAMED 11:49:22 --add-opens java.base/sun.nio.fs=ALL-UNNAMED 11:49:22 -Xlog:disable 11:49:23 testsPCE: freeze> python -m pip freeze --all 11:49:23 testsPCE: bcrypt==4.3.0,certifi==2025.6.15,cffi==1.17.1,charset-normalizer==3.4.2,click==8.2.1,contourpy==1.3.2,cryptography==3.3.2,cycler==0.12.1,dict2xml==1.7.6,Flask==2.1.3,Flask-Injector==0.14.0,fonttools==4.58.5,gnpy4tpce==2.4.7,idna==3.10,iniconfig==2.1.0,injector==0.22.0,itsdangerous==2.2.0,Jinja2==3.1.6,kiwisolver==1.4.8,lxml==5.4.0,MarkupSafe==3.0.2,matplotlib==3.10.3,netconf-client==3.2.0,networkx==2.8.8,numpy==1.26.4,packaging==25.0,pandas==1.5.3,paramiko==3.5.1,pbr==5.11.1,pillow==11.3.0,pip==25.1.1,pluggy==1.6.0,psutil==7.0.0,pycparser==2.22,Pygments==2.19.2,PyNaCl==1.5.0,pyparsing==3.2.3,pytest==8.4.1,python-dateutil==2.9.0.post0,pytz==2025.2,requests==2.32.4,scipy==1.16.0,setuptools==50.3.2,six==1.17.0,urllib3==2.5.0,Werkzeug==2.0.3,xlrd==1.2.0 11:49:23 testsPCE: commands[0] /w/workspace/transportpce-tox-verify-transportpce-master/tests> ./launch_tests.sh pce 11:49:23 pytest -q transportpce_tests/pce/test01_pce.py 11:50:01 build_karaf_tests71: OK ✔ in 50.37 seconds 11:50:01 build_karaf_tests190: OK ✔ in 46.89 seconds 11:50:01 tests190: install_deps> python -I -m pip install 'setuptools>=7.0' -r /w/workspace/transportpce-tox-verify-transportpce-master/tests/requirements.txt -r /w/workspace/transportpce-tox-verify-transportpce-master/tests/test-requirements.txt 11:50:09 build_karaf_tests_hybrid: OK ✔ in 47.6 seconds 11:50:09 tests190: freeze> python -m pip freeze --all 11:50:10 tests190: bcrypt==4.3.0,certifi==2025.6.15,cffi==1.17.1,charset-normalizer==3.4.2,cryptography==45.0.5,dict2xml==1.7.6,idna==3.10,iniconfig==2.1.0,lxml==5.4.0,netconf-client==3.2.0,packaging==25.0,paramiko==3.5.1,pip==25.1.1,pluggy==1.6.0,psutil==7.0.0,pycparser==2.22,Pygments==2.19.2,PyNaCl==1.5.0,pytest==8.4.1,requests==2.32.4,setuptools==80.3.1,urllib3==2.5.0 11:50:10 tests190: commands[0] /w/workspace/transportpce-tox-verify-transportpce-master/tests> ./launch_tests.sh oc 11:50:10 using environment variables from ./karafoc.env 11:50:10 pytest -q transportpce_tests/oc/test01_portmapping.py 11:50:18 
........................ [100%] 11:50:54 10 passed in 43.74s 11:50:54 pytest -q transportpce_tests/oc/test02_topology.py 11:50:56 ........ [100%] 11:51:22 20 passed in 118.67s (0:01:58) 11:51:22 pytest -q transportpce_tests/pce/test02_pce_400G.py 11:51:28 ................. [100%] 11:51:52 14 passed in 57.98s 11:51:52 pytest -q transportpce_tests/oc/test03_renderer.py 11:51:53 ....... [100%] 11:52:11 12 passed in 48.13s 11:52:11 pytest -q transportpce_tests/pce/test03_gnpy.py 11:52:16 ........................ [100%] 11:52:43 19 passed in 51.00s 11:52:45 ... [100%] 11:52:53 8 passed in 41.82s 11:52:53 pytest -q transportpce_tests/pce/test04_pce_bug_fix.py 11:53:25 ... [100%] 11:53:30 3 passed in 36.55s 11:53:30 tests190: OK ✔ in 2 minutes 42.14 seconds 11:53:30 testsPCE: OK ✔ in 5 minutes 5.75 seconds 11:53:30 tests_tapi: install_deps> python -I -m pip install 'setuptools>=7.0' -r /w/workspace/transportpce-tox-verify-transportpce-master/tests/requirements.txt -r /w/workspace/transportpce-tox-verify-transportpce-master/tests/test-requirements.txt 11:53:36 tests_tapi: freeze> python -m pip freeze --all 11:53:36 tests_tapi: bcrypt==4.3.0,certifi==2025.6.15,cffi==1.17.1,charset-normalizer==3.4.2,cryptography==45.0.5,dict2xml==1.7.6,idna==3.10,iniconfig==2.1.0,lxml==5.4.0,netconf-client==3.2.0,packaging==25.0,paramiko==3.5.1,pip==25.1.1,pluggy==1.6.0,psutil==7.0.0,pycparser==2.22,Pygments==2.19.2,PyNaCl==1.5.0,pytest==8.4.1,requests==2.32.4,setuptools==80.3.1,urllib3==2.5.0 11:53:36 tests_tapi: commands[0] /w/workspace/transportpce-tox-verify-transportpce-master/tests> ./launch_tests.sh tapi 11:53:36 using environment variables from ./karaf221.env 11:53:36 pytest -q transportpce_tests/tapi/test01_abstracted_topology.py 11:54:42 FFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFF. [100%] 11:55:58 =================================== FAILURES =================================== 11:55:58 _____________ TransportTapitesting.test_01_get_tapi_topology_T100G _____________ 11:55:58 11:55:58 self = 11:55:58 11:55:58 def _new_conn(self) -> socket.socket: 11:55:58 """Establish a socket connection and set nodelay settings on it. 11:55:58 11:55:58 :return: New socket connection. 11:55:58 """ 11:55:58 try: 11:55:58 > sock = connection.create_connection( 11:55:58 (self._dns_host, self.port), 11:55:58 self.timeout, 11:55:58 source_address=self.source_address, 11:55:58 socket_options=self.socket_options, 11:55:58 ) 11:55:58 11:55:58 ../.tox/tests_tapi/lib/python3.11/site-packages/urllib3/connection.py:198: 11:55:58 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 11:55:58 ../.tox/tests_tapi/lib/python3.11/site-packages/urllib3/util/connection.py:85: in create_connection 11:55:58 raise err 11:55:58 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 11:55:58 11:55:58 address = ('localhost', 8183), timeout = 30, source_address = None 11:55:58 socket_options = [(6, 1, 1)] 11:55:58 11:55:58 def create_connection( 11:55:58 address: tuple[str, int], 11:55:58 timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 11:55:58 source_address: tuple[str, int] | None = None, 11:55:58 socket_options: _TYPE_SOCKET_OPTIONS | None = None, 11:55:58 ) -> socket.socket: 11:55:58 """Connect to *address* and return the socket object. 11:55:58 11:55:58 Convenience function. Connect to *address* (a 2-tuple ``(host, 11:55:58 port)``) and return the socket object. Passing the optional 11:55:58 *timeout* parameter will set the timeout on the socket instance 11:55:58 before attempting to connect. 
If no *timeout* is supplied, the 11:55:58 global default timeout setting returned by :func:`socket.getdefaulttimeout` 11:55:58 is used. If *source_address* is set it must be a tuple of (host, port) 11:55:58 for the socket to bind as a source address before making the connection. 11:55:58 An host of '' or port 0 tells the OS to use the default. 11:55:58 """ 11:55:58 11:55:58 host, port = address 11:55:58 if host.startswith("["): 11:55:58 host = host.strip("[]") 11:55:58 err = None 11:55:58 11:55:58 # Using the value from allowed_gai_family() in the context of getaddrinfo lets 11:55:58 # us select whether to work with IPv4 DNS records, IPv6 records, or both. 11:55:58 # The original create_connection function always returns all records. 11:55:58 family = allowed_gai_family() 11:55:58 11:55:58 try: 11:55:58 host.encode("idna") 11:55:58 except UnicodeError: 11:55:58 raise LocationParseError(f"'{host}', label empty or too long") from None 11:55:58 11:55:58 for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM): 11:55:58 af, socktype, proto, canonname, sa = res 11:55:58 sock = None 11:55:58 try: 11:55:58 sock = socket.socket(af, socktype, proto) 11:55:58 11:55:58 # If provided, set socket level options before connecting. 11:55:58 _set_socket_options(sock, socket_options) 11:55:58 11:55:58 if timeout is not _DEFAULT_TIMEOUT: 11:55:58 sock.settimeout(timeout) 11:55:58 if source_address: 11:55:58 sock.bind(source_address) 11:55:58 > sock.connect(sa) 11:55:58 E ConnectionRefusedError: [Errno 111] Connection refused 11:55:58 11:55:58 ../.tox/tests_tapi/lib/python3.11/site-packages/urllib3/util/connection.py:73: ConnectionRefusedError 11:55:58 11:55:58 The above exception was the direct cause of the following exception: 11:55:58 11:55:58 self = 11:55:58 method = 'POST', url = '/rests/operations/tapi-topology:get-topology-details' 11:55:58 body = '{"input": {"topology-id": "cf51c729-3699-308a-a7d0-594c6a62ebbb"}}' 11:55:58 headers = {'User-Agent': 'python-requests/2.32.4', 'Accept-Encoding': 'gzip, deflate', 'Accept': 'application/json', 'Connection': 'keep-alive', 'Content-Type': 'application/json', 'Content-Length': '66', 'Authorization': 'Basic YWRtaW46YWRtaW4='} 11:55:58 retries = Retry(total=0, connect=None, read=False, redirect=None, status=None) 11:55:58 redirect = False, assert_same_host = False 11:55:58 timeout = Timeout(connect=30, read=30, total=None), pool_timeout = None 11:55:58 release_conn = False, chunked = False, body_pos = None, preload_content = False 11:55:58 decode_content = False, response_kw = {} 11:55:58 parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/rests/operations/tapi-topology:get-topology-details', query=None, fragment=None) 11:55:58 destination_scheme = None, conn = None, release_this_conn = True 11:55:58 http_tunnel_required = False, err = None, clean_exit = False 11:55:58 11:55:58 def urlopen( # type: ignore[override] 11:55:58 self, 11:55:58 method: str, 11:55:58 url: str, 11:55:58 body: _TYPE_BODY | None = None, 11:55:58 headers: typing.Mapping[str, str] | None = None, 11:55:58 retries: Retry | bool | int | None = None, 11:55:58 redirect: bool = True, 11:55:58 assert_same_host: bool = True, 11:55:58 timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 11:55:58 pool_timeout: int | None = None, 11:55:58 release_conn: bool | None = None, 11:55:58 chunked: bool = False, 11:55:58 body_pos: _TYPE_BODY_POSITION | None = None, 11:55:58 preload_content: bool = True, 11:55:58 decode_content: bool = True, 11:55:58 **response_kw: typing.Any, 11:55:58 ) -> 
BaseHTTPResponse: 11:55:58 """ 11:55:58 Get a connection from the pool and perform an HTTP request. This is the 11:55:58 lowest level call for making a request, so you'll need to specify all 11:55:58 the raw details. 11:55:58 11:55:58 .. note:: 11:55:58 11:55:58 More commonly, it's appropriate to use a convenience method 11:55:58 such as :meth:`request`. 11:55:58 11:55:58 .. note:: 11:55:58 11:55:58 `release_conn` will only behave as expected if 11:55:58 `preload_content=False` because we want to make 11:55:58 `preload_content=False` the default behaviour someday soon without 11:55:58 breaking backwards compatibility. 11:55:58 11:55:58 :param method: 11:55:58 HTTP request method (such as GET, POST, PUT, etc.) 11:55:58 11:55:58 :param url: 11:55:58 The URL to perform the request on. 11:55:58 11:55:58 :param body: 11:55:58 Data to send in the request body, either :class:`str`, :class:`bytes`, 11:55:58 an iterable of :class:`str`/:class:`bytes`, or a file-like object. 11:55:58 11:55:58 :param headers: 11:55:58 Dictionary of custom headers to send, such as User-Agent, 11:55:58 If-None-Match, etc. If None, pool headers are used. If provided, 11:55:58 these headers completely replace any pool-specific headers. 11:55:58 11:55:58 :param retries: 11:55:58 Configure the number of retries to allow before raising a 11:55:58 :class:`~urllib3.exceptions.MaxRetryError` exception. 11:55:58 11:55:58 If ``None`` (default) will retry 3 times, see ``Retry.DEFAULT``. Pass a 11:55:58 :class:`~urllib3.util.retry.Retry` object for fine-grained control 11:55:58 over different types of retries. 11:55:58 Pass an integer number to retry connection errors that many times, 11:55:58 but no other types of errors. Pass zero to never retry. 11:55:58 11:55:58 If ``False``, then retries are disabled and any exception is raised 11:55:58 immediately. Also, instead of raising a MaxRetryError on redirects, 11:55:58 the redirect response will be returned. 11:55:58 11:55:58 :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. 11:55:58 11:55:58 :param redirect: 11:55:58 If True, automatically handle redirects (status codes 301, 302, 11:55:58 303, 307, 308). Each redirect counts as a retry. Disabling retries 11:55:58 will disable redirect, too. 11:55:58 11:55:58 :param assert_same_host: 11:55:58 If ``True``, will make sure that the host of the pool requests is 11:55:58 consistent else will raise HostChangedError. When ``False``, you can 11:55:58 use the pool on an HTTP proxy and request foreign hosts. 11:55:58 11:55:58 :param timeout: 11:55:58 If specified, overrides the default timeout for this one 11:55:58 request. It may be a float (in seconds) or an instance of 11:55:58 :class:`urllib3.util.Timeout`. 11:55:58 11:55:58 :param pool_timeout: 11:55:58 If set and the pool is set to block=True, then this method will 11:55:58 block for ``pool_timeout`` seconds and raise EmptyPoolError if no 11:55:58 connection is available within the time period. 11:55:58 11:55:58 :param bool preload_content: 11:55:58 If True, the response's body will be preloaded into memory. 11:55:58 11:55:58 :param bool decode_content: 11:55:58 If True, will attempt to decode the body based on the 11:55:58 'content-encoding' header. 11:55:58 11:55:58 :param release_conn: 11:55:58 If False, then the urlopen call will not release the connection 11:55:58 back into the pool once a response is received (but will release if 11:55:58 you read the entire contents of the response such as when 11:55:58 `preload_content=True`). 
This is useful if you're not preloading 11:55:58 the response's content immediately. You will need to call 11:55:58 ``r.release_conn()`` on the response ``r`` to return the connection 11:55:58 back into the pool. If None, it takes the value of ``preload_content`` 11:55:58 which defaults to ``True``. 11:55:58 11:55:58 :param bool chunked: 11:55:58 If True, urllib3 will send the body using chunked transfer 11:55:58 encoding. Otherwise, urllib3 will send the body using the standard 11:55:58 content-length form. Defaults to False. 11:55:58 11:55:58 :param int body_pos: 11:55:58 Position to seek to in file-like body in the event of a retry or 11:55:58 redirect. Typically this won't need to be set because urllib3 will 11:55:58 auto-populate the value when needed. 11:55:58 """ 11:55:58 parsed_url = parse_url(url) 11:55:58 destination_scheme = parsed_url.scheme 11:55:58 11:55:58 if headers is None: 11:55:58 headers = self.headers 11:55:58 11:55:58 if not isinstance(retries, Retry): 11:55:58 retries = Retry.from_int(retries, redirect=redirect, default=self.retries) 11:55:58 11:55:58 if release_conn is None: 11:55:58 release_conn = preload_content 11:55:58 11:55:58 # Check host 11:55:58 if assert_same_host and not self.is_same_host(url): 11:55:58 raise HostChangedError(self, url, retries) 11:55:58 11:55:58 # Ensure that the URL we're connecting to is properly encoded 11:55:58 if url.startswith("/"): 11:55:58 url = to_str(_encode_target(url)) 11:55:58 else: 11:55:58 url = to_str(parsed_url.url) 11:55:58 11:55:58 conn = None 11:55:58 11:55:58 # Track whether `conn` needs to be released before 11:55:58 # returning/raising/recursing. Update this variable if necessary, and 11:55:58 # leave `release_conn` constant throughout the function. That way, if 11:55:58 # the function recurses, the original value of `release_conn` will be 11:55:58 # passed down into the recursive call, and its value will be respected. 11:55:58 # 11:55:58 # See issue #651 [1] for details. 11:55:58 # 11:55:58 # [1] 11:55:58 release_this_conn = release_conn 11:55:58 11:55:58 http_tunnel_required = connection_requires_http_tunnel( 11:55:58 self.proxy, self.proxy_config, destination_scheme 11:55:58 ) 11:55:58 11:55:58 # Merge the proxy headers. Only done when not using HTTP CONNECT. We 11:55:58 # have to copy the headers dict so we can safely change it without those 11:55:58 # changes being reflected in anyone else's copy. 11:55:58 if not http_tunnel_required: 11:55:58 headers = headers.copy() # type: ignore[attr-defined] 11:55:58 headers.update(self.proxy_headers) # type: ignore[union-attr] 11:55:58 11:55:58 # Must keep the exception bound to a separate variable or else Python 3 11:55:58 # complains about UnboundLocalError. 11:55:58 err = None 11:55:58 11:55:58 # Keep track of whether we cleanly exited the except block. This 11:55:58 # ensures we do proper cleanup in finally. 11:55:58 clean_exit = False 11:55:58 11:55:58 # Rewind body position, if needed. Record current position 11:55:58 # for future rewinds in the event of a redirect/retry. 11:55:58 body_pos = set_file_position(body, body_pos) 11:55:58 11:55:58 try: 11:55:58 # Request a connection from the queue. 11:55:58 timeout_obj = self._get_timeout(timeout) 11:55:58 conn = self._get_conn(timeout=pool_timeout) 11:55:58 11:55:58 conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] 11:55:58 11:55:58 # Is this a closed/new connection that requires CONNECT tunnelling? 
11:55:58 if self.proxy is not None and http_tunnel_required and conn.is_closed: 11:55:58 try: 11:55:58 self._prepare_proxy(conn) 11:55:58 except (BaseSSLError, OSError, SocketTimeout) as e: 11:55:58 self._raise_timeout( 11:55:58 err=e, url=self.proxy.url, timeout_value=conn.timeout 11:55:58 ) 11:55:58 raise 11:55:58 11:55:58 # If we're going to release the connection in ``finally:``, then 11:55:58 # the response doesn't need to know about the connection. Otherwise 11:55:58 # it will also try to release it and we'll have a double-release 11:55:58 # mess. 11:55:58 response_conn = conn if not release_conn else None 11:55:58 11:55:58 # Make the request on the HTTPConnection object 11:55:58 > response = self._make_request( 11:55:58 conn, 11:55:58 method, 11:55:58 url, 11:55:58 timeout=timeout_obj, 11:55:58 body=body, 11:55:58 headers=headers, 11:55:58 chunked=chunked, 11:55:58 retries=retries, 11:55:58 response_conn=response_conn, 11:55:58 preload_content=preload_content, 11:55:58 decode_content=decode_content, 11:55:58 **response_kw, 11:55:58 ) 11:55:58 11:55:58 ../.tox/tests_tapi/lib/python3.11/site-packages/urllib3/connectionpool.py:787: 11:55:58 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 11:55:58 ../.tox/tests_tapi/lib/python3.11/site-packages/urllib3/connectionpool.py:493: in _make_request 11:55:58 conn.request( 11:55:58 ../.tox/tests_tapi/lib/python3.11/site-packages/urllib3/connection.py:494: in request 11:55:58 self.endheaders() 11:55:58 /opt/pyenv/versions/3.11.10/lib/python3.11/http/client.py:1298: in endheaders 11:55:58 self._send_output(message_body, encode_chunked=encode_chunked) 11:55:58 /opt/pyenv/versions/3.11.10/lib/python3.11/http/client.py:1058: in _send_output 11:55:58 self.send(msg) 11:55:58 /opt/pyenv/versions/3.11.10/lib/python3.11/http/client.py:996: in send 11:55:58 self.connect() 11:55:58 ../.tox/tests_tapi/lib/python3.11/site-packages/urllib3/connection.py:325: in connect 11:55:58 self.sock = self._new_conn() 11:55:58 ^^^^^^^^^^^^^^^^ 11:55:58 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 11:55:58 11:55:58 self = 11:55:58 11:55:58 def _new_conn(self) -> socket.socket: 11:55:58 """Establish a socket connection and set nodelay settings on it. 11:55:58 11:55:58 :return: New socket connection. 11:55:58 """ 11:55:58 try: 11:55:58 sock = connection.create_connection( 11:55:58 (self._dns_host, self.port), 11:55:58 self.timeout, 11:55:58 source_address=self.source_address, 11:55:58 socket_options=self.socket_options, 11:55:58 ) 11:55:58 except socket.gaierror as e: 11:55:58 raise NameResolutionError(self.host, self, e) from e 11:55:58 except SocketTimeout as e: 11:55:58 raise ConnectTimeoutError( 11:55:58 self, 11:55:58 f"Connection to {self.host} timed out. 
(connect timeout={self.timeout})", 11:55:58 ) from e 11:55:58 11:55:58 except OSError as e: 11:55:58 > raise NewConnectionError( 11:55:58 self, f"Failed to establish a new connection: {e}" 11:55:58 ) from e 11:55:58 E urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused 11:55:58 11:55:58 ../.tox/tests_tapi/lib/python3.11/site-packages/urllib3/connection.py:213: NewConnectionError 11:55:58 11:55:58 The above exception was the direct cause of the following exception: 11:55:58 11:55:58 self = 11:55:58 request = , stream = False 11:55:58 timeout = Timeout(connect=30, read=30, total=None), verify = True, cert = None 11:55:58 proxies = OrderedDict() 11:55:58 11:55:58 def send( 11:55:58 self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 11:55:58 ): 11:55:58 """Sends PreparedRequest object. Returns Response object. 11:55:58 11:55:58 :param request: The :class:`PreparedRequest ` being sent. 11:55:58 :param stream: (optional) Whether to stream the request content. 11:55:58 :param timeout: (optional) How long to wait for the server to send 11:55:58 data before giving up, as a float, or a :ref:`(connect timeout, 11:55:58 read timeout) ` tuple. 11:55:58 :type timeout: float or tuple or urllib3 Timeout object 11:55:58 :param verify: (optional) Either a boolean, in which case it controls whether 11:55:58 we verify the server's TLS certificate, or a string, in which case it 11:55:58 must be a path to a CA bundle to use 11:55:58 :param cert: (optional) Any user-provided SSL certificate to be trusted. 11:55:58 :param proxies: (optional) The proxies dictionary to apply to the request. 11:55:58 :rtype: requests.Response 11:55:58 """ 11:55:58 11:55:58 try: 11:55:58 conn = self.get_connection_with_tls_context( 11:55:58 request, verify, proxies=proxies, cert=cert 11:55:58 ) 11:55:58 except LocationValueError as e: 11:55:58 raise InvalidURL(e, request=request) 11:55:58 11:55:58 self.cert_verify(conn, request.url, verify, cert) 11:55:58 url = self.request_url(request, proxies) 11:55:58 self.add_headers( 11:55:58 request, 11:55:58 stream=stream, 11:55:58 timeout=timeout, 11:55:58 verify=verify, 11:55:58 cert=cert, 11:55:58 proxies=proxies, 11:55:58 ) 11:55:58 11:55:58 chunked = not (request.body is None or "Content-Length" in request.headers) 11:55:58 11:55:58 if isinstance(timeout, tuple): 11:55:58 try: 11:55:58 connect, read = timeout 11:55:58 timeout = TimeoutSauce(connect=connect, read=read) 11:55:58 except ValueError: 11:55:58 raise ValueError( 11:55:58 f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 11:55:58 f"or a single float to set both timeouts to the same value." 
11:55:58 ) 11:55:58 elif isinstance(timeout, TimeoutSauce): 11:55:58 pass 11:55:58 else: 11:55:58 timeout = TimeoutSauce(connect=timeout, read=timeout) 11:55:58 11:55:58 try: 11:55:58 > resp = conn.urlopen( 11:55:58 method=request.method, 11:55:58 url=url, 11:55:58 body=request.body, 11:55:58 headers=request.headers, 11:55:58 redirect=False, 11:55:58 assert_same_host=False, 11:55:58 preload_content=False, 11:55:58 decode_content=False, 11:55:58 retries=self.max_retries, 11:55:58 timeout=timeout, 11:55:58 chunked=chunked, 11:55:58 ) 11:55:58 11:55:58 ../.tox/tests_tapi/lib/python3.11/site-packages/requests/adapters.py:667: 11:55:58 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 11:55:58 ../.tox/tests_tapi/lib/python3.11/site-packages/urllib3/connectionpool.py:841: in urlopen 11:55:58 retries = retries.increment( 11:55:58 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 11:55:58 11:55:58 self = Retry(total=0, connect=None, read=False, redirect=None, status=None) 11:55:58 method = 'POST', url = '/rests/operations/tapi-topology:get-topology-details' 11:55:58 response = None 11:55:58 error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused') 11:55:58 _pool = 11:55:58 _stacktrace = 11:55:58 11:55:58 def increment( 11:55:58 self, 11:55:58 method: str | None = None, 11:55:58 url: str | None = None, 11:55:58 response: BaseHTTPResponse | None = None, 11:55:58 error: Exception | None = None, 11:55:58 _pool: ConnectionPool | None = None, 11:55:58 _stacktrace: TracebackType | None = None, 11:55:58 ) -> Self: 11:55:58 """Return a new Retry object with incremented retry counters. 11:55:58 11:55:58 :param response: A response object, or None, if the server did not 11:55:58 return a response. 11:55:58 :type response: :class:`~urllib3.response.BaseHTTPResponse` 11:55:58 :param Exception error: An error encountered during the request, or 11:55:58 None if the response was received successfully. 11:55:58 11:55:58 :return: A new ``Retry`` object. 11:55:58 """ 11:55:58 if self.total is False and error: 11:55:58 # Disabled, indicate to re-raise the error. 11:55:58 raise reraise(type(error), error, _stacktrace) 11:55:58 11:55:58 total = self.total 11:55:58 if total is not None: 11:55:58 total -= 1 11:55:58 11:55:58 connect = self.connect 11:55:58 read = self.read 11:55:58 redirect = self.redirect 11:55:58 status_count = self.status 11:55:58 other = self.other 11:55:58 cause = "unknown" 11:55:58 status = None 11:55:58 redirect_location = None 11:55:58 11:55:58 if error and self._is_connection_error(error): 11:55:58 # Connect retry? 11:55:58 if connect is False: 11:55:58 raise reraise(type(error), error, _stacktrace) 11:55:58 elif connect is not None: 11:55:58 connect -= 1 11:55:58 11:55:58 elif error and self._is_read_error(error): 11:55:58 # Read retry? 11:55:58 if read is False or method is None or not self._is_method_retryable(method): 11:55:58 raise reraise(type(error), error, _stacktrace) 11:55:58 elif read is not None: 11:55:58 read -= 1 11:55:58 11:55:58 elif error: 11:55:58 # Other retry? 11:55:58 if other is not None: 11:55:58 other -= 1 11:55:58 11:55:58 elif response and response.get_redirect_location(): 11:55:58 # Redirect retry? 
11:55:58 if redirect is not None: 11:55:58 redirect -= 1 11:55:58 cause = "too many redirects" 11:55:58 response_redirect_location = response.get_redirect_location() 11:55:58 if response_redirect_location: 11:55:58 redirect_location = response_redirect_location 11:55:58 status = response.status 11:55:58 11:55:58 else: 11:55:58 # Incrementing because of a server error like a 500 in 11:55:58 # status_forcelist and the given method is in the allowed_methods 11:55:58 cause = ResponseError.GENERIC_ERROR 11:55:58 if response and response.status: 11:55:58 if status_count is not None: 11:55:58 status_count -= 1 11:55:58 cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status) 11:55:58 status = response.status 11:55:58 11:55:58 history = self.history + ( 11:55:58 RequestHistory(method, url, error, status, redirect_location), 11:55:58 ) 11:55:58 11:55:58 new_retry = self.new( 11:55:58 total=total, 11:55:58 connect=connect, 11:55:58 read=read, 11:55:58 redirect=redirect, 11:55:58 status=status_count, 11:55:58 other=other, 11:55:58 history=history, 11:55:58 ) 11:55:58 11:55:58 if new_retry.is_exhausted(): 11:55:58 reason = error or ResponseError(cause) 11:55:58 > raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type] 11:55:58 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 11:55:58 E urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=8183): Max retries exceeded with url: /rests/operations/tapi-topology:get-topology-details (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 11:55:58 11:55:58 ../.tox/tests_tapi/lib/python3.11/site-packages/urllib3/util/retry.py:519: MaxRetryError 11:55:58 11:55:58 During handling of the above exception, another exception occurred: 11:55:58 11:55:58 self = 11:55:58 11:55:58 def test_01_get_tapi_topology_T100G(self): 11:55:58 self.tapi_topo["topology-id"] = test_utils.T100GE_UUID 11:55:58 > response = test_utils.transportpce_api_rpc_request( 11:55:58 'tapi-topology', 'get-topology-details', self.tapi_topo) 11:55:58 11:55:58 transportpce_tests/tapi/test01_abstracted_topology.py:182: 11:55:58 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 11:55:58 transportpce_tests/common/test_utils.py:729: in transportpce_api_rpc_request 11:55:58 response = post_request(url, data) 11:55:58 ^^^^^^^^^^^^^^^^^^^^^^^ 11:55:58 transportpce_tests/common/test_utils.py:142: in post_request 11:55:58 return requests.request( 11:55:58 ../.tox/tests_tapi/lib/python3.11/site-packages/requests/api.py:59: in request 11:55:58 return session.request(method=method, url=url, **kwargs) 11:55:58 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 11:55:58 ../.tox/tests_tapi/lib/python3.11/site-packages/requests/sessions.py:589: in request 11:55:58 resp = self.send(prep, **send_kwargs) 11:55:58 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 11:55:58 ../.tox/tests_tapi/lib/python3.11/site-packages/requests/sessions.py:703: in send 11:55:58 r = adapter.send(request, **kwargs) 11:55:58 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 11:55:58 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 11:55:58 11:55:58 self = 11:55:58 request = , stream = False 11:55:58 timeout = Timeout(connect=30, read=30, total=None), verify = True, cert = None 11:55:58 proxies = OrderedDict() 11:55:58 11:55:58 def send( 11:55:58 self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 11:55:58 ): 11:55:58 """Sends PreparedRequest object. Returns Response object. 
11:55:58 11:55:58 :param request: The :class:`PreparedRequest ` being sent. 11:55:58 :param stream: (optional) Whether to stream the request content. 11:55:58 :param timeout: (optional) How long to wait for the server to send 11:55:58 data before giving up, as a float, or a :ref:`(connect timeout, 11:55:58 read timeout) ` tuple. 11:55:58 :type timeout: float or tuple or urllib3 Timeout object 11:55:58 :param verify: (optional) Either a boolean, in which case it controls whether 11:55:58 we verify the server's TLS certificate, or a string, in which case it 11:55:58 must be a path to a CA bundle to use 11:55:58 :param cert: (optional) Any user-provided SSL certificate to be trusted. 11:55:58 :param proxies: (optional) The proxies dictionary to apply to the request. 11:55:58 :rtype: requests.Response 11:55:58 """ 11:55:58 11:55:58 try: 11:55:58 conn = self.get_connection_with_tls_context( 11:55:58 request, verify, proxies=proxies, cert=cert 11:55:58 ) 11:55:58 except LocationValueError as e: 11:55:58 raise InvalidURL(e, request=request) 11:55:58 11:55:58 self.cert_verify(conn, request.url, verify, cert) 11:55:58 url = self.request_url(request, proxies) 11:55:58 self.add_headers( 11:55:58 request, 11:55:58 stream=stream, 11:55:58 timeout=timeout, 11:55:58 verify=verify, 11:55:58 cert=cert, 11:55:58 proxies=proxies, 11:55:58 ) 11:55:58 11:55:58 chunked = not (request.body is None or "Content-Length" in request.headers) 11:55:58 11:55:58 if isinstance(timeout, tuple): 11:55:58 try: 11:55:58 connect, read = timeout 11:55:58 timeout = TimeoutSauce(connect=connect, read=read) 11:55:58 except ValueError: 11:55:58 raise ValueError( 11:55:58 f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 11:55:58 f"or a single float to set both timeouts to the same value." 11:55:58 ) 11:55:58 elif isinstance(timeout, TimeoutSauce): 11:55:58 pass 11:55:58 else: 11:55:58 timeout = TimeoutSauce(connect=timeout, read=timeout) 11:55:58 11:55:58 try: 11:55:58 resp = conn.urlopen( 11:55:58 method=request.method, 11:55:58 url=url, 11:55:58 body=request.body, 11:55:58 headers=request.headers, 11:55:58 redirect=False, 11:55:58 assert_same_host=False, 11:55:58 preload_content=False, 11:55:58 decode_content=False, 11:55:58 retries=self.max_retries, 11:55:58 timeout=timeout, 11:55:58 chunked=chunked, 11:55:58 ) 11:55:58 11:55:58 except (ProtocolError, OSError) as err: 11:55:58 raise ConnectionError(err, request=request) 11:55:58 11:55:58 except MaxRetryError as e: 11:55:58 if isinstance(e.reason, ConnectTimeoutError): 11:55:58 # TODO: Remove this in 3.0.0: see #2811 11:55:58 if not isinstance(e.reason, NewConnectionError): 11:55:58 raise ConnectTimeout(e, request=request) 11:55:58 11:55:58 if isinstance(e.reason, ResponseError): 11:55:58 raise RetryError(e, request=request) 11:55:58 11:55:58 if isinstance(e.reason, _ProxyError): 11:55:58 raise ProxyError(e, request=request) 11:55:58 11:55:58 if isinstance(e.reason, _SSLError): 11:55:58 # This branch is for urllib3 v1.22 and later. 
11:55:58 raise SSLError(e, request=request) 11:55:58 11:55:58 > raise ConnectionError(e, request=request) 11:55:58 E requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=8183): Max retries exceeded with url: /rests/operations/tapi-topology:get-topology-details (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 11:55:58 11:55:58 ../.tox/tests_tapi/lib/python3.11/site-packages/requests/adapters.py:700: ConnectionError 11:55:58 ---------------------------- Captured stdout setup ----------------------------- 11:55:58 starting OpenDaylight... 11:55:58 starting KARAF TransportPCE build... 11:55:58 Searching for patterns in karaf.log... Pattern found! OpenDaylight started ! 11:55:58 installing tapi feature... 11:55:58 installing feature odl-transportpce-tapi 11:55:58 client: JAVA_HOME not set; results may vary 11:55:58 odl-transportpce-tapi │ 11.0.0.SNAPSHOT │ x │ Started │ odl-transportpce-tapi │ OpenDaylight :: transportpce :: tapi 11:55:58 starting simulator xpdra in OpenROADM device version 2.2.1... 11:55:58 Searching for patterns in xpdra-221.log... Pattern found! simulator for xpdra started 11:55:58 starting simulator roadma in OpenROADM device version 2.2.1... 11:55:58 Searching for patterns in roadma-221.log... Pattern found! simulator for roadma started 11:55:58 starting simulator roadmb in OpenROADM device version 2.2.1... 11:55:58 Searching for patterns in roadmb-221.log... Pattern found! simulator for roadmb started 11:55:58 starting simulator roadmc in OpenROADM device version 2.2.1... 11:55:58 Searching for patterns in roadmc-221.log... Pattern found! simulator for roadmc started 11:55:58 starting simulator xpdrc in OpenROADM device version 2.2.1... 11:55:58 Searching for patterns in xpdrc-221.log... Pattern found! simulator for xpdrc started 11:55:58 starting simulator spdra in OpenROADM device version 2.2.1... 11:55:58 Searching for patterns in spdra-221.log... Pattern found! simulator for spdra started 11:55:58 starting simulator spdrc in OpenROADM device version 2.2.1... 11:55:58 Searching for patterns in spdrc-221.log... Pattern found! simulator for spdrc started 11:55:58 ---------------------------- Captured stderr setup ----------------------------- 11:55:58 SLF4J(W): No SLF4J providers were found. 11:55:58 SLF4J(W): Defaulting to no-operation (NOP) logger implementation 11:55:58 SLF4J(W): See https://www.slf4j.org/codes.html#noProviders for further details. 11:55:58 SLF4J(W): Class path contains SLF4J bindings targeting slf4j-api versions 1.7.x or earlier. 11:55:58 SLF4J(W): Ignoring binding found at [jar:file:/w/workspace/transportpce-tox-verify-transportpce-master/karaf221/target/assembly/system/org/apache/karaf/org.apache.karaf.client/4.4.6/org.apache.karaf.client-4.4.6.jar!/org/slf4j/impl/StaticLoggerBinder.class] 11:55:58 SLF4J(W): See https://www.slf4j.org/codes.html#ignoredBindings for an explanation. 11:55:58 ----------------------------- Captured stdout call ----------------------------- 11:55:58 execution of test_01_get_tapi_topology_T100G 11:55:58 ______________ TransportTapitesting.test_02_get_tapi_topology_T0 _______________ 11:55:58 11:55:58 self = 11:55:58 11:55:58 def _new_conn(self) -> socket.socket: 11:55:58 """Establish a socket connection and set nodelay settings on it. 11:55:58 11:55:58 :return: New socket connection. 
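The captured setup output above shows how the fixture brings the environment up: it starts Karaf/OpenDaylight, installs odl-transportpce-tapi, launches the OpenROADM device simulators, and then watches each log file until a startup pattern appears. A minimal, illustrative sketch of such a wait-for-pattern helper (the project's real helper lives in transportpce_tests/common/test_utils.py; the name and defaults here are assumptions):

    import time

    def wait_for_pattern(logfile, pattern, timeout=180, poll=2.0):
        # Poll a log file until `pattern` shows up, mirroring the
        # "Searching for patterns in karaf.log... Pattern found!" messages above.
        deadline = time.time() + timeout
        while time.time() < deadline:
            try:
                with open(logfile, encoding="utf-8", errors="ignore") as fh:
                    if pattern in fh.read():
                        return True
            except FileNotFoundError:
                pass  # log file not created yet
            time.sleep(poll)
        return False

A fixture can run such a check against karaf.log and each simulator log before any RESTCONF request is attempted.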
11:55:58 """ 11:55:58 try: 11:55:58 > sock = connection.create_connection( 11:55:58 (self._dns_host, self.port), 11:55:58 self.timeout, 11:55:58 source_address=self.source_address, 11:55:58 socket_options=self.socket_options, 11:55:58 ) 11:55:58 11:55:58 ../.tox/tests_tapi/lib/python3.11/site-packages/urllib3/connection.py:198: 11:55:58 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 11:55:58 ../.tox/tests_tapi/lib/python3.11/site-packages/urllib3/util/connection.py:85: in create_connection 11:55:58 raise err 11:55:58 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 11:55:58 11:55:58 address = ('localhost', 8183), timeout = 30, source_address = None 11:55:58 socket_options = [(6, 1, 1)] 11:55:58 11:55:58 def create_connection( 11:55:58 address: tuple[str, int], 11:55:58 timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 11:55:58 source_address: tuple[str, int] | None = None, 11:55:58 socket_options: _TYPE_SOCKET_OPTIONS | None = None, 11:55:58 ) -> socket.socket: 11:55:58 """Connect to *address* and return the socket object. 11:55:58 11:55:58 Convenience function. Connect to *address* (a 2-tuple ``(host, 11:55:58 port)``) and return the socket object. Passing the optional 11:55:58 *timeout* parameter will set the timeout on the socket instance 11:55:58 before attempting to connect. If no *timeout* is supplied, the 11:55:58 global default timeout setting returned by :func:`socket.getdefaulttimeout` 11:55:58 is used. If *source_address* is set it must be a tuple of (host, port) 11:55:58 for the socket to bind as a source address before making the connection. 11:55:58 An host of '' or port 0 tells the OS to use the default. 11:55:58 """ 11:55:58 11:55:58 host, port = address 11:55:58 if host.startswith("["): 11:55:58 host = host.strip("[]") 11:55:58 err = None 11:55:58 11:55:58 # Using the value from allowed_gai_family() in the context of getaddrinfo lets 11:55:58 # us select whether to work with IPv4 DNS records, IPv6 records, or both. 11:55:58 # The original create_connection function always returns all records. 11:55:58 family = allowed_gai_family() 11:55:58 11:55:58 try: 11:55:58 host.encode("idna") 11:55:58 except UnicodeError: 11:55:58 raise LocationParseError(f"'{host}', label empty or too long") from None 11:55:58 11:55:58 for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM): 11:55:58 af, socktype, proto, canonname, sa = res 11:55:58 sock = None 11:55:58 try: 11:55:58 sock = socket.socket(af, socktype, proto) 11:55:58 11:55:58 # If provided, set socket level options before connecting. 
11:55:58 _set_socket_options(sock, socket_options) 11:55:58 11:55:58 if timeout is not _DEFAULT_TIMEOUT: 11:55:58 sock.settimeout(timeout) 11:55:58 if source_address: 11:55:58 sock.bind(source_address) 11:55:58 > sock.connect(sa) 11:55:58 E ConnectionRefusedError: [Errno 111] Connection refused 11:55:58 11:55:58 ../.tox/tests_tapi/lib/python3.11/site-packages/urllib3/util/connection.py:73: ConnectionRefusedError 11:55:58 11:55:58 The above exception was the direct cause of the following exception: 11:55:58 11:55:58 self = 11:55:58 method = 'POST', url = '/rests/operations/tapi-topology:get-topology-details' 11:55:58 body = '{"input": {"topology-id": "747c670e-7a07-3dab-b379-5b1cd17402a3"}}' 11:55:58 headers = {'User-Agent': 'python-requests/2.32.4', 'Accept-Encoding': 'gzip, deflate', 'Accept': 'application/json', 'Connection': 'keep-alive', 'Content-Type': 'application/json', 'Content-Length': '66', 'Authorization': 'Basic YWRtaW46YWRtaW4='} 11:55:58 retries = Retry(total=0, connect=None, read=False, redirect=None, status=None) 11:55:58 redirect = False, assert_same_host = False 11:55:58 timeout = Timeout(connect=30, read=30, total=None), pool_timeout = None 11:55:58 release_conn = False, chunked = False, body_pos = None, preload_content = False 11:55:58 decode_content = False, response_kw = {} 11:55:58 parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/rests/operations/tapi-topology:get-topology-details', query=None, fragment=None) 11:55:58 destination_scheme = None, conn = None, release_this_conn = True 11:55:58 http_tunnel_required = False, err = None, clean_exit = False 11:55:58 11:55:58 def urlopen( # type: ignore[override] 11:55:58 self, 11:55:58 method: str, 11:55:58 url: str, 11:55:58 body: _TYPE_BODY | None = None, 11:55:58 headers: typing.Mapping[str, str] | None = None, 11:55:58 retries: Retry | bool | int | None = None, 11:55:58 redirect: bool = True, 11:55:58 assert_same_host: bool = True, 11:55:58 timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 11:55:58 pool_timeout: int | None = None, 11:55:58 release_conn: bool | None = None, 11:55:58 chunked: bool = False, 11:55:58 body_pos: _TYPE_BODY_POSITION | None = None, 11:55:58 preload_content: bool = True, 11:55:58 decode_content: bool = True, 11:55:58 **response_kw: typing.Any, 11:55:58 ) -> BaseHTTPResponse: 11:55:58 """ 11:55:58 Get a connection from the pool and perform an HTTP request. This is the 11:55:58 lowest level call for making a request, so you'll need to specify all 11:55:58 the raw details. 11:55:58 11:55:58 .. note:: 11:55:58 11:55:58 More commonly, it's appropriate to use a convenience method 11:55:58 such as :meth:`request`. 11:55:58 11:55:58 .. note:: 11:55:58 11:55:58 `release_conn` will only behave as expected if 11:55:58 `preload_content=False` because we want to make 11:55:58 `preload_content=False` the default behaviour someday soon without 11:55:58 breaking backwards compatibility. 11:55:58 11:55:58 :param method: 11:55:58 HTTP request method (such as GET, POST, PUT, etc.) 11:55:58 11:55:58 :param url: 11:55:58 The URL to perform the request on. 11:55:58 11:55:58 :param body: 11:55:58 Data to send in the request body, either :class:`str`, :class:`bytes`, 11:55:58 an iterable of :class:`str`/:class:`bytes`, or a file-like object. 11:55:58 11:55:58 :param headers: 11:55:58 Dictionary of custom headers to send, such as User-Agent, 11:55:58 If-None-Match, etc. If None, pool headers are used. If provided, 11:55:58 these headers completely replace any pool-specific headers. 
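The frame locals above spell out the request the test is trying to make: a POST to /rests/operations/tapi-topology:get-topology-details against localhost:8183, a JSON body carrying the topology UUID, and a Basic Authorization header (the value decodes to admin:admin). A minimal sketch of reproducing the same call with requests, assuming plain HTTP as the HTTPConnectionPool in the error message suggests:

    import requests

    URL = "http://localhost:8183/rests/operations/tapi-topology:get-topology-details"
    BODY = {"input": {"topology-id": "747c670e-7a07-3dab-b379-5b1cd17402a3"}}

    # Same shape as the failing call: JSON body, basic auth, 30 s connect/read timeouts.
    response = requests.post(
        URL,
        json=BODY,
        auth=("admin", "admin"),
        headers={"Accept": "application/json"},
        timeout=(30, 30),
    )
    print(response.status_code, response.text)

With the controller down this raises requests.exceptions.ConnectionError exactly as in the traceback; with it running, the RPC should answer with the requested topology details.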
11:55:58 11:55:58 :param retries: 11:55:58 Configure the number of retries to allow before raising a 11:55:58 :class:`~urllib3.exceptions.MaxRetryError` exception. 11:55:58 11:55:58 If ``None`` (default) will retry 3 times, see ``Retry.DEFAULT``. Pass a 11:55:58 :class:`~urllib3.util.retry.Retry` object for fine-grained control 11:55:58 over different types of retries. 11:55:58 Pass an integer number to retry connection errors that many times, 11:55:58 but no other types of errors. Pass zero to never retry. 11:55:58 11:55:58 If ``False``, then retries are disabled and any exception is raised 11:55:58 immediately. Also, instead of raising a MaxRetryError on redirects, 11:55:58 the redirect response will be returned. 11:55:58 11:55:58 :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. 11:55:58 11:55:58 :param redirect: 11:55:58 If True, automatically handle redirects (status codes 301, 302, 11:55:58 303, 307, 308). Each redirect counts as a retry. Disabling retries 11:55:58 will disable redirect, too. 11:55:58 11:55:58 :param assert_same_host: 11:55:58 If ``True``, will make sure that the host of the pool requests is 11:55:58 consistent else will raise HostChangedError. When ``False``, you can 11:55:58 use the pool on an HTTP proxy and request foreign hosts. 11:55:58 11:55:58 :param timeout: 11:55:58 If specified, overrides the default timeout for this one 11:55:58 request. It may be a float (in seconds) or an instance of 11:55:58 :class:`urllib3.util.Timeout`. 11:55:58 11:55:58 :param pool_timeout: 11:55:58 If set and the pool is set to block=True, then this method will 11:55:58 block for ``pool_timeout`` seconds and raise EmptyPoolError if no 11:55:58 connection is available within the time period. 11:55:58 11:55:58 :param bool preload_content: 11:55:58 If True, the response's body will be preloaded into memory. 11:55:58 11:55:58 :param bool decode_content: 11:55:58 If True, will attempt to decode the body based on the 11:55:58 'content-encoding' header. 11:55:58 11:55:58 :param release_conn: 11:55:58 If False, then the urlopen call will not release the connection 11:55:58 back into the pool once a response is received (but will release if 11:55:58 you read the entire contents of the response such as when 11:55:58 `preload_content=True`). This is useful if you're not preloading 11:55:58 the response's content immediately. You will need to call 11:55:58 ``r.release_conn()`` on the response ``r`` to return the connection 11:55:58 back into the pool. If None, it takes the value of ``preload_content`` 11:55:58 which defaults to ``True``. 11:55:58 11:55:58 :param bool chunked: 11:55:58 If True, urllib3 will send the body using chunked transfer 11:55:58 encoding. Otherwise, urllib3 will send the body using the standard 11:55:58 content-length form. Defaults to False. 11:55:58 11:55:58 :param int body_pos: 11:55:58 Position to seek to in file-like body in the event of a retry or 11:55:58 redirect. Typically this won't need to be set because urllib3 will 11:55:58 auto-populate the value when needed. 
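The retries forms documented above (None for the default of 3, an int for connection errors only, False to disable, or a Retry object for fine-grained control) can be exercised against the same pool. A minimal sketch using urllib3 directly, with host, port and path taken from the failing request:

    import urllib3
    from urllib3.exceptions import MaxRetryError
    from urllib3.util.retry import Retry

    pool = urllib3.HTTPConnectionPool("localhost", 8183)
    try:
        resp = pool.urlopen(
            "POST",
            "/rests/operations/tapi-topology:get-topology-details",
            body=b'{"input": {"topology-id": "747c670e-7a07-3dab-b379-5b1cd17402a3"}}',
            headers={"Content-Type": "application/json"},
            # A Retry object gives per-category control; an int would only
            # retry connection errors; False would re-raise immediately.
            retries=Retry(total=2, backoff_factor=0.5),
        )
        print(resp.status)
    except MaxRetryError as exc:
        print("gave up:", exc.reason)

requests itself builds the Retry(total=0, connect=None, read=False, ...) object seen in the locals from HTTPAdapter.max_retries, which is why a single refused connection is enough to fail each test here.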
11:55:58 """ 11:55:58 parsed_url = parse_url(url) 11:55:58 destination_scheme = parsed_url.scheme 11:55:58 11:55:58 if headers is None: 11:55:58 headers = self.headers 11:55:58 11:55:58 if not isinstance(retries, Retry): 11:55:58 retries = Retry.from_int(retries, redirect=redirect, default=self.retries) 11:55:58 11:55:58 if release_conn is None: 11:55:58 release_conn = preload_content 11:55:58 11:55:58 # Check host 11:55:58 if assert_same_host and not self.is_same_host(url): 11:55:58 raise HostChangedError(self, url, retries) 11:55:58 11:55:58 # Ensure that the URL we're connecting to is properly encoded 11:55:58 if url.startswith("/"): 11:55:58 url = to_str(_encode_target(url)) 11:55:58 else: 11:55:58 url = to_str(parsed_url.url) 11:55:58 11:55:58 conn = None 11:55:58 11:55:58 # Track whether `conn` needs to be released before 11:55:58 # returning/raising/recursing. Update this variable if necessary, and 11:55:58 # leave `release_conn` constant throughout the function. That way, if 11:55:58 # the function recurses, the original value of `release_conn` will be 11:55:58 # passed down into the recursive call, and its value will be respected. 11:55:58 # 11:55:58 # See issue #651 [1] for details. 11:55:58 # 11:55:58 # [1] 11:55:58 release_this_conn = release_conn 11:55:58 11:55:58 http_tunnel_required = connection_requires_http_tunnel( 11:55:58 self.proxy, self.proxy_config, destination_scheme 11:55:58 ) 11:55:58 11:55:58 # Merge the proxy headers. Only done when not using HTTP CONNECT. We 11:55:58 # have to copy the headers dict so we can safely change it without those 11:55:58 # changes being reflected in anyone else's copy. 11:55:58 if not http_tunnel_required: 11:55:58 headers = headers.copy() # type: ignore[attr-defined] 11:55:58 headers.update(self.proxy_headers) # type: ignore[union-attr] 11:55:58 11:55:58 # Must keep the exception bound to a separate variable or else Python 3 11:55:58 # complains about UnboundLocalError. 11:55:58 err = None 11:55:58 11:55:58 # Keep track of whether we cleanly exited the except block. This 11:55:58 # ensures we do proper cleanup in finally. 11:55:58 clean_exit = False 11:55:58 11:55:58 # Rewind body position, if needed. Record current position 11:55:58 # for future rewinds in the event of a redirect/retry. 11:55:58 body_pos = set_file_position(body, body_pos) 11:55:58 11:55:58 try: 11:55:58 # Request a connection from the queue. 11:55:58 timeout_obj = self._get_timeout(timeout) 11:55:58 conn = self._get_conn(timeout=pool_timeout) 11:55:58 11:55:58 conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] 11:55:58 11:55:58 # Is this a closed/new connection that requires CONNECT tunnelling? 11:55:58 if self.proxy is not None and http_tunnel_required and conn.is_closed: 11:55:58 try: 11:55:58 self._prepare_proxy(conn) 11:55:58 except (BaseSSLError, OSError, SocketTimeout) as e: 11:55:58 self._raise_timeout( 11:55:58 err=e, url=self.proxy.url, timeout_value=conn.timeout 11:55:58 ) 11:55:58 raise 11:55:58 11:55:58 # If we're going to release the connection in ``finally:``, then 11:55:58 # the response doesn't need to know about the connection. Otherwise 11:55:58 # it will also try to release it and we'll have a double-release 11:55:58 # mess. 
11:55:58 response_conn = conn if not release_conn else None 11:55:58 11:55:58 # Make the request on the HTTPConnection object 11:55:58 > response = self._make_request( 11:55:58 conn, 11:55:58 method, 11:55:58 url, 11:55:58 timeout=timeout_obj, 11:55:58 body=body, 11:55:58 headers=headers, 11:55:58 chunked=chunked, 11:55:58 retries=retries, 11:55:58 response_conn=response_conn, 11:55:58 preload_content=preload_content, 11:55:58 decode_content=decode_content, 11:55:58 **response_kw, 11:55:58 ) 11:55:58 11:55:58 ../.tox/tests_tapi/lib/python3.11/site-packages/urllib3/connectionpool.py:787: 11:55:58 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 11:55:58 ../.tox/tests_tapi/lib/python3.11/site-packages/urllib3/connectionpool.py:493: in _make_request 11:55:58 conn.request( 11:55:58 ../.tox/tests_tapi/lib/python3.11/site-packages/urllib3/connection.py:494: in request 11:55:58 self.endheaders() 11:55:58 /opt/pyenv/versions/3.11.10/lib/python3.11/http/client.py:1298: in endheaders 11:55:58 self._send_output(message_body, encode_chunked=encode_chunked) 11:55:58 /opt/pyenv/versions/3.11.10/lib/python3.11/http/client.py:1058: in _send_output 11:55:58 self.send(msg) 11:55:58 /opt/pyenv/versions/3.11.10/lib/python3.11/http/client.py:996: in send 11:55:58 self.connect() 11:55:58 ../.tox/tests_tapi/lib/python3.11/site-packages/urllib3/connection.py:325: in connect 11:55:58 self.sock = self._new_conn() 11:55:58 ^^^^^^^^^^^^^^^^ 11:55:58 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 11:55:58 11:55:58 self = 11:55:58 11:55:58 def _new_conn(self) -> socket.socket: 11:55:58 """Establish a socket connection and set nodelay settings on it. 11:55:58 11:55:58 :return: New socket connection. 11:55:58 """ 11:55:58 try: 11:55:58 sock = connection.create_connection( 11:55:58 (self._dns_host, self.port), 11:55:58 self.timeout, 11:55:58 source_address=self.source_address, 11:55:58 socket_options=self.socket_options, 11:55:58 ) 11:55:58 except socket.gaierror as e: 11:55:58 raise NameResolutionError(self.host, self, e) from e 11:55:58 except SocketTimeout as e: 11:55:58 raise ConnectTimeoutError( 11:55:58 self, 11:55:58 f"Connection to {self.host} timed out. (connect timeout={self.timeout})", 11:55:58 ) from e 11:55:58 11:55:58 except OSError as e: 11:55:58 > raise NewConnectionError( 11:55:58 self, f"Failed to establish a new connection: {e}" 11:55:58 ) from e 11:55:58 E urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused 11:55:58 11:55:58 ../.tox/tests_tapi/lib/python3.11/site-packages/urllib3/connection.py:213: NewConnectionError 11:55:58 11:55:58 The above exception was the direct cause of the following exception: 11:55:58 11:55:58 self = 11:55:58 request = , stream = False 11:55:58 timeout = Timeout(connect=30, read=30, total=None), verify = True, cert = None 11:55:58 proxies = OrderedDict() 11:55:58 11:55:58 def send( 11:55:58 self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 11:55:58 ): 11:55:58 """Sends PreparedRequest object. Returns Response object. 11:55:58 11:55:58 :param request: The :class:`PreparedRequest ` being sent. 11:55:58 :param stream: (optional) Whether to stream the request content. 11:55:58 :param timeout: (optional) How long to wait for the server to send 11:55:58 data before giving up, as a float, or a :ref:`(connect timeout, 11:55:58 read timeout) ` tuple. 
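The timeout forms described here are interchangeable: a single float sets both values, a (connect, read) tuple sets them separately, and the adapter converts either into the urllib3 Timeout object shown in the locals (Timeout(connect=30, read=30, total=None)). A minimal sketch of the equivalent spellings:

    from urllib3.util import Timeout

    as_float  = 30                           # 30 s for connect and for each read
    as_tuple  = (5, 30)                      # 5 s to connect, 30 s per read
    as_object = Timeout(connect=5, read=30)  # what the adapter builds internally

    # e.g. requests.post(url, json=payload, auth=("admin", "admin"), timeout=as_tuple)

Passing any other tuple shape (for example a 3-tuple) trips the ValueError branch quoted above.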
11:55:58 :type timeout: float or tuple or urllib3 Timeout object 11:55:58 :param verify: (optional) Either a boolean, in which case it controls whether 11:55:58 we verify the server's TLS certificate, or a string, in which case it 11:55:58 must be a path to a CA bundle to use 11:55:58 :param cert: (optional) Any user-provided SSL certificate to be trusted. 11:55:58 :param proxies: (optional) The proxies dictionary to apply to the request. 11:55:58 :rtype: requests.Response 11:55:58 """ 11:55:58 11:55:58 try: 11:55:58 conn = self.get_connection_with_tls_context( 11:55:58 request, verify, proxies=proxies, cert=cert 11:55:58 ) 11:55:58 except LocationValueError as e: 11:55:58 raise InvalidURL(e, request=request) 11:55:58 11:55:58 self.cert_verify(conn, request.url, verify, cert) 11:55:58 url = self.request_url(request, proxies) 11:55:58 self.add_headers( 11:55:58 request, 11:55:58 stream=stream, 11:55:58 timeout=timeout, 11:55:58 verify=verify, 11:55:58 cert=cert, 11:55:58 proxies=proxies, 11:55:58 ) 11:55:58 11:55:58 chunked = not (request.body is None or "Content-Length" in request.headers) 11:55:58 11:55:58 if isinstance(timeout, tuple): 11:55:58 try: 11:55:58 connect, read = timeout 11:55:58 timeout = TimeoutSauce(connect=connect, read=read) 11:55:58 except ValueError: 11:55:58 raise ValueError( 11:55:58 f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 11:55:58 f"or a single float to set both timeouts to the same value." 11:55:58 ) 11:55:58 elif isinstance(timeout, TimeoutSauce): 11:55:58 pass 11:55:58 else: 11:55:58 timeout = TimeoutSauce(connect=timeout, read=timeout) 11:55:58 11:55:58 try: 11:55:58 > resp = conn.urlopen( 11:55:58 method=request.method, 11:55:58 url=url, 11:55:58 body=request.body, 11:55:58 headers=request.headers, 11:55:58 redirect=False, 11:55:58 assert_same_host=False, 11:55:58 preload_content=False, 11:55:58 decode_content=False, 11:55:58 retries=self.max_retries, 11:55:58 timeout=timeout, 11:55:58 chunked=chunked, 11:55:58 ) 11:55:58 11:55:58 ../.tox/tests_tapi/lib/python3.11/site-packages/requests/adapters.py:667: 11:55:58 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 11:55:58 ../.tox/tests_tapi/lib/python3.11/site-packages/urllib3/connectionpool.py:841: in urlopen 11:55:58 retries = retries.increment( 11:55:58 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 11:55:58 11:55:58 self = Retry(total=0, connect=None, read=False, redirect=None, status=None) 11:55:58 method = 'POST', url = '/rests/operations/tapi-topology:get-topology-details' 11:55:58 response = None 11:55:58 error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused') 11:55:58 _pool = 11:55:58 _stacktrace = 11:55:58 11:55:58 def increment( 11:55:58 self, 11:55:58 method: str | None = None, 11:55:58 url: str | None = None, 11:55:58 response: BaseHTTPResponse | None = None, 11:55:58 error: Exception | None = None, 11:55:58 _pool: ConnectionPool | None = None, 11:55:58 _stacktrace: TracebackType | None = None, 11:55:58 ) -> Self: 11:55:58 """Return a new Retry object with incremented retry counters. 11:55:58 11:55:58 :param response: A response object, or None, if the server did not 11:55:58 return a response. 11:55:58 :type response: :class:`~urllib3.response.BaseHTTPResponse` 11:55:58 :param Exception error: An error encountered during the request, or 11:55:58 None if the response was received successfully. 11:55:58 11:55:58 :return: A new ``Retry`` object. 
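increment() explains why the failure is immediate: the pool's Retry(total=0, connect=None, read=False) has its total decremented to -1 on the first connection error, is_exhausted() then returns True, and MaxRetryError is raised with the NewConnectionError as its reason. A minimal sketch of that bookkeeping in isolation (the conn argument to NewConnectionError is left as None purely for illustration):

    from urllib3.exceptions import MaxRetryError, NewConnectionError
    from urllib3.util.retry import Retry

    retry = Retry(total=0, connect=None, read=False)   # the value shown in the locals
    error = NewConnectionError(
        None, "Failed to establish a new connection: [Errno 111] Connection refused"
    )

    try:
        retry = retry.increment(
            method="POST",
            url="/rests/operations/tapi-topology:get-topology-details",
            error=error,
        )
    except MaxRetryError as exc:
        print("exhausted after the first error:", exc.reason)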
11:55:58 """ 11:55:58 if self.total is False and error: 11:55:58 # Disabled, indicate to re-raise the error. 11:55:58 raise reraise(type(error), error, _stacktrace) 11:55:58 11:55:58 total = self.total 11:55:58 if total is not None: 11:55:58 total -= 1 11:55:58 11:55:58 connect = self.connect 11:55:58 read = self.read 11:55:58 redirect = self.redirect 11:55:58 status_count = self.status 11:55:58 other = self.other 11:55:58 cause = "unknown" 11:55:58 status = None 11:55:58 redirect_location = None 11:55:58 11:55:58 if error and self._is_connection_error(error): 11:55:58 # Connect retry? 11:55:58 if connect is False: 11:55:58 raise reraise(type(error), error, _stacktrace) 11:55:58 elif connect is not None: 11:55:58 connect -= 1 11:55:58 11:55:58 elif error and self._is_read_error(error): 11:55:58 # Read retry? 11:55:58 if read is False or method is None or not self._is_method_retryable(method): 11:55:58 raise reraise(type(error), error, _stacktrace) 11:55:58 elif read is not None: 11:55:58 read -= 1 11:55:58 11:55:58 elif error: 11:55:58 # Other retry? 11:55:58 if other is not None: 11:55:58 other -= 1 11:55:58 11:55:58 elif response and response.get_redirect_location(): 11:55:58 # Redirect retry? 11:55:58 if redirect is not None: 11:55:58 redirect -= 1 11:55:58 cause = "too many redirects" 11:55:58 response_redirect_location = response.get_redirect_location() 11:55:58 if response_redirect_location: 11:55:58 redirect_location = response_redirect_location 11:55:58 status = response.status 11:55:58 11:55:58 else: 11:55:58 # Incrementing because of a server error like a 500 in 11:55:58 # status_forcelist and the given method is in the allowed_methods 11:55:58 cause = ResponseError.GENERIC_ERROR 11:55:58 if response and response.status: 11:55:58 if status_count is not None: 11:55:58 status_count -= 1 11:55:58 cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status) 11:55:58 status = response.status 11:55:58 11:55:58 history = self.history + ( 11:55:58 RequestHistory(method, url, error, status, redirect_location), 11:55:58 ) 11:55:58 11:55:58 new_retry = self.new( 11:55:58 total=total, 11:55:58 connect=connect, 11:55:58 read=read, 11:55:58 redirect=redirect, 11:55:58 status=status_count, 11:55:58 other=other, 11:55:58 history=history, 11:55:58 ) 11:55:58 11:55:58 if new_retry.is_exhausted(): 11:55:58 reason = error or ResponseError(cause) 11:55:58 > raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type] 11:55:58 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 11:55:58 E urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=8183): Max retries exceeded with url: /rests/operations/tapi-topology:get-topology-details (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 11:55:58 11:55:58 ../.tox/tests_tapi/lib/python3.11/site-packages/urllib3/util/retry.py:519: MaxRetryError 11:55:58 11:55:58 During handling of the above exception, another exception occurred: 11:55:58 11:55:58 self = 11:55:58 11:55:58 def test_02_get_tapi_topology_T0(self): 11:55:58 self.tapi_topo["topology-id"] = test_utils.T0_MULTILAYER_TOPO_UUID 11:55:58 > response = test_utils.transportpce_api_rpc_request( 11:55:58 'tapi-topology', 'get-topology-details', self.tapi_topo) 11:55:58 11:55:58 transportpce_tests/tapi/test01_abstracted_topology.py:198: 11:55:58 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 11:55:58 transportpce_tests/common/test_utils.py:729: in transportpce_api_rpc_request 
11:55:58 response = post_request(url, data) 11:55:58 ^^^^^^^^^^^^^^^^^^^^^^^ 11:55:58 transportpce_tests/common/test_utils.py:142: in post_request 11:55:58 return requests.request( 11:55:58 ../.tox/tests_tapi/lib/python3.11/site-packages/requests/api.py:59: in request 11:55:58 return session.request(method=method, url=url, **kwargs) 11:55:58 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 11:55:58 ../.tox/tests_tapi/lib/python3.11/site-packages/requests/sessions.py:589: in request 11:55:58 resp = self.send(prep, **send_kwargs) 11:55:58 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 11:55:58 ../.tox/tests_tapi/lib/python3.11/site-packages/requests/sessions.py:703: in send 11:55:58 r = adapter.send(request, **kwargs) 11:55:58 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 11:55:58 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 11:55:58 11:55:58 self = 11:55:58 request = , stream = False 11:55:58 timeout = Timeout(connect=30, read=30, total=None), verify = True, cert = None 11:55:58 proxies = OrderedDict() 11:55:58 11:55:58 def send( 11:55:58 self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 11:55:58 ): 11:55:58 """Sends PreparedRequest object. Returns Response object. 11:55:58 11:55:58 :param request: The :class:`PreparedRequest ` being sent. 11:55:58 :param stream: (optional) Whether to stream the request content. 11:55:58 :param timeout: (optional) How long to wait for the server to send 11:55:58 data before giving up, as a float, or a :ref:`(connect timeout, 11:55:58 read timeout) ` tuple. 11:55:58 :type timeout: float or tuple or urllib3 Timeout object 11:55:58 :param verify: (optional) Either a boolean, in which case it controls whether 11:55:58 we verify the server's TLS certificate, or a string, in which case it 11:55:58 must be a path to a CA bundle to use 11:55:58 :param cert: (optional) Any user-provided SSL certificate to be trusted. 11:55:58 :param proxies: (optional) The proxies dictionary to apply to the request. 11:55:58 :rtype: requests.Response 11:55:58 """ 11:55:58 11:55:58 try: 11:55:58 conn = self.get_connection_with_tls_context( 11:55:58 request, verify, proxies=proxies, cert=cert 11:55:58 ) 11:55:58 except LocationValueError as e: 11:55:58 raise InvalidURL(e, request=request) 11:55:58 11:55:58 self.cert_verify(conn, request.url, verify, cert) 11:55:58 url = self.request_url(request, proxies) 11:55:58 self.add_headers( 11:55:58 request, 11:55:58 stream=stream, 11:55:58 timeout=timeout, 11:55:58 verify=verify, 11:55:58 cert=cert, 11:55:58 proxies=proxies, 11:55:58 ) 11:55:58 11:55:58 chunked = not (request.body is None or "Content-Length" in request.headers) 11:55:58 11:55:58 if isinstance(timeout, tuple): 11:55:58 try: 11:55:58 connect, read = timeout 11:55:58 timeout = TimeoutSauce(connect=connect, read=read) 11:55:58 except ValueError: 11:55:58 raise ValueError( 11:55:58 f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 11:55:58 f"or a single float to set both timeouts to the same value." 
11:55:58 ) 11:55:58 elif isinstance(timeout, TimeoutSauce): 11:55:58 pass 11:55:58 else: 11:55:58 timeout = TimeoutSauce(connect=timeout, read=timeout) 11:55:58 11:55:58 try: 11:55:58 resp = conn.urlopen( 11:55:58 method=request.method, 11:55:58 url=url, 11:55:58 body=request.body, 11:55:58 headers=request.headers, 11:55:58 redirect=False, 11:55:58 assert_same_host=False, 11:55:58 preload_content=False, 11:55:58 decode_content=False, 11:55:58 retries=self.max_retries, 11:55:58 timeout=timeout, 11:55:58 chunked=chunked, 11:55:58 ) 11:55:58 11:55:58 except (ProtocolError, OSError) as err: 11:55:58 raise ConnectionError(err, request=request) 11:55:58 11:55:58 except MaxRetryError as e: 11:55:58 if isinstance(e.reason, ConnectTimeoutError): 11:55:58 # TODO: Remove this in 3.0.0: see #2811 11:55:58 if not isinstance(e.reason, NewConnectionError): 11:55:58 raise ConnectTimeout(e, request=request) 11:55:58 11:55:58 if isinstance(e.reason, ResponseError): 11:55:58 raise RetryError(e, request=request) 11:55:58 11:55:58 if isinstance(e.reason, _ProxyError): 11:55:58 raise ProxyError(e, request=request) 11:55:58 11:55:58 if isinstance(e.reason, _SSLError): 11:55:58 # This branch is for urllib3 v1.22 and later. 11:55:58 raise SSLError(e, request=request) 11:55:58 11:55:58 > raise ConnectionError(e, request=request) 11:55:58 E requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=8183): Max retries exceeded with url: /rests/operations/tapi-topology:get-topology-details (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 11:55:58 11:55:58 ../.tox/tests_tapi/lib/python3.11/site-packages/requests/adapters.py:700: ConnectionError 11:55:58 ----------------------------- Captured stdout call ----------------------------- 11:55:58 execution of test_02_get_tapi_topology_T0 11:55:58 __________________ TransportTapitesting.test_03_connect_rdmb ___________________ 11:55:58 11:55:58 self = 11:55:58 11:55:58 def _new_conn(self) -> socket.socket: 11:55:58 """Establish a socket connection and set nodelay settings on it. 11:55:58 11:55:58 :return: New socket connection. 11:55:58 """ 11:55:58 try: 11:55:58 > sock = connection.create_connection( 11:55:58 (self._dns_host, self.port), 11:55:58 self.timeout, 11:55:58 source_address=self.source_address, 11:55:58 socket_options=self.socket_options, 11:55:58 ) 11:55:58 11:55:58 ../.tox/tests_tapi/lib/python3.11/site-packages/urllib3/connection.py:198: 11:55:58 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 11:55:58 ../.tox/tests_tapi/lib/python3.11/site-packages/urllib3/util/connection.py:85: in create_connection 11:55:58 raise err 11:55:58 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 11:55:58 11:55:58 address = ('localhost', 8183), timeout = 30, source_address = None 11:55:58 socket_options = [(6, 1, 1)] 11:55:58 11:55:58 def create_connection( 11:55:58 address: tuple[str, int], 11:55:58 timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 11:55:58 source_address: tuple[str, int] | None = None, 11:55:58 socket_options: _TYPE_SOCKET_OPTIONS | None = None, 11:55:58 ) -> socket.socket: 11:55:58 """Connect to *address* and return the socket object. 11:55:58 11:55:58 Convenience function. Connect to *address* (a 2-tuple ``(host, 11:55:58 port)``) and return the socket object. Passing the optional 11:55:58 *timeout* parameter will set the timeout on the socket instance 11:55:58 before attempting to connect. 
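The except MaxRetryError cascade above is what turns the low-level urllib3 failure into requests.exceptions.ConnectionError in these tests (ConnectTimeout, RetryError, ProxyError and SSLError are only raised for other reason types). A test run can use that mapping to tell "controller not listening on 8183" apart from genuine API errors; a minimal, hypothetical probe (URL and credentials are assumptions based on the failing requests, not part of the project's test_utils API):

    import requests

    def controller_reachable(url="http://localhost:8183/rests/data",
                             auth=("admin", "admin"), timeout=5):
        # Best-effort diagnostic: True if the RESTCONF endpoint accepts TCP
        # connections and answers at all, False on the connection-refused
        # failures seen throughout this run.
        try:
            requests.get(url, auth=auth, timeout=timeout)
            return True
        except requests.exceptions.ConnectionError:
            return False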
If no *timeout* is supplied, the 11:55:58 global default timeout setting returned by :func:`socket.getdefaulttimeout` 11:55:58 is used. If *source_address* is set it must be a tuple of (host, port) 11:55:58 for the socket to bind as a source address before making the connection. 11:55:58 An host of '' or port 0 tells the OS to use the default. 11:55:58 """ 11:55:58 11:55:58 host, port = address 11:55:58 if host.startswith("["): 11:55:58 host = host.strip("[]") 11:55:58 err = None 11:55:58 11:55:58 # Using the value from allowed_gai_family() in the context of getaddrinfo lets 11:55:58 # us select whether to work with IPv4 DNS records, IPv6 records, or both. 11:55:58 # The original create_connection function always returns all records. 11:55:58 family = allowed_gai_family() 11:55:58 11:55:58 try: 11:55:58 host.encode("idna") 11:55:58 except UnicodeError: 11:55:58 raise LocationParseError(f"'{host}', label empty or too long") from None 11:55:58 11:55:58 for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM): 11:55:58 af, socktype, proto, canonname, sa = res 11:55:58 sock = None 11:55:58 try: 11:55:58 sock = socket.socket(af, socktype, proto) 11:55:58 11:55:58 # If provided, set socket level options before connecting. 11:55:58 _set_socket_options(sock, socket_options) 11:55:58 11:55:58 if timeout is not _DEFAULT_TIMEOUT: 11:55:58 sock.settimeout(timeout) 11:55:58 if source_address: 11:55:58 sock.bind(source_address) 11:55:58 > sock.connect(sa) 11:55:58 E ConnectionRefusedError: [Errno 111] Connection refused 11:55:58 11:55:58 ../.tox/tests_tapi/lib/python3.11/site-packages/urllib3/util/connection.py:73: ConnectionRefusedError 11:55:58 11:55:58 The above exception was the direct cause of the following exception: 11:55:58 11:55:58 self = 11:55:58 method = 'PUT' 11:55:58 url = '/rests/data/network-topology:network-topology/topology=topology-netconf/node=ROADM-B1' 11:55:58 body = '{"node": [{"node-id": "ROADM-B1", "netconf-node-topology:netconf-node": {"netconf-node-topology:host": "127.0.0.1", "...ff-millis": 1800000, "netconf-node-topology:backoff-multiplier": 1.5, "netconf-node-topology:keepalive-delay": 120}}]}' 11:55:58 headers = {'User-Agent': 'python-requests/2.32.4', 'Accept-Encoding': 'gzip, deflate', 'Accept': 'application/json', 'Connection': 'keep-alive', 'Content-Type': 'application/json', 'Content-Length': '710', 'Authorization': 'Basic YWRtaW46YWRtaW4='} 11:55:58 retries = Retry(total=0, connect=None, read=False, redirect=None, status=None) 11:55:58 redirect = False, assert_same_host = False 11:55:58 timeout = Timeout(connect=30, read=30, total=None), pool_timeout = None 11:55:58 release_conn = False, chunked = False, body_pos = None, preload_content = False 11:55:58 decode_content = False, response_kw = {} 11:55:58 parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/rests/data/network-topology:network-topology/topology=topology-netconf/node=ROADM-B1', query=None, fragment=None) 11:55:58 destination_scheme = None, conn = None, release_this_conn = True 11:55:58 http_tunnel_required = False, err = None, clean_exit = False 11:55:58 11:55:58 def urlopen( # type: ignore[override] 11:55:58 self, 11:55:58 method: str, 11:55:58 url: str, 11:55:58 body: _TYPE_BODY | None = None, 11:55:58 headers: typing.Mapping[str, str] | None = None, 11:55:58 retries: Retry | bool | int | None = None, 11:55:58 redirect: bool = True, 11:55:58 assert_same_host: bool = True, 11:55:58 timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 11:55:58 pool_timeout: int | None = None, 11:55:58 
release_conn: bool | None = None, 11:55:58 chunked: bool = False, 11:55:58 body_pos: _TYPE_BODY_POSITION | None = None, 11:55:58 preload_content: bool = True, 11:55:58 decode_content: bool = True, 11:55:58 **response_kw: typing.Any, 11:55:58 ) -> BaseHTTPResponse: 11:55:58 """ 11:55:58 Get a connection from the pool and perform an HTTP request. This is the 11:55:58 lowest level call for making a request, so you'll need to specify all 11:55:58 the raw details. 11:55:58 11:55:58 .. note:: 11:55:58 11:55:58 More commonly, it's appropriate to use a convenience method 11:55:58 such as :meth:`request`. 11:55:58 11:55:58 .. note:: 11:55:58 11:55:58 `release_conn` will only behave as expected if 11:55:58 `preload_content=False` because we want to make 11:55:58 `preload_content=False` the default behaviour someday soon without 11:55:58 breaking backwards compatibility. 11:55:58 11:55:58 :param method: 11:55:58 HTTP request method (such as GET, POST, PUT, etc.) 11:55:58 11:55:58 :param url: 11:55:58 The URL to perform the request on. 11:55:58 11:55:58 :param body: 11:55:58 Data to send in the request body, either :class:`str`, :class:`bytes`, 11:55:58 an iterable of :class:`str`/:class:`bytes`, or a file-like object. 11:55:58 11:55:58 :param headers: 11:55:58 Dictionary of custom headers to send, such as User-Agent, 11:55:58 If-None-Match, etc. If None, pool headers are used. If provided, 11:55:58 these headers completely replace any pool-specific headers. 11:55:58 11:55:58 :param retries: 11:55:58 Configure the number of retries to allow before raising a 11:55:58 :class:`~urllib3.exceptions.MaxRetryError` exception. 11:55:58 11:55:58 If ``None`` (default) will retry 3 times, see ``Retry.DEFAULT``. Pass a 11:55:58 :class:`~urllib3.util.retry.Retry` object for fine-grained control 11:55:58 over different types of retries. 11:55:58 Pass an integer number to retry connection errors that many times, 11:55:58 but no other types of errors. Pass zero to never retry. 11:55:58 11:55:58 If ``False``, then retries are disabled and any exception is raised 11:55:58 immediately. Also, instead of raising a MaxRetryError on redirects, 11:55:58 the redirect response will be returned. 11:55:58 11:55:58 :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. 11:55:58 11:55:58 :param redirect: 11:55:58 If True, automatically handle redirects (status codes 301, 302, 11:55:58 303, 307, 308). Each redirect counts as a retry. Disabling retries 11:55:58 will disable redirect, too. 11:55:58 11:55:58 :param assert_same_host: 11:55:58 If ``True``, will make sure that the host of the pool requests is 11:55:58 consistent else will raise HostChangedError. When ``False``, you can 11:55:58 use the pool on an HTTP proxy and request foreign hosts. 11:55:58 11:55:58 :param timeout: 11:55:58 If specified, overrides the default timeout for this one 11:55:58 request. It may be a float (in seconds) or an instance of 11:55:58 :class:`urllib3.util.Timeout`. 11:55:58 11:55:58 :param pool_timeout: 11:55:58 If set and the pool is set to block=True, then this method will 11:55:58 block for ``pool_timeout`` seconds and raise EmptyPoolError if no 11:55:58 connection is available within the time period. 11:55:58 11:55:58 :param bool preload_content: 11:55:58 If True, the response's body will be preloaded into memory. 11:55:58 11:55:58 :param bool decode_content: 11:55:58 If True, will attempt to decode the body based on the 11:55:58 'content-encoding' header. 
11:55:58 11:55:58 :param release_conn: 11:55:58 If False, then the urlopen call will not release the connection 11:55:58 back into the pool once a response is received (but will release if 11:55:58 you read the entire contents of the response such as when 11:55:58 `preload_content=True`). This is useful if you're not preloading 11:55:58 the response's content immediately. You will need to call 11:55:58 ``r.release_conn()`` on the response ``r`` to return the connection 11:55:58 back into the pool. If None, it takes the value of ``preload_content`` 11:55:58 which defaults to ``True``. 11:55:58 11:55:58 :param bool chunked: 11:55:58 If True, urllib3 will send the body using chunked transfer 11:55:58 encoding. Otherwise, urllib3 will send the body using the standard 11:55:58 content-length form. Defaults to False. 11:55:58 11:55:58 :param int body_pos: 11:55:58 Position to seek to in file-like body in the event of a retry or 11:55:58 redirect. Typically this won't need to be set because urllib3 will 11:55:58 auto-populate the value when needed. 11:55:58 """ 11:55:58 parsed_url = parse_url(url) 11:55:58 destination_scheme = parsed_url.scheme 11:55:58 11:55:58 if headers is None: 11:55:58 headers = self.headers 11:55:58 11:55:58 if not isinstance(retries, Retry): 11:55:58 retries = Retry.from_int(retries, redirect=redirect, default=self.retries) 11:55:58 11:55:58 if release_conn is None: 11:55:58 release_conn = preload_content 11:55:58 11:55:58 # Check host 11:55:58 if assert_same_host and not self.is_same_host(url): 11:55:58 raise HostChangedError(self, url, retries) 11:55:58 11:55:58 # Ensure that the URL we're connecting to is properly encoded 11:55:58 if url.startswith("/"): 11:55:58 url = to_str(_encode_target(url)) 11:55:58 else: 11:55:58 url = to_str(parsed_url.url) 11:55:58 11:55:58 conn = None 11:55:58 11:55:58 # Track whether `conn` needs to be released before 11:55:58 # returning/raising/recursing. Update this variable if necessary, and 11:55:58 # leave `release_conn` constant throughout the function. That way, if 11:55:58 # the function recurses, the original value of `release_conn` will be 11:55:58 # passed down into the recursive call, and its value will be respected. 11:55:58 # 11:55:58 # See issue #651 [1] for details. 11:55:58 # 11:55:58 # [1] 11:55:58 release_this_conn = release_conn 11:55:58 11:55:58 http_tunnel_required = connection_requires_http_tunnel( 11:55:58 self.proxy, self.proxy_config, destination_scheme 11:55:58 ) 11:55:58 11:55:58 # Merge the proxy headers. Only done when not using HTTP CONNECT. We 11:55:58 # have to copy the headers dict so we can safely change it without those 11:55:58 # changes being reflected in anyone else's copy. 11:55:58 if not http_tunnel_required: 11:55:58 headers = headers.copy() # type: ignore[attr-defined] 11:55:58 headers.update(self.proxy_headers) # type: ignore[union-attr] 11:55:58 11:55:58 # Must keep the exception bound to a separate variable or else Python 3 11:55:58 # complains about UnboundLocalError. 11:55:58 err = None 11:55:58 11:55:58 # Keep track of whether we cleanly exited the except block. This 11:55:58 # ensures we do proper cleanup in finally. 11:55:58 clean_exit = False 11:55:58 11:55:58 # Rewind body position, if needed. Record current position 11:55:58 # for future rewinds in the event of a redirect/retry. 11:55:58 body_pos = set_file_position(body, body_pos) 11:55:58 11:55:58 try: 11:55:58 # Request a connection from the queue. 
11:55:58 timeout_obj = self._get_timeout(timeout) 11:55:58 conn = self._get_conn(timeout=pool_timeout) 11:55:58 11:55:58 conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] 11:55:58 11:55:58 # Is this a closed/new connection that requires CONNECT tunnelling? 11:55:58 if self.proxy is not None and http_tunnel_required and conn.is_closed: 11:55:58 try: 11:55:58 self._prepare_proxy(conn) 11:55:58 except (BaseSSLError, OSError, SocketTimeout) as e: 11:55:58 self._raise_timeout( 11:55:58 err=e, url=self.proxy.url, timeout_value=conn.timeout 11:55:58 ) 11:55:58 raise 11:55:58 11:55:58 # If we're going to release the connection in ``finally:``, then 11:55:58 # the response doesn't need to know about the connection. Otherwise 11:55:58 # it will also try to release it and we'll have a double-release 11:55:58 # mess. 11:55:58 response_conn = conn if not release_conn else None 11:55:58 11:55:58 # Make the request on the HTTPConnection object 11:55:58 > response = self._make_request( 11:55:58 conn, 11:55:58 method, 11:55:58 url, 11:55:58 timeout=timeout_obj, 11:55:58 body=body, 11:55:58 headers=headers, 11:55:58 chunked=chunked, 11:55:58 retries=retries, 11:55:58 response_conn=response_conn, 11:55:58 preload_content=preload_content, 11:55:58 decode_content=decode_content, 11:55:58 **response_kw, 11:55:58 ) 11:55:58 11:55:58 ../.tox/tests_tapi/lib/python3.11/site-packages/urllib3/connectionpool.py:787: 11:55:58 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 11:55:58 ../.tox/tests_tapi/lib/python3.11/site-packages/urllib3/connectionpool.py:493: in _make_request 11:55:58 conn.request( 11:55:58 ../.tox/tests_tapi/lib/python3.11/site-packages/urllib3/connection.py:494: in request 11:55:58 self.endheaders() 11:55:58 /opt/pyenv/versions/3.11.10/lib/python3.11/http/client.py:1298: in endheaders 11:55:58 self._send_output(message_body, encode_chunked=encode_chunked) 11:55:58 /opt/pyenv/versions/3.11.10/lib/python3.11/http/client.py:1058: in _send_output 11:55:58 self.send(msg) 11:55:58 /opt/pyenv/versions/3.11.10/lib/python3.11/http/client.py:996: in send 11:55:58 self.connect() 11:55:58 ../.tox/tests_tapi/lib/python3.11/site-packages/urllib3/connection.py:325: in connect 11:55:58 self.sock = self._new_conn() 11:55:58 ^^^^^^^^^^^^^^^^ 11:55:58 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 11:55:58 11:55:58 self = 11:55:58 11:55:58 def _new_conn(self) -> socket.socket: 11:55:58 """Establish a socket connection and set nodelay settings on it. 11:55:58 11:55:58 :return: New socket connection. 11:55:58 """ 11:55:58 try: 11:55:58 sock = connection.create_connection( 11:55:58 (self._dns_host, self.port), 11:55:58 self.timeout, 11:55:58 source_address=self.source_address, 11:55:58 socket_options=self.socket_options, 11:55:58 ) 11:55:58 except socket.gaierror as e: 11:55:58 raise NameResolutionError(self.host, self, e) from e 11:55:58 except SocketTimeout as e: 11:55:58 raise ConnectTimeoutError( 11:55:58 self, 11:55:58 f"Connection to {self.host} timed out. 
(connect timeout={self.timeout})", 11:55:58 ) from e 11:55:58 11:55:58 except OSError as e: 11:55:58 > raise NewConnectionError( 11:55:58 self, f"Failed to establish a new connection: {e}" 11:55:58 ) from e 11:55:58 E urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused 11:55:58 11:55:58 ../.tox/tests_tapi/lib/python3.11/site-packages/urllib3/connection.py:213: NewConnectionError 11:55:58 11:55:58 The above exception was the direct cause of the following exception: 11:55:58 11:55:58 self = 11:55:58 request = , stream = False 11:55:58 timeout = Timeout(connect=30, read=30, total=None), verify = True, cert = None 11:55:58 proxies = OrderedDict() 11:55:58 11:55:58 def send( 11:55:58 self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 11:55:58 ): 11:55:58 """Sends PreparedRequest object. Returns Response object. 11:55:58 11:55:58 :param request: The :class:`PreparedRequest ` being sent. 11:55:58 :param stream: (optional) Whether to stream the request content. 11:55:58 :param timeout: (optional) How long to wait for the server to send 11:55:58 data before giving up, as a float, or a :ref:`(connect timeout, 11:55:58 read timeout) ` tuple. 11:55:58 :type timeout: float or tuple or urllib3 Timeout object 11:55:58 :param verify: (optional) Either a boolean, in which case it controls whether 11:55:58 we verify the server's TLS certificate, or a string, in which case it 11:55:58 must be a path to a CA bundle to use 11:55:58 :param cert: (optional) Any user-provided SSL certificate to be trusted. 11:55:58 :param proxies: (optional) The proxies dictionary to apply to the request. 11:55:58 :rtype: requests.Response 11:55:58 """ 11:55:58 11:55:58 try: 11:55:58 conn = self.get_connection_with_tls_context( 11:55:58 request, verify, proxies=proxies, cert=cert 11:55:58 ) 11:55:58 except LocationValueError as e: 11:55:58 raise InvalidURL(e, request=request) 11:55:58 11:55:58 self.cert_verify(conn, request.url, verify, cert) 11:55:58 url = self.request_url(request, proxies) 11:55:58 self.add_headers( 11:55:58 request, 11:55:58 stream=stream, 11:55:58 timeout=timeout, 11:55:58 verify=verify, 11:55:58 cert=cert, 11:55:58 proxies=proxies, 11:55:58 ) 11:55:58 11:55:58 chunked = not (request.body is None or "Content-Length" in request.headers) 11:55:58 11:55:58 if isinstance(timeout, tuple): 11:55:58 try: 11:55:58 connect, read = timeout 11:55:58 timeout = TimeoutSauce(connect=connect, read=read) 11:55:58 except ValueError: 11:55:58 raise ValueError( 11:55:58 f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 11:55:58 f"or a single float to set both timeouts to the same value." 
11:55:58 ) 11:55:58 elif isinstance(timeout, TimeoutSauce): 11:55:58 pass 11:55:58 else: 11:55:58 timeout = TimeoutSauce(connect=timeout, read=timeout) 11:55:58 11:55:58 try: 11:55:58 > resp = conn.urlopen( 11:55:58 method=request.method, 11:55:58 url=url, 11:55:58 body=request.body, 11:55:58 headers=request.headers, 11:55:58 redirect=False, 11:55:58 assert_same_host=False, 11:55:58 preload_content=False, 11:55:58 decode_content=False, 11:55:58 retries=self.max_retries, 11:55:58 timeout=timeout, 11:55:58 chunked=chunked, 11:55:58 ) 11:55:58 11:55:58 ../.tox/tests_tapi/lib/python3.11/site-packages/requests/adapters.py:667: 11:55:58 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 11:55:58 ../.tox/tests_tapi/lib/python3.11/site-packages/urllib3/connectionpool.py:841: in urlopen 11:55:58 retries = retries.increment( 11:55:58 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 11:55:58 11:55:58 self = Retry(total=0, connect=None, read=False, redirect=None, status=None) 11:55:58 method = 'PUT' 11:55:58 url = '/rests/data/network-topology:network-topology/topology=topology-netconf/node=ROADM-B1' 11:55:58 response = None 11:55:58 error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused') 11:55:58 _pool = 11:55:58 _stacktrace = 11:55:58 11:55:58 def increment( 11:55:58 self, 11:55:58 method: str | None = None, 11:55:58 url: str | None = None, 11:55:58 response: BaseHTTPResponse | None = None, 11:55:58 error: Exception | None = None, 11:55:58 _pool: ConnectionPool | None = None, 11:55:58 _stacktrace: TracebackType | None = None, 11:55:58 ) -> Self: 11:55:58 """Return a new Retry object with incremented retry counters. 11:55:58 11:55:58 :param response: A response object, or None, if the server did not 11:55:58 return a response. 11:55:58 :type response: :class:`~urllib3.response.BaseHTTPResponse` 11:55:58 :param Exception error: An error encountered during the request, or 11:55:58 None if the response was received successfully. 11:55:58 11:55:58 :return: A new ``Retry`` object. 11:55:58 """ 11:55:58 if self.total is False and error: 11:55:58 # Disabled, indicate to re-raise the error. 11:55:58 raise reraise(type(error), error, _stacktrace) 11:55:58 11:55:58 total = self.total 11:55:58 if total is not None: 11:55:58 total -= 1 11:55:58 11:55:58 connect = self.connect 11:55:58 read = self.read 11:55:58 redirect = self.redirect 11:55:58 status_count = self.status 11:55:58 other = self.other 11:55:58 cause = "unknown" 11:55:58 status = None 11:55:58 redirect_location = None 11:55:58 11:55:58 if error and self._is_connection_error(error): 11:55:58 # Connect retry? 11:55:58 if connect is False: 11:55:58 raise reraise(type(error), error, _stacktrace) 11:55:58 elif connect is not None: 11:55:58 connect -= 1 11:55:58 11:55:58 elif error and self._is_read_error(error): 11:55:58 # Read retry? 11:55:58 if read is False or method is None or not self._is_method_retryable(method): 11:55:58 raise reraise(type(error), error, _stacktrace) 11:55:58 elif read is not None: 11:55:58 read -= 1 11:55:58 11:55:58 elif error: 11:55:58 # Other retry? 11:55:58 if other is not None: 11:55:58 other -= 1 11:55:58 11:55:58 elif response and response.get_redirect_location(): 11:55:58 # Redirect retry? 
11:55:58 if redirect is not None: 11:55:58 redirect -= 1 11:55:58 cause = "too many redirects" 11:55:58 response_redirect_location = response.get_redirect_location() 11:55:58 if response_redirect_location: 11:55:58 redirect_location = response_redirect_location 11:55:58 status = response.status 11:55:58 11:55:58 else: 11:55:58 # Incrementing because of a server error like a 500 in 11:55:58 # status_forcelist and the given method is in the allowed_methods 11:55:58 cause = ResponseError.GENERIC_ERROR 11:55:58 if response and response.status: 11:55:58 if status_count is not None: 11:55:58 status_count -= 1 11:55:58 cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status) 11:55:58 status = response.status 11:55:58 11:55:58 history = self.history + ( 11:55:58 RequestHistory(method, url, error, status, redirect_location), 11:55:58 ) 11:55:58 11:55:58 new_retry = self.new( 11:55:58 total=total, 11:55:58 connect=connect, 11:55:58 read=read, 11:55:58 redirect=redirect, 11:55:58 status=status_count, 11:55:58 other=other, 11:55:58 history=history, 11:55:58 ) 11:55:58 11:55:58 if new_retry.is_exhausted(): 11:55:58 reason = error or ResponseError(cause) 11:55:58 > raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type] 11:55:58 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 11:55:58 E urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=8183): Max retries exceeded with url: /rests/data/network-topology:network-topology/topology=topology-netconf/node=ROADM-B1 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 11:55:58 11:55:58 ../.tox/tests_tapi/lib/python3.11/site-packages/urllib3/util/retry.py:519: MaxRetryError 11:55:58 11:55:58 During handling of the above exception, another exception occurred: 11:55:58 11:55:58 self = 11:55:58 11:55:58 def test_03_connect_rdmb(self): 11:55:58 > response = test_utils.mount_device("ROADM-B1", ('roadmb', self.NODE_VERSION)) 11:55:58 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 11:55:58 11:55:58 transportpce_tests/tapi/test01_abstracted_topology.py:205: 11:55:58 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 11:55:58 transportpce_tests/common/test_utils.py:362: in mount_device 11:55:58 response = put_request(url[RESTCONF_VERSION].format('{}', node), body) 11:55:58 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 11:55:58 transportpce_tests/common/test_utils.py:124: in put_request 11:55:58 return requests.request( 11:55:58 ../.tox/tests_tapi/lib/python3.11/site-packages/requests/api.py:59: in request 11:55:58 return session.request(method=method, url=url, **kwargs) 11:55:58 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 11:55:58 ../.tox/tests_tapi/lib/python3.11/site-packages/requests/sessions.py:589: in request 11:55:58 resp = self.send(prep, **send_kwargs) 11:55:58 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 11:55:58 ../.tox/tests_tapi/lib/python3.11/site-packages/requests/sessions.py:703: in send 11:55:58 r = adapter.send(request, **kwargs) 11:55:58 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 11:55:58 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 11:55:58 11:55:58 self = 11:55:58 request = , stream = False 11:55:58 timeout = Timeout(connect=30, read=30, total=None), verify = True, cert = None 11:55:58 proxies = OrderedDict() 11:55:58 11:55:58 def send( 11:55:58 self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 11:55:58 ): 11:55:58 """Sends 
PreparedRequest object. Returns Response object. 11:55:58 11:55:58 :param request: The :class:`PreparedRequest ` being sent. 11:55:58 :param stream: (optional) Whether to stream the request content. 11:55:58 :param timeout: (optional) How long to wait for the server to send 11:55:58 data before giving up, as a float, or a :ref:`(connect timeout, 11:55:58 read timeout) ` tuple. 11:55:58 :type timeout: float or tuple or urllib3 Timeout object 11:55:58 :param verify: (optional) Either a boolean, in which case it controls whether 11:55:58 we verify the server's TLS certificate, or a string, in which case it 11:55:58 must be a path to a CA bundle to use 11:55:58 :param cert: (optional) Any user-provided SSL certificate to be trusted. 11:55:58 :param proxies: (optional) The proxies dictionary to apply to the request. 11:55:58 :rtype: requests.Response 11:55:58 """ 11:55:58 11:55:58 try: 11:55:58 conn = self.get_connection_with_tls_context( 11:55:58 request, verify, proxies=proxies, cert=cert 11:55:58 ) 11:55:58 except LocationValueError as e: 11:55:58 raise InvalidURL(e, request=request) 11:55:58 11:55:58 self.cert_verify(conn, request.url, verify, cert) 11:55:58 url = self.request_url(request, proxies) 11:55:58 self.add_headers( 11:55:58 request, 11:55:58 stream=stream, 11:55:58 timeout=timeout, 11:55:58 verify=verify, 11:55:58 cert=cert, 11:55:58 proxies=proxies, 11:55:58 ) 11:55:58 11:55:58 chunked = not (request.body is None or "Content-Length" in request.headers) 11:55:58 11:55:58 if isinstance(timeout, tuple): 11:55:58 try: 11:55:58 connect, read = timeout 11:55:58 timeout = TimeoutSauce(connect=connect, read=read) 11:55:58 except ValueError: 11:55:58 raise ValueError( 11:55:58 f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 11:55:58 f"or a single float to set both timeouts to the same value." 11:55:58 ) 11:55:58 elif isinstance(timeout, TimeoutSauce): 11:55:58 pass 11:55:58 else: 11:55:58 timeout = TimeoutSauce(connect=timeout, read=timeout) 11:55:58 11:55:58 try: 11:55:58 resp = conn.urlopen( 11:55:58 method=request.method, 11:55:58 url=url, 11:55:58 body=request.body, 11:55:58 headers=request.headers, 11:55:58 redirect=False, 11:55:58 assert_same_host=False, 11:55:58 preload_content=False, 11:55:58 decode_content=False, 11:55:58 retries=self.max_retries, 11:55:58 timeout=timeout, 11:55:58 chunked=chunked, 11:55:58 ) 11:55:58 11:55:58 except (ProtocolError, OSError) as err: 11:55:58 raise ConnectionError(err, request=request) 11:55:58 11:55:58 except MaxRetryError as e: 11:55:58 if isinstance(e.reason, ConnectTimeoutError): 11:55:58 # TODO: Remove this in 3.0.0: see #2811 11:55:58 if not isinstance(e.reason, NewConnectionError): 11:55:58 raise ConnectTimeout(e, request=request) 11:55:58 11:55:58 if isinstance(e.reason, ResponseError): 11:55:58 raise RetryError(e, request=request) 11:55:58 11:55:58 if isinstance(e.reason, _ProxyError): 11:55:58 raise ProxyError(e, request=request) 11:55:58 11:55:58 if isinstance(e.reason, _SSLError): 11:55:58 # This branch is for urllib3 v1.22 and later. 
11:55:58 raise SSLError(e, request=request) 11:55:58 11:55:58 > raise ConnectionError(e, request=request) 11:55:58 E requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=8183): Max retries exceeded with url: /rests/data/network-topology:network-topology/topology=topology-netconf/node=ROADM-B1 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 11:55:58 11:55:58 ../.tox/tests_tapi/lib/python3.11/site-packages/requests/adapters.py:700: ConnectionError 11:55:58 ----------------------------- Captured stdout call ----------------------------- 11:55:58 execution of test_03_connect_rdmb 11:55:58 ________________ TransportTapitesting.test_04_check_tapi_topos _________________ 11:55:58 11:55:58 self = 11:55:58 11:55:58 def _new_conn(self) -> socket.socket: 11:55:58 """Establish a socket connection and set nodelay settings on it. 11:55:58 11:55:58 :return: New socket connection. 11:55:58 """ 11:55:58 try: 11:55:58 > sock = connection.create_connection( 11:55:58 (self._dns_host, self.port), 11:55:58 self.timeout, 11:55:58 source_address=self.source_address, 11:55:58 socket_options=self.socket_options, 11:55:58 ) 11:55:58 11:55:58 ../.tox/tests_tapi/lib/python3.11/site-packages/urllib3/connection.py:198: 11:55:58 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 11:55:58 ../.tox/tests_tapi/lib/python3.11/site-packages/urllib3/util/connection.py:85: in create_connection 11:55:58 raise err 11:55:58 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 11:55:58 11:55:58 address = ('localhost', 8183), timeout = 30, source_address = None 11:55:58 socket_options = [(6, 1, 1)] 11:55:58 11:55:58 def create_connection( 11:55:58 address: tuple[str, int], 11:55:58 timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 11:55:58 source_address: tuple[str, int] | None = None, 11:55:58 socket_options: _TYPE_SOCKET_OPTIONS | None = None, 11:55:58 ) -> socket.socket: 11:55:58 """Connect to *address* and return the socket object. 11:55:58 11:55:58 Convenience function. Connect to *address* (a 2-tuple ``(host, 11:55:58 port)``) and return the socket object. Passing the optional 11:55:58 *timeout* parameter will set the timeout on the socket instance 11:55:58 before attempting to connect. If no *timeout* is supplied, the 11:55:58 global default timeout setting returned by :func:`socket.getdefaulttimeout` 11:55:58 is used. If *source_address* is set it must be a tuple of (host, port) 11:55:58 for the socket to bind as a source address before making the connection. 11:55:58 An host of '' or port 0 tells the OS to use the default. 11:55:58 """ 11:55:58 11:55:58 host, port = address 11:55:58 if host.startswith("["): 11:55:58 host = host.strip("[]") 11:55:58 err = None 11:55:58 11:55:58 # Using the value from allowed_gai_family() in the context of getaddrinfo lets 11:55:58 # us select whether to work with IPv4 DNS records, IPv6 records, or both. 11:55:58 # The original create_connection function always returns all records. 
11:55:58 family = allowed_gai_family() 11:55:58 11:55:58 try: 11:55:58 host.encode("idna") 11:55:58 except UnicodeError: 11:55:58 raise LocationParseError(f"'{host}', label empty or too long") from None 11:55:58 11:55:58 for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM): 11:55:58 af, socktype, proto, canonname, sa = res 11:55:58 sock = None 11:55:58 try: 11:55:58 sock = socket.socket(af, socktype, proto) 11:55:58 11:55:58 # If provided, set socket level options before connecting. 11:55:58 _set_socket_options(sock, socket_options) 11:55:58 11:55:58 if timeout is not _DEFAULT_TIMEOUT: 11:55:58 sock.settimeout(timeout) 11:55:58 if source_address: 11:55:58 sock.bind(source_address) 11:55:58 > sock.connect(sa) 11:55:58 E ConnectionRefusedError: [Errno 111] Connection refused 11:55:58 11:55:58 ../.tox/tests_tapi/lib/python3.11/site-packages/urllib3/util/connection.py:73: ConnectionRefusedError 11:55:58 11:55:58 The above exception was the direct cause of the following exception: 11:55:58 11:55:58 self = 11:55:58 method = 'POST', url = '/rests/operations/tapi-topology:get-topology-details' 11:55:58 body = '{"input": {"topology-id": "cf51c729-3699-308a-a7d0-594c6a62ebbb"}}' 11:55:58 headers = {'User-Agent': 'python-requests/2.32.4', 'Accept-Encoding': 'gzip, deflate', 'Accept': 'application/json', 'Connection': 'keep-alive', 'Content-Type': 'application/json', 'Content-Length': '66', 'Authorization': 'Basic YWRtaW46YWRtaW4='} 11:55:58 retries = Retry(total=0, connect=None, read=False, redirect=None, status=None) 11:55:58 redirect = False, assert_same_host = False 11:55:58 timeout = Timeout(connect=30, read=30, total=None), pool_timeout = None 11:55:58 release_conn = False, chunked = False, body_pos = None, preload_content = False 11:55:58 decode_content = False, response_kw = {} 11:55:58 parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/rests/operations/tapi-topology:get-topology-details', query=None, fragment=None) 11:55:58 destination_scheme = None, conn = None, release_this_conn = True 11:55:58 http_tunnel_required = False, err = None, clean_exit = False 11:55:58 11:55:58 def urlopen( # type: ignore[override] 11:55:58 self, 11:55:58 method: str, 11:55:58 url: str, 11:55:58 body: _TYPE_BODY | None = None, 11:55:58 headers: typing.Mapping[str, str] | None = None, 11:55:58 retries: Retry | bool | int | None = None, 11:55:58 redirect: bool = True, 11:55:58 assert_same_host: bool = True, 11:55:58 timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 11:55:58 pool_timeout: int | None = None, 11:55:58 release_conn: bool | None = None, 11:55:58 chunked: bool = False, 11:55:58 body_pos: _TYPE_BODY_POSITION | None = None, 11:55:58 preload_content: bool = True, 11:55:58 decode_content: bool = True, 11:55:58 **response_kw: typing.Any, 11:55:58 ) -> BaseHTTPResponse: 11:55:58 """ 11:55:58 Get a connection from the pool and perform an HTTP request. This is the 11:55:58 lowest level call for making a request, so you'll need to specify all 11:55:58 the raw details. 11:55:58 11:55:58 .. note:: 11:55:58 11:55:58 More commonly, it's appropriate to use a convenience method 11:55:58 such as :meth:`request`. 11:55:58 11:55:58 .. note:: 11:55:58 11:55:58 `release_conn` will only behave as expected if 11:55:58 `preload_content=False` because we want to make 11:55:58 `preload_content=False` the default behaviour someday soon without 11:55:58 breaking backwards compatibility. 11:55:58 11:55:58 :param method: 11:55:58 HTTP request method (such as GET, POST, PUT, etc.) 
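Every failure in this run is the same [Errno 111] against localhost:8183, i.e. nothing is listening on the RESTCONF port when the tests issue their requests. A minimal sketch of the kind of pre-flight check a harness could run before driving RESTCONF calls; the helper name, defaults and timings are assumptions for illustration, not part of transportpce_tests/common/test_utils.py:

    # Sketch only: wait for a TCP listener before sending RESTCONF requests.
    import socket
    import time

    def wait_for_restconf(host="localhost", port=8183, deadline_s=60.0, poll_s=2.0):
        """Return True once host:port accepts TCP connections, False on timeout."""
        end = time.monotonic() + deadline_s
        while time.monotonic() < end:
            try:
                with socket.create_connection((host, port), timeout=poll_s):
                    return True
            except OSError:
                time.sleep(poll_s)
        return False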
11:55:58 11:55:58 :param url: 11:55:58 The URL to perform the request on. 11:55:58 11:55:58 :param body: 11:55:58 Data to send in the request body, either :class:`str`, :class:`bytes`, 11:55:58 an iterable of :class:`str`/:class:`bytes`, or a file-like object. 11:55:58 11:55:58 :param headers: 11:55:58 Dictionary of custom headers to send, such as User-Agent, 11:55:58 If-None-Match, etc. If None, pool headers are used. If provided, 11:55:58 these headers completely replace any pool-specific headers. 11:55:58 11:55:58 :param retries: 11:55:58 Configure the number of retries to allow before raising a 11:55:58 :class:`~urllib3.exceptions.MaxRetryError` exception. 11:55:58 11:55:58 If ``None`` (default) will retry 3 times, see ``Retry.DEFAULT``. Pass a 11:55:58 :class:`~urllib3.util.retry.Retry` object for fine-grained control 11:55:58 over different types of retries. 11:55:58 Pass an integer number to retry connection errors that many times, 11:55:58 but no other types of errors. Pass zero to never retry. 11:55:58 11:55:58 If ``False``, then retries are disabled and any exception is raised 11:55:58 immediately. Also, instead of raising a MaxRetryError on redirects, 11:55:58 the redirect response will be returned. 11:55:58 11:55:58 :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. 11:55:58 11:55:58 :param redirect: 11:55:58 If True, automatically handle redirects (status codes 301, 302, 11:55:58 303, 307, 308). Each redirect counts as a retry. Disabling retries 11:55:58 will disable redirect, too. 11:55:58 11:55:58 :param assert_same_host: 11:55:58 If ``True``, will make sure that the host of the pool requests is 11:55:58 consistent else will raise HostChangedError. When ``False``, you can 11:55:58 use the pool on an HTTP proxy and request foreign hosts. 11:55:58 11:55:58 :param timeout: 11:55:58 If specified, overrides the default timeout for this one 11:55:58 request. It may be a float (in seconds) or an instance of 11:55:58 :class:`urllib3.util.Timeout`. 11:55:58 11:55:58 :param pool_timeout: 11:55:58 If set and the pool is set to block=True, then this method will 11:55:58 block for ``pool_timeout`` seconds and raise EmptyPoolError if no 11:55:58 connection is available within the time period. 11:55:58 11:55:58 :param bool preload_content: 11:55:58 If True, the response's body will be preloaded into memory. 11:55:58 11:55:58 :param bool decode_content: 11:55:58 If True, will attempt to decode the body based on the 11:55:58 'content-encoding' header. 11:55:58 11:55:58 :param release_conn: 11:55:58 If False, then the urlopen call will not release the connection 11:55:58 back into the pool once a response is received (but will release if 11:55:58 you read the entire contents of the response such as when 11:55:58 `preload_content=True`). This is useful if you're not preloading 11:55:58 the response's content immediately. You will need to call 11:55:58 ``r.release_conn()`` on the response ``r`` to return the connection 11:55:58 back into the pool. If None, it takes the value of ``preload_content`` 11:55:58 which defaults to ``True``. 11:55:58 11:55:58 :param bool chunked: 11:55:58 If True, urllib3 will send the body using chunked transfer 11:55:58 encoding. Otherwise, urllib3 will send the body using the standard 11:55:58 content-length form. Defaults to False. 11:55:58 11:55:58 :param int body_pos: 11:55:58 Position to seek to in file-like body in the event of a retry or 11:55:58 redirect. 
Typically this won't need to be set because urllib3 will 11:55:58 auto-populate the value when needed. 11:55:58 """ 11:55:58 parsed_url = parse_url(url) 11:55:58 destination_scheme = parsed_url.scheme 11:55:58 11:55:58 if headers is None: 11:55:58 headers = self.headers 11:55:58 11:55:58 if not isinstance(retries, Retry): 11:55:58 retries = Retry.from_int(retries, redirect=redirect, default=self.retries) 11:55:58 11:55:58 if release_conn is None: 11:55:58 release_conn = preload_content 11:55:58 11:55:58 # Check host 11:55:58 if assert_same_host and not self.is_same_host(url): 11:55:58 raise HostChangedError(self, url, retries) 11:55:58 11:55:58 # Ensure that the URL we're connecting to is properly encoded 11:55:58 if url.startswith("/"): 11:55:58 url = to_str(_encode_target(url)) 11:55:58 else: 11:55:58 url = to_str(parsed_url.url) 11:55:58 11:55:58 conn = None 11:55:58 11:55:58 # Track whether `conn` needs to be released before 11:55:58 # returning/raising/recursing. Update this variable if necessary, and 11:55:58 # leave `release_conn` constant throughout the function. That way, if 11:55:58 # the function recurses, the original value of `release_conn` will be 11:55:58 # passed down into the recursive call, and its value will be respected. 11:55:58 # 11:55:58 # See issue #651 [1] for details. 11:55:58 # 11:55:58 # [1] 11:55:58 release_this_conn = release_conn 11:55:58 11:55:58 http_tunnel_required = connection_requires_http_tunnel( 11:55:58 self.proxy, self.proxy_config, destination_scheme 11:55:58 ) 11:55:58 11:55:58 # Merge the proxy headers. Only done when not using HTTP CONNECT. We 11:55:58 # have to copy the headers dict so we can safely change it without those 11:55:58 # changes being reflected in anyone else's copy. 11:55:58 if not http_tunnel_required: 11:55:58 headers = headers.copy() # type: ignore[attr-defined] 11:55:58 headers.update(self.proxy_headers) # type: ignore[union-attr] 11:55:58 11:55:58 # Must keep the exception bound to a separate variable or else Python 3 11:55:58 # complains about UnboundLocalError. 11:55:58 err = None 11:55:58 11:55:58 # Keep track of whether we cleanly exited the except block. This 11:55:58 # ensures we do proper cleanup in finally. 11:55:58 clean_exit = False 11:55:58 11:55:58 # Rewind body position, if needed. Record current position 11:55:58 # for future rewinds in the event of a redirect/retry. 11:55:58 body_pos = set_file_position(body, body_pos) 11:55:58 11:55:58 try: 11:55:58 # Request a connection from the queue. 11:55:58 timeout_obj = self._get_timeout(timeout) 11:55:58 conn = self._get_conn(timeout=pool_timeout) 11:55:58 11:55:58 conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] 11:55:58 11:55:58 # Is this a closed/new connection that requires CONNECT tunnelling? 11:55:58 if self.proxy is not None and http_tunnel_required and conn.is_closed: 11:55:58 try: 11:55:58 self._prepare_proxy(conn) 11:55:58 except (BaseSSLError, OSError, SocketTimeout) as e: 11:55:58 self._raise_timeout( 11:55:58 err=e, url=self.proxy.url, timeout_value=conn.timeout 11:55:58 ) 11:55:58 raise 11:55:58 11:55:58 # If we're going to release the connection in ``finally:``, then 11:55:58 # the response doesn't need to know about the connection. Otherwise 11:55:58 # it will also try to release it and we'll have a double-release 11:55:58 # mess. 
11:55:58 response_conn = conn if not release_conn else None 11:55:58 11:55:58 # Make the request on the HTTPConnection object 11:55:58 > response = self._make_request( 11:55:58 conn, 11:55:58 method, 11:55:58 url, 11:55:58 timeout=timeout_obj, 11:55:58 body=body, 11:55:58 headers=headers, 11:55:58 chunked=chunked, 11:55:58 retries=retries, 11:55:58 response_conn=response_conn, 11:55:58 preload_content=preload_content, 11:55:58 decode_content=decode_content, 11:55:58 **response_kw, 11:55:58 ) 11:55:58 11:55:58 ../.tox/tests_tapi/lib/python3.11/site-packages/urllib3/connectionpool.py:787: 11:55:58 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 11:55:58 ../.tox/tests_tapi/lib/python3.11/site-packages/urllib3/connectionpool.py:493: in _make_request 11:55:58 conn.request( 11:55:58 ../.tox/tests_tapi/lib/python3.11/site-packages/urllib3/connection.py:494: in request 11:55:58 self.endheaders() 11:55:58 /opt/pyenv/versions/3.11.10/lib/python3.11/http/client.py:1298: in endheaders 11:55:58 self._send_output(message_body, encode_chunked=encode_chunked) 11:55:58 /opt/pyenv/versions/3.11.10/lib/python3.11/http/client.py:1058: in _send_output 11:55:58 self.send(msg) 11:55:58 /opt/pyenv/versions/3.11.10/lib/python3.11/http/client.py:996: in send 11:55:58 self.connect() 11:55:58 ../.tox/tests_tapi/lib/python3.11/site-packages/urllib3/connection.py:325: in connect 11:55:58 self.sock = self._new_conn() 11:55:58 ^^^^^^^^^^^^^^^^ 11:55:58 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 11:55:58 11:55:58 self = 11:55:58 11:55:58 def _new_conn(self) -> socket.socket: 11:55:58 """Establish a socket connection and set nodelay settings on it. 11:55:58 11:55:58 :return: New socket connection. 11:55:58 """ 11:55:58 try: 11:55:58 sock = connection.create_connection( 11:55:58 (self._dns_host, self.port), 11:55:58 self.timeout, 11:55:58 source_address=self.source_address, 11:55:58 socket_options=self.socket_options, 11:55:58 ) 11:55:58 except socket.gaierror as e: 11:55:58 raise NameResolutionError(self.host, self, e) from e 11:55:58 except SocketTimeout as e: 11:55:58 raise ConnectTimeoutError( 11:55:58 self, 11:55:58 f"Connection to {self.host} timed out. (connect timeout={self.timeout})", 11:55:58 ) from e 11:55:58 11:55:58 except OSError as e: 11:55:58 > raise NewConnectionError( 11:55:58 self, f"Failed to establish a new connection: {e}" 11:55:58 ) from e 11:55:58 E urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused 11:55:58 11:55:58 ../.tox/tests_tapi/lib/python3.11/site-packages/urllib3/connection.py:213: NewConnectionError 11:55:58 11:55:58 The above exception was the direct cause of the following exception: 11:55:58 11:55:58 self = 11:55:58 request = , stream = False 11:55:58 timeout = Timeout(connect=30, read=30, total=None), verify = True, cert = None 11:55:58 proxies = OrderedDict() 11:55:58 11:55:58 def send( 11:55:58 self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 11:55:58 ): 11:55:58 """Sends PreparedRequest object. Returns Response object. 11:55:58 11:55:58 :param request: The :class:`PreparedRequest ` being sent. 11:55:58 :param stream: (optional) Whether to stream the request content. 11:55:58 :param timeout: (optional) How long to wait for the server to send 11:55:58 data before giving up, as a float, or a :ref:`(connect timeout, 11:55:58 read timeout) ` tuple. 
11:55:58 :type timeout: float or tuple or urllib3 Timeout object 11:55:58 :param verify: (optional) Either a boolean, in which case it controls whether 11:55:58 we verify the server's TLS certificate, or a string, in which case it 11:55:58 must be a path to a CA bundle to use 11:55:58 :param cert: (optional) Any user-provided SSL certificate to be trusted. 11:55:58 :param proxies: (optional) The proxies dictionary to apply to the request. 11:55:58 :rtype: requests.Response 11:55:58 """ 11:55:58 11:55:58 try: 11:55:58 conn = self.get_connection_with_tls_context( 11:55:58 request, verify, proxies=proxies, cert=cert 11:55:58 ) 11:55:58 except LocationValueError as e: 11:55:58 raise InvalidURL(e, request=request) 11:55:58 11:55:58 self.cert_verify(conn, request.url, verify, cert) 11:55:58 url = self.request_url(request, proxies) 11:55:58 self.add_headers( 11:55:58 request, 11:55:58 stream=stream, 11:55:58 timeout=timeout, 11:55:58 verify=verify, 11:55:58 cert=cert, 11:55:58 proxies=proxies, 11:55:58 ) 11:55:58 11:55:58 chunked = not (request.body is None or "Content-Length" in request.headers) 11:55:58 11:55:58 if isinstance(timeout, tuple): 11:55:58 try: 11:55:58 connect, read = timeout 11:55:58 timeout = TimeoutSauce(connect=connect, read=read) 11:55:58 except ValueError: 11:55:58 raise ValueError( 11:55:58 f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 11:55:58 f"or a single float to set both timeouts to the same value." 11:55:58 ) 11:55:58 elif isinstance(timeout, TimeoutSauce): 11:55:58 pass 11:55:58 else: 11:55:58 timeout = TimeoutSauce(connect=timeout, read=timeout) 11:55:58 11:55:58 try: 11:55:58 > resp = conn.urlopen( 11:55:58 method=request.method, 11:55:58 url=url, 11:55:58 body=request.body, 11:55:58 headers=request.headers, 11:55:58 redirect=False, 11:55:58 assert_same_host=False, 11:55:58 preload_content=False, 11:55:58 decode_content=False, 11:55:58 retries=self.max_retries, 11:55:58 timeout=timeout, 11:55:58 chunked=chunked, 11:55:58 ) 11:55:58 11:55:58 ../.tox/tests_tapi/lib/python3.11/site-packages/requests/adapters.py:667: 11:55:58 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 11:55:58 ../.tox/tests_tapi/lib/python3.11/site-packages/urllib3/connectionpool.py:841: in urlopen 11:55:58 retries = retries.increment( 11:55:58 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 11:55:58 11:55:58 self = Retry(total=0, connect=None, read=False, redirect=None, status=None) 11:55:58 method = 'POST', url = '/rests/operations/tapi-topology:get-topology-details' 11:55:58 response = None 11:55:58 error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused') 11:55:58 _pool = 11:55:58 _stacktrace = 11:55:58 11:55:58 def increment( 11:55:58 self, 11:55:58 method: str | None = None, 11:55:58 url: str | None = None, 11:55:58 response: BaseHTTPResponse | None = None, 11:55:58 error: Exception | None = None, 11:55:58 _pool: ConnectionPool | None = None, 11:55:58 _stacktrace: TracebackType | None = None, 11:55:58 ) -> Self: 11:55:58 """Return a new Retry object with incremented retry counters. 11:55:58 11:55:58 :param response: A response object, or None, if the server did not 11:55:58 return a response. 11:55:58 :type response: :class:`~urllib3.response.BaseHTTPResponse` 11:55:58 :param Exception error: An error encountered during the request, or 11:55:58 None if the response was received successfully. 11:55:58 11:55:58 :return: A new ``Retry`` object. 
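With the budget already at total=0, increment() immediately reports exhaustion and raises MaxRetryError, which the requests adapter re-wraps as requests.exceptions.ConnectionError, the exception that finally reaches the test. A minimal sketch of handling that chain at the caller, assuming an illustrative topology-id placeholder; URL, credentials and timeout mirror the values shown in this traceback:

    # Sketch only: the exception chain seen above, handled at the caller level.
    import requests

    try:
        resp = requests.post(
            "http://localhost:8183/rests/operations/tapi-topology:get-topology-details",
            json={"input": {"topology-id": "..."}},   # placeholder, not a real UUID
            auth=("admin", "admin"),                  # matches the Basic auth header above
            timeout=(30, 30),                         # (connect, read), like Timeout(connect=30, read=30)
        )
        resp.raise_for_status()
    except requests.exceptions.ConnectionError as exc:
        # Wraps urllib3's MaxRetryError / NewConnectionError (Errno 111 here).
        print(f"RESTCONF endpoint unreachable: {exc}")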
11:55:58 """ 11:55:58 if self.total is False and error: 11:55:58 # Disabled, indicate to re-raise the error. 11:55:58 raise reraise(type(error), error, _stacktrace) 11:55:58 11:55:58 total = self.total 11:55:58 if total is not None: 11:55:58 total -= 1 11:55:58 11:55:58 connect = self.connect 11:55:58 read = self.read 11:55:58 redirect = self.redirect 11:55:58 status_count = self.status 11:55:58 other = self.other 11:55:58 cause = "unknown" 11:55:58 status = None 11:55:58 redirect_location = None 11:55:58 11:55:58 if error and self._is_connection_error(error): 11:55:58 # Connect retry? 11:55:58 if connect is False: 11:55:58 raise reraise(type(error), error, _stacktrace) 11:55:58 elif connect is not None: 11:55:58 connect -= 1 11:55:58 11:55:58 elif error and self._is_read_error(error): 11:55:58 # Read retry? 11:55:58 if read is False or method is None or not self._is_method_retryable(method): 11:55:58 raise reraise(type(error), error, _stacktrace) 11:55:58 elif read is not None: 11:55:58 read -= 1 11:55:58 11:55:58 elif error: 11:55:58 # Other retry? 11:55:58 if other is not None: 11:55:58 other -= 1 11:55:58 11:55:58 elif response and response.get_redirect_location(): 11:55:58 # Redirect retry? 11:55:58 if redirect is not None: 11:55:58 redirect -= 1 11:55:58 cause = "too many redirects" 11:55:58 response_redirect_location = response.get_redirect_location() 11:55:58 if response_redirect_location: 11:55:58 redirect_location = response_redirect_location 11:55:58 status = response.status 11:55:58 11:55:58 else: 11:55:58 # Incrementing because of a server error like a 500 in 11:55:58 # status_forcelist and the given method is in the allowed_methods 11:55:58 cause = ResponseError.GENERIC_ERROR 11:55:58 if response and response.status: 11:55:58 if status_count is not None: 11:55:58 status_count -= 1 11:55:58 cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status) 11:55:58 status = response.status 11:55:58 11:55:58 history = self.history + ( 11:55:58 RequestHistory(method, url, error, status, redirect_location), 11:55:58 ) 11:55:58 11:55:58 new_retry = self.new( 11:55:58 total=total, 11:55:58 connect=connect, 11:55:58 read=read, 11:55:58 redirect=redirect, 11:55:58 status=status_count, 11:55:58 other=other, 11:55:58 history=history, 11:55:58 ) 11:55:58 11:55:58 if new_retry.is_exhausted(): 11:55:58 reason = error or ResponseError(cause) 11:55:58 > raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type] 11:55:58 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 11:55:58 E urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=8183): Max retries exceeded with url: /rests/operations/tapi-topology:get-topology-details (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 11:55:58 11:55:58 ../.tox/tests_tapi/lib/python3.11/site-packages/urllib3/util/retry.py:519: MaxRetryError 11:55:58 11:55:58 During handling of the above exception, another exception occurred: 11:55:58 11:55:58 self = 11:55:58 11:55:58 def test_04_check_tapi_topos(self): 11:55:58 self.tapi_topo["topology-id"] = test_utils.T100GE_UUID 11:55:58 > response = test_utils.transportpce_api_rpc_request( 11:55:58 'tapi-topology', 'get-topology-details', self.tapi_topo) 11:55:58 11:55:58 transportpce_tests/tapi/test01_abstracted_topology.py:210: 11:55:58 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 11:55:58 transportpce_tests/common/test_utils.py:729: in transportpce_api_rpc_request 11:55:58 
response = post_request(url, data) 11:55:58 ^^^^^^^^^^^^^^^^^^^^^^^ 11:55:58 transportpce_tests/common/test_utils.py:142: in post_request 11:55:58 return requests.request( 11:55:58 ../.tox/tests_tapi/lib/python3.11/site-packages/requests/api.py:59: in request 11:55:58 return session.request(method=method, url=url, **kwargs) 11:55:58 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 11:55:58 ../.tox/tests_tapi/lib/python3.11/site-packages/requests/sessions.py:589: in request 11:55:58 resp = self.send(prep, **send_kwargs) 11:55:58 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 11:55:58 ../.tox/tests_tapi/lib/python3.11/site-packages/requests/sessions.py:703: in send 11:55:58 r = adapter.send(request, **kwargs) 11:55:58 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 11:55:58 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 11:55:58 11:55:58 self = 11:55:58 request = , stream = False 11:55:58 timeout = Timeout(connect=30, read=30, total=None), verify = True, cert = None 11:55:58 proxies = OrderedDict() 11:55:58 11:55:58 def send( 11:55:58 self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 11:55:58 ): 11:55:58 """Sends PreparedRequest object. Returns Response object. 11:55:58 11:55:58 :param request: The :class:`PreparedRequest ` being sent. 11:55:58 :param stream: (optional) Whether to stream the request content. 11:55:58 :param timeout: (optional) How long to wait for the server to send 11:55:58 data before giving up, as a float, or a :ref:`(connect timeout, 11:55:58 read timeout) ` tuple. 11:55:58 :type timeout: float or tuple or urllib3 Timeout object 11:55:58 :param verify: (optional) Either a boolean, in which case it controls whether 11:55:58 we verify the server's TLS certificate, or a string, in which case it 11:55:58 must be a path to a CA bundle to use 11:55:58 :param cert: (optional) Any user-provided SSL certificate to be trusted. 11:55:58 :param proxies: (optional) The proxies dictionary to apply to the request. 11:55:58 :rtype: requests.Response 11:55:58 """ 11:55:58 11:55:58 try: 11:55:58 conn = self.get_connection_with_tls_context( 11:55:58 request, verify, proxies=proxies, cert=cert 11:55:58 ) 11:55:58 except LocationValueError as e: 11:55:58 raise InvalidURL(e, request=request) 11:55:58 11:55:58 self.cert_verify(conn, request.url, verify, cert) 11:55:58 url = self.request_url(request, proxies) 11:55:58 self.add_headers( 11:55:58 request, 11:55:58 stream=stream, 11:55:58 timeout=timeout, 11:55:58 verify=verify, 11:55:58 cert=cert, 11:55:58 proxies=proxies, 11:55:58 ) 11:55:58 11:55:58 chunked = not (request.body is None or "Content-Length" in request.headers) 11:55:58 11:55:58 if isinstance(timeout, tuple): 11:55:58 try: 11:55:58 connect, read = timeout 11:55:58 timeout = TimeoutSauce(connect=connect, read=read) 11:55:58 except ValueError: 11:55:58 raise ValueError( 11:55:58 f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 11:55:58 f"or a single float to set both timeouts to the same value." 
11:55:58 ) 11:55:58 elif isinstance(timeout, TimeoutSauce): 11:55:58 pass 11:55:58 else: 11:55:58 timeout = TimeoutSauce(connect=timeout, read=timeout) 11:55:58 11:55:58 try: 11:55:58 resp = conn.urlopen( 11:55:58 method=request.method, 11:55:58 url=url, 11:55:58 body=request.body, 11:55:58 headers=request.headers, 11:55:58 redirect=False, 11:55:58 assert_same_host=False, 11:55:58 preload_content=False, 11:55:58 decode_content=False, 11:55:58 retries=self.max_retries, 11:55:58 timeout=timeout, 11:55:58 chunked=chunked, 11:55:58 ) 11:55:58 11:55:58 except (ProtocolError, OSError) as err: 11:55:58 raise ConnectionError(err, request=request) 11:55:58 11:55:58 except MaxRetryError as e: 11:55:58 if isinstance(e.reason, ConnectTimeoutError): 11:55:58 # TODO: Remove this in 3.0.0: see #2811 11:55:58 if not isinstance(e.reason, NewConnectionError): 11:55:58 raise ConnectTimeout(e, request=request) 11:55:58 11:55:58 if isinstance(e.reason, ResponseError): 11:55:58 raise RetryError(e, request=request) 11:55:58 11:55:58 if isinstance(e.reason, _ProxyError): 11:55:58 raise ProxyError(e, request=request) 11:55:58 11:55:58 if isinstance(e.reason, _SSLError): 11:55:58 # This branch is for urllib3 v1.22 and later. 11:55:58 raise SSLError(e, request=request) 11:55:58 11:55:58 > raise ConnectionError(e, request=request) 11:55:58 E requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=8183): Max retries exceeded with url: /rests/operations/tapi-topology:get-topology-details (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 11:55:58 11:55:58 ../.tox/tests_tapi/lib/python3.11/site-packages/requests/adapters.py:700: ConnectionError 11:55:58 ----------------------------- Captured stdout call ----------------------------- 11:55:58 execution of test_04_check_tapi_topos 11:55:58 ________________ TransportTapitesting.test_05_disconnect_roadmb ________________ 11:55:58 11:55:58 self = 11:55:58 11:55:58 def _new_conn(self) -> socket.socket: 11:55:58 """Establish a socket connection and set nodelay settings on it. 11:55:58 11:55:58 :return: New socket connection. 11:55:58 """ 11:55:58 try: 11:55:58 > sock = connection.create_connection( 11:55:58 (self._dns_host, self.port), 11:55:58 self.timeout, 11:55:58 source_address=self.source_address, 11:55:58 socket_options=self.socket_options, 11:55:58 ) 11:55:58 11:55:58 ../.tox/tests_tapi/lib/python3.11/site-packages/urllib3/connection.py:198: 11:55:58 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 11:55:58 ../.tox/tests_tapi/lib/python3.11/site-packages/urllib3/util/connection.py:85: in create_connection 11:55:58 raise err 11:55:58 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 11:55:58 11:55:58 address = ('localhost', 8183), timeout = 30, source_address = None 11:55:58 socket_options = [(6, 1, 1)] 11:55:58 11:55:58 def create_connection( 11:55:58 address: tuple[str, int], 11:55:58 timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 11:55:58 source_address: tuple[str, int] | None = None, 11:55:58 socket_options: _TYPE_SOCKET_OPTIONS | None = None, 11:55:58 ) -> socket.socket: 11:55:58 """Connect to *address* and return the socket object. 11:55:58 11:55:58 Convenience function. Connect to *address* (a 2-tuple ``(host, 11:55:58 port)``) and return the socket object. Passing the optional 11:55:58 *timeout* parameter will set the timeout on the socket instance 11:55:58 before attempting to connect. 
If no *timeout* is supplied, the 11:55:58 global default timeout setting returned by :func:`socket.getdefaulttimeout` 11:55:58 is used. If *source_address* is set it must be a tuple of (host, port) 11:55:58 for the socket to bind as a source address before making the connection. 11:55:58 An host of '' or port 0 tells the OS to use the default. 11:55:58 """ 11:55:58 11:55:58 host, port = address 11:55:58 if host.startswith("["): 11:55:58 host = host.strip("[]") 11:55:58 err = None 11:55:58 11:55:58 # Using the value from allowed_gai_family() in the context of getaddrinfo lets 11:55:58 # us select whether to work with IPv4 DNS records, IPv6 records, or both. 11:55:58 # The original create_connection function always returns all records. 11:55:58 family = allowed_gai_family() 11:55:58 11:55:58 try: 11:55:58 host.encode("idna") 11:55:58 except UnicodeError: 11:55:58 raise LocationParseError(f"'{host}', label empty or too long") from None 11:55:58 11:55:58 for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM): 11:55:58 af, socktype, proto, canonname, sa = res 11:55:58 sock = None 11:55:58 try: 11:55:58 sock = socket.socket(af, socktype, proto) 11:55:58 11:55:58 # If provided, set socket level options before connecting. 11:55:58 _set_socket_options(sock, socket_options) 11:55:58 11:55:58 if timeout is not _DEFAULT_TIMEOUT: 11:55:58 sock.settimeout(timeout) 11:55:58 if source_address: 11:55:58 sock.bind(source_address) 11:55:58 > sock.connect(sa) 11:55:58 E ConnectionRefusedError: [Errno 111] Connection refused 11:55:58 11:55:58 ../.tox/tests_tapi/lib/python3.11/site-packages/urllib3/util/connection.py:73: ConnectionRefusedError 11:55:58 11:55:58 The above exception was the direct cause of the following exception: 11:55:58 11:55:58 self = 11:55:58 method = 'DELETE' 11:55:58 url = '/rests/data/network-topology:network-topology/topology=topology-netconf/node=ROADM-B1' 11:55:58 body = None 11:55:58 headers = {'User-Agent': 'python-requests/2.32.4', 'Accept-Encoding': 'gzip, deflate', 'Accept': 'application/json', 'Connection': 'keep-alive', 'Content-Type': 'application/json', 'Content-Length': '0', 'Authorization': 'Basic YWRtaW46YWRtaW4='} 11:55:58 retries = Retry(total=0, connect=None, read=False, redirect=None, status=None) 11:55:58 redirect = False, assert_same_host = False 11:55:58 timeout = Timeout(connect=30, read=30, total=None), pool_timeout = None 11:55:58 release_conn = False, chunked = False, body_pos = None, preload_content = False 11:55:58 decode_content = False, response_kw = {} 11:55:58 parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/rests/data/network-topology:network-topology/topology=topology-netconf/node=ROADM-B1', query=None, fragment=None) 11:55:58 destination_scheme = None, conn = None, release_this_conn = True 11:55:58 http_tunnel_required = False, err = None, clean_exit = False 11:55:58 11:55:58 def urlopen( # type: ignore[override] 11:55:58 self, 11:55:58 method: str, 11:55:58 url: str, 11:55:58 body: _TYPE_BODY | None = None, 11:55:58 headers: typing.Mapping[str, str] | None = None, 11:55:58 retries: Retry | bool | int | None = None, 11:55:58 redirect: bool = True, 11:55:58 assert_same_host: bool = True, 11:55:58 timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 11:55:58 pool_timeout: int | None = None, 11:55:58 release_conn: bool | None = None, 11:55:58 chunked: bool = False, 11:55:58 body_pos: _TYPE_BODY_POSITION | None = None, 11:55:58 preload_content: bool = True, 11:55:58 decode_content: bool = True, 11:55:58 **response_kw: typing.Any, 
11:55:58 ) -> BaseHTTPResponse: 11:55:58 """ 11:55:58 Get a connection from the pool and perform an HTTP request. This is the 11:55:58 lowest level call for making a request, so you'll need to specify all 11:55:58 the raw details. 11:55:58 11:55:58 .. note:: 11:55:58 11:55:58 More commonly, it's appropriate to use a convenience method 11:55:58 such as :meth:`request`. 11:55:58 11:55:58 .. note:: 11:55:58 11:55:58 `release_conn` will only behave as expected if 11:55:58 `preload_content=False` because we want to make 11:55:58 `preload_content=False` the default behaviour someday soon without 11:55:58 breaking backwards compatibility. 11:55:58 11:55:58 :param method: 11:55:58 HTTP request method (such as GET, POST, PUT, etc.) 11:55:58 11:55:58 :param url: 11:55:58 The URL to perform the request on. 11:55:58 11:55:58 :param body: 11:55:58 Data to send in the request body, either :class:`str`, :class:`bytes`, 11:55:58 an iterable of :class:`str`/:class:`bytes`, or a file-like object. 11:55:58 11:55:58 :param headers: 11:55:58 Dictionary of custom headers to send, such as User-Agent, 11:55:58 If-None-Match, etc. If None, pool headers are used. If provided, 11:55:58 these headers completely replace any pool-specific headers. 11:55:58 11:55:58 :param retries: 11:55:58 Configure the number of retries to allow before raising a 11:55:58 :class:`~urllib3.exceptions.MaxRetryError` exception. 11:55:58 11:55:58 If ``None`` (default) will retry 3 times, see ``Retry.DEFAULT``. Pass a 11:55:58 :class:`~urllib3.util.retry.Retry` object for fine-grained control 11:55:58 over different types of retries. 11:55:58 Pass an integer number to retry connection errors that many times, 11:55:58 but no other types of errors. Pass zero to never retry. 11:55:58 11:55:58 If ``False``, then retries are disabled and any exception is raised 11:55:58 immediately. Also, instead of raising a MaxRetryError on redirects, 11:55:58 the redirect response will be returned. 11:55:58 11:55:58 :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. 11:55:58 11:55:58 :param redirect: 11:55:58 If True, automatically handle redirects (status codes 301, 302, 11:55:58 303, 307, 308). Each redirect counts as a retry. Disabling retries 11:55:58 will disable redirect, too. 11:55:58 11:55:58 :param assert_same_host: 11:55:58 If ``True``, will make sure that the host of the pool requests is 11:55:58 consistent else will raise HostChangedError. When ``False``, you can 11:55:58 use the pool on an HTTP proxy and request foreign hosts. 11:55:58 11:55:58 :param timeout: 11:55:58 If specified, overrides the default timeout for this one 11:55:58 request. It may be a float (in seconds) or an instance of 11:55:58 :class:`urllib3.util.Timeout`. 11:55:58 11:55:58 :param pool_timeout: 11:55:58 If set and the pool is set to block=True, then this method will 11:55:58 block for ``pool_timeout`` seconds and raise EmptyPoolError if no 11:55:58 connection is available within the time period. 11:55:58 11:55:58 :param bool preload_content: 11:55:58 If True, the response's body will be preloaded into memory. 11:55:58 11:55:58 :param bool decode_content: 11:55:58 If True, will attempt to decode the body based on the 11:55:58 'content-encoding' header. 11:55:58 11:55:58 :param release_conn: 11:55:58 If False, then the urlopen call will not release the connection 11:55:58 back into the pool once a response is received (but will release if 11:55:58 you read the entire contents of the response such as when 11:55:58 `preload_content=True`). 
This is useful if you're not preloading 11:55:58 the response's content immediately. You will need to call 11:55:58 ``r.release_conn()`` on the response ``r`` to return the connection 11:55:58 back into the pool. If None, it takes the value of ``preload_content`` 11:55:58 which defaults to ``True``. 11:55:58 11:55:58 :param bool chunked: 11:55:58 If True, urllib3 will send the body using chunked transfer 11:55:58 encoding. Otherwise, urllib3 will send the body using the standard 11:55:58 content-length form. Defaults to False. 11:55:58 11:55:58 :param int body_pos: 11:55:58 Position to seek to in file-like body in the event of a retry or 11:55:58 redirect. Typically this won't need to be set because urllib3 will 11:55:58 auto-populate the value when needed. 11:55:58 """ 11:55:58 parsed_url = parse_url(url) 11:55:58 destination_scheme = parsed_url.scheme 11:55:58 11:55:58 if headers is None: 11:55:58 headers = self.headers 11:55:58 11:55:58 if not isinstance(retries, Retry): 11:55:58 retries = Retry.from_int(retries, redirect=redirect, default=self.retries) 11:55:58 11:55:58 if release_conn is None: 11:55:58 release_conn = preload_content 11:55:58 11:55:58 # Check host 11:55:58 if assert_same_host and not self.is_same_host(url): 11:55:58 raise HostChangedError(self, url, retries) 11:55:58 11:55:58 # Ensure that the URL we're connecting to is properly encoded 11:55:58 if url.startswith("/"): 11:55:58 url = to_str(_encode_target(url)) 11:55:58 else: 11:55:58 url = to_str(parsed_url.url) 11:55:58 11:55:58 conn = None 11:55:58 11:55:58 # Track whether `conn` needs to be released before 11:55:58 # returning/raising/recursing. Update this variable if necessary, and 11:55:58 # leave `release_conn` constant throughout the function. That way, if 11:55:58 # the function recurses, the original value of `release_conn` will be 11:55:58 # passed down into the recursive call, and its value will be respected. 11:55:58 # 11:55:58 # See issue #651 [1] for details. 11:55:58 # 11:55:58 # [1] 11:55:58 release_this_conn = release_conn 11:55:58 11:55:58 http_tunnel_required = connection_requires_http_tunnel( 11:55:58 self.proxy, self.proxy_config, destination_scheme 11:55:58 ) 11:55:58 11:55:58 # Merge the proxy headers. Only done when not using HTTP CONNECT. We 11:55:58 # have to copy the headers dict so we can safely change it without those 11:55:58 # changes being reflected in anyone else's copy. 11:55:58 if not http_tunnel_required: 11:55:58 headers = headers.copy() # type: ignore[attr-defined] 11:55:58 headers.update(self.proxy_headers) # type: ignore[union-attr] 11:55:58 11:55:58 # Must keep the exception bound to a separate variable or else Python 3 11:55:58 # complains about UnboundLocalError. 11:55:58 err = None 11:55:58 11:55:58 # Keep track of whether we cleanly exited the except block. This 11:55:58 # ensures we do proper cleanup in finally. 11:55:58 clean_exit = False 11:55:58 11:55:58 # Rewind body position, if needed. Record current position 11:55:58 # for future rewinds in the event of a redirect/retry. 11:55:58 body_pos = set_file_position(body, body_pos) 11:55:58 11:55:58 try: 11:55:58 # Request a connection from the queue. 11:55:58 timeout_obj = self._get_timeout(timeout) 11:55:58 conn = self._get_conn(timeout=pool_timeout) 11:55:58 11:55:58 conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] 11:55:58 11:55:58 # Is this a closed/new connection that requires CONNECT tunnelling? 
11:55:58 if self.proxy is not None and http_tunnel_required and conn.is_closed: 11:55:58 try: 11:55:58 self._prepare_proxy(conn) 11:55:58 except (BaseSSLError, OSError, SocketTimeout) as e: 11:55:58 self._raise_timeout( 11:55:58 err=e, url=self.proxy.url, timeout_value=conn.timeout 11:55:58 ) 11:55:58 raise 11:55:58 11:55:58 # If we're going to release the connection in ``finally:``, then 11:55:58 # the response doesn't need to know about the connection. Otherwise 11:55:58 # it will also try to release it and we'll have a double-release 11:55:58 # mess. 11:55:58 response_conn = conn if not release_conn else None 11:55:58 11:55:58 # Make the request on the HTTPConnection object 11:55:58 > response = self._make_request( 11:55:58 conn, 11:55:58 method, 11:55:58 url, 11:55:58 timeout=timeout_obj, 11:55:58 body=body, 11:55:58 headers=headers, 11:55:58 chunked=chunked, 11:55:58 retries=retries, 11:55:58 response_conn=response_conn, 11:55:58 preload_content=preload_content, 11:55:58 decode_content=decode_content, 11:55:58 **response_kw, 11:55:58 ) 11:55:58 11:55:58 ../.tox/tests_tapi/lib/python3.11/site-packages/urllib3/connectionpool.py:787: 11:55:58 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 11:55:58 ../.tox/tests_tapi/lib/python3.11/site-packages/urllib3/connectionpool.py:493: in _make_request 11:55:58 conn.request( 11:55:58 ../.tox/tests_tapi/lib/python3.11/site-packages/urllib3/connection.py:494: in request 11:55:58 self.endheaders() 11:55:58 /opt/pyenv/versions/3.11.10/lib/python3.11/http/client.py:1298: in endheaders 11:55:58 self._send_output(message_body, encode_chunked=encode_chunked) 11:55:58 /opt/pyenv/versions/3.11.10/lib/python3.11/http/client.py:1058: in _send_output 11:55:58 self.send(msg) 11:55:58 /opt/pyenv/versions/3.11.10/lib/python3.11/http/client.py:996: in send 11:55:58 self.connect() 11:55:58 ../.tox/tests_tapi/lib/python3.11/site-packages/urllib3/connection.py:325: in connect 11:55:58 self.sock = self._new_conn() 11:55:58 ^^^^^^^^^^^^^^^^ 11:55:58 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 11:55:58 11:55:58 self = 11:55:58 11:55:58 def _new_conn(self) -> socket.socket: 11:55:58 """Establish a socket connection and set nodelay settings on it. 11:55:58 11:55:58 :return: New socket connection. 11:55:58 """ 11:55:58 try: 11:55:58 sock = connection.create_connection( 11:55:58 (self._dns_host, self.port), 11:55:58 self.timeout, 11:55:58 source_address=self.source_address, 11:55:58 socket_options=self.socket_options, 11:55:58 ) 11:55:58 except socket.gaierror as e: 11:55:58 raise NameResolutionError(self.host, self, e) from e 11:55:58 except SocketTimeout as e: 11:55:58 raise ConnectTimeoutError( 11:55:58 self, 11:55:58 f"Connection to {self.host} timed out. 
(connect timeout={self.timeout})", 11:55:58 ) from e 11:55:58 11:55:58 except OSError as e: 11:55:58 > raise NewConnectionError( 11:55:58 self, f"Failed to establish a new connection: {e}" 11:55:58 ) from e 11:55:58 E urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused 11:55:58 11:55:58 ../.tox/tests_tapi/lib/python3.11/site-packages/urllib3/connection.py:213: NewConnectionError 11:55:58 11:55:58 The above exception was the direct cause of the following exception: 11:55:58 11:55:58 self = 11:55:58 request = , stream = False 11:55:58 timeout = Timeout(connect=30, read=30, total=None), verify = True, cert = None 11:55:58 proxies = OrderedDict() 11:55:58 11:55:58 def send( 11:55:58 self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 11:55:58 ): 11:55:58 """Sends PreparedRequest object. Returns Response object. 11:55:58 11:55:58 :param request: The :class:`PreparedRequest ` being sent. 11:55:58 :param stream: (optional) Whether to stream the request content. 11:55:58 :param timeout: (optional) How long to wait for the server to send 11:55:58 data before giving up, as a float, or a :ref:`(connect timeout, 11:55:58 read timeout) ` tuple. 11:55:58 :type timeout: float or tuple or urllib3 Timeout object 11:55:58 :param verify: (optional) Either a boolean, in which case it controls whether 11:55:58 we verify the server's TLS certificate, or a string, in which case it 11:55:58 must be a path to a CA bundle to use 11:55:58 :param cert: (optional) Any user-provided SSL certificate to be trusted. 11:55:58 :param proxies: (optional) The proxies dictionary to apply to the request. 11:55:58 :rtype: requests.Response 11:55:58 """ 11:55:58 11:55:58 try: 11:55:58 conn = self.get_connection_with_tls_context( 11:55:58 request, verify, proxies=proxies, cert=cert 11:55:58 ) 11:55:58 except LocationValueError as e: 11:55:58 raise InvalidURL(e, request=request) 11:55:58 11:55:58 self.cert_verify(conn, request.url, verify, cert) 11:55:58 url = self.request_url(request, proxies) 11:55:58 self.add_headers( 11:55:58 request, 11:55:58 stream=stream, 11:55:58 timeout=timeout, 11:55:58 verify=verify, 11:55:58 cert=cert, 11:55:58 proxies=proxies, 11:55:58 ) 11:55:58 11:55:58 chunked = not (request.body is None or "Content-Length" in request.headers) 11:55:58 11:55:58 if isinstance(timeout, tuple): 11:55:58 try: 11:55:58 connect, read = timeout 11:55:58 timeout = TimeoutSauce(connect=connect, read=read) 11:55:58 except ValueError: 11:55:58 raise ValueError( 11:55:58 f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 11:55:58 f"or a single float to set both timeouts to the same value." 
11:55:58 ) 11:55:58 elif isinstance(timeout, TimeoutSauce): 11:55:58 pass 11:55:58 else: 11:55:58 timeout = TimeoutSauce(connect=timeout, read=timeout) 11:55:58 11:55:58 try: 11:55:58 > resp = conn.urlopen( 11:55:58 method=request.method, 11:55:58 url=url, 11:55:58 body=request.body, 11:55:58 headers=request.headers, 11:55:58 redirect=False, 11:55:58 assert_same_host=False, 11:55:58 preload_content=False, 11:55:58 decode_content=False, 11:55:58 retries=self.max_retries, 11:55:58 timeout=timeout, 11:55:58 chunked=chunked, 11:55:58 ) 11:55:58 11:55:58 ../.tox/tests_tapi/lib/python3.11/site-packages/requests/adapters.py:667: 11:55:58 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 11:55:58 ../.tox/tests_tapi/lib/python3.11/site-packages/urllib3/connectionpool.py:841: in urlopen 11:55:58 retries = retries.increment( 11:55:58 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 11:55:58 11:55:58 self = Retry(total=0, connect=None, read=False, redirect=None, status=None) 11:55:58 method = 'DELETE' 11:55:58 url = '/rests/data/network-topology:network-topology/topology=topology-netconf/node=ROADM-B1' 11:55:58 response = None 11:55:58 error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused') 11:55:58 _pool = 11:55:58 _stacktrace = 11:55:58 11:55:58 def increment( 11:55:58 self, 11:55:58 method: str | None = None, 11:55:58 url: str | None = None, 11:55:58 response: BaseHTTPResponse | None = None, 11:55:58 error: Exception | None = None, 11:55:58 _pool: ConnectionPool | None = None, 11:55:58 _stacktrace: TracebackType | None = None, 11:55:58 ) -> Self: 11:55:58 """Return a new Retry object with incremented retry counters. 11:55:58 11:55:58 :param response: A response object, or None, if the server did not 11:55:58 return a response. 11:55:58 :type response: :class:`~urllib3.response.BaseHTTPResponse` 11:55:58 :param Exception error: An error encountered during the request, or 11:55:58 None if the response was received successfully. 11:55:58 11:55:58 :return: A new ``Retry`` object. 11:55:58 """ 11:55:58 if self.total is False and error: 11:55:58 # Disabled, indicate to re-raise the error. 11:55:58 raise reraise(type(error), error, _stacktrace) 11:55:58 11:55:58 total = self.total 11:55:58 if total is not None: 11:55:58 total -= 1 11:55:58 11:55:58 connect = self.connect 11:55:58 read = self.read 11:55:58 redirect = self.redirect 11:55:58 status_count = self.status 11:55:58 other = self.other 11:55:58 cause = "unknown" 11:55:58 status = None 11:55:58 redirect_location = None 11:55:58 11:55:58 if error and self._is_connection_error(error): 11:55:58 # Connect retry? 11:55:58 if connect is False: 11:55:58 raise reraise(type(error), error, _stacktrace) 11:55:58 elif connect is not None: 11:55:58 connect -= 1 11:55:58 11:55:58 elif error and self._is_read_error(error): 11:55:58 # Read retry? 11:55:58 if read is False or method is None or not self._is_method_retryable(method): 11:55:58 raise reraise(type(error), error, _stacktrace) 11:55:58 elif read is not None: 11:55:58 read -= 1 11:55:58 11:55:58 elif error: 11:55:58 # Other retry? 11:55:58 if other is not None: 11:55:58 other -= 1 11:55:58 11:55:58 elif response and response.get_redirect_location(): 11:55:58 # Redirect retry? 
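A hedged illustration of the counter bookkeeping increment() performs, using the exact Retry object reported in this run; the URL and error below are placeholders:

from urllib3.exceptions import MaxRetryError
from urllib3.util.retry import Retry

# The Retry these tests run with: total=0 means the first error exhausts the
# budget, and read=False makes read errors re-raise immediately instead of counting.
retry = Retry(total=0, connect=None, read=False, redirect=None, status=None)

# increment() returns a *new* Retry with the matching counter decremented; once a
# counter goes negative, is_exhausted() is true and MaxRetryError is raised.
try:
    retry = retry.increment(method="GET", url="/placeholder", error=OSError("connection refused"))
except MaxRetryError as exc:
    print(exc.reason)  # the original error is kept as the MaxRetryError's cause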
11:55:58 if redirect is not None: 11:55:58 redirect -= 1 11:55:58 cause = "too many redirects" 11:55:58 response_redirect_location = response.get_redirect_location() 11:55:58 if response_redirect_location: 11:55:58 redirect_location = response_redirect_location 11:55:58 status = response.status 11:55:58 11:55:58 else: 11:55:58 # Incrementing because of a server error like a 500 in 11:55:58 # status_forcelist and the given method is in the allowed_methods 11:55:58 cause = ResponseError.GENERIC_ERROR 11:55:58 if response and response.status: 11:55:58 if status_count is not None: 11:55:58 status_count -= 1 11:55:58 cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status) 11:55:58 status = response.status 11:55:58 11:55:58 history = self.history + ( 11:55:58 RequestHistory(method, url, error, status, redirect_location), 11:55:58 ) 11:55:58 11:55:58 new_retry = self.new( 11:55:58 total=total, 11:55:58 connect=connect, 11:55:58 read=read, 11:55:58 redirect=redirect, 11:55:58 status=status_count, 11:55:58 other=other, 11:55:58 history=history, 11:55:58 ) 11:55:58 11:55:58 if new_retry.is_exhausted(): 11:55:58 reason = error or ResponseError(cause) 11:55:58 > raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type] 11:55:58 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 11:55:58 E urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=8183): Max retries exceeded with url: /rests/data/network-topology:network-topology/topology=topology-netconf/node=ROADM-B1 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 11:55:58 11:55:58 ../.tox/tests_tapi/lib/python3.11/site-packages/urllib3/util/retry.py:519: MaxRetryError 11:55:58 11:55:58 During handling of the above exception, another exception occurred: 11:55:58 11:55:58 self = 11:55:58 11:55:58 def test_05_disconnect_roadmb(self): 11:55:58 > response = test_utils.unmount_device("ROADM-B1") 11:55:58 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 11:55:58 11:55:58 transportpce_tests/tapi/test01_abstracted_topology.py:224: 11:55:58 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 11:55:58 transportpce_tests/common/test_utils.py:379: in unmount_device 11:55:58 response = delete_request(url[RESTCONF_VERSION].format('{}', node)) 11:55:58 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 11:55:58 transportpce_tests/common/test_utils.py:133: in delete_request 11:55:58 return requests.request( 11:55:58 ../.tox/tests_tapi/lib/python3.11/site-packages/requests/api.py:59: in request 11:55:58 return session.request(method=method, url=url, **kwargs) 11:55:58 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 11:55:58 ../.tox/tests_tapi/lib/python3.11/site-packages/requests/sessions.py:589: in request 11:55:58 resp = self.send(prep, **send_kwargs) 11:55:58 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 11:55:58 ../.tox/tests_tapi/lib/python3.11/site-packages/requests/sessions.py:703: in send 11:55:58 r = adapter.send(request, **kwargs) 11:55:58 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 11:55:58 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 11:55:58 11:55:58 self = 11:55:58 request = , stream = False 11:55:58 timeout = Timeout(connect=30, read=30, total=None), verify = True, cert = None 11:55:58 proxies = OrderedDict() 11:55:58 11:55:58 def send( 11:55:58 self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 11:55:58 ): 11:55:58 """Sends PreparedRequest object. Returns Response object. 
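A hypothetical reconstruction of the call behind test_utils.unmount_device("ROADM-B1"), built only from the method, URL, port and headers visible in this log; the real helper assembles the URL from its RESTCONF_VERSION template in transportpce_tests/common/test_utils.py:

import requests

URL = ("http://localhost:8183/rests/data/network-topology:network-topology"
       "/topology=topology-netconf/node=ROADM-B1")

try:
    response = requests.request(
        "DELETE",
        URL,
        headers={"Content-Type": "application/json",
                 "Accept": "application/json",
                 "Authorization": "Basic YWRtaW46YWRtaW4="},  # as shown elsewhere in this log
        timeout=(30, 30),
    )
    print(response.status_code)
except requests.exceptions.ConnectionError as exc:
    # With nothing listening on 8183, this reproduces the ConnectionError reported above.
    print(exc)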
11:55:58 11:55:58 :param request: The :class:`PreparedRequest ` being sent. 11:55:58 :param stream: (optional) Whether to stream the request content. 11:55:58 :param timeout: (optional) How long to wait for the server to send 11:55:58 data before giving up, as a float, or a :ref:`(connect timeout, 11:55:58 read timeout) ` tuple. 11:55:58 :type timeout: float or tuple or urllib3 Timeout object 11:55:58 :param verify: (optional) Either a boolean, in which case it controls whether 11:55:58 we verify the server's TLS certificate, or a string, in which case it 11:55:58 must be a path to a CA bundle to use 11:55:58 :param cert: (optional) Any user-provided SSL certificate to be trusted. 11:55:58 :param proxies: (optional) The proxies dictionary to apply to the request. 11:55:58 :rtype: requests.Response 11:55:58 """ 11:55:58 11:55:58 try: 11:55:58 conn = self.get_connection_with_tls_context( 11:55:58 request, verify, proxies=proxies, cert=cert 11:55:58 ) 11:55:58 except LocationValueError as e: 11:55:58 raise InvalidURL(e, request=request) 11:55:58 11:55:58 self.cert_verify(conn, request.url, verify, cert) 11:55:58 url = self.request_url(request, proxies) 11:55:58 self.add_headers( 11:55:58 request, 11:55:58 stream=stream, 11:55:58 timeout=timeout, 11:55:58 verify=verify, 11:55:58 cert=cert, 11:55:58 proxies=proxies, 11:55:58 ) 11:55:58 11:55:58 chunked = not (request.body is None or "Content-Length" in request.headers) 11:55:58 11:55:58 if isinstance(timeout, tuple): 11:55:58 try: 11:55:58 connect, read = timeout 11:55:58 timeout = TimeoutSauce(connect=connect, read=read) 11:55:58 except ValueError: 11:55:58 raise ValueError( 11:55:58 f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 11:55:58 f"or a single float to set both timeouts to the same value." 11:55:58 ) 11:55:58 elif isinstance(timeout, TimeoutSauce): 11:55:58 pass 11:55:58 else: 11:55:58 timeout = TimeoutSauce(connect=timeout, read=timeout) 11:55:58 11:55:58 try: 11:55:58 resp = conn.urlopen( 11:55:58 method=request.method, 11:55:58 url=url, 11:55:58 body=request.body, 11:55:58 headers=request.headers, 11:55:58 redirect=False, 11:55:58 assert_same_host=False, 11:55:58 preload_content=False, 11:55:58 decode_content=False, 11:55:58 retries=self.max_retries, 11:55:58 timeout=timeout, 11:55:58 chunked=chunked, 11:55:58 ) 11:55:58 11:55:58 except (ProtocolError, OSError) as err: 11:55:58 raise ConnectionError(err, request=request) 11:55:58 11:55:58 except MaxRetryError as e: 11:55:58 if isinstance(e.reason, ConnectTimeoutError): 11:55:58 # TODO: Remove this in 3.0.0: see #2811 11:55:58 if not isinstance(e.reason, NewConnectionError): 11:55:58 raise ConnectTimeout(e, request=request) 11:55:58 11:55:58 if isinstance(e.reason, ResponseError): 11:55:58 raise RetryError(e, request=request) 11:55:58 11:55:58 if isinstance(e.reason, _ProxyError): 11:55:58 raise ProxyError(e, request=request) 11:55:58 11:55:58 if isinstance(e.reason, _SSLError): 11:55:58 # This branch is for urllib3 v1.22 and later. 
11:55:58 raise SSLError(e, request=request) 11:55:58 11:55:58 > raise ConnectionError(e, request=request) 11:55:58 E requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=8183): Max retries exceeded with url: /rests/data/network-topology:network-topology/topology=topology-netconf/node=ROADM-B1 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 11:55:58 11:55:58 ../.tox/tests_tapi/lib/python3.11/site-packages/requests/adapters.py:700: ConnectionError 11:55:58 ----------------------------- Captured stdout call ----------------------------- 11:55:58 execution of test_05_disconnect_roadmb 11:55:58 __________________ TransportTapitesting.test_06_connect_xpdra __________________ 11:55:58 11:55:58 self = 11:55:58 11:55:58 def _new_conn(self) -> socket.socket: 11:55:58 """Establish a socket connection and set nodelay settings on it. 11:55:58 11:55:58 :return: New socket connection. 11:55:58 """ 11:55:58 try: 11:55:58 > sock = connection.create_connection( 11:55:58 (self._dns_host, self.port), 11:55:58 self.timeout, 11:55:58 source_address=self.source_address, 11:55:58 socket_options=self.socket_options, 11:55:58 ) 11:55:58 11:55:58 ../.tox/tests_tapi/lib/python3.11/site-packages/urllib3/connection.py:198: 11:55:58 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 11:55:58 ../.tox/tests_tapi/lib/python3.11/site-packages/urllib3/util/connection.py:85: in create_connection 11:55:58 raise err 11:55:58 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 11:55:58 11:55:58 address = ('localhost', 8183), timeout = 30, source_address = None 11:55:58 socket_options = [(6, 1, 1)] 11:55:58 11:55:58 def create_connection( 11:55:58 address: tuple[str, int], 11:55:58 timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 11:55:58 source_address: tuple[str, int] | None = None, 11:55:58 socket_options: _TYPE_SOCKET_OPTIONS | None = None, 11:55:58 ) -> socket.socket: 11:55:58 """Connect to *address* and return the socket object. 11:55:58 11:55:58 Convenience function. Connect to *address* (a 2-tuple ``(host, 11:55:58 port)``) and return the socket object. Passing the optional 11:55:58 *timeout* parameter will set the timeout on the socket instance 11:55:58 before attempting to connect. If no *timeout* is supplied, the 11:55:58 global default timeout setting returned by :func:`socket.getdefaulttimeout` 11:55:58 is used. If *source_address* is set it must be a tuple of (host, port) 11:55:58 for the socket to bind as a source address before making the connection. 11:55:58 An host of '' or port 0 tells the OS to use the default. 11:55:58 """ 11:55:58 11:55:58 host, port = address 11:55:58 if host.startswith("["): 11:55:58 host = host.strip("[]") 11:55:58 err = None 11:55:58 11:55:58 # Using the value from allowed_gai_family() in the context of getaddrinfo lets 11:55:58 # us select whether to work with IPv4 DNS records, IPv6 records, or both. 11:55:58 # The original create_connection function always returns all records. 
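create_connection() above is where the [Errno 111] originates; a quick standalone probe, assuming the controller is expected to listen on localhost:8183 as in these tests:

import socket

# A ConnectionRefusedError here means nothing is listening on the port, which is
# exactly what every failure in this run reports.
try:
    with socket.create_connection(("localhost", 8183), timeout=5):
        print("RESTCONF endpoint on localhost:8183 is reachable")
except (ConnectionRefusedError, socket.timeout) as exc:
    print(f"localhost:8183 not reachable: {exc}")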
11:55:58 family = allowed_gai_family() 11:55:58 11:55:58 try: 11:55:58 host.encode("idna") 11:55:58 except UnicodeError: 11:55:58 raise LocationParseError(f"'{host}', label empty or too long") from None 11:55:58 11:55:58 for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM): 11:55:58 af, socktype, proto, canonname, sa = res 11:55:58 sock = None 11:55:58 try: 11:55:58 sock = socket.socket(af, socktype, proto) 11:55:58 11:55:58 # If provided, set socket level options before connecting. 11:55:58 _set_socket_options(sock, socket_options) 11:55:58 11:55:58 if timeout is not _DEFAULT_TIMEOUT: 11:55:58 sock.settimeout(timeout) 11:55:58 if source_address: 11:55:58 sock.bind(source_address) 11:55:58 > sock.connect(sa) 11:55:58 E ConnectionRefusedError: [Errno 111] Connection refused 11:55:58 11:55:58 ../.tox/tests_tapi/lib/python3.11/site-packages/urllib3/util/connection.py:73: ConnectionRefusedError 11:55:58 11:55:58 The above exception was the direct cause of the following exception: 11:55:58 11:55:58 self = 11:55:58 method = 'PUT' 11:55:58 url = '/rests/data/network-topology:network-topology/topology=topology-netconf/node=XPDR-A1' 11:55:58 body = '{"node": [{"node-id": "XPDR-A1", "netconf-node-topology:netconf-node": {"netconf-node-topology:host": "127.0.0.1", "n...ff-millis": 1800000, "netconf-node-topology:backoff-multiplier": 1.5, "netconf-node-topology:keepalive-delay": 120}}]}' 11:55:58 headers = {'User-Agent': 'python-requests/2.32.4', 'Accept-Encoding': 'gzip, deflate', 'Accept': 'application/json', 'Connection': 'keep-alive', 'Content-Type': 'application/json', 'Content-Length': '709', 'Authorization': 'Basic YWRtaW46YWRtaW4='} 11:55:58 retries = Retry(total=0, connect=None, read=False, redirect=None, status=None) 11:55:58 redirect = False, assert_same_host = False 11:55:58 timeout = Timeout(connect=30, read=30, total=None), pool_timeout = None 11:55:58 release_conn = False, chunked = False, body_pos = None, preload_content = False 11:55:58 decode_content = False, response_kw = {} 11:55:58 parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/rests/data/network-topology:network-topology/topology=topology-netconf/node=XPDR-A1', query=None, fragment=None) 11:55:58 destination_scheme = None, conn = None, release_this_conn = True 11:55:58 http_tunnel_required = False, err = None, clean_exit = False 11:55:58 11:55:58 def urlopen( # type: ignore[override] 11:55:58 self, 11:55:58 method: str, 11:55:58 url: str, 11:55:58 body: _TYPE_BODY | None = None, 11:55:58 headers: typing.Mapping[str, str] | None = None, 11:55:58 retries: Retry | bool | int | None = None, 11:55:58 redirect: bool = True, 11:55:58 assert_same_host: bool = True, 11:55:58 timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 11:55:58 pool_timeout: int | None = None, 11:55:58 release_conn: bool | None = None, 11:55:58 chunked: bool = False, 11:55:58 body_pos: _TYPE_BODY_POSITION | None = None, 11:55:58 preload_content: bool = True, 11:55:58 decode_content: bool = True, 11:55:58 **response_kw: typing.Any, 11:55:58 ) -> BaseHTTPResponse: 11:55:58 """ 11:55:58 Get a connection from the pool and perform an HTTP request. This is the 11:55:58 lowest level call for making a request, so you'll need to specify all 11:55:58 the raw details. 11:55:58 11:55:58 .. note:: 11:55:58 11:55:58 More commonly, it's appropriate to use a convenience method 11:55:58 such as :meth:`request`. 11:55:58 11:55:58 .. 
note:: 11:55:58 11:55:58 `release_conn` will only behave as expected if 11:55:58 `preload_content=False` because we want to make 11:55:58 `preload_content=False` the default behaviour someday soon without 11:55:58 breaking backwards compatibility. 11:55:58 11:55:58 :param method: 11:55:58 HTTP request method (such as GET, POST, PUT, etc.) 11:55:58 11:55:58 :param url: 11:55:58 The URL to perform the request on. 11:55:58 11:55:58 :param body: 11:55:58 Data to send in the request body, either :class:`str`, :class:`bytes`, 11:55:58 an iterable of :class:`str`/:class:`bytes`, or a file-like object. 11:55:58 11:55:58 :param headers: 11:55:58 Dictionary of custom headers to send, such as User-Agent, 11:55:58 If-None-Match, etc. If None, pool headers are used. If provided, 11:55:58 these headers completely replace any pool-specific headers. 11:55:58 11:55:58 :param retries: 11:55:58 Configure the number of retries to allow before raising a 11:55:58 :class:`~urllib3.exceptions.MaxRetryError` exception. 11:55:58 11:55:58 If ``None`` (default) will retry 3 times, see ``Retry.DEFAULT``. Pass a 11:55:58 :class:`~urllib3.util.retry.Retry` object for fine-grained control 11:55:58 over different types of retries. 11:55:58 Pass an integer number to retry connection errors that many times, 11:55:58 but no other types of errors. Pass zero to never retry. 11:55:58 11:55:58 If ``False``, then retries are disabled and any exception is raised 11:55:58 immediately. Also, instead of raising a MaxRetryError on redirects, 11:55:58 the redirect response will be returned. 11:55:58 11:55:58 :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. 11:55:58 11:55:58 :param redirect: 11:55:58 If True, automatically handle redirects (status codes 301, 302, 11:55:58 303, 307, 308). Each redirect counts as a retry. Disabling retries 11:55:58 will disable redirect, too. 11:55:58 11:55:58 :param assert_same_host: 11:55:58 If ``True``, will make sure that the host of the pool requests is 11:55:58 consistent else will raise HostChangedError. When ``False``, you can 11:55:58 use the pool on an HTTP proxy and request foreign hosts. 11:55:58 11:55:58 :param timeout: 11:55:58 If specified, overrides the default timeout for this one 11:55:58 request. It may be a float (in seconds) or an instance of 11:55:58 :class:`urllib3.util.Timeout`. 11:55:58 11:55:58 :param pool_timeout: 11:55:58 If set and the pool is set to block=True, then this method will 11:55:58 block for ``pool_timeout`` seconds and raise EmptyPoolError if no 11:55:58 connection is available within the time period. 11:55:58 11:55:58 :param bool preload_content: 11:55:58 If True, the response's body will be preloaded into memory. 11:55:58 11:55:58 :param bool decode_content: 11:55:58 If True, will attempt to decode the body based on the 11:55:58 'content-encoding' header. 11:55:58 11:55:58 :param release_conn: 11:55:58 If False, then the urlopen call will not release the connection 11:55:58 back into the pool once a response is received (but will release if 11:55:58 you read the entire contents of the response such as when 11:55:58 `preload_content=True`). This is useful if you're not preloading 11:55:58 the response's content immediately. You will need to call 11:55:58 ``r.release_conn()`` on the response ``r`` to return the connection 11:55:58 back into the pool. If None, it takes the value of ``preload_content`` 11:55:58 which defaults to ``True``. 
11:55:58 11:55:58 :param bool chunked: 11:55:58 If True, urllib3 will send the body using chunked transfer 11:55:58 encoding. Otherwise, urllib3 will send the body using the standard 11:55:58 content-length form. Defaults to False. 11:55:58 11:55:58 :param int body_pos: 11:55:58 Position to seek to in file-like body in the event of a retry or 11:55:58 redirect. Typically this won't need to be set because urllib3 will 11:55:58 auto-populate the value when needed. 11:55:58 """ 11:55:58 parsed_url = parse_url(url) 11:55:58 destination_scheme = parsed_url.scheme 11:55:58 11:55:58 if headers is None: 11:55:58 headers = self.headers 11:55:58 11:55:58 if not isinstance(retries, Retry): 11:55:58 retries = Retry.from_int(retries, redirect=redirect, default=self.retries) 11:55:58 11:55:58 if release_conn is None: 11:55:58 release_conn = preload_content 11:55:58 11:55:58 # Check host 11:55:58 if assert_same_host and not self.is_same_host(url): 11:55:58 raise HostChangedError(self, url, retries) 11:55:58 11:55:58 # Ensure that the URL we're connecting to is properly encoded 11:55:58 if url.startswith("/"): 11:55:58 url = to_str(_encode_target(url)) 11:55:58 else: 11:55:58 url = to_str(parsed_url.url) 11:55:58 11:55:58 conn = None 11:55:58 11:55:58 # Track whether `conn` needs to be released before 11:55:58 # returning/raising/recursing. Update this variable if necessary, and 11:55:58 # leave `release_conn` constant throughout the function. That way, if 11:55:58 # the function recurses, the original value of `release_conn` will be 11:55:58 # passed down into the recursive call, and its value will be respected. 11:55:58 # 11:55:58 # See issue #651 [1] for details. 11:55:58 # 11:55:58 # [1] 11:55:58 release_this_conn = release_conn 11:55:58 11:55:58 http_tunnel_required = connection_requires_http_tunnel( 11:55:58 self.proxy, self.proxy_config, destination_scheme 11:55:58 ) 11:55:58 11:55:58 # Merge the proxy headers. Only done when not using HTTP CONNECT. We 11:55:58 # have to copy the headers dict so we can safely change it without those 11:55:58 # changes being reflected in anyone else's copy. 11:55:58 if not http_tunnel_required: 11:55:58 headers = headers.copy() # type: ignore[attr-defined] 11:55:58 headers.update(self.proxy_headers) # type: ignore[union-attr] 11:55:58 11:55:58 # Must keep the exception bound to a separate variable or else Python 3 11:55:58 # complains about UnboundLocalError. 11:55:58 err = None 11:55:58 11:55:58 # Keep track of whether we cleanly exited the except block. This 11:55:58 # ensures we do proper cleanup in finally. 11:55:58 clean_exit = False 11:55:58 11:55:58 # Rewind body position, if needed. Record current position 11:55:58 # for future rewinds in the event of a redirect/retry. 11:55:58 body_pos = set_file_position(body, body_pos) 11:55:58 11:55:58 try: 11:55:58 # Request a connection from the queue. 11:55:58 timeout_obj = self._get_timeout(timeout) 11:55:58 conn = self._get_conn(timeout=pool_timeout) 11:55:58 11:55:58 conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] 11:55:58 11:55:58 # Is this a closed/new connection that requires CONNECT tunnelling? 
11:55:58 if self.proxy is not None and http_tunnel_required and conn.is_closed: 11:55:58 try: 11:55:58 self._prepare_proxy(conn) 11:55:58 except (BaseSSLError, OSError, SocketTimeout) as e: 11:55:58 self._raise_timeout( 11:55:58 err=e, url=self.proxy.url, timeout_value=conn.timeout 11:55:58 ) 11:55:58 raise 11:55:58 11:55:58 # If we're going to release the connection in ``finally:``, then 11:55:58 # the response doesn't need to know about the connection. Otherwise 11:55:58 # it will also try to release it and we'll have a double-release 11:55:58 # mess. 11:55:58 response_conn = conn if not release_conn else None 11:55:58 11:55:58 # Make the request on the HTTPConnection object 11:55:58 > response = self._make_request( 11:55:58 conn, 11:55:58 method, 11:55:58 url, 11:55:58 timeout=timeout_obj, 11:55:58 body=body, 11:55:58 headers=headers, 11:55:58 chunked=chunked, 11:55:58 retries=retries, 11:55:58 response_conn=response_conn, 11:55:58 preload_content=preload_content, 11:55:58 decode_content=decode_content, 11:55:58 **response_kw, 11:55:58 ) 11:55:58 11:55:58 ../.tox/tests_tapi/lib/python3.11/site-packages/urllib3/connectionpool.py:787: 11:55:58 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 11:55:58 ../.tox/tests_tapi/lib/python3.11/site-packages/urllib3/connectionpool.py:493: in _make_request 11:55:58 conn.request( 11:55:58 ../.tox/tests_tapi/lib/python3.11/site-packages/urllib3/connection.py:494: in request 11:55:58 self.endheaders() 11:55:58 /opt/pyenv/versions/3.11.10/lib/python3.11/http/client.py:1298: in endheaders 11:55:58 self._send_output(message_body, encode_chunked=encode_chunked) 11:55:58 /opt/pyenv/versions/3.11.10/lib/python3.11/http/client.py:1058: in _send_output 11:55:58 self.send(msg) 11:55:58 /opt/pyenv/versions/3.11.10/lib/python3.11/http/client.py:996: in send 11:55:58 self.connect() 11:55:58 ../.tox/tests_tapi/lib/python3.11/site-packages/urllib3/connection.py:325: in connect 11:55:58 self.sock = self._new_conn() 11:55:58 ^^^^^^^^^^^^^^^^ 11:55:58 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 11:55:58 11:55:58 self = 11:55:58 11:55:58 def _new_conn(self) -> socket.socket: 11:55:58 """Establish a socket connection and set nodelay settings on it. 11:55:58 11:55:58 :return: New socket connection. 11:55:58 """ 11:55:58 try: 11:55:58 sock = connection.create_connection( 11:55:58 (self._dns_host, self.port), 11:55:58 self.timeout, 11:55:58 source_address=self.source_address, 11:55:58 socket_options=self.socket_options, 11:55:58 ) 11:55:58 except socket.gaierror as e: 11:55:58 raise NameResolutionError(self.host, self, e) from e 11:55:58 except SocketTimeout as e: 11:55:58 raise ConnectTimeoutError( 11:55:58 self, 11:55:58 f"Connection to {self.host} timed out. 
(connect timeout={self.timeout})", 11:55:58 ) from e 11:55:58 11:55:58 except OSError as e: 11:55:58 > raise NewConnectionError( 11:55:58 self, f"Failed to establish a new connection: {e}" 11:55:58 ) from e 11:55:58 E urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused 11:55:58 11:55:58 ../.tox/tests_tapi/lib/python3.11/site-packages/urllib3/connection.py:213: NewConnectionError 11:55:58 11:55:58 The above exception was the direct cause of the following exception: 11:55:58 11:55:58 self = 11:55:58 request = , stream = False 11:55:58 timeout = Timeout(connect=30, read=30, total=None), verify = True, cert = None 11:55:58 proxies = OrderedDict() 11:55:58 11:55:58 def send( 11:55:58 self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 11:55:58 ): 11:55:58 """Sends PreparedRequest object. Returns Response object. 11:55:58 11:55:58 :param request: The :class:`PreparedRequest ` being sent. 11:55:58 :param stream: (optional) Whether to stream the request content. 11:55:58 :param timeout: (optional) How long to wait for the server to send 11:55:58 data before giving up, as a float, or a :ref:`(connect timeout, 11:55:58 read timeout) ` tuple. 11:55:58 :type timeout: float or tuple or urllib3 Timeout object 11:55:58 :param verify: (optional) Either a boolean, in which case it controls whether 11:55:58 we verify the server's TLS certificate, or a string, in which case it 11:55:58 must be a path to a CA bundle to use 11:55:58 :param cert: (optional) Any user-provided SSL certificate to be trusted. 11:55:58 :param proxies: (optional) The proxies dictionary to apply to the request. 11:55:58 :rtype: requests.Response 11:55:58 """ 11:55:58 11:55:58 try: 11:55:58 conn = self.get_connection_with_tls_context( 11:55:58 request, verify, proxies=proxies, cert=cert 11:55:58 ) 11:55:58 except LocationValueError as e: 11:55:58 raise InvalidURL(e, request=request) 11:55:58 11:55:58 self.cert_verify(conn, request.url, verify, cert) 11:55:58 url = self.request_url(request, proxies) 11:55:58 self.add_headers( 11:55:58 request, 11:55:58 stream=stream, 11:55:58 timeout=timeout, 11:55:58 verify=verify, 11:55:58 cert=cert, 11:55:58 proxies=proxies, 11:55:58 ) 11:55:58 11:55:58 chunked = not (request.body is None or "Content-Length" in request.headers) 11:55:58 11:55:58 if isinstance(timeout, tuple): 11:55:58 try: 11:55:58 connect, read = timeout 11:55:58 timeout = TimeoutSauce(connect=connect, read=read) 11:55:58 except ValueError: 11:55:58 raise ValueError( 11:55:58 f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 11:55:58 f"or a single float to set both timeouts to the same value." 
11:55:58 ) 11:55:58 elif isinstance(timeout, TimeoutSauce): 11:55:58 pass 11:55:58 else: 11:55:58 timeout = TimeoutSauce(connect=timeout, read=timeout) 11:55:58 11:55:58 try: 11:55:58 > resp = conn.urlopen( 11:55:58 method=request.method, 11:55:58 url=url, 11:55:58 body=request.body, 11:55:58 headers=request.headers, 11:55:58 redirect=False, 11:55:58 assert_same_host=False, 11:55:58 preload_content=False, 11:55:58 decode_content=False, 11:55:58 retries=self.max_retries, 11:55:58 timeout=timeout, 11:55:58 chunked=chunked, 11:55:58 ) 11:55:58 11:55:58 ../.tox/tests_tapi/lib/python3.11/site-packages/requests/adapters.py:667: 11:55:58 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 11:55:58 ../.tox/tests_tapi/lib/python3.11/site-packages/urllib3/connectionpool.py:841: in urlopen 11:55:58 retries = retries.increment( 11:55:58 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 11:55:58 11:55:58 self = Retry(total=0, connect=None, read=False, redirect=None, status=None) 11:55:58 method = 'PUT' 11:55:58 url = '/rests/data/network-topology:network-topology/topology=topology-netconf/node=XPDR-A1' 11:55:58 response = None 11:55:58 error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused') 11:55:58 _pool = 11:55:58 _stacktrace = 11:55:58 11:55:58 def increment( 11:55:58 self, 11:55:58 method: str | None = None, 11:55:58 url: str | None = None, 11:55:58 response: BaseHTTPResponse | None = None, 11:55:58 error: Exception | None = None, 11:55:58 _pool: ConnectionPool | None = None, 11:55:58 _stacktrace: TracebackType | None = None, 11:55:58 ) -> Self: 11:55:58 """Return a new Retry object with incremented retry counters. 11:55:58 11:55:58 :param response: A response object, or None, if the server did not 11:55:58 return a response. 11:55:58 :type response: :class:`~urllib3.response.BaseHTTPResponse` 11:55:58 :param Exception error: An error encountered during the request, or 11:55:58 None if the response was received successfully. 11:55:58 11:55:58 :return: A new ``Retry`` object. 11:55:58 """ 11:55:58 if self.total is False and error: 11:55:58 # Disabled, indicate to re-raise the error. 11:55:58 raise reraise(type(error), error, _stacktrace) 11:55:58 11:55:58 total = self.total 11:55:58 if total is not None: 11:55:58 total -= 1 11:55:58 11:55:58 connect = self.connect 11:55:58 read = self.read 11:55:58 redirect = self.redirect 11:55:58 status_count = self.status 11:55:58 other = self.other 11:55:58 cause = "unknown" 11:55:58 status = None 11:55:58 redirect_location = None 11:55:58 11:55:58 if error and self._is_connection_error(error): 11:55:58 # Connect retry? 11:55:58 if connect is False: 11:55:58 raise reraise(type(error), error, _stacktrace) 11:55:58 elif connect is not None: 11:55:58 connect -= 1 11:55:58 11:55:58 elif error and self._is_read_error(error): 11:55:58 # Read retry? 11:55:58 if read is False or method is None or not self._is_method_retryable(method): 11:55:58 raise reraise(type(error), error, _stacktrace) 11:55:58 elif read is not None: 11:55:58 read -= 1 11:55:58 11:55:58 elif error: 11:55:58 # Other retry? 11:55:58 if other is not None: 11:55:58 other -= 1 11:55:58 11:55:58 elif response and response.get_redirect_location(): 11:55:58 # Redirect retry? 
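The timeout normalisation shown above accepts either a single number or a (connect, read) tuple; a small sketch of both forms, with an illustrative URL (the port matches this run, the path is an assumption):

import requests

URL = "http://localhost:8183/rests/data/network-topology:network-topology"

for timeout in (30, (10, 60)):
    # Both forms end up as the Timeout object printed in this traceback;
    # a single number sets connect and read to the same value.
    try:
        requests.get(URL, timeout=timeout, headers={"Accept": "application/json"})
    except requests.exceptions.ConnectionError as exc:
        print(f"timeout={timeout}: {exc}")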
11:55:58 if redirect is not None: 11:55:58 redirect -= 1 11:55:58 cause = "too many redirects" 11:55:58 response_redirect_location = response.get_redirect_location() 11:55:58 if response_redirect_location: 11:55:58 redirect_location = response_redirect_location 11:55:58 status = response.status 11:55:58 11:55:58 else: 11:55:58 # Incrementing because of a server error like a 500 in 11:55:58 # status_forcelist and the given method is in the allowed_methods 11:55:58 cause = ResponseError.GENERIC_ERROR 11:55:58 if response and response.status: 11:55:58 if status_count is not None: 11:55:58 status_count -= 1 11:55:58 cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status) 11:55:58 status = response.status 11:55:58 11:55:58 history = self.history + ( 11:55:58 RequestHistory(method, url, error, status, redirect_location), 11:55:58 ) 11:55:58 11:55:58 new_retry = self.new( 11:55:58 total=total, 11:55:58 connect=connect, 11:55:58 read=read, 11:55:58 redirect=redirect, 11:55:58 status=status_count, 11:55:58 other=other, 11:55:58 history=history, 11:55:58 ) 11:55:58 11:55:58 if new_retry.is_exhausted(): 11:55:58 reason = error or ResponseError(cause) 11:55:58 > raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type] 11:55:58 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 11:55:58 E urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=8183): Max retries exceeded with url: /rests/data/network-topology:network-topology/topology=topology-netconf/node=XPDR-A1 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 11:55:58 11:55:58 ../.tox/tests_tapi/lib/python3.11/site-packages/urllib3/util/retry.py:519: MaxRetryError 11:55:58 11:55:58 During handling of the above exception, another exception occurred: 11:55:58 11:55:58 self = 11:55:58 11:55:58 def test_06_connect_xpdra(self): 11:55:58 > response = test_utils.mount_device("XPDR-A1", ('xpdra', self.NODE_VERSION)) 11:55:58 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 11:55:58 11:55:58 transportpce_tests/tapi/test01_abstracted_topology.py:228: 11:55:58 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 11:55:58 transportpce_tests/common/test_utils.py:362: in mount_device 11:55:58 response = put_request(url[RESTCONF_VERSION].format('{}', node), body) 11:55:58 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 11:55:58 transportpce_tests/common/test_utils.py:124: in put_request 11:55:58 return requests.request( 11:55:58 ../.tox/tests_tapi/lib/python3.11/site-packages/requests/api.py:59: in request 11:55:58 return session.request(method=method, url=url, **kwargs) 11:55:58 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 11:55:58 ../.tox/tests_tapi/lib/python3.11/site-packages/requests/sessions.py:589: in request 11:55:58 resp = self.send(prep, **send_kwargs) 11:55:58 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 11:55:58 ../.tox/tests_tapi/lib/python3.11/site-packages/requests/sessions.py:703: in send 11:55:58 r = adapter.send(request, **kwargs) 11:55:58 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 11:55:58 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 11:55:58 11:55:58 self = 11:55:58 request = , stream = False 11:55:58 timeout = Timeout(connect=30, read=30, total=None), verify = True, cert = None 11:55:58 proxies = OrderedDict() 11:55:58 11:55:58 def send( 11:55:58 self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 11:55:58 ): 11:55:58 """Sends 
PreparedRequest object. Returns Response object. 11:55:58 11:55:58 :param request: The :class:`PreparedRequest ` being sent. 11:55:58 :param stream: (optional) Whether to stream the request content. 11:55:58 :param timeout: (optional) How long to wait for the server to send 11:55:58 data before giving up, as a float, or a :ref:`(connect timeout, 11:55:58 read timeout) ` tuple. 11:55:58 :type timeout: float or tuple or urllib3 Timeout object 11:55:58 :param verify: (optional) Either a boolean, in which case it controls whether 11:55:58 we verify the server's TLS certificate, or a string, in which case it 11:55:58 must be a path to a CA bundle to use 11:55:58 :param cert: (optional) Any user-provided SSL certificate to be trusted. 11:55:58 :param proxies: (optional) The proxies dictionary to apply to the request. 11:55:58 :rtype: requests.Response 11:55:58 """ 11:55:58 11:55:58 try: 11:55:58 conn = self.get_connection_with_tls_context( 11:55:58 request, verify, proxies=proxies, cert=cert 11:55:58 ) 11:55:58 except LocationValueError as e: 11:55:58 raise InvalidURL(e, request=request) 11:55:58 11:55:58 self.cert_verify(conn, request.url, verify, cert) 11:55:58 url = self.request_url(request, proxies) 11:55:58 self.add_headers( 11:55:58 request, 11:55:58 stream=stream, 11:55:58 timeout=timeout, 11:55:58 verify=verify, 11:55:58 cert=cert, 11:55:58 proxies=proxies, 11:55:58 ) 11:55:58 11:55:58 chunked = not (request.body is None or "Content-Length" in request.headers) 11:55:58 11:55:58 if isinstance(timeout, tuple): 11:55:58 try: 11:55:58 connect, read = timeout 11:55:58 timeout = TimeoutSauce(connect=connect, read=read) 11:55:58 except ValueError: 11:55:58 raise ValueError( 11:55:58 f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 11:55:58 f"or a single float to set both timeouts to the same value." 11:55:58 ) 11:55:58 elif isinstance(timeout, TimeoutSauce): 11:55:58 pass 11:55:58 else: 11:55:58 timeout = TimeoutSauce(connect=timeout, read=timeout) 11:55:58 11:55:58 try: 11:55:58 resp = conn.urlopen( 11:55:58 method=request.method, 11:55:58 url=url, 11:55:58 body=request.body, 11:55:58 headers=request.headers, 11:55:58 redirect=False, 11:55:58 assert_same_host=False, 11:55:58 preload_content=False, 11:55:58 decode_content=False, 11:55:58 retries=self.max_retries, 11:55:58 timeout=timeout, 11:55:58 chunked=chunked, 11:55:58 ) 11:55:58 11:55:58 except (ProtocolError, OSError) as err: 11:55:58 raise ConnectionError(err, request=request) 11:55:58 11:55:58 except MaxRetryError as e: 11:55:58 if isinstance(e.reason, ConnectTimeoutError): 11:55:58 # TODO: Remove this in 3.0.0: see #2811 11:55:58 if not isinstance(e.reason, NewConnectionError): 11:55:58 raise ConnectTimeout(e, request=request) 11:55:58 11:55:58 if isinstance(e.reason, ResponseError): 11:55:58 raise RetryError(e, request=request) 11:55:58 11:55:58 if isinstance(e.reason, _ProxyError): 11:55:58 raise ProxyError(e, request=request) 11:55:58 11:55:58 if isinstance(e.reason, _SSLError): 11:55:58 # This branch is for urllib3 v1.22 and later. 
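The max_retries handed to urlopen() above matches the requests adapter default, Retry(0, read=False), which is why a single refused connection is enough to surface as ConnectionError. A session could opt into retries as sketched below; the counts and backoff are illustrative, not what the test suite configures:

import requests
from requests.adapters import HTTPAdapter
from urllib3.util.retry import Retry

session = requests.Session()
session.mount("http://", HTTPAdapter(max_retries=Retry(total=3, connect=3, backoff_factor=0.5)))

try:
    session.delete(
        "http://localhost:8183/rests/data/network-topology:network-topology"
        "/topology=topology-netconf/node=ROADM-B1",
        timeout=(30, 30),
    )
except requests.exceptions.ConnectionError as exc:
    # Still raised once the retry budget is spent while the controller is down.
    print(exc)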
11:55:58 raise SSLError(e, request=request) 11:55:58 11:55:58 > raise ConnectionError(e, request=request) 11:55:58 E requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=8183): Max retries exceeded with url: /rests/data/network-topology:network-topology/topology=topology-netconf/node=XPDR-A1 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 11:55:58 11:55:58 ../.tox/tests_tapi/lib/python3.11/site-packages/requests/adapters.py:700: ConnectionError 11:55:58 ----------------------------- Captured stdout call ----------------------------- 11:55:58 execution of test_06_connect_xpdra 11:55:58 ________________ TransportTapitesting.test_07_check_tapi_topos _________________ 11:55:58 11:55:58 self = 11:55:58 11:55:58 def _new_conn(self) -> socket.socket: 11:55:58 """Establish a socket connection and set nodelay settings on it. 11:55:58 11:55:58 :return: New socket connection. 11:55:58 """ 11:55:58 try: 11:55:58 > sock = connection.create_connection( 11:55:58 (self._dns_host, self.port), 11:55:58 self.timeout, 11:55:58 source_address=self.source_address, 11:55:58 socket_options=self.socket_options, 11:55:58 ) 11:55:58 11:55:58 ../.tox/tests_tapi/lib/python3.11/site-packages/urllib3/connection.py:198: 11:55:58 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 11:55:58 ../.tox/tests_tapi/lib/python3.11/site-packages/urllib3/util/connection.py:85: in create_connection 11:55:58 raise err 11:55:58 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 11:55:58 11:55:58 address = ('localhost', 8183), timeout = 30, source_address = None 11:55:58 socket_options = [(6, 1, 1)] 11:55:58 11:55:58 def create_connection( 11:55:58 address: tuple[str, int], 11:55:58 timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 11:55:58 source_address: tuple[str, int] | None = None, 11:55:58 socket_options: _TYPE_SOCKET_OPTIONS | None = None, 11:55:58 ) -> socket.socket: 11:55:58 """Connect to *address* and return the socket object. 11:55:58 11:55:58 Convenience function. Connect to *address* (a 2-tuple ``(host, 11:55:58 port)``) and return the socket object. Passing the optional 11:55:58 *timeout* parameter will set the timeout on the socket instance 11:55:58 before attempting to connect. If no *timeout* is supplied, the 11:55:58 global default timeout setting returned by :func:`socket.getdefaulttimeout` 11:55:58 is used. If *source_address* is set it must be a tuple of (host, port) 11:55:58 for the socket to bind as a source address before making the connection. 11:55:58 An host of '' or port 0 tells the OS to use the default. 11:55:58 """ 11:55:58 11:55:58 host, port = address 11:55:58 if host.startswith("["): 11:55:58 host = host.strip("[]") 11:55:58 err = None 11:55:58 11:55:58 # Using the value from allowed_gai_family() in the context of getaddrinfo lets 11:55:58 # us select whether to work with IPv4 DNS records, IPv6 records, or both. 11:55:58 # The original create_connection function always returns all records. 
11:55:58 family = allowed_gai_family() 11:55:58 11:55:58 try: 11:55:58 host.encode("idna") 11:55:58 except UnicodeError: 11:55:58 raise LocationParseError(f"'{host}', label empty or too long") from None 11:55:58 11:55:58 for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM): 11:55:58 af, socktype, proto, canonname, sa = res 11:55:58 sock = None 11:55:58 try: 11:55:58 sock = socket.socket(af, socktype, proto) 11:55:58 11:55:58 # If provided, set socket level options before connecting. 11:55:58 _set_socket_options(sock, socket_options) 11:55:58 11:55:58 if timeout is not _DEFAULT_TIMEOUT: 11:55:58 sock.settimeout(timeout) 11:55:58 if source_address: 11:55:58 sock.bind(source_address) 11:55:58 > sock.connect(sa) 11:55:58 E ConnectionRefusedError: [Errno 111] Connection refused 11:55:58 11:55:58 ../.tox/tests_tapi/lib/python3.11/site-packages/urllib3/util/connection.py:73: ConnectionRefusedError 11:55:58 11:55:58 The above exception was the direct cause of the following exception: 11:55:58 11:55:58 self = 11:55:58 method = 'POST', url = '/rests/operations/tapi-topology:get-topology-details' 11:55:58 body = '{"input": {"topology-id": "747c670e-7a07-3dab-b379-5b1cd17402a3"}}' 11:55:58 headers = {'User-Agent': 'python-requests/2.32.4', 'Accept-Encoding': 'gzip, deflate', 'Accept': 'application/json', 'Connection': 'keep-alive', 'Content-Type': 'application/json', 'Content-Length': '66', 'Authorization': 'Basic YWRtaW46YWRtaW4='} 11:55:58 retries = Retry(total=0, connect=None, read=False, redirect=None, status=None) 11:55:58 redirect = False, assert_same_host = False 11:55:58 timeout = Timeout(connect=30, read=30, total=None), pool_timeout = None 11:55:58 release_conn = False, chunked = False, body_pos = None, preload_content = False 11:55:58 decode_content = False, response_kw = {} 11:55:58 parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/rests/operations/tapi-topology:get-topology-details', query=None, fragment=None) 11:55:58 destination_scheme = None, conn = None, release_this_conn = True 11:55:58 http_tunnel_required = False, err = None, clean_exit = False 11:55:58 11:55:58 def urlopen( # type: ignore[override] 11:55:58 self, 11:55:58 method: str, 11:55:58 url: str, 11:55:58 body: _TYPE_BODY | None = None, 11:55:58 headers: typing.Mapping[str, str] | None = None, 11:55:58 retries: Retry | bool | int | None = None, 11:55:58 redirect: bool = True, 11:55:58 assert_same_host: bool = True, 11:55:58 timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 11:55:58 pool_timeout: int | None = None, 11:55:58 release_conn: bool | None = None, 11:55:58 chunked: bool = False, 11:55:58 body_pos: _TYPE_BODY_POSITION | None = None, 11:55:58 preload_content: bool = True, 11:55:58 decode_content: bool = True, 11:55:58 **response_kw: typing.Any, 11:55:58 ) -> BaseHTTPResponse: 11:55:58 """ 11:55:58 Get a connection from the pool and perform an HTTP request. This is the 11:55:58 lowest level call for making a request, so you'll need to specify all 11:55:58 the raw details. 11:55:58 11:55:58 .. note:: 11:55:58 11:55:58 More commonly, it's appropriate to use a convenience method 11:55:58 such as :meth:`request`. 11:55:58 11:55:58 .. note:: 11:55:58 11:55:58 `release_conn` will only behave as expected if 11:55:58 `preload_content=False` because we want to make 11:55:58 `preload_content=False` the default behaviour someday soon without 11:55:58 breaking backwards compatibility. 11:55:58 11:55:58 :param method: 11:55:58 HTTP request method (such as GET, POST, PUT, etc.) 
11:55:58 11:55:58 :param url: 11:55:58 The URL to perform the request on. 11:55:58 11:55:58 :param body: 11:55:58 Data to send in the request body, either :class:`str`, :class:`bytes`, 11:55:58 an iterable of :class:`str`/:class:`bytes`, or a file-like object. 11:55:58 11:55:58 :param headers: 11:55:58 Dictionary of custom headers to send, such as User-Agent, 11:55:58 If-None-Match, etc. If None, pool headers are used. If provided, 11:55:58 these headers completely replace any pool-specific headers. 11:55:58 11:55:58 :param retries: 11:55:58 Configure the number of retries to allow before raising a 11:55:58 :class:`~urllib3.exceptions.MaxRetryError` exception. 11:55:58 11:55:58 If ``None`` (default) will retry 3 times, see ``Retry.DEFAULT``. Pass a 11:55:58 :class:`~urllib3.util.retry.Retry` object for fine-grained control 11:55:58 over different types of retries. 11:55:58 Pass an integer number to retry connection errors that many times, 11:55:58 but no other types of errors. Pass zero to never retry. 11:55:58 11:55:58 If ``False``, then retries are disabled and any exception is raised 11:55:58 immediately. Also, instead of raising a MaxRetryError on redirects, 11:55:58 the redirect response will be returned. 11:55:58 11:55:58 :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. 11:55:58 11:55:58 :param redirect: 11:55:58 If True, automatically handle redirects (status codes 301, 302, 11:55:58 303, 307, 308). Each redirect counts as a retry. Disabling retries 11:55:58 will disable redirect, too. 11:55:58 11:55:58 :param assert_same_host: 11:55:58 If ``True``, will make sure that the host of the pool requests is 11:55:58 consistent else will raise HostChangedError. When ``False``, you can 11:55:58 use the pool on an HTTP proxy and request foreign hosts. 11:55:58 11:55:58 :param timeout: 11:55:58 If specified, overrides the default timeout for this one 11:55:58 request. It may be a float (in seconds) or an instance of 11:55:58 :class:`urllib3.util.Timeout`. 11:55:58 11:55:58 :param pool_timeout: 11:55:58 If set and the pool is set to block=True, then this method will 11:55:58 block for ``pool_timeout`` seconds and raise EmptyPoolError if no 11:55:58 connection is available within the time period. 11:55:58 11:55:58 :param bool preload_content: 11:55:58 If True, the response's body will be preloaded into memory. 11:55:58 11:55:58 :param bool decode_content: 11:55:58 If True, will attempt to decode the body based on the 11:55:58 'content-encoding' header. 11:55:58 11:55:58 :param release_conn: 11:55:58 If False, then the urlopen call will not release the connection 11:55:58 back into the pool once a response is received (but will release if 11:55:58 you read the entire contents of the response such as when 11:55:58 `preload_content=True`). This is useful if you're not preloading 11:55:58 the response's content immediately. You will need to call 11:55:58 ``r.release_conn()`` on the response ``r`` to return the connection 11:55:58 back into the pool. If None, it takes the value of ``preload_content`` 11:55:58 which defaults to ``True``. 11:55:58 11:55:58 :param bool chunked: 11:55:58 If True, urllib3 will send the body using chunked transfer 11:55:58 encoding. Otherwise, urllib3 will send the body using the standard 11:55:58 content-length form. Defaults to False. 11:55:58 11:55:58 :param int body_pos: 11:55:58 Position to seek to in file-like body in the event of a retry or 11:55:58 redirect. 
Typically this won't need to be set because urllib3 will 11:55:58 auto-populate the value when needed. 11:55:58 """ 11:55:58 parsed_url = parse_url(url) 11:55:58 destination_scheme = parsed_url.scheme 11:55:58 11:55:58 if headers is None: 11:55:58 headers = self.headers 11:55:58 11:55:58 if not isinstance(retries, Retry): 11:55:58 retries = Retry.from_int(retries, redirect=redirect, default=self.retries) 11:55:58 11:55:58 if release_conn is None: 11:55:58 release_conn = preload_content 11:55:58 11:55:58 # Check host 11:55:58 if assert_same_host and not self.is_same_host(url): 11:55:58 raise HostChangedError(self, url, retries) 11:55:58 11:55:58 # Ensure that the URL we're connecting to is properly encoded 11:55:58 if url.startswith("/"): 11:55:58 url = to_str(_encode_target(url)) 11:55:58 else: 11:55:58 url = to_str(parsed_url.url) 11:55:58 11:55:58 conn = None 11:55:58 11:55:58 # Track whether `conn` needs to be released before 11:55:58 # returning/raising/recursing. Update this variable if necessary, and 11:55:58 # leave `release_conn` constant throughout the function. That way, if 11:55:58 # the function recurses, the original value of `release_conn` will be 11:55:58 # passed down into the recursive call, and its value will be respected. 11:55:58 # 11:55:58 # See issue #651 [1] for details. 11:55:58 # 11:55:58 # [1] 11:55:58 release_this_conn = release_conn 11:55:58 11:55:58 http_tunnel_required = connection_requires_http_tunnel( 11:55:58 self.proxy, self.proxy_config, destination_scheme 11:55:58 ) 11:55:58 11:55:58 # Merge the proxy headers. Only done when not using HTTP CONNECT. We 11:55:58 # have to copy the headers dict so we can safely change it without those 11:55:58 # changes being reflected in anyone else's copy. 11:55:58 if not http_tunnel_required: 11:55:58 headers = headers.copy() # type: ignore[attr-defined] 11:55:58 headers.update(self.proxy_headers) # type: ignore[union-attr] 11:55:58 11:55:58 # Must keep the exception bound to a separate variable or else Python 3 11:55:58 # complains about UnboundLocalError. 11:55:58 err = None 11:55:58 11:55:58 # Keep track of whether we cleanly exited the except block. This 11:55:58 # ensures we do proper cleanup in finally. 11:55:58 clean_exit = False 11:55:58 11:55:58 # Rewind body position, if needed. Record current position 11:55:58 # for future rewinds in the event of a redirect/retry. 11:55:58 body_pos = set_file_position(body, body_pos) 11:55:58 11:55:58 try: 11:55:58 # Request a connection from the queue. 11:55:58 timeout_obj = self._get_timeout(timeout) 11:55:58 conn = self._get_conn(timeout=pool_timeout) 11:55:58 11:55:58 conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] 11:55:58 11:55:58 # Is this a closed/new connection that requires CONNECT tunnelling? 11:55:58 if self.proxy is not None and http_tunnel_required and conn.is_closed: 11:55:58 try: 11:55:58 self._prepare_proxy(conn) 11:55:58 except (BaseSSLError, OSError, SocketTimeout) as e: 11:55:58 self._raise_timeout( 11:55:58 err=e, url=self.proxy.url, timeout_value=conn.timeout 11:55:58 ) 11:55:58 raise 11:55:58 11:55:58 # If we're going to release the connection in ``finally:``, then 11:55:58 # the response doesn't need to know about the connection. Otherwise 11:55:58 # it will also try to release it and we'll have a double-release 11:55:58 # mess. 
11:55:58 response_conn = conn if not release_conn else None 11:55:58 11:55:58 # Make the request on the HTTPConnection object 11:55:58 > response = self._make_request( 11:55:58 conn, 11:55:58 method, 11:55:58 url, 11:55:58 timeout=timeout_obj, 11:55:58 body=body, 11:55:58 headers=headers, 11:55:58 chunked=chunked, 11:55:58 retries=retries, 11:55:58 response_conn=response_conn, 11:55:58 preload_content=preload_content, 11:55:58 decode_content=decode_content, 11:55:58 **response_kw, 11:55:58 ) 11:55:58 11:55:58 ../.tox/tests_tapi/lib/python3.11/site-packages/urllib3/connectionpool.py:787: 11:55:58 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 11:55:58 ../.tox/tests_tapi/lib/python3.11/site-packages/urllib3/connectionpool.py:493: in _make_request 11:55:58 conn.request( 11:55:58 ../.tox/tests_tapi/lib/python3.11/site-packages/urllib3/connection.py:494: in request 11:55:58 self.endheaders() 11:55:58 /opt/pyenv/versions/3.11.10/lib/python3.11/http/client.py:1298: in endheaders 11:55:58 self._send_output(message_body, encode_chunked=encode_chunked) 11:55:58 /opt/pyenv/versions/3.11.10/lib/python3.11/http/client.py:1058: in _send_output 11:55:58 self.send(msg) 11:55:58 /opt/pyenv/versions/3.11.10/lib/python3.11/http/client.py:996: in send 11:55:58 self.connect() 11:55:58 ../.tox/tests_tapi/lib/python3.11/site-packages/urllib3/connection.py:325: in connect 11:55:58 self.sock = self._new_conn() 11:55:58 ^^^^^^^^^^^^^^^^ 11:55:58 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 11:55:58 11:55:58 self = 11:55:58 11:55:58 def _new_conn(self) -> socket.socket: 11:55:58 """Establish a socket connection and set nodelay settings on it. 11:55:58 11:55:58 :return: New socket connection. 11:55:58 """ 11:55:58 try: 11:55:58 sock = connection.create_connection( 11:55:58 (self._dns_host, self.port), 11:55:58 self.timeout, 11:55:58 source_address=self.source_address, 11:55:58 socket_options=self.socket_options, 11:55:58 ) 11:55:58 except socket.gaierror as e: 11:55:58 raise NameResolutionError(self.host, self, e) from e 11:55:58 except SocketTimeout as e: 11:55:58 raise ConnectTimeoutError( 11:55:58 self, 11:55:58 f"Connection to {self.host} timed out. (connect timeout={self.timeout})", 11:55:58 ) from e 11:55:58 11:55:58 except OSError as e: 11:55:58 > raise NewConnectionError( 11:55:58 self, f"Failed to establish a new connection: {e}" 11:55:58 ) from e 11:55:58 E urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused 11:55:58 11:55:58 ../.tox/tests_tapi/lib/python3.11/site-packages/urllib3/connection.py:213: NewConnectionError 11:55:58 11:55:58 The above exception was the direct cause of the following exception: 11:55:58 11:55:58 self = 11:55:58 request = , stream = False 11:55:58 timeout = Timeout(connect=30, read=30, total=None), verify = True, cert = None 11:55:58 proxies = OrderedDict() 11:55:58 11:55:58 def send( 11:55:58 self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 11:55:58 ): 11:55:58 """Sends PreparedRequest object. Returns Response object. 11:55:58 11:55:58 :param request: The :class:`PreparedRequest ` being sent. 11:55:58 :param stream: (optional) Whether to stream the request content. 11:55:58 :param timeout: (optional) How long to wait for the server to send 11:55:58 data before giving up, as a float, or a :ref:`(connect timeout, 11:55:58 read timeout) ` tuple. 
11:55:58 :type timeout: float or tuple or urllib3 Timeout object 11:55:58 :param verify: (optional) Either a boolean, in which case it controls whether 11:55:58 we verify the server's TLS certificate, or a string, in which case it 11:55:58 must be a path to a CA bundle to use 11:55:58 :param cert: (optional) Any user-provided SSL certificate to be trusted. 11:55:58 :param proxies: (optional) The proxies dictionary to apply to the request. 11:55:58 :rtype: requests.Response 11:55:58 """ 11:55:58 11:55:58 try: 11:55:58 conn = self.get_connection_with_tls_context( 11:55:58 request, verify, proxies=proxies, cert=cert 11:55:58 ) 11:55:58 except LocationValueError as e: 11:55:58 raise InvalidURL(e, request=request) 11:55:58 11:55:58 self.cert_verify(conn, request.url, verify, cert) 11:55:58 url = self.request_url(request, proxies) 11:55:58 self.add_headers( 11:55:58 request, 11:55:58 stream=stream, 11:55:58 timeout=timeout, 11:55:58 verify=verify, 11:55:58 cert=cert, 11:55:58 proxies=proxies, 11:55:58 ) 11:55:58 11:55:58 chunked = not (request.body is None or "Content-Length" in request.headers) 11:55:58 11:55:58 if isinstance(timeout, tuple): 11:55:58 try: 11:55:58 connect, read = timeout 11:55:58 timeout = TimeoutSauce(connect=connect, read=read) 11:55:58 except ValueError: 11:55:58 raise ValueError( 11:55:58 f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 11:55:58 f"or a single float to set both timeouts to the same value." 11:55:58 ) 11:55:58 elif isinstance(timeout, TimeoutSauce): 11:55:58 pass 11:55:58 else: 11:55:58 timeout = TimeoutSauce(connect=timeout, read=timeout) 11:55:58 11:55:58 try: 11:55:58 > resp = conn.urlopen( 11:55:58 method=request.method, 11:55:58 url=url, 11:55:58 body=request.body, 11:55:58 headers=request.headers, 11:55:58 redirect=False, 11:55:58 assert_same_host=False, 11:55:58 preload_content=False, 11:55:58 decode_content=False, 11:55:58 retries=self.max_retries, 11:55:58 timeout=timeout, 11:55:58 chunked=chunked, 11:55:58 ) 11:55:58 11:55:58 ../.tox/tests_tapi/lib/python3.11/site-packages/requests/adapters.py:667: 11:55:58 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 11:55:58 ../.tox/tests_tapi/lib/python3.11/site-packages/urllib3/connectionpool.py:841: in urlopen 11:55:58 retries = retries.increment( 11:55:58 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 11:55:58 11:55:58 self = Retry(total=0, connect=None, read=False, redirect=None, status=None) 11:55:58 method = 'POST', url = '/rests/operations/tapi-topology:get-topology-details' 11:55:58 response = None 11:55:58 error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused') 11:55:58 _pool = 11:55:58 _stacktrace = 11:55:58 11:55:58 def increment( 11:55:58 self, 11:55:58 method: str | None = None, 11:55:58 url: str | None = None, 11:55:58 response: BaseHTTPResponse | None = None, 11:55:58 error: Exception | None = None, 11:55:58 _pool: ConnectionPool | None = None, 11:55:58 _stacktrace: TracebackType | None = None, 11:55:58 ) -> Self: 11:55:58 """Return a new Retry object with incremented retry counters. 11:55:58 11:55:58 :param response: A response object, or None, if the server did not 11:55:58 return a response. 11:55:58 :type response: :class:`~urllib3.response.BaseHTTPResponse` 11:55:58 :param Exception error: An error encountered during the request, or 11:55:58 None if the response was received successfully. 11:55:58 11:55:58 :return: A new ``Retry`` object. 
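The traceback quotes urllib3's Retry.increment: the test utilities end up with Retry(total=0, connect=None, read=False), so the very first refused connection exhausts the retry budget, increment() re-raises it as MaxRetryError, and requests wraps that into ConnectionError. A minimal sketch, assuming the urllib3/requests versions installed in the tox venv, of a session that would instead retry refused connections with a backoff (the helper name and retry counts are illustrative, not part of the test suite):

    import requests
    from requests.adapters import HTTPAdapter
    from urllib3.util.retry import Retry

    def session_with_retries(attempts=5, backoff=0.5):
        # Illustrative helper (not in transportpce_tests): retry connection
        # errors a few times instead of failing on the first ECONNREFUSED.
        retry = Retry(total=attempts, connect=attempts, read=False,
                      backoff_factor=backoff,
                      allowed_methods=None)  # None = retry any verb, incl. POST/PUT
        adapter = HTTPAdapter(max_retries=retry)
        session = requests.Session()
        session.mount("http://", adapter)
        session.mount("https://", adapter)
        return session

    # e.g. (endpoint taken from the traceback above, payload omitted):
    # session_with_retries().post(
    #     "http://localhost:8183/rests/operations/tapi-topology:get-topology-details",
    #     json=payload, auth=("admin", "admin"), timeout=(30, 30))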
11:55:58 """ 11:55:58 if self.total is False and error: 11:55:58 # Disabled, indicate to re-raise the error. 11:55:58 raise reraise(type(error), error, _stacktrace) 11:55:58 11:55:58 total = self.total 11:55:58 if total is not None: 11:55:58 total -= 1 11:55:58 11:55:58 connect = self.connect 11:55:58 read = self.read 11:55:58 redirect = self.redirect 11:55:58 status_count = self.status 11:55:58 other = self.other 11:55:58 cause = "unknown" 11:55:58 status = None 11:55:58 redirect_location = None 11:55:58 11:55:58 if error and self._is_connection_error(error): 11:55:58 # Connect retry? 11:55:58 if connect is False: 11:55:58 raise reraise(type(error), error, _stacktrace) 11:55:58 elif connect is not None: 11:55:58 connect -= 1 11:55:58 11:55:58 elif error and self._is_read_error(error): 11:55:58 # Read retry? 11:55:58 if read is False or method is None or not self._is_method_retryable(method): 11:55:58 raise reraise(type(error), error, _stacktrace) 11:55:58 elif read is not None: 11:55:58 read -= 1 11:55:58 11:55:58 elif error: 11:55:58 # Other retry? 11:55:58 if other is not None: 11:55:58 other -= 1 11:55:58 11:55:58 elif response and response.get_redirect_location(): 11:55:58 # Redirect retry? 11:55:58 if redirect is not None: 11:55:58 redirect -= 1 11:55:58 cause = "too many redirects" 11:55:58 response_redirect_location = response.get_redirect_location() 11:55:58 if response_redirect_location: 11:55:58 redirect_location = response_redirect_location 11:55:58 status = response.status 11:55:58 11:55:58 else: 11:55:58 # Incrementing because of a server error like a 500 in 11:55:58 # status_forcelist and the given method is in the allowed_methods 11:55:58 cause = ResponseError.GENERIC_ERROR 11:55:58 if response and response.status: 11:55:58 if status_count is not None: 11:55:58 status_count -= 1 11:55:58 cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status) 11:55:58 status = response.status 11:55:58 11:55:58 history = self.history + ( 11:55:58 RequestHistory(method, url, error, status, redirect_location), 11:55:58 ) 11:55:58 11:55:58 new_retry = self.new( 11:55:58 total=total, 11:55:58 connect=connect, 11:55:58 read=read, 11:55:58 redirect=redirect, 11:55:58 status=status_count, 11:55:58 other=other, 11:55:58 history=history, 11:55:58 ) 11:55:58 11:55:58 if new_retry.is_exhausted(): 11:55:58 reason = error or ResponseError(cause) 11:55:58 > raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type] 11:55:58 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 11:55:58 E urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=8183): Max retries exceeded with url: /rests/operations/tapi-topology:get-topology-details (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 11:55:58 11:55:58 ../.tox/tests_tapi/lib/python3.11/site-packages/urllib3/util/retry.py:519: MaxRetryError 11:55:58 11:55:58 During handling of the above exception, another exception occurred: 11:55:58 11:55:58 self = 11:55:58 11:55:58 def test_07_check_tapi_topos(self): 11:55:58 self.tapi_topo["topology-id"] = test_utils.T0_MULTILAYER_TOPO_UUID 11:55:58 > response = test_utils.transportpce_api_rpc_request( 11:55:58 'tapi-topology', 'get-topology-details', self.tapi_topo) 11:55:58 11:55:58 transportpce_tests/tapi/test01_abstracted_topology.py:233: 11:55:58 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 11:55:58 transportpce_tests/common/test_utils.py:729: in transportpce_api_rpc_request 
11:55:58 response = post_request(url, data) 11:55:58 ^^^^^^^^^^^^^^^^^^^^^^^ 11:55:58 transportpce_tests/common/test_utils.py:142: in post_request 11:55:58 return requests.request( 11:55:58 ../.tox/tests_tapi/lib/python3.11/site-packages/requests/api.py:59: in request 11:55:58 return session.request(method=method, url=url, **kwargs) 11:55:58 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 11:55:58 ../.tox/tests_tapi/lib/python3.11/site-packages/requests/sessions.py:589: in request 11:55:58 resp = self.send(prep, **send_kwargs) 11:55:58 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 11:55:58 ../.tox/tests_tapi/lib/python3.11/site-packages/requests/sessions.py:703: in send 11:55:58 r = adapter.send(request, **kwargs) 11:55:58 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 11:55:58 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 11:55:58 11:55:58 self = 11:55:58 request = , stream = False 11:55:58 timeout = Timeout(connect=30, read=30, total=None), verify = True, cert = None 11:55:58 proxies = OrderedDict() 11:55:58 11:55:58 def send( 11:55:58 self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 11:55:58 ): 11:55:58 """Sends PreparedRequest object. Returns Response object. 11:55:58 11:55:58 :param request: The :class:`PreparedRequest ` being sent. 11:55:58 :param stream: (optional) Whether to stream the request content. 11:55:58 :param timeout: (optional) How long to wait for the server to send 11:55:58 data before giving up, as a float, or a :ref:`(connect timeout, 11:55:58 read timeout) ` tuple. 11:55:58 :type timeout: float or tuple or urllib3 Timeout object 11:55:58 :param verify: (optional) Either a boolean, in which case it controls whether 11:55:58 we verify the server's TLS certificate, or a string, in which case it 11:55:58 must be a path to a CA bundle to use 11:55:58 :param cert: (optional) Any user-provided SSL certificate to be trusted. 11:55:58 :param proxies: (optional) The proxies dictionary to apply to the request. 11:55:58 :rtype: requests.Response 11:55:58 """ 11:55:58 11:55:58 try: 11:55:58 conn = self.get_connection_with_tls_context( 11:55:58 request, verify, proxies=proxies, cert=cert 11:55:58 ) 11:55:58 except LocationValueError as e: 11:55:58 raise InvalidURL(e, request=request) 11:55:58 11:55:58 self.cert_verify(conn, request.url, verify, cert) 11:55:58 url = self.request_url(request, proxies) 11:55:58 self.add_headers( 11:55:58 request, 11:55:58 stream=stream, 11:55:58 timeout=timeout, 11:55:58 verify=verify, 11:55:58 cert=cert, 11:55:58 proxies=proxies, 11:55:58 ) 11:55:58 11:55:58 chunked = not (request.body is None or "Content-Length" in request.headers) 11:55:58 11:55:58 if isinstance(timeout, tuple): 11:55:58 try: 11:55:58 connect, read = timeout 11:55:58 timeout = TimeoutSauce(connect=connect, read=read) 11:55:58 except ValueError: 11:55:58 raise ValueError( 11:55:58 f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 11:55:58 f"or a single float to set both timeouts to the same value." 
11:55:58 ) 11:55:58 elif isinstance(timeout, TimeoutSauce): 11:55:58 pass 11:55:58 else: 11:55:58 timeout = TimeoutSauce(connect=timeout, read=timeout) 11:55:58 11:55:58 try: 11:55:58 resp = conn.urlopen( 11:55:58 method=request.method, 11:55:58 url=url, 11:55:58 body=request.body, 11:55:58 headers=request.headers, 11:55:58 redirect=False, 11:55:58 assert_same_host=False, 11:55:58 preload_content=False, 11:55:58 decode_content=False, 11:55:58 retries=self.max_retries, 11:55:58 timeout=timeout, 11:55:58 chunked=chunked, 11:55:58 ) 11:55:58 11:55:58 except (ProtocolError, OSError) as err: 11:55:58 raise ConnectionError(err, request=request) 11:55:58 11:55:58 except MaxRetryError as e: 11:55:58 if isinstance(e.reason, ConnectTimeoutError): 11:55:58 # TODO: Remove this in 3.0.0: see #2811 11:55:58 if not isinstance(e.reason, NewConnectionError): 11:55:58 raise ConnectTimeout(e, request=request) 11:55:58 11:55:58 if isinstance(e.reason, ResponseError): 11:55:58 raise RetryError(e, request=request) 11:55:58 11:55:58 if isinstance(e.reason, _ProxyError): 11:55:58 raise ProxyError(e, request=request) 11:55:58 11:55:58 if isinstance(e.reason, _SSLError): 11:55:58 # This branch is for urllib3 v1.22 and later. 11:55:58 raise SSLError(e, request=request) 11:55:58 11:55:58 > raise ConnectionError(e, request=request) 11:55:58 E requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=8183): Max retries exceeded with url: /rests/operations/tapi-topology:get-topology-details (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 11:55:58 11:55:58 ../.tox/tests_tapi/lib/python3.11/site-packages/requests/adapters.py:700: ConnectionError 11:55:58 ----------------------------- Captured stdout call ----------------------------- 11:55:58 execution of test_07_check_tapi_topos 11:55:58 __________________ TransportTapitesting.test_08_connect_rdma ___________________ 11:55:58 11:55:58 self = 11:55:58 11:55:58 def _new_conn(self) -> socket.socket: 11:55:58 """Establish a socket connection and set nodelay settings on it. 11:55:58 11:55:58 :return: New socket connection. 11:55:58 """ 11:55:58 try: 11:55:58 > sock = connection.create_connection( 11:55:58 (self._dns_host, self.port), 11:55:58 self.timeout, 11:55:58 source_address=self.source_address, 11:55:58 socket_options=self.socket_options, 11:55:58 ) 11:55:58 11:55:58 ../.tox/tests_tapi/lib/python3.11/site-packages/urllib3/connection.py:198: 11:55:58 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 11:55:58 ../.tox/tests_tapi/lib/python3.11/site-packages/urllib3/util/connection.py:85: in create_connection 11:55:58 raise err 11:55:58 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 11:55:58 11:55:58 address = ('localhost', 8183), timeout = 30, source_address = None 11:55:58 socket_options = [(6, 1, 1)] 11:55:58 11:55:58 def create_connection( 11:55:58 address: tuple[str, int], 11:55:58 timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 11:55:58 source_address: tuple[str, int] | None = None, 11:55:58 socket_options: _TYPE_SOCKET_OPTIONS | None = None, 11:55:58 ) -> socket.socket: 11:55:58 """Connect to *address* and return the socket object. 11:55:58 11:55:58 Convenience function. Connect to *address* (a 2-tuple ``(host, 11:55:58 port)``) and return the socket object. Passing the optional 11:55:58 *timeout* parameter will set the timeout on the socket instance 11:55:58 before attempting to connect. 
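The helper quoted here is urllib3's create_connection; in this run every attempt ends in ConnectionRefusedError (errno 111) because nothing is listening on localhost:8183. A standard-library sketch of the same probe, useful for checking that the TransportPCE RESTCONF port is actually reachable before launching the tests (the function name and timeout value are illustrative assumptions):

    import socket

    def restconf_port_open(host="localhost", port=8183, timeout=5.0):
        # True if something accepts TCP connections on host:port, False on
        # ECONNREFUSED or timeout -- the condition reported throughout this log.
        try:
            with socket.create_connection((host, port), timeout=timeout):
                return True
        except OSError as exc:  # ConnectionRefusedError is an OSError subclass
            print(f"cannot reach {host}:{port}: {exc}")
            return False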
If no *timeout* is supplied, the 11:55:58 global default timeout setting returned by :func:`socket.getdefaulttimeout` 11:55:58 is used. If *source_address* is set it must be a tuple of (host, port) 11:55:58 for the socket to bind as a source address before making the connection. 11:55:58 An host of '' or port 0 tells the OS to use the default. 11:55:58 """ 11:55:58 11:55:58 host, port = address 11:55:58 if host.startswith("["): 11:55:58 host = host.strip("[]") 11:55:58 err = None 11:55:58 11:55:58 # Using the value from allowed_gai_family() in the context of getaddrinfo lets 11:55:58 # us select whether to work with IPv4 DNS records, IPv6 records, or both. 11:55:58 # The original create_connection function always returns all records. 11:55:58 family = allowed_gai_family() 11:55:58 11:55:58 try: 11:55:58 host.encode("idna") 11:55:58 except UnicodeError: 11:55:58 raise LocationParseError(f"'{host}', label empty or too long") from None 11:55:58 11:55:58 for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM): 11:55:58 af, socktype, proto, canonname, sa = res 11:55:58 sock = None 11:55:58 try: 11:55:58 sock = socket.socket(af, socktype, proto) 11:55:58 11:55:58 # If provided, set socket level options before connecting. 11:55:58 _set_socket_options(sock, socket_options) 11:55:58 11:55:58 if timeout is not _DEFAULT_TIMEOUT: 11:55:58 sock.settimeout(timeout) 11:55:58 if source_address: 11:55:58 sock.bind(source_address) 11:55:58 > sock.connect(sa) 11:55:58 E ConnectionRefusedError: [Errno 111] Connection refused 11:55:58 11:55:58 ../.tox/tests_tapi/lib/python3.11/site-packages/urllib3/util/connection.py:73: ConnectionRefusedError 11:55:58 11:55:58 The above exception was the direct cause of the following exception: 11:55:58 11:55:58 self = 11:55:58 method = 'PUT' 11:55:58 url = '/rests/data/network-topology:network-topology/topology=topology-netconf/node=ROADM-A1' 11:55:58 body = '{"node": [{"node-id": "ROADM-A1", "netconf-node-topology:netconf-node": {"netconf-node-topology:host": "127.0.0.1", "...ff-millis": 1800000, "netconf-node-topology:backoff-multiplier": 1.5, "netconf-node-topology:keepalive-delay": 120}}]}' 11:55:58 headers = {'User-Agent': 'python-requests/2.32.4', 'Accept-Encoding': 'gzip, deflate', 'Accept': 'application/json', 'Connection': 'keep-alive', 'Content-Type': 'application/json', 'Content-Length': '710', 'Authorization': 'Basic YWRtaW46YWRtaW4='} 11:55:58 retries = Retry(total=0, connect=None, read=False, redirect=None, status=None) 11:55:58 redirect = False, assert_same_host = False 11:55:58 timeout = Timeout(connect=30, read=30, total=None), pool_timeout = None 11:55:58 release_conn = False, chunked = False, body_pos = None, preload_content = False 11:55:58 decode_content = False, response_kw = {} 11:55:58 parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/rests/data/network-topology:network-topology/topology=topology-netconf/node=ROADM-A1', query=None, fragment=None) 11:55:58 destination_scheme = None, conn = None, release_this_conn = True 11:55:58 http_tunnel_required = False, err = None, clean_exit = False 11:55:58 11:55:58 def urlopen( # type: ignore[override] 11:55:58 self, 11:55:58 method: str, 11:55:58 url: str, 11:55:58 body: _TYPE_BODY | None = None, 11:55:58 headers: typing.Mapping[str, str] | None = None, 11:55:58 retries: Retry | bool | int | None = None, 11:55:58 redirect: bool = True, 11:55:58 assert_same_host: bool = True, 11:55:58 timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 11:55:58 pool_timeout: int | None = None, 11:55:58 
release_conn: bool | None = None, 11:55:58 chunked: bool = False, 11:55:58 body_pos: _TYPE_BODY_POSITION | None = None, 11:55:58 preload_content: bool = True, 11:55:58 decode_content: bool = True, 11:55:58 **response_kw: typing.Any, 11:55:58 ) -> BaseHTTPResponse: 11:55:58 """ 11:55:58 Get a connection from the pool and perform an HTTP request. This is the 11:55:58 lowest level call for making a request, so you'll need to specify all 11:55:58 the raw details. 11:55:58 11:55:58 .. note:: 11:55:58 11:55:58 More commonly, it's appropriate to use a convenience method 11:55:58 such as :meth:`request`. 11:55:58 11:55:58 .. note:: 11:55:58 11:55:58 `release_conn` will only behave as expected if 11:55:58 `preload_content=False` because we want to make 11:55:58 `preload_content=False` the default behaviour someday soon without 11:55:58 breaking backwards compatibility. 11:55:58 11:55:58 :param method: 11:55:58 HTTP request method (such as GET, POST, PUT, etc.) 11:55:58 11:55:58 :param url: 11:55:58 The URL to perform the request on. 11:55:58 11:55:58 :param body: 11:55:58 Data to send in the request body, either :class:`str`, :class:`bytes`, 11:55:58 an iterable of :class:`str`/:class:`bytes`, or a file-like object. 11:55:58 11:55:58 :param headers: 11:55:58 Dictionary of custom headers to send, such as User-Agent, 11:55:58 If-None-Match, etc. If None, pool headers are used. If provided, 11:55:58 these headers completely replace any pool-specific headers. 11:55:58 11:55:58 :param retries: 11:55:58 Configure the number of retries to allow before raising a 11:55:58 :class:`~urllib3.exceptions.MaxRetryError` exception. 11:55:58 11:55:58 If ``None`` (default) will retry 3 times, see ``Retry.DEFAULT``. Pass a 11:55:58 :class:`~urllib3.util.retry.Retry` object for fine-grained control 11:55:58 over different types of retries. 11:55:58 Pass an integer number to retry connection errors that many times, 11:55:58 but no other types of errors. Pass zero to never retry. 11:55:58 11:55:58 If ``False``, then retries are disabled and any exception is raised 11:55:58 immediately. Also, instead of raising a MaxRetryError on redirects, 11:55:58 the redirect response will be returned. 11:55:58 11:55:58 :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. 11:55:58 11:55:58 :param redirect: 11:55:58 If True, automatically handle redirects (status codes 301, 302, 11:55:58 303, 307, 308). Each redirect counts as a retry. Disabling retries 11:55:58 will disable redirect, too. 11:55:58 11:55:58 :param assert_same_host: 11:55:58 If ``True``, will make sure that the host of the pool requests is 11:55:58 consistent else will raise HostChangedError. When ``False``, you can 11:55:58 use the pool on an HTTP proxy and request foreign hosts. 11:55:58 11:55:58 :param timeout: 11:55:58 If specified, overrides the default timeout for this one 11:55:58 request. It may be a float (in seconds) or an instance of 11:55:58 :class:`urllib3.util.Timeout`. 11:55:58 11:55:58 :param pool_timeout: 11:55:58 If set and the pool is set to block=True, then this method will 11:55:58 block for ``pool_timeout`` seconds and raise EmptyPoolError if no 11:55:58 connection is available within the time period. 11:55:58 11:55:58 :param bool preload_content: 11:55:58 If True, the response's body will be preloaded into memory. 11:55:58 11:55:58 :param bool decode_content: 11:55:58 If True, will attempt to decode the body based on the 11:55:58 'content-encoding' header. 
11:55:58 11:55:58 :param release_conn: 11:55:58 If False, then the urlopen call will not release the connection 11:55:58 back into the pool once a response is received (but will release if 11:55:58 you read the entire contents of the response such as when 11:55:58 `preload_content=True`). This is useful if you're not preloading 11:55:58 the response's content immediately. You will need to call 11:55:58 ``r.release_conn()`` on the response ``r`` to return the connection 11:55:58 back into the pool. If None, it takes the value of ``preload_content`` 11:55:58 which defaults to ``True``. 11:55:58 11:55:58 :param bool chunked: 11:55:58 If True, urllib3 will send the body using chunked transfer 11:55:58 encoding. Otherwise, urllib3 will send the body using the standard 11:55:58 content-length form. Defaults to False. 11:55:58 11:55:58 :param int body_pos: 11:55:58 Position to seek to in file-like body in the event of a retry or 11:55:58 redirect. Typically this won't need to be set because urllib3 will 11:55:58 auto-populate the value when needed. 11:55:58 """ 11:55:58 parsed_url = parse_url(url) 11:55:58 destination_scheme = parsed_url.scheme 11:55:58 11:55:58 if headers is None: 11:55:58 headers = self.headers 11:55:58 11:55:58 if not isinstance(retries, Retry): 11:55:58 retries = Retry.from_int(retries, redirect=redirect, default=self.retries) 11:55:58 11:55:58 if release_conn is None: 11:55:58 release_conn = preload_content 11:55:58 11:55:58 # Check host 11:55:58 if assert_same_host and not self.is_same_host(url): 11:55:58 raise HostChangedError(self, url, retries) 11:55:58 11:55:58 # Ensure that the URL we're connecting to is properly encoded 11:55:58 if url.startswith("/"): 11:55:58 url = to_str(_encode_target(url)) 11:55:58 else: 11:55:58 url = to_str(parsed_url.url) 11:55:58 11:55:58 conn = None 11:55:58 11:55:58 # Track whether `conn` needs to be released before 11:55:58 # returning/raising/recursing. Update this variable if necessary, and 11:55:58 # leave `release_conn` constant throughout the function. That way, if 11:55:58 # the function recurses, the original value of `release_conn` will be 11:55:58 # passed down into the recursive call, and its value will be respected. 11:55:58 # 11:55:58 # See issue #651 [1] for details. 11:55:58 # 11:55:58 # [1] 11:55:58 release_this_conn = release_conn 11:55:58 11:55:58 http_tunnel_required = connection_requires_http_tunnel( 11:55:58 self.proxy, self.proxy_config, destination_scheme 11:55:58 ) 11:55:58 11:55:58 # Merge the proxy headers. Only done when not using HTTP CONNECT. We 11:55:58 # have to copy the headers dict so we can safely change it without those 11:55:58 # changes being reflected in anyone else's copy. 11:55:58 if not http_tunnel_required: 11:55:58 headers = headers.copy() # type: ignore[attr-defined] 11:55:58 headers.update(self.proxy_headers) # type: ignore[union-attr] 11:55:58 11:55:58 # Must keep the exception bound to a separate variable or else Python 3 11:55:58 # complains about UnboundLocalError. 11:55:58 err = None 11:55:58 11:55:58 # Keep track of whether we cleanly exited the except block. This 11:55:58 # ensures we do proper cleanup in finally. 11:55:58 clean_exit = False 11:55:58 11:55:58 # Rewind body position, if needed. Record current position 11:55:58 # for future rewinds in the event of a redirect/retry. 11:55:58 body_pos = set_file_position(body, body_pos) 11:55:58 11:55:58 try: 11:55:58 # Request a connection from the queue. 
11:55:58 timeout_obj = self._get_timeout(timeout) 11:55:58 conn = self._get_conn(timeout=pool_timeout) 11:55:58 11:55:58 conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] 11:55:58 11:55:58 # Is this a closed/new connection that requires CONNECT tunnelling? 11:55:58 if self.proxy is not None and http_tunnel_required and conn.is_closed: 11:55:58 try: 11:55:58 self._prepare_proxy(conn) 11:55:58 except (BaseSSLError, OSError, SocketTimeout) as e: 11:55:58 self._raise_timeout( 11:55:58 err=e, url=self.proxy.url, timeout_value=conn.timeout 11:55:58 ) 11:55:58 raise 11:55:58 11:55:58 # If we're going to release the connection in ``finally:``, then 11:55:58 # the response doesn't need to know about the connection. Otherwise 11:55:58 # it will also try to release it and we'll have a double-release 11:55:58 # mess. 11:55:58 response_conn = conn if not release_conn else None 11:55:58 11:55:58 # Make the request on the HTTPConnection object 11:55:58 > response = self._make_request( 11:55:58 conn, 11:55:58 method, 11:55:58 url, 11:55:58 timeout=timeout_obj, 11:55:58 body=body, 11:55:58 headers=headers, 11:55:58 chunked=chunked, 11:55:58 retries=retries, 11:55:58 response_conn=response_conn, 11:55:58 preload_content=preload_content, 11:55:58 decode_content=decode_content, 11:55:58 **response_kw, 11:55:58 ) 11:55:58 11:55:58 ../.tox/tests_tapi/lib/python3.11/site-packages/urllib3/connectionpool.py:787: 11:55:58 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 11:55:58 ../.tox/tests_tapi/lib/python3.11/site-packages/urllib3/connectionpool.py:493: in _make_request 11:55:58 conn.request( 11:55:58 ../.tox/tests_tapi/lib/python3.11/site-packages/urllib3/connection.py:494: in request 11:55:58 self.endheaders() 11:55:58 /opt/pyenv/versions/3.11.10/lib/python3.11/http/client.py:1298: in endheaders 11:55:58 self._send_output(message_body, encode_chunked=encode_chunked) 11:55:58 /opt/pyenv/versions/3.11.10/lib/python3.11/http/client.py:1058: in _send_output 11:55:58 self.send(msg) 11:55:58 /opt/pyenv/versions/3.11.10/lib/python3.11/http/client.py:996: in send 11:55:58 self.connect() 11:55:58 ../.tox/tests_tapi/lib/python3.11/site-packages/urllib3/connection.py:325: in connect 11:55:58 self.sock = self._new_conn() 11:55:58 ^^^^^^^^^^^^^^^^ 11:55:58 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 11:55:58 11:55:58 self = 11:55:58 11:55:58 def _new_conn(self) -> socket.socket: 11:55:58 """Establish a socket connection and set nodelay settings on it. 11:55:58 11:55:58 :return: New socket connection. 11:55:58 """ 11:55:58 try: 11:55:58 sock = connection.create_connection( 11:55:58 (self._dns_host, self.port), 11:55:58 self.timeout, 11:55:58 source_address=self.source_address, 11:55:58 socket_options=self.socket_options, 11:55:58 ) 11:55:58 except socket.gaierror as e: 11:55:58 raise NameResolutionError(self.host, self, e) from e 11:55:58 except SocketTimeout as e: 11:55:58 raise ConnectTimeoutError( 11:55:58 self, 11:55:58 f"Connection to {self.host} timed out. 
(connect timeout={self.timeout})", 11:55:58 ) from e 11:55:58 11:55:58 except OSError as e: 11:55:58 > raise NewConnectionError( 11:55:58 self, f"Failed to establish a new connection: {e}" 11:55:58 ) from e 11:55:58 E urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused 11:55:58 11:55:58 ../.tox/tests_tapi/lib/python3.11/site-packages/urllib3/connection.py:213: NewConnectionError 11:55:58 11:55:58 The above exception was the direct cause of the following exception: 11:55:58 11:55:58 self = 11:55:58 request = , stream = False 11:55:58 timeout = Timeout(connect=30, read=30, total=None), verify = True, cert = None 11:55:58 proxies = OrderedDict() 11:55:58 11:55:58 def send( 11:55:58 self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 11:55:58 ): 11:55:58 """Sends PreparedRequest object. Returns Response object. 11:55:58 11:55:58 :param request: The :class:`PreparedRequest ` being sent. 11:55:58 :param stream: (optional) Whether to stream the request content. 11:55:58 :param timeout: (optional) How long to wait for the server to send 11:55:58 data before giving up, as a float, or a :ref:`(connect timeout, 11:55:58 read timeout) ` tuple. 11:55:58 :type timeout: float or tuple or urllib3 Timeout object 11:55:58 :param verify: (optional) Either a boolean, in which case it controls whether 11:55:58 we verify the server's TLS certificate, or a string, in which case it 11:55:58 must be a path to a CA bundle to use 11:55:58 :param cert: (optional) Any user-provided SSL certificate to be trusted. 11:55:58 :param proxies: (optional) The proxies dictionary to apply to the request. 11:55:58 :rtype: requests.Response 11:55:58 """ 11:55:58 11:55:58 try: 11:55:58 conn = self.get_connection_with_tls_context( 11:55:58 request, verify, proxies=proxies, cert=cert 11:55:58 ) 11:55:58 except LocationValueError as e: 11:55:58 raise InvalidURL(e, request=request) 11:55:58 11:55:58 self.cert_verify(conn, request.url, verify, cert) 11:55:58 url = self.request_url(request, proxies) 11:55:58 self.add_headers( 11:55:58 request, 11:55:58 stream=stream, 11:55:58 timeout=timeout, 11:55:58 verify=verify, 11:55:58 cert=cert, 11:55:58 proxies=proxies, 11:55:58 ) 11:55:58 11:55:58 chunked = not (request.body is None or "Content-Length" in request.headers) 11:55:58 11:55:58 if isinstance(timeout, tuple): 11:55:58 try: 11:55:58 connect, read = timeout 11:55:58 timeout = TimeoutSauce(connect=connect, read=read) 11:55:58 except ValueError: 11:55:58 raise ValueError( 11:55:58 f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 11:55:58 f"or a single float to set both timeouts to the same value." 
11:55:58 ) 11:55:58 elif isinstance(timeout, TimeoutSauce): 11:55:58 pass 11:55:58 else: 11:55:58 timeout = TimeoutSauce(connect=timeout, read=timeout) 11:55:58 11:55:58 try: 11:55:58 > resp = conn.urlopen( 11:55:58 method=request.method, 11:55:58 url=url, 11:55:58 body=request.body, 11:55:58 headers=request.headers, 11:55:58 redirect=False, 11:55:58 assert_same_host=False, 11:55:58 preload_content=False, 11:55:58 decode_content=False, 11:55:58 retries=self.max_retries, 11:55:58 timeout=timeout, 11:55:58 chunked=chunked, 11:55:58 ) 11:55:58 11:55:58 ../.tox/tests_tapi/lib/python3.11/site-packages/requests/adapters.py:667: 11:55:58 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 11:55:58 ../.tox/tests_tapi/lib/python3.11/site-packages/urllib3/connectionpool.py:841: in urlopen 11:55:58 retries = retries.increment( 11:55:58 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 11:55:58 11:55:58 self = Retry(total=0, connect=None, read=False, redirect=None, status=None) 11:55:58 method = 'PUT' 11:55:58 url = '/rests/data/network-topology:network-topology/topology=topology-netconf/node=ROADM-A1' 11:55:58 response = None 11:55:58 error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused') 11:55:58 _pool = 11:55:58 _stacktrace = 11:55:58 11:55:58 def increment( 11:55:58 self, 11:55:58 method: str | None = None, 11:55:58 url: str | None = None, 11:55:58 response: BaseHTTPResponse | None = None, 11:55:58 error: Exception | None = None, 11:55:58 _pool: ConnectionPool | None = None, 11:55:58 _stacktrace: TracebackType | None = None, 11:55:58 ) -> Self: 11:55:58 """Return a new Retry object with incremented retry counters. 11:55:58 11:55:58 :param response: A response object, or None, if the server did not 11:55:58 return a response. 11:55:58 :type response: :class:`~urllib3.response.BaseHTTPResponse` 11:55:58 :param Exception error: An error encountered during the request, or 11:55:58 None if the response was received successfully. 11:55:58 11:55:58 :return: A new ``Retry`` object. 11:55:58 """ 11:55:58 if self.total is False and error: 11:55:58 # Disabled, indicate to re-raise the error. 11:55:58 raise reraise(type(error), error, _stacktrace) 11:55:58 11:55:58 total = self.total 11:55:58 if total is not None: 11:55:58 total -= 1 11:55:58 11:55:58 connect = self.connect 11:55:58 read = self.read 11:55:58 redirect = self.redirect 11:55:58 status_count = self.status 11:55:58 other = self.other 11:55:58 cause = "unknown" 11:55:58 status = None 11:55:58 redirect_location = None 11:55:58 11:55:58 if error and self._is_connection_error(error): 11:55:58 # Connect retry? 11:55:58 if connect is False: 11:55:58 raise reraise(type(error), error, _stacktrace) 11:55:58 elif connect is not None: 11:55:58 connect -= 1 11:55:58 11:55:58 elif error and self._is_read_error(error): 11:55:58 # Read retry? 11:55:58 if read is False or method is None or not self._is_method_retryable(method): 11:55:58 raise reraise(type(error), error, _stacktrace) 11:55:58 elif read is not None: 11:55:58 read -= 1 11:55:58 11:55:58 elif error: 11:55:58 # Other retry? 11:55:58 if other is not None: 11:55:58 other -= 1 11:55:58 11:55:58 elif response and response.get_redirect_location(): 11:55:58 # Redirect retry? 
11:55:58 if redirect is not None: 11:55:58 redirect -= 1 11:55:58 cause = "too many redirects" 11:55:58 response_redirect_location = response.get_redirect_location() 11:55:58 if response_redirect_location: 11:55:58 redirect_location = response_redirect_location 11:55:58 status = response.status 11:55:58 11:55:58 else: 11:55:58 # Incrementing because of a server error like a 500 in 11:55:58 # status_forcelist and the given method is in the allowed_methods 11:55:58 cause = ResponseError.GENERIC_ERROR 11:55:58 if response and response.status: 11:55:58 if status_count is not None: 11:55:58 status_count -= 1 11:55:58 cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status) 11:55:58 status = response.status 11:55:58 11:55:58 history = self.history + ( 11:55:58 RequestHistory(method, url, error, status, redirect_location), 11:55:58 ) 11:55:58 11:55:58 new_retry = self.new( 11:55:58 total=total, 11:55:58 connect=connect, 11:55:58 read=read, 11:55:58 redirect=redirect, 11:55:58 status=status_count, 11:55:58 other=other, 11:55:58 history=history, 11:55:58 ) 11:55:58 11:55:58 if new_retry.is_exhausted(): 11:55:58 reason = error or ResponseError(cause) 11:55:58 > raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type] 11:55:58 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 11:55:58 E urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=8183): Max retries exceeded with url: /rests/data/network-topology:network-topology/topology=topology-netconf/node=ROADM-A1 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 11:55:58 11:55:58 ../.tox/tests_tapi/lib/python3.11/site-packages/urllib3/util/retry.py:519: MaxRetryError 11:55:58 11:55:58 During handling of the above exception, another exception occurred: 11:55:58 11:55:58 self = 11:55:58 11:55:58 def test_08_connect_rdma(self): 11:55:58 > response = test_utils.mount_device("ROADM-A1", ('roadma', self.NODE_VERSION)) 11:55:58 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 11:55:58 11:55:58 transportpce_tests/tapi/test01_abstracted_topology.py:240: 11:55:58 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 11:55:58 transportpce_tests/common/test_utils.py:362: in mount_device 11:55:58 response = put_request(url[RESTCONF_VERSION].format('{}', node), body) 11:55:58 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 11:55:58 transportpce_tests/common/test_utils.py:124: in put_request 11:55:58 return requests.request( 11:55:58 ../.tox/tests_tapi/lib/python3.11/site-packages/requests/api.py:59: in request 11:55:58 return session.request(method=method, url=url, **kwargs) 11:55:58 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 11:55:58 ../.tox/tests_tapi/lib/python3.11/site-packages/requests/sessions.py:589: in request 11:55:58 resp = self.send(prep, **send_kwargs) 11:55:58 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 11:55:58 ../.tox/tests_tapi/lib/python3.11/site-packages/requests/sessions.py:703: in send 11:55:58 r = adapter.send(request, **kwargs) 11:55:58 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 11:55:58 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 11:55:58 11:55:58 self = 11:55:58 request = , stream = False 11:55:58 timeout = Timeout(connect=30, read=30, total=None), verify = True, cert = None 11:55:58 proxies = OrderedDict() 11:55:58 11:55:58 def send( 11:55:58 self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 11:55:58 ): 11:55:58 """Sends 
PreparedRequest object. Returns Response object. 11:55:58 11:55:58 :param request: The :class:`PreparedRequest ` being sent. 11:55:58 :param stream: (optional) Whether to stream the request content. 11:55:58 :param timeout: (optional) How long to wait for the server to send 11:55:58 data before giving up, as a float, or a :ref:`(connect timeout, 11:55:58 read timeout) ` tuple. 11:55:58 :type timeout: float or tuple or urllib3 Timeout object 11:55:58 :param verify: (optional) Either a boolean, in which case it controls whether 11:55:58 we verify the server's TLS certificate, or a string, in which case it 11:55:58 must be a path to a CA bundle to use 11:55:58 :param cert: (optional) Any user-provided SSL certificate to be trusted. 11:55:58 :param proxies: (optional) The proxies dictionary to apply to the request. 11:55:58 :rtype: requests.Response 11:55:58 """ 11:55:58 11:55:58 try: 11:55:58 conn = self.get_connection_with_tls_context( 11:55:58 request, verify, proxies=proxies, cert=cert 11:55:58 ) 11:55:58 except LocationValueError as e: 11:55:58 raise InvalidURL(e, request=request) 11:55:58 11:55:58 self.cert_verify(conn, request.url, verify, cert) 11:55:58 url = self.request_url(request, proxies) 11:55:58 self.add_headers( 11:55:58 request, 11:55:58 stream=stream, 11:55:58 timeout=timeout, 11:55:58 verify=verify, 11:55:58 cert=cert, 11:55:58 proxies=proxies, 11:55:58 ) 11:55:58 11:55:58 chunked = not (request.body is None or "Content-Length" in request.headers) 11:55:58 11:55:58 if isinstance(timeout, tuple): 11:55:58 try: 11:55:58 connect, read = timeout 11:55:58 timeout = TimeoutSauce(connect=connect, read=read) 11:55:58 except ValueError: 11:55:58 raise ValueError( 11:55:58 f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 11:55:58 f"or a single float to set both timeouts to the same value." 11:55:58 ) 11:55:58 elif isinstance(timeout, TimeoutSauce): 11:55:58 pass 11:55:58 else: 11:55:58 timeout = TimeoutSauce(connect=timeout, read=timeout) 11:55:58 11:55:58 try: 11:55:58 resp = conn.urlopen( 11:55:58 method=request.method, 11:55:58 url=url, 11:55:58 body=request.body, 11:55:58 headers=request.headers, 11:55:58 redirect=False, 11:55:58 assert_same_host=False, 11:55:58 preload_content=False, 11:55:58 decode_content=False, 11:55:58 retries=self.max_retries, 11:55:58 timeout=timeout, 11:55:58 chunked=chunked, 11:55:58 ) 11:55:58 11:55:58 except (ProtocolError, OSError) as err: 11:55:58 raise ConnectionError(err, request=request) 11:55:58 11:55:58 except MaxRetryError as e: 11:55:58 if isinstance(e.reason, ConnectTimeoutError): 11:55:58 # TODO: Remove this in 3.0.0: see #2811 11:55:58 if not isinstance(e.reason, NewConnectionError): 11:55:58 raise ConnectTimeout(e, request=request) 11:55:58 11:55:58 if isinstance(e.reason, ResponseError): 11:55:58 raise RetryError(e, request=request) 11:55:58 11:55:58 if isinstance(e.reason, _ProxyError): 11:55:58 raise ProxyError(e, request=request) 11:55:58 11:55:58 if isinstance(e.reason, _SSLError): 11:55:58 # This branch is for urllib3 v1.22 and later. 
11:55:58 raise SSLError(e, request=request) 11:55:58 11:55:58 > raise ConnectionError(e, request=request) 11:55:58 E requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=8183): Max retries exceeded with url: /rests/data/network-topology:network-topology/topology=topology-netconf/node=ROADM-A1 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 11:55:58 11:55:58 ../.tox/tests_tapi/lib/python3.11/site-packages/requests/adapters.py:700: ConnectionError 11:55:58 ----------------------------- Captured stdout call ----------------------------- 11:55:58 execution of test_08_connect_rdma 11:55:58 __________________ TransportTapitesting.test_09_connect_rdmc ___________________ 11:55:58 11:55:58 self = 11:55:58 11:55:58 def _new_conn(self) -> socket.socket: 11:55:58 """Establish a socket connection and set nodelay settings on it. 11:55:58 11:55:58 :return: New socket connection. 11:55:58 """ 11:55:58 try: 11:55:58 > sock = connection.create_connection( 11:55:58 (self._dns_host, self.port), 11:55:58 self.timeout, 11:55:58 source_address=self.source_address, 11:55:58 socket_options=self.socket_options, 11:55:58 ) 11:55:58 11:55:58 ../.tox/tests_tapi/lib/python3.11/site-packages/urllib3/connection.py:198: 11:55:58 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 11:55:58 ../.tox/tests_tapi/lib/python3.11/site-packages/urllib3/util/connection.py:85: in create_connection 11:55:58 raise err 11:55:58 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 11:55:58 11:55:58 address = ('localhost', 8183), timeout = 30, source_address = None 11:55:58 socket_options = [(6, 1, 1)] 11:55:58 11:55:58 def create_connection( 11:55:58 address: tuple[str, int], 11:55:58 timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 11:55:58 source_address: tuple[str, int] | None = None, 11:55:58 socket_options: _TYPE_SOCKET_OPTIONS | None = None, 11:55:58 ) -> socket.socket: 11:55:58 """Connect to *address* and return the socket object. 11:55:58 11:55:58 Convenience function. Connect to *address* (a 2-tuple ``(host, 11:55:58 port)``) and return the socket object. Passing the optional 11:55:58 *timeout* parameter will set the timeout on the socket instance 11:55:58 before attempting to connect. If no *timeout* is supplied, the 11:55:58 global default timeout setting returned by :func:`socket.getdefaulttimeout` 11:55:58 is used. If *source_address* is set it must be a tuple of (host, port) 11:55:58 for the socket to bind as a source address before making the connection. 11:55:58 An host of '' or port 0 tells the OS to use the default. 11:55:58 """ 11:55:58 11:55:58 host, port = address 11:55:58 if host.startswith("["): 11:55:58 host = host.strip("[]") 11:55:58 err = None 11:55:58 11:55:58 # Using the value from allowed_gai_family() in the context of getaddrinfo lets 11:55:58 # us select whether to work with IPv4 DNS records, IPv6 records, or both. 11:55:58 # The original create_connection function always returns all records. 
11:55:58 family = allowed_gai_family() 11:55:58 11:55:58 try: 11:55:58 host.encode("idna") 11:55:58 except UnicodeError: 11:55:58 raise LocationParseError(f"'{host}', label empty or too long") from None 11:55:58 11:55:58 for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM): 11:55:58 af, socktype, proto, canonname, sa = res 11:55:58 sock = None 11:55:58 try: 11:55:58 sock = socket.socket(af, socktype, proto) 11:55:58 11:55:58 # If provided, set socket level options before connecting. 11:55:58 _set_socket_options(sock, socket_options) 11:55:58 11:55:58 if timeout is not _DEFAULT_TIMEOUT: 11:55:58 sock.settimeout(timeout) 11:55:58 if source_address: 11:55:58 sock.bind(source_address) 11:55:58 > sock.connect(sa) 11:55:58 E ConnectionRefusedError: [Errno 111] Connection refused 11:55:58 11:55:58 ../.tox/tests_tapi/lib/python3.11/site-packages/urllib3/util/connection.py:73: ConnectionRefusedError 11:55:58 11:55:58 The above exception was the direct cause of the following exception: 11:55:58 11:55:58 self = 11:55:58 method = 'PUT' 11:55:58 url = '/rests/data/network-topology:network-topology/topology=topology-netconf/node=ROADM-C1' 11:55:58 body = '{"node": [{"node-id": "ROADM-C1", "netconf-node-topology:netconf-node": {"netconf-node-topology:host": "127.0.0.1", "...ff-millis": 1800000, "netconf-node-topology:backoff-multiplier": 1.5, "netconf-node-topology:keepalive-delay": 120}}]}' 11:55:58 headers = {'User-Agent': 'python-requests/2.32.4', 'Accept-Encoding': 'gzip, deflate', 'Accept': 'application/json', 'Connection': 'keep-alive', 'Content-Type': 'application/json', 'Content-Length': '710', 'Authorization': 'Basic YWRtaW46YWRtaW4='} 11:55:58 retries = Retry(total=0, connect=None, read=False, redirect=None, status=None) 11:55:58 redirect = False, assert_same_host = False 11:55:58 timeout = Timeout(connect=30, read=30, total=None), pool_timeout = None 11:55:58 release_conn = False, chunked = False, body_pos = None, preload_content = False 11:55:58 decode_content = False, response_kw = {} 11:55:58 parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/rests/data/network-topology:network-topology/topology=topology-netconf/node=ROADM-C1', query=None, fragment=None) 11:55:58 destination_scheme = None, conn = None, release_this_conn = True 11:55:58 http_tunnel_required = False, err = None, clean_exit = False 11:55:58 11:55:58 def urlopen( # type: ignore[override] 11:55:58 self, 11:55:58 method: str, 11:55:58 url: str, 11:55:58 body: _TYPE_BODY | None = None, 11:55:58 headers: typing.Mapping[str, str] | None = None, 11:55:58 retries: Retry | bool | int | None = None, 11:55:58 redirect: bool = True, 11:55:58 assert_same_host: bool = True, 11:55:58 timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 11:55:58 pool_timeout: int | None = None, 11:55:58 release_conn: bool | None = None, 11:55:58 chunked: bool = False, 11:55:58 body_pos: _TYPE_BODY_POSITION | None = None, 11:55:58 preload_content: bool = True, 11:55:58 decode_content: bool = True, 11:55:58 **response_kw: typing.Any, 11:55:58 ) -> BaseHTTPResponse: 11:55:58 """ 11:55:58 Get a connection from the pool and perform an HTTP request. This is the 11:55:58 lowest level call for making a request, so you'll need to specify all 11:55:58 the raw details. 11:55:58 11:55:58 .. note:: 11:55:58 11:55:58 More commonly, it's appropriate to use a convenience method 11:55:58 such as :meth:`request`. 11:55:58 11:55:58 .. 
note:: 11:55:58 11:55:58 `release_conn` will only behave as expected if 11:55:58 `preload_content=False` because we want to make 11:55:58 `preload_content=False` the default behaviour someday soon without 11:55:58 breaking backwards compatibility. 11:55:58 11:55:58 :param method: 11:55:58 HTTP request method (such as GET, POST, PUT, etc.) 11:55:58 11:55:58 :param url: 11:55:58 The URL to perform the request on. 11:55:58 11:55:58 :param body: 11:55:58 Data to send in the request body, either :class:`str`, :class:`bytes`, 11:55:58 an iterable of :class:`str`/:class:`bytes`, or a file-like object. 11:55:58 11:55:58 :param headers: 11:55:58 Dictionary of custom headers to send, such as User-Agent, 11:55:58 If-None-Match, etc. If None, pool headers are used. If provided, 11:55:58 these headers completely replace any pool-specific headers. 11:55:58 11:55:58 :param retries: 11:55:58 Configure the number of retries to allow before raising a 11:55:58 :class:`~urllib3.exceptions.MaxRetryError` exception. 11:55:58 11:55:58 If ``None`` (default) will retry 3 times, see ``Retry.DEFAULT``. Pass a 11:55:58 :class:`~urllib3.util.retry.Retry` object for fine-grained control 11:55:58 over different types of retries. 11:55:58 Pass an integer number to retry connection errors that many times, 11:55:58 but no other types of errors. Pass zero to never retry. 11:55:58 11:55:58 If ``False``, then retries are disabled and any exception is raised 11:55:58 immediately. Also, instead of raising a MaxRetryError on redirects, 11:55:58 the redirect response will be returned. 11:55:58 11:55:58 :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. 11:55:58 11:55:58 :param redirect: 11:55:58 If True, automatically handle redirects (status codes 301, 302, 11:55:58 303, 307, 308). Each redirect counts as a retry. Disabling retries 11:55:58 will disable redirect, too. 11:55:58 11:55:58 :param assert_same_host: 11:55:58 If ``True``, will make sure that the host of the pool requests is 11:55:58 consistent else will raise HostChangedError. When ``False``, you can 11:55:58 use the pool on an HTTP proxy and request foreign hosts. 11:55:58 11:55:58 :param timeout: 11:55:58 If specified, overrides the default timeout for this one 11:55:58 request. It may be a float (in seconds) or an instance of 11:55:58 :class:`urllib3.util.Timeout`. 11:55:58 11:55:58 :param pool_timeout: 11:55:58 If set and the pool is set to block=True, then this method will 11:55:58 block for ``pool_timeout`` seconds and raise EmptyPoolError if no 11:55:58 connection is available within the time period. 11:55:58 11:55:58 :param bool preload_content: 11:55:58 If True, the response's body will be preloaded into memory. 11:55:58 11:55:58 :param bool decode_content: 11:55:58 If True, will attempt to decode the body based on the 11:55:58 'content-encoding' header. 11:55:58 11:55:58 :param release_conn: 11:55:58 If False, then the urlopen call will not release the connection 11:55:58 back into the pool once a response is received (but will release if 11:55:58 you read the entire contents of the response such as when 11:55:58 `preload_content=True`). This is useful if you're not preloading 11:55:58 the response's content immediately. You will need to call 11:55:58 ``r.release_conn()`` on the response ``r`` to return the connection 11:55:58 back into the pool. If None, it takes the value of ``preload_content`` 11:55:58 which defaults to ``True``. 
11:55:58 11:55:58 :param bool chunked: 11:55:58 If True, urllib3 will send the body using chunked transfer 11:55:58 encoding. Otherwise, urllib3 will send the body using the standard 11:55:58 content-length form. Defaults to False. 11:55:58 11:55:58 :param int body_pos: 11:55:58 Position to seek to in file-like body in the event of a retry or 11:55:58 redirect. Typically this won't need to be set because urllib3 will 11:55:58 auto-populate the value when needed. 11:55:58 """ 11:55:58 parsed_url = parse_url(url) 11:55:58 destination_scheme = parsed_url.scheme 11:55:58 11:55:58 if headers is None: 11:55:58 headers = self.headers 11:55:58 11:55:58 if not isinstance(retries, Retry): 11:55:58 retries = Retry.from_int(retries, redirect=redirect, default=self.retries) 11:55:58 11:55:58 if release_conn is None: 11:55:58 release_conn = preload_content 11:55:58 11:55:58 # Check host 11:55:58 if assert_same_host and not self.is_same_host(url): 11:55:58 raise HostChangedError(self, url, retries) 11:55:58 11:55:58 # Ensure that the URL we're connecting to is properly encoded 11:55:58 if url.startswith("/"): 11:55:58 url = to_str(_encode_target(url)) 11:55:58 else: 11:55:58 url = to_str(parsed_url.url) 11:55:58 11:55:58 conn = None 11:55:58 11:55:58 # Track whether `conn` needs to be released before 11:55:58 # returning/raising/recursing. Update this variable if necessary, and 11:55:58 # leave `release_conn` constant throughout the function. That way, if 11:55:58 # the function recurses, the original value of `release_conn` will be 11:55:58 # passed down into the recursive call, and its value will be respected. 11:55:58 # 11:55:58 # See issue #651 [1] for details. 11:55:58 # 11:55:58 # [1] 11:55:58 release_this_conn = release_conn 11:55:58 11:55:58 http_tunnel_required = connection_requires_http_tunnel( 11:55:58 self.proxy, self.proxy_config, destination_scheme 11:55:58 ) 11:55:58 11:55:58 # Merge the proxy headers. Only done when not using HTTP CONNECT. We 11:55:58 # have to copy the headers dict so we can safely change it without those 11:55:58 # changes being reflected in anyone else's copy. 11:55:58 if not http_tunnel_required: 11:55:58 headers = headers.copy() # type: ignore[attr-defined] 11:55:58 headers.update(self.proxy_headers) # type: ignore[union-attr] 11:55:58 11:55:58 # Must keep the exception bound to a separate variable or else Python 3 11:55:58 # complains about UnboundLocalError. 11:55:58 err = None 11:55:58 11:55:58 # Keep track of whether we cleanly exited the except block. This 11:55:58 # ensures we do proper cleanup in finally. 11:55:58 clean_exit = False 11:55:58 11:55:58 # Rewind body position, if needed. Record current position 11:55:58 # for future rewinds in the event of a redirect/retry. 11:55:58 body_pos = set_file_position(body, body_pos) 11:55:58 11:55:58 try: 11:55:58 # Request a connection from the queue. 11:55:58 timeout_obj = self._get_timeout(timeout) 11:55:58 conn = self._get_conn(timeout=pool_timeout) 11:55:58 11:55:58 conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] 11:55:58 11:55:58 # Is this a closed/new connection that requires CONNECT tunnelling? 
11:55:58 if self.proxy is not None and http_tunnel_required and conn.is_closed: 11:55:58 try: 11:55:58 self._prepare_proxy(conn) 11:55:58 except (BaseSSLError, OSError, SocketTimeout) as e: 11:55:58 self._raise_timeout( 11:55:58 err=e, url=self.proxy.url, timeout_value=conn.timeout 11:55:58 ) 11:55:58 raise 11:55:58 11:55:58 # If we're going to release the connection in ``finally:``, then 11:55:58 # the response doesn't need to know about the connection. Otherwise 11:55:58 # it will also try to release it and we'll have a double-release 11:55:58 # mess. 11:55:58 response_conn = conn if not release_conn else None 11:55:58 11:55:58 # Make the request on the HTTPConnection object 11:55:58 > response = self._make_request( 11:55:58 conn, 11:55:58 method, 11:55:58 url, 11:55:58 timeout=timeout_obj, 11:55:58 body=body, 11:55:58 headers=headers, 11:55:58 chunked=chunked, 11:55:58 retries=retries, 11:55:58 response_conn=response_conn, 11:55:58 preload_content=preload_content, 11:55:58 decode_content=decode_content, 11:55:58 **response_kw, 11:55:58 ) 11:55:58 11:55:58 ../.tox/tests_tapi/lib/python3.11/site-packages/urllib3/connectionpool.py:787: 11:55:58 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 11:55:58 ../.tox/tests_tapi/lib/python3.11/site-packages/urllib3/connectionpool.py:493: in _make_request 11:55:58 conn.request( 11:55:58 ../.tox/tests_tapi/lib/python3.11/site-packages/urllib3/connection.py:494: in request 11:55:58 self.endheaders() 11:55:58 /opt/pyenv/versions/3.11.10/lib/python3.11/http/client.py:1298: in endheaders 11:55:58 self._send_output(message_body, encode_chunked=encode_chunked) 11:55:58 /opt/pyenv/versions/3.11.10/lib/python3.11/http/client.py:1058: in _send_output 11:55:58 self.send(msg) 11:55:58 /opt/pyenv/versions/3.11.10/lib/python3.11/http/client.py:996: in send 11:55:58 self.connect() 11:55:58 ../.tox/tests_tapi/lib/python3.11/site-packages/urllib3/connection.py:325: in connect 11:55:58 self.sock = self._new_conn() 11:55:58 ^^^^^^^^^^^^^^^^ 11:55:58 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 11:55:58 11:55:58 self = 11:55:58 11:55:58 def _new_conn(self) -> socket.socket: 11:55:58 """Establish a socket connection and set nodelay settings on it. 11:55:58 11:55:58 :return: New socket connection. 11:55:58 """ 11:55:58 try: 11:55:58 sock = connection.create_connection( 11:55:58 (self._dns_host, self.port), 11:55:58 self.timeout, 11:55:58 source_address=self.source_address, 11:55:58 socket_options=self.socket_options, 11:55:58 ) 11:55:58 except socket.gaierror as e: 11:55:58 raise NameResolutionError(self.host, self, e) from e 11:55:58 except SocketTimeout as e: 11:55:58 raise ConnectTimeoutError( 11:55:58 self, 11:55:58 f"Connection to {self.host} timed out. 
(connect timeout={self.timeout})", 11:55:58 ) from e 11:55:58 11:55:58 except OSError as e: 11:55:58 > raise NewConnectionError( 11:55:58 self, f"Failed to establish a new connection: {e}" 11:55:58 ) from e 11:55:58 E urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused 11:55:58 11:55:58 ../.tox/tests_tapi/lib/python3.11/site-packages/urllib3/connection.py:213: NewConnectionError 11:55:58 11:55:58 The above exception was the direct cause of the following exception: 11:55:58 11:55:58 self = 11:55:58 request = , stream = False 11:55:58 timeout = Timeout(connect=30, read=30, total=None), verify = True, cert = None 11:55:58 proxies = OrderedDict() 11:55:58 11:55:58 def send( 11:55:58 self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 11:55:58 ): 11:55:58 """Sends PreparedRequest object. Returns Response object. 11:55:58 11:55:58 :param request: The :class:`PreparedRequest ` being sent. 11:55:58 :param stream: (optional) Whether to stream the request content. 11:55:58 :param timeout: (optional) How long to wait for the server to send 11:55:58 data before giving up, as a float, or a :ref:`(connect timeout, 11:55:58 read timeout) ` tuple. 11:55:58 :type timeout: float or tuple or urllib3 Timeout object 11:55:58 :param verify: (optional) Either a boolean, in which case it controls whether 11:55:58 we verify the server's TLS certificate, or a string, in which case it 11:55:58 must be a path to a CA bundle to use 11:55:58 :param cert: (optional) Any user-provided SSL certificate to be trusted. 11:55:58 :param proxies: (optional) The proxies dictionary to apply to the request. 11:55:58 :rtype: requests.Response 11:55:58 """ 11:55:58 11:55:58 try: 11:55:58 conn = self.get_connection_with_tls_context( 11:55:58 request, verify, proxies=proxies, cert=cert 11:55:58 ) 11:55:58 except LocationValueError as e: 11:55:58 raise InvalidURL(e, request=request) 11:55:58 11:55:58 self.cert_verify(conn, request.url, verify, cert) 11:55:58 url = self.request_url(request, proxies) 11:55:58 self.add_headers( 11:55:58 request, 11:55:58 stream=stream, 11:55:58 timeout=timeout, 11:55:58 verify=verify, 11:55:58 cert=cert, 11:55:58 proxies=proxies, 11:55:58 ) 11:55:58 11:55:58 chunked = not (request.body is None or "Content-Length" in request.headers) 11:55:58 11:55:58 if isinstance(timeout, tuple): 11:55:58 try: 11:55:58 connect, read = timeout 11:55:58 timeout = TimeoutSauce(connect=connect, read=read) 11:55:58 except ValueError: 11:55:58 raise ValueError( 11:55:58 f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 11:55:58 f"or a single float to set both timeouts to the same value." 
11:55:58 ) 11:55:58 elif isinstance(timeout, TimeoutSauce): 11:55:58 pass 11:55:58 else: 11:55:58 timeout = TimeoutSauce(connect=timeout, read=timeout) 11:55:58 11:55:58 try: 11:55:58 > resp = conn.urlopen( 11:55:58 method=request.method, 11:55:58 url=url, 11:55:58 body=request.body, 11:55:58 headers=request.headers, 11:55:58 redirect=False, 11:55:58 assert_same_host=False, 11:55:58 preload_content=False, 11:55:58 decode_content=False, 11:55:58 retries=self.max_retries, 11:55:58 timeout=timeout, 11:55:58 chunked=chunked, 11:55:58 ) 11:55:58 11:55:58 ../.tox/tests_tapi/lib/python3.11/site-packages/requests/adapters.py:667: 11:55:58 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 11:55:58 ../.tox/tests_tapi/lib/python3.11/site-packages/urllib3/connectionpool.py:841: in urlopen 11:55:58 retries = retries.increment( 11:55:58 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 11:55:58 11:55:58 self = Retry(total=0, connect=None, read=False, redirect=None, status=None) 11:55:58 method = 'PUT' 11:55:58 url = '/rests/data/network-topology:network-topology/topology=topology-netconf/node=ROADM-C1' 11:55:58 response = None 11:55:58 error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused') 11:55:58 _pool = 11:55:58 _stacktrace = 11:55:58 11:55:58 def increment( 11:55:58 self, 11:55:58 method: str | None = None, 11:55:58 url: str | None = None, 11:55:58 response: BaseHTTPResponse | None = None, 11:55:58 error: Exception | None = None, 11:55:58 _pool: ConnectionPool | None = None, 11:55:58 _stacktrace: TracebackType | None = None, 11:55:58 ) -> Self: 11:55:58 """Return a new Retry object with incremented retry counters. 11:55:58 11:55:58 :param response: A response object, or None, if the server did not 11:55:58 return a response. 11:55:58 :type response: :class:`~urllib3.response.BaseHTTPResponse` 11:55:58 :param Exception error: An error encountered during the request, or 11:55:58 None if the response was received successfully. 11:55:58 11:55:58 :return: A new ``Retry`` object. 11:55:58 """ 11:55:58 if self.total is False and error: 11:55:58 # Disabled, indicate to re-raise the error. 11:55:58 raise reraise(type(error), error, _stacktrace) 11:55:58 11:55:58 total = self.total 11:55:58 if total is not None: 11:55:58 total -= 1 11:55:58 11:55:58 connect = self.connect 11:55:58 read = self.read 11:55:58 redirect = self.redirect 11:55:58 status_count = self.status 11:55:58 other = self.other 11:55:58 cause = "unknown" 11:55:58 status = None 11:55:58 redirect_location = None 11:55:58 11:55:58 if error and self._is_connection_error(error): 11:55:58 # Connect retry? 11:55:58 if connect is False: 11:55:58 raise reraise(type(error), error, _stacktrace) 11:55:58 elif connect is not None: 11:55:58 connect -= 1 11:55:58 11:55:58 elif error and self._is_read_error(error): 11:55:58 # Read retry? 11:55:58 if read is False or method is None or not self._is_method_retryable(method): 11:55:58 raise reraise(type(error), error, _stacktrace) 11:55:58 elif read is not None: 11:55:58 read -= 1 11:55:58 11:55:58 elif error: 11:55:58 # Other retry? 11:55:58 if other is not None: 11:55:58 other -= 1 11:55:58 11:55:58 elif response and response.get_redirect_location(): 11:55:58 # Redirect retry? 
11:55:58 if redirect is not None: 11:55:58 redirect -= 1 11:55:58 cause = "too many redirects" 11:55:58 response_redirect_location = response.get_redirect_location() 11:55:58 if response_redirect_location: 11:55:58 redirect_location = response_redirect_location 11:55:58 status = response.status 11:55:58 11:55:58 else: 11:55:58 # Incrementing because of a server error like a 500 in 11:55:58 # status_forcelist and the given method is in the allowed_methods 11:55:58 cause = ResponseError.GENERIC_ERROR 11:55:58 if response and response.status: 11:55:58 if status_count is not None: 11:55:58 status_count -= 1 11:55:58 cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status) 11:55:58 status = response.status 11:55:58 11:55:58 history = self.history + ( 11:55:58 RequestHistory(method, url, error, status, redirect_location), 11:55:58 ) 11:55:58 11:55:58 new_retry = self.new( 11:55:58 total=total, 11:55:58 connect=connect, 11:55:58 read=read, 11:55:58 redirect=redirect, 11:55:58 status=status_count, 11:55:58 other=other, 11:55:58 history=history, 11:55:58 ) 11:55:58 11:55:58 if new_retry.is_exhausted(): 11:55:58 reason = error or ResponseError(cause) 11:55:58 > raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type] 11:55:58 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 11:55:58 E urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=8183): Max retries exceeded with url: /rests/data/network-topology:network-topology/topology=topology-netconf/node=ROADM-C1 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 11:55:58 11:55:58 ../.tox/tests_tapi/lib/python3.11/site-packages/urllib3/util/retry.py:519: MaxRetryError 11:55:58 11:55:58 During handling of the above exception, another exception occurred: 11:55:58 11:55:58 self = 11:55:58 11:55:58 def test_09_connect_rdmc(self): 11:55:58 > response = test_utils.mount_device("ROADM-C1", ('roadmc', self.NODE_VERSION)) 11:55:58 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 11:55:58 11:55:58 transportpce_tests/tapi/test01_abstracted_topology.py:244: 11:55:58 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 11:55:58 transportpce_tests/common/test_utils.py:362: in mount_device 11:55:58 response = put_request(url[RESTCONF_VERSION].format('{}', node), body) 11:55:58 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 11:55:58 transportpce_tests/common/test_utils.py:124: in put_request 11:55:58 return requests.request( 11:55:58 ../.tox/tests_tapi/lib/python3.11/site-packages/requests/api.py:59: in request 11:55:58 return session.request(method=method, url=url, **kwargs) 11:55:58 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 11:55:58 ../.tox/tests_tapi/lib/python3.11/site-packages/requests/sessions.py:589: in request 11:55:58 resp = self.send(prep, **send_kwargs) 11:55:58 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 11:55:58 ../.tox/tests_tapi/lib/python3.11/site-packages/requests/sessions.py:703: in send 11:55:58 r = adapter.send(request, **kwargs) 11:55:58 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 11:55:58 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 11:55:58 11:55:58 self = 11:55:58 request = , stream = False 11:55:58 timeout = Timeout(connect=30, read=30, total=None), verify = True, cert = None 11:55:58 proxies = OrderedDict() 11:55:58 11:55:58 def send( 11:55:58 self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 11:55:58 ): 11:55:58 """Sends 
PreparedRequest object. Returns Response object. 11:55:58 11:55:58 :param request: The :class:`PreparedRequest ` being sent. 11:55:58 :param stream: (optional) Whether to stream the request content. 11:55:58 :param timeout: (optional) How long to wait for the server to send 11:55:58 data before giving up, as a float, or a :ref:`(connect timeout, 11:55:58 read timeout) ` tuple. 11:55:58 :type timeout: float or tuple or urllib3 Timeout object 11:55:58 :param verify: (optional) Either a boolean, in which case it controls whether 11:55:58 we verify the server's TLS certificate, or a string, in which case it 11:55:58 must be a path to a CA bundle to use 11:55:58 :param cert: (optional) Any user-provided SSL certificate to be trusted. 11:55:58 :param proxies: (optional) The proxies dictionary to apply to the request. 11:55:58 :rtype: requests.Response 11:55:58 """ 11:55:58 11:55:58 try: 11:55:58 conn = self.get_connection_with_tls_context( 11:55:58 request, verify, proxies=proxies, cert=cert 11:55:58 ) 11:55:58 except LocationValueError as e: 11:55:58 raise InvalidURL(e, request=request) 11:55:58 11:55:58 self.cert_verify(conn, request.url, verify, cert) 11:55:58 url = self.request_url(request, proxies) 11:55:58 self.add_headers( 11:55:58 request, 11:55:58 stream=stream, 11:55:58 timeout=timeout, 11:55:58 verify=verify, 11:55:58 cert=cert, 11:55:58 proxies=proxies, 11:55:58 ) 11:55:58 11:55:58 chunked = not (request.body is None or "Content-Length" in request.headers) 11:55:58 11:55:58 if isinstance(timeout, tuple): 11:55:58 try: 11:55:58 connect, read = timeout 11:55:58 timeout = TimeoutSauce(connect=connect, read=read) 11:55:58 except ValueError: 11:55:58 raise ValueError( 11:55:58 f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 11:55:58 f"or a single float to set both timeouts to the same value." 11:55:58 ) 11:55:58 elif isinstance(timeout, TimeoutSauce): 11:55:58 pass 11:55:58 else: 11:55:58 timeout = TimeoutSauce(connect=timeout, read=timeout) 11:55:58 11:55:58 try: 11:55:58 resp = conn.urlopen( 11:55:58 method=request.method, 11:55:58 url=url, 11:55:58 body=request.body, 11:55:58 headers=request.headers, 11:55:58 redirect=False, 11:55:58 assert_same_host=False, 11:55:58 preload_content=False, 11:55:58 decode_content=False, 11:55:58 retries=self.max_retries, 11:55:58 timeout=timeout, 11:55:58 chunked=chunked, 11:55:58 ) 11:55:58 11:55:58 except (ProtocolError, OSError) as err: 11:55:58 raise ConnectionError(err, request=request) 11:55:58 11:55:58 except MaxRetryError as e: 11:55:58 if isinstance(e.reason, ConnectTimeoutError): 11:55:58 # TODO: Remove this in 3.0.0: see #2811 11:55:58 if not isinstance(e.reason, NewConnectionError): 11:55:58 raise ConnectTimeout(e, request=request) 11:55:58 11:55:58 if isinstance(e.reason, ResponseError): 11:55:58 raise RetryError(e, request=request) 11:55:58 11:55:58 if isinstance(e.reason, _ProxyError): 11:55:58 raise ProxyError(e, request=request) 11:55:58 11:55:58 if isinstance(e.reason, _SSLError): 11:55:58 # This branch is for urllib3 v1.22 and later. 
11:55:58 raise SSLError(e, request=request) 11:55:58 11:55:58 > raise ConnectionError(e, request=request) 11:55:58 E requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=8183): Max retries exceeded with url: /rests/data/network-topology:network-topology/topology=topology-netconf/node=ROADM-C1 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 11:55:58 11:55:58 ../.tox/tests_tapi/lib/python3.11/site-packages/requests/adapters.py:700: ConnectionError 11:55:58 ----------------------------- Captured stdout call ----------------------------- 11:55:58 execution of test_09_connect_rdmc 11:55:58 ________________ TransportTapitesting.test_10_check_tapi_topos _________________ 11:55:58 11:55:58 self = 11:55:58 11:55:58 def _new_conn(self) -> socket.socket: 11:55:58 """Establish a socket connection and set nodelay settings on it. 11:55:58 11:55:58 :return: New socket connection. 11:55:58 """ 11:55:58 try: 11:55:58 > sock = connection.create_connection( 11:55:58 (self._dns_host, self.port), 11:55:58 self.timeout, 11:55:58 source_address=self.source_address, 11:55:58 socket_options=self.socket_options, 11:55:58 ) 11:55:58 11:55:58 ../.tox/tests_tapi/lib/python3.11/site-packages/urllib3/connection.py:198: 11:55:58 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 11:55:58 ../.tox/tests_tapi/lib/python3.11/site-packages/urllib3/util/connection.py:85: in create_connection 11:55:58 raise err 11:55:58 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 11:55:58 11:55:58 address = ('localhost', 8183), timeout = 30, source_address = None 11:55:58 socket_options = [(6, 1, 1)] 11:55:58 11:55:58 def create_connection( 11:55:58 address: tuple[str, int], 11:55:58 timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 11:55:58 source_address: tuple[str, int] | None = None, 11:55:58 socket_options: _TYPE_SOCKET_OPTIONS | None = None, 11:55:58 ) -> socket.socket: 11:55:58 """Connect to *address* and return the socket object. 11:55:58 11:55:58 Convenience function. Connect to *address* (a 2-tuple ``(host, 11:55:58 port)``) and return the socket object. Passing the optional 11:55:58 *timeout* parameter will set the timeout on the socket instance 11:55:58 before attempting to connect. If no *timeout* is supplied, the 11:55:58 global default timeout setting returned by :func:`socket.getdefaulttimeout` 11:55:58 is used. If *source_address* is set it must be a tuple of (host, port) 11:55:58 for the socket to bind as a source address before making the connection. 11:55:58 An host of '' or port 0 tells the OS to use the default. 11:55:58 """ 11:55:58 11:55:58 host, port = address 11:55:58 if host.startswith("["): 11:55:58 host = host.strip("[]") 11:55:58 err = None 11:55:58 11:55:58 # Using the value from allowed_gai_family() in the context of getaddrinfo lets 11:55:58 # us select whether to work with IPv4 DNS records, IPv6 records, or both. 11:55:58 # The original create_connection function always returns all records. 
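Every failure in this run follows the same pattern: create_connection() reaches sock.connect() and gets [Errno 111] Connection refused for localhost:8183, i.e. no process is listening on the RESTCONF port the tests expect. A standalone probe of that port, as a small sketch (not part of the test suite):

    import socket

    # Probe the endpoint every failing request targets. ECONNREFUSED means
    # nothing is listening, i.e. the controller expected on localhost:8183
    # is not running when these tests execute.
    def restconf_listening(host="localhost", port=8183, timeout=2.0):
        try:
            with socket.create_connection((host, port), timeout=timeout):
                return True
        except OSError:
            return False

    print(restconf_listening())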
11:55:58 family = allowed_gai_family() 11:55:58 11:55:58 try: 11:55:58 host.encode("idna") 11:55:58 except UnicodeError: 11:55:58 raise LocationParseError(f"'{host}', label empty or too long") from None 11:55:58 11:55:58 for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM): 11:55:58 af, socktype, proto, canonname, sa = res 11:55:58 sock = None 11:55:58 try: 11:55:58 sock = socket.socket(af, socktype, proto) 11:55:58 11:55:58 # If provided, set socket level options before connecting. 11:55:58 _set_socket_options(sock, socket_options) 11:55:58 11:55:58 if timeout is not _DEFAULT_TIMEOUT: 11:55:58 sock.settimeout(timeout) 11:55:58 if source_address: 11:55:58 sock.bind(source_address) 11:55:58 > sock.connect(sa) 11:55:58 E ConnectionRefusedError: [Errno 111] Connection refused 11:55:58 11:55:58 ../.tox/tests_tapi/lib/python3.11/site-packages/urllib3/util/connection.py:73: ConnectionRefusedError 11:55:58 11:55:58 The above exception was the direct cause of the following exception: 11:55:58 11:55:58 self = 11:55:58 method = 'POST', url = '/rests/operations/tapi-topology:get-topology-details' 11:55:58 body = '{"input": {"topology-id": "cf51c729-3699-308a-a7d0-594c6a62ebbb"}}' 11:55:58 headers = {'User-Agent': 'python-requests/2.32.4', 'Accept-Encoding': 'gzip, deflate', 'Accept': 'application/json', 'Connection': 'keep-alive', 'Content-Type': 'application/json', 'Content-Length': '66', 'Authorization': 'Basic YWRtaW46YWRtaW4='} 11:55:58 retries = Retry(total=0, connect=None, read=False, redirect=None, status=None) 11:55:58 redirect = False, assert_same_host = False 11:55:58 timeout = Timeout(connect=30, read=30, total=None), pool_timeout = None 11:55:58 release_conn = False, chunked = False, body_pos = None, preload_content = False 11:55:58 decode_content = False, response_kw = {} 11:55:58 parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/rests/operations/tapi-topology:get-topology-details', query=None, fragment=None) 11:55:58 destination_scheme = None, conn = None, release_this_conn = True 11:55:58 http_tunnel_required = False, err = None, clean_exit = False 11:55:58 11:55:58 def urlopen( # type: ignore[override] 11:55:58 self, 11:55:58 method: str, 11:55:58 url: str, 11:55:58 body: _TYPE_BODY | None = None, 11:55:58 headers: typing.Mapping[str, str] | None = None, 11:55:58 retries: Retry | bool | int | None = None, 11:55:58 redirect: bool = True, 11:55:58 assert_same_host: bool = True, 11:55:58 timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 11:55:58 pool_timeout: int | None = None, 11:55:58 release_conn: bool | None = None, 11:55:58 chunked: bool = False, 11:55:58 body_pos: _TYPE_BODY_POSITION | None = None, 11:55:58 preload_content: bool = True, 11:55:58 decode_content: bool = True, 11:55:58 **response_kw: typing.Any, 11:55:58 ) -> BaseHTTPResponse: 11:55:58 """ 11:55:58 Get a connection from the pool and perform an HTTP request. This is the 11:55:58 lowest level call for making a request, so you'll need to specify all 11:55:58 the raw details. 11:55:58 11:55:58 .. note:: 11:55:58 11:55:58 More commonly, it's appropriate to use a convenience method 11:55:58 such as :meth:`request`. 11:55:58 11:55:58 .. note:: 11:55:58 11:55:58 `release_conn` will only behave as expected if 11:55:58 `preload_content=False` because we want to make 11:55:58 `preload_content=False` the default behaviour someday soon without 11:55:58 breaking backwards compatibility. 11:55:58 11:55:58 :param method: 11:55:58 HTTP request method (such as GET, POST, PUT, etc.) 
11:55:58 11:55:58 :param url: 11:55:58 The URL to perform the request on. 11:55:58 11:55:58 :param body: 11:55:58 Data to send in the request body, either :class:`str`, :class:`bytes`, 11:55:58 an iterable of :class:`str`/:class:`bytes`, or a file-like object. 11:55:58 11:55:58 :param headers: 11:55:58 Dictionary of custom headers to send, such as User-Agent, 11:55:58 If-None-Match, etc. If None, pool headers are used. If provided, 11:55:58 these headers completely replace any pool-specific headers. 11:55:58 11:55:58 :param retries: 11:55:58 Configure the number of retries to allow before raising a 11:55:58 :class:`~urllib3.exceptions.MaxRetryError` exception. 11:55:58 11:55:58 If ``None`` (default) will retry 3 times, see ``Retry.DEFAULT``. Pass a 11:55:58 :class:`~urllib3.util.retry.Retry` object for fine-grained control 11:55:58 over different types of retries. 11:55:58 Pass an integer number to retry connection errors that many times, 11:55:58 but no other types of errors. Pass zero to never retry. 11:55:58 11:55:58 If ``False``, then retries are disabled and any exception is raised 11:55:58 immediately. Also, instead of raising a MaxRetryError on redirects, 11:55:58 the redirect response will be returned. 11:55:58 11:55:58 :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. 11:55:58 11:55:58 :param redirect: 11:55:58 If True, automatically handle redirects (status codes 301, 302, 11:55:58 303, 307, 308). Each redirect counts as a retry. Disabling retries 11:55:58 will disable redirect, too. 11:55:58 11:55:58 :param assert_same_host: 11:55:58 If ``True``, will make sure that the host of the pool requests is 11:55:58 consistent else will raise HostChangedError. When ``False``, you can 11:55:58 use the pool on an HTTP proxy and request foreign hosts. 11:55:58 11:55:58 :param timeout: 11:55:58 If specified, overrides the default timeout for this one 11:55:58 request. It may be a float (in seconds) or an instance of 11:55:58 :class:`urllib3.util.Timeout`. 11:55:58 11:55:58 :param pool_timeout: 11:55:58 If set and the pool is set to block=True, then this method will 11:55:58 block for ``pool_timeout`` seconds and raise EmptyPoolError if no 11:55:58 connection is available within the time period. 11:55:58 11:55:58 :param bool preload_content: 11:55:58 If True, the response's body will be preloaded into memory. 11:55:58 11:55:58 :param bool decode_content: 11:55:58 If True, will attempt to decode the body based on the 11:55:58 'content-encoding' header. 11:55:58 11:55:58 :param release_conn: 11:55:58 If False, then the urlopen call will not release the connection 11:55:58 back into the pool once a response is received (but will release if 11:55:58 you read the entire contents of the response such as when 11:55:58 `preload_content=True`). This is useful if you're not preloading 11:55:58 the response's content immediately. You will need to call 11:55:58 ``r.release_conn()`` on the response ``r`` to return the connection 11:55:58 back into the pool. If None, it takes the value of ``preload_content`` 11:55:58 which defaults to ``True``. 11:55:58 11:55:58 :param bool chunked: 11:55:58 If True, urllib3 will send the body using chunked transfer 11:55:58 encoding. Otherwise, urllib3 will send the body using the standard 11:55:58 content-length form. Defaults to False. 11:55:58 11:55:58 :param int body_pos: 11:55:58 Position to seek to in file-like body in the event of a retry or 11:55:58 redirect. 
Typically this won't need to be set because urllib3 will 11:55:58 auto-populate the value when needed. 11:55:58 """ 11:55:58 parsed_url = parse_url(url) 11:55:58 destination_scheme = parsed_url.scheme 11:55:58 11:55:58 if headers is None: 11:55:58 headers = self.headers 11:55:58 11:55:58 if not isinstance(retries, Retry): 11:55:58 retries = Retry.from_int(retries, redirect=redirect, default=self.retries) 11:55:58 11:55:58 if release_conn is None: 11:55:58 release_conn = preload_content 11:55:58 11:55:58 # Check host 11:55:58 if assert_same_host and not self.is_same_host(url): 11:55:58 raise HostChangedError(self, url, retries) 11:55:58 11:55:58 # Ensure that the URL we're connecting to is properly encoded 11:55:58 if url.startswith("/"): 11:55:58 url = to_str(_encode_target(url)) 11:55:58 else: 11:55:58 url = to_str(parsed_url.url) 11:55:58 11:55:58 conn = None 11:55:58 11:55:58 # Track whether `conn` needs to be released before 11:55:58 # returning/raising/recursing. Update this variable if necessary, and 11:55:58 # leave `release_conn` constant throughout the function. That way, if 11:55:58 # the function recurses, the original value of `release_conn` will be 11:55:58 # passed down into the recursive call, and its value will be respected. 11:55:58 # 11:55:58 # See issue #651 [1] for details. 11:55:58 # 11:55:58 # [1] 11:55:58 release_this_conn = release_conn 11:55:58 11:55:58 http_tunnel_required = connection_requires_http_tunnel( 11:55:58 self.proxy, self.proxy_config, destination_scheme 11:55:58 ) 11:55:58 11:55:58 # Merge the proxy headers. Only done when not using HTTP CONNECT. We 11:55:58 # have to copy the headers dict so we can safely change it without those 11:55:58 # changes being reflected in anyone else's copy. 11:55:58 if not http_tunnel_required: 11:55:58 headers = headers.copy() # type: ignore[attr-defined] 11:55:58 headers.update(self.proxy_headers) # type: ignore[union-attr] 11:55:58 11:55:58 # Must keep the exception bound to a separate variable or else Python 3 11:55:58 # complains about UnboundLocalError. 11:55:58 err = None 11:55:58 11:55:58 # Keep track of whether we cleanly exited the except block. This 11:55:58 # ensures we do proper cleanup in finally. 11:55:58 clean_exit = False 11:55:58 11:55:58 # Rewind body position, if needed. Record current position 11:55:58 # for future rewinds in the event of a redirect/retry. 11:55:58 body_pos = set_file_position(body, body_pos) 11:55:58 11:55:58 try: 11:55:58 # Request a connection from the queue. 11:55:58 timeout_obj = self._get_timeout(timeout) 11:55:58 conn = self._get_conn(timeout=pool_timeout) 11:55:58 11:55:58 conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] 11:55:58 11:55:58 # Is this a closed/new connection that requires CONNECT tunnelling? 11:55:58 if self.proxy is not None and http_tunnel_required and conn.is_closed: 11:55:58 try: 11:55:58 self._prepare_proxy(conn) 11:55:58 except (BaseSSLError, OSError, SocketTimeout) as e: 11:55:58 self._raise_timeout( 11:55:58 err=e, url=self.proxy.url, timeout_value=conn.timeout 11:55:58 ) 11:55:58 raise 11:55:58 11:55:58 # If we're going to release the connection in ``finally:``, then 11:55:58 # the response doesn't need to know about the connection. Otherwise 11:55:58 # it will also try to release it and we'll have a double-release 11:55:58 # mess. 
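The locals captured for this frame show exactly which RPC test_10_check_tapi_topos sends through test_utils.transportpce_api_rpc_request/post_request: a POST to /rests/operations/tapi-topology:get-topology-details with the topology-id body, Basic admin:admin credentials and a 30s connect/read timeout. A sketch reproducing that call outside the suite (the base URL is assembled from the captured host/port; the helper's exact wrapping is not shown in this log):

    import requests

    # With nothing listening on localhost:8183 this raises
    # requests.exceptions.ConnectionError, matching the test output.
    try:
        response = requests.post(
            "http://localhost:8183/rests/operations/tapi-topology:get-topology-details",
            json={"input": {"topology-id": "cf51c729-3699-308a-a7d0-594c6a62ebbb"}},
            auth=("admin", "admin"),            # matches the captured Basic auth header
            headers={"Content-Type": "application/json", "Accept": "application/json"},
            timeout=(30, 30),                   # matches Timeout(connect=30, read=30)
        )
        print(response.status_code, response.text)
    except requests.exceptions.ConnectionError as exc:
        print("controller unreachable:", exc)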
11:55:58 response_conn = conn if not release_conn else None 11:55:58 11:55:58 # Make the request on the HTTPConnection object 11:55:58 > response = self._make_request( 11:55:58 conn, 11:55:58 method, 11:55:58 url, 11:55:58 timeout=timeout_obj, 11:55:58 body=body, 11:55:58 headers=headers, 11:55:58 chunked=chunked, 11:55:58 retries=retries, 11:55:58 response_conn=response_conn, 11:55:58 preload_content=preload_content, 11:55:58 decode_content=decode_content, 11:55:58 **response_kw, 11:55:58 ) 11:55:58 11:55:58 ../.tox/tests_tapi/lib/python3.11/site-packages/urllib3/connectionpool.py:787: 11:55:58 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 11:55:58 ../.tox/tests_tapi/lib/python3.11/site-packages/urllib3/connectionpool.py:493: in _make_request 11:55:58 conn.request( 11:55:58 ../.tox/tests_tapi/lib/python3.11/site-packages/urllib3/connection.py:494: in request 11:55:58 self.endheaders() 11:55:58 /opt/pyenv/versions/3.11.10/lib/python3.11/http/client.py:1298: in endheaders 11:55:58 self._send_output(message_body, encode_chunked=encode_chunked) 11:55:58 /opt/pyenv/versions/3.11.10/lib/python3.11/http/client.py:1058: in _send_output 11:55:58 self.send(msg) 11:55:58 /opt/pyenv/versions/3.11.10/lib/python3.11/http/client.py:996: in send 11:55:58 self.connect() 11:55:58 ../.tox/tests_tapi/lib/python3.11/site-packages/urllib3/connection.py:325: in connect 11:55:58 self.sock = self._new_conn() 11:55:58 ^^^^^^^^^^^^^^^^ 11:55:58 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 11:55:58 11:55:58 self = 11:55:58 11:55:58 def _new_conn(self) -> socket.socket: 11:55:58 """Establish a socket connection and set nodelay settings on it. 11:55:58 11:55:58 :return: New socket connection. 11:55:58 """ 11:55:58 try: 11:55:58 sock = connection.create_connection( 11:55:58 (self._dns_host, self.port), 11:55:58 self.timeout, 11:55:58 source_address=self.source_address, 11:55:58 socket_options=self.socket_options, 11:55:58 ) 11:55:58 except socket.gaierror as e: 11:55:58 raise NameResolutionError(self.host, self, e) from e 11:55:58 except SocketTimeout as e: 11:55:58 raise ConnectTimeoutError( 11:55:58 self, 11:55:58 f"Connection to {self.host} timed out. (connect timeout={self.timeout})", 11:55:58 ) from e 11:55:58 11:55:58 except OSError as e: 11:55:58 > raise NewConnectionError( 11:55:58 self, f"Failed to establish a new connection: {e}" 11:55:58 ) from e 11:55:58 E urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused 11:55:58 11:55:58 ../.tox/tests_tapi/lib/python3.11/site-packages/urllib3/connection.py:213: NewConnectionError 11:55:58 11:55:58 The above exception was the direct cause of the following exception: 11:55:58 11:55:58 self = 11:55:58 request = , stream = False 11:55:58 timeout = Timeout(connect=30, read=30, total=None), verify = True, cert = None 11:55:58 proxies = OrderedDict() 11:55:58 11:55:58 def send( 11:55:58 self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 11:55:58 ): 11:55:58 """Sends PreparedRequest object. Returns Response object. 11:55:58 11:55:58 :param request: The :class:`PreparedRequest ` being sent. 11:55:58 :param stream: (optional) Whether to stream the request content. 11:55:58 :param timeout: (optional) How long to wait for the server to send 11:55:58 data before giving up, as a float, or a :ref:`(connect timeout, 11:55:58 read timeout) ` tuple. 
11:55:58 :type timeout: float or tuple or urllib3 Timeout object 11:55:58 :param verify: (optional) Either a boolean, in which case it controls whether 11:55:58 we verify the server's TLS certificate, or a string, in which case it 11:55:58 must be a path to a CA bundle to use 11:55:58 :param cert: (optional) Any user-provided SSL certificate to be trusted. 11:55:58 :param proxies: (optional) The proxies dictionary to apply to the request. 11:55:58 :rtype: requests.Response 11:55:58 """ 11:55:58 11:55:58 try: 11:55:58 conn = self.get_connection_with_tls_context( 11:55:58 request, verify, proxies=proxies, cert=cert 11:55:58 ) 11:55:58 except LocationValueError as e: 11:55:58 raise InvalidURL(e, request=request) 11:55:58 11:55:58 self.cert_verify(conn, request.url, verify, cert) 11:55:58 url = self.request_url(request, proxies) 11:55:58 self.add_headers( 11:55:58 request, 11:55:58 stream=stream, 11:55:58 timeout=timeout, 11:55:58 verify=verify, 11:55:58 cert=cert, 11:55:58 proxies=proxies, 11:55:58 ) 11:55:58 11:55:58 chunked = not (request.body is None or "Content-Length" in request.headers) 11:55:58 11:55:58 if isinstance(timeout, tuple): 11:55:58 try: 11:55:58 connect, read = timeout 11:55:58 timeout = TimeoutSauce(connect=connect, read=read) 11:55:58 except ValueError: 11:55:58 raise ValueError( 11:55:58 f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 11:55:58 f"or a single float to set both timeouts to the same value." 11:55:58 ) 11:55:58 elif isinstance(timeout, TimeoutSauce): 11:55:58 pass 11:55:58 else: 11:55:58 timeout = TimeoutSauce(connect=timeout, read=timeout) 11:55:58 11:55:58 try: 11:55:58 > resp = conn.urlopen( 11:55:58 method=request.method, 11:55:58 url=url, 11:55:58 body=request.body, 11:55:58 headers=request.headers, 11:55:58 redirect=False, 11:55:58 assert_same_host=False, 11:55:58 preload_content=False, 11:55:58 decode_content=False, 11:55:58 retries=self.max_retries, 11:55:58 timeout=timeout, 11:55:58 chunked=chunked, 11:55:58 ) 11:55:58 11:55:58 ../.tox/tests_tapi/lib/python3.11/site-packages/requests/adapters.py:667: 11:55:58 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 11:55:58 ../.tox/tests_tapi/lib/python3.11/site-packages/urllib3/connectionpool.py:841: in urlopen 11:55:58 retries = retries.increment( 11:55:58 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 11:55:58 11:55:58 self = Retry(total=0, connect=None, read=False, redirect=None, status=None) 11:55:58 method = 'POST', url = '/rests/operations/tapi-topology:get-topology-details' 11:55:58 response = None 11:55:58 error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused') 11:55:58 _pool = 11:55:58 _stacktrace = 11:55:58 11:55:58 def increment( 11:55:58 self, 11:55:58 method: str | None = None, 11:55:58 url: str | None = None, 11:55:58 response: BaseHTTPResponse | None = None, 11:55:58 error: Exception | None = None, 11:55:58 _pool: ConnectionPool | None = None, 11:55:58 _stacktrace: TracebackType | None = None, 11:55:58 ) -> Self: 11:55:58 """Return a new Retry object with incremented retry counters. 11:55:58 11:55:58 :param response: A response object, or None, if the server did not 11:55:58 return a response. 11:55:58 :type response: :class:`~urllib3.response.BaseHTTPResponse` 11:55:58 :param Exception error: An error encountered during the request, or 11:55:58 None if the response was received successfully. 11:55:58 11:55:58 :return: A new ``Retry`` object. 
11:55:58 """ 11:55:58 if self.total is False and error: 11:55:58 # Disabled, indicate to re-raise the error. 11:55:58 raise reraise(type(error), error, _stacktrace) 11:55:58 11:55:58 total = self.total 11:55:58 if total is not None: 11:55:58 total -= 1 11:55:58 11:55:58 connect = self.connect 11:55:58 read = self.read 11:55:58 redirect = self.redirect 11:55:58 status_count = self.status 11:55:58 other = self.other 11:55:58 cause = "unknown" 11:55:58 status = None 11:55:58 redirect_location = None 11:55:58 11:55:58 if error and self._is_connection_error(error): 11:55:58 # Connect retry? 11:55:58 if connect is False: 11:55:58 raise reraise(type(error), error, _stacktrace) 11:55:58 elif connect is not None: 11:55:58 connect -= 1 11:55:58 11:55:58 elif error and self._is_read_error(error): 11:55:58 # Read retry? 11:55:58 if read is False or method is None or not self._is_method_retryable(method): 11:55:58 raise reraise(type(error), error, _stacktrace) 11:55:58 elif read is not None: 11:55:58 read -= 1 11:55:58 11:55:58 elif error: 11:55:58 # Other retry? 11:55:58 if other is not None: 11:55:58 other -= 1 11:55:58 11:55:58 elif response and response.get_redirect_location(): 11:55:58 # Redirect retry? 11:55:58 if redirect is not None: 11:55:58 redirect -= 1 11:55:58 cause = "too many redirects" 11:55:58 response_redirect_location = response.get_redirect_location() 11:55:58 if response_redirect_location: 11:55:58 redirect_location = response_redirect_location 11:55:58 status = response.status 11:55:58 11:55:58 else: 11:55:58 # Incrementing because of a server error like a 500 in 11:55:58 # status_forcelist and the given method is in the allowed_methods 11:55:58 cause = ResponseError.GENERIC_ERROR 11:55:58 if response and response.status: 11:55:58 if status_count is not None: 11:55:58 status_count -= 1 11:55:58 cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status) 11:55:58 status = response.status 11:55:58 11:55:58 history = self.history + ( 11:55:58 RequestHistory(method, url, error, status, redirect_location), 11:55:58 ) 11:55:58 11:55:58 new_retry = self.new( 11:55:58 total=total, 11:55:58 connect=connect, 11:55:58 read=read, 11:55:58 redirect=redirect, 11:55:58 status=status_count, 11:55:58 other=other, 11:55:58 history=history, 11:55:58 ) 11:55:58 11:55:58 if new_retry.is_exhausted(): 11:55:58 reason = error or ResponseError(cause) 11:55:58 > raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type] 11:55:58 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 11:55:58 E urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=8183): Max retries exceeded with url: /rests/operations/tapi-topology:get-topology-details (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 11:55:58 11:55:58 ../.tox/tests_tapi/lib/python3.11/site-packages/urllib3/util/retry.py:519: MaxRetryError 11:55:58 11:55:58 During handling of the above exception, another exception occurred: 11:55:58 11:55:58 self = 11:55:58 11:55:58 def test_10_check_tapi_topos(self): 11:55:58 > self.test_01_get_tapi_topology_T100G() 11:55:58 11:55:58 transportpce_tests/tapi/test01_abstracted_topology.py:248: 11:55:58 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 11:55:58 transportpce_tests/tapi/test01_abstracted_topology.py:182: in test_01_get_tapi_topology_T100G 11:55:58 response = test_utils.transportpce_api_rpc_request( 11:55:58 transportpce_tests/common/test_utils.py:729: in 
transportpce_api_rpc_request 11:55:58 response = post_request(url, data) 11:55:58 ^^^^^^^^^^^^^^^^^^^^^^^ 11:55:58 transportpce_tests/common/test_utils.py:142: in post_request 11:55:58 return requests.request( 11:55:58 ../.tox/tests_tapi/lib/python3.11/site-packages/requests/api.py:59: in request 11:55:58 return session.request(method=method, url=url, **kwargs) 11:55:58 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 11:55:58 ../.tox/tests_tapi/lib/python3.11/site-packages/requests/sessions.py:589: in request 11:55:58 resp = self.send(prep, **send_kwargs) 11:55:58 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 11:55:58 ../.tox/tests_tapi/lib/python3.11/site-packages/requests/sessions.py:703: in send 11:55:58 r = adapter.send(request, **kwargs) 11:55:58 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 11:55:58 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 11:55:58 11:55:58 self = 11:55:58 request = , stream = False 11:55:58 timeout = Timeout(connect=30, read=30, total=None), verify = True, cert = None 11:55:58 proxies = OrderedDict() 11:55:58 11:55:58 def send( 11:55:58 self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 11:55:58 ): 11:55:58 """Sends PreparedRequest object. Returns Response object. 11:55:58 11:55:58 :param request: The :class:`PreparedRequest ` being sent. 11:55:58 :param stream: (optional) Whether to stream the request content. 11:55:58 :param timeout: (optional) How long to wait for the server to send 11:55:58 data before giving up, as a float, or a :ref:`(connect timeout, 11:55:58 read timeout) ` tuple. 11:55:58 :type timeout: float or tuple or urllib3 Timeout object 11:55:58 :param verify: (optional) Either a boolean, in which case it controls whether 11:55:58 we verify the server's TLS certificate, or a string, in which case it 11:55:58 must be a path to a CA bundle to use 11:55:58 :param cert: (optional) Any user-provided SSL certificate to be trusted. 11:55:58 :param proxies: (optional) The proxies dictionary to apply to the request. 11:55:58 :rtype: requests.Response 11:55:58 """ 11:55:58 11:55:58 try: 11:55:58 conn = self.get_connection_with_tls_context( 11:55:58 request, verify, proxies=proxies, cert=cert 11:55:58 ) 11:55:58 except LocationValueError as e: 11:55:58 raise InvalidURL(e, request=request) 11:55:58 11:55:58 self.cert_verify(conn, request.url, verify, cert) 11:55:58 url = self.request_url(request, proxies) 11:55:58 self.add_headers( 11:55:58 request, 11:55:58 stream=stream, 11:55:58 timeout=timeout, 11:55:58 verify=verify, 11:55:58 cert=cert, 11:55:58 proxies=proxies, 11:55:58 ) 11:55:58 11:55:58 chunked = not (request.body is None or "Content-Length" in request.headers) 11:55:58 11:55:58 if isinstance(timeout, tuple): 11:55:58 try: 11:55:58 connect, read = timeout 11:55:58 timeout = TimeoutSauce(connect=connect, read=read) 11:55:58 except ValueError: 11:55:58 raise ValueError( 11:55:58 f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 11:55:58 f"or a single float to set both timeouts to the same value." 
11:55:58 ) 11:55:58 elif isinstance(timeout, TimeoutSauce): 11:55:58 pass 11:55:58 else: 11:55:58 timeout = TimeoutSauce(connect=timeout, read=timeout) 11:55:58 11:55:58 try: 11:55:58 resp = conn.urlopen( 11:55:58 method=request.method, 11:55:58 url=url, 11:55:58 body=request.body, 11:55:58 headers=request.headers, 11:55:58 redirect=False, 11:55:58 assert_same_host=False, 11:55:58 preload_content=False, 11:55:58 decode_content=False, 11:55:58 retries=self.max_retries, 11:55:58 timeout=timeout, 11:55:58 chunked=chunked, 11:55:58 ) 11:55:58 11:55:58 except (ProtocolError, OSError) as err: 11:55:58 raise ConnectionError(err, request=request) 11:55:58 11:55:58 except MaxRetryError as e: 11:55:58 if isinstance(e.reason, ConnectTimeoutError): 11:55:58 # TODO: Remove this in 3.0.0: see #2811 11:55:58 if not isinstance(e.reason, NewConnectionError): 11:55:58 raise ConnectTimeout(e, request=request) 11:55:58 11:55:58 if isinstance(e.reason, ResponseError): 11:55:58 raise RetryError(e, request=request) 11:55:58 11:55:58 if isinstance(e.reason, _ProxyError): 11:55:58 raise ProxyError(e, request=request) 11:55:58 11:55:58 if isinstance(e.reason, _SSLError): 11:55:58 # This branch is for urllib3 v1.22 and later. 11:55:58 raise SSLError(e, request=request) 11:55:58 11:55:58 > raise ConnectionError(e, request=request) 11:55:58 E requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=8183): Max retries exceeded with url: /rests/operations/tapi-topology:get-topology-details (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 11:55:58 11:55:58 ../.tox/tests_tapi/lib/python3.11/site-packages/requests/adapters.py:700: ConnectionError 11:55:58 ----------------------------- Captured stdout call ----------------------------- 11:55:58 execution of test_10_check_tapi_topos 11:55:58 _________ TransportTapitesting.test_11_connect_xpdra_n1_to_roadma_pp1 __________ 11:55:58 11:55:58 self = 11:55:58 11:55:58 def _new_conn(self) -> socket.socket: 11:55:58 """Establish a socket connection and set nodelay settings on it. 11:55:58 11:55:58 :return: New socket connection. 11:55:58 """ 11:55:58 try: 11:55:58 > sock = connection.create_connection( 11:55:58 (self._dns_host, self.port), 11:55:58 self.timeout, 11:55:58 source_address=self.source_address, 11:55:58 socket_options=self.socket_options, 11:55:58 ) 11:55:58 11:55:58 ../.tox/tests_tapi/lib/python3.11/site-packages/urllib3/connection.py:198: 11:55:58 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 11:55:58 ../.tox/tests_tapi/lib/python3.11/site-packages/urllib3/util/connection.py:85: in create_connection 11:55:58 raise err 11:55:58 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 11:55:58 11:55:58 address = ('localhost', 8183), timeout = 30, source_address = None 11:55:58 socket_options = [(6, 1, 1)] 11:55:58 11:55:58 def create_connection( 11:55:58 address: tuple[str, int], 11:55:58 timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 11:55:58 source_address: tuple[str, int] | None = None, 11:55:58 socket_options: _TYPE_SOCKET_OPTIONS | None = None, 11:55:58 ) -> socket.socket: 11:55:58 """Connect to *address* and return the socket object. 11:55:58 11:55:58 Convenience function. Connect to *address* (a 2-tuple ``(host, 11:55:58 port)``) and return the socket object. Passing the optional 11:55:58 *timeout* parameter will set the timeout on the socket instance 11:55:58 before attempting to connect. 
If no *timeout* is supplied, the 11:55:58 global default timeout setting returned by :func:`socket.getdefaulttimeout` 11:55:58 is used. If *source_address* is set it must be a tuple of (host, port) 11:55:58 for the socket to bind as a source address before making the connection. 11:55:58 An host of '' or port 0 tells the OS to use the default. 11:55:58 """ 11:55:58 11:55:58 host, port = address 11:55:58 if host.startswith("["): 11:55:58 host = host.strip("[]") 11:55:58 err = None 11:55:58 11:55:58 # Using the value from allowed_gai_family() in the context of getaddrinfo lets 11:55:58 # us select whether to work with IPv4 DNS records, IPv6 records, or both. 11:55:58 # The original create_connection function always returns all records. 11:55:58 family = allowed_gai_family() 11:55:58 11:55:58 try: 11:55:58 host.encode("idna") 11:55:58 except UnicodeError: 11:55:58 raise LocationParseError(f"'{host}', label empty or too long") from None 11:55:58 11:55:58 for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM): 11:55:58 af, socktype, proto, canonname, sa = res 11:55:58 sock = None 11:55:58 try: 11:55:58 sock = socket.socket(af, socktype, proto) 11:55:58 11:55:58 # If provided, set socket level options before connecting. 11:55:58 _set_socket_options(sock, socket_options) 11:55:58 11:55:58 if timeout is not _DEFAULT_TIMEOUT: 11:55:58 sock.settimeout(timeout) 11:55:58 if source_address: 11:55:58 sock.bind(source_address) 11:55:58 > sock.connect(sa) 11:55:58 E ConnectionRefusedError: [Errno 111] Connection refused 11:55:58 11:55:58 ../.tox/tests_tapi/lib/python3.11/site-packages/urllib3/util/connection.py:73: ConnectionRefusedError 11:55:58 11:55:58 The above exception was the direct cause of the following exception: 11:55:58 11:55:58 self = 11:55:58 method = 'POST' 11:55:58 url = '/rests/operations/transportpce-networkutils:init-xpdr-rdm-links' 11:55:58 body = '{"input": {"links-input": {"xpdr-node": "XPDR-A1", "xpdr-num": "1", "network-num": "1", "rdm-node": "ROADM-A1", "srg-num": "1", "termination-point-num": "SRG1-PP1-TXRX"}}}' 11:55:58 headers = {'User-Agent': 'python-requests/2.32.4', 'Accept-Encoding': 'gzip, deflate', 'Accept': 'application/json', 'Connection': 'keep-alive', 'Content-Type': 'application/json', 'Content-Length': '171', 'Authorization': 'Basic YWRtaW46YWRtaW4='} 11:55:58 retries = Retry(total=0, connect=None, read=False, redirect=None, status=None) 11:55:58 redirect = False, assert_same_host = False 11:55:58 timeout = Timeout(connect=30, read=30, total=None), pool_timeout = None 11:55:58 release_conn = False, chunked = False, body_pos = None, preload_content = False 11:55:58 decode_content = False, response_kw = {} 11:55:58 parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/rests/operations/transportpce-networkutils:init-xpdr-rdm-links', query=None, fragment=None) 11:55:58 destination_scheme = None, conn = None, release_this_conn = True 11:55:58 http_tunnel_required = False, err = None, clean_exit = False 11:55:58 11:55:58 def urlopen( # type: ignore[override] 11:55:58 self, 11:55:58 method: str, 11:55:58 url: str, 11:55:58 body: _TYPE_BODY | None = None, 11:55:58 headers: typing.Mapping[str, str] | None = None, 11:55:58 retries: Retry | bool | int | None = None, 11:55:58 redirect: bool = True, 11:55:58 assert_same_host: bool = True, 11:55:58 timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 11:55:58 pool_timeout: int | None = None, 11:55:58 release_conn: bool | None = None, 11:55:58 chunked: bool = False, 11:55:58 body_pos: _TYPE_BODY_POSITION | 
None = None, 11:55:58 preload_content: bool = True, 11:55:58 decode_content: bool = True, 11:55:58 **response_kw: typing.Any, 11:55:58 ) -> BaseHTTPResponse: 11:55:58 """ 11:55:58 Get a connection from the pool and perform an HTTP request. This is the 11:55:58 lowest level call for making a request, so you'll need to specify all 11:55:58 the raw details. 11:55:58 11:55:58 .. note:: 11:55:58 11:55:58 More commonly, it's appropriate to use a convenience method 11:55:58 such as :meth:`request`. 11:55:58 11:55:58 .. note:: 11:55:58 11:55:58 `release_conn` will only behave as expected if 11:55:58 `preload_content=False` because we want to make 11:55:58 `preload_content=False` the default behaviour someday soon without 11:55:58 breaking backwards compatibility. 11:55:58 11:55:58 :param method: 11:55:58 HTTP request method (such as GET, POST, PUT, etc.) 11:55:58 11:55:58 :param url: 11:55:58 The URL to perform the request on. 11:55:58 11:55:58 :param body: 11:55:58 Data to send in the request body, either :class:`str`, :class:`bytes`, 11:55:58 an iterable of :class:`str`/:class:`bytes`, or a file-like object. 11:55:58 11:55:58 :param headers: 11:55:58 Dictionary of custom headers to send, such as User-Agent, 11:55:58 If-None-Match, etc. If None, pool headers are used. If provided, 11:55:58 these headers completely replace any pool-specific headers. 11:55:58 11:55:58 :param retries: 11:55:58 Configure the number of retries to allow before raising a 11:55:58 :class:`~urllib3.exceptions.MaxRetryError` exception. 11:55:58 11:55:58 If ``None`` (default) will retry 3 times, see ``Retry.DEFAULT``. Pass a 11:55:58 :class:`~urllib3.util.retry.Retry` object for fine-grained control 11:55:58 over different types of retries. 11:55:58 Pass an integer number to retry connection errors that many times, 11:55:58 but no other types of errors. Pass zero to never retry. 11:55:58 11:55:58 If ``False``, then retries are disabled and any exception is raised 11:55:58 immediately. Also, instead of raising a MaxRetryError on redirects, 11:55:58 the redirect response will be returned. 11:55:58 11:55:58 :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. 11:55:58 11:55:58 :param redirect: 11:55:58 If True, automatically handle redirects (status codes 301, 302, 11:55:58 303, 307, 308). Each redirect counts as a retry. Disabling retries 11:55:58 will disable redirect, too. 11:55:58 11:55:58 :param assert_same_host: 11:55:58 If ``True``, will make sure that the host of the pool requests is 11:55:58 consistent else will raise HostChangedError. When ``False``, you can 11:55:58 use the pool on an HTTP proxy and request foreign hosts. 11:55:58 11:55:58 :param timeout: 11:55:58 If specified, overrides the default timeout for this one 11:55:58 request. It may be a float (in seconds) or an instance of 11:55:58 :class:`urllib3.util.Timeout`. 11:55:58 11:55:58 :param pool_timeout: 11:55:58 If set and the pool is set to block=True, then this method will 11:55:58 block for ``pool_timeout`` seconds and raise EmptyPoolError if no 11:55:58 connection is available within the time period. 11:55:58 11:55:58 :param bool preload_content: 11:55:58 If True, the response's body will be preloaded into memory. 11:55:58 11:55:58 :param bool decode_content: 11:55:58 If True, will attempt to decode the body based on the 11:55:58 'content-encoding' header. 
11:55:58 11:55:58 :param release_conn: 11:55:58 If False, then the urlopen call will not release the connection 11:55:58 back into the pool once a response is received (but will release if 11:55:58 you read the entire contents of the response such as when 11:55:58 `preload_content=True`). This is useful if you're not preloading 11:55:58 the response's content immediately. You will need to call 11:55:58 ``r.release_conn()`` on the response ``r`` to return the connection 11:55:58 back into the pool. If None, it takes the value of ``preload_content`` 11:55:58 which defaults to ``True``. 11:55:58 11:55:58 :param bool chunked: 11:55:58 If True, urllib3 will send the body using chunked transfer 11:55:58 encoding. Otherwise, urllib3 will send the body using the standard 11:55:58 content-length form. Defaults to False. 11:55:58 11:55:58 :param int body_pos: 11:55:58 Position to seek to in file-like body in the event of a retry or 11:55:58 redirect. Typically this won't need to be set because urllib3 will 11:55:58 auto-populate the value when needed. 11:55:58 """ 11:55:58 parsed_url = parse_url(url) 11:55:58 destination_scheme = parsed_url.scheme 11:55:58 11:55:58 if headers is None: 11:55:58 headers = self.headers 11:55:58 11:55:58 if not isinstance(retries, Retry): 11:55:58 retries = Retry.from_int(retries, redirect=redirect, default=self.retries) 11:55:58 11:55:58 if release_conn is None: 11:55:58 release_conn = preload_content 11:55:58 11:55:58 # Check host 11:55:58 if assert_same_host and not self.is_same_host(url): 11:55:58 raise HostChangedError(self, url, retries) 11:55:58 11:55:58 # Ensure that the URL we're connecting to is properly encoded 11:55:58 if url.startswith("/"): 11:55:58 url = to_str(_encode_target(url)) 11:55:58 else: 11:55:58 url = to_str(parsed_url.url) 11:55:58 11:55:58 conn = None 11:55:58 11:55:58 # Track whether `conn` needs to be released before 11:55:58 # returning/raising/recursing. Update this variable if necessary, and 11:55:58 # leave `release_conn` constant throughout the function. That way, if 11:55:58 # the function recurses, the original value of `release_conn` will be 11:55:58 # passed down into the recursive call, and its value will be respected. 11:55:58 # 11:55:58 # See issue #651 [1] for details. 11:55:58 # 11:55:58 # [1] 11:55:58 release_this_conn = release_conn 11:55:58 11:55:58 http_tunnel_required = connection_requires_http_tunnel( 11:55:58 self.proxy, self.proxy_config, destination_scheme 11:55:58 ) 11:55:58 11:55:58 # Merge the proxy headers. Only done when not using HTTP CONNECT. We 11:55:58 # have to copy the headers dict so we can safely change it without those 11:55:58 # changes being reflected in anyone else's copy. 11:55:58 if not http_tunnel_required: 11:55:58 headers = headers.copy() # type: ignore[attr-defined] 11:55:58 headers.update(self.proxy_headers) # type: ignore[union-attr] 11:55:58 11:55:58 # Must keep the exception bound to a separate variable or else Python 3 11:55:58 # complains about UnboundLocalError. 11:55:58 err = None 11:55:58 11:55:58 # Keep track of whether we cleanly exited the except block. This 11:55:58 # ensures we do proper cleanup in finally. 11:55:58 clean_exit = False 11:55:58 11:55:58 # Rewind body position, if needed. Record current position 11:55:58 # for future rewinds in the event of a redirect/retry. 11:55:58 body_pos = set_file_position(body, body_pos) 11:55:58 11:55:58 try: 11:55:58 # Request a connection from the queue. 
11:55:58 timeout_obj = self._get_timeout(timeout) 11:55:58 conn = self._get_conn(timeout=pool_timeout) 11:55:58 11:55:58 conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] 11:55:58 11:55:58 # Is this a closed/new connection that requires CONNECT tunnelling? 11:55:58 if self.proxy is not None and http_tunnel_required and conn.is_closed: 11:55:58 try: 11:55:58 self._prepare_proxy(conn) 11:55:58 except (BaseSSLError, OSError, SocketTimeout) as e: 11:55:58 self._raise_timeout( 11:55:58 err=e, url=self.proxy.url, timeout_value=conn.timeout 11:55:58 ) 11:55:58 raise 11:55:58 11:55:58 # If we're going to release the connection in ``finally:``, then 11:55:58 # the response doesn't need to know about the connection. Otherwise 11:55:58 # it will also try to release it and we'll have a double-release 11:55:58 # mess. 11:55:58 response_conn = conn if not release_conn else None 11:55:58 11:55:58 # Make the request on the HTTPConnection object 11:55:58 > response = self._make_request( 11:55:58 conn, 11:55:58 method, 11:55:58 url, 11:55:58 timeout=timeout_obj, 11:55:58 body=body, 11:55:58 headers=headers, 11:55:58 chunked=chunked, 11:55:58 retries=retries, 11:55:58 response_conn=response_conn, 11:55:58 preload_content=preload_content, 11:55:58 decode_content=decode_content, 11:55:58 **response_kw, 11:55:58 ) 11:55:58 11:55:58 ../.tox/tests_tapi/lib/python3.11/site-packages/urllib3/connectionpool.py:787: 11:55:58 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 11:55:58 ../.tox/tests_tapi/lib/python3.11/site-packages/urllib3/connectionpool.py:493: in _make_request 11:55:58 conn.request( 11:55:58 ../.tox/tests_tapi/lib/python3.11/site-packages/urllib3/connection.py:494: in request 11:55:58 self.endheaders() 11:55:58 /opt/pyenv/versions/3.11.10/lib/python3.11/http/client.py:1298: in endheaders 11:55:58 self._send_output(message_body, encode_chunked=encode_chunked) 11:55:58 /opt/pyenv/versions/3.11.10/lib/python3.11/http/client.py:1058: in _send_output 11:55:58 self.send(msg) 11:55:58 /opt/pyenv/versions/3.11.10/lib/python3.11/http/client.py:996: in send 11:55:58 self.connect() 11:55:58 ../.tox/tests_tapi/lib/python3.11/site-packages/urllib3/connection.py:325: in connect 11:55:58 self.sock = self._new_conn() 11:55:58 ^^^^^^^^^^^^^^^^ 11:55:58 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 11:55:58 11:55:58 self = 11:55:58 11:55:58 def _new_conn(self) -> socket.socket: 11:55:58 """Establish a socket connection and set nodelay settings on it. 11:55:58 11:55:58 :return: New socket connection. 11:55:58 """ 11:55:58 try: 11:55:58 sock = connection.create_connection( 11:55:58 (self._dns_host, self.port), 11:55:58 self.timeout, 11:55:58 source_address=self.source_address, 11:55:58 socket_options=self.socket_options, 11:55:58 ) 11:55:58 except socket.gaierror as e: 11:55:58 raise NameResolutionError(self.host, self, e) from e 11:55:58 except SocketTimeout as e: 11:55:58 raise ConnectTimeoutError( 11:55:58 self, 11:55:58 f"Connection to {self.host} timed out. 
(connect timeout={self.timeout})", 11:55:58 ) from e 11:55:58 11:55:58 except OSError as e: 11:55:58 > raise NewConnectionError( 11:55:58 self, f"Failed to establish a new connection: {e}" 11:55:58 ) from e 11:55:58 E urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused 11:55:58 11:55:58 ../.tox/tests_tapi/lib/python3.11/site-packages/urllib3/connection.py:213: NewConnectionError 11:55:58 11:55:58 The above exception was the direct cause of the following exception: 11:55:58 11:55:58 self = 11:55:58 request = , stream = False 11:55:58 timeout = Timeout(connect=30, read=30, total=None), verify = True, cert = None 11:55:58 proxies = OrderedDict() 11:55:58 11:55:58 def send( 11:55:58 self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 11:55:58 ): 11:55:58 """Sends PreparedRequest object. Returns Response object. 11:55:58 11:55:58 :param request: The :class:`PreparedRequest ` being sent. 11:55:58 :param stream: (optional) Whether to stream the request content. 11:55:58 :param timeout: (optional) How long to wait for the server to send 11:55:58 data before giving up, as a float, or a :ref:`(connect timeout, 11:55:58 read timeout) ` tuple. 11:55:58 :type timeout: float or tuple or urllib3 Timeout object 11:55:58 :param verify: (optional) Either a boolean, in which case it controls whether 11:55:58 we verify the server's TLS certificate, or a string, in which case it 11:55:58 must be a path to a CA bundle to use 11:55:58 :param cert: (optional) Any user-provided SSL certificate to be trusted. 11:55:58 :param proxies: (optional) The proxies dictionary to apply to the request. 11:55:58 :rtype: requests.Response 11:55:58 """ 11:55:58 11:55:58 try: 11:55:58 conn = self.get_connection_with_tls_context( 11:55:58 request, verify, proxies=proxies, cert=cert 11:55:58 ) 11:55:58 except LocationValueError as e: 11:55:58 raise InvalidURL(e, request=request) 11:55:58 11:55:58 self.cert_verify(conn, request.url, verify, cert) 11:55:58 url = self.request_url(request, proxies) 11:55:58 self.add_headers( 11:55:58 request, 11:55:58 stream=stream, 11:55:58 timeout=timeout, 11:55:58 verify=verify, 11:55:58 cert=cert, 11:55:58 proxies=proxies, 11:55:58 ) 11:55:58 11:55:58 chunked = not (request.body is None or "Content-Length" in request.headers) 11:55:58 11:55:58 if isinstance(timeout, tuple): 11:55:58 try: 11:55:58 connect, read = timeout 11:55:58 timeout = TimeoutSauce(connect=connect, read=read) 11:55:58 except ValueError: 11:55:58 raise ValueError( 11:55:58 f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 11:55:58 f"or a single float to set both timeouts to the same value." 
11:55:58 ) 11:55:58 elif isinstance(timeout, TimeoutSauce): 11:55:58 pass 11:55:58 else: 11:55:58 timeout = TimeoutSauce(connect=timeout, read=timeout) 11:55:58 11:55:58 try: 11:55:58 > resp = conn.urlopen( 11:55:58 method=request.method, 11:55:58 url=url, 11:55:58 body=request.body, 11:55:58 headers=request.headers, 11:55:58 redirect=False, 11:55:58 assert_same_host=False, 11:55:58 preload_content=False, 11:55:58 decode_content=False, 11:55:58 retries=self.max_retries, 11:55:58 timeout=timeout, 11:55:58 chunked=chunked, 11:55:58 ) 11:55:58 11:55:58 ../.tox/tests_tapi/lib/python3.11/site-packages/requests/adapters.py:667: 11:55:58 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 11:55:58 ../.tox/tests_tapi/lib/python3.11/site-packages/urllib3/connectionpool.py:841: in urlopen 11:55:58 retries = retries.increment( 11:55:58 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 11:55:58 11:55:58 self = Retry(total=0, connect=None, read=False, redirect=None, status=None) 11:55:58 method = 'POST' 11:55:58 url = '/rests/operations/transportpce-networkutils:init-xpdr-rdm-links' 11:55:58 response = None 11:55:58 error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused') 11:55:58 _pool = 11:55:58 _stacktrace = 11:55:58 11:55:58 def increment( 11:55:58 self, 11:55:58 method: str | None = None, 11:55:58 url: str | None = None, 11:55:58 response: BaseHTTPResponse | None = None, 11:55:58 error: Exception | None = None, 11:55:58 _pool: ConnectionPool | None = None, 11:55:58 _stacktrace: TracebackType | None = None, 11:55:58 ) -> Self: 11:55:58 """Return a new Retry object with incremented retry counters. 11:55:58 11:55:58 :param response: A response object, or None, if the server did not 11:55:58 return a response. 11:55:58 :type response: :class:`~urllib3.response.BaseHTTPResponse` 11:55:58 :param Exception error: An error encountered during the request, or 11:55:58 None if the response was received successfully. 11:55:58 11:55:58 :return: A new ``Retry`` object. 11:55:58 """ 11:55:58 if self.total is False and error: 11:55:58 # Disabled, indicate to re-raise the error. 11:55:58 raise reraise(type(error), error, _stacktrace) 11:55:58 11:55:58 total = self.total 11:55:58 if total is not None: 11:55:58 total -= 1 11:55:58 11:55:58 connect = self.connect 11:55:58 read = self.read 11:55:58 redirect = self.redirect 11:55:58 status_count = self.status 11:55:58 other = self.other 11:55:58 cause = "unknown" 11:55:58 status = None 11:55:58 redirect_location = None 11:55:58 11:55:58 if error and self._is_connection_error(error): 11:55:58 # Connect retry? 11:55:58 if connect is False: 11:55:58 raise reraise(type(error), error, _stacktrace) 11:55:58 elif connect is not None: 11:55:58 connect -= 1 11:55:58 11:55:58 elif error and self._is_read_error(error): 11:55:58 # Read retry? 11:55:58 if read is False or method is None or not self._is_method_retryable(method): 11:55:58 raise reraise(type(error), error, _stacktrace) 11:55:58 elif read is not None: 11:55:58 read -= 1 11:55:58 11:55:58 elif error: 11:55:58 # Other retry? 11:55:58 if other is not None: 11:55:58 other -= 1 11:55:58 11:55:58 elif response and response.get_redirect_location(): 11:55:58 # Redirect retry? 
11:55:58 if redirect is not None: 11:55:58 redirect -= 1 11:55:58 cause = "too many redirects" 11:55:58 response_redirect_location = response.get_redirect_location() 11:55:58 if response_redirect_location: 11:55:58 redirect_location = response_redirect_location 11:55:58 status = response.status 11:55:58 11:55:58 else: 11:55:58 # Incrementing because of a server error like a 500 in 11:55:58 # status_forcelist and the given method is in the allowed_methods 11:55:58 cause = ResponseError.GENERIC_ERROR 11:55:58 if response and response.status: 11:55:58 if status_count is not None: 11:55:58 status_count -= 1 11:55:58 cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status) 11:55:58 status = response.status 11:55:58 11:55:58 history = self.history + ( 11:55:58 RequestHistory(method, url, error, status, redirect_location), 11:55:58 ) 11:55:58 11:55:58 new_retry = self.new( 11:55:58 total=total, 11:55:58 connect=connect, 11:55:58 read=read, 11:55:58 redirect=redirect, 11:55:58 status=status_count, 11:55:58 other=other, 11:55:58 history=history, 11:55:58 ) 11:55:58 11:55:58 if new_retry.is_exhausted(): 11:55:58 reason = error or ResponseError(cause) 11:55:58 > raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type] 11:55:58 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 11:55:58 E urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=8183): Max retries exceeded with url: /rests/operations/transportpce-networkutils:init-xpdr-rdm-links (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 11:55:58 11:55:58 ../.tox/tests_tapi/lib/python3.11/site-packages/urllib3/util/retry.py:519: MaxRetryError 11:55:58 11:55:58 During handling of the above exception, another exception occurred: 11:55:58 11:55:58 self = 11:55:58 11:55:58 def test_11_connect_xpdra_n1_to_roadma_pp1(self): 11:55:58 > response = test_utils.transportpce_api_rpc_request( 11:55:58 'transportpce-networkutils', 'init-xpdr-rdm-links', 11:55:58 {'links-input': {'xpdr-node': 'XPDR-A1', 'xpdr-num': '1', 'network-num': '1', 11:55:58 'rdm-node': 'ROADM-A1', 'srg-num': '1', 'termination-point-num': 'SRG1-PP1-TXRX'}}) 11:55:58 11:55:58 transportpce_tests/tapi/test01_abstracted_topology.py:270: 11:55:58 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 11:55:58 transportpce_tests/common/test_utils.py:729: in transportpce_api_rpc_request 11:55:58 response = post_request(url, data) 11:55:58 ^^^^^^^^^^^^^^^^^^^^^^^ 11:55:58 transportpce_tests/common/test_utils.py:142: in post_request 11:55:58 return requests.request( 11:55:58 ../.tox/tests_tapi/lib/python3.11/site-packages/requests/api.py:59: in request 11:55:58 return session.request(method=method, url=url, **kwargs) 11:55:58 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 11:55:58 ../.tox/tests_tapi/lib/python3.11/site-packages/requests/sessions.py:589: in request 11:55:58 resp = self.send(prep, **send_kwargs) 11:55:58 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 11:55:58 ../.tox/tests_tapi/lib/python3.11/site-packages/requests/sessions.py:703: in send 11:55:58 r = adapter.send(request, **kwargs) 11:55:58 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 11:55:58 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 11:55:58 11:55:58 self = 11:55:58 request = , stream = False 11:55:58 timeout = Timeout(connect=30, read=30, total=None), verify = True, cert = None 11:55:58 proxies = OrderedDict() 11:55:58 11:55:58 def send( 11:55:58 self, request, stream=False, 
timeout=None, verify=True, cert=None, proxies=None 11:55:58 ): 11:55:58 """Sends PreparedRequest object. Returns Response object. 11:55:58 11:55:58 :param request: The :class:`PreparedRequest ` being sent. 11:55:58 :param stream: (optional) Whether to stream the request content. 11:55:58 :param timeout: (optional) How long to wait for the server to send 11:55:58 data before giving up, as a float, or a :ref:`(connect timeout, 11:55:58 read timeout) ` tuple. 11:55:58 :type timeout: float or tuple or urllib3 Timeout object 11:55:58 :param verify: (optional) Either a boolean, in which case it controls whether 11:55:58 we verify the server's TLS certificate, or a string, in which case it 11:55:58 must be a path to a CA bundle to use 11:55:58 :param cert: (optional) Any user-provided SSL certificate to be trusted. 11:55:58 :param proxies: (optional) The proxies dictionary to apply to the request. 11:55:58 :rtype: requests.Response 11:55:58 """ 11:55:58 11:55:58 try: 11:55:58 conn = self.get_connection_with_tls_context( 11:55:58 request, verify, proxies=proxies, cert=cert 11:55:58 ) 11:55:58 except LocationValueError as e: 11:55:58 raise InvalidURL(e, request=request) 11:55:58 11:55:58 self.cert_verify(conn, request.url, verify, cert) 11:55:58 url = self.request_url(request, proxies) 11:55:58 self.add_headers( 11:55:58 request, 11:55:58 stream=stream, 11:55:58 timeout=timeout, 11:55:58 verify=verify, 11:55:58 cert=cert, 11:55:58 proxies=proxies, 11:55:58 ) 11:55:58 11:55:58 chunked = not (request.body is None or "Content-Length" in request.headers) 11:55:58 11:55:58 if isinstance(timeout, tuple): 11:55:58 try: 11:55:58 connect, read = timeout 11:55:58 timeout = TimeoutSauce(connect=connect, read=read) 11:55:58 except ValueError: 11:55:58 raise ValueError( 11:55:58 f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 11:55:58 f"or a single float to set both timeouts to the same value." 11:55:58 ) 11:55:58 elif isinstance(timeout, TimeoutSauce): 11:55:58 pass 11:55:58 else: 11:55:58 timeout = TimeoutSauce(connect=timeout, read=timeout) 11:55:58 11:55:58 try: 11:55:58 resp = conn.urlopen( 11:55:58 method=request.method, 11:55:58 url=url, 11:55:58 body=request.body, 11:55:58 headers=request.headers, 11:55:58 redirect=False, 11:55:58 assert_same_host=False, 11:55:58 preload_content=False, 11:55:58 decode_content=False, 11:55:58 retries=self.max_retries, 11:55:58 timeout=timeout, 11:55:58 chunked=chunked, 11:55:58 ) 11:55:58 11:55:58 except (ProtocolError, OSError) as err: 11:55:58 raise ConnectionError(err, request=request) 11:55:58 11:55:58 except MaxRetryError as e: 11:55:58 if isinstance(e.reason, ConnectTimeoutError): 11:55:58 # TODO: Remove this in 3.0.0: see #2811 11:55:58 if not isinstance(e.reason, NewConnectionError): 11:55:58 raise ConnectTimeout(e, request=request) 11:55:58 11:55:58 if isinstance(e.reason, ResponseError): 11:55:58 raise RetryError(e, request=request) 11:55:58 11:55:58 if isinstance(e.reason, _ProxyError): 11:55:58 raise ProxyError(e, request=request) 11:55:58 11:55:58 if isinstance(e.reason, _SSLError): 11:55:58 # This branch is for urllib3 v1.22 and later. 
11:55:58 raise SSLError(e, request=request) 11:55:58 11:55:58 > raise ConnectionError(e, request=request) 11:55:58 E requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=8183): Max retries exceeded with url: /rests/operations/transportpce-networkutils:init-xpdr-rdm-links (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 11:55:58 11:55:58 ../.tox/tests_tapi/lib/python3.11/site-packages/requests/adapters.py:700: ConnectionError 11:55:58 ----------------------------- Captured stdout call ----------------------------- 11:55:58 execution of test_11_connect_xpdra_n1_to_roadma_pp1 11:55:58 _________ TransportTapitesting.test_12_connect_roadma_pp1_to_xpdra_n1 __________ 11:55:58 11:55:58 self = 11:55:58 11:55:58 def _new_conn(self) -> socket.socket: 11:55:58 """Establish a socket connection and set nodelay settings on it. 11:55:58 11:55:58 :return: New socket connection. 11:55:58 """ 11:55:58 try: 11:55:58 > sock = connection.create_connection( 11:55:58 (self._dns_host, self.port), 11:55:58 self.timeout, 11:55:58 source_address=self.source_address, 11:55:58 socket_options=self.socket_options, 11:55:58 ) 11:55:58 11:55:58 ../.tox/tests_tapi/lib/python3.11/site-packages/urllib3/connection.py:198: 11:55:58 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 11:55:58 ../.tox/tests_tapi/lib/python3.11/site-packages/urllib3/util/connection.py:85: in create_connection 11:55:58 raise err 11:55:58 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 11:55:58 11:55:58 address = ('localhost', 8183), timeout = 30, source_address = None 11:55:58 socket_options = [(6, 1, 1)] 11:55:58 11:55:58 def create_connection( 11:55:58 address: tuple[str, int], 11:55:58 timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 11:55:58 source_address: tuple[str, int] | None = None, 11:55:58 socket_options: _TYPE_SOCKET_OPTIONS | None = None, 11:55:58 ) -> socket.socket: 11:55:58 """Connect to *address* and return the socket object. 11:55:58 11:55:58 Convenience function. Connect to *address* (a 2-tuple ``(host, 11:55:58 port)``) and return the socket object. Passing the optional 11:55:58 *timeout* parameter will set the timeout on the socket instance 11:55:58 before attempting to connect. If no *timeout* is supplied, the 11:55:58 global default timeout setting returned by :func:`socket.getdefaulttimeout` 11:55:58 is used. If *source_address* is set it must be a tuple of (host, port) 11:55:58 for the socket to bind as a source address before making the connection. 11:55:58 An host of '' or port 0 tells the OS to use the default. 11:55:58 """ 11:55:58 11:55:58 host, port = address 11:55:58 if host.startswith("["): 11:55:58 host = host.strip("[]") 11:55:58 err = None 11:55:58 11:55:58 # Using the value from allowed_gai_family() in the context of getaddrinfo lets 11:55:58 # us select whether to work with IPv4 DNS records, IPv6 records, or both. 11:55:58 # The original create_connection function always returns all records. 
11:55:58 family = allowed_gai_family() 11:55:58 11:55:58 try: 11:55:58 host.encode("idna") 11:55:58 except UnicodeError: 11:55:58 raise LocationParseError(f"'{host}', label empty or too long") from None 11:55:58 11:55:58 for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM): 11:55:58 af, socktype, proto, canonname, sa = res 11:55:58 sock = None 11:55:58 try: 11:55:58 sock = socket.socket(af, socktype, proto) 11:55:58 11:55:58 # If provided, set socket level options before connecting. 11:55:58 _set_socket_options(sock, socket_options) 11:55:58 11:55:58 if timeout is not _DEFAULT_TIMEOUT: 11:55:58 sock.settimeout(timeout) 11:55:58 if source_address: 11:55:58 sock.bind(source_address) 11:55:58 > sock.connect(sa) 11:55:58 E ConnectionRefusedError: [Errno 111] Connection refused 11:55:58 11:55:58 ../.tox/tests_tapi/lib/python3.11/site-packages/urllib3/util/connection.py:73: ConnectionRefusedError 11:55:58 11:55:58 The above exception was the direct cause of the following exception: 11:55:58 11:55:58 self = 11:55:58 method = 'POST' 11:55:58 url = '/rests/operations/transportpce-networkutils:init-rdm-xpdr-links' 11:55:58 body = '{"input": {"links-input": {"xpdr-node": "XPDR-A1", "xpdr-num": "1", "network-num": "1", "rdm-node": "ROADM-A1", "srg-num": "1", "termination-point-num": "SRG1-PP1-TXRX"}}}' 11:55:58 headers = {'User-Agent': 'python-requests/2.32.4', 'Accept-Encoding': 'gzip, deflate', 'Accept': 'application/json', 'Connection': 'keep-alive', 'Content-Type': 'application/json', 'Content-Length': '171', 'Authorization': 'Basic YWRtaW46YWRtaW4='} 11:55:58 retries = Retry(total=0, connect=None, read=False, redirect=None, status=None) 11:55:58 redirect = False, assert_same_host = False 11:55:58 timeout = Timeout(connect=30, read=30, total=None), pool_timeout = None 11:55:58 release_conn = False, chunked = False, body_pos = None, preload_content = False 11:55:58 decode_content = False, response_kw = {} 11:55:58 parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/rests/operations/transportpce-networkutils:init-rdm-xpdr-links', query=None, fragment=None) 11:55:58 destination_scheme = None, conn = None, release_this_conn = True 11:55:58 http_tunnel_required = False, err = None, clean_exit = False 11:55:58 11:55:58 def urlopen( # type: ignore[override] 11:55:58 self, 11:55:58 method: str, 11:55:58 url: str, 11:55:58 body: _TYPE_BODY | None = None, 11:55:58 headers: typing.Mapping[str, str] | None = None, 11:55:58 retries: Retry | bool | int | None = None, 11:55:58 redirect: bool = True, 11:55:58 assert_same_host: bool = True, 11:55:58 timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 11:55:58 pool_timeout: int | None = None, 11:55:58 release_conn: bool | None = None, 11:55:58 chunked: bool = False, 11:55:58 body_pos: _TYPE_BODY_POSITION | None = None, 11:55:58 preload_content: bool = True, 11:55:58 decode_content: bool = True, 11:55:58 **response_kw: typing.Any, 11:55:58 ) -> BaseHTTPResponse: 11:55:58 """ 11:55:58 Get a connection from the pool and perform an HTTP request. This is the 11:55:58 lowest level call for making a request, so you'll need to specify all 11:55:58 the raw details. 11:55:58 11:55:58 .. note:: 11:55:58 11:55:58 More commonly, it's appropriate to use a convenience method 11:55:58 such as :meth:`request`. 11:55:58 11:55:58 .. 
note:: 11:55:58 11:55:58 `release_conn` will only behave as expected if 11:55:58 `preload_content=False` because we want to make 11:55:58 `preload_content=False` the default behaviour someday soon without 11:55:58 breaking backwards compatibility. 11:55:58 11:55:58 :param method: 11:55:58 HTTP request method (such as GET, POST, PUT, etc.) 11:55:58 11:55:58 :param url: 11:55:58 The URL to perform the request on. 11:55:58 11:55:58 :param body: 11:55:58 Data to send in the request body, either :class:`str`, :class:`bytes`, 11:55:58 an iterable of :class:`str`/:class:`bytes`, or a file-like object. 11:55:58 11:55:58 :param headers: 11:55:58 Dictionary of custom headers to send, such as User-Agent, 11:55:58 If-None-Match, etc. If None, pool headers are used. If provided, 11:55:58 these headers completely replace any pool-specific headers. 11:55:58 11:55:58 :param retries: 11:55:58 Configure the number of retries to allow before raising a 11:55:58 :class:`~urllib3.exceptions.MaxRetryError` exception. 11:55:58 11:55:58 If ``None`` (default) will retry 3 times, see ``Retry.DEFAULT``. Pass a 11:55:58 :class:`~urllib3.util.retry.Retry` object for fine-grained control 11:55:58 over different types of retries. 11:55:58 Pass an integer number to retry connection errors that many times, 11:55:58 but no other types of errors. Pass zero to never retry. 11:55:58 11:55:58 If ``False``, then retries are disabled and any exception is raised 11:55:58 immediately. Also, instead of raising a MaxRetryError on redirects, 11:55:58 the redirect response will be returned. 11:55:58 11:55:58 :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. 11:55:58 11:55:58 :param redirect: 11:55:58 If True, automatically handle redirects (status codes 301, 302, 11:55:58 303, 307, 308). Each redirect counts as a retry. Disabling retries 11:55:58 will disable redirect, too. 11:55:58 11:55:58 :param assert_same_host: 11:55:58 If ``True``, will make sure that the host of the pool requests is 11:55:58 consistent else will raise HostChangedError. When ``False``, you can 11:55:58 use the pool on an HTTP proxy and request foreign hosts. 11:55:58 11:55:58 :param timeout: 11:55:58 If specified, overrides the default timeout for this one 11:55:58 request. It may be a float (in seconds) or an instance of 11:55:58 :class:`urllib3.util.Timeout`. 11:55:58 11:55:58 :param pool_timeout: 11:55:58 If set and the pool is set to block=True, then this method will 11:55:58 block for ``pool_timeout`` seconds and raise EmptyPoolError if no 11:55:58 connection is available within the time period. 11:55:58 11:55:58 :param bool preload_content: 11:55:58 If True, the response's body will be preloaded into memory. 11:55:58 11:55:58 :param bool decode_content: 11:55:58 If True, will attempt to decode the body based on the 11:55:58 'content-encoding' header. 11:55:58 11:55:58 :param release_conn: 11:55:58 If False, then the urlopen call will not release the connection 11:55:58 back into the pool once a response is received (but will release if 11:55:58 you read the entire contents of the response such as when 11:55:58 `preload_content=True`). This is useful if you're not preloading 11:55:58 the response's content immediately. You will need to call 11:55:58 ``r.release_conn()`` on the response ``r`` to return the connection 11:55:58 back into the pool. If None, it takes the value of ``preload_content`` 11:55:58 which defaults to ``True``. 
11:55:58 11:55:58 :param bool chunked: 11:55:58 If True, urllib3 will send the body using chunked transfer 11:55:58 encoding. Otherwise, urllib3 will send the body using the standard 11:55:58 content-length form. Defaults to False. 11:55:58 11:55:58 :param int body_pos: 11:55:58 Position to seek to in file-like body in the event of a retry or 11:55:58 redirect. Typically this won't need to be set because urllib3 will 11:55:58 auto-populate the value when needed. 11:55:58 """ 11:55:58 parsed_url = parse_url(url) 11:55:58 destination_scheme = parsed_url.scheme 11:55:58 11:55:58 if headers is None: 11:55:58 headers = self.headers 11:55:58 11:55:58 if not isinstance(retries, Retry): 11:55:58 retries = Retry.from_int(retries, redirect=redirect, default=self.retries) 11:55:58 11:55:58 if release_conn is None: 11:55:58 release_conn = preload_content 11:55:58 11:55:58 # Check host 11:55:58 if assert_same_host and not self.is_same_host(url): 11:55:59 raise HostChangedError(self, url, retries) 11:55:59 11:55:59 # Ensure that the URL we're connecting to is properly encoded 11:55:59 if url.startswith("/"): 11:55:59 url = to_str(_encode_target(url)) 11:55:59 else: 11:55:59 url = to_str(parsed_url.url) 11:55:59 11:55:59 conn = None 11:55:59 11:55:59 # Track whether `conn` needs to be released before 11:55:59 # returning/raising/recursing. Update this variable if necessary, and 11:55:59 # leave `release_conn` constant throughout the function. That way, if 11:55:59 # the function recurses, the original value of `release_conn` will be 11:55:59 # passed down into the recursive call, and its value will be respected. 11:55:59 # 11:55:59 # See issue #651 [1] for details. 11:55:59 # 11:55:59 # [1] 11:55:59 release_this_conn = release_conn 11:55:59 11:55:59 http_tunnel_required = connection_requires_http_tunnel( 11:55:59 self.proxy, self.proxy_config, destination_scheme 11:55:59 ) 11:55:59 11:55:59 # Merge the proxy headers. Only done when not using HTTP CONNECT. We 11:55:59 # have to copy the headers dict so we can safely change it without those 11:55:59 # changes being reflected in anyone else's copy. 11:55:59 if not http_tunnel_required: 11:55:59 headers = headers.copy() # type: ignore[attr-defined] 11:55:59 headers.update(self.proxy_headers) # type: ignore[union-attr] 11:55:59 11:55:59 # Must keep the exception bound to a separate variable or else Python 3 11:55:59 # complains about UnboundLocalError. 11:55:59 err = None 11:55:59 11:55:59 # Keep track of whether we cleanly exited the except block. This 11:55:59 # ensures we do proper cleanup in finally. 11:55:59 clean_exit = False 11:55:59 11:55:59 # Rewind body position, if needed. Record current position 11:55:59 # for future rewinds in the event of a redirect/retry. 11:55:59 body_pos = set_file_position(body, body_pos) 11:55:59 11:55:59 try: 11:55:59 # Request a connection from the queue. 11:55:59 timeout_obj = self._get_timeout(timeout) 11:55:59 conn = self._get_conn(timeout=pool_timeout) 11:55:59 11:55:59 conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] 11:55:59 11:55:59 # Is this a closed/new connection that requires CONNECT tunnelling? 
11:55:59 if self.proxy is not None and http_tunnel_required and conn.is_closed: 11:55:59 try: 11:55:59 self._prepare_proxy(conn) 11:55:59 except (BaseSSLError, OSError, SocketTimeout) as e: 11:55:59 self._raise_timeout( 11:55:59 err=e, url=self.proxy.url, timeout_value=conn.timeout 11:55:59 ) 11:55:59 raise 11:55:59 11:55:59 # If we're going to release the connection in ``finally:``, then 11:55:59 # the response doesn't need to know about the connection. Otherwise 11:55:59 # it will also try to release it and we'll have a double-release 11:55:59 # mess. 11:55:59 response_conn = conn if not release_conn else None 11:55:59 11:55:59 # Make the request on the HTTPConnection object 11:55:59 > response = self._make_request( 11:55:59 conn, 11:55:59 method, 11:55:59 url, 11:55:59 timeout=timeout_obj, 11:55:59 body=body, 11:55:59 headers=headers, 11:55:59 chunked=chunked, 11:55:59 retries=retries, 11:55:59 response_conn=response_conn, 11:55:59 preload_content=preload_content, 11:55:59 decode_content=decode_content, 11:55:59 **response_kw, 11:55:59 ) 11:55:59 11:55:59 ../.tox/tests_tapi/lib/python3.11/site-packages/urllib3/connectionpool.py:787: 11:55:59 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 11:55:59 ../.tox/tests_tapi/lib/python3.11/site-packages/urllib3/connectionpool.py:493: in _make_request 11:55:59 conn.request( 11:55:59 ../.tox/tests_tapi/lib/python3.11/site-packages/urllib3/connection.py:494: in request 11:55:59 self.endheaders() 11:55:59 /opt/pyenv/versions/3.11.10/lib/python3.11/http/client.py:1298: in endheaders 11:55:59 self._send_output(message_body, encode_chunked=encode_chunked) 11:55:59 /opt/pyenv/versions/3.11.10/lib/python3.11/http/client.py:1058: in _send_output 11:55:59 self.send(msg) 11:55:59 /opt/pyenv/versions/3.11.10/lib/python3.11/http/client.py:996: in send 11:55:59 self.connect() 11:55:59 ../.tox/tests_tapi/lib/python3.11/site-packages/urllib3/connection.py:325: in connect 11:55:59 self.sock = self._new_conn() 11:55:59 ^^^^^^^^^^^^^^^^ 11:55:59 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 11:55:59 11:55:59 self = 11:55:59 11:55:59 def _new_conn(self) -> socket.socket: 11:55:59 """Establish a socket connection and set nodelay settings on it. 11:55:59 11:55:59 :return: New socket connection. 11:55:59 """ 11:55:59 try: 11:55:59 sock = connection.create_connection( 11:55:59 (self._dns_host, self.port), 11:55:59 self.timeout, 11:55:59 source_address=self.source_address, 11:55:59 socket_options=self.socket_options, 11:55:59 ) 11:55:59 except socket.gaierror as e: 11:55:59 raise NameResolutionError(self.host, self, e) from e 11:55:59 except SocketTimeout as e: 11:55:59 raise ConnectTimeoutError( 11:55:59 self, 11:55:59 f"Connection to {self.host} timed out. 
(connect timeout={self.timeout})", 11:55:59 ) from e 11:55:59 11:55:59 except OSError as e: 11:55:59 > raise NewConnectionError( 11:55:59 self, f"Failed to establish a new connection: {e}" 11:55:59 ) from e 11:55:59 E urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused 11:55:59 11:55:59 ../.tox/tests_tapi/lib/python3.11/site-packages/urllib3/connection.py:213: NewConnectionError 11:55:59 11:55:59 The above exception was the direct cause of the following exception: 11:55:59 11:55:59 self = 11:55:59 request = , stream = False 11:55:59 timeout = Timeout(connect=30, read=30, total=None), verify = True, cert = None 11:55:59 proxies = OrderedDict() 11:55:59 11:55:59 def send( 11:55:59 self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 11:55:59 ): 11:55:59 """Sends PreparedRequest object. Returns Response object. 11:55:59 11:55:59 :param request: The :class:`PreparedRequest ` being sent. 11:55:59 :param stream: (optional) Whether to stream the request content. 11:55:59 :param timeout: (optional) How long to wait for the server to send 11:55:59 data before giving up, as a float, or a :ref:`(connect timeout, 11:55:59 read timeout) ` tuple. 11:55:59 :type timeout: float or tuple or urllib3 Timeout object 11:55:59 :param verify: (optional) Either a boolean, in which case it controls whether 11:55:59 we verify the server's TLS certificate, or a string, in which case it 11:55:59 must be a path to a CA bundle to use 11:55:59 :param cert: (optional) Any user-provided SSL certificate to be trusted. 11:55:59 :param proxies: (optional) The proxies dictionary to apply to the request. 11:55:59 :rtype: requests.Response 11:55:59 """ 11:55:59 11:55:59 try: 11:55:59 conn = self.get_connection_with_tls_context( 11:55:59 request, verify, proxies=proxies, cert=cert 11:55:59 ) 11:55:59 except LocationValueError as e: 11:55:59 raise InvalidURL(e, request=request) 11:55:59 11:55:59 self.cert_verify(conn, request.url, verify, cert) 11:55:59 url = self.request_url(request, proxies) 11:55:59 self.add_headers( 11:55:59 request, 11:55:59 stream=stream, 11:55:59 timeout=timeout, 11:55:59 verify=verify, 11:55:59 cert=cert, 11:55:59 proxies=proxies, 11:55:59 ) 11:55:59 11:55:59 chunked = not (request.body is None or "Content-Length" in request.headers) 11:55:59 11:55:59 if isinstance(timeout, tuple): 11:55:59 try: 11:55:59 connect, read = timeout 11:55:59 timeout = TimeoutSauce(connect=connect, read=read) 11:55:59 except ValueError: 11:55:59 raise ValueError( 11:55:59 f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 11:55:59 f"or a single float to set both timeouts to the same value." 
11:55:59 ) 11:55:59 elif isinstance(timeout, TimeoutSauce): 11:55:59 pass 11:55:59 else: 11:55:59 timeout = TimeoutSauce(connect=timeout, read=timeout) 11:55:59 11:55:59 try: 11:55:59 > resp = conn.urlopen( 11:55:59 method=request.method, 11:55:59 url=url, 11:55:59 body=request.body, 11:55:59 headers=request.headers, 11:55:59 redirect=False, 11:55:59 assert_same_host=False, 11:55:59 preload_content=False, 11:55:59 decode_content=False, 11:55:59 retries=self.max_retries, 11:55:59 timeout=timeout, 11:55:59 chunked=chunked, 11:55:59 ) 11:55:59 11:55:59 ../.tox/tests_tapi/lib/python3.11/site-packages/requests/adapters.py:667: 11:55:59 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 11:55:59 ../.tox/tests_tapi/lib/python3.11/site-packages/urllib3/connectionpool.py:841: in urlopen 11:55:59 retries = retries.increment( 11:55:59 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 11:55:59 11:55:59 self = Retry(total=0, connect=None, read=False, redirect=None, status=None) 11:55:59 method = 'POST' 11:55:59 url = '/rests/operations/transportpce-networkutils:init-rdm-xpdr-links' 11:55:59 response = None 11:55:59 error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused') 11:55:59 _pool = 11:55:59 _stacktrace = 11:55:59 11:55:59 def increment( 11:55:59 self, 11:55:59 method: str | None = None, 11:55:59 url: str | None = None, 11:55:59 response: BaseHTTPResponse | None = None, 11:55:59 error: Exception | None = None, 11:55:59 _pool: ConnectionPool | None = None, 11:55:59 _stacktrace: TracebackType | None = None, 11:55:59 ) -> Self: 11:55:59 """Return a new Retry object with incremented retry counters. 11:55:59 11:55:59 :param response: A response object, or None, if the server did not 11:55:59 return a response. 11:55:59 :type response: :class:`~urllib3.response.BaseHTTPResponse` 11:55:59 :param Exception error: An error encountered during the request, or 11:55:59 None if the response was received successfully. 11:55:59 11:55:59 :return: A new ``Retry`` object. 11:55:59 """ 11:55:59 if self.total is False and error: 11:55:59 # Disabled, indicate to re-raise the error. 11:55:59 raise reraise(type(error), error, _stacktrace) 11:55:59 11:55:59 total = self.total 11:55:59 if total is not None: 11:55:59 total -= 1 11:55:59 11:55:59 connect = self.connect 11:55:59 read = self.read 11:55:59 redirect = self.redirect 11:55:59 status_count = self.status 11:55:59 other = self.other 11:55:59 cause = "unknown" 11:55:59 status = None 11:55:59 redirect_location = None 11:55:59 11:55:59 if error and self._is_connection_error(error): 11:55:59 # Connect retry? 11:55:59 if connect is False: 11:55:59 raise reraise(type(error), error, _stacktrace) 11:55:59 elif connect is not None: 11:55:59 connect -= 1 11:55:59 11:55:59 elif error and self._is_read_error(error): 11:55:59 # Read retry? 11:55:59 if read is False or method is None or not self._is_method_retryable(method): 11:55:59 raise reraise(type(error), error, _stacktrace) 11:55:59 elif read is not None: 11:55:59 read -= 1 11:55:59 11:55:59 elif error: 11:55:59 # Other retry? 11:55:59 if other is not None: 11:55:59 other -= 1 11:55:59 11:55:59 elif response and response.get_redirect_location(): 11:55:59 # Redirect retry? 
11:55:59 if redirect is not None: 11:55:59 redirect -= 1 11:55:59 cause = "too many redirects" 11:55:59 response_redirect_location = response.get_redirect_location() 11:55:59 if response_redirect_location: 11:55:59 redirect_location = response_redirect_location 11:55:59 status = response.status 11:55:59 11:55:59 else: 11:55:59 # Incrementing because of a server error like a 500 in 11:55:59 # status_forcelist and the given method is in the allowed_methods 11:55:59 cause = ResponseError.GENERIC_ERROR 11:55:59 if response and response.status: 11:55:59 if status_count is not None: 11:55:59 status_count -= 1 11:55:59 cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status) 11:55:59 status = response.status 11:55:59 11:55:59 history = self.history + ( 11:55:59 RequestHistory(method, url, error, status, redirect_location), 11:55:59 ) 11:55:59 11:55:59 new_retry = self.new( 11:55:59 total=total, 11:55:59 connect=connect, 11:55:59 read=read, 11:55:59 redirect=redirect, 11:55:59 status=status_count, 11:55:59 other=other, 11:55:59 history=history, 11:55:59 ) 11:55:59 11:55:59 if new_retry.is_exhausted(): 11:55:59 reason = error or ResponseError(cause) 11:55:59 > raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type] 11:55:59 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 11:55:59 E urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=8183): Max retries exceeded with url: /rests/operations/transportpce-networkutils:init-rdm-xpdr-links (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 11:55:59 11:55:59 ../.tox/tests_tapi/lib/python3.11/site-packages/urllib3/util/retry.py:519: MaxRetryError 11:55:59 11:55:59 During handling of the above exception, another exception occurred: 11:55:59 11:55:59 self = 11:55:59 11:55:59 def test_12_connect_roadma_pp1_to_xpdra_n1(self): 11:55:59 > response = test_utils.transportpce_api_rpc_request( 11:55:59 'transportpce-networkutils', 'init-rdm-xpdr-links', 11:55:59 {'links-input': {'xpdr-node': 'XPDR-A1', 'xpdr-num': '1', 'network-num': '1', 11:55:59 'rdm-node': 'ROADM-A1', 'srg-num': '1', 'termination-point-num': 'SRG1-PP1-TXRX'}}) 11:55:59 11:55:59 transportpce_tests/tapi/test01_abstracted_topology.py:280: 11:55:59 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 11:55:59 transportpce_tests/common/test_utils.py:729: in transportpce_api_rpc_request 11:55:59 response = post_request(url, data) 11:55:59 ^^^^^^^^^^^^^^^^^^^^^^^ 11:55:59 transportpce_tests/common/test_utils.py:142: in post_request 11:55:59 return requests.request( 11:55:59 ../.tox/tests_tapi/lib/python3.11/site-packages/requests/api.py:59: in request 11:55:59 return session.request(method=method, url=url, **kwargs) 11:55:59 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 11:55:59 ../.tox/tests_tapi/lib/python3.11/site-packages/requests/sessions.py:589: in request 11:55:59 resp = self.send(prep, **send_kwargs) 11:55:59 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 11:55:59 ../.tox/tests_tapi/lib/python3.11/site-packages/requests/sessions.py:703: in send 11:55:59 r = adapter.send(request, **kwargs) 11:55:59 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 11:55:59 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 11:55:59 11:55:59 self = 11:55:59 request = , stream = False 11:55:59 timeout = Timeout(connect=30, read=30, total=None), verify = True, cert = None 11:55:59 proxies = OrderedDict() 11:55:59 11:55:59 def send( 11:55:59 self, request, stream=False, 
timeout=None, verify=True, cert=None, proxies=None 11:55:59 ): 11:55:59 """Sends PreparedRequest object. Returns Response object. 11:55:59 11:55:59 :param request: The :class:`PreparedRequest ` being sent. 11:55:59 :param stream: (optional) Whether to stream the request content. 11:55:59 :param timeout: (optional) How long to wait for the server to send 11:55:59 data before giving up, as a float, or a :ref:`(connect timeout, 11:55:59 read timeout) ` tuple. 11:55:59 :type timeout: float or tuple or urllib3 Timeout object 11:55:59 :param verify: (optional) Either a boolean, in which case it controls whether 11:55:59 we verify the server's TLS certificate, or a string, in which case it 11:55:59 must be a path to a CA bundle to use 11:55:59 :param cert: (optional) Any user-provided SSL certificate to be trusted. 11:55:59 :param proxies: (optional) The proxies dictionary to apply to the request. 11:55:59 :rtype: requests.Response 11:55:59 """ 11:55:59 11:55:59 try: 11:55:59 conn = self.get_connection_with_tls_context( 11:55:59 request, verify, proxies=proxies, cert=cert 11:55:59 ) 11:55:59 except LocationValueError as e: 11:55:59 raise InvalidURL(e, request=request) 11:55:59 11:55:59 self.cert_verify(conn, request.url, verify, cert) 11:55:59 url = self.request_url(request, proxies) 11:55:59 self.add_headers( 11:55:59 request, 11:55:59 stream=stream, 11:55:59 timeout=timeout, 11:55:59 verify=verify, 11:55:59 cert=cert, 11:55:59 proxies=proxies, 11:55:59 ) 11:55:59 11:55:59 chunked = not (request.body is None or "Content-Length" in request.headers) 11:55:59 11:55:59 if isinstance(timeout, tuple): 11:55:59 try: 11:55:59 connect, read = timeout 11:55:59 timeout = TimeoutSauce(connect=connect, read=read) 11:55:59 except ValueError: 11:55:59 raise ValueError( 11:55:59 f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 11:55:59 f"or a single float to set both timeouts to the same value." 11:55:59 ) 11:55:59 elif isinstance(timeout, TimeoutSauce): 11:55:59 pass 11:55:59 else: 11:55:59 timeout = TimeoutSauce(connect=timeout, read=timeout) 11:55:59 11:55:59 try: 11:55:59 resp = conn.urlopen( 11:55:59 method=request.method, 11:55:59 url=url, 11:55:59 body=request.body, 11:55:59 headers=request.headers, 11:55:59 redirect=False, 11:55:59 assert_same_host=False, 11:55:59 preload_content=False, 11:55:59 decode_content=False, 11:55:59 retries=self.max_retries, 11:55:59 timeout=timeout, 11:55:59 chunked=chunked, 11:55:59 ) 11:55:59 11:55:59 except (ProtocolError, OSError) as err: 11:55:59 raise ConnectionError(err, request=request) 11:55:59 11:55:59 except MaxRetryError as e: 11:55:59 if isinstance(e.reason, ConnectTimeoutError): 11:55:59 # TODO: Remove this in 3.0.0: see #2811 11:55:59 if not isinstance(e.reason, NewConnectionError): 11:55:59 raise ConnectTimeout(e, request=request) 11:55:59 11:55:59 if isinstance(e.reason, ResponseError): 11:55:59 raise RetryError(e, request=request) 11:55:59 11:55:59 if isinstance(e.reason, _ProxyError): 11:55:59 raise ProxyError(e, request=request) 11:55:59 11:55:59 if isinstance(e.reason, _SSLError): 11:55:59 # This branch is for urllib3 v1.22 and later. 
11:55:59 raise SSLError(e, request=request) 11:55:59 11:55:59 > raise ConnectionError(e, request=request) 11:55:59 E requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=8183): Max retries exceeded with url: /rests/operations/transportpce-networkutils:init-rdm-xpdr-links (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 11:55:59 11:55:59 ../.tox/tests_tapi/lib/python3.11/site-packages/requests/adapters.py:700: ConnectionError 11:55:59 ----------------------------- Captured stdout call ----------------------------- 11:55:59 execution of test_12_connect_roadma_pp1_to_xpdra_n1 11:55:59 ____________ TransportTapitesting.test_13_check_tapi_topology_T100G ____________ 11:55:59 11:55:59 self = 11:55:59 11:55:59 def _new_conn(self) -> socket.socket: 11:55:59 """Establish a socket connection and set nodelay settings on it. 11:55:59 11:55:59 :return: New socket connection. 11:55:59 """ 11:55:59 try: 11:55:59 > sock = connection.create_connection( 11:55:59 (self._dns_host, self.port), 11:55:59 self.timeout, 11:55:59 source_address=self.source_address, 11:55:59 socket_options=self.socket_options, 11:55:59 ) 11:55:59 11:55:59 ../.tox/tests_tapi/lib/python3.11/site-packages/urllib3/connection.py:198: 11:55:59 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 11:55:59 ../.tox/tests_tapi/lib/python3.11/site-packages/urllib3/util/connection.py:85: in create_connection 11:55:59 raise err 11:55:59 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 11:55:59 11:55:59 address = ('localhost', 8183), timeout = 30, source_address = None 11:55:59 socket_options = [(6, 1, 1)] 11:55:59 11:55:59 def create_connection( 11:55:59 address: tuple[str, int], 11:55:59 timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 11:55:59 source_address: tuple[str, int] | None = None, 11:55:59 socket_options: _TYPE_SOCKET_OPTIONS | None = None, 11:55:59 ) -> socket.socket: 11:55:59 """Connect to *address* and return the socket object. 11:55:59 11:55:59 Convenience function. Connect to *address* (a 2-tuple ``(host, 11:55:59 port)``) and return the socket object. Passing the optional 11:55:59 *timeout* parameter will set the timeout on the socket instance 11:55:59 before attempting to connect. If no *timeout* is supplied, the 11:55:59 global default timeout setting returned by :func:`socket.getdefaulttimeout` 11:55:59 is used. If *source_address* is set it must be a tuple of (host, port) 11:55:59 for the socket to bind as a source address before making the connection. 11:55:59 An host of '' or port 0 tells the OS to use the default. 11:55:59 """ 11:55:59 11:55:59 host, port = address 11:55:59 if host.startswith("["): 11:55:59 host = host.strip("[]") 11:55:59 err = None 11:55:59 11:55:59 # Using the value from allowed_gai_family() in the context of getaddrinfo lets 11:55:59 # us select whether to work with IPv4 DNS records, IPv6 records, or both. 11:55:59 # The original create_connection function always returns all records. 
11:55:59 family = allowed_gai_family() 11:55:59 11:55:59 try: 11:55:59 host.encode("idna") 11:55:59 except UnicodeError: 11:55:59 raise LocationParseError(f"'{host}', label empty or too long") from None 11:55:59 11:55:59 for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM): 11:55:59 af, socktype, proto, canonname, sa = res 11:55:59 sock = None 11:55:59 try: 11:55:59 sock = socket.socket(af, socktype, proto) 11:55:59 11:55:59 # If provided, set socket level options before connecting. 11:55:59 _set_socket_options(sock, socket_options) 11:55:59 11:55:59 if timeout is not _DEFAULT_TIMEOUT: 11:55:59 sock.settimeout(timeout) 11:55:59 if source_address: 11:55:59 sock.bind(source_address) 11:55:59 > sock.connect(sa) 11:55:59 E ConnectionRefusedError: [Errno 111] Connection refused 11:55:59 11:55:59 ../.tox/tests_tapi/lib/python3.11/site-packages/urllib3/util/connection.py:73: ConnectionRefusedError 11:55:59 11:55:59 The above exception was the direct cause of the following exception: 11:55:59 11:55:59 self = 11:55:59 method = 'POST', url = '/rests/operations/tapi-topology:get-topology-details' 11:55:59 body = '{"input": {"topology-id": "cf51c729-3699-308a-a7d0-594c6a62ebbb"}}' 11:55:59 headers = {'User-Agent': 'python-requests/2.32.4', 'Accept-Encoding': 'gzip, deflate', 'Accept': 'application/json', 'Connection': 'keep-alive', 'Content-Type': 'application/json', 'Content-Length': '66', 'Authorization': 'Basic YWRtaW46YWRtaW4='} 11:55:59 retries = Retry(total=0, connect=None, read=False, redirect=None, status=None) 11:55:59 redirect = False, assert_same_host = False 11:55:59 timeout = Timeout(connect=30, read=30, total=None), pool_timeout = None 11:55:59 release_conn = False, chunked = False, body_pos = None, preload_content = False 11:55:59 decode_content = False, response_kw = {} 11:55:59 parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/rests/operations/tapi-topology:get-topology-details', query=None, fragment=None) 11:55:59 destination_scheme = None, conn = None, release_this_conn = True 11:55:59 http_tunnel_required = False, err = None, clean_exit = False 11:55:59 11:55:59 def urlopen( # type: ignore[override] 11:55:59 self, 11:55:59 method: str, 11:55:59 url: str, 11:55:59 body: _TYPE_BODY | None = None, 11:55:59 headers: typing.Mapping[str, str] | None = None, 11:55:59 retries: Retry | bool | int | None = None, 11:55:59 redirect: bool = True, 11:55:59 assert_same_host: bool = True, 11:55:59 timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 11:55:59 pool_timeout: int | None = None, 11:55:59 release_conn: bool | None = None, 11:55:59 chunked: bool = False, 11:55:59 body_pos: _TYPE_BODY_POSITION | None = None, 11:55:59 preload_content: bool = True, 11:55:59 decode_content: bool = True, 11:55:59 **response_kw: typing.Any, 11:55:59 ) -> BaseHTTPResponse: 11:55:59 """ 11:55:59 Get a connection from the pool and perform an HTTP request. This is the 11:55:59 lowest level call for making a request, so you'll need to specify all 11:55:59 the raw details. 11:55:59 11:55:59 .. note:: 11:55:59 11:55:59 More commonly, it's appropriate to use a convenience method 11:55:59 such as :meth:`request`. 11:55:59 11:55:59 .. note:: 11:55:59 11:55:59 `release_conn` will only behave as expected if 11:55:59 `preload_content=False` because we want to make 11:55:59 `preload_content=False` the default behaviour someday soon without 11:55:59 breaking backwards compatibility. 11:55:59 11:55:59 :param method: 11:55:59 HTTP request method (such as GET, POST, PUT, etc.) 
11:55:59 11:55:59 :param url: 11:55:59 The URL to perform the request on. 11:55:59 11:55:59 :param body: 11:55:59 Data to send in the request body, either :class:`str`, :class:`bytes`, 11:55:59 an iterable of :class:`str`/:class:`bytes`, or a file-like object. 11:55:59 11:55:59 :param headers: 11:55:59 Dictionary of custom headers to send, such as User-Agent, 11:55:59 If-None-Match, etc. If None, pool headers are used. If provided, 11:55:59 these headers completely replace any pool-specific headers. 11:55:59 11:55:59 :param retries: 11:55:59 Configure the number of retries to allow before raising a 11:55:59 :class:`~urllib3.exceptions.MaxRetryError` exception. 11:55:59 11:55:59 If ``None`` (default) will retry 3 times, see ``Retry.DEFAULT``. Pass a 11:55:59 :class:`~urllib3.util.retry.Retry` object for fine-grained control 11:55:59 over different types of retries. 11:55:59 Pass an integer number to retry connection errors that many times, 11:55:59 but no other types of errors. Pass zero to never retry. 11:55:59 11:55:59 If ``False``, then retries are disabled and any exception is raised 11:55:59 immediately. Also, instead of raising a MaxRetryError on redirects, 11:55:59 the redirect response will be returned. 11:55:59 11:55:59 :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. 11:55:59 11:55:59 :param redirect: 11:55:59 If True, automatically handle redirects (status codes 301, 302, 11:55:59 303, 307, 308). Each redirect counts as a retry. Disabling retries 11:55:59 will disable redirect, too. 11:55:59 11:55:59 :param assert_same_host: 11:55:59 If ``True``, will make sure that the host of the pool requests is 11:55:59 consistent else will raise HostChangedError. When ``False``, you can 11:55:59 use the pool on an HTTP proxy and request foreign hosts. 11:55:59 11:55:59 :param timeout: 11:55:59 If specified, overrides the default timeout for this one 11:55:59 request. It may be a float (in seconds) or an instance of 11:55:59 :class:`urllib3.util.Timeout`. 11:55:59 11:55:59 :param pool_timeout: 11:55:59 If set and the pool is set to block=True, then this method will 11:55:59 block for ``pool_timeout`` seconds and raise EmptyPoolError if no 11:55:59 connection is available within the time period. 11:55:59 11:55:59 :param bool preload_content: 11:55:59 If True, the response's body will be preloaded into memory. 11:55:59 11:55:59 :param bool decode_content: 11:55:59 If True, will attempt to decode the body based on the 11:55:59 'content-encoding' header. 11:55:59 11:55:59 :param release_conn: 11:55:59 If False, then the urlopen call will not release the connection 11:55:59 back into the pool once a response is received (but will release if 11:55:59 you read the entire contents of the response such as when 11:55:59 `preload_content=True`). This is useful if you're not preloading 11:55:59 the response's content immediately. You will need to call 11:55:59 ``r.release_conn()`` on the response ``r`` to return the connection 11:55:59 back into the pool. If None, it takes the value of ``preload_content`` 11:55:59 which defaults to ``True``. 11:55:59 11:55:59 :param bool chunked: 11:55:59 If True, urllib3 will send the body using chunked transfer 11:55:59 encoding. Otherwise, urllib3 will send the body using the standard 11:55:59 content-length form. Defaults to False. 11:55:59 11:55:59 :param int body_pos: 11:55:59 Position to seek to in file-like body in the event of a retry or 11:55:59 redirect. 
Typically this won't need to be set because urllib3 will 11:55:59 auto-populate the value when needed. 11:55:59 """ 11:55:59 parsed_url = parse_url(url) 11:55:59 destination_scheme = parsed_url.scheme 11:55:59 11:55:59 if headers is None: 11:55:59 headers = self.headers 11:55:59 11:55:59 if not isinstance(retries, Retry): 11:55:59 retries = Retry.from_int(retries, redirect=redirect, default=self.retries) 11:55:59 11:55:59 if release_conn is None: 11:55:59 release_conn = preload_content 11:55:59 11:55:59 # Check host 11:55:59 if assert_same_host and not self.is_same_host(url): 11:55:59 raise HostChangedError(self, url, retries) 11:55:59 11:55:59 # Ensure that the URL we're connecting to is properly encoded 11:55:59 if url.startswith("/"): 11:55:59 url = to_str(_encode_target(url)) 11:55:59 else: 11:55:59 url = to_str(parsed_url.url) 11:55:59 11:55:59 conn = None 11:55:59 11:55:59 # Track whether `conn` needs to be released before 11:55:59 # returning/raising/recursing. Update this variable if necessary, and 11:55:59 # leave `release_conn` constant throughout the function. That way, if 11:55:59 # the function recurses, the original value of `release_conn` will be 11:55:59 # passed down into the recursive call, and its value will be respected. 11:55:59 # 11:55:59 # See issue #651 [1] for details. 11:55:59 # 11:55:59 # [1] 11:55:59 release_this_conn = release_conn 11:55:59 11:55:59 http_tunnel_required = connection_requires_http_tunnel( 11:55:59 self.proxy, self.proxy_config, destination_scheme 11:55:59 ) 11:55:59 11:55:59 # Merge the proxy headers. Only done when not using HTTP CONNECT. We 11:55:59 # have to copy the headers dict so we can safely change it without those 11:55:59 # changes being reflected in anyone else's copy. 11:55:59 if not http_tunnel_required: 11:55:59 headers = headers.copy() # type: ignore[attr-defined] 11:55:59 headers.update(self.proxy_headers) # type: ignore[union-attr] 11:55:59 11:55:59 # Must keep the exception bound to a separate variable or else Python 3 11:55:59 # complains about UnboundLocalError. 11:55:59 err = None 11:55:59 11:55:59 # Keep track of whether we cleanly exited the except block. This 11:55:59 # ensures we do proper cleanup in finally. 11:55:59 clean_exit = False 11:55:59 11:55:59 # Rewind body position, if needed. Record current position 11:55:59 # for future rewinds in the event of a redirect/retry. 11:55:59 body_pos = set_file_position(body, body_pos) 11:55:59 11:55:59 try: 11:55:59 # Request a connection from the queue. 11:55:59 timeout_obj = self._get_timeout(timeout) 11:55:59 conn = self._get_conn(timeout=pool_timeout) 11:55:59 11:55:59 conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] 11:55:59 11:55:59 # Is this a closed/new connection that requires CONNECT tunnelling? 11:55:59 if self.proxy is not None and http_tunnel_required and conn.is_closed: 11:55:59 try: 11:55:59 self._prepare_proxy(conn) 11:55:59 except (BaseSSLError, OSError, SocketTimeout) as e: 11:55:59 self._raise_timeout( 11:55:59 err=e, url=self.proxy.url, timeout_value=conn.timeout 11:55:59 ) 11:55:59 raise 11:55:59 11:55:59 # If we're going to release the connection in ``finally:``, then 11:55:59 # the response doesn't need to know about the connection. Otherwise 11:55:59 # it will also try to release it and we'll have a double-release 11:55:59 # mess. 
11:55:59 response_conn = conn if not release_conn else None 11:55:59 11:55:59 # Make the request on the HTTPConnection object 11:55:59 > response = self._make_request( 11:55:59 conn, 11:55:59 method, 11:55:59 url, 11:55:59 timeout=timeout_obj, 11:55:59 body=body, 11:55:59 headers=headers, 11:55:59 chunked=chunked, 11:55:59 retries=retries, 11:55:59 response_conn=response_conn, 11:55:59 preload_content=preload_content, 11:55:59 decode_content=decode_content, 11:55:59 **response_kw, 11:55:59 ) 11:55:59 11:55:59 ../.tox/tests_tapi/lib/python3.11/site-packages/urllib3/connectionpool.py:787: 11:55:59 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 11:55:59 ../.tox/tests_tapi/lib/python3.11/site-packages/urllib3/connectionpool.py:493: in _make_request 11:55:59 conn.request( 11:55:59 ../.tox/tests_tapi/lib/python3.11/site-packages/urllib3/connection.py:494: in request 11:55:59 self.endheaders() 11:55:59 /opt/pyenv/versions/3.11.10/lib/python3.11/http/client.py:1298: in endheaders 11:55:59 self._send_output(message_body, encode_chunked=encode_chunked) 11:55:59 /opt/pyenv/versions/3.11.10/lib/python3.11/http/client.py:1058: in _send_output 11:55:59 self.send(msg) 11:55:59 /opt/pyenv/versions/3.11.10/lib/python3.11/http/client.py:996: in send 11:55:59 self.connect() 11:55:59 ../.tox/tests_tapi/lib/python3.11/site-packages/urllib3/connection.py:325: in connect 11:55:59 self.sock = self._new_conn() 11:55:59 ^^^^^^^^^^^^^^^^ 11:55:59 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 11:55:59 11:55:59 self = 11:55:59 11:55:59 def _new_conn(self) -> socket.socket: 11:55:59 """Establish a socket connection and set nodelay settings on it. 11:55:59 11:55:59 :return: New socket connection. 11:55:59 """ 11:55:59 try: 11:55:59 sock = connection.create_connection( 11:55:59 (self._dns_host, self.port), 11:55:59 self.timeout, 11:55:59 source_address=self.source_address, 11:55:59 socket_options=self.socket_options, 11:55:59 ) 11:55:59 except socket.gaierror as e: 11:55:59 raise NameResolutionError(self.host, self, e) from e 11:55:59 except SocketTimeout as e: 11:55:59 raise ConnectTimeoutError( 11:55:59 self, 11:55:59 f"Connection to {self.host} timed out. (connect timeout={self.timeout})", 11:55:59 ) from e 11:55:59 11:55:59 except OSError as e: 11:55:59 > raise NewConnectionError( 11:55:59 self, f"Failed to establish a new connection: {e}" 11:55:59 ) from e 11:55:59 E urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused 11:55:59 11:55:59 ../.tox/tests_tapi/lib/python3.11/site-packages/urllib3/connection.py:213: NewConnectionError 11:55:59 11:55:59 The above exception was the direct cause of the following exception: 11:55:59 11:55:59 self = 11:55:59 request = , stream = False 11:55:59 timeout = Timeout(connect=30, read=30, total=None), verify = True, cert = None 11:55:59 proxies = OrderedDict() 11:55:59 11:55:59 def send( 11:55:59 self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 11:55:59 ): 11:55:59 """Sends PreparedRequest object. Returns Response object. 11:55:59 11:55:59 :param request: The :class:`PreparedRequest ` being sent. 11:55:59 :param stream: (optional) Whether to stream the request content. 11:55:59 :param timeout: (optional) How long to wait for the server to send 11:55:59 data before giving up, as a float, or a :ref:`(connect timeout, 11:55:59 read timeout) ` tuple. 
11:55:59 :type timeout: float or tuple or urllib3 Timeout object 11:55:59 :param verify: (optional) Either a boolean, in which case it controls whether 11:55:59 we verify the server's TLS certificate, or a string, in which case it 11:55:59 must be a path to a CA bundle to use 11:55:59 :param cert: (optional) Any user-provided SSL certificate to be trusted. 11:55:59 :param proxies: (optional) The proxies dictionary to apply to the request. 11:55:59 :rtype: requests.Response 11:55:59 """ 11:55:59 11:55:59 try: 11:55:59 conn = self.get_connection_with_tls_context( 11:55:59 request, verify, proxies=proxies, cert=cert 11:55:59 ) 11:55:59 except LocationValueError as e: 11:55:59 raise InvalidURL(e, request=request) 11:55:59 11:55:59 self.cert_verify(conn, request.url, verify, cert) 11:55:59 url = self.request_url(request, proxies) 11:55:59 self.add_headers( 11:55:59 request, 11:55:59 stream=stream, 11:55:59 timeout=timeout, 11:55:59 verify=verify, 11:55:59 cert=cert, 11:55:59 proxies=proxies, 11:55:59 ) 11:55:59 11:55:59 chunked = not (request.body is None or "Content-Length" in request.headers) 11:55:59 11:55:59 if isinstance(timeout, tuple): 11:55:59 try: 11:55:59 connect, read = timeout 11:55:59 timeout = TimeoutSauce(connect=connect, read=read) 11:55:59 except ValueError: 11:55:59 raise ValueError( 11:55:59 f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 11:55:59 f"or a single float to set both timeouts to the same value." 11:55:59 ) 11:55:59 elif isinstance(timeout, TimeoutSauce): 11:55:59 pass 11:55:59 else: 11:55:59 timeout = TimeoutSauce(connect=timeout, read=timeout) 11:55:59 11:55:59 try: 11:55:59 > resp = conn.urlopen( 11:55:59 method=request.method, 11:55:59 url=url, 11:55:59 body=request.body, 11:55:59 headers=request.headers, 11:55:59 redirect=False, 11:55:59 assert_same_host=False, 11:55:59 preload_content=False, 11:55:59 decode_content=False, 11:55:59 retries=self.max_retries, 11:55:59 timeout=timeout, 11:55:59 chunked=chunked, 11:55:59 ) 11:55:59 11:55:59 ../.tox/tests_tapi/lib/python3.11/site-packages/requests/adapters.py:667: 11:55:59 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 11:55:59 ../.tox/tests_tapi/lib/python3.11/site-packages/urllib3/connectionpool.py:841: in urlopen 11:55:59 retries = retries.increment( 11:55:59 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 11:55:59 11:55:59 self = Retry(total=0, connect=None, read=False, redirect=None, status=None) 11:55:59 method = 'POST', url = '/rests/operations/tapi-topology:get-topology-details' 11:55:59 response = None 11:55:59 error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused') 11:55:59 _pool = 11:55:59 _stacktrace = 11:55:59 11:55:59 def increment( 11:55:59 self, 11:55:59 method: str | None = None, 11:55:59 url: str | None = None, 11:55:59 response: BaseHTTPResponse | None = None, 11:55:59 error: Exception | None = None, 11:55:59 _pool: ConnectionPool | None = None, 11:55:59 _stacktrace: TracebackType | None = None, 11:55:59 ) -> Self: 11:55:59 """Return a new Retry object with incremented retry counters. 11:55:59 11:55:59 :param response: A response object, or None, if the server did not 11:55:59 return a response. 11:55:59 :type response: :class:`~urllib3.response.BaseHTTPResponse` 11:55:59 :param Exception error: An error encountered during the request, or 11:55:59 None if the response was received successfully. 11:55:59 11:55:59 :return: A new ``Retry`` object. 
11:55:59 """ 11:55:59 if self.total is False and error: 11:55:59 # Disabled, indicate to re-raise the error. 11:55:59 raise reraise(type(error), error, _stacktrace) 11:55:59 11:55:59 total = self.total 11:55:59 if total is not None: 11:55:59 total -= 1 11:55:59 11:55:59 connect = self.connect 11:55:59 read = self.read 11:55:59 redirect = self.redirect 11:55:59 status_count = self.status 11:55:59 other = self.other 11:55:59 cause = "unknown" 11:55:59 status = None 11:55:59 redirect_location = None 11:55:59 11:55:59 if error and self._is_connection_error(error): 11:55:59 # Connect retry? 11:55:59 if connect is False: 11:55:59 raise reraise(type(error), error, _stacktrace) 11:55:59 elif connect is not None: 11:55:59 connect -= 1 11:55:59 11:55:59 elif error and self._is_read_error(error): 11:55:59 # Read retry? 11:55:59 if read is False or method is None or not self._is_method_retryable(method): 11:55:59 raise reraise(type(error), error, _stacktrace) 11:55:59 elif read is not None: 11:55:59 read -= 1 11:55:59 11:55:59 elif error: 11:55:59 # Other retry? 11:55:59 if other is not None: 11:55:59 other -= 1 11:55:59 11:55:59 elif response and response.get_redirect_location(): 11:55:59 # Redirect retry? 11:55:59 if redirect is not None: 11:55:59 redirect -= 1 11:55:59 cause = "too many redirects" 11:55:59 response_redirect_location = response.get_redirect_location() 11:55:59 if response_redirect_location: 11:55:59 redirect_location = response_redirect_location 11:55:59 status = response.status 11:55:59 11:55:59 else: 11:55:59 # Incrementing because of a server error like a 500 in 11:55:59 # status_forcelist and the given method is in the allowed_methods 11:55:59 cause = ResponseError.GENERIC_ERROR 11:55:59 if response and response.status: 11:55:59 if status_count is not None: 11:55:59 status_count -= 1 11:55:59 cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status) 11:55:59 status = response.status 11:55:59 11:55:59 history = self.history + ( 11:55:59 RequestHistory(method, url, error, status, redirect_location), 11:55:59 ) 11:55:59 11:55:59 new_retry = self.new( 11:55:59 total=total, 11:55:59 connect=connect, 11:55:59 read=read, 11:55:59 redirect=redirect, 11:55:59 status=status_count, 11:55:59 other=other, 11:55:59 history=history, 11:55:59 ) 11:55:59 11:55:59 if new_retry.is_exhausted(): 11:55:59 reason = error or ResponseError(cause) 11:55:59 > raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type] 11:55:59 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 11:55:59 E urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=8183): Max retries exceeded with url: /rests/operations/tapi-topology:get-topology-details (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 11:55:59 11:55:59 ../.tox/tests_tapi/lib/python3.11/site-packages/urllib3/util/retry.py:519: MaxRetryError 11:55:59 11:55:59 During handling of the above exception, another exception occurred: 11:55:59 11:55:59 self = 11:55:59 11:55:59 def test_13_check_tapi_topology_T100G(self): 11:55:59 self.tapi_topo["topology-id"] = test_utils.T100GE_UUID 11:55:59 > response = test_utils.transportpce_api_rpc_request( 11:55:59 'tapi-topology', 'get-topology-details', self.tapi_topo) 11:55:59 11:55:59 transportpce_tests/tapi/test01_abstracted_topology.py:291: 11:55:59 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 11:55:59 transportpce_tests/common/test_utils.py:729: in transportpce_api_rpc_request 
11:55:59 response = post_request(url, data) 11:55:59 ^^^^^^^^^^^^^^^^^^^^^^^ 11:55:59 transportpce_tests/common/test_utils.py:142: in post_request 11:55:59 return requests.request( 11:55:59 ../.tox/tests_tapi/lib/python3.11/site-packages/requests/api.py:59: in request 11:55:59 return session.request(method=method, url=url, **kwargs) 11:55:59 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 11:55:59 ../.tox/tests_tapi/lib/python3.11/site-packages/requests/sessions.py:589: in request 11:55:59 resp = self.send(prep, **send_kwargs) 11:55:59 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 11:55:59 ../.tox/tests_tapi/lib/python3.11/site-packages/requests/sessions.py:703: in send 11:55:59 r = adapter.send(request, **kwargs) 11:55:59 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 11:55:59 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 11:55:59 11:55:59 self = 11:55:59 request = , stream = False 11:55:59 timeout = Timeout(connect=30, read=30, total=None), verify = True, cert = None 11:55:59 proxies = OrderedDict() 11:55:59 11:55:59 def send( 11:55:59 self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 11:55:59 ): 11:55:59 """Sends PreparedRequest object. Returns Response object. 11:55:59 11:55:59 :param request: The :class:`PreparedRequest ` being sent. 11:55:59 :param stream: (optional) Whether to stream the request content. 11:55:59 :param timeout: (optional) How long to wait for the server to send 11:55:59 data before giving up, as a float, or a :ref:`(connect timeout, 11:55:59 read timeout) ` tuple. 11:55:59 :type timeout: float or tuple or urllib3 Timeout object 11:55:59 :param verify: (optional) Either a boolean, in which case it controls whether 11:55:59 we verify the server's TLS certificate, or a string, in which case it 11:55:59 must be a path to a CA bundle to use 11:55:59 :param cert: (optional) Any user-provided SSL certificate to be trusted. 11:55:59 :param proxies: (optional) The proxies dictionary to apply to the request. 11:55:59 :rtype: requests.Response 11:55:59 """ 11:55:59 11:55:59 try: 11:55:59 conn = self.get_connection_with_tls_context( 11:55:59 request, verify, proxies=proxies, cert=cert 11:55:59 ) 11:55:59 except LocationValueError as e: 11:55:59 raise InvalidURL(e, request=request) 11:55:59 11:55:59 self.cert_verify(conn, request.url, verify, cert) 11:55:59 url = self.request_url(request, proxies) 11:55:59 self.add_headers( 11:55:59 request, 11:55:59 stream=stream, 11:55:59 timeout=timeout, 11:55:59 verify=verify, 11:55:59 cert=cert, 11:55:59 proxies=proxies, 11:55:59 ) 11:55:59 11:55:59 chunked = not (request.body is None or "Content-Length" in request.headers) 11:55:59 11:55:59 if isinstance(timeout, tuple): 11:55:59 try: 11:55:59 connect, read = timeout 11:55:59 timeout = TimeoutSauce(connect=connect, read=read) 11:55:59 except ValueError: 11:55:59 raise ValueError( 11:55:59 f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 11:55:59 f"or a single float to set both timeouts to the same value." 
11:55:59 ) 11:55:59 elif isinstance(timeout, TimeoutSauce): 11:55:59 pass 11:55:59 else: 11:55:59 timeout = TimeoutSauce(connect=timeout, read=timeout) 11:55:59 11:55:59 try: 11:55:59 resp = conn.urlopen( 11:55:59 method=request.method, 11:55:59 url=url, 11:55:59 body=request.body, 11:55:59 headers=request.headers, 11:55:59 redirect=False, 11:55:59 assert_same_host=False, 11:55:59 preload_content=False, 11:55:59 decode_content=False, 11:55:59 retries=self.max_retries, 11:55:59 timeout=timeout, 11:55:59 chunked=chunked, 11:55:59 ) 11:55:59 11:55:59 except (ProtocolError, OSError) as err: 11:55:59 raise ConnectionError(err, request=request) 11:55:59 11:55:59 except MaxRetryError as e: 11:55:59 if isinstance(e.reason, ConnectTimeoutError): 11:55:59 # TODO: Remove this in 3.0.0: see #2811 11:55:59 if not isinstance(e.reason, NewConnectionError): 11:55:59 raise ConnectTimeout(e, request=request) 11:55:59 11:55:59 if isinstance(e.reason, ResponseError): 11:55:59 raise RetryError(e, request=request) 11:55:59 11:55:59 if isinstance(e.reason, _ProxyError): 11:55:59 raise ProxyError(e, request=request) 11:55:59 11:55:59 if isinstance(e.reason, _SSLError): 11:55:59 # This branch is for urllib3 v1.22 and later. 11:55:59 raise SSLError(e, request=request) 11:55:59 11:55:59 > raise ConnectionError(e, request=request) 11:55:59 E requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=8183): Max retries exceeded with url: /rests/operations/tapi-topology:get-topology-details (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 11:55:59 11:55:59 ../.tox/tests_tapi/lib/python3.11/site-packages/requests/adapters.py:700: ConnectionError 11:55:59 ----------------------------- Captured stdout call ----------------------------- 11:55:59 execution of test_13_check_tapi_topology_T100G 11:55:59 _____________ TransportTapitesting.test_14_check_tapi_topology_T0 ______________ 11:55:59 11:55:59 self = 11:55:59 11:55:59 def _new_conn(self) -> socket.socket: 11:55:59 """Establish a socket connection and set nodelay settings on it. 11:55:59 11:55:59 :return: New socket connection. 11:55:59 """ 11:55:59 try: 11:55:59 > sock = connection.create_connection( 11:55:59 (self._dns_host, self.port), 11:55:59 self.timeout, 11:55:59 source_address=self.source_address, 11:55:59 socket_options=self.socket_options, 11:55:59 ) 11:55:59 11:55:59 ../.tox/tests_tapi/lib/python3.11/site-packages/urllib3/connection.py:198: 11:55:59 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 11:55:59 ../.tox/tests_tapi/lib/python3.11/site-packages/urllib3/util/connection.py:85: in create_connection 11:55:59 raise err 11:55:59 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 11:55:59 11:55:59 address = ('localhost', 8183), timeout = 30, source_address = None 11:55:59 socket_options = [(6, 1, 1)] 11:55:59 11:55:59 def create_connection( 11:55:59 address: tuple[str, int], 11:55:59 timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 11:55:59 source_address: tuple[str, int] | None = None, 11:55:59 socket_options: _TYPE_SOCKET_OPTIONS | None = None, 11:55:59 ) -> socket.socket: 11:55:59 """Connect to *address* and return the socket object. 11:55:59 11:55:59 Convenience function. Connect to *address* (a 2-tuple ``(host, 11:55:59 port)``) and return the socket object. Passing the optional 11:55:59 *timeout* parameter will set the timeout on the socket instance 11:55:59 before attempting to connect. 
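For orientation, the call that fails here is a plain RESTCONF RPC POST issued by test_utils.transportpce_api_rpc_request() via post_request(). A hedged stand-alone equivalent, with host, port, path, credentials, timeout and topology-id taken from the captured parameters of this run (the real helpers in test_utils.py may differ in detail):

    import requests

    rpc_url = "http://localhost:8183/rests/operations/tapi-topology:get-topology-details"
    payload = {"input": {"topology-id": "747c670e-7a07-3dab-b379-5b1cd17402a3"}}
    try:
        response = requests.request(
            "POST", rpc_url,
            json=payload,
            auth=("admin", "admin"),      # Authorization: Basic YWRtaW46YWRtaW4=
            headers={"Accept": "application/json"},
            timeout=(30, 30),             # Timeout(connect=30, read=30)
        )
        print(response.status_code)
    except requests.exceptions.ConnectionError as exc:
        # With nothing listening on localhost:8183 this reproduces the
        # "[Errno 111] Connection refused" failure reported for these tests;
        # exc.args[0] is the underlying urllib3 MaxRetryError.
        print(exc)

test_13 and test_14 only differ in the topology-id they post (test_utils.T100GE_UUID versus test_utils.T0_MULTILAYER_TOPO_UUID).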
If no *timeout* is supplied, the 11:55:59 global default timeout setting returned by :func:`socket.getdefaulttimeout` 11:55:59 is used. If *source_address* is set it must be a tuple of (host, port) 11:55:59 for the socket to bind as a source address before making the connection. 11:55:59 An host of '' or port 0 tells the OS to use the default. 11:55:59 """ 11:55:59 11:55:59 host, port = address 11:55:59 if host.startswith("["): 11:55:59 host = host.strip("[]") 11:55:59 err = None 11:55:59 11:55:59 # Using the value from allowed_gai_family() in the context of getaddrinfo lets 11:55:59 # us select whether to work with IPv4 DNS records, IPv6 records, or both. 11:55:59 # The original create_connection function always returns all records. 11:55:59 family = allowed_gai_family() 11:55:59 11:55:59 try: 11:55:59 host.encode("idna") 11:55:59 except UnicodeError: 11:55:59 raise LocationParseError(f"'{host}', label empty or too long") from None 11:55:59 11:55:59 for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM): 11:55:59 af, socktype, proto, canonname, sa = res 11:55:59 sock = None 11:55:59 try: 11:55:59 sock = socket.socket(af, socktype, proto) 11:55:59 11:55:59 # If provided, set socket level options before connecting. 11:55:59 _set_socket_options(sock, socket_options) 11:55:59 11:55:59 if timeout is not _DEFAULT_TIMEOUT: 11:55:59 sock.settimeout(timeout) 11:55:59 if source_address: 11:55:59 sock.bind(source_address) 11:55:59 > sock.connect(sa) 11:55:59 E ConnectionRefusedError: [Errno 111] Connection refused 11:55:59 11:55:59 ../.tox/tests_tapi/lib/python3.11/site-packages/urllib3/util/connection.py:73: ConnectionRefusedError 11:55:59 11:55:59 The above exception was the direct cause of the following exception: 11:55:59 11:55:59 self = 11:55:59 method = 'POST', url = '/rests/operations/tapi-topology:get-topology-details' 11:55:59 body = '{"input": {"topology-id": "747c670e-7a07-3dab-b379-5b1cd17402a3"}}' 11:55:59 headers = {'User-Agent': 'python-requests/2.32.4', 'Accept-Encoding': 'gzip, deflate', 'Accept': 'application/json', 'Connection': 'keep-alive', 'Content-Type': 'application/json', 'Content-Length': '66', 'Authorization': 'Basic YWRtaW46YWRtaW4='} 11:55:59 retries = Retry(total=0, connect=None, read=False, redirect=None, status=None) 11:55:59 redirect = False, assert_same_host = False 11:55:59 timeout = Timeout(connect=30, read=30, total=None), pool_timeout = None 11:55:59 release_conn = False, chunked = False, body_pos = None, preload_content = False 11:55:59 decode_content = False, response_kw = {} 11:55:59 parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/rests/operations/tapi-topology:get-topology-details', query=None, fragment=None) 11:55:59 destination_scheme = None, conn = None, release_this_conn = True 11:55:59 http_tunnel_required = False, err = None, clean_exit = False 11:55:59 11:55:59 def urlopen( # type: ignore[override] 11:55:59 self, 11:55:59 method: str, 11:55:59 url: str, 11:55:59 body: _TYPE_BODY | None = None, 11:55:59 headers: typing.Mapping[str, str] | None = None, 11:55:59 retries: Retry | bool | int | None = None, 11:55:59 redirect: bool = True, 11:55:59 assert_same_host: bool = True, 11:55:59 timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 11:55:59 pool_timeout: int | None = None, 11:55:59 release_conn: bool | None = None, 11:55:59 chunked: bool = False, 11:55:59 body_pos: _TYPE_BODY_POSITION | None = None, 11:55:59 preload_content: bool = True, 11:55:59 decode_content: bool = True, 11:55:59 **response_kw: typing.Any, 11:55:59 ) -> 
BaseHTTPResponse: 11:55:59 """ 11:55:59 Get a connection from the pool and perform an HTTP request. This is the 11:55:59 lowest level call for making a request, so you'll need to specify all 11:55:59 the raw details. 11:55:59 11:55:59 .. note:: 11:55:59 11:55:59 More commonly, it's appropriate to use a convenience method 11:55:59 such as :meth:`request`. 11:55:59 11:55:59 .. note:: 11:55:59 11:55:59 `release_conn` will only behave as expected if 11:55:59 `preload_content=False` because we want to make 11:55:59 `preload_content=False` the default behaviour someday soon without 11:55:59 breaking backwards compatibility. 11:55:59 11:55:59 :param method: 11:55:59 HTTP request method (such as GET, POST, PUT, etc.) 11:55:59 11:55:59 :param url: 11:55:59 The URL to perform the request on. 11:55:59 11:55:59 :param body: 11:55:59 Data to send in the request body, either :class:`str`, :class:`bytes`, 11:55:59 an iterable of :class:`str`/:class:`bytes`, or a file-like object. 11:55:59 11:55:59 :param headers: 11:55:59 Dictionary of custom headers to send, such as User-Agent, 11:55:59 If-None-Match, etc. If None, pool headers are used. If provided, 11:55:59 these headers completely replace any pool-specific headers. 11:55:59 11:55:59 :param retries: 11:55:59 Configure the number of retries to allow before raising a 11:55:59 :class:`~urllib3.exceptions.MaxRetryError` exception. 11:55:59 11:55:59 If ``None`` (default) will retry 3 times, see ``Retry.DEFAULT``. Pass a 11:55:59 :class:`~urllib3.util.retry.Retry` object for fine-grained control 11:55:59 over different types of retries. 11:55:59 Pass an integer number to retry connection errors that many times, 11:55:59 but no other types of errors. Pass zero to never retry. 11:55:59 11:55:59 If ``False``, then retries are disabled and any exception is raised 11:55:59 immediately. Also, instead of raising a MaxRetryError on redirects, 11:55:59 the redirect response will be returned. 11:55:59 11:55:59 :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. 11:55:59 11:55:59 :param redirect: 11:55:59 If True, automatically handle redirects (status codes 301, 302, 11:55:59 303, 307, 308). Each redirect counts as a retry. Disabling retries 11:55:59 will disable redirect, too. 11:55:59 11:55:59 :param assert_same_host: 11:55:59 If ``True``, will make sure that the host of the pool requests is 11:55:59 consistent else will raise HostChangedError. When ``False``, you can 11:55:59 use the pool on an HTTP proxy and request foreign hosts. 11:55:59 11:55:59 :param timeout: 11:55:59 If specified, overrides the default timeout for this one 11:55:59 request. It may be a float (in seconds) or an instance of 11:55:59 :class:`urllib3.util.Timeout`. 11:55:59 11:55:59 :param pool_timeout: 11:55:59 If set and the pool is set to block=True, then this method will 11:55:59 block for ``pool_timeout`` seconds and raise EmptyPoolError if no 11:55:59 connection is available within the time period. 11:55:59 11:55:59 :param bool preload_content: 11:55:59 If True, the response's body will be preloaded into memory. 11:55:59 11:55:59 :param bool decode_content: 11:55:59 If True, will attempt to decode the body based on the 11:55:59 'content-encoding' header. 11:55:59 11:55:59 :param release_conn: 11:55:59 If False, then the urlopen call will not release the connection 11:55:59 back into the pool once a response is received (but will release if 11:55:59 you read the entire contents of the response such as when 11:55:59 `preload_content=True`). 
This is useful if you're not preloading 11:55:59 the response's content immediately. You will need to call 11:55:59 ``r.release_conn()`` on the response ``r`` to return the connection 11:55:59 back into the pool. If None, it takes the value of ``preload_content`` 11:55:59 which defaults to ``True``. 11:55:59 11:55:59 :param bool chunked: 11:55:59 If True, urllib3 will send the body using chunked transfer 11:55:59 encoding. Otherwise, urllib3 will send the body using the standard 11:55:59 content-length form. Defaults to False. 11:55:59 11:55:59 :param int body_pos: 11:55:59 Position to seek to in file-like body in the event of a retry or 11:55:59 redirect. Typically this won't need to be set because urllib3 will 11:55:59 auto-populate the value when needed. 11:55:59 """ 11:55:59 parsed_url = parse_url(url) 11:55:59 destination_scheme = parsed_url.scheme 11:55:59 11:55:59 if headers is None: 11:55:59 headers = self.headers 11:55:59 11:55:59 if not isinstance(retries, Retry): 11:55:59 retries = Retry.from_int(retries, redirect=redirect, default=self.retries) 11:55:59 11:55:59 if release_conn is None: 11:55:59 release_conn = preload_content 11:55:59 11:55:59 # Check host 11:55:59 if assert_same_host and not self.is_same_host(url): 11:55:59 raise HostChangedError(self, url, retries) 11:55:59 11:55:59 # Ensure that the URL we're connecting to is properly encoded 11:55:59 if url.startswith("/"): 11:55:59 url = to_str(_encode_target(url)) 11:55:59 else: 11:55:59 url = to_str(parsed_url.url) 11:55:59 11:55:59 conn = None 11:55:59 11:55:59 # Track whether `conn` needs to be released before 11:55:59 # returning/raising/recursing. Update this variable if necessary, and 11:55:59 # leave `release_conn` constant throughout the function. That way, if 11:55:59 # the function recurses, the original value of `release_conn` will be 11:55:59 # passed down into the recursive call, and its value will be respected. 11:55:59 # 11:55:59 # See issue #651 [1] for details. 11:55:59 # 11:55:59 # [1] 11:55:59 release_this_conn = release_conn 11:55:59 11:55:59 http_tunnel_required = connection_requires_http_tunnel( 11:55:59 self.proxy, self.proxy_config, destination_scheme 11:55:59 ) 11:55:59 11:55:59 # Merge the proxy headers. Only done when not using HTTP CONNECT. We 11:55:59 # have to copy the headers dict so we can safely change it without those 11:55:59 # changes being reflected in anyone else's copy. 11:55:59 if not http_tunnel_required: 11:55:59 headers = headers.copy() # type: ignore[attr-defined] 11:55:59 headers.update(self.proxy_headers) # type: ignore[union-attr] 11:55:59 11:55:59 # Must keep the exception bound to a separate variable or else Python 3 11:55:59 # complains about UnboundLocalError. 11:55:59 err = None 11:55:59 11:55:59 # Keep track of whether we cleanly exited the except block. This 11:55:59 # ensures we do proper cleanup in finally. 11:55:59 clean_exit = False 11:55:59 11:55:59 # Rewind body position, if needed. Record current position 11:55:59 # for future rewinds in the event of a redirect/retry. 11:55:59 body_pos = set_file_position(body, body_pos) 11:55:59 11:55:59 try: 11:55:59 # Request a connection from the queue. 11:55:59 timeout_obj = self._get_timeout(timeout) 11:55:59 conn = self._get_conn(timeout=pool_timeout) 11:55:59 11:55:59 conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] 11:55:59 11:55:59 # Is this a closed/new connection that requires CONNECT tunnelling? 
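The same refusal can be reproduced one layer down, directly against the urlopen() documented above, without going through requests at all. A minimal sketch assuming urllib3 v2 and, as in this run, no server on port 8183:

    from urllib3 import HTTPConnectionPool
    from urllib3.exceptions import MaxRetryError

    # retries=0 is normalised to Retry(total=0), matching the captured parameters.
    pool = HTTPConnectionPool("localhost", 8183, retries=0)
    try:
        pool.urlopen(
            "POST",
            "/rests/operations/tapi-topology:get-topology-details",
            body=b'{"input": {"topology-id": "747c670e-7a07-3dab-b379-5b1cd17402a3"}}',
            headers={"Content-Type": "application/json"},
        )
    except MaxRetryError as exc:
        print(exc.reason)  # NewConnectionError: ... Connection refused

requests.adapters.HTTPAdapter.send() is essentially a wrapper around this call, which is why the requests-level and urllib3-level tracebacks interleave the way they do above.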
11:55:59 if self.proxy is not None and http_tunnel_required and conn.is_closed: 11:55:59 try: 11:55:59 self._prepare_proxy(conn) 11:55:59 except (BaseSSLError, OSError, SocketTimeout) as e: 11:55:59 self._raise_timeout( 11:55:59 err=e, url=self.proxy.url, timeout_value=conn.timeout 11:55:59 ) 11:55:59 raise 11:55:59 11:55:59 # If we're going to release the connection in ``finally:``, then 11:55:59 # the response doesn't need to know about the connection. Otherwise 11:55:59 # it will also try to release it and we'll have a double-release 11:55:59 # mess. 11:55:59 response_conn = conn if not release_conn else None 11:55:59 11:55:59 # Make the request on the HTTPConnection object 11:55:59 > response = self._make_request( 11:55:59 conn, 11:55:59 method, 11:55:59 url, 11:55:59 timeout=timeout_obj, 11:55:59 body=body, 11:55:59 headers=headers, 11:55:59 chunked=chunked, 11:55:59 retries=retries, 11:55:59 response_conn=response_conn, 11:55:59 preload_content=preload_content, 11:55:59 decode_content=decode_content, 11:55:59 **response_kw, 11:55:59 ) 11:55:59 11:55:59 ../.tox/tests_tapi/lib/python3.11/site-packages/urllib3/connectionpool.py:787: 11:55:59 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 11:55:59 ../.tox/tests_tapi/lib/python3.11/site-packages/urllib3/connectionpool.py:493: in _make_request 11:55:59 conn.request( 11:55:59 ../.tox/tests_tapi/lib/python3.11/site-packages/urllib3/connection.py:494: in request 11:55:59 self.endheaders() 11:55:59 /opt/pyenv/versions/3.11.10/lib/python3.11/http/client.py:1298: in endheaders 11:55:59 self._send_output(message_body, encode_chunked=encode_chunked) 11:55:59 /opt/pyenv/versions/3.11.10/lib/python3.11/http/client.py:1058: in _send_output 11:55:59 self.send(msg) 11:55:59 /opt/pyenv/versions/3.11.10/lib/python3.11/http/client.py:996: in send 11:55:59 self.connect() 11:55:59 ../.tox/tests_tapi/lib/python3.11/site-packages/urllib3/connection.py:325: in connect 11:55:59 self.sock = self._new_conn() 11:55:59 ^^^^^^^^^^^^^^^^ 11:55:59 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 11:55:59 11:55:59 self = 11:55:59 11:55:59 def _new_conn(self) -> socket.socket: 11:55:59 """Establish a socket connection and set nodelay settings on it. 11:55:59 11:55:59 :return: New socket connection. 11:55:59 """ 11:55:59 try: 11:55:59 sock = connection.create_connection( 11:55:59 (self._dns_host, self.port), 11:55:59 self.timeout, 11:55:59 source_address=self.source_address, 11:55:59 socket_options=self.socket_options, 11:55:59 ) 11:55:59 except socket.gaierror as e: 11:55:59 raise NameResolutionError(self.host, self, e) from e 11:55:59 except SocketTimeout as e: 11:55:59 raise ConnectTimeoutError( 11:55:59 self, 11:55:59 f"Connection to {self.host} timed out. 
(connect timeout={self.timeout})", 11:55:59 ) from e 11:55:59 11:55:59 except OSError as e: 11:55:59 > raise NewConnectionError( 11:55:59 self, f"Failed to establish a new connection: {e}" 11:55:59 ) from e 11:55:59 E urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused 11:55:59 11:55:59 ../.tox/tests_tapi/lib/python3.11/site-packages/urllib3/connection.py:213: NewConnectionError 11:55:59 11:55:59 The above exception was the direct cause of the following exception: 11:55:59 11:55:59 self = 11:55:59 request = , stream = False 11:55:59 timeout = Timeout(connect=30, read=30, total=None), verify = True, cert = None 11:55:59 proxies = OrderedDict() 11:55:59 11:55:59 def send( 11:55:59 self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 11:55:59 ): 11:55:59 """Sends PreparedRequest object. Returns Response object. 11:55:59 11:55:59 :param request: The :class:`PreparedRequest ` being sent. 11:55:59 :param stream: (optional) Whether to stream the request content. 11:55:59 :param timeout: (optional) How long to wait for the server to send 11:55:59 data before giving up, as a float, or a :ref:`(connect timeout, 11:55:59 read timeout) ` tuple. 11:55:59 :type timeout: float or tuple or urllib3 Timeout object 11:55:59 :param verify: (optional) Either a boolean, in which case it controls whether 11:55:59 we verify the server's TLS certificate, or a string, in which case it 11:55:59 must be a path to a CA bundle to use 11:55:59 :param cert: (optional) Any user-provided SSL certificate to be trusted. 11:55:59 :param proxies: (optional) The proxies dictionary to apply to the request. 11:55:59 :rtype: requests.Response 11:55:59 """ 11:55:59 11:55:59 try: 11:55:59 conn = self.get_connection_with_tls_context( 11:55:59 request, verify, proxies=proxies, cert=cert 11:55:59 ) 11:55:59 except LocationValueError as e: 11:55:59 raise InvalidURL(e, request=request) 11:55:59 11:55:59 self.cert_verify(conn, request.url, verify, cert) 11:55:59 url = self.request_url(request, proxies) 11:55:59 self.add_headers( 11:55:59 request, 11:55:59 stream=stream, 11:55:59 timeout=timeout, 11:55:59 verify=verify, 11:55:59 cert=cert, 11:55:59 proxies=proxies, 11:55:59 ) 11:55:59 11:55:59 chunked = not (request.body is None or "Content-Length" in request.headers) 11:55:59 11:55:59 if isinstance(timeout, tuple): 11:55:59 try: 11:55:59 connect, read = timeout 11:55:59 timeout = TimeoutSauce(connect=connect, read=read) 11:55:59 except ValueError: 11:55:59 raise ValueError( 11:55:59 f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 11:55:59 f"or a single float to set both timeouts to the same value." 
11:55:59 ) 11:55:59 elif isinstance(timeout, TimeoutSauce): 11:55:59 pass 11:55:59 else: 11:55:59 timeout = TimeoutSauce(connect=timeout, read=timeout) 11:55:59 11:55:59 try: 11:55:59 > resp = conn.urlopen( 11:55:59 method=request.method, 11:55:59 url=url, 11:55:59 body=request.body, 11:55:59 headers=request.headers, 11:55:59 redirect=False, 11:55:59 assert_same_host=False, 11:55:59 preload_content=False, 11:55:59 decode_content=False, 11:55:59 retries=self.max_retries, 11:55:59 timeout=timeout, 11:55:59 chunked=chunked, 11:55:59 ) 11:55:59 11:55:59 ../.tox/tests_tapi/lib/python3.11/site-packages/requests/adapters.py:667: 11:55:59 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 11:55:59 ../.tox/tests_tapi/lib/python3.11/site-packages/urllib3/connectionpool.py:841: in urlopen 11:55:59 retries = retries.increment( 11:55:59 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 11:55:59 11:55:59 self = Retry(total=0, connect=None, read=False, redirect=None, status=None) 11:55:59 method = 'POST', url = '/rests/operations/tapi-topology:get-topology-details' 11:55:59 response = None 11:55:59 error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused') 11:55:59 _pool = 11:55:59 _stacktrace = 11:55:59 11:55:59 def increment( 11:55:59 self, 11:55:59 method: str | None = None, 11:55:59 url: str | None = None, 11:55:59 response: BaseHTTPResponse | None = None, 11:55:59 error: Exception | None = None, 11:55:59 _pool: ConnectionPool | None = None, 11:55:59 _stacktrace: TracebackType | None = None, 11:55:59 ) -> Self: 11:55:59 """Return a new Retry object with incremented retry counters. 11:55:59 11:55:59 :param response: A response object, or None, if the server did not 11:55:59 return a response. 11:55:59 :type response: :class:`~urllib3.response.BaseHTTPResponse` 11:55:59 :param Exception error: An error encountered during the request, or 11:55:59 None if the response was received successfully. 11:55:59 11:55:59 :return: A new ``Retry`` object. 11:55:59 """ 11:55:59 if self.total is False and error: 11:55:59 # Disabled, indicate to re-raise the error. 11:55:59 raise reraise(type(error), error, _stacktrace) 11:55:59 11:55:59 total = self.total 11:55:59 if total is not None: 11:55:59 total -= 1 11:55:59 11:55:59 connect = self.connect 11:55:59 read = self.read 11:55:59 redirect = self.redirect 11:55:59 status_count = self.status 11:55:59 other = self.other 11:55:59 cause = "unknown" 11:55:59 status = None 11:55:59 redirect_location = None 11:55:59 11:55:59 if error and self._is_connection_error(error): 11:55:59 # Connect retry? 11:55:59 if connect is False: 11:55:59 raise reraise(type(error), error, _stacktrace) 11:55:59 elif connect is not None: 11:55:59 connect -= 1 11:55:59 11:55:59 elif error and self._is_read_error(error): 11:55:59 # Read retry? 11:55:59 if read is False or method is None or not self._is_method_retryable(method): 11:55:59 raise reraise(type(error), error, _stacktrace) 11:55:59 elif read is not None: 11:55:59 read -= 1 11:55:59 11:55:59 elif error: 11:55:59 # Other retry? 11:55:59 if other is not None: 11:55:59 other -= 1 11:55:59 11:55:59 elif response and response.get_redirect_location(): 11:55:59 # Redirect retry? 
11:55:59 if redirect is not None: 11:55:59 redirect -= 1 11:55:59 cause = "too many redirects" 11:55:59 response_redirect_location = response.get_redirect_location() 11:55:59 if response_redirect_location: 11:55:59 redirect_location = response_redirect_location 11:55:59 status = response.status 11:55:59 11:55:59 else: 11:55:59 # Incrementing because of a server error like a 500 in 11:55:59 # status_forcelist and the given method is in the allowed_methods 11:55:59 cause = ResponseError.GENERIC_ERROR 11:55:59 if response and response.status: 11:55:59 if status_count is not None: 11:55:59 status_count -= 1 11:55:59 cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status) 11:55:59 status = response.status 11:55:59 11:55:59 history = self.history + ( 11:55:59 RequestHistory(method, url, error, status, redirect_location), 11:55:59 ) 11:55:59 11:55:59 new_retry = self.new( 11:55:59 total=total, 11:55:59 connect=connect, 11:55:59 read=read, 11:55:59 redirect=redirect, 11:55:59 status=status_count, 11:55:59 other=other, 11:55:59 history=history, 11:55:59 ) 11:55:59 11:55:59 if new_retry.is_exhausted(): 11:55:59 reason = error or ResponseError(cause) 11:55:59 > raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type] 11:55:59 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 11:55:59 E urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=8183): Max retries exceeded with url: /rests/operations/tapi-topology:get-topology-details (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 11:55:59 11:55:59 ../.tox/tests_tapi/lib/python3.11/site-packages/urllib3/util/retry.py:519: MaxRetryError 11:55:59 11:55:59 During handling of the above exception, another exception occurred: 11:55:59 11:55:59 self = 11:55:59 11:55:59 def test_14_check_tapi_topology_T0(self): 11:55:59 self.tapi_topo["topology-id"] = test_utils.T0_MULTILAYER_TOPO_UUID 11:55:59 > response = test_utils.transportpce_api_rpc_request( 11:55:59 'tapi-topology', 'get-topology-details', self.tapi_topo) 11:55:59 11:55:59 transportpce_tests/tapi/test01_abstracted_topology.py:302: 11:55:59 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 11:55:59 transportpce_tests/common/test_utils.py:729: in transportpce_api_rpc_request 11:55:59 response = post_request(url, data) 11:55:59 ^^^^^^^^^^^^^^^^^^^^^^^ 11:55:59 transportpce_tests/common/test_utils.py:142: in post_request 11:55:59 return requests.request( 11:55:59 ../.tox/tests_tapi/lib/python3.11/site-packages/requests/api.py:59: in request 11:55:59 return session.request(method=method, url=url, **kwargs) 11:55:59 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 11:55:59 ../.tox/tests_tapi/lib/python3.11/site-packages/requests/sessions.py:589: in request 11:55:59 resp = self.send(prep, **send_kwargs) 11:55:59 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 11:55:59 ../.tox/tests_tapi/lib/python3.11/site-packages/requests/sessions.py:703: in send 11:55:59 r = adapter.send(request, **kwargs) 11:55:59 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 11:55:59 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 11:55:59 11:55:59 self = 11:55:59 request = , stream = False 11:55:59 timeout = Timeout(connect=30, read=30, total=None), verify = True, cert = None 11:55:59 proxies = OrderedDict() 11:55:59 11:55:59 def send( 11:55:59 self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 11:55:59 ): 11:55:59 """Sends PreparedRequest object. 
Returns Response object. 11:55:59 11:55:59 :param request: The :class:`PreparedRequest ` being sent. 11:55:59 :param stream: (optional) Whether to stream the request content. 11:55:59 :param timeout: (optional) How long to wait for the server to send 11:55:59 data before giving up, as a float, or a :ref:`(connect timeout, 11:55:59 read timeout) ` tuple. 11:55:59 :type timeout: float or tuple or urllib3 Timeout object 11:55:59 :param verify: (optional) Either a boolean, in which case it controls whether 11:55:59 we verify the server's TLS certificate, or a string, in which case it 11:55:59 must be a path to a CA bundle to use 11:55:59 :param cert: (optional) Any user-provided SSL certificate to be trusted. 11:55:59 :param proxies: (optional) The proxies dictionary to apply to the request. 11:55:59 :rtype: requests.Response 11:55:59 """ 11:55:59 11:55:59 try: 11:55:59 conn = self.get_connection_with_tls_context( 11:55:59 request, verify, proxies=proxies, cert=cert 11:55:59 ) 11:55:59 except LocationValueError as e: 11:55:59 raise InvalidURL(e, request=request) 11:55:59 11:55:59 self.cert_verify(conn, request.url, verify, cert) 11:55:59 url = self.request_url(request, proxies) 11:55:59 self.add_headers( 11:55:59 request, 11:55:59 stream=stream, 11:55:59 timeout=timeout, 11:55:59 verify=verify, 11:55:59 cert=cert, 11:55:59 proxies=proxies, 11:55:59 ) 11:55:59 11:55:59 chunked = not (request.body is None or "Content-Length" in request.headers) 11:55:59 11:55:59 if isinstance(timeout, tuple): 11:55:59 try: 11:55:59 connect, read = timeout 11:55:59 timeout = TimeoutSauce(connect=connect, read=read) 11:55:59 except ValueError: 11:55:59 raise ValueError( 11:55:59 f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 11:55:59 f"or a single float to set both timeouts to the same value." 11:55:59 ) 11:55:59 elif isinstance(timeout, TimeoutSauce): 11:55:59 pass 11:55:59 else: 11:55:59 timeout = TimeoutSauce(connect=timeout, read=timeout) 11:55:59 11:55:59 try: 11:55:59 resp = conn.urlopen( 11:55:59 method=request.method, 11:55:59 url=url, 11:55:59 body=request.body, 11:55:59 headers=request.headers, 11:55:59 redirect=False, 11:55:59 assert_same_host=False, 11:55:59 preload_content=False, 11:55:59 decode_content=False, 11:55:59 retries=self.max_retries, 11:55:59 timeout=timeout, 11:55:59 chunked=chunked, 11:55:59 ) 11:55:59 11:55:59 except (ProtocolError, OSError) as err: 11:55:59 raise ConnectionError(err, request=request) 11:55:59 11:55:59 except MaxRetryError as e: 11:55:59 if isinstance(e.reason, ConnectTimeoutError): 11:55:59 # TODO: Remove this in 3.0.0: see #2811 11:55:59 if not isinstance(e.reason, NewConnectionError): 11:55:59 raise ConnectTimeout(e, request=request) 11:55:59 11:55:59 if isinstance(e.reason, ResponseError): 11:55:59 raise RetryError(e, request=request) 11:55:59 11:55:59 if isinstance(e.reason, _ProxyError): 11:55:59 raise ProxyError(e, request=request) 11:55:59 11:55:59 if isinstance(e.reason, _SSLError): 11:55:59 # This branch is for urllib3 v1.22 and later. 
11:55:59 raise SSLError(e, request=request) 11:55:59 11:55:59 > raise ConnectionError(e, request=request) 11:55:59 E requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=8183): Max retries exceeded with url: /rests/operations/tapi-topology:get-topology-details (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 11:55:59 11:55:59 ../.tox/tests_tapi/lib/python3.11/site-packages/requests/adapters.py:700: ConnectionError 11:55:59 ----------------------------- Captured stdout call ----------------------------- 11:55:59 execution of test_14_check_tapi_topology_T0 11:55:59 __________________ TransportTapitesting.test_15_connect_xpdrc __________________ 11:55:59 11:55:59 self = 11:55:59 11:55:59 def _new_conn(self) -> socket.socket: 11:55:59 """Establish a socket connection and set nodelay settings on it. 11:55:59 11:55:59 :return: New socket connection. 11:55:59 """ 11:55:59 try: 11:55:59 > sock = connection.create_connection( 11:55:59 (self._dns_host, self.port), 11:55:59 self.timeout, 11:55:59 source_address=self.source_address, 11:55:59 socket_options=self.socket_options, 11:55:59 ) 11:55:59 11:55:59 ../.tox/tests_tapi/lib/python3.11/site-packages/urllib3/connection.py:198: 11:55:59 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 11:55:59 ../.tox/tests_tapi/lib/python3.11/site-packages/urllib3/util/connection.py:85: in create_connection 11:55:59 raise err 11:55:59 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 11:55:59 11:55:59 address = ('localhost', 8183), timeout = 30, source_address = None 11:55:59 socket_options = [(6, 1, 1)] 11:55:59 11:55:59 def create_connection( 11:55:59 address: tuple[str, int], 11:55:59 timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 11:55:59 source_address: tuple[str, int] | None = None, 11:55:59 socket_options: _TYPE_SOCKET_OPTIONS | None = None, 11:55:59 ) -> socket.socket: 11:55:59 """Connect to *address* and return the socket object. 11:55:59 11:55:59 Convenience function. Connect to *address* (a 2-tuple ``(host, 11:55:59 port)``) and return the socket object. Passing the optional 11:55:59 *timeout* parameter will set the timeout on the socket instance 11:55:59 before attempting to connect. If no *timeout* is supplied, the 11:55:59 global default timeout setting returned by :func:`socket.getdefaulttimeout` 11:55:59 is used. If *source_address* is set it must be a tuple of (host, port) 11:55:59 for the socket to bind as a source address before making the connection. 11:55:59 An host of '' or port 0 tells the OS to use the default. 11:55:59 """ 11:55:59 11:55:59 host, port = address 11:55:59 if host.startswith("["): 11:55:59 host = host.strip("[]") 11:55:59 err = None 11:55:59 11:55:59 # Using the value from allowed_gai_family() in the context of getaddrinfo lets 11:55:59 # us select whether to work with IPv4 DNS records, IPv6 records, or both. 11:55:59 # The original create_connection function always returns all records. 11:55:59 family = allowed_gai_family() 11:55:59 11:55:59 try: 11:55:59 host.encode("idna") 11:55:59 except UnicodeError: 11:55:59 raise LocationParseError(f"'{host}', label empty or too long") from None 11:55:59 11:55:59 for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM): 11:55:59 af, socktype, proto, canonname, sa = res 11:55:59 sock = None 11:55:59 try: 11:55:59 sock = socket.socket(af, socktype, proto) 11:55:59 11:55:59 # If provided, set socket level options before connecting. 
11:55:59 _set_socket_options(sock, socket_options) 11:55:59 11:55:59 if timeout is not _DEFAULT_TIMEOUT: 11:55:59 sock.settimeout(timeout) 11:55:59 if source_address: 11:55:59 sock.bind(source_address) 11:55:59 > sock.connect(sa) 11:55:59 E ConnectionRefusedError: [Errno 111] Connection refused 11:55:59 11:55:59 ../.tox/tests_tapi/lib/python3.11/site-packages/urllib3/util/connection.py:73: ConnectionRefusedError 11:55:59 11:55:59 The above exception was the direct cause of the following exception: 11:55:59 11:55:59 self = 11:55:59 method = 'PUT' 11:55:59 url = '/rests/data/network-topology:network-topology/topology=topology-netconf/node=XPDR-C1' 11:55:59 body = '{"node": [{"node-id": "XPDR-C1", "netconf-node-topology:netconf-node": {"netconf-node-topology:host": "127.0.0.1", "n...ff-millis": 1800000, "netconf-node-topology:backoff-multiplier": 1.5, "netconf-node-topology:keepalive-delay": 120}}]}' 11:55:59 headers = {'User-Agent': 'python-requests/2.32.4', 'Accept-Encoding': 'gzip, deflate', 'Accept': 'application/json', 'Connection': 'keep-alive', 'Content-Type': 'application/json', 'Content-Length': '709', 'Authorization': 'Basic YWRtaW46YWRtaW4='} 11:55:59 retries = Retry(total=0, connect=None, read=False, redirect=None, status=None) 11:55:59 redirect = False, assert_same_host = False 11:55:59 timeout = Timeout(connect=30, read=30, total=None), pool_timeout = None 11:55:59 release_conn = False, chunked = False, body_pos = None, preload_content = False 11:55:59 decode_content = False, response_kw = {} 11:55:59 parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/rests/data/network-topology:network-topology/topology=topology-netconf/node=XPDR-C1', query=None, fragment=None) 11:55:59 destination_scheme = None, conn = None, release_this_conn = True 11:55:59 http_tunnel_required = False, err = None, clean_exit = False 11:55:59 11:55:59 def urlopen( # type: ignore[override] 11:55:59 self, 11:55:59 method: str, 11:55:59 url: str, 11:55:59 body: _TYPE_BODY | None = None, 11:55:59 headers: typing.Mapping[str, str] | None = None, 11:55:59 retries: Retry | bool | int | None = None, 11:55:59 redirect: bool = True, 11:55:59 assert_same_host: bool = True, 11:55:59 timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 11:55:59 pool_timeout: int | None = None, 11:55:59 release_conn: bool | None = None, 11:55:59 chunked: bool = False, 11:55:59 body_pos: _TYPE_BODY_POSITION | None = None, 11:55:59 preload_content: bool = True, 11:55:59 decode_content: bool = True, 11:55:59 **response_kw: typing.Any, 11:55:59 ) -> BaseHTTPResponse: 11:55:59 """ 11:55:59 Get a connection from the pool and perform an HTTP request. This is the 11:55:59 lowest level call for making a request, so you'll need to specify all 11:55:59 the raw details. 11:55:59 11:55:59 .. note:: 11:55:59 11:55:59 More commonly, it's appropriate to use a convenience method 11:55:59 such as :meth:`request`. 11:55:59 11:55:59 .. note:: 11:55:59 11:55:59 `release_conn` will only behave as expected if 11:55:59 `preload_content=False` because we want to make 11:55:59 `preload_content=False` the default behaviour someday soon without 11:55:59 breaking backwards compatibility. 11:55:59 11:55:59 :param method: 11:55:59 HTTP request method (such as GET, POST, PUT, etc.) 11:55:59 11:55:59 :param url: 11:55:59 The URL to perform the request on. 11:55:59 11:55:59 :param body: 11:55:59 Data to send in the request body, either :class:`str`, :class:`bytes`, 11:55:59 an iterable of :class:`str`/:class:`bytes`, or a file-like object. 
11:55:59 11:55:59 :param headers: 11:55:59 Dictionary of custom headers to send, such as User-Agent, 11:55:59 If-None-Match, etc. If None, pool headers are used. If provided, 11:55:59 these headers completely replace any pool-specific headers. 11:55:59 11:55:59 :param retries: 11:55:59 Configure the number of retries to allow before raising a 11:55:59 :class:`~urllib3.exceptions.MaxRetryError` exception. 11:55:59 11:55:59 If ``None`` (default) will retry 3 times, see ``Retry.DEFAULT``. Pass a 11:55:59 :class:`~urllib3.util.retry.Retry` object for fine-grained control 11:55:59 over different types of retries. 11:55:59 Pass an integer number to retry connection errors that many times, 11:55:59 but no other types of errors. Pass zero to never retry. 11:55:59 11:55:59 If ``False``, then retries are disabled and any exception is raised 11:55:59 immediately. Also, instead of raising a MaxRetryError on redirects, 11:55:59 the redirect response will be returned. 11:55:59 11:55:59 :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. 11:55:59 11:55:59 :param redirect: 11:55:59 If True, automatically handle redirects (status codes 301, 302, 11:55:59 303, 307, 308). Each redirect counts as a retry. Disabling retries 11:55:59 will disable redirect, too. 11:55:59 11:55:59 :param assert_same_host: 11:55:59 If ``True``, will make sure that the host of the pool requests is 11:55:59 consistent else will raise HostChangedError. When ``False``, you can 11:55:59 use the pool on an HTTP proxy and request foreign hosts. 11:55:59 11:55:59 :param timeout: 11:55:59 If specified, overrides the default timeout for this one 11:55:59 request. It may be a float (in seconds) or an instance of 11:55:59 :class:`urllib3.util.Timeout`. 11:55:59 11:55:59 :param pool_timeout: 11:55:59 If set and the pool is set to block=True, then this method will 11:55:59 block for ``pool_timeout`` seconds and raise EmptyPoolError if no 11:55:59 connection is available within the time period. 11:55:59 11:55:59 :param bool preload_content: 11:55:59 If True, the response's body will be preloaded into memory. 11:55:59 11:55:59 :param bool decode_content: 11:55:59 If True, will attempt to decode the body based on the 11:55:59 'content-encoding' header. 11:55:59 11:55:59 :param release_conn: 11:55:59 If False, then the urlopen call will not release the connection 11:55:59 back into the pool once a response is received (but will release if 11:55:59 you read the entire contents of the response such as when 11:55:59 `preload_content=True`). This is useful if you're not preloading 11:55:59 the response's content immediately. You will need to call 11:55:59 ``r.release_conn()`` on the response ``r`` to return the connection 11:55:59 back into the pool. If None, it takes the value of ``preload_content`` 11:55:59 which defaults to ``True``. 11:55:59 11:55:59 :param bool chunked: 11:55:59 If True, urllib3 will send the body using chunked transfer 11:55:59 encoding. Otherwise, urllib3 will send the body using the standard 11:55:59 content-length form. Defaults to False. 11:55:59 11:55:59 :param int body_pos: 11:55:59 Position to seek to in file-like body in the event of a retry or 11:55:59 redirect. Typically this won't need to be set because urllib3 will 11:55:59 auto-populate the value when needed. 
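test_15 trips over the same refused connection while trying to mount XPDR-C1: test_utils.mount_device() goes through put_request() and PUTs the netconf-node configuration onto the topology-netconf node resource shown in the captured parameters. A rough outline of that call; the real payload is truncated in the log, so node_body below is only a skeleton:

    import requests

    node_url = ("http://localhost:8183/rests/data/network-topology:network-topology/"
                "topology=topology-netconf/node=XPDR-C1")
    # Skeleton only: the captured body also carries the netconf-node-topology:*
    # settings (host 127.0.0.1, keepalive-delay 120, ...) that are cut off above.
    node_body = {"node": [{"node-id": "XPDR-C1"}]}
    try:
        requests.request("PUT", node_url, json=node_body,
                         auth=("admin", "admin"), timeout=(30, 30))
    except requests.exceptions.ConnectionError as exc:
        print(exc)  # refused again: the RESTCONF endpoint is still unreachable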
11:55:59 """ 11:55:59 parsed_url = parse_url(url) 11:55:59 destination_scheme = parsed_url.scheme 11:55:59 11:55:59 if headers is None: 11:55:59 headers = self.headers 11:55:59 11:55:59 if not isinstance(retries, Retry): 11:55:59 retries = Retry.from_int(retries, redirect=redirect, default=self.retries) 11:55:59 11:55:59 if release_conn is None: 11:55:59 release_conn = preload_content 11:55:59 11:55:59 # Check host 11:55:59 if assert_same_host and not self.is_same_host(url): 11:55:59 raise HostChangedError(self, url, retries) 11:55:59 11:55:59 # Ensure that the URL we're connecting to is properly encoded 11:55:59 if url.startswith("/"): 11:55:59 url = to_str(_encode_target(url)) 11:55:59 else: 11:55:59 url = to_str(parsed_url.url) 11:55:59 11:55:59 conn = None 11:55:59 11:55:59 # Track whether `conn` needs to be released before 11:55:59 # returning/raising/recursing. Update this variable if necessary, and 11:55:59 # leave `release_conn` constant throughout the function. That way, if 11:55:59 # the function recurses, the original value of `release_conn` will be 11:55:59 # passed down into the recursive call, and its value will be respected. 11:55:59 # 11:55:59 # See issue #651 [1] for details. 11:55:59 # 11:55:59 # [1] 11:55:59 release_this_conn = release_conn 11:55:59 11:55:59 http_tunnel_required = connection_requires_http_tunnel( 11:55:59 self.proxy, self.proxy_config, destination_scheme 11:55:59 ) 11:55:59 11:55:59 # Merge the proxy headers. Only done when not using HTTP CONNECT. We 11:55:59 # have to copy the headers dict so we can safely change it without those 11:55:59 # changes being reflected in anyone else's copy. 11:55:59 if not http_tunnel_required: 11:55:59 headers = headers.copy() # type: ignore[attr-defined] 11:55:59 headers.update(self.proxy_headers) # type: ignore[union-attr] 11:55:59 11:55:59 # Must keep the exception bound to a separate variable or else Python 3 11:55:59 # complains about UnboundLocalError. 11:55:59 err = None 11:55:59 11:55:59 # Keep track of whether we cleanly exited the except block. This 11:55:59 # ensures we do proper cleanup in finally. 11:55:59 clean_exit = False 11:55:59 11:55:59 # Rewind body position, if needed. Record current position 11:55:59 # for future rewinds in the event of a redirect/retry. 11:55:59 body_pos = set_file_position(body, body_pos) 11:55:59 11:55:59 try: 11:55:59 # Request a connection from the queue. 11:55:59 timeout_obj = self._get_timeout(timeout) 11:55:59 conn = self._get_conn(timeout=pool_timeout) 11:55:59 11:55:59 conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] 11:55:59 11:55:59 # Is this a closed/new connection that requires CONNECT tunnelling? 11:55:59 if self.proxy is not None and http_tunnel_required and conn.is_closed: 11:55:59 try: 11:55:59 self._prepare_proxy(conn) 11:55:59 except (BaseSSLError, OSError, SocketTimeout) as e: 11:55:59 self._raise_timeout( 11:55:59 err=e, url=self.proxy.url, timeout_value=conn.timeout 11:55:59 ) 11:55:59 raise 11:55:59 11:55:59 # If we're going to release the connection in ``finally:``, then 11:55:59 # the response doesn't need to know about the connection. Otherwise 11:55:59 # it will also try to release it and we'll have a double-release 11:55:59 # mess. 
11:55:59 response_conn = conn if not release_conn else None 11:55:59 11:55:59 # Make the request on the HTTPConnection object 11:55:59 > response = self._make_request( 11:55:59 conn, 11:55:59 method, 11:55:59 url, 11:55:59 timeout=timeout_obj, 11:55:59 body=body, 11:55:59 headers=headers, 11:55:59 chunked=chunked, 11:55:59 retries=retries, 11:55:59 response_conn=response_conn, 11:55:59 preload_content=preload_content, 11:55:59 decode_content=decode_content, 11:55:59 **response_kw, 11:55:59 ) 11:55:59 11:55:59 ../.tox/tests_tapi/lib/python3.11/site-packages/urllib3/connectionpool.py:787: 11:55:59 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 11:55:59 ../.tox/tests_tapi/lib/python3.11/site-packages/urllib3/connectionpool.py:493: in _make_request 11:55:59 conn.request( 11:55:59 ../.tox/tests_tapi/lib/python3.11/site-packages/urllib3/connection.py:494: in request 11:55:59 self.endheaders() 11:55:59 /opt/pyenv/versions/3.11.10/lib/python3.11/http/client.py:1298: in endheaders 11:55:59 self._send_output(message_body, encode_chunked=encode_chunked) 11:55:59 /opt/pyenv/versions/3.11.10/lib/python3.11/http/client.py:1058: in _send_output 11:55:59 self.send(msg) 11:55:59 /opt/pyenv/versions/3.11.10/lib/python3.11/http/client.py:996: in send 11:55:59 self.connect() 11:55:59 ../.tox/tests_tapi/lib/python3.11/site-packages/urllib3/connection.py:325: in connect 11:55:59 self.sock = self._new_conn() 11:55:59 ^^^^^^^^^^^^^^^^ 11:55:59 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 11:55:59 11:55:59 self = 11:55:59 11:55:59 def _new_conn(self) -> socket.socket: 11:55:59 """Establish a socket connection and set nodelay settings on it. 11:55:59 11:55:59 :return: New socket connection. 11:55:59 """ 11:55:59 try: 11:55:59 sock = connection.create_connection( 11:55:59 (self._dns_host, self.port), 11:55:59 self.timeout, 11:55:59 source_address=self.source_address, 11:55:59 socket_options=self.socket_options, 11:55:59 ) 11:55:59 except socket.gaierror as e: 11:55:59 raise NameResolutionError(self.host, self, e) from e 11:55:59 except SocketTimeout as e: 11:55:59 raise ConnectTimeoutError( 11:55:59 self, 11:55:59 f"Connection to {self.host} timed out. (connect timeout={self.timeout})", 11:55:59 ) from e 11:55:59 11:55:59 except OSError as e: 11:55:59 > raise NewConnectionError( 11:55:59 self, f"Failed to establish a new connection: {e}" 11:55:59 ) from e 11:55:59 E urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused 11:55:59 11:55:59 ../.tox/tests_tapi/lib/python3.11/site-packages/urllib3/connection.py:213: NewConnectionError 11:55:59 11:55:59 The above exception was the direct cause of the following exception: 11:55:59 11:55:59 self = 11:55:59 request = , stream = False 11:55:59 timeout = Timeout(connect=30, read=30, total=None), verify = True, cert = None 11:55:59 proxies = OrderedDict() 11:55:59 11:55:59 def send( 11:55:59 self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 11:55:59 ): 11:55:59 """Sends PreparedRequest object. Returns Response object. 11:55:59 11:55:59 :param request: The :class:`PreparedRequest ` being sent. 11:55:59 :param stream: (optional) Whether to stream the request content. 11:55:59 :param timeout: (optional) How long to wait for the server to send 11:55:59 data before giving up, as a float, or a :ref:`(connect timeout, 11:55:59 read timeout) ` tuple. 
11:55:59 :type timeout: float or tuple or urllib3 Timeout object 11:55:59 :param verify: (optional) Either a boolean, in which case it controls whether 11:55:59 we verify the server's TLS certificate, or a string, in which case it 11:55:59 must be a path to a CA bundle to use 11:55:59 :param cert: (optional) Any user-provided SSL certificate to be trusted. 11:55:59 :param proxies: (optional) The proxies dictionary to apply to the request. 11:55:59 :rtype: requests.Response 11:55:59 """ 11:55:59 11:55:59 try: 11:55:59 conn = self.get_connection_with_tls_context( 11:55:59 request, verify, proxies=proxies, cert=cert 11:55:59 ) 11:55:59 except LocationValueError as e: 11:55:59 raise InvalidURL(e, request=request) 11:55:59 11:55:59 self.cert_verify(conn, request.url, verify, cert) 11:55:59 url = self.request_url(request, proxies) 11:55:59 self.add_headers( 11:55:59 request, 11:55:59 stream=stream, 11:55:59 timeout=timeout, 11:55:59 verify=verify, 11:55:59 cert=cert, 11:55:59 proxies=proxies, 11:55:59 ) 11:55:59 11:55:59 chunked = not (request.body is None or "Content-Length" in request.headers) 11:55:59 11:55:59 if isinstance(timeout, tuple): 11:55:59 try: 11:55:59 connect, read = timeout 11:55:59 timeout = TimeoutSauce(connect=connect, read=read) 11:55:59 except ValueError: 11:55:59 raise ValueError( 11:55:59 f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 11:55:59 f"or a single float to set both timeouts to the same value." 11:55:59 ) 11:55:59 elif isinstance(timeout, TimeoutSauce): 11:55:59 pass 11:55:59 else: 11:55:59 timeout = TimeoutSauce(connect=timeout, read=timeout) 11:55:59 11:55:59 try: 11:55:59 > resp = conn.urlopen( 11:55:59 method=request.method, 11:55:59 url=url, 11:55:59 body=request.body, 11:55:59 headers=request.headers, 11:55:59 redirect=False, 11:55:59 assert_same_host=False, 11:55:59 preload_content=False, 11:55:59 decode_content=False, 11:55:59 retries=self.max_retries, 11:55:59 timeout=timeout, 11:55:59 chunked=chunked, 11:55:59 ) 11:55:59 11:55:59 ../.tox/tests_tapi/lib/python3.11/site-packages/requests/adapters.py:667: 11:55:59 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 11:55:59 ../.tox/tests_tapi/lib/python3.11/site-packages/urllib3/connectionpool.py:841: in urlopen 11:55:59 retries = retries.increment( 11:55:59 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 11:55:59 11:55:59 self = Retry(total=0, connect=None, read=False, redirect=None, status=None) 11:55:59 method = 'PUT' 11:55:59 url = '/rests/data/network-topology:network-topology/topology=topology-netconf/node=XPDR-C1' 11:55:59 response = None 11:55:59 error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused') 11:55:59 _pool = 11:55:59 _stacktrace = 11:55:59 11:55:59 def increment( 11:55:59 self, 11:55:59 method: str | None = None, 11:55:59 url: str | None = None, 11:55:59 response: BaseHTTPResponse | None = None, 11:55:59 error: Exception | None = None, 11:55:59 _pool: ConnectionPool | None = None, 11:55:59 _stacktrace: TracebackType | None = None, 11:55:59 ) -> Self: 11:55:59 """Return a new Retry object with incremented retry counters. 11:55:59 11:55:59 :param response: A response object, or None, if the server did not 11:55:59 return a response. 11:55:59 :type response: :class:`~urllib3.response.BaseHTTPResponse` 11:55:59 :param Exception error: An error encountered during the request, or 11:55:59 None if the response was received successfully. 
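If the problem were transient connection drops rather than an endpoint that is never reachable, the counters documented here could be raised by mounting a custom Retry on the requests session. A hypothetical sketch (this is not what test_utils.py does):

    import requests
    from requests.adapters import HTTPAdapter
    from urllib3.util.retry import Retry

    session = requests.Session()
    retry = Retry(total=5, connect=5, backoff_factor=0.5,
                  allowed_methods=frozenset({"GET", "PUT", "POST"}))
    session.mount("http://", HTTPAdapter(max_retries=retry))

It would not change the outcome of this run, since every attempt against localhost:8183 is refused, but it shows how total/connect/read/redirect/status map onto the increment() logic above.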
11:55:59 11:55:59 :return: A new ``Retry`` object. 11:55:59 """ 11:55:59 if self.total is False and error: 11:55:59 # Disabled, indicate to re-raise the error. 11:55:59 raise reraise(type(error), error, _stacktrace) 11:55:59 11:55:59 total = self.total 11:55:59 if total is not None: 11:55:59 total -= 1 11:55:59 11:55:59 connect = self.connect 11:55:59 read = self.read 11:55:59 redirect = self.redirect 11:55:59 status_count = self.status 11:55:59 other = self.other 11:55:59 cause = "unknown" 11:55:59 status = None 11:55:59 redirect_location = None 11:55:59 11:55:59 if error and self._is_connection_error(error): 11:55:59 # Connect retry? 11:55:59 if connect is False: 11:55:59 raise reraise(type(error), error, _stacktrace) 11:55:59 elif connect is not None: 11:55:59 connect -= 1 11:55:59 11:55:59 elif error and self._is_read_error(error): 11:55:59 # Read retry? 11:55:59 if read is False or method is None or not self._is_method_retryable(method): 11:55:59 raise reraise(type(error), error, _stacktrace) 11:55:59 elif read is not None: 11:55:59 read -= 1 11:55:59 11:55:59 elif error: 11:55:59 # Other retry? 11:55:59 if other is not None: 11:55:59 other -= 1 11:55:59 11:55:59 elif response and response.get_redirect_location(): 11:55:59 # Redirect retry? 11:55:59 if redirect is not None: 11:55:59 redirect -= 1 11:55:59 cause = "too many redirects" 11:55:59 response_redirect_location = response.get_redirect_location() 11:55:59 if response_redirect_location: 11:55:59 redirect_location = response_redirect_location 11:55:59 status = response.status 11:55:59 11:55:59 else: 11:55:59 # Incrementing because of a server error like a 500 in 11:55:59 # status_forcelist and the given method is in the allowed_methods 11:55:59 cause = ResponseError.GENERIC_ERROR 11:55:59 if response and response.status: 11:55:59 if status_count is not None: 11:55:59 status_count -= 1 11:55:59 cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status) 11:55:59 status = response.status 11:55:59 11:55:59 history = self.history + ( 11:55:59 RequestHistory(method, url, error, status, redirect_location), 11:55:59 ) 11:55:59 11:55:59 new_retry = self.new( 11:55:59 total=total, 11:55:59 connect=connect, 11:55:59 read=read, 11:55:59 redirect=redirect, 11:55:59 status=status_count, 11:55:59 other=other, 11:55:59 history=history, 11:55:59 ) 11:55:59 11:55:59 if new_retry.is_exhausted(): 11:55:59 reason = error or ResponseError(cause) 11:55:59 > raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type] 11:55:59 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 11:55:59 E urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=8183): Max retries exceeded with url: /rests/data/network-topology:network-topology/topology=topology-netconf/node=XPDR-C1 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 11:55:59 11:55:59 ../.tox/tests_tapi/lib/python3.11/site-packages/urllib3/util/retry.py:519: MaxRetryError 11:55:59 11:55:59 During handling of the above exception, another exception occurred: 11:55:59 11:55:59 self = 11:55:59 11:55:59 def test_15_connect_xpdrc(self): 11:55:59 > response = test_utils.mount_device("XPDR-C1", ('xpdrc', self.NODE_VERSION)) 11:55:59 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 11:55:59 11:55:59 transportpce_tests/tapi/test01_abstracted_topology.py:317: 11:55:59 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 11:55:59 transportpce_tests/common/test_utils.py:362: 
in mount_device 11:55:59 response = put_request(url[RESTCONF_VERSION].format('{}', node), body) 11:55:59 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 11:55:59 transportpce_tests/common/test_utils.py:124: in put_request 11:55:59 return requests.request( 11:55:59 ../.tox/tests_tapi/lib/python3.11/site-packages/requests/api.py:59: in request 11:55:59 return session.request(method=method, url=url, **kwargs) 11:55:59 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 11:55:59 ../.tox/tests_tapi/lib/python3.11/site-packages/requests/sessions.py:589: in request 11:55:59 resp = self.send(prep, **send_kwargs) 11:55:59 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 11:55:59 ../.tox/tests_tapi/lib/python3.11/site-packages/requests/sessions.py:703: in send 11:55:59 r = adapter.send(request, **kwargs) 11:55:59 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 11:55:59 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 11:55:59 11:55:59 self = 11:55:59 request = , stream = False 11:55:59 timeout = Timeout(connect=30, read=30, total=None), verify = True, cert = None 11:55:59 proxies = OrderedDict() 11:55:59 11:55:59 def send( 11:55:59 self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 11:55:59 ): 11:55:59 """Sends PreparedRequest object. Returns Response object. 11:55:59 11:55:59 :param request: The :class:`PreparedRequest ` being sent. 11:55:59 :param stream: (optional) Whether to stream the request content. 11:55:59 :param timeout: (optional) How long to wait for the server to send 11:55:59 data before giving up, as a float, or a :ref:`(connect timeout, 11:55:59 read timeout) ` tuple. 11:55:59 :type timeout: float or tuple or urllib3 Timeout object 11:55:59 :param verify: (optional) Either a boolean, in which case it controls whether 11:55:59 we verify the server's TLS certificate, or a string, in which case it 11:55:59 must be a path to a CA bundle to use 11:55:59 :param cert: (optional) Any user-provided SSL certificate to be trusted. 11:55:59 :param proxies: (optional) The proxies dictionary to apply to the request. 11:55:59 :rtype: requests.Response 11:55:59 """ 11:55:59 11:55:59 try: 11:55:59 conn = self.get_connection_with_tls_context( 11:55:59 request, verify, proxies=proxies, cert=cert 11:55:59 ) 11:55:59 except LocationValueError as e: 11:55:59 raise InvalidURL(e, request=request) 11:55:59 11:55:59 self.cert_verify(conn, request.url, verify, cert) 11:55:59 url = self.request_url(request, proxies) 11:55:59 self.add_headers( 11:55:59 request, 11:55:59 stream=stream, 11:55:59 timeout=timeout, 11:55:59 verify=verify, 11:55:59 cert=cert, 11:55:59 proxies=proxies, 11:55:59 ) 11:55:59 11:55:59 chunked = not (request.body is None or "Content-Length" in request.headers) 11:55:59 11:55:59 if isinstance(timeout, tuple): 11:55:59 try: 11:55:59 connect, read = timeout 11:55:59 timeout = TimeoutSauce(connect=connect, read=read) 11:55:59 except ValueError: 11:55:59 raise ValueError( 11:55:59 f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 11:55:59 f"or a single float to set both timeouts to the same value." 
11:55:59 ) 11:55:59 elif isinstance(timeout, TimeoutSauce): 11:55:59 pass 11:55:59 else: 11:55:59 timeout = TimeoutSauce(connect=timeout, read=timeout) 11:55:59 11:55:59 try: 11:55:59 resp = conn.urlopen( 11:55:59 method=request.method, 11:55:59 url=url, 11:55:59 body=request.body, 11:55:59 headers=request.headers, 11:55:59 redirect=False, 11:55:59 assert_same_host=False, 11:55:59 preload_content=False, 11:55:59 decode_content=False, 11:55:59 retries=self.max_retries, 11:55:59 timeout=timeout, 11:55:59 chunked=chunked, 11:55:59 ) 11:55:59 11:55:59 except (ProtocolError, OSError) as err: 11:55:59 raise ConnectionError(err, request=request) 11:55:59 11:55:59 except MaxRetryError as e: 11:55:59 if isinstance(e.reason, ConnectTimeoutError): 11:55:59 # TODO: Remove this in 3.0.0: see #2811 11:55:59 if not isinstance(e.reason, NewConnectionError): 11:55:59 raise ConnectTimeout(e, request=request) 11:55:59 11:55:59 if isinstance(e.reason, ResponseError): 11:55:59 raise RetryError(e, request=request) 11:55:59 11:55:59 if isinstance(e.reason, _ProxyError): 11:55:59 raise ProxyError(e, request=request) 11:55:59 11:55:59 if isinstance(e.reason, _SSLError): 11:55:59 # This branch is for urllib3 v1.22 and later. 11:55:59 raise SSLError(e, request=request) 11:55:59 11:55:59 > raise ConnectionError(e, request=request) 11:55:59 E requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=8183): Max retries exceeded with url: /rests/data/network-topology:network-topology/topology=topology-netconf/node=XPDR-C1 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 11:55:59 11:55:59 ../.tox/tests_tapi/lib/python3.11/site-packages/requests/adapters.py:700: ConnectionError 11:55:59 ----------------------------- Captured stdout call ----------------------------- 11:55:59 execution of test_15_connect_xpdrc 11:55:59 _________ TransportTapitesting.test_16_connect_xpdrc_n1_to_roadmc_pp1 __________ 11:55:59 11:55:59 self = 11:55:59 11:55:59 def _new_conn(self) -> socket.socket: 11:55:59 """Establish a socket connection and set nodelay settings on it. 11:55:59 11:55:59 :return: New socket connection. 11:55:59 """ 11:55:59 try: 11:55:59 > sock = connection.create_connection( 11:55:59 (self._dns_host, self.port), 11:55:59 self.timeout, 11:55:59 source_address=self.source_address, 11:55:59 socket_options=self.socket_options, 11:55:59 ) 11:55:59 11:55:59 ../.tox/tests_tapi/lib/python3.11/site-packages/urllib3/connection.py:198: 11:55:59 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 11:55:59 ../.tox/tests_tapi/lib/python3.11/site-packages/urllib3/util/connection.py:85: in create_connection 11:55:59 raise err 11:55:59 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 11:55:59 11:55:59 address = ('localhost', 8183), timeout = 30, source_address = None 11:55:59 socket_options = [(6, 1, 1)] 11:55:59 11:55:59 def create_connection( 11:55:59 address: tuple[str, int], 11:55:59 timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 11:55:59 source_address: tuple[str, int] | None = None, 11:55:59 socket_options: _TYPE_SOCKET_OPTIONS | None = None, 11:55:59 ) -> socket.socket: 11:55:59 """Connect to *address* and return the socket object. 11:55:59 11:55:59 Convenience function. Connect to *address* (a 2-tuple ``(host, 11:55:59 port)``) and return the socket object. Passing the optional 11:55:59 *timeout* parameter will set the timeout on the socket instance 11:55:59 before attempting to connect. 
If no *timeout* is supplied, the 11:55:59 global default timeout setting returned by :func:`socket.getdefaulttimeout` 11:55:59 is used. If *source_address* is set it must be a tuple of (host, port) 11:55:59 for the socket to bind as a source address before making the connection. 11:55:59 An host of '' or port 0 tells the OS to use the default. 11:55:59 """ 11:55:59 11:55:59 host, port = address 11:55:59 if host.startswith("["): 11:55:59 host = host.strip("[]") 11:55:59 err = None 11:55:59 11:55:59 # Using the value from allowed_gai_family() in the context of getaddrinfo lets 11:55:59 # us select whether to work with IPv4 DNS records, IPv6 records, or both. 11:55:59 # The original create_connection function always returns all records. 11:55:59 family = allowed_gai_family() 11:55:59 11:55:59 try: 11:55:59 host.encode("idna") 11:55:59 except UnicodeError: 11:55:59 raise LocationParseError(f"'{host}', label empty or too long") from None 11:55:59 11:55:59 for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM): 11:55:59 af, socktype, proto, canonname, sa = res 11:55:59 sock = None 11:55:59 try: 11:55:59 sock = socket.socket(af, socktype, proto) 11:55:59 11:55:59 # If provided, set socket level options before connecting. 11:55:59 _set_socket_options(sock, socket_options) 11:55:59 11:55:59 if timeout is not _DEFAULT_TIMEOUT: 11:55:59 sock.settimeout(timeout) 11:55:59 if source_address: 11:55:59 sock.bind(source_address) 11:55:59 > sock.connect(sa) 11:55:59 E ConnectionRefusedError: [Errno 111] Connection refused 11:55:59 11:55:59 ../.tox/tests_tapi/lib/python3.11/site-packages/urllib3/util/connection.py:73: ConnectionRefusedError 11:55:59 11:55:59 The above exception was the direct cause of the following exception: 11:55:59 11:55:59 self = 11:55:59 method = 'POST' 11:55:59 url = '/rests/operations/transportpce-networkutils:init-xpdr-rdm-links' 11:55:59 body = '{"input": {"links-input": {"xpdr-node": "XPDR-C1", "xpdr-num": "1", "network-num": "1", "rdm-node": "ROADM-C1", "srg-num": "1", "termination-point-num": "SRG1-PP1-TXRX"}}}' 11:55:59 headers = {'User-Agent': 'python-requests/2.32.4', 'Accept-Encoding': 'gzip, deflate', 'Accept': 'application/json', 'Connection': 'keep-alive', 'Content-Type': 'application/json', 'Content-Length': '171', 'Authorization': 'Basic YWRtaW46YWRtaW4='} 11:55:59 retries = Retry(total=0, connect=None, read=False, redirect=None, status=None) 11:55:59 redirect = False, assert_same_host = False 11:55:59 timeout = Timeout(connect=30, read=30, total=None), pool_timeout = None 11:55:59 release_conn = False, chunked = False, body_pos = None, preload_content = False 11:55:59 decode_content = False, response_kw = {} 11:55:59 parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/rests/operations/transportpce-networkutils:init-xpdr-rdm-links', query=None, fragment=None) 11:55:59 destination_scheme = None, conn = None, release_this_conn = True 11:55:59 http_tunnel_required = False, err = None, clean_exit = False 11:55:59 11:55:59 def urlopen( # type: ignore[override] 11:55:59 self, 11:55:59 method: str, 11:55:59 url: str, 11:55:59 body: _TYPE_BODY | None = None, 11:55:59 headers: typing.Mapping[str, str] | None = None, 11:55:59 retries: Retry | bool | int | None = None, 11:55:59 redirect: bool = True, 11:55:59 assert_same_host: bool = True, 11:55:59 timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 11:55:59 pool_timeout: int | None = None, 11:55:59 release_conn: bool | None = None, 11:55:59 chunked: bool = False, 11:55:59 body_pos: _TYPE_BODY_POSITION | 
None = None, 11:55:59 preload_content: bool = True, 11:55:59 decode_content: bool = True, 11:55:59 **response_kw: typing.Any, 11:55:59 ) -> BaseHTTPResponse: 11:55:59 """ 11:55:59 Get a connection from the pool and perform an HTTP request. This is the 11:55:59 lowest level call for making a request, so you'll need to specify all 11:55:59 the raw details. 11:55:59 11:55:59 .. note:: 11:55:59 11:55:59 More commonly, it's appropriate to use a convenience method 11:55:59 such as :meth:`request`. 11:55:59 11:55:59 .. note:: 11:55:59 11:55:59 `release_conn` will only behave as expected if 11:55:59 `preload_content=False` because we want to make 11:55:59 `preload_content=False` the default behaviour someday soon without 11:55:59 breaking backwards compatibility. 11:55:59 11:55:59 :param method: 11:55:59 HTTP request method (such as GET, POST, PUT, etc.) 11:55:59 11:55:59 :param url: 11:55:59 The URL to perform the request on. 11:55:59 11:55:59 :param body: 11:55:59 Data to send in the request body, either :class:`str`, :class:`bytes`, 11:55:59 an iterable of :class:`str`/:class:`bytes`, or a file-like object. 11:55:59 11:55:59 :param headers: 11:55:59 Dictionary of custom headers to send, such as User-Agent, 11:55:59 If-None-Match, etc. If None, pool headers are used. If provided, 11:55:59 these headers completely replace any pool-specific headers. 11:55:59 11:55:59 :param retries: 11:55:59 Configure the number of retries to allow before raising a 11:55:59 :class:`~urllib3.exceptions.MaxRetryError` exception. 11:55:59 11:55:59 If ``None`` (default) will retry 3 times, see ``Retry.DEFAULT``. Pass a 11:55:59 :class:`~urllib3.util.retry.Retry` object for fine-grained control 11:55:59 over different types of retries. 11:55:59 Pass an integer number to retry connection errors that many times, 11:55:59 but no other types of errors. Pass zero to never retry. 11:55:59 11:55:59 If ``False``, then retries are disabled and any exception is raised 11:55:59 immediately. Also, instead of raising a MaxRetryError on redirects, 11:55:59 the redirect response will be returned. 11:55:59 11:55:59 :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. 11:55:59 11:55:59 :param redirect: 11:55:59 If True, automatically handle redirects (status codes 301, 302, 11:55:59 303, 307, 308). Each redirect counts as a retry. Disabling retries 11:55:59 will disable redirect, too. 11:55:59 11:55:59 :param assert_same_host: 11:55:59 If ``True``, will make sure that the host of the pool requests is 11:55:59 consistent else will raise HostChangedError. When ``False``, you can 11:55:59 use the pool on an HTTP proxy and request foreign hosts. 11:55:59 11:55:59 :param timeout: 11:55:59 If specified, overrides the default timeout for this one 11:55:59 request. It may be a float (in seconds) or an instance of 11:55:59 :class:`urllib3.util.Timeout`. 11:55:59 11:55:59 :param pool_timeout: 11:55:59 If set and the pool is set to block=True, then this method will 11:55:59 block for ``pool_timeout`` seconds and raise EmptyPoolError if no 11:55:59 connection is available within the time period. 11:55:59 11:55:59 :param bool preload_content: 11:55:59 If True, the response's body will be preloaded into memory. 11:55:59 11:55:59 :param bool decode_content: 11:55:59 If True, will attempt to decode the body based on the 11:55:59 'content-encoding' header. 
11:55:59 11:55:59 :param release_conn: 11:55:59 If False, then the urlopen call will not release the connection 11:55:59 back into the pool once a response is received (but will release if 11:55:59 you read the entire contents of the response such as when 11:55:59 `preload_content=True`). This is useful if you're not preloading 11:55:59 the response's content immediately. You will need to call 11:55:59 ``r.release_conn()`` on the response ``r`` to return the connection 11:55:59 back into the pool. If None, it takes the value of ``preload_content`` 11:55:59 which defaults to ``True``. 11:55:59 11:55:59 :param bool chunked: 11:55:59 If True, urllib3 will send the body using chunked transfer 11:55:59 encoding. Otherwise, urllib3 will send the body using the standard 11:55:59 content-length form. Defaults to False. 11:55:59 11:55:59 :param int body_pos: 11:55:59 Position to seek to in file-like body in the event of a retry or 11:55:59 redirect. Typically this won't need to be set because urllib3 will 11:55:59 auto-populate the value when needed. 11:55:59 """ 11:55:59 parsed_url = parse_url(url) 11:55:59 destination_scheme = parsed_url.scheme 11:55:59 11:55:59 if headers is None: 11:55:59 headers = self.headers 11:55:59 11:55:59 if not isinstance(retries, Retry): 11:55:59 retries = Retry.from_int(retries, redirect=redirect, default=self.retries) 11:55:59 11:55:59 if release_conn is None: 11:55:59 release_conn = preload_content 11:55:59 11:55:59 # Check host 11:55:59 if assert_same_host and not self.is_same_host(url): 11:55:59 raise HostChangedError(self, url, retries) 11:55:59 11:55:59 # Ensure that the URL we're connecting to is properly encoded 11:55:59 if url.startswith("/"): 11:55:59 url = to_str(_encode_target(url)) 11:55:59 else: 11:55:59 url = to_str(parsed_url.url) 11:55:59 11:55:59 conn = None 11:55:59 11:55:59 # Track whether `conn` needs to be released before 11:55:59 # returning/raising/recursing. Update this variable if necessary, and 11:55:59 # leave `release_conn` constant throughout the function. That way, if 11:55:59 # the function recurses, the original value of `release_conn` will be 11:55:59 # passed down into the recursive call, and its value will be respected. 11:55:59 # 11:55:59 # See issue #651 [1] for details. 11:55:59 # 11:55:59 # [1] 11:55:59 release_this_conn = release_conn 11:55:59 11:55:59 http_tunnel_required = connection_requires_http_tunnel( 11:55:59 self.proxy, self.proxy_config, destination_scheme 11:55:59 ) 11:55:59 11:55:59 # Merge the proxy headers. Only done when not using HTTP CONNECT. We 11:55:59 # have to copy the headers dict so we can safely change it without those 11:55:59 # changes being reflected in anyone else's copy. 11:55:59 if not http_tunnel_required: 11:55:59 headers = headers.copy() # type: ignore[attr-defined] 11:55:59 headers.update(self.proxy_headers) # type: ignore[union-attr] 11:55:59 11:55:59 # Must keep the exception bound to a separate variable or else Python 3 11:55:59 # complains about UnboundLocalError. 11:55:59 err = None 11:55:59 11:55:59 # Keep track of whether we cleanly exited the except block. This 11:55:59 # ensures we do proper cleanup in finally. 11:55:59 clean_exit = False 11:55:59 11:55:59 # Rewind body position, if needed. Record current position 11:55:59 # for future rewinds in the event of a redirect/retry. 11:55:59 body_pos = set_file_position(body, body_pos) 11:55:59 11:55:59 try: 11:55:59 # Request a connection from the queue. 
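# ---- editor's sketch (not part of the captured urllib3 source above) ----
# The ``retries`` parameter documented in the docstring just quoted is what the
# requests adapter forwards as ``self.max_retries``.  A minimal, illustrative
# way to drive it from the caller side; the names below are assumptions and do
# not come from test_utils.py:
import requests
from requests.adapters import HTTPAdapter
from urllib3.util.retry import Retry

retry_policy = Retry(total=5, connect=5, backoff_factor=0.5,
                     status_forcelist=(502, 503, 504))
session = requests.Session()
session.mount("http://", HTTPAdapter(max_retries=retry_policy))
# session.get("http://localhost:8183/rests/data/...")  # would now retry
# connection errors with exponential backoff instead of failing immediately.
# ---- end of sketch ----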
11:55:59 timeout_obj = self._get_timeout(timeout) 11:55:59 conn = self._get_conn(timeout=pool_timeout) 11:55:59 11:55:59 conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] 11:55:59 11:55:59 # Is this a closed/new connection that requires CONNECT tunnelling? 11:55:59 if self.proxy is not None and http_tunnel_required and conn.is_closed: 11:55:59 try: 11:55:59 self._prepare_proxy(conn) 11:55:59 except (BaseSSLError, OSError, SocketTimeout) as e: 11:55:59 self._raise_timeout( 11:55:59 err=e, url=self.proxy.url, timeout_value=conn.timeout 11:55:59 ) 11:55:59 raise 11:55:59 11:55:59 # If we're going to release the connection in ``finally:``, then 11:55:59 # the response doesn't need to know about the connection. Otherwise 11:55:59 # it will also try to release it and we'll have a double-release 11:55:59 # mess. 11:55:59 response_conn = conn if not release_conn else None 11:55:59 11:55:59 # Make the request on the HTTPConnection object 11:55:59 > response = self._make_request( 11:55:59 conn, 11:55:59 method, 11:55:59 url, 11:55:59 timeout=timeout_obj, 11:55:59 body=body, 11:55:59 headers=headers, 11:55:59 chunked=chunked, 11:55:59 retries=retries, 11:55:59 response_conn=response_conn, 11:55:59 preload_content=preload_content, 11:55:59 decode_content=decode_content, 11:55:59 **response_kw, 11:55:59 ) 11:55:59 11:55:59 ../.tox/tests_tapi/lib/python3.11/site-packages/urllib3/connectionpool.py:787: 11:55:59 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 11:55:59 ../.tox/tests_tapi/lib/python3.11/site-packages/urllib3/connectionpool.py:493: in _make_request 11:55:59 conn.request( 11:55:59 ../.tox/tests_tapi/lib/python3.11/site-packages/urllib3/connection.py:494: in request 11:55:59 self.endheaders() 11:55:59 /opt/pyenv/versions/3.11.10/lib/python3.11/http/client.py:1298: in endheaders 11:55:59 self._send_output(message_body, encode_chunked=encode_chunked) 11:55:59 /opt/pyenv/versions/3.11.10/lib/python3.11/http/client.py:1058: in _send_output 11:55:59 self.send(msg) 11:55:59 /opt/pyenv/versions/3.11.10/lib/python3.11/http/client.py:996: in send 11:55:59 self.connect() 11:55:59 ../.tox/tests_tapi/lib/python3.11/site-packages/urllib3/connection.py:325: in connect 11:55:59 self.sock = self._new_conn() 11:55:59 ^^^^^^^^^^^^^^^^ 11:55:59 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 11:55:59 11:55:59 self = 11:55:59 11:55:59 def _new_conn(self) -> socket.socket: 11:55:59 """Establish a socket connection and set nodelay settings on it. 11:55:59 11:55:59 :return: New socket connection. 11:55:59 """ 11:55:59 try: 11:55:59 sock = connection.create_connection( 11:55:59 (self._dns_host, self.port), 11:55:59 self.timeout, 11:55:59 source_address=self.source_address, 11:55:59 socket_options=self.socket_options, 11:55:59 ) 11:55:59 except socket.gaierror as e: 11:55:59 raise NameResolutionError(self.host, self, e) from e 11:55:59 except SocketTimeout as e: 11:55:59 raise ConnectTimeoutError( 11:55:59 self, 11:55:59 f"Connection to {self.host} timed out. 
(connect timeout={self.timeout})", 11:55:59 ) from e 11:55:59 11:55:59 except OSError as e: 11:55:59 > raise NewConnectionError( 11:55:59 self, f"Failed to establish a new connection: {e}" 11:55:59 ) from e 11:55:59 E urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused 11:55:59 11:55:59 ../.tox/tests_tapi/lib/python3.11/site-packages/urllib3/connection.py:213: NewConnectionError 11:55:59 11:55:59 The above exception was the direct cause of the following exception: 11:55:59 11:55:59 self = 11:55:59 request = , stream = False 11:55:59 timeout = Timeout(connect=30, read=30, total=None), verify = True, cert = None 11:55:59 proxies = OrderedDict() 11:55:59 11:55:59 def send( 11:55:59 self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 11:55:59 ): 11:55:59 """Sends PreparedRequest object. Returns Response object. 11:55:59 11:55:59 :param request: The :class:`PreparedRequest ` being sent. 11:55:59 :param stream: (optional) Whether to stream the request content. 11:55:59 :param timeout: (optional) How long to wait for the server to send 11:55:59 data before giving up, as a float, or a :ref:`(connect timeout, 11:55:59 read timeout) ` tuple. 11:55:59 :type timeout: float or tuple or urllib3 Timeout object 11:55:59 :param verify: (optional) Either a boolean, in which case it controls whether 11:55:59 we verify the server's TLS certificate, or a string, in which case it 11:55:59 must be a path to a CA bundle to use 11:55:59 :param cert: (optional) Any user-provided SSL certificate to be trusted. 11:55:59 :param proxies: (optional) The proxies dictionary to apply to the request. 11:55:59 :rtype: requests.Response 11:55:59 """ 11:55:59 11:55:59 try: 11:55:59 conn = self.get_connection_with_tls_context( 11:55:59 request, verify, proxies=proxies, cert=cert 11:55:59 ) 11:55:59 except LocationValueError as e: 11:55:59 raise InvalidURL(e, request=request) 11:55:59 11:55:59 self.cert_verify(conn, request.url, verify, cert) 11:55:59 url = self.request_url(request, proxies) 11:55:59 self.add_headers( 11:55:59 request, 11:55:59 stream=stream, 11:55:59 timeout=timeout, 11:55:59 verify=verify, 11:55:59 cert=cert, 11:55:59 proxies=proxies, 11:55:59 ) 11:55:59 11:55:59 chunked = not (request.body is None or "Content-Length" in request.headers) 11:55:59 11:55:59 if isinstance(timeout, tuple): 11:55:59 try: 11:55:59 connect, read = timeout 11:55:59 timeout = TimeoutSauce(connect=connect, read=read) 11:55:59 except ValueError: 11:55:59 raise ValueError( 11:55:59 f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 11:55:59 f"or a single float to set both timeouts to the same value." 
11:55:59 ) 11:55:59 elif isinstance(timeout, TimeoutSauce): 11:55:59 pass 11:55:59 else: 11:55:59 timeout = TimeoutSauce(connect=timeout, read=timeout) 11:55:59 11:55:59 try: 11:55:59 > resp = conn.urlopen( 11:55:59 method=request.method, 11:55:59 url=url, 11:55:59 body=request.body, 11:55:59 headers=request.headers, 11:55:59 redirect=False, 11:55:59 assert_same_host=False, 11:55:59 preload_content=False, 11:55:59 decode_content=False, 11:55:59 retries=self.max_retries, 11:55:59 timeout=timeout, 11:55:59 chunked=chunked, 11:55:59 ) 11:55:59 11:55:59 ../.tox/tests_tapi/lib/python3.11/site-packages/requests/adapters.py:667: 11:55:59 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 11:55:59 ../.tox/tests_tapi/lib/python3.11/site-packages/urllib3/connectionpool.py:841: in urlopen 11:55:59 retries = retries.increment( 11:55:59 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 11:55:59 11:55:59 self = Retry(total=0, connect=None, read=False, redirect=None, status=None) 11:55:59 method = 'POST' 11:55:59 url = '/rests/operations/transportpce-networkutils:init-xpdr-rdm-links' 11:55:59 response = None 11:55:59 error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused') 11:55:59 _pool = 11:55:59 _stacktrace = 11:55:59 11:55:59 def increment( 11:55:59 self, 11:55:59 method: str | None = None, 11:55:59 url: str | None = None, 11:55:59 response: BaseHTTPResponse | None = None, 11:55:59 error: Exception | None = None, 11:55:59 _pool: ConnectionPool | None = None, 11:55:59 _stacktrace: TracebackType | None = None, 11:55:59 ) -> Self: 11:55:59 """Return a new Retry object with incremented retry counters. 11:55:59 11:55:59 :param response: A response object, or None, if the server did not 11:55:59 return a response. 11:55:59 :type response: :class:`~urllib3.response.BaseHTTPResponse` 11:55:59 :param Exception error: An error encountered during the request, or 11:55:59 None if the response was received successfully. 11:55:59 11:55:59 :return: A new ``Retry`` object. 11:55:59 """ 11:55:59 if self.total is False and error: 11:55:59 # Disabled, indicate to re-raise the error. 11:55:59 raise reraise(type(error), error, _stacktrace) 11:55:59 11:55:59 total = self.total 11:55:59 if total is not None: 11:55:59 total -= 1 11:55:59 11:55:59 connect = self.connect 11:55:59 read = self.read 11:55:59 redirect = self.redirect 11:55:59 status_count = self.status 11:55:59 other = self.other 11:55:59 cause = "unknown" 11:55:59 status = None 11:55:59 redirect_location = None 11:55:59 11:55:59 if error and self._is_connection_error(error): 11:55:59 # Connect retry? 11:55:59 if connect is False: 11:55:59 raise reraise(type(error), error, _stacktrace) 11:55:59 elif connect is not None: 11:55:59 connect -= 1 11:55:59 11:55:59 elif error and self._is_read_error(error): 11:55:59 # Read retry? 11:55:59 if read is False or method is None or not self._is_method_retryable(method): 11:55:59 raise reraise(type(error), error, _stacktrace) 11:55:59 elif read is not None: 11:55:59 read -= 1 11:55:59 11:55:59 elif error: 11:55:59 # Other retry? 11:55:59 if other is not None: 11:55:59 other -= 1 11:55:59 11:55:59 elif response and response.get_redirect_location(): 11:55:59 # Redirect retry? 
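# ---- editor's sketch (standalone illustration, not part of the trace) ----
# Why the ``Retry(total=0, connect=None, read=False, ...)`` object seen in this
# log gives up on the very first connection failure: increment() drops ``total``
# from 0 to -1, the new Retry reports is_exhausted(), and MaxRetryError is
# raised.  The plain OSError and the URL below are illustrative stand-ins for
# the NewConnectionError and RESTCONF path captured above.
from urllib3.exceptions import MaxRetryError
from urllib3.util.retry import Retry

policy = Retry(total=0, connect=None, read=False, redirect=None, status=None)
try:
    policy.increment(method="POST", url="/rests/operations/example",
                     error=OSError("[Errno 111] Connection refused"))
except MaxRetryError as exc:
    print("exhausted after one attempt:", exc.reason)
# ---- end of sketch ----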
11:55:59 if redirect is not None: 11:55:59 redirect -= 1 11:55:59 cause = "too many redirects" 11:55:59 response_redirect_location = response.get_redirect_location() 11:55:59 if response_redirect_location: 11:55:59 redirect_location = response_redirect_location 11:55:59 status = response.status 11:55:59 11:55:59 else: 11:55:59 # Incrementing because of a server error like a 500 in 11:55:59 # status_forcelist and the given method is in the allowed_methods 11:55:59 cause = ResponseError.GENERIC_ERROR 11:55:59 if response and response.status: 11:55:59 if status_count is not None: 11:55:59 status_count -= 1 11:55:59 cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status) 11:55:59 status = response.status 11:55:59 11:55:59 history = self.history + ( 11:55:59 RequestHistory(method, url, error, status, redirect_location), 11:55:59 ) 11:55:59 11:55:59 new_retry = self.new( 11:55:59 total=total, 11:55:59 connect=connect, 11:55:59 read=read, 11:55:59 redirect=redirect, 11:55:59 status=status_count, 11:55:59 other=other, 11:55:59 history=history, 11:55:59 ) 11:55:59 11:55:59 if new_retry.is_exhausted(): 11:55:59 reason = error or ResponseError(cause) 11:55:59 > raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type] 11:55:59 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 11:55:59 E urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=8183): Max retries exceeded with url: /rests/operations/transportpce-networkutils:init-xpdr-rdm-links (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 11:55:59 11:55:59 ../.tox/tests_tapi/lib/python3.11/site-packages/urllib3/util/retry.py:519: MaxRetryError 11:55:59 11:55:59 During handling of the above exception, another exception occurred: 11:55:59 11:55:59 self = 11:55:59 11:55:59 def test_16_connect_xpdrc_n1_to_roadmc_pp1(self): 11:55:59 > response = test_utils.transportpce_api_rpc_request( 11:55:59 'transportpce-networkutils', 'init-xpdr-rdm-links', 11:55:59 {'links-input': {'xpdr-node': 'XPDR-C1', 'xpdr-num': '1', 'network-num': '1', 11:55:59 'rdm-node': 'ROADM-C1', 'srg-num': '1', 'termination-point-num': 'SRG1-PP1-TXRX'}}) 11:55:59 11:55:59 transportpce_tests/tapi/test01_abstracted_topology.py:321: 11:55:59 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 11:55:59 transportpce_tests/common/test_utils.py:729: in transportpce_api_rpc_request 11:55:59 response = post_request(url, data) 11:55:59 ^^^^^^^^^^^^^^^^^^^^^^^ 11:55:59 transportpce_tests/common/test_utils.py:142: in post_request 11:55:59 return requests.request( 11:55:59 ../.tox/tests_tapi/lib/python3.11/site-packages/requests/api.py:59: in request 11:55:59 return session.request(method=method, url=url, **kwargs) 11:55:59 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 11:55:59 ../.tox/tests_tapi/lib/python3.11/site-packages/requests/sessions.py:589: in request 11:55:59 resp = self.send(prep, **send_kwargs) 11:55:59 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 11:55:59 ../.tox/tests_tapi/lib/python3.11/site-packages/requests/sessions.py:703: in send 11:55:59 r = adapter.send(request, **kwargs) 11:55:59 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 11:55:59 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 11:55:59 11:55:59 self = 11:55:59 request = , stream = False 11:55:59 timeout = Timeout(connect=30, read=30, total=None), verify = True, cert = None 11:55:59 proxies = OrderedDict() 11:55:59 11:55:59 def send( 11:55:59 self, request, stream=False, 
timeout=None, verify=True, cert=None, proxies=None 11:55:59 ): 11:55:59 """Sends PreparedRequest object. Returns Response object. 11:55:59 11:55:59 :param request: The :class:`PreparedRequest ` being sent. 11:55:59 :param stream: (optional) Whether to stream the request content. 11:55:59 :param timeout: (optional) How long to wait for the server to send 11:55:59 data before giving up, as a float, or a :ref:`(connect timeout, 11:55:59 read timeout) ` tuple. 11:55:59 :type timeout: float or tuple or urllib3 Timeout object 11:55:59 :param verify: (optional) Either a boolean, in which case it controls whether 11:55:59 we verify the server's TLS certificate, or a string, in which case it 11:55:59 must be a path to a CA bundle to use 11:55:59 :param cert: (optional) Any user-provided SSL certificate to be trusted. 11:55:59 :param proxies: (optional) The proxies dictionary to apply to the request. 11:55:59 :rtype: requests.Response 11:55:59 """ 11:55:59 11:55:59 try: 11:55:59 conn = self.get_connection_with_tls_context( 11:55:59 request, verify, proxies=proxies, cert=cert 11:55:59 ) 11:55:59 except LocationValueError as e: 11:55:59 raise InvalidURL(e, request=request) 11:55:59 11:55:59 self.cert_verify(conn, request.url, verify, cert) 11:55:59 url = self.request_url(request, proxies) 11:55:59 self.add_headers( 11:55:59 request, 11:55:59 stream=stream, 11:55:59 timeout=timeout, 11:55:59 verify=verify, 11:55:59 cert=cert, 11:55:59 proxies=proxies, 11:55:59 ) 11:55:59 11:55:59 chunked = not (request.body is None or "Content-Length" in request.headers) 11:55:59 11:55:59 if isinstance(timeout, tuple): 11:55:59 try: 11:55:59 connect, read = timeout 11:55:59 timeout = TimeoutSauce(connect=connect, read=read) 11:55:59 except ValueError: 11:55:59 raise ValueError( 11:55:59 f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 11:55:59 f"or a single float to set both timeouts to the same value." 11:55:59 ) 11:55:59 elif isinstance(timeout, TimeoutSauce): 11:55:59 pass 11:55:59 else: 11:55:59 timeout = TimeoutSauce(connect=timeout, read=timeout) 11:55:59 11:55:59 try: 11:55:59 resp = conn.urlopen( 11:55:59 method=request.method, 11:55:59 url=url, 11:55:59 body=request.body, 11:55:59 headers=request.headers, 11:55:59 redirect=False, 11:55:59 assert_same_host=False, 11:55:59 preload_content=False, 11:55:59 decode_content=False, 11:55:59 retries=self.max_retries, 11:55:59 timeout=timeout, 11:55:59 chunked=chunked, 11:55:59 ) 11:55:59 11:55:59 except (ProtocolError, OSError) as err: 11:55:59 raise ConnectionError(err, request=request) 11:55:59 11:55:59 except MaxRetryError as e: 11:55:59 if isinstance(e.reason, ConnectTimeoutError): 11:55:59 # TODO: Remove this in 3.0.0: see #2811 11:55:59 if not isinstance(e.reason, NewConnectionError): 11:55:59 raise ConnectTimeout(e, request=request) 11:55:59 11:55:59 if isinstance(e.reason, ResponseError): 11:55:59 raise RetryError(e, request=request) 11:55:59 11:55:59 if isinstance(e.reason, _ProxyError): 11:55:59 raise ProxyError(e, request=request) 11:55:59 11:55:59 if isinstance(e.reason, _SSLError): 11:55:59 # This branch is for urllib3 v1.22 and later. 
11:55:59 raise SSLError(e, request=request) 11:55:59 11:55:59 > raise ConnectionError(e, request=request) 11:55:59 E requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=8183): Max retries exceeded with url: /rests/operations/transportpce-networkutils:init-xpdr-rdm-links (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 11:55:59 11:55:59 ../.tox/tests_tapi/lib/python3.11/site-packages/requests/adapters.py:700: ConnectionError 11:55:59 ----------------------------- Captured stdout call ----------------------------- 11:55:59 execution of test_16_connect_xpdrc_n1_to_roadmc_pp1 11:55:59 _________ TransportTapitesting.test_17_connect_roadmc_pp1_to_xpdrc_n1 __________ 11:55:59 11:55:59 self = 11:55:59 11:55:59 def _new_conn(self) -> socket.socket: 11:55:59 """Establish a socket connection and set nodelay settings on it. 11:55:59 11:55:59 :return: New socket connection. 11:55:59 """ 11:55:59 try: 11:55:59 > sock = connection.create_connection( 11:55:59 (self._dns_host, self.port), 11:55:59 self.timeout, 11:55:59 source_address=self.source_address, 11:55:59 socket_options=self.socket_options, 11:55:59 ) 11:55:59 11:55:59 ../.tox/tests_tapi/lib/python3.11/site-packages/urllib3/connection.py:198: 11:55:59 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 11:55:59 ../.tox/tests_tapi/lib/python3.11/site-packages/urllib3/util/connection.py:85: in create_connection 11:55:59 raise err 11:55:59 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 11:55:59 11:55:59 address = ('localhost', 8183), timeout = 30, source_address = None 11:55:59 socket_options = [(6, 1, 1)] 11:55:59 11:55:59 def create_connection( 11:55:59 address: tuple[str, int], 11:55:59 timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 11:55:59 source_address: tuple[str, int] | None = None, 11:55:59 socket_options: _TYPE_SOCKET_OPTIONS | None = None, 11:55:59 ) -> socket.socket: 11:55:59 """Connect to *address* and return the socket object. 11:55:59 11:55:59 Convenience function. Connect to *address* (a 2-tuple ``(host, 11:55:59 port)``) and return the socket object. Passing the optional 11:55:59 *timeout* parameter will set the timeout on the socket instance 11:55:59 before attempting to connect. If no *timeout* is supplied, the 11:55:59 global default timeout setting returned by :func:`socket.getdefaulttimeout` 11:55:59 is used. If *source_address* is set it must be a tuple of (host, port) 11:55:59 for the socket to bind as a source address before making the connection. 11:55:59 An host of '' or port 0 tells the OS to use the default. 11:55:59 """ 11:55:59 11:55:59 host, port = address 11:55:59 if host.startswith("["): 11:55:59 host = host.strip("[]") 11:55:59 err = None 11:55:59 11:55:59 # Using the value from allowed_gai_family() in the context of getaddrinfo lets 11:55:59 # us select whether to work with IPv4 DNS records, IPv6 records, or both. 11:55:59 # The original create_connection function always returns all records. 
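# ---- editor's sketch (hypothetical helper, not present in test_utils.py) ----
# The failures above all reduce to create_connection() getting [Errno 111]
# because nothing is listening on localhost:8183.  A readiness probe of the
# kind sketched here, run before the RESTCONF requests, would make that
# condition explicit instead of surfacing as a wall of tracebacks:
import socket
import time

def wait_for_port(host: str = "localhost", port: int = 8183,
                  deadline_s: float = 60.0, interval_s: float = 1.0) -> bool:
    """Return True once a TCP connection to (host, port) succeeds, else False."""
    end = time.monotonic() + deadline_s
    while time.monotonic() < end:
        try:
            with socket.create_connection((host, port), timeout=interval_s):
                return True
        except OSError:
            time.sleep(interval_s)
    return False
# ---- end of sketch ----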
11:55:59 family = allowed_gai_family() 11:55:59 11:55:59 try: 11:55:59 host.encode("idna") 11:55:59 except UnicodeError: 11:55:59 raise LocationParseError(f"'{host}', label empty or too long") from None 11:55:59 11:55:59 for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM): 11:55:59 af, socktype, proto, canonname, sa = res 11:55:59 sock = None 11:55:59 try: 11:55:59 sock = socket.socket(af, socktype, proto) 11:55:59 11:55:59 # If provided, set socket level options before connecting. 11:55:59 _set_socket_options(sock, socket_options) 11:55:59 11:55:59 if timeout is not _DEFAULT_TIMEOUT: 11:55:59 sock.settimeout(timeout) 11:55:59 if source_address: 11:55:59 sock.bind(source_address) 11:55:59 > sock.connect(sa) 11:55:59 E ConnectionRefusedError: [Errno 111] Connection refused 11:55:59 11:55:59 ../.tox/tests_tapi/lib/python3.11/site-packages/urllib3/util/connection.py:73: ConnectionRefusedError 11:55:59 11:55:59 The above exception was the direct cause of the following exception: 11:55:59 11:55:59 self = 11:55:59 method = 'POST' 11:55:59 url = '/rests/operations/transportpce-networkutils:init-rdm-xpdr-links' 11:55:59 body = '{"input": {"links-input": {"xpdr-node": "XPDR-C1", "xpdr-num": "1", "network-num": "1", "rdm-node": "ROADM-C1", "srg-num": "1", "termination-point-num": "SRG1-PP1-TXRX"}}}' 11:55:59 headers = {'User-Agent': 'python-requests/2.32.4', 'Accept-Encoding': 'gzip, deflate', 'Accept': 'application/json', 'Connection': 'keep-alive', 'Content-Type': 'application/json', 'Content-Length': '171', 'Authorization': 'Basic YWRtaW46YWRtaW4='} 11:55:59 retries = Retry(total=0, connect=None, read=False, redirect=None, status=None) 11:55:59 redirect = False, assert_same_host = False 11:55:59 timeout = Timeout(connect=30, read=30, total=None), pool_timeout = None 11:55:59 release_conn = False, chunked = False, body_pos = None, preload_content = False 11:55:59 decode_content = False, response_kw = {} 11:55:59 parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/rests/operations/transportpce-networkutils:init-rdm-xpdr-links', query=None, fragment=None) 11:55:59 destination_scheme = None, conn = None, release_this_conn = True 11:55:59 http_tunnel_required = False, err = None, clean_exit = False 11:55:59 11:55:59 def urlopen( # type: ignore[override] 11:55:59 self, 11:55:59 method: str, 11:55:59 url: str, 11:55:59 body: _TYPE_BODY | None = None, 11:55:59 headers: typing.Mapping[str, str] | None = None, 11:55:59 retries: Retry | bool | int | None = None, 11:55:59 redirect: bool = True, 11:55:59 assert_same_host: bool = True, 11:55:59 timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 11:55:59 pool_timeout: int | None = None, 11:55:59 release_conn: bool | None = None, 11:55:59 chunked: bool = False, 11:55:59 body_pos: _TYPE_BODY_POSITION | None = None, 11:55:59 preload_content: bool = True, 11:55:59 decode_content: bool = True, 11:55:59 **response_kw: typing.Any, 11:55:59 ) -> BaseHTTPResponse: 11:55:59 """ 11:55:59 Get a connection from the pool and perform an HTTP request. This is the 11:55:59 lowest level call for making a request, so you'll need to specify all 11:55:59 the raw details. 11:55:59 11:55:59 .. note:: 11:55:59 11:55:59 More commonly, it's appropriate to use a convenience method 11:55:59 such as :meth:`request`. 11:55:59 11:55:59 .. 
note:: 11:55:59 11:55:59 `release_conn` will only behave as expected if 11:55:59 `preload_content=False` because we want to make 11:55:59 `preload_content=False` the default behaviour someday soon without 11:55:59 breaking backwards compatibility. 11:55:59 11:55:59 :param method: 11:55:59 HTTP request method (such as GET, POST, PUT, etc.) 11:55:59 11:55:59 :param url: 11:55:59 The URL to perform the request on. 11:55:59 11:55:59 :param body: 11:55:59 Data to send in the request body, either :class:`str`, :class:`bytes`, 11:55:59 an iterable of :class:`str`/:class:`bytes`, or a file-like object. 11:55:59 11:55:59 :param headers: 11:55:59 Dictionary of custom headers to send, such as User-Agent, 11:55:59 If-None-Match, etc. If None, pool headers are used. If provided, 11:55:59 these headers completely replace any pool-specific headers. 11:55:59 11:55:59 :param retries: 11:55:59 Configure the number of retries to allow before raising a 11:55:59 :class:`~urllib3.exceptions.MaxRetryError` exception. 11:55:59 11:55:59 If ``None`` (default) will retry 3 times, see ``Retry.DEFAULT``. Pass a 11:55:59 :class:`~urllib3.util.retry.Retry` object for fine-grained control 11:55:59 over different types of retries. 11:55:59 Pass an integer number to retry connection errors that many times, 11:55:59 but no other types of errors. Pass zero to never retry. 11:55:59 11:55:59 If ``False``, then retries are disabled and any exception is raised 11:55:59 immediately. Also, instead of raising a MaxRetryError on redirects, 11:55:59 the redirect response will be returned. 11:55:59 11:55:59 :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. 11:55:59 11:55:59 :param redirect: 11:55:59 If True, automatically handle redirects (status codes 301, 302, 11:55:59 303, 307, 308). Each redirect counts as a retry. Disabling retries 11:55:59 will disable redirect, too. 11:55:59 11:55:59 :param assert_same_host: 11:55:59 If ``True``, will make sure that the host of the pool requests is 11:55:59 consistent else will raise HostChangedError. When ``False``, you can 11:55:59 use the pool on an HTTP proxy and request foreign hosts. 11:55:59 11:55:59 :param timeout: 11:55:59 If specified, overrides the default timeout for this one 11:55:59 request. It may be a float (in seconds) or an instance of 11:55:59 :class:`urllib3.util.Timeout`. 11:55:59 11:55:59 :param pool_timeout: 11:55:59 If set and the pool is set to block=True, then this method will 11:55:59 block for ``pool_timeout`` seconds and raise EmptyPoolError if no 11:55:59 connection is available within the time period. 11:55:59 11:55:59 :param bool preload_content: 11:55:59 If True, the response's body will be preloaded into memory. 11:55:59 11:55:59 :param bool decode_content: 11:55:59 If True, will attempt to decode the body based on the 11:55:59 'content-encoding' header. 11:55:59 11:55:59 :param release_conn: 11:55:59 If False, then the urlopen call will not release the connection 11:55:59 back into the pool once a response is received (but will release if 11:55:59 you read the entire contents of the response such as when 11:55:59 `preload_content=True`). This is useful if you're not preloading 11:55:59 the response's content immediately. You will need to call 11:55:59 ``r.release_conn()`` on the response ``r`` to return the connection 11:55:59 back into the pool. If None, it takes the value of ``preload_content`` 11:55:59 which defaults to ``True``. 
11:55:59 11:55:59 :param bool chunked: 11:55:59 If True, urllib3 will send the body using chunked transfer 11:55:59 encoding. Otherwise, urllib3 will send the body using the standard 11:55:59 content-length form. Defaults to False. 11:55:59 11:55:59 :param int body_pos: 11:55:59 Position to seek to in file-like body in the event of a retry or 11:55:59 redirect. Typically this won't need to be set because urllib3 will 11:55:59 auto-populate the value when needed. 11:55:59 """ 11:55:59 parsed_url = parse_url(url) 11:55:59 destination_scheme = parsed_url.scheme 11:55:59 11:55:59 if headers is None: 11:55:59 headers = self.headers 11:55:59 11:55:59 if not isinstance(retries, Retry): 11:55:59 retries = Retry.from_int(retries, redirect=redirect, default=self.retries) 11:55:59 11:55:59 if release_conn is None: 11:55:59 release_conn = preload_content 11:55:59 11:55:59 # Check host 11:55:59 if assert_same_host and not self.is_same_host(url): 11:55:59 raise HostChangedError(self, url, retries) 11:55:59 11:55:59 # Ensure that the URL we're connecting to is properly encoded 11:55:59 if url.startswith("/"): 11:55:59 url = to_str(_encode_target(url)) 11:55:59 else: 11:55:59 url = to_str(parsed_url.url) 11:55:59 11:55:59 conn = None 11:55:59 11:55:59 # Track whether `conn` needs to be released before 11:55:59 # returning/raising/recursing. Update this variable if necessary, and 11:55:59 # leave `release_conn` constant throughout the function. That way, if 11:55:59 # the function recurses, the original value of `release_conn` will be 11:55:59 # passed down into the recursive call, and its value will be respected. 11:55:59 # 11:55:59 # See issue #651 [1] for details. 11:55:59 # 11:55:59 # [1] 11:55:59 release_this_conn = release_conn 11:55:59 11:55:59 http_tunnel_required = connection_requires_http_tunnel( 11:55:59 self.proxy, self.proxy_config, destination_scheme 11:55:59 ) 11:55:59 11:55:59 # Merge the proxy headers. Only done when not using HTTP CONNECT. We 11:55:59 # have to copy the headers dict so we can safely change it without those 11:55:59 # changes being reflected in anyone else's copy. 11:55:59 if not http_tunnel_required: 11:55:59 headers = headers.copy() # type: ignore[attr-defined] 11:55:59 headers.update(self.proxy_headers) # type: ignore[union-attr] 11:55:59 11:55:59 # Must keep the exception bound to a separate variable or else Python 3 11:55:59 # complains about UnboundLocalError. 11:55:59 err = None 11:55:59 11:55:59 # Keep track of whether we cleanly exited the except block. This 11:55:59 # ensures we do proper cleanup in finally. 11:55:59 clean_exit = False 11:55:59 11:55:59 # Rewind body position, if needed. Record current position 11:55:59 # for future rewinds in the event of a redirect/retry. 11:55:59 body_pos = set_file_position(body, body_pos) 11:55:59 11:55:59 try: 11:55:59 # Request a connection from the queue. 11:55:59 timeout_obj = self._get_timeout(timeout) 11:55:59 conn = self._get_conn(timeout=pool_timeout) 11:55:59 11:55:59 conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] 11:55:59 11:55:59 # Is this a closed/new connection that requires CONNECT tunnelling? 
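# ---- editor's sketch (standalone illustration, not part of the trace) ----
# The ``Retry.from_int`` call a few lines above normalises the ``retries``
# argument exactly as the docstring describes: None becomes the default policy,
# an integer caps the total attempts, and False disables retrying altogether.
from urllib3.util.retry import Retry

print(Retry.from_int(None))   # default policy, Retry(total=3, ...)
print(Retry.from_int(0))      # Retry(total=0, ...): the first error is fatal
print(Retry.from_int(False))  # Retry(total=False, ...): re-raise immediately
# ---- end of sketch ----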
11:55:59 if self.proxy is not None and http_tunnel_required and conn.is_closed: 11:55:59 try: 11:55:59 self._prepare_proxy(conn) 11:55:59 except (BaseSSLError, OSError, SocketTimeout) as e: 11:55:59 self._raise_timeout( 11:55:59 err=e, url=self.proxy.url, timeout_value=conn.timeout 11:55:59 ) 11:55:59 raise 11:55:59 11:55:59 # If we're going to release the connection in ``finally:``, then 11:55:59 # the response doesn't need to know about the connection. Otherwise 11:55:59 # it will also try to release it and we'll have a double-release 11:55:59 # mess. 11:55:59 response_conn = conn if not release_conn else None 11:55:59 11:55:59 # Make the request on the HTTPConnection object 11:55:59 > response = self._make_request( 11:55:59 conn, 11:55:59 method, 11:55:59 url, 11:55:59 timeout=timeout_obj, 11:55:59 body=body, 11:55:59 headers=headers, 11:55:59 chunked=chunked, 11:55:59 retries=retries, 11:55:59 response_conn=response_conn, 11:55:59 preload_content=preload_content, 11:55:59 decode_content=decode_content, 11:55:59 **response_kw, 11:55:59 ) 11:55:59 11:55:59 ../.tox/tests_tapi/lib/python3.11/site-packages/urllib3/connectionpool.py:787: 11:55:59 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 11:55:59 ../.tox/tests_tapi/lib/python3.11/site-packages/urllib3/connectionpool.py:493: in _make_request 11:55:59 conn.request( 11:55:59 ../.tox/tests_tapi/lib/python3.11/site-packages/urllib3/connection.py:494: in request 11:55:59 self.endheaders() 11:55:59 /opt/pyenv/versions/3.11.10/lib/python3.11/http/client.py:1298: in endheaders 11:55:59 self._send_output(message_body, encode_chunked=encode_chunked) 11:55:59 /opt/pyenv/versions/3.11.10/lib/python3.11/http/client.py:1058: in _send_output 11:55:59 self.send(msg) 11:55:59 /opt/pyenv/versions/3.11.10/lib/python3.11/http/client.py:996: in send 11:55:59 self.connect() 11:55:59 ../.tox/tests_tapi/lib/python3.11/site-packages/urllib3/connection.py:325: in connect 11:55:59 self.sock = self._new_conn() 11:55:59 ^^^^^^^^^^^^^^^^ 11:55:59 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 11:55:59 11:55:59 self = 11:55:59 11:55:59 def _new_conn(self) -> socket.socket: 11:55:59 """Establish a socket connection and set nodelay settings on it. 11:55:59 11:55:59 :return: New socket connection. 11:55:59 """ 11:55:59 try: 11:55:59 sock = connection.create_connection( 11:55:59 (self._dns_host, self.port), 11:55:59 self.timeout, 11:55:59 source_address=self.source_address, 11:55:59 socket_options=self.socket_options, 11:55:59 ) 11:55:59 except socket.gaierror as e: 11:55:59 raise NameResolutionError(self.host, self, e) from e 11:55:59 except SocketTimeout as e: 11:55:59 raise ConnectTimeoutError( 11:55:59 self, 11:55:59 f"Connection to {self.host} timed out. 
(connect timeout={self.timeout})", 11:55:59 ) from e 11:55:59 11:55:59 except OSError as e: 11:55:59 > raise NewConnectionError( 11:55:59 self, f"Failed to establish a new connection: {e}" 11:55:59 ) from e 11:55:59 E urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused 11:55:59 11:55:59 ../.tox/tests_tapi/lib/python3.11/site-packages/urllib3/connection.py:213: NewConnectionError 11:55:59 11:55:59 The above exception was the direct cause of the following exception: 11:55:59 11:55:59 self = 11:55:59 request = , stream = False 11:55:59 timeout = Timeout(connect=30, read=30, total=None), verify = True, cert = None 11:55:59 proxies = OrderedDict() 11:55:59 11:55:59 def send( 11:55:59 self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 11:55:59 ): 11:55:59 """Sends PreparedRequest object. Returns Response object. 11:55:59 11:55:59 :param request: The :class:`PreparedRequest ` being sent. 11:55:59 :param stream: (optional) Whether to stream the request content. 11:55:59 :param timeout: (optional) How long to wait for the server to send 11:55:59 data before giving up, as a float, or a :ref:`(connect timeout, 11:55:59 read timeout) ` tuple. 11:55:59 :type timeout: float or tuple or urllib3 Timeout object 11:55:59 :param verify: (optional) Either a boolean, in which case it controls whether 11:55:59 we verify the server's TLS certificate, or a string, in which case it 11:55:59 must be a path to a CA bundle to use 11:55:59 :param cert: (optional) Any user-provided SSL certificate to be trusted. 11:55:59 :param proxies: (optional) The proxies dictionary to apply to the request. 11:55:59 :rtype: requests.Response 11:55:59 """ 11:55:59 11:55:59 try: 11:55:59 conn = self.get_connection_with_tls_context( 11:55:59 request, verify, proxies=proxies, cert=cert 11:55:59 ) 11:55:59 except LocationValueError as e: 11:55:59 raise InvalidURL(e, request=request) 11:55:59 11:55:59 self.cert_verify(conn, request.url, verify, cert) 11:55:59 url = self.request_url(request, proxies) 11:55:59 self.add_headers( 11:55:59 request, 11:55:59 stream=stream, 11:55:59 timeout=timeout, 11:55:59 verify=verify, 11:55:59 cert=cert, 11:55:59 proxies=proxies, 11:55:59 ) 11:55:59 11:55:59 chunked = not (request.body is None or "Content-Length" in request.headers) 11:55:59 11:55:59 if isinstance(timeout, tuple): 11:55:59 try: 11:55:59 connect, read = timeout 11:55:59 timeout = TimeoutSauce(connect=connect, read=read) 11:55:59 except ValueError: 11:55:59 raise ValueError( 11:55:59 f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 11:55:59 f"or a single float to set both timeouts to the same value." 
11:55:59 ) 11:55:59 elif isinstance(timeout, TimeoutSauce): 11:55:59 pass 11:55:59 else: 11:55:59 timeout = TimeoutSauce(connect=timeout, read=timeout) 11:55:59 11:55:59 try: 11:55:59 > resp = conn.urlopen( 11:55:59 method=request.method, 11:55:59 url=url, 11:55:59 body=request.body, 11:55:59 headers=request.headers, 11:55:59 redirect=False, 11:55:59 assert_same_host=False, 11:55:59 preload_content=False, 11:55:59 decode_content=False, 11:55:59 retries=self.max_retries, 11:55:59 timeout=timeout, 11:55:59 chunked=chunked, 11:55:59 ) 11:55:59 11:55:59 ../.tox/tests_tapi/lib/python3.11/site-packages/requests/adapters.py:667: 11:55:59 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 11:55:59 ../.tox/tests_tapi/lib/python3.11/site-packages/urllib3/connectionpool.py:841: in urlopen 11:55:59 retries = retries.increment( 11:55:59 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 11:55:59 11:55:59 self = Retry(total=0, connect=None, read=False, redirect=None, status=None) 11:55:59 method = 'POST' 11:55:59 url = '/rests/operations/transportpce-networkutils:init-rdm-xpdr-links' 11:55:59 response = None 11:55:59 error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused') 11:55:59 _pool = 11:55:59 _stacktrace = 11:55:59 11:55:59 def increment( 11:55:59 self, 11:55:59 method: str | None = None, 11:55:59 url: str | None = None, 11:55:59 response: BaseHTTPResponse | None = None, 11:55:59 error: Exception | None = None, 11:55:59 _pool: ConnectionPool | None = None, 11:55:59 _stacktrace: TracebackType | None = None, 11:55:59 ) -> Self: 11:55:59 """Return a new Retry object with incremented retry counters. 11:55:59 11:55:59 :param response: A response object, or None, if the server did not 11:55:59 return a response. 11:55:59 :type response: :class:`~urllib3.response.BaseHTTPResponse` 11:55:59 :param Exception error: An error encountered during the request, or 11:55:59 None if the response was received successfully. 11:55:59 11:55:59 :return: A new ``Retry`` object. 11:55:59 """ 11:55:59 if self.total is False and error: 11:55:59 # Disabled, indicate to re-raise the error. 11:55:59 raise reraise(type(error), error, _stacktrace) 11:55:59 11:55:59 total = self.total 11:55:59 if total is not None: 11:55:59 total -= 1 11:55:59 11:55:59 connect = self.connect 11:55:59 read = self.read 11:55:59 redirect = self.redirect 11:55:59 status_count = self.status 11:55:59 other = self.other 11:55:59 cause = "unknown" 11:55:59 status = None 11:55:59 redirect_location = None 11:55:59 11:55:59 if error and self._is_connection_error(error): 11:55:59 # Connect retry? 11:55:59 if connect is False: 11:55:59 raise reraise(type(error), error, _stacktrace) 11:55:59 elif connect is not None: 11:55:59 connect -= 1 11:55:59 11:55:59 elif error and self._is_read_error(error): 11:55:59 # Read retry? 11:55:59 if read is False or method is None or not self._is_method_retryable(method): 11:55:59 raise reraise(type(error), error, _stacktrace) 11:55:59 elif read is not None: 11:55:59 read -= 1 11:55:59 11:55:59 elif error: 11:55:59 # Other retry? 11:55:59 if other is not None: 11:55:59 other -= 1 11:55:59 11:55:59 elif response and response.get_redirect_location(): 11:55:59 # Redirect retry? 
11:55:59 if redirect is not None: 11:55:59 redirect -= 1 11:55:59 cause = "too many redirects" 11:55:59 response_redirect_location = response.get_redirect_location() 11:55:59 if response_redirect_location: 11:55:59 redirect_location = response_redirect_location 11:55:59 status = response.status 11:55:59 11:55:59 else: 11:55:59 # Incrementing because of a server error like a 500 in 11:55:59 # status_forcelist and the given method is in the allowed_methods 11:55:59 cause = ResponseError.GENERIC_ERROR 11:55:59 if response and response.status: 11:55:59 if status_count is not None: 11:55:59 status_count -= 1 11:55:59 cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status) 11:55:59 status = response.status 11:55:59 11:55:59 history = self.history + ( 11:55:59 RequestHistory(method, url, error, status, redirect_location), 11:55:59 ) 11:55:59 11:55:59 new_retry = self.new( 11:55:59 total=total, 11:55:59 connect=connect, 11:55:59 read=read, 11:55:59 redirect=redirect, 11:55:59 status=status_count, 11:55:59 other=other, 11:55:59 history=history, 11:55:59 ) 11:55:59 11:55:59 if new_retry.is_exhausted(): 11:55:59 reason = error or ResponseError(cause) 11:55:59 > raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type] 11:55:59 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 11:55:59 E urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=8183): Max retries exceeded with url: /rests/operations/transportpce-networkutils:init-rdm-xpdr-links (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 11:55:59 11:55:59 ../.tox/tests_tapi/lib/python3.11/site-packages/urllib3/util/retry.py:519: MaxRetryError 11:55:59 11:55:59 During handling of the above exception, another exception occurred: 11:55:59 11:55:59 self = 11:55:59 11:55:59 def test_17_connect_roadmc_pp1_to_xpdrc_n1(self): 11:55:59 > response = test_utils.transportpce_api_rpc_request( 11:55:59 'transportpce-networkutils', 'init-rdm-xpdr-links', 11:55:59 {'links-input': {'xpdr-node': 'XPDR-C1', 'xpdr-num': '1', 'network-num': '1', 11:55:59 'rdm-node': 'ROADM-C1', 'srg-num': '1', 'termination-point-num': 'SRG1-PP1-TXRX'}}) 11:55:59 11:55:59 transportpce_tests/tapi/test01_abstracted_topology.py:331: 11:55:59 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 11:55:59 transportpce_tests/common/test_utils.py:729: in transportpce_api_rpc_request 11:55:59 response = post_request(url, data) 11:55:59 ^^^^^^^^^^^^^^^^^^^^^^^ 11:55:59 transportpce_tests/common/test_utils.py:142: in post_request 11:55:59 return requests.request( 11:55:59 ../.tox/tests_tapi/lib/python3.11/site-packages/requests/api.py:59: in request 11:55:59 return session.request(method=method, url=url, **kwargs) 11:55:59 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 11:55:59 ../.tox/tests_tapi/lib/python3.11/site-packages/requests/sessions.py:589: in request 11:55:59 resp = self.send(prep, **send_kwargs) 11:55:59 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 11:55:59 ../.tox/tests_tapi/lib/python3.11/site-packages/requests/sessions.py:703: in send 11:55:59 r = adapter.send(request, **kwargs) 11:55:59 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 11:55:59 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 11:55:59 11:55:59 self = 11:55:59 request = , stream = False 11:55:59 timeout = Timeout(connect=30, read=30, total=None), verify = True, cert = None 11:55:59 proxies = OrderedDict() 11:55:59 11:55:59 def send( 11:55:59 self, request, stream=False, 
timeout=None, verify=True, cert=None, proxies=None 11:55:59 ): 11:55:59 """Sends PreparedRequest object. Returns Response object. 11:55:59 11:55:59 :param request: The :class:`PreparedRequest ` being sent. 11:55:59 :param stream: (optional) Whether to stream the request content. 11:55:59 :param timeout: (optional) How long to wait for the server to send 11:55:59 data before giving up, as a float, or a :ref:`(connect timeout, 11:55:59 read timeout) ` tuple. 11:55:59 :type timeout: float or tuple or urllib3 Timeout object 11:55:59 :param verify: (optional) Either a boolean, in which case it controls whether 11:55:59 we verify the server's TLS certificate, or a string, in which case it 11:55:59 must be a path to a CA bundle to use 11:55:59 :param cert: (optional) Any user-provided SSL certificate to be trusted. 11:55:59 :param proxies: (optional) The proxies dictionary to apply to the request. 11:55:59 :rtype: requests.Response 11:55:59 """ 11:55:59 11:55:59 try: 11:55:59 conn = self.get_connection_with_tls_context( 11:55:59 request, verify, proxies=proxies, cert=cert 11:55:59 ) 11:55:59 except LocationValueError as e: 11:55:59 raise InvalidURL(e, request=request) 11:55:59 11:55:59 self.cert_verify(conn, request.url, verify, cert) 11:55:59 url = self.request_url(request, proxies) 11:55:59 self.add_headers( 11:55:59 request, 11:55:59 stream=stream, 11:55:59 timeout=timeout, 11:55:59 verify=verify, 11:55:59 cert=cert, 11:55:59 proxies=proxies, 11:55:59 ) 11:55:59 11:55:59 chunked = not (request.body is None or "Content-Length" in request.headers) 11:55:59 11:55:59 if isinstance(timeout, tuple): 11:55:59 try: 11:55:59 connect, read = timeout 11:55:59 timeout = TimeoutSauce(connect=connect, read=read) 11:55:59 except ValueError: 11:55:59 raise ValueError( 11:55:59 f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 11:55:59 f"or a single float to set both timeouts to the same value." 11:55:59 ) 11:55:59 elif isinstance(timeout, TimeoutSauce): 11:55:59 pass 11:55:59 else: 11:55:59 timeout = TimeoutSauce(connect=timeout, read=timeout) 11:55:59 11:55:59 try: 11:55:59 resp = conn.urlopen( 11:55:59 method=request.method, 11:55:59 url=url, 11:55:59 body=request.body, 11:55:59 headers=request.headers, 11:55:59 redirect=False, 11:55:59 assert_same_host=False, 11:55:59 preload_content=False, 11:55:59 decode_content=False, 11:55:59 retries=self.max_retries, 11:55:59 timeout=timeout, 11:55:59 chunked=chunked, 11:55:59 ) 11:55:59 11:55:59 except (ProtocolError, OSError) as err: 11:55:59 raise ConnectionError(err, request=request) 11:55:59 11:55:59 except MaxRetryError as e: 11:55:59 if isinstance(e.reason, ConnectTimeoutError): 11:55:59 # TODO: Remove this in 3.0.0: see #2811 11:55:59 if not isinstance(e.reason, NewConnectionError): 11:55:59 raise ConnectTimeout(e, request=request) 11:55:59 11:55:59 if isinstance(e.reason, ResponseError): 11:55:59 raise RetryError(e, request=request) 11:55:59 11:55:59 if isinstance(e.reason, _ProxyError): 11:55:59 raise ProxyError(e, request=request) 11:55:59 11:55:59 if isinstance(e.reason, _SSLError): 11:55:59 # This branch is for urllib3 v1.22 and later. 
11:55:59 raise SSLError(e, request=request) 11:55:59 11:55:59 > raise ConnectionError(e, request=request) 11:55:59 E requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=8183): Max retries exceeded with url: /rests/operations/transportpce-networkutils:init-rdm-xpdr-links (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 11:55:59 11:55:59 ../.tox/tests_tapi/lib/python3.11/site-packages/requests/adapters.py:700: ConnectionError 11:55:59 ----------------------------- Captured stdout call ----------------------------- 11:55:59 execution of test_17_connect_roadmc_pp1_to_xpdrc_n1 11:55:59 ____________ TransportTapitesting.test_18_check_tapi_topology_T100G ____________ 11:55:59 11:55:59 self = 11:55:59 11:55:59 def _new_conn(self) -> socket.socket: 11:55:59 """Establish a socket connection and set nodelay settings on it. 11:55:59 11:55:59 :return: New socket connection. 11:55:59 """ 11:55:59 try: 11:55:59 > sock = connection.create_connection( 11:55:59 (self._dns_host, self.port), 11:55:59 self.timeout, 11:55:59 source_address=self.source_address, 11:55:59 socket_options=self.socket_options, 11:55:59 ) 11:55:59 11:55:59 ../.tox/tests_tapi/lib/python3.11/site-packages/urllib3/connection.py:198: 11:55:59 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 11:55:59 ../.tox/tests_tapi/lib/python3.11/site-packages/urllib3/util/connection.py:85: in create_connection 11:55:59 raise err 11:55:59 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 11:55:59 11:55:59 address = ('localhost', 8183), timeout = 30, source_address = None 11:55:59 socket_options = [(6, 1, 1)] 11:55:59 11:55:59 def create_connection( 11:55:59 address: tuple[str, int], 11:55:59 timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 11:55:59 source_address: tuple[str, int] | None = None, 11:55:59 socket_options: _TYPE_SOCKET_OPTIONS | None = None, 11:55:59 ) -> socket.socket: 11:55:59 """Connect to *address* and return the socket object. 11:55:59 11:55:59 Convenience function. Connect to *address* (a 2-tuple ``(host, 11:55:59 port)``) and return the socket object. Passing the optional 11:55:59 *timeout* parameter will set the timeout on the socket instance 11:55:59 before attempting to connect. If no *timeout* is supplied, the 11:55:59 global default timeout setting returned by :func:`socket.getdefaulttimeout` 11:55:59 is used. If *source_address* is set it must be a tuple of (host, port) 11:55:59 for the socket to bind as a source address before making the connection. 11:55:59 An host of '' or port 0 tells the OS to use the default. 11:55:59 """ 11:55:59 11:55:59 host, port = address 11:55:59 if host.startswith("["): 11:55:59 host = host.strip("[]") 11:55:59 err = None 11:55:59 11:55:59 # Using the value from allowed_gai_family() in the context of getaddrinfo lets 11:55:59 # us select whether to work with IPv4 DNS records, IPv6 records, or both. 11:55:59 # The original create_connection function always returns all records. 
11:55:59 family = allowed_gai_family() 11:55:59 11:55:59 try: 11:55:59 host.encode("idna") 11:55:59 except UnicodeError: 11:55:59 raise LocationParseError(f"'{host}', label empty or too long") from None 11:55:59 11:55:59 for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM): 11:55:59 af, socktype, proto, canonname, sa = res 11:55:59 sock = None 11:55:59 try: 11:55:59 sock = socket.socket(af, socktype, proto) 11:55:59 11:55:59 # If provided, set socket level options before connecting. 11:55:59 _set_socket_options(sock, socket_options) 11:55:59 11:55:59 if timeout is not _DEFAULT_TIMEOUT: 11:55:59 sock.settimeout(timeout) 11:55:59 if source_address: 11:55:59 sock.bind(source_address) 11:55:59 > sock.connect(sa) 11:55:59 E ConnectionRefusedError: [Errno 111] Connection refused 11:55:59 11:55:59 ../.tox/tests_tapi/lib/python3.11/site-packages/urllib3/util/connection.py:73: ConnectionRefusedError 11:55:59 11:55:59 The above exception was the direct cause of the following exception: 11:55:59 11:55:59 self = 11:55:59 method = 'POST', url = '/rests/operations/tapi-topology:get-topology-details' 11:55:59 body = '{"input": {"topology-id": "cf51c729-3699-308a-a7d0-594c6a62ebbb"}}' 11:55:59 headers = {'User-Agent': 'python-requests/2.32.4', 'Accept-Encoding': 'gzip, deflate', 'Accept': 'application/json', 'Connection': 'keep-alive', 'Content-Type': 'application/json', 'Content-Length': '66', 'Authorization': 'Basic YWRtaW46YWRtaW4='} 11:55:59 retries = Retry(total=0, connect=None, read=False, redirect=None, status=None) 11:55:59 redirect = False, assert_same_host = False 11:55:59 timeout = Timeout(connect=30, read=30, total=None), pool_timeout = None 11:55:59 release_conn = False, chunked = False, body_pos = None, preload_content = False 11:55:59 decode_content = False, response_kw = {} 11:55:59 parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/rests/operations/tapi-topology:get-topology-details', query=None, fragment=None) 11:55:59 destination_scheme = None, conn = None, release_this_conn = True 11:55:59 http_tunnel_required = False, err = None, clean_exit = False 11:55:59 11:55:59 def urlopen( # type: ignore[override] 11:55:59 self, 11:55:59 method: str, 11:55:59 url: str, 11:55:59 body: _TYPE_BODY | None = None, 11:55:59 headers: typing.Mapping[str, str] | None = None, 11:55:59 retries: Retry | bool | int | None = None, 11:55:59 redirect: bool = True, 11:55:59 assert_same_host: bool = True, 11:55:59 timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 11:55:59 pool_timeout: int | None = None, 11:55:59 release_conn: bool | None = None, 11:55:59 chunked: bool = False, 11:55:59 body_pos: _TYPE_BODY_POSITION | None = None, 11:55:59 preload_content: bool = True, 11:55:59 decode_content: bool = True, 11:55:59 **response_kw: typing.Any, 11:55:59 ) -> BaseHTTPResponse: 11:55:59 """ 11:55:59 Get a connection from the pool and perform an HTTP request. This is the 11:55:59 lowest level call for making a request, so you'll need to specify all 11:55:59 the raw details. 11:55:59 11:55:59 .. note:: 11:55:59 11:55:59 More commonly, it's appropriate to use a convenience method 11:55:59 such as :meth:`request`. 11:55:59 11:55:59 .. note:: 11:55:59 11:55:59 `release_conn` will only behave as expected if 11:55:59 `preload_content=False` because we want to make 11:55:59 `preload_content=False` the default behaviour someday soon without 11:55:59 breaking backwards compatibility. 11:55:59 11:55:59 :param method: 11:55:59 HTTP request method (such as GET, POST, PUT, etc.) 
11:55:59 11:55:59 :param url: 11:55:59 The URL to perform the request on. 11:55:59 11:55:59 :param body: 11:55:59 Data to send in the request body, either :class:`str`, :class:`bytes`, 11:55:59 an iterable of :class:`str`/:class:`bytes`, or a file-like object. 11:55:59 11:55:59 :param headers: 11:55:59 Dictionary of custom headers to send, such as User-Agent, 11:55:59 If-None-Match, etc. If None, pool headers are used. If provided, 11:55:59 these headers completely replace any pool-specific headers. 11:55:59 11:55:59 :param retries: 11:55:59 Configure the number of retries to allow before raising a 11:55:59 :class:`~urllib3.exceptions.MaxRetryError` exception. 11:55:59 11:55:59 If ``None`` (default) will retry 3 times, see ``Retry.DEFAULT``. Pass a 11:55:59 :class:`~urllib3.util.retry.Retry` object for fine-grained control 11:55:59 over different types of retries. 11:55:59 Pass an integer number to retry connection errors that many times, 11:55:59 but no other types of errors. Pass zero to never retry. 11:55:59 11:55:59 If ``False``, then retries are disabled and any exception is raised 11:55:59 immediately. Also, instead of raising a MaxRetryError on redirects, 11:55:59 the redirect response will be returned. 11:55:59 11:55:59 :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. 11:55:59 11:55:59 :param redirect: 11:55:59 If True, automatically handle redirects (status codes 301, 302, 11:55:59 303, 307, 308). Each redirect counts as a retry. Disabling retries 11:55:59 will disable redirect, too. 11:55:59 11:55:59 :param assert_same_host: 11:55:59 If ``True``, will make sure that the host of the pool requests is 11:55:59 consistent else will raise HostChangedError. When ``False``, you can 11:55:59 use the pool on an HTTP proxy and request foreign hosts. 11:55:59 11:55:59 :param timeout: 11:55:59 If specified, overrides the default timeout for this one 11:55:59 request. It may be a float (in seconds) or an instance of 11:55:59 :class:`urllib3.util.Timeout`. 11:55:59 11:55:59 :param pool_timeout: 11:55:59 If set and the pool is set to block=True, then this method will 11:55:59 block for ``pool_timeout`` seconds and raise EmptyPoolError if no 11:55:59 connection is available within the time period. 11:55:59 11:55:59 :param bool preload_content: 11:55:59 If True, the response's body will be preloaded into memory. 11:55:59 11:55:59 :param bool decode_content: 11:55:59 If True, will attempt to decode the body based on the 11:55:59 'content-encoding' header. 11:55:59 11:55:59 :param release_conn: 11:55:59 If False, then the urlopen call will not release the connection 11:55:59 back into the pool once a response is received (but will release if 11:55:59 you read the entire contents of the response such as when 11:55:59 `preload_content=True`). This is useful if you're not preloading 11:55:59 the response's content immediately. You will need to call 11:55:59 ``r.release_conn()`` on the response ``r`` to return the connection 11:55:59 back into the pool. If None, it takes the value of ``preload_content`` 11:55:59 which defaults to ``True``. 11:55:59 11:55:59 :param bool chunked: 11:55:59 If True, urllib3 will send the body using chunked transfer 11:55:59 encoding. Otherwise, urllib3 will send the body using the standard 11:55:59 content-length form. Defaults to False. 11:55:59 11:55:59 :param int body_pos: 11:55:59 Position to seek to in file-like body in the event of a retry or 11:55:59 redirect. 
Typically this won't need to be set because urllib3 will 11:55:59 auto-populate the value when needed. 11:55:59 """ 11:55:59 parsed_url = parse_url(url) 11:55:59 destination_scheme = parsed_url.scheme 11:55:59 11:55:59 if headers is None: 11:55:59 headers = self.headers 11:55:59 11:55:59 if not isinstance(retries, Retry): 11:55:59 retries = Retry.from_int(retries, redirect=redirect, default=self.retries) 11:55:59 11:55:59 if release_conn is None: 11:55:59 release_conn = preload_content 11:55:59 11:55:59 # Check host 11:55:59 if assert_same_host and not self.is_same_host(url): 11:55:59 raise HostChangedError(self, url, retries) 11:55:59 11:55:59 # Ensure that the URL we're connecting to is properly encoded 11:55:59 if url.startswith("/"): 11:55:59 url = to_str(_encode_target(url)) 11:55:59 else: 11:55:59 url = to_str(parsed_url.url) 11:55:59 11:55:59 conn = None 11:55:59 11:55:59 # Track whether `conn` needs to be released before 11:55:59 # returning/raising/recursing. Update this variable if necessary, and 11:55:59 # leave `release_conn` constant throughout the function. That way, if 11:55:59 # the function recurses, the original value of `release_conn` will be 11:55:59 # passed down into the recursive call, and its value will be respected. 11:55:59 # 11:55:59 # See issue #651 [1] for details. 11:55:59 # 11:55:59 # [1] 11:55:59 release_this_conn = release_conn 11:55:59 11:55:59 http_tunnel_required = connection_requires_http_tunnel( 11:55:59 self.proxy, self.proxy_config, destination_scheme 11:55:59 ) 11:55:59 11:55:59 # Merge the proxy headers. Only done when not using HTTP CONNECT. We 11:55:59 # have to copy the headers dict so we can safely change it without those 11:55:59 # changes being reflected in anyone else's copy. 11:55:59 if not http_tunnel_required: 11:55:59 headers = headers.copy() # type: ignore[attr-defined] 11:55:59 headers.update(self.proxy_headers) # type: ignore[union-attr] 11:55:59 11:55:59 # Must keep the exception bound to a separate variable or else Python 3 11:55:59 # complains about UnboundLocalError. 11:55:59 err = None 11:55:59 11:55:59 # Keep track of whether we cleanly exited the except block. This 11:55:59 # ensures we do proper cleanup in finally. 11:55:59 clean_exit = False 11:55:59 11:55:59 # Rewind body position, if needed. Record current position 11:55:59 # for future rewinds in the event of a redirect/retry. 11:55:59 body_pos = set_file_position(body, body_pos) 11:55:59 11:55:59 try: 11:55:59 # Request a connection from the queue. 11:55:59 timeout_obj = self._get_timeout(timeout) 11:55:59 conn = self._get_conn(timeout=pool_timeout) 11:55:59 11:55:59 conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] 11:55:59 11:55:59 # Is this a closed/new connection that requires CONNECT tunnelling? 11:55:59 if self.proxy is not None and http_tunnel_required and conn.is_closed: 11:55:59 try: 11:55:59 self._prepare_proxy(conn) 11:55:59 except (BaseSSLError, OSError, SocketTimeout) as e: 11:55:59 self._raise_timeout( 11:55:59 err=e, url=self.proxy.url, timeout_value=conn.timeout 11:55:59 ) 11:55:59 raise 11:55:59 11:55:59 # If we're going to release the connection in ``finally:``, then 11:55:59 # the response doesn't need to know about the connection. Otherwise 11:55:59 # it will also try to release it and we'll have a double-release 11:55:59 # mess. 
11:55:59 response_conn = conn if not release_conn else None 11:55:59 11:55:59 # Make the request on the HTTPConnection object 11:55:59 > response = self._make_request( 11:55:59 conn, 11:55:59 method, 11:55:59 url, 11:55:59 timeout=timeout_obj, 11:55:59 body=body, 11:55:59 headers=headers, 11:55:59 chunked=chunked, 11:55:59 retries=retries, 11:55:59 response_conn=response_conn, 11:55:59 preload_content=preload_content, 11:55:59 decode_content=decode_content, 11:55:59 **response_kw, 11:55:59 ) 11:55:59 11:55:59 ../.tox/tests_tapi/lib/python3.11/site-packages/urllib3/connectionpool.py:787: 11:55:59 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 11:55:59 ../.tox/tests_tapi/lib/python3.11/site-packages/urllib3/connectionpool.py:493: in _make_request 11:55:59 conn.request( 11:55:59 ../.tox/tests_tapi/lib/python3.11/site-packages/urllib3/connection.py:494: in request 11:55:59 self.endheaders() 11:55:59 /opt/pyenv/versions/3.11.10/lib/python3.11/http/client.py:1298: in endheaders 11:55:59 self._send_output(message_body, encode_chunked=encode_chunked) 11:55:59 /opt/pyenv/versions/3.11.10/lib/python3.11/http/client.py:1058: in _send_output 11:55:59 self.send(msg) 11:55:59 /opt/pyenv/versions/3.11.10/lib/python3.11/http/client.py:996: in send 11:55:59 self.connect() 11:55:59 ../.tox/tests_tapi/lib/python3.11/site-packages/urllib3/connection.py:325: in connect 11:55:59 self.sock = self._new_conn() 11:55:59 ^^^^^^^^^^^^^^^^ 11:55:59 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 11:55:59 11:55:59 self = 11:55:59 11:55:59 def _new_conn(self) -> socket.socket: 11:55:59 """Establish a socket connection and set nodelay settings on it. 11:55:59 11:55:59 :return: New socket connection. 11:55:59 """ 11:55:59 try: 11:55:59 sock = connection.create_connection( 11:55:59 (self._dns_host, self.port), 11:55:59 self.timeout, 11:55:59 source_address=self.source_address, 11:55:59 socket_options=self.socket_options, 11:55:59 ) 11:55:59 except socket.gaierror as e: 11:55:59 raise NameResolutionError(self.host, self, e) from e 11:55:59 except SocketTimeout as e: 11:55:59 raise ConnectTimeoutError( 11:55:59 self, 11:55:59 f"Connection to {self.host} timed out. (connect timeout={self.timeout})", 11:55:59 ) from e 11:55:59 11:55:59 except OSError as e: 11:55:59 > raise NewConnectionError( 11:55:59 self, f"Failed to establish a new connection: {e}" 11:55:59 ) from e 11:55:59 E urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused 11:55:59 11:55:59 ../.tox/tests_tapi/lib/python3.11/site-packages/urllib3/connection.py:213: NewConnectionError 11:55:59 11:55:59 The above exception was the direct cause of the following exception: 11:55:59 11:55:59 self = 11:55:59 request = , stream = False 11:55:59 timeout = Timeout(connect=30, read=30, total=None), verify = True, cert = None 11:55:59 proxies = OrderedDict() 11:55:59 11:55:59 def send( 11:55:59 self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 11:55:59 ): 11:55:59 """Sends PreparedRequest object. Returns Response object. 11:55:59 11:55:59 :param request: The :class:`PreparedRequest ` being sent. 11:55:59 :param stream: (optional) Whether to stream the request content. 11:55:59 :param timeout: (optional) How long to wait for the server to send 11:55:59 data before giving up, as a float, or a :ref:`(connect timeout, 11:55:59 read timeout) ` tuple. 
11:55:59 :type timeout: float or tuple or urllib3 Timeout object 11:55:59 :param verify: (optional) Either a boolean, in which case it controls whether 11:55:59 we verify the server's TLS certificate, or a string, in which case it 11:55:59 must be a path to a CA bundle to use 11:55:59 :param cert: (optional) Any user-provided SSL certificate to be trusted. 11:55:59 :param proxies: (optional) The proxies dictionary to apply to the request. 11:55:59 :rtype: requests.Response 11:55:59 """ 11:55:59 11:55:59 try: 11:55:59 conn = self.get_connection_with_tls_context( 11:55:59 request, verify, proxies=proxies, cert=cert 11:55:59 ) 11:55:59 except LocationValueError as e: 11:55:59 raise InvalidURL(e, request=request) 11:55:59 11:55:59 self.cert_verify(conn, request.url, verify, cert) 11:55:59 url = self.request_url(request, proxies) 11:55:59 self.add_headers( 11:55:59 request, 11:55:59 stream=stream, 11:55:59 timeout=timeout, 11:55:59 verify=verify, 11:55:59 cert=cert, 11:55:59 proxies=proxies, 11:55:59 ) 11:55:59 11:55:59 chunked = not (request.body is None or "Content-Length" in request.headers) 11:55:59 11:55:59 if isinstance(timeout, tuple): 11:55:59 try: 11:55:59 connect, read = timeout 11:55:59 timeout = TimeoutSauce(connect=connect, read=read) 11:55:59 except ValueError: 11:55:59 raise ValueError( 11:55:59 f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 11:55:59 f"or a single float to set both timeouts to the same value." 11:55:59 ) 11:55:59 elif isinstance(timeout, TimeoutSauce): 11:55:59 pass 11:55:59 else: 11:55:59 timeout = TimeoutSauce(connect=timeout, read=timeout) 11:55:59 11:55:59 try: 11:55:59 > resp = conn.urlopen( 11:55:59 method=request.method, 11:55:59 url=url, 11:55:59 body=request.body, 11:55:59 headers=request.headers, 11:55:59 redirect=False, 11:55:59 assert_same_host=False, 11:55:59 preload_content=False, 11:55:59 decode_content=False, 11:55:59 retries=self.max_retries, 11:55:59 timeout=timeout, 11:55:59 chunked=chunked, 11:55:59 ) 11:55:59 11:55:59 ../.tox/tests_tapi/lib/python3.11/site-packages/requests/adapters.py:667: 11:55:59 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 11:55:59 ../.tox/tests_tapi/lib/python3.11/site-packages/urllib3/connectionpool.py:841: in urlopen 11:55:59 retries = retries.increment( 11:55:59 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 11:55:59 11:55:59 self = Retry(total=0, connect=None, read=False, redirect=None, status=None) 11:55:59 method = 'POST', url = '/rests/operations/tapi-topology:get-topology-details' 11:55:59 response = None 11:55:59 error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused') 11:55:59 _pool = 11:55:59 _stacktrace = 11:55:59 11:55:59 def increment( 11:55:59 self, 11:55:59 method: str | None = None, 11:55:59 url: str | None = None, 11:55:59 response: BaseHTTPResponse | None = None, 11:55:59 error: Exception | None = None, 11:55:59 _pool: ConnectionPool | None = None, 11:55:59 _stacktrace: TracebackType | None = None, 11:55:59 ) -> Self: 11:55:59 """Return a new Retry object with incremented retry counters. 11:55:59 11:55:59 :param response: A response object, or None, if the server did not 11:55:59 return a response. 11:55:59 :type response: :class:`~urllib3.response.BaseHTTPResponse` 11:55:59 :param Exception error: An error encountered during the request, or 11:55:59 None if the response was received successfully. 11:55:59 11:55:59 :return: A new ``Retry`` object. 
11:55:59 """ 11:55:59 if self.total is False and error: 11:55:59 # Disabled, indicate to re-raise the error. 11:55:59 raise reraise(type(error), error, _stacktrace) 11:55:59 11:55:59 total = self.total 11:55:59 if total is not None: 11:55:59 total -= 1 11:55:59 11:55:59 connect = self.connect 11:55:59 read = self.read 11:55:59 redirect = self.redirect 11:55:59 status_count = self.status 11:55:59 other = self.other 11:55:59 cause = "unknown" 11:55:59 status = None 11:55:59 redirect_location = None 11:55:59 11:55:59 if error and self._is_connection_error(error): 11:55:59 # Connect retry? 11:55:59 if connect is False: 11:55:59 raise reraise(type(error), error, _stacktrace) 11:55:59 elif connect is not None: 11:55:59 connect -= 1 11:55:59 11:55:59 elif error and self._is_read_error(error): 11:55:59 # Read retry? 11:55:59 if read is False or method is None or not self._is_method_retryable(method): 11:55:59 raise reraise(type(error), error, _stacktrace) 11:55:59 elif read is not None: 11:55:59 read -= 1 11:55:59 11:55:59 elif error: 11:55:59 # Other retry? 11:55:59 if other is not None: 11:55:59 other -= 1 11:55:59 11:55:59 elif response and response.get_redirect_location(): 11:55:59 # Redirect retry? 11:55:59 if redirect is not None: 11:55:59 redirect -= 1 11:55:59 cause = "too many redirects" 11:55:59 response_redirect_location = response.get_redirect_location() 11:55:59 if response_redirect_location: 11:55:59 redirect_location = response_redirect_location 11:55:59 status = response.status 11:55:59 11:55:59 else: 11:55:59 # Incrementing because of a server error like a 500 in 11:55:59 # status_forcelist and the given method is in the allowed_methods 11:55:59 cause = ResponseError.GENERIC_ERROR 11:55:59 if response and response.status: 11:55:59 if status_count is not None: 11:55:59 status_count -= 1 11:55:59 cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status) 11:55:59 status = response.status 11:55:59 11:55:59 history = self.history + ( 11:55:59 RequestHistory(method, url, error, status, redirect_location), 11:55:59 ) 11:55:59 11:55:59 new_retry = self.new( 11:55:59 total=total, 11:55:59 connect=connect, 11:55:59 read=read, 11:55:59 redirect=redirect, 11:55:59 status=status_count, 11:55:59 other=other, 11:55:59 history=history, 11:55:59 ) 11:55:59 11:55:59 if new_retry.is_exhausted(): 11:55:59 reason = error or ResponseError(cause) 11:55:59 > raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type] 11:55:59 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 11:55:59 E urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=8183): Max retries exceeded with url: /rests/operations/tapi-topology:get-topology-details (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 11:55:59 11:55:59 ../.tox/tests_tapi/lib/python3.11/site-packages/urllib3/util/retry.py:519: MaxRetryError 11:55:59 11:55:59 During handling of the above exception, another exception occurred: 11:55:59 11:55:59 self = 11:55:59 11:55:59 def test_18_check_tapi_topology_T100G(self): 11:55:59 self.tapi_topo["topology-id"] = test_utils.T100GE_UUID 11:55:59 > response = test_utils.transportpce_api_rpc_request( 11:55:59 'tapi-topology', 'get-topology-details', self.tapi_topo) 11:55:59 11:55:59 transportpce_tests/tapi/test01_abstracted_topology.py:342: 11:55:59 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 11:55:59 transportpce_tests/common/test_utils.py:729: in transportpce_api_rpc_request 
11:55:59 response = post_request(url, data) 11:55:59 ^^^^^^^^^^^^^^^^^^^^^^^ 11:55:59 transportpce_tests/common/test_utils.py:142: in post_request 11:55:59 return requests.request( 11:55:59 ../.tox/tests_tapi/lib/python3.11/site-packages/requests/api.py:59: in request 11:55:59 return session.request(method=method, url=url, **kwargs) 11:55:59 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 11:55:59 ../.tox/tests_tapi/lib/python3.11/site-packages/requests/sessions.py:589: in request 11:55:59 resp = self.send(prep, **send_kwargs) 11:55:59 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 11:55:59 ../.tox/tests_tapi/lib/python3.11/site-packages/requests/sessions.py:703: in send 11:55:59 r = adapter.send(request, **kwargs) 11:55:59 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 11:55:59 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 11:55:59 11:55:59 self = 11:55:59 request = , stream = False 11:55:59 timeout = Timeout(connect=30, read=30, total=None), verify = True, cert = None 11:55:59 proxies = OrderedDict() 11:55:59 11:55:59 def send( 11:55:59 self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 11:55:59 ): 11:55:59 """Sends PreparedRequest object. Returns Response object. 11:55:59 11:55:59 :param request: The :class:`PreparedRequest ` being sent. 11:55:59 :param stream: (optional) Whether to stream the request content. 11:55:59 :param timeout: (optional) How long to wait for the server to send 11:55:59 data before giving up, as a float, or a :ref:`(connect timeout, 11:55:59 read timeout) ` tuple. 11:55:59 :type timeout: float or tuple or urllib3 Timeout object 11:55:59 :param verify: (optional) Either a boolean, in which case it controls whether 11:55:59 we verify the server's TLS certificate, or a string, in which case it 11:55:59 must be a path to a CA bundle to use 11:55:59 :param cert: (optional) Any user-provided SSL certificate to be trusted. 11:55:59 :param proxies: (optional) The proxies dictionary to apply to the request. 11:55:59 :rtype: requests.Response 11:55:59 """ 11:55:59 11:55:59 try: 11:55:59 conn = self.get_connection_with_tls_context( 11:55:59 request, verify, proxies=proxies, cert=cert 11:55:59 ) 11:55:59 except LocationValueError as e: 11:55:59 raise InvalidURL(e, request=request) 11:55:59 11:55:59 self.cert_verify(conn, request.url, verify, cert) 11:55:59 url = self.request_url(request, proxies) 11:55:59 self.add_headers( 11:55:59 request, 11:55:59 stream=stream, 11:55:59 timeout=timeout, 11:55:59 verify=verify, 11:55:59 cert=cert, 11:55:59 proxies=proxies, 11:55:59 ) 11:55:59 11:55:59 chunked = not (request.body is None or "Content-Length" in request.headers) 11:55:59 11:55:59 if isinstance(timeout, tuple): 11:55:59 try: 11:55:59 connect, read = timeout 11:55:59 timeout = TimeoutSauce(connect=connect, read=read) 11:55:59 except ValueError: 11:55:59 raise ValueError( 11:55:59 f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 11:55:59 f"or a single float to set both timeouts to the same value." 
11:55:59 ) 11:55:59 elif isinstance(timeout, TimeoutSauce): 11:55:59 pass 11:55:59 else: 11:55:59 timeout = TimeoutSauce(connect=timeout, read=timeout) 11:55:59 11:55:59 try: 11:55:59 resp = conn.urlopen( 11:55:59 method=request.method, 11:55:59 url=url, 11:55:59 body=request.body, 11:55:59 headers=request.headers, 11:55:59 redirect=False, 11:55:59 assert_same_host=False, 11:55:59 preload_content=False, 11:55:59 decode_content=False, 11:55:59 retries=self.max_retries, 11:55:59 timeout=timeout, 11:55:59 chunked=chunked, 11:55:59 ) 11:55:59 11:55:59 except (ProtocolError, OSError) as err: 11:55:59 raise ConnectionError(err, request=request) 11:55:59 11:55:59 except MaxRetryError as e: 11:55:59 if isinstance(e.reason, ConnectTimeoutError): 11:55:59 # TODO: Remove this in 3.0.0: see #2811 11:55:59 if not isinstance(e.reason, NewConnectionError): 11:55:59 raise ConnectTimeout(e, request=request) 11:55:59 11:55:59 if isinstance(e.reason, ResponseError): 11:55:59 raise RetryError(e, request=request) 11:55:59 11:55:59 if isinstance(e.reason, _ProxyError): 11:55:59 raise ProxyError(e, request=request) 11:55:59 11:55:59 if isinstance(e.reason, _SSLError): 11:55:59 # This branch is for urllib3 v1.22 and later. 11:55:59 raise SSLError(e, request=request) 11:55:59 11:55:59 > raise ConnectionError(e, request=request) 11:55:59 E requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=8183): Max retries exceeded with url: /rests/operations/tapi-topology:get-topology-details (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 11:55:59 11:55:59 ../.tox/tests_tapi/lib/python3.11/site-packages/requests/adapters.py:700: ConnectionError 11:55:59 ----------------------------- Captured stdout call ----------------------------- 11:55:59 execution of test_18_check_tapi_topology_T100G 11:55:59 _____________ TransportTapitesting.test_19_check_tapi_topology_T0 ______________ 11:55:59 11:55:59 self = 11:55:59 11:55:59 def _new_conn(self) -> socket.socket: 11:55:59 """Establish a socket connection and set nodelay settings on it. 11:55:59 11:55:59 :return: New socket connection. 11:55:59 """ 11:55:59 try: 11:55:59 > sock = connection.create_connection( 11:55:59 (self._dns_host, self.port), 11:55:59 self.timeout, 11:55:59 source_address=self.source_address, 11:55:59 socket_options=self.socket_options, 11:55:59 ) 11:55:59 11:55:59 ../.tox/tests_tapi/lib/python3.11/site-packages/urllib3/connection.py:198: 11:55:59 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 11:55:59 ../.tox/tests_tapi/lib/python3.11/site-packages/urllib3/util/connection.py:85: in create_connection 11:55:59 raise err 11:55:59 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 11:55:59 11:55:59 address = ('localhost', 8183), timeout = 30, source_address = None 11:55:59 socket_options = [(6, 1, 1)] 11:55:59 11:55:59 def create_connection( 11:55:59 address: tuple[str, int], 11:55:59 timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 11:55:59 source_address: tuple[str, int] | None = None, 11:55:59 socket_options: _TYPE_SOCKET_OPTIONS | None = None, 11:55:59 ) -> socket.socket: 11:55:59 """Connect to *address* and return the socket object. 11:55:59 11:55:59 Convenience function. Connect to *address* (a 2-tuple ``(host, 11:55:59 port)``) and return the socket object. Passing the optional 11:55:59 *timeout* parameter will set the timeout on the socket instance 11:55:59 before attempting to connect. 
If no *timeout* is supplied, the 11:55:59 global default timeout setting returned by :func:`socket.getdefaulttimeout` 11:55:59 is used. If *source_address* is set it must be a tuple of (host, port) 11:55:59 for the socket to bind as a source address before making the connection. 11:55:59 An host of '' or port 0 tells the OS to use the default. 11:55:59 """ 11:55:59 11:55:59 host, port = address 11:55:59 if host.startswith("["): 11:55:59 host = host.strip("[]") 11:55:59 err = None 11:55:59 11:55:59 # Using the value from allowed_gai_family() in the context of getaddrinfo lets 11:55:59 # us select whether to work with IPv4 DNS records, IPv6 records, or both. 11:55:59 # The original create_connection function always returns all records. 11:55:59 family = allowed_gai_family() 11:55:59 11:55:59 try: 11:55:59 host.encode("idna") 11:55:59 except UnicodeError: 11:55:59 raise LocationParseError(f"'{host}', label empty or too long") from None 11:55:59 11:55:59 for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM): 11:55:59 af, socktype, proto, canonname, sa = res 11:55:59 sock = None 11:55:59 try: 11:55:59 sock = socket.socket(af, socktype, proto) 11:55:59 11:55:59 # If provided, set socket level options before connecting. 11:55:59 _set_socket_options(sock, socket_options) 11:55:59 11:55:59 if timeout is not _DEFAULT_TIMEOUT: 11:55:59 sock.settimeout(timeout) 11:55:59 if source_address: 11:55:59 sock.bind(source_address) 11:55:59 > sock.connect(sa) 11:55:59 E ConnectionRefusedError: [Errno 111] Connection refused 11:55:59 11:55:59 ../.tox/tests_tapi/lib/python3.11/site-packages/urllib3/util/connection.py:73: ConnectionRefusedError 11:55:59 11:55:59 The above exception was the direct cause of the following exception: 11:55:59 11:55:59 self = 11:55:59 method = 'POST', url = '/rests/operations/tapi-topology:get-topology-details' 11:55:59 body = '{"input": {"topology-id": "747c670e-7a07-3dab-b379-5b1cd17402a3"}}' 11:55:59 headers = {'User-Agent': 'python-requests/2.32.4', 'Accept-Encoding': 'gzip, deflate', 'Accept': 'application/json', 'Connection': 'keep-alive', 'Content-Type': 'application/json', 'Content-Length': '66', 'Authorization': 'Basic YWRtaW46YWRtaW4='} 11:55:59 retries = Retry(total=0, connect=None, read=False, redirect=None, status=None) 11:55:59 redirect = False, assert_same_host = False 11:55:59 timeout = Timeout(connect=30, read=30, total=None), pool_timeout = None 11:55:59 release_conn = False, chunked = False, body_pos = None, preload_content = False 11:55:59 decode_content = False, response_kw = {} 11:55:59 parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/rests/operations/tapi-topology:get-topology-details', query=None, fragment=None) 11:55:59 destination_scheme = None, conn = None, release_this_conn = True 11:55:59 http_tunnel_required = False, err = None, clean_exit = False 11:55:59 11:55:59 def urlopen( # type: ignore[override] 11:55:59 self, 11:55:59 method: str, 11:55:59 url: str, 11:55:59 body: _TYPE_BODY | None = None, 11:55:59 headers: typing.Mapping[str, str] | None = None, 11:55:59 retries: Retry | bool | int | None = None, 11:55:59 redirect: bool = True, 11:55:59 assert_same_host: bool = True, 11:55:59 timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 11:55:59 pool_timeout: int | None = None, 11:55:59 release_conn: bool | None = None, 11:55:59 chunked: bool = False, 11:55:59 body_pos: _TYPE_BODY_POSITION | None = None, 11:55:59 preload_content: bool = True, 11:55:59 decode_content: bool = True, 11:55:59 **response_kw: typing.Any, 11:55:59 ) -> 
BaseHTTPResponse: 11:55:59 """ 11:55:59 Get a connection from the pool and perform an HTTP request. This is the 11:55:59 lowest level call for making a request, so you'll need to specify all 11:55:59 the raw details. 11:55:59 11:55:59 .. note:: 11:55:59 11:55:59 More commonly, it's appropriate to use a convenience method 11:55:59 such as :meth:`request`. 11:55:59 11:55:59 .. note:: 11:55:59 11:55:59 `release_conn` will only behave as expected if 11:55:59 `preload_content=False` because we want to make 11:55:59 `preload_content=False` the default behaviour someday soon without 11:55:59 breaking backwards compatibility. 11:55:59 11:55:59 :param method: 11:55:59 HTTP request method (such as GET, POST, PUT, etc.) 11:55:59 11:55:59 :param url: 11:55:59 The URL to perform the request on. 11:55:59 11:55:59 :param body: 11:55:59 Data to send in the request body, either :class:`str`, :class:`bytes`, 11:55:59 an iterable of :class:`str`/:class:`bytes`, or a file-like object. 11:55:59 11:55:59 :param headers: 11:55:59 Dictionary of custom headers to send, such as User-Agent, 11:55:59 If-None-Match, etc. If None, pool headers are used. If provided, 11:55:59 these headers completely replace any pool-specific headers. 11:55:59 11:55:59 :param retries: 11:55:59 Configure the number of retries to allow before raising a 11:55:59 :class:`~urllib3.exceptions.MaxRetryError` exception. 11:55:59 11:55:59 If ``None`` (default) will retry 3 times, see ``Retry.DEFAULT``. Pass a 11:55:59 :class:`~urllib3.util.retry.Retry` object for fine-grained control 11:55:59 over different types of retries. 11:55:59 Pass an integer number to retry connection errors that many times, 11:55:59 but no other types of errors. Pass zero to never retry. 11:55:59 11:55:59 If ``False``, then retries are disabled and any exception is raised 11:55:59 immediately. Also, instead of raising a MaxRetryError on redirects, 11:55:59 the redirect response will be returned. 11:55:59 11:55:59 :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. 11:55:59 11:55:59 :param redirect: 11:55:59 If True, automatically handle redirects (status codes 301, 302, 11:55:59 303, 307, 308). Each redirect counts as a retry. Disabling retries 11:55:59 will disable redirect, too. 11:55:59 11:55:59 :param assert_same_host: 11:55:59 If ``True``, will make sure that the host of the pool requests is 11:55:59 consistent else will raise HostChangedError. When ``False``, you can 11:55:59 use the pool on an HTTP proxy and request foreign hosts. 11:55:59 11:55:59 :param timeout: 11:55:59 If specified, overrides the default timeout for this one 11:55:59 request. It may be a float (in seconds) or an instance of 11:55:59 :class:`urllib3.util.Timeout`. 11:55:59 11:55:59 :param pool_timeout: 11:55:59 If set and the pool is set to block=True, then this method will 11:55:59 block for ``pool_timeout`` seconds and raise EmptyPoolError if no 11:55:59 connection is available within the time period. 11:55:59 11:55:59 :param bool preload_content: 11:55:59 If True, the response's body will be preloaded into memory. 11:55:59 11:55:59 :param bool decode_content: 11:55:59 If True, will attempt to decode the body based on the 11:55:59 'content-encoding' header. 11:55:59 11:55:59 :param release_conn: 11:55:59 If False, then the urlopen call will not release the connection 11:55:59 back into the pool once a response is received (but will release if 11:55:59 you read the entire contents of the response such as when 11:55:59 `preload_content=True`). 
This is useful if you're not preloading 11:55:59 the response's content immediately. You will need to call 11:55:59 ``r.release_conn()`` on the response ``r`` to return the connection 11:55:59 back into the pool. If None, it takes the value of ``preload_content`` 11:55:59 which defaults to ``True``. 11:55:59 11:55:59 :param bool chunked: 11:55:59 If True, urllib3 will send the body using chunked transfer 11:55:59 encoding. Otherwise, urllib3 will send the body using the standard 11:55:59 content-length form. Defaults to False. 11:55:59 11:55:59 :param int body_pos: 11:55:59 Position to seek to in file-like body in the event of a retry or 11:55:59 redirect. Typically this won't need to be set because urllib3 will 11:55:59 auto-populate the value when needed. 11:55:59 """ 11:55:59 parsed_url = parse_url(url) 11:55:59 destination_scheme = parsed_url.scheme 11:55:59 11:55:59 if headers is None: 11:55:59 headers = self.headers 11:55:59 11:55:59 if not isinstance(retries, Retry): 11:55:59 retries = Retry.from_int(retries, redirect=redirect, default=self.retries) 11:55:59 11:55:59 if release_conn is None: 11:55:59 release_conn = preload_content 11:55:59 11:55:59 # Check host 11:55:59 if assert_same_host and not self.is_same_host(url): 11:55:59 raise HostChangedError(self, url, retries) 11:55:59 11:55:59 # Ensure that the URL we're connecting to is properly encoded 11:55:59 if url.startswith("/"): 11:55:59 url = to_str(_encode_target(url)) 11:55:59 else: 11:55:59 url = to_str(parsed_url.url) 11:55:59 11:55:59 conn = None 11:55:59 11:55:59 # Track whether `conn` needs to be released before 11:55:59 # returning/raising/recursing. Update this variable if necessary, and 11:55:59 # leave `release_conn` constant throughout the function. That way, if 11:55:59 # the function recurses, the original value of `release_conn` will be 11:55:59 # passed down into the recursive call, and its value will be respected. 11:55:59 # 11:55:59 # See issue #651 [1] for details. 11:55:59 # 11:55:59 # [1] 11:55:59 release_this_conn = release_conn 11:55:59 11:55:59 http_tunnel_required = connection_requires_http_tunnel( 11:55:59 self.proxy, self.proxy_config, destination_scheme 11:55:59 ) 11:55:59 11:55:59 # Merge the proxy headers. Only done when not using HTTP CONNECT. We 11:55:59 # have to copy the headers dict so we can safely change it without those 11:55:59 # changes being reflected in anyone else's copy. 11:55:59 if not http_tunnel_required: 11:55:59 headers = headers.copy() # type: ignore[attr-defined] 11:55:59 headers.update(self.proxy_headers) # type: ignore[union-attr] 11:55:59 11:55:59 # Must keep the exception bound to a separate variable or else Python 3 11:55:59 # complains about UnboundLocalError. 11:55:59 err = None 11:55:59 11:55:59 # Keep track of whether we cleanly exited the except block. This 11:55:59 # ensures we do proper cleanup in finally. 11:55:59 clean_exit = False 11:55:59 11:55:59 # Rewind body position, if needed. Record current position 11:55:59 # for future rewinds in the event of a redirect/retry. 11:55:59 body_pos = set_file_position(body, body_pos) 11:55:59 11:55:59 try: 11:55:59 # Request a connection from the queue. 11:55:59 timeout_obj = self._get_timeout(timeout) 11:55:59 conn = self._get_conn(timeout=pool_timeout) 11:55:59 11:55:59 conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] 11:55:59 11:55:59 # Is this a closed/new connection that requires CONNECT tunnelling? 
11:55:59 if self.proxy is not None and http_tunnel_required and conn.is_closed: 11:55:59 try: 11:55:59 self._prepare_proxy(conn) 11:55:59 except (BaseSSLError, OSError, SocketTimeout) as e: 11:55:59 self._raise_timeout( 11:55:59 err=e, url=self.proxy.url, timeout_value=conn.timeout 11:55:59 ) 11:55:59 raise 11:55:59 11:55:59 # If we're going to release the connection in ``finally:``, then 11:55:59 # the response doesn't need to know about the connection. Otherwise 11:55:59 # it will also try to release it and we'll have a double-release 11:55:59 # mess. 11:55:59 response_conn = conn if not release_conn else None 11:55:59 11:55:59 # Make the request on the HTTPConnection object 11:55:59 > response = self._make_request( 11:55:59 conn, 11:55:59 method, 11:55:59 url, 11:55:59 timeout=timeout_obj, 11:55:59 body=body, 11:55:59 headers=headers, 11:55:59 chunked=chunked, 11:55:59 retries=retries, 11:55:59 response_conn=response_conn, 11:55:59 preload_content=preload_content, 11:55:59 decode_content=decode_content, 11:55:59 **response_kw, 11:55:59 ) 11:55:59 11:55:59 ../.tox/tests_tapi/lib/python3.11/site-packages/urllib3/connectionpool.py:787: 11:55:59 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 11:55:59 ../.tox/tests_tapi/lib/python3.11/site-packages/urllib3/connectionpool.py:493: in _make_request 11:55:59 conn.request( 11:55:59 ../.tox/tests_tapi/lib/python3.11/site-packages/urllib3/connection.py:494: in request 11:55:59 self.endheaders() 11:55:59 /opt/pyenv/versions/3.11.10/lib/python3.11/http/client.py:1298: in endheaders 11:55:59 self._send_output(message_body, encode_chunked=encode_chunked) 11:55:59 /opt/pyenv/versions/3.11.10/lib/python3.11/http/client.py:1058: in _send_output 11:55:59 self.send(msg) 11:55:59 /opt/pyenv/versions/3.11.10/lib/python3.11/http/client.py:996: in send 11:55:59 self.connect() 11:55:59 ../.tox/tests_tapi/lib/python3.11/site-packages/urllib3/connection.py:325: in connect 11:55:59 self.sock = self._new_conn() 11:55:59 ^^^^^^^^^^^^^^^^ 11:55:59 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 11:55:59 11:55:59 self = 11:55:59 11:55:59 def _new_conn(self) -> socket.socket: 11:55:59 """Establish a socket connection and set nodelay settings on it. 11:55:59 11:55:59 :return: New socket connection. 11:55:59 """ 11:55:59 try: 11:55:59 sock = connection.create_connection( 11:55:59 (self._dns_host, self.port), 11:55:59 self.timeout, 11:55:59 source_address=self.source_address, 11:55:59 socket_options=self.socket_options, 11:55:59 ) 11:55:59 except socket.gaierror as e: 11:55:59 raise NameResolutionError(self.host, self, e) from e 11:55:59 except SocketTimeout as e: 11:55:59 raise ConnectTimeoutError( 11:55:59 self, 11:55:59 f"Connection to {self.host} timed out. 
(connect timeout={self.timeout})", 11:55:59 ) from e 11:55:59 11:55:59 except OSError as e: 11:55:59 > raise NewConnectionError( 11:55:59 self, f"Failed to establish a new connection: {e}" 11:55:59 ) from e 11:55:59 E urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused 11:55:59 11:55:59 ../.tox/tests_tapi/lib/python3.11/site-packages/urllib3/connection.py:213: NewConnectionError 11:55:59 11:55:59 The above exception was the direct cause of the following exception: 11:55:59 11:55:59 self = 11:55:59 request = , stream = False 11:55:59 timeout = Timeout(connect=30, read=30, total=None), verify = True, cert = None 11:55:59 proxies = OrderedDict() 11:55:59 11:55:59 def send( 11:55:59 self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 11:55:59 ): 11:55:59 """Sends PreparedRequest object. Returns Response object. 11:55:59 11:55:59 :param request: The :class:`PreparedRequest ` being sent. 11:55:59 :param stream: (optional) Whether to stream the request content. 11:55:59 :param timeout: (optional) How long to wait for the server to send 11:55:59 data before giving up, as a float, or a :ref:`(connect timeout, 11:55:59 read timeout) ` tuple. 11:55:59 :type timeout: float or tuple or urllib3 Timeout object 11:55:59 :param verify: (optional) Either a boolean, in which case it controls whether 11:55:59 we verify the server's TLS certificate, or a string, in which case it 11:55:59 must be a path to a CA bundle to use 11:55:59 :param cert: (optional) Any user-provided SSL certificate to be trusted. 11:55:59 :param proxies: (optional) The proxies dictionary to apply to the request. 11:55:59 :rtype: requests.Response 11:55:59 """ 11:55:59 11:55:59 try: 11:55:59 conn = self.get_connection_with_tls_context( 11:55:59 request, verify, proxies=proxies, cert=cert 11:55:59 ) 11:55:59 except LocationValueError as e: 11:55:59 raise InvalidURL(e, request=request) 11:55:59 11:55:59 self.cert_verify(conn, request.url, verify, cert) 11:55:59 url = self.request_url(request, proxies) 11:55:59 self.add_headers( 11:55:59 request, 11:55:59 stream=stream, 11:55:59 timeout=timeout, 11:55:59 verify=verify, 11:55:59 cert=cert, 11:55:59 proxies=proxies, 11:55:59 ) 11:55:59 11:55:59 chunked = not (request.body is None or "Content-Length" in request.headers) 11:55:59 11:55:59 if isinstance(timeout, tuple): 11:55:59 try: 11:55:59 connect, read = timeout 11:55:59 timeout = TimeoutSauce(connect=connect, read=read) 11:55:59 except ValueError: 11:55:59 raise ValueError( 11:55:59 f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 11:55:59 f"or a single float to set both timeouts to the same value." 
11:55:59 ) 11:55:59 elif isinstance(timeout, TimeoutSauce): 11:55:59 pass 11:55:59 else: 11:55:59 timeout = TimeoutSauce(connect=timeout, read=timeout) 11:55:59 11:55:59 try: 11:55:59 > resp = conn.urlopen( 11:55:59 method=request.method, 11:55:59 url=url, 11:55:59 body=request.body, 11:55:59 headers=request.headers, 11:55:59 redirect=False, 11:55:59 assert_same_host=False, 11:55:59 preload_content=False, 11:55:59 decode_content=False, 11:55:59 retries=self.max_retries, 11:55:59 timeout=timeout, 11:55:59 chunked=chunked, 11:55:59 ) 11:55:59 11:55:59 ../.tox/tests_tapi/lib/python3.11/site-packages/requests/adapters.py:667: 11:55:59 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 11:55:59 ../.tox/tests_tapi/lib/python3.11/site-packages/urllib3/connectionpool.py:841: in urlopen 11:55:59 retries = retries.increment( 11:55:59 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 11:55:59 11:55:59 self = Retry(total=0, connect=None, read=False, redirect=None, status=None) 11:55:59 method = 'POST', url = '/rests/operations/tapi-topology:get-topology-details' 11:55:59 response = None 11:55:59 error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused') 11:55:59 _pool = 11:55:59 _stacktrace = 11:55:59 11:55:59 def increment( 11:55:59 self, 11:55:59 method: str | None = None, 11:55:59 url: str | None = None, 11:55:59 response: BaseHTTPResponse | None = None, 11:55:59 error: Exception | None = None, 11:55:59 _pool: ConnectionPool | None = None, 11:55:59 _stacktrace: TracebackType | None = None, 11:55:59 ) -> Self: 11:55:59 """Return a new Retry object with incremented retry counters. 11:55:59 11:55:59 :param response: A response object, or None, if the server did not 11:55:59 return a response. 11:55:59 :type response: :class:`~urllib3.response.BaseHTTPResponse` 11:55:59 :param Exception error: An error encountered during the request, or 11:55:59 None if the response was received successfully. 11:55:59 11:55:59 :return: A new ``Retry`` object. 11:55:59 """ 11:55:59 if self.total is False and error: 11:55:59 # Disabled, indicate to re-raise the error. 11:55:59 raise reraise(type(error), error, _stacktrace) 11:55:59 11:55:59 total = self.total 11:55:59 if total is not None: 11:55:59 total -= 1 11:55:59 11:55:59 connect = self.connect 11:55:59 read = self.read 11:55:59 redirect = self.redirect 11:55:59 status_count = self.status 11:55:59 other = self.other 11:55:59 cause = "unknown" 11:55:59 status = None 11:55:59 redirect_location = None 11:55:59 11:55:59 if error and self._is_connection_error(error): 11:55:59 # Connect retry? 11:55:59 if connect is False: 11:55:59 raise reraise(type(error), error, _stacktrace) 11:55:59 elif connect is not None: 11:55:59 connect -= 1 11:55:59 11:55:59 elif error and self._is_read_error(error): 11:55:59 # Read retry? 11:55:59 if read is False or method is None or not self._is_method_retryable(method): 11:55:59 raise reraise(type(error), error, _stacktrace) 11:55:59 elif read is not None: 11:55:59 read -= 1 11:55:59 11:55:59 elif error: 11:55:59 # Other retry? 11:55:59 if other is not None: 11:55:59 other -= 1 11:55:59 11:55:59 elif response and response.get_redirect_location(): 11:55:59 # Redirect retry? 
11:55:59 if redirect is not None: 11:55:59 redirect -= 1 11:55:59 cause = "too many redirects" 11:55:59 response_redirect_location = response.get_redirect_location() 11:55:59 if response_redirect_location: 11:55:59 redirect_location = response_redirect_location 11:55:59 status = response.status 11:55:59 11:55:59 else: 11:55:59 # Incrementing because of a server error like a 500 in 11:55:59 # status_forcelist and the given method is in the allowed_methods 11:55:59 cause = ResponseError.GENERIC_ERROR 11:55:59 if response and response.status: 11:55:59 if status_count is not None: 11:55:59 status_count -= 1 11:55:59 cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status) 11:55:59 status = response.status 11:55:59 11:55:59 history = self.history + ( 11:55:59 RequestHistory(method, url, error, status, redirect_location), 11:55:59 ) 11:55:59 11:55:59 new_retry = self.new( 11:55:59 total=total, 11:55:59 connect=connect, 11:55:59 read=read, 11:55:59 redirect=redirect, 11:55:59 status=status_count, 11:55:59 other=other, 11:55:59 history=history, 11:55:59 ) 11:55:59 11:55:59 if new_retry.is_exhausted(): 11:55:59 reason = error or ResponseError(cause) 11:55:59 > raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type] 11:55:59 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 11:55:59 E urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=8183): Max retries exceeded with url: /rests/operations/tapi-topology:get-topology-details (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 11:55:59 11:55:59 ../.tox/tests_tapi/lib/python3.11/site-packages/urllib3/util/retry.py:519: MaxRetryError 11:55:59 11:55:59 During handling of the above exception, another exception occurred: 11:55:59 11:55:59 self = 11:55:59 11:55:59 def test_19_check_tapi_topology_T0(self): 11:55:59 self.tapi_topo["topology-id"] = test_utils.T0_MULTILAYER_TOPO_UUID 11:55:59 > response = test_utils.transportpce_api_rpc_request( 11:55:59 'tapi-topology', 'get-topology-details', self.tapi_topo) 11:55:59 11:55:59 transportpce_tests/tapi/test01_abstracted_topology.py:356: 11:55:59 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 11:55:59 transportpce_tests/common/test_utils.py:729: in transportpce_api_rpc_request 11:55:59 response = post_request(url, data) 11:55:59 ^^^^^^^^^^^^^^^^^^^^^^^ 11:55:59 transportpce_tests/common/test_utils.py:142: in post_request 11:55:59 return requests.request( 11:55:59 ../.tox/tests_tapi/lib/python3.11/site-packages/requests/api.py:59: in request 11:55:59 return session.request(method=method, url=url, **kwargs) 11:55:59 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 11:55:59 ../.tox/tests_tapi/lib/python3.11/site-packages/requests/sessions.py:589: in request 11:55:59 resp = self.send(prep, **send_kwargs) 11:55:59 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 11:55:59 ../.tox/tests_tapi/lib/python3.11/site-packages/requests/sessions.py:703: in send 11:55:59 r = adapter.send(request, **kwargs) 11:55:59 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 11:55:59 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 11:55:59 11:55:59 self = 11:55:59 request = , stream = False 11:55:59 timeout = Timeout(connect=30, read=30, total=None), verify = True, cert = None 11:55:59 proxies = OrderedDict() 11:55:59 11:55:59 def send( 11:55:59 self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 11:55:59 ): 11:55:59 """Sends PreparedRequest object. 
Returns Response object. 11:55:59 11:55:59 :param request: The :class:`PreparedRequest ` being sent. 11:55:59 :param stream: (optional) Whether to stream the request content. 11:55:59 :param timeout: (optional) How long to wait for the server to send 11:55:59 data before giving up, as a float, or a :ref:`(connect timeout, 11:55:59 read timeout) ` tuple. 11:55:59 :type timeout: float or tuple or urllib3 Timeout object 11:55:59 :param verify: (optional) Either a boolean, in which case it controls whether 11:55:59 we verify the server's TLS certificate, or a string, in which case it 11:55:59 must be a path to a CA bundle to use 11:55:59 :param cert: (optional) Any user-provided SSL certificate to be trusted. 11:55:59 :param proxies: (optional) The proxies dictionary to apply to the request. 11:55:59 :rtype: requests.Response 11:55:59 """ 11:55:59 11:55:59 try: 11:55:59 conn = self.get_connection_with_tls_context( 11:55:59 request, verify, proxies=proxies, cert=cert 11:55:59 ) 11:55:59 except LocationValueError as e: 11:55:59 raise InvalidURL(e, request=request) 11:55:59 11:55:59 self.cert_verify(conn, request.url, verify, cert) 11:55:59 url = self.request_url(request, proxies) 11:55:59 self.add_headers( 11:55:59 request, 11:55:59 stream=stream, 11:55:59 timeout=timeout, 11:55:59 verify=verify, 11:55:59 cert=cert, 11:55:59 proxies=proxies, 11:55:59 ) 11:55:59 11:55:59 chunked = not (request.body is None or "Content-Length" in request.headers) 11:55:59 11:55:59 if isinstance(timeout, tuple): 11:55:59 try: 11:55:59 connect, read = timeout 11:55:59 timeout = TimeoutSauce(connect=connect, read=read) 11:55:59 except ValueError: 11:55:59 raise ValueError( 11:55:59 f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 11:55:59 f"or a single float to set both timeouts to the same value." 11:55:59 ) 11:55:59 elif isinstance(timeout, TimeoutSauce): 11:55:59 pass 11:55:59 else: 11:55:59 timeout = TimeoutSauce(connect=timeout, read=timeout) 11:55:59 11:55:59 try: 11:55:59 resp = conn.urlopen( 11:55:59 method=request.method, 11:55:59 url=url, 11:55:59 body=request.body, 11:55:59 headers=request.headers, 11:55:59 redirect=False, 11:55:59 assert_same_host=False, 11:55:59 preload_content=False, 11:55:59 decode_content=False, 11:55:59 retries=self.max_retries, 11:55:59 timeout=timeout, 11:55:59 chunked=chunked, 11:55:59 ) 11:55:59 11:55:59 except (ProtocolError, OSError) as err: 11:55:59 raise ConnectionError(err, request=request) 11:55:59 11:55:59 except MaxRetryError as e: 11:55:59 if isinstance(e.reason, ConnectTimeoutError): 11:55:59 # TODO: Remove this in 3.0.0: see #2811 11:55:59 if not isinstance(e.reason, NewConnectionError): 11:55:59 raise ConnectTimeout(e, request=request) 11:55:59 11:55:59 if isinstance(e.reason, ResponseError): 11:55:59 raise RetryError(e, request=request) 11:55:59 11:55:59 if isinstance(e.reason, _ProxyError): 11:55:59 raise ProxyError(e, request=request) 11:55:59 11:55:59 if isinstance(e.reason, _SSLError): 11:55:59 # This branch is for urllib3 v1.22 and later. 
11:55:59 raise SSLError(e, request=request) 11:55:59 11:55:59 > raise ConnectionError(e, request=request) 11:55:59 E requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=8183): Max retries exceeded with url: /rests/operations/tapi-topology:get-topology-details (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 11:55:59 11:55:59 ../.tox/tests_tapi/lib/python3.11/site-packages/requests/adapters.py:700: ConnectionError 11:55:59 ----------------------------- Captured stdout call ----------------------------- 11:55:59 execution of test_19_check_tapi_topology_T0 11:55:59 ________________ TransportTapitesting.test_20_connect_spdr_sa1 _________________ 11:55:59 11:55:59 self = 11:55:59 11:55:59 def _new_conn(self) -> socket.socket: 11:55:59 """Establish a socket connection and set nodelay settings on it. 11:55:59 11:55:59 :return: New socket connection. 11:55:59 """ 11:55:59 try: 11:55:59 > sock = connection.create_connection( 11:55:59 (self._dns_host, self.port), 11:55:59 self.timeout, 11:55:59 source_address=self.source_address, 11:55:59 socket_options=self.socket_options, 11:55:59 ) 11:55:59 11:55:59 ../.tox/tests_tapi/lib/python3.11/site-packages/urllib3/connection.py:198: 11:55:59 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 11:55:59 ../.tox/tests_tapi/lib/python3.11/site-packages/urllib3/util/connection.py:85: in create_connection 11:55:59 raise err 11:55:59 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 11:55:59 11:55:59 address = ('localhost', 8183), timeout = 30, source_address = None 11:55:59 socket_options = [(6, 1, 1)] 11:55:59 11:55:59 def create_connection( 11:55:59 address: tuple[str, int], 11:55:59 timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 11:55:59 source_address: tuple[str, int] | None = None, 11:55:59 socket_options: _TYPE_SOCKET_OPTIONS | None = None, 11:55:59 ) -> socket.socket: 11:55:59 """Connect to *address* and return the socket object. 11:55:59 11:55:59 Convenience function. Connect to *address* (a 2-tuple ``(host, 11:55:59 port)``) and return the socket object. Passing the optional 11:55:59 *timeout* parameter will set the timeout on the socket instance 11:55:59 before attempting to connect. If no *timeout* is supplied, the 11:55:59 global default timeout setting returned by :func:`socket.getdefaulttimeout` 11:55:59 is used. If *source_address* is set it must be a tuple of (host, port) 11:55:59 for the socket to bind as a source address before making the connection. 11:55:59 An host of '' or port 0 tells the OS to use the default. 11:55:59 """ 11:55:59 11:55:59 host, port = address 11:55:59 if host.startswith("["): 11:55:59 host = host.strip("[]") 11:55:59 err = None 11:55:59 11:55:59 # Using the value from allowed_gai_family() in the context of getaddrinfo lets 11:55:59 # us select whether to work with IPv4 DNS records, IPv6 records, or both. 11:55:59 # The original create_connection function always returns all records. 11:55:59 family = allowed_gai_family() 11:55:59 11:55:59 try: 11:55:59 host.encode("idna") 11:55:59 except UnicodeError: 11:55:59 raise LocationParseError(f"'{host}', label empty or too long") from None 11:55:59 11:55:59 for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM): 11:55:59 af, socktype, proto, canonname, sa = res 11:55:59 sock = None 11:55:59 try: 11:55:59 sock = socket.socket(af, socktype, proto) 11:55:59 11:55:59 # If provided, set socket level options before connecting. 
11:55:59 _set_socket_options(sock, socket_options) 11:55:59 11:55:59 if timeout is not _DEFAULT_TIMEOUT: 11:55:59 sock.settimeout(timeout) 11:55:59 if source_address: 11:55:59 sock.bind(source_address) 11:55:59 > sock.connect(sa) 11:55:59 E ConnectionRefusedError: [Errno 111] Connection refused 11:55:59 11:55:59 ../.tox/tests_tapi/lib/python3.11/site-packages/urllib3/util/connection.py:73: ConnectionRefusedError 11:55:59 11:55:59 The above exception was the direct cause of the following exception: 11:55:59 11:55:59 self = 11:55:59 method = 'PUT' 11:55:59 url = '/rests/data/network-topology:network-topology/topology=topology-netconf/node=SPDR-SA1' 11:55:59 body = '{"node": [{"node-id": "SPDR-SA1", "netconf-node-topology:netconf-node": {"netconf-node-topology:host": "127.0.0.1", "...ff-millis": 1800000, "netconf-node-topology:backoff-multiplier": 1.5, "netconf-node-topology:keepalive-delay": 120}}]}' 11:55:59 headers = {'User-Agent': 'python-requests/2.32.4', 'Accept-Encoding': 'gzip, deflate', 'Accept': 'application/json', 'Connection': 'keep-alive', 'Content-Type': 'application/json', 'Content-Length': '710', 'Authorization': 'Basic YWRtaW46YWRtaW4='} 11:55:59 retries = Retry(total=0, connect=None, read=False, redirect=None, status=None) 11:55:59 redirect = False, assert_same_host = False 11:55:59 timeout = Timeout(connect=30, read=30, total=None), pool_timeout = None 11:55:59 release_conn = False, chunked = False, body_pos = None, preload_content = False 11:55:59 decode_content = False, response_kw = {} 11:55:59 parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/rests/data/network-topology:network-topology/topology=topology-netconf/node=SPDR-SA1', query=None, fragment=None) 11:55:59 destination_scheme = None, conn = None, release_this_conn = True 11:55:59 http_tunnel_required = False, err = None, clean_exit = False 11:55:59 11:55:59 def urlopen( # type: ignore[override] 11:55:59 self, 11:55:59 method: str, 11:55:59 url: str, 11:55:59 body: _TYPE_BODY | None = None, 11:55:59 headers: typing.Mapping[str, str] | None = None, 11:55:59 retries: Retry | bool | int | None = None, 11:55:59 redirect: bool = True, 11:55:59 assert_same_host: bool = True, 11:55:59 timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 11:55:59 pool_timeout: int | None = None, 11:55:59 release_conn: bool | None = None, 11:55:59 chunked: bool = False, 11:55:59 body_pos: _TYPE_BODY_POSITION | None = None, 11:55:59 preload_content: bool = True, 11:55:59 decode_content: bool = True, 11:55:59 **response_kw: typing.Any, 11:55:59 ) -> BaseHTTPResponse: 11:55:59 """ 11:55:59 Get a connection from the pool and perform an HTTP request. This is the 11:55:59 lowest level call for making a request, so you'll need to specify all 11:55:59 the raw details. 11:55:59 11:55:59 .. note:: 11:55:59 11:55:59 More commonly, it's appropriate to use a convenience method 11:55:59 such as :meth:`request`. 11:55:59 11:55:59 .. note:: 11:55:59 11:55:59 `release_conn` will only behave as expected if 11:55:59 `preload_content=False` because we want to make 11:55:59 `preload_content=False` the default behaviour someday soon without 11:55:59 breaking backwards compatibility. 11:55:59 11:55:59 :param method: 11:55:59 HTTP request method (such as GET, POST, PUT, etc.) 11:55:59 11:55:59 :param url: 11:55:59 The URL to perform the request on. 11:55:59 11:55:59 :param body: 11:55:59 Data to send in the request body, either :class:`str`, :class:`bytes`, 11:55:59 an iterable of :class:`str`/:class:`bytes`, or a file-like object. 
11:55:59 11:55:59 :param headers: 11:55:59 Dictionary of custom headers to send, such as User-Agent, 11:55:59 If-None-Match, etc. If None, pool headers are used. If provided, 11:55:59 these headers completely replace any pool-specific headers. 11:55:59 11:55:59 :param retries: 11:55:59 Configure the number of retries to allow before raising a 11:55:59 :class:`~urllib3.exceptions.MaxRetryError` exception. 11:55:59 11:55:59 If ``None`` (default) will retry 3 times, see ``Retry.DEFAULT``. Pass a 11:55:59 :class:`~urllib3.util.retry.Retry` object for fine-grained control 11:55:59 over different types of retries. 11:55:59 Pass an integer number to retry connection errors that many times, 11:55:59 but no other types of errors. Pass zero to never retry. 11:55:59 11:55:59 If ``False``, then retries are disabled and any exception is raised 11:55:59 immediately. Also, instead of raising a MaxRetryError on redirects, 11:55:59 the redirect response will be returned. 11:55:59 11:55:59 :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. 11:55:59 11:55:59 :param redirect: 11:55:59 If True, automatically handle redirects (status codes 301, 302, 11:55:59 303, 307, 308). Each redirect counts as a retry. Disabling retries 11:55:59 will disable redirect, too. 11:55:59 11:55:59 :param assert_same_host: 11:55:59 If ``True``, will make sure that the host of the pool requests is 11:55:59 consistent else will raise HostChangedError. When ``False``, you can 11:55:59 use the pool on an HTTP proxy and request foreign hosts. 11:55:59 11:55:59 :param timeout: 11:55:59 If specified, overrides the default timeout for this one 11:55:59 request. It may be a float (in seconds) or an instance of 11:55:59 :class:`urllib3.util.Timeout`. 11:55:59 11:55:59 :param pool_timeout: 11:55:59 If set and the pool is set to block=True, then this method will 11:55:59 block for ``pool_timeout`` seconds and raise EmptyPoolError if no 11:55:59 connection is available within the time period. 11:55:59 11:55:59 :param bool preload_content: 11:55:59 If True, the response's body will be preloaded into memory. 11:55:59 11:55:59 :param bool decode_content: 11:55:59 If True, will attempt to decode the body based on the 11:55:59 'content-encoding' header. 11:55:59 11:55:59 :param release_conn: 11:55:59 If False, then the urlopen call will not release the connection 11:55:59 back into the pool once a response is received (but will release if 11:55:59 you read the entire contents of the response such as when 11:55:59 `preload_content=True`). This is useful if you're not preloading 11:55:59 the response's content immediately. You will need to call 11:55:59 ``r.release_conn()`` on the response ``r`` to return the connection 11:55:59 back into the pool. If None, it takes the value of ``preload_content`` 11:55:59 which defaults to ``True``. 11:55:59 11:55:59 :param bool chunked: 11:55:59 If True, urllib3 will send the body using chunked transfer 11:55:59 encoding. Otherwise, urllib3 will send the body using the standard 11:55:59 content-length form. Defaults to False. 11:55:59 11:55:59 :param int body_pos: 11:55:59 Position to seek to in file-like body in the event of a retry or 11:55:59 redirect. Typically this won't need to be set because urllib3 will 11:55:59 auto-populate the value when needed. 
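The retries object visible in these frames, Retry(total=0, connect=None, read=False, redirect=None, status=None), is requests' default max_retries: the very first refused connection exhausts the budget, Retry.increment() raises MaxRetryError, and requests then re-raises it as ConnectionError. A minimal sketch, purely illustrative and not something transportpce_tests/common/test_utils.py actually does, of giving the adapter a more tolerant policy while the controller is still starting:

    # Hedged sketch, not part of the test suite: replaces requests' default
    # Retry(total=0) with a policy that retries refused connections with
    # exponential backoff instead of failing on the first attempt.
    import requests
    from requests.adapters import HTTPAdapter
    from urllib3.util.retry import Retry

    session = requests.Session()
    retry = Retry(total=5, connect=5, backoff_factor=1)
    session.mount("http://", HTTPAdapter(max_retries=retry))
    # session.put(...) / session.post(...) would now retry a refused
    # connection several times before raising ConnectionError.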
11:55:59 """ 11:55:59 parsed_url = parse_url(url) 11:55:59 destination_scheme = parsed_url.scheme 11:55:59 11:55:59 if headers is None: 11:55:59 headers = self.headers 11:55:59 11:55:59 if not isinstance(retries, Retry): 11:55:59 retries = Retry.from_int(retries, redirect=redirect, default=self.retries) 11:55:59 11:55:59 if release_conn is None: 11:55:59 release_conn = preload_content 11:55:59 11:55:59 # Check host 11:55:59 if assert_same_host and not self.is_same_host(url): 11:55:59 raise HostChangedError(self, url, retries) 11:55:59 11:55:59 # Ensure that the URL we're connecting to is properly encoded 11:55:59 if url.startswith("/"): 11:55:59 url = to_str(_encode_target(url)) 11:55:59 else: 11:55:59 url = to_str(parsed_url.url) 11:55:59 11:55:59 conn = None 11:55:59 11:55:59 # Track whether `conn` needs to be released before 11:55:59 # returning/raising/recursing. Update this variable if necessary, and 11:55:59 # leave `release_conn` constant throughout the function. That way, if 11:55:59 # the function recurses, the original value of `release_conn` will be 11:55:59 # passed down into the recursive call, and its value will be respected. 11:55:59 # 11:55:59 # See issue #651 [1] for details. 11:55:59 # 11:55:59 # [1] 11:55:59 release_this_conn = release_conn 11:55:59 11:55:59 http_tunnel_required = connection_requires_http_tunnel( 11:55:59 self.proxy, self.proxy_config, destination_scheme 11:55:59 ) 11:55:59 11:55:59 # Merge the proxy headers. Only done when not using HTTP CONNECT. We 11:55:59 # have to copy the headers dict so we can safely change it without those 11:55:59 # changes being reflected in anyone else's copy. 11:55:59 if not http_tunnel_required: 11:55:59 headers = headers.copy() # type: ignore[attr-defined] 11:55:59 headers.update(self.proxy_headers) # type: ignore[union-attr] 11:55:59 11:55:59 # Must keep the exception bound to a separate variable or else Python 3 11:55:59 # complains about UnboundLocalError. 11:55:59 err = None 11:55:59 11:55:59 # Keep track of whether we cleanly exited the except block. This 11:55:59 # ensures we do proper cleanup in finally. 11:55:59 clean_exit = False 11:55:59 11:55:59 # Rewind body position, if needed. Record current position 11:55:59 # for future rewinds in the event of a redirect/retry. 11:55:59 body_pos = set_file_position(body, body_pos) 11:55:59 11:55:59 try: 11:55:59 # Request a connection from the queue. 11:55:59 timeout_obj = self._get_timeout(timeout) 11:55:59 conn = self._get_conn(timeout=pool_timeout) 11:55:59 11:55:59 conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] 11:55:59 11:55:59 # Is this a closed/new connection that requires CONNECT tunnelling? 11:55:59 if self.proxy is not None and http_tunnel_required and conn.is_closed: 11:55:59 try: 11:55:59 self._prepare_proxy(conn) 11:55:59 except (BaseSSLError, OSError, SocketTimeout) as e: 11:55:59 self._raise_timeout( 11:55:59 err=e, url=self.proxy.url, timeout_value=conn.timeout 11:55:59 ) 11:55:59 raise 11:55:59 11:55:59 # If we're going to release the connection in ``finally:``, then 11:55:59 # the response doesn't need to know about the connection. Otherwise 11:55:59 # it will also try to release it and we'll have a double-release 11:55:59 # mess. 
11:55:59 response_conn = conn if not release_conn else None 11:55:59 11:55:59 # Make the request on the HTTPConnection object 11:55:59 > response = self._make_request( 11:55:59 conn, 11:55:59 method, 11:55:59 url, 11:55:59 timeout=timeout_obj, 11:55:59 body=body, 11:55:59 headers=headers, 11:55:59 chunked=chunked, 11:55:59 retries=retries, 11:55:59 response_conn=response_conn, 11:55:59 preload_content=preload_content, 11:55:59 decode_content=decode_content, 11:55:59 **response_kw, 11:55:59 ) 11:55:59 11:55:59 ../.tox/tests_tapi/lib/python3.11/site-packages/urllib3/connectionpool.py:787: 11:55:59 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 11:55:59 ../.tox/tests_tapi/lib/python3.11/site-packages/urllib3/connectionpool.py:493: in _make_request 11:55:59 conn.request( 11:55:59 ../.tox/tests_tapi/lib/python3.11/site-packages/urllib3/connection.py:494: in request 11:55:59 self.endheaders() 11:55:59 /opt/pyenv/versions/3.11.10/lib/python3.11/http/client.py:1298: in endheaders 11:55:59 self._send_output(message_body, encode_chunked=encode_chunked) 11:55:59 /opt/pyenv/versions/3.11.10/lib/python3.11/http/client.py:1058: in _send_output 11:55:59 self.send(msg) 11:55:59 /opt/pyenv/versions/3.11.10/lib/python3.11/http/client.py:996: in send 11:55:59 self.connect() 11:55:59 ../.tox/tests_tapi/lib/python3.11/site-packages/urllib3/connection.py:325: in connect 11:55:59 self.sock = self._new_conn() 11:55:59 ^^^^^^^^^^^^^^^^ 11:55:59 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 11:55:59 11:55:59 self = 11:55:59 11:55:59 def _new_conn(self) -> socket.socket: 11:55:59 """Establish a socket connection and set nodelay settings on it. 11:55:59 11:55:59 :return: New socket connection. 11:55:59 """ 11:55:59 try: 11:55:59 sock = connection.create_connection( 11:55:59 (self._dns_host, self.port), 11:55:59 self.timeout, 11:55:59 source_address=self.source_address, 11:55:59 socket_options=self.socket_options, 11:55:59 ) 11:55:59 except socket.gaierror as e: 11:55:59 raise NameResolutionError(self.host, self, e) from e 11:55:59 except SocketTimeout as e: 11:55:59 raise ConnectTimeoutError( 11:55:59 self, 11:55:59 f"Connection to {self.host} timed out. (connect timeout={self.timeout})", 11:55:59 ) from e 11:55:59 11:55:59 except OSError as e: 11:55:59 > raise NewConnectionError( 11:55:59 self, f"Failed to establish a new connection: {e}" 11:55:59 ) from e 11:55:59 E urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused 11:55:59 11:55:59 ../.tox/tests_tapi/lib/python3.11/site-packages/urllib3/connection.py:213: NewConnectionError 11:55:59 11:55:59 The above exception was the direct cause of the following exception: 11:55:59 11:55:59 self = 11:55:59 request = , stream = False 11:55:59 timeout = Timeout(connect=30, read=30, total=None), verify = True, cert = None 11:55:59 proxies = OrderedDict() 11:55:59 11:55:59 def send( 11:55:59 self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 11:55:59 ): 11:55:59 """Sends PreparedRequest object. Returns Response object. 11:55:59 11:55:59 :param request: The :class:`PreparedRequest ` being sent. 11:55:59 :param stream: (optional) Whether to stream the request content. 11:55:59 :param timeout: (optional) How long to wait for the server to send 11:55:59 data before giving up, as a float, or a :ref:`(connect timeout, 11:55:59 read timeout) ` tuple. 
11:55:59 :type timeout: float or tuple or urllib3 Timeout object 11:55:59 :param verify: (optional) Either a boolean, in which case it controls whether 11:55:59 we verify the server's TLS certificate, or a string, in which case it 11:55:59 must be a path to a CA bundle to use 11:55:59 :param cert: (optional) Any user-provided SSL certificate to be trusted. 11:55:59 :param proxies: (optional) The proxies dictionary to apply to the request. 11:55:59 :rtype: requests.Response 11:55:59 """ 11:55:59 11:55:59 try: 11:55:59 conn = self.get_connection_with_tls_context( 11:55:59 request, verify, proxies=proxies, cert=cert 11:55:59 ) 11:55:59 except LocationValueError as e: 11:55:59 raise InvalidURL(e, request=request) 11:55:59 11:55:59 self.cert_verify(conn, request.url, verify, cert) 11:55:59 url = self.request_url(request, proxies) 11:55:59 self.add_headers( 11:55:59 request, 11:55:59 stream=stream, 11:55:59 timeout=timeout, 11:55:59 verify=verify, 11:55:59 cert=cert, 11:55:59 proxies=proxies, 11:55:59 ) 11:55:59 11:55:59 chunked = not (request.body is None or "Content-Length" in request.headers) 11:55:59 11:55:59 if isinstance(timeout, tuple): 11:55:59 try: 11:55:59 connect, read = timeout 11:55:59 timeout = TimeoutSauce(connect=connect, read=read) 11:55:59 except ValueError: 11:55:59 raise ValueError( 11:55:59 f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 11:55:59 f"or a single float to set both timeouts to the same value." 11:55:59 ) 11:55:59 elif isinstance(timeout, TimeoutSauce): 11:55:59 pass 11:55:59 else: 11:55:59 timeout = TimeoutSauce(connect=timeout, read=timeout) 11:55:59 11:55:59 try: 11:55:59 > resp = conn.urlopen( 11:55:59 method=request.method, 11:55:59 url=url, 11:55:59 body=request.body, 11:55:59 headers=request.headers, 11:55:59 redirect=False, 11:55:59 assert_same_host=False, 11:55:59 preload_content=False, 11:55:59 decode_content=False, 11:55:59 retries=self.max_retries, 11:55:59 timeout=timeout, 11:55:59 chunked=chunked, 11:55:59 ) 11:55:59 11:55:59 ../.tox/tests_tapi/lib/python3.11/site-packages/requests/adapters.py:667: 11:55:59 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 11:55:59 ../.tox/tests_tapi/lib/python3.11/site-packages/urllib3/connectionpool.py:841: in urlopen 11:55:59 retries = retries.increment( 11:55:59 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 11:55:59 11:55:59 self = Retry(total=0, connect=None, read=False, redirect=None, status=None) 11:55:59 method = 'PUT' 11:55:59 url = '/rests/data/network-topology:network-topology/topology=topology-netconf/node=SPDR-SA1' 11:55:59 response = None 11:55:59 error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused') 11:55:59 _pool = 11:55:59 _stacktrace = 11:55:59 11:55:59 def increment( 11:55:59 self, 11:55:59 method: str | None = None, 11:55:59 url: str | None = None, 11:55:59 response: BaseHTTPResponse | None = None, 11:55:59 error: Exception | None = None, 11:55:59 _pool: ConnectionPool | None = None, 11:55:59 _stacktrace: TracebackType | None = None, 11:55:59 ) -> Self: 11:55:59 """Return a new Retry object with incremented retry counters. 11:55:59 11:55:59 :param response: A response object, or None, if the server did not 11:55:59 return a response. 11:55:59 :type response: :class:`~urllib3.response.BaseHTTPResponse` 11:55:59 :param Exception error: An error encountered during the request, or 11:55:59 None if the response was received successfully. 
11:55:59 11:55:59 :return: A new ``Retry`` object. 11:55:59 """ 11:55:59 if self.total is False and error: 11:55:59 # Disabled, indicate to re-raise the error. 11:55:59 raise reraise(type(error), error, _stacktrace) 11:55:59 11:55:59 total = self.total 11:55:59 if total is not None: 11:55:59 total -= 1 11:55:59 11:55:59 connect = self.connect 11:55:59 read = self.read 11:55:59 redirect = self.redirect 11:55:59 status_count = self.status 11:55:59 other = self.other 11:55:59 cause = "unknown" 11:55:59 status = None 11:55:59 redirect_location = None 11:55:59 11:55:59 if error and self._is_connection_error(error): 11:55:59 # Connect retry? 11:55:59 if connect is False: 11:55:59 raise reraise(type(error), error, _stacktrace) 11:55:59 elif connect is not None: 11:55:59 connect -= 1 11:55:59 11:55:59 elif error and self._is_read_error(error): 11:55:59 # Read retry? 11:55:59 if read is False or method is None or not self._is_method_retryable(method): 11:55:59 raise reraise(type(error), error, _stacktrace) 11:55:59 elif read is not None: 11:55:59 read -= 1 11:55:59 11:55:59 elif error: 11:55:59 # Other retry? 11:55:59 if other is not None: 11:55:59 other -= 1 11:55:59 11:55:59 elif response and response.get_redirect_location(): 11:55:59 # Redirect retry? 11:55:59 if redirect is not None: 11:55:59 redirect -= 1 11:55:59 cause = "too many redirects" 11:55:59 response_redirect_location = response.get_redirect_location() 11:55:59 if response_redirect_location: 11:55:59 redirect_location = response_redirect_location 11:55:59 status = response.status 11:55:59 11:55:59 else: 11:55:59 # Incrementing because of a server error like a 500 in 11:55:59 # status_forcelist and the given method is in the allowed_methods 11:55:59 cause = ResponseError.GENERIC_ERROR 11:55:59 if response and response.status: 11:55:59 if status_count is not None: 11:55:59 status_count -= 1 11:55:59 cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status) 11:55:59 status = response.status 11:55:59 11:55:59 history = self.history + ( 11:55:59 RequestHistory(method, url, error, status, redirect_location), 11:55:59 ) 11:55:59 11:55:59 new_retry = self.new( 11:55:59 total=total, 11:55:59 connect=connect, 11:55:59 read=read, 11:55:59 redirect=redirect, 11:55:59 status=status_count, 11:55:59 other=other, 11:55:59 history=history, 11:55:59 ) 11:55:59 11:55:59 if new_retry.is_exhausted(): 11:55:59 reason = error or ResponseError(cause) 11:55:59 > raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type] 11:55:59 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 11:55:59 E urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=8183): Max retries exceeded with url: /rests/data/network-topology:network-topology/topology=topology-netconf/node=SPDR-SA1 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 11:55:59 11:55:59 ../.tox/tests_tapi/lib/python3.11/site-packages/urllib3/util/retry.py:519: MaxRetryError 11:55:59 11:55:59 During handling of the above exception, another exception occurred: 11:55:59 11:55:59 self = 11:55:59 11:55:59 def test_20_connect_spdr_sa1(self): 11:55:59 > response = test_utils.mount_device("SPDR-SA1", ('spdra', self.NODE_VERSION)) 11:55:59 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 11:55:59 11:55:59 transportpce_tests/tapi/test01_abstracted_topology.py:371: 11:55:59 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 11:55:59 
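test_20_connect_spdr_sa1 fails the same way while mounting SPDR-SA1: mount_device() issues the RESTCONF PUT shown in these frames and the connection to localhost:8183 is refused. A rough equivalent of that request, reconstructed only from the metadata captured in this log (the full JSON body is truncated here, so most netconf-node-topology fields are left out):

    # Hedged sketch of the PUT performed via test_utils.put_request(); URL,
    # method, credentials and the visible part of the body are taken from the
    # captured request above, the rest of the payload is elided in the log.
    import requests

    url = ("http://localhost:8183/rests/data/network-topology:network-topology/"
           "topology=topology-netconf/node=SPDR-SA1")
    body = {"node": [{
        "node-id": "SPDR-SA1",
        "netconf-node-topology:netconf-node": {
            "netconf-node-topology:host": "127.0.0.1",
            # ... remaining netconf-node-topology fields elided in the log ...
        },
    }]}
    response = requests.put(url, json=body, auth=("admin", "admin"), timeout=(30, 30))
    # With nothing listening on port 8183 this raises
    # requests.exceptions.ConnectionError (Errno 111), as reported below.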
transportpce_tests/common/test_utils.py:362: in mount_device 11:55:59 response = put_request(url[RESTCONF_VERSION].format('{}', node), body) 11:55:59 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 11:55:59 transportpce_tests/common/test_utils.py:124: in put_request 11:55:59 return requests.request( 11:55:59 ../.tox/tests_tapi/lib/python3.11/site-packages/requests/api.py:59: in request 11:55:59 return session.request(method=method, url=url, **kwargs) 11:55:59 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 11:55:59 ../.tox/tests_tapi/lib/python3.11/site-packages/requests/sessions.py:589: in request 11:55:59 resp = self.send(prep, **send_kwargs) 11:55:59 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 11:55:59 ../.tox/tests_tapi/lib/python3.11/site-packages/requests/sessions.py:703: in send 11:55:59 r = adapter.send(request, **kwargs) 11:55:59 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 11:55:59 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 11:55:59 11:55:59 self = 11:55:59 request = , stream = False 11:55:59 timeout = Timeout(connect=30, read=30, total=None), verify = True, cert = None 11:55:59 proxies = OrderedDict() 11:55:59 11:55:59 def send( 11:55:59 self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 11:55:59 ): 11:55:59 """Sends PreparedRequest object. Returns Response object. 11:55:59 11:55:59 :param request: The :class:`PreparedRequest ` being sent. 11:55:59 :param stream: (optional) Whether to stream the request content. 11:55:59 :param timeout: (optional) How long to wait for the server to send 11:55:59 data before giving up, as a float, or a :ref:`(connect timeout, 11:55:59 read timeout) ` tuple. 11:55:59 :type timeout: float or tuple or urllib3 Timeout object 11:55:59 :param verify: (optional) Either a boolean, in which case it controls whether 11:55:59 we verify the server's TLS certificate, or a string, in which case it 11:55:59 must be a path to a CA bundle to use 11:55:59 :param cert: (optional) Any user-provided SSL certificate to be trusted. 11:55:59 :param proxies: (optional) The proxies dictionary to apply to the request. 11:55:59 :rtype: requests.Response 11:55:59 """ 11:55:59 11:55:59 try: 11:55:59 conn = self.get_connection_with_tls_context( 11:55:59 request, verify, proxies=proxies, cert=cert 11:55:59 ) 11:55:59 except LocationValueError as e: 11:55:59 raise InvalidURL(e, request=request) 11:55:59 11:55:59 self.cert_verify(conn, request.url, verify, cert) 11:55:59 url = self.request_url(request, proxies) 11:55:59 self.add_headers( 11:55:59 request, 11:55:59 stream=stream, 11:55:59 timeout=timeout, 11:55:59 verify=verify, 11:55:59 cert=cert, 11:55:59 proxies=proxies, 11:55:59 ) 11:55:59 11:55:59 chunked = not (request.body is None or "Content-Length" in request.headers) 11:55:59 11:55:59 if isinstance(timeout, tuple): 11:55:59 try: 11:55:59 connect, read = timeout 11:55:59 timeout = TimeoutSauce(connect=connect, read=read) 11:55:59 except ValueError: 11:55:59 raise ValueError( 11:55:59 f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 11:55:59 f"or a single float to set both timeouts to the same value." 
11:55:59 ) 11:55:59 elif isinstance(timeout, TimeoutSauce): 11:55:59 pass 11:55:59 else: 11:55:59 timeout = TimeoutSauce(connect=timeout, read=timeout) 11:55:59 11:55:59 try: 11:55:59 resp = conn.urlopen( 11:55:59 method=request.method, 11:55:59 url=url, 11:55:59 body=request.body, 11:55:59 headers=request.headers, 11:55:59 redirect=False, 11:55:59 assert_same_host=False, 11:55:59 preload_content=False, 11:55:59 decode_content=False, 11:55:59 retries=self.max_retries, 11:55:59 timeout=timeout, 11:55:59 chunked=chunked, 11:55:59 ) 11:55:59 11:55:59 except (ProtocolError, OSError) as err: 11:55:59 raise ConnectionError(err, request=request) 11:55:59 11:55:59 except MaxRetryError as e: 11:55:59 if isinstance(e.reason, ConnectTimeoutError): 11:55:59 # TODO: Remove this in 3.0.0: see #2811 11:55:59 if not isinstance(e.reason, NewConnectionError): 11:55:59 raise ConnectTimeout(e, request=request) 11:55:59 11:55:59 if isinstance(e.reason, ResponseError): 11:55:59 raise RetryError(e, request=request) 11:55:59 11:55:59 if isinstance(e.reason, _ProxyError): 11:55:59 raise ProxyError(e, request=request) 11:55:59 11:55:59 if isinstance(e.reason, _SSLError): 11:55:59 # This branch is for urllib3 v1.22 and later. 11:55:59 raise SSLError(e, request=request) 11:55:59 11:55:59 > raise ConnectionError(e, request=request) 11:55:59 E requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=8183): Max retries exceeded with url: /rests/data/network-topology:network-topology/topology=topology-netconf/node=SPDR-SA1 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 11:55:59 11:55:59 ../.tox/tests_tapi/lib/python3.11/site-packages/requests/adapters.py:700: ConnectionError 11:55:59 ----------------------------- Captured stdout call ----------------------------- 11:55:59 execution of test_20_connect_spdr_sa1 11:55:59 ________________ TransportTapitesting.test_21_connect_spdr_sc1 _________________ 11:55:59 11:55:59 self = 11:55:59 11:55:59 def _new_conn(self) -> socket.socket: 11:55:59 """Establish a socket connection and set nodelay settings on it. 11:55:59 11:55:59 :return: New socket connection. 11:55:59 """ 11:55:59 try: 11:55:59 > sock = connection.create_connection( 11:55:59 (self._dns_host, self.port), 11:55:59 self.timeout, 11:55:59 source_address=self.source_address, 11:55:59 socket_options=self.socket_options, 11:55:59 ) 11:55:59 11:55:59 ../.tox/tests_tapi/lib/python3.11/site-packages/urllib3/connection.py:198: 11:55:59 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 11:55:59 ../.tox/tests_tapi/lib/python3.11/site-packages/urllib3/util/connection.py:85: in create_connection 11:55:59 raise err 11:55:59 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 11:55:59 11:55:59 address = ('localhost', 8183), timeout = 30, source_address = None 11:55:59 socket_options = [(6, 1, 1)] 11:55:59 11:55:59 def create_connection( 11:55:59 address: tuple[str, int], 11:55:59 timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 11:55:59 source_address: tuple[str, int] | None = None, 11:55:59 socket_options: _TYPE_SOCKET_OPTIONS | None = None, 11:55:59 ) -> socket.socket: 11:55:59 """Connect to *address* and return the socket object. 11:55:59 11:55:59 Convenience function. Connect to *address* (a 2-tuple ``(host, 11:55:59 port)``) and return the socket object. Passing the optional 11:55:59 *timeout* parameter will set the timeout on the socket instance 11:55:59 before attempting to connect. 
If no *timeout* is supplied, the 11:55:59 global default timeout setting returned by :func:`socket.getdefaulttimeout` 11:55:59 is used. If *source_address* is set it must be a tuple of (host, port) 11:55:59 for the socket to bind as a source address before making the connection. 11:55:59 An host of '' or port 0 tells the OS to use the default. 11:55:59 """ 11:55:59 11:55:59 host, port = address 11:55:59 if host.startswith("["): 11:55:59 host = host.strip("[]") 11:55:59 err = None 11:55:59 11:55:59 # Using the value from allowed_gai_family() in the context of getaddrinfo lets 11:55:59 # us select whether to work with IPv4 DNS records, IPv6 records, or both. 11:55:59 # The original create_connection function always returns all records. 11:55:59 family = allowed_gai_family() 11:55:59 11:55:59 try: 11:55:59 host.encode("idna") 11:55:59 except UnicodeError: 11:55:59 raise LocationParseError(f"'{host}', label empty or too long") from None 11:55:59 11:55:59 for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM): 11:55:59 af, socktype, proto, canonname, sa = res 11:55:59 sock = None 11:55:59 try: 11:55:59 sock = socket.socket(af, socktype, proto) 11:55:59 11:55:59 # If provided, set socket level options before connecting. 11:55:59 _set_socket_options(sock, socket_options) 11:55:59 11:55:59 if timeout is not _DEFAULT_TIMEOUT: 11:55:59 sock.settimeout(timeout) 11:55:59 if source_address: 11:55:59 sock.bind(source_address) 11:55:59 > sock.connect(sa) 11:55:59 E ConnectionRefusedError: [Errno 111] Connection refused 11:55:59 11:55:59 ../.tox/tests_tapi/lib/python3.11/site-packages/urllib3/util/connection.py:73: ConnectionRefusedError 11:55:59 11:55:59 The above exception was the direct cause of the following exception: 11:55:59 11:55:59 self = 11:55:59 method = 'PUT' 11:55:59 url = '/rests/data/network-topology:network-topology/topology=topology-netconf/node=SPDR-SC1' 11:55:59 body = '{"node": [{"node-id": "SPDR-SC1", "netconf-node-topology:netconf-node": {"netconf-node-topology:host": "127.0.0.1", "...ff-millis": 1800000, "netconf-node-topology:backoff-multiplier": 1.5, "netconf-node-topology:keepalive-delay": 120}}]}' 11:55:59 headers = {'User-Agent': 'python-requests/2.32.4', 'Accept-Encoding': 'gzip, deflate', 'Accept': 'application/json', 'Connection': 'keep-alive', 'Content-Type': 'application/json', 'Content-Length': '710', 'Authorization': 'Basic YWRtaW46YWRtaW4='} 11:55:59 retries = Retry(total=0, connect=None, read=False, redirect=None, status=None) 11:55:59 redirect = False, assert_same_host = False 11:55:59 timeout = Timeout(connect=30, read=30, total=None), pool_timeout = None 11:55:59 release_conn = False, chunked = False, body_pos = None, preload_content = False 11:55:59 decode_content = False, response_kw = {} 11:55:59 parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/rests/data/network-topology:network-topology/topology=topology-netconf/node=SPDR-SC1', query=None, fragment=None) 11:55:59 destination_scheme = None, conn = None, release_this_conn = True 11:55:59 http_tunnel_required = False, err = None, clean_exit = False 11:55:59 11:55:59 def urlopen( # type: ignore[override] 11:55:59 self, 11:55:59 method: str, 11:55:59 url: str, 11:55:59 body: _TYPE_BODY | None = None, 11:55:59 headers: typing.Mapping[str, str] | None = None, 11:55:59 retries: Retry | bool | int | None = None, 11:55:59 redirect: bool = True, 11:55:59 assert_same_host: bool = True, 11:55:59 timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 11:55:59 pool_timeout: int | None = None, 11:55:59 
release_conn: bool | None = None, 11:55:59 chunked: bool = False, 11:55:59 body_pos: _TYPE_BODY_POSITION | None = None, 11:55:59 preload_content: bool = True, 11:55:59 decode_content: bool = True, 11:55:59 **response_kw: typing.Any, 11:55:59 ) -> BaseHTTPResponse: 11:55:59 """ 11:55:59 Get a connection from the pool and perform an HTTP request. This is the 11:55:59 lowest level call for making a request, so you'll need to specify all 11:55:59 the raw details. 11:55:59 11:55:59 .. note:: 11:55:59 11:55:59 More commonly, it's appropriate to use a convenience method 11:55:59 such as :meth:`request`. 11:55:59 11:55:59 .. note:: 11:55:59 11:55:59 `release_conn` will only behave as expected if 11:55:59 `preload_content=False` because we want to make 11:55:59 `preload_content=False` the default behaviour someday soon without 11:55:59 breaking backwards compatibility. 11:55:59 11:55:59 :param method: 11:55:59 HTTP request method (such as GET, POST, PUT, etc.) 11:55:59 11:55:59 :param url: 11:55:59 The URL to perform the request on. 11:55:59 11:55:59 :param body: 11:55:59 Data to send in the request body, either :class:`str`, :class:`bytes`, 11:55:59 an iterable of :class:`str`/:class:`bytes`, or a file-like object. 11:55:59 11:55:59 :param headers: 11:55:59 Dictionary of custom headers to send, such as User-Agent, 11:55:59 If-None-Match, etc. If None, pool headers are used. If provided, 11:55:59 these headers completely replace any pool-specific headers. 11:55:59 11:55:59 :param retries: 11:55:59 Configure the number of retries to allow before raising a 11:55:59 :class:`~urllib3.exceptions.MaxRetryError` exception. 11:55:59 11:55:59 If ``None`` (default) will retry 3 times, see ``Retry.DEFAULT``. Pass a 11:55:59 :class:`~urllib3.util.retry.Retry` object for fine-grained control 11:55:59 over different types of retries. 11:55:59 Pass an integer number to retry connection errors that many times, 11:55:59 but no other types of errors. Pass zero to never retry. 11:55:59 11:55:59 If ``False``, then retries are disabled and any exception is raised 11:55:59 immediately. Also, instead of raising a MaxRetryError on redirects, 11:55:59 the redirect response will be returned. 11:55:59 11:55:59 :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. 11:55:59 11:55:59 :param redirect: 11:55:59 If True, automatically handle redirects (status codes 301, 302, 11:55:59 303, 307, 308). Each redirect counts as a retry. Disabling retries 11:55:59 will disable redirect, too. 11:55:59 11:55:59 :param assert_same_host: 11:55:59 If ``True``, will make sure that the host of the pool requests is 11:55:59 consistent else will raise HostChangedError. When ``False``, you can 11:55:59 use the pool on an HTTP proxy and request foreign hosts. 11:55:59 11:55:59 :param timeout: 11:55:59 If specified, overrides the default timeout for this one 11:55:59 request. It may be a float (in seconds) or an instance of 11:55:59 :class:`urllib3.util.Timeout`. 11:55:59 11:55:59 :param pool_timeout: 11:55:59 If set and the pool is set to block=True, then this method will 11:55:59 block for ``pool_timeout`` seconds and raise EmptyPoolError if no 11:55:59 connection is available within the time period. 11:55:59 11:55:59 :param bool preload_content: 11:55:59 If True, the response's body will be preloaded into memory. 11:55:59 11:55:59 :param bool decode_content: 11:55:59 If True, will attempt to decode the body based on the 11:55:59 'content-encoding' header. 
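Every failure in this block has the same root cause: nothing is listening on localhost:8183, so each connection attempt is refused immediately and the test reports Errno 111. A small, purely illustrative probe (not part of test_utils) that a harness could run before the TAPI suite to fail fast with a clearer message:

    # Hedged sketch: poll the RESTCONF port before starting the suite.
    import socket
    import time

    def wait_for_restconf(host="localhost", port=8183, timeout_s=120):
        deadline = time.monotonic() + timeout_s
        while time.monotonic() < deadline:
            try:
                with socket.create_connection((host, port), timeout=5):
                    return True
            except OSError:  # covers ConnectionRefusedError (Errno 111)
                time.sleep(2)
        return False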
11:55:59 11:55:59 :param release_conn: 11:55:59 If False, then the urlopen call will not release the connection 11:55:59 back into the pool once a response is received (but will release if 11:55:59 you read the entire contents of the response such as when 11:55:59 `preload_content=True`). This is useful if you're not preloading 11:55:59 the response's content immediately. You will need to call 11:55:59 ``r.release_conn()`` on the response ``r`` to return the connection 11:55:59 back into the pool. If None, it takes the value of ``preload_content`` 11:55:59 which defaults to ``True``. 11:55:59 11:55:59 :param bool chunked: 11:55:59 If True, urllib3 will send the body using chunked transfer 11:55:59 encoding. Otherwise, urllib3 will send the body using the standard 11:55:59 content-length form. Defaults to False. 11:55:59 11:55:59 :param int body_pos: 11:55:59 Position to seek to in file-like body in the event of a retry or 11:55:59 redirect. Typically this won't need to be set because urllib3 will 11:55:59 auto-populate the value when needed. 11:55:59 """ 11:55:59 parsed_url = parse_url(url) 11:55:59 destination_scheme = parsed_url.scheme 11:55:59 11:55:59 if headers is None: 11:55:59 headers = self.headers 11:55:59 11:55:59 if not isinstance(retries, Retry): 11:55:59 retries = Retry.from_int(retries, redirect=redirect, default=self.retries) 11:55:59 11:55:59 if release_conn is None: 11:55:59 release_conn = preload_content 11:55:59 11:55:59 # Check host 11:55:59 if assert_same_host and not self.is_same_host(url): 11:55:59 raise HostChangedError(self, url, retries) 11:55:59 11:55:59 # Ensure that the URL we're connecting to is properly encoded 11:55:59 if url.startswith("/"): 11:55:59 url = to_str(_encode_target(url)) 11:55:59 else: 11:55:59 url = to_str(parsed_url.url) 11:55:59 11:55:59 conn = None 11:55:59 11:55:59 # Track whether `conn` needs to be released before 11:55:59 # returning/raising/recursing. Update this variable if necessary, and 11:55:59 # leave `release_conn` constant throughout the function. That way, if 11:55:59 # the function recurses, the original value of `release_conn` will be 11:55:59 # passed down into the recursive call, and its value will be respected. 11:55:59 # 11:55:59 # See issue #651 [1] for details. 11:55:59 # 11:55:59 # [1] 11:55:59 release_this_conn = release_conn 11:55:59 11:55:59 http_tunnel_required = connection_requires_http_tunnel( 11:55:59 self.proxy, self.proxy_config, destination_scheme 11:55:59 ) 11:55:59 11:55:59 # Merge the proxy headers. Only done when not using HTTP CONNECT. We 11:55:59 # have to copy the headers dict so we can safely change it without those 11:55:59 # changes being reflected in anyone else's copy. 11:55:59 if not http_tunnel_required: 11:55:59 headers = headers.copy() # type: ignore[attr-defined] 11:55:59 headers.update(self.proxy_headers) # type: ignore[union-attr] 11:55:59 11:55:59 # Must keep the exception bound to a separate variable or else Python 3 11:55:59 # complains about UnboundLocalError. 11:55:59 err = None 11:55:59 11:55:59 # Keep track of whether we cleanly exited the except block. This 11:55:59 # ensures we do proper cleanup in finally. 11:55:59 clean_exit = False 11:55:59 11:55:59 # Rewind body position, if needed. Record current position 11:55:59 # for future rewinds in the event of a redirect/retry. 11:55:59 body_pos = set_file_position(body, body_pos) 11:55:59 11:55:59 try: 11:55:59 # Request a connection from the queue. 
11:55:59 timeout_obj = self._get_timeout(timeout) 11:55:59 conn = self._get_conn(timeout=pool_timeout) 11:55:59 11:55:59 conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] 11:55:59 11:55:59 # Is this a closed/new connection that requires CONNECT tunnelling? 11:55:59 if self.proxy is not None and http_tunnel_required and conn.is_closed: 11:55:59 try: 11:55:59 self._prepare_proxy(conn) 11:55:59 except (BaseSSLError, OSError, SocketTimeout) as e: 11:55:59 self._raise_timeout( 11:55:59 err=e, url=self.proxy.url, timeout_value=conn.timeout 11:55:59 ) 11:55:59 raise 11:55:59 11:55:59 # If we're going to release the connection in ``finally:``, then 11:55:59 # the response doesn't need to know about the connection. Otherwise 11:55:59 # it will also try to release it and we'll have a double-release 11:55:59 # mess. 11:55:59 response_conn = conn if not release_conn else None 11:55:59 11:55:59 # Make the request on the HTTPConnection object 11:55:59 > response = self._make_request( 11:55:59 conn, 11:55:59 method, 11:55:59 url, 11:55:59 timeout=timeout_obj, 11:55:59 body=body, 11:55:59 headers=headers, 11:55:59 chunked=chunked, 11:55:59 retries=retries, 11:55:59 response_conn=response_conn, 11:55:59 preload_content=preload_content, 11:55:59 decode_content=decode_content, 11:55:59 **response_kw, 11:55:59 ) 11:55:59 11:55:59 ../.tox/tests_tapi/lib/python3.11/site-packages/urllib3/connectionpool.py:787: 11:55:59 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 11:55:59 ../.tox/tests_tapi/lib/python3.11/site-packages/urllib3/connectionpool.py:493: in _make_request 11:55:59 conn.request( 11:55:59 ../.tox/tests_tapi/lib/python3.11/site-packages/urllib3/connection.py:494: in request 11:55:59 self.endheaders() 11:55:59 /opt/pyenv/versions/3.11.10/lib/python3.11/http/client.py:1298: in endheaders 11:55:59 self._send_output(message_body, encode_chunked=encode_chunked) 11:55:59 /opt/pyenv/versions/3.11.10/lib/python3.11/http/client.py:1058: in _send_output 11:55:59 self.send(msg) 11:55:59 /opt/pyenv/versions/3.11.10/lib/python3.11/http/client.py:996: in send 11:55:59 self.connect() 11:55:59 ../.tox/tests_tapi/lib/python3.11/site-packages/urllib3/connection.py:325: in connect 11:55:59 self.sock = self._new_conn() 11:55:59 ^^^^^^^^^^^^^^^^ 11:55:59 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 11:55:59 11:55:59 self = 11:55:59 11:55:59 def _new_conn(self) -> socket.socket: 11:55:59 """Establish a socket connection and set nodelay settings on it. 11:55:59 11:55:59 :return: New socket connection. 11:55:59 """ 11:55:59 try: 11:55:59 sock = connection.create_connection( 11:55:59 (self._dns_host, self.port), 11:55:59 self.timeout, 11:55:59 source_address=self.source_address, 11:55:59 socket_options=self.socket_options, 11:55:59 ) 11:55:59 except socket.gaierror as e: 11:55:59 raise NameResolutionError(self.host, self, e) from e 11:55:59 except SocketTimeout as e: 11:55:59 raise ConnectTimeoutError( 11:55:59 self, 11:55:59 f"Connection to {self.host} timed out. 
(connect timeout={self.timeout})", 11:55:59 ) from e 11:55:59 11:55:59 except OSError as e: 11:55:59 > raise NewConnectionError( 11:55:59 self, f"Failed to establish a new connection: {e}" 11:55:59 ) from e 11:55:59 E urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused 11:55:59 11:55:59 ../.tox/tests_tapi/lib/python3.11/site-packages/urllib3/connection.py:213: NewConnectionError 11:55:59 11:55:59 The above exception was the direct cause of the following exception: 11:55:59 11:55:59 self = 11:55:59 request = , stream = False 11:55:59 timeout = Timeout(connect=30, read=30, total=None), verify = True, cert = None 11:55:59 proxies = OrderedDict() 11:55:59 11:55:59 def send( 11:55:59 self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 11:55:59 ): 11:55:59 """Sends PreparedRequest object. Returns Response object. 11:55:59 11:55:59 :param request: The :class:`PreparedRequest ` being sent. 11:55:59 :param stream: (optional) Whether to stream the request content. 11:55:59 :param timeout: (optional) How long to wait for the server to send 11:55:59 data before giving up, as a float, or a :ref:`(connect timeout, 11:55:59 read timeout) ` tuple. 11:55:59 :type timeout: float or tuple or urllib3 Timeout object 11:55:59 :param verify: (optional) Either a boolean, in which case it controls whether 11:55:59 we verify the server's TLS certificate, or a string, in which case it 11:55:59 must be a path to a CA bundle to use 11:55:59 :param cert: (optional) Any user-provided SSL certificate to be trusted. 11:55:59 :param proxies: (optional) The proxies dictionary to apply to the request. 11:55:59 :rtype: requests.Response 11:55:59 """ 11:55:59 11:55:59 try: 11:55:59 conn = self.get_connection_with_tls_context( 11:55:59 request, verify, proxies=proxies, cert=cert 11:55:59 ) 11:55:59 except LocationValueError as e: 11:55:59 raise InvalidURL(e, request=request) 11:55:59 11:55:59 self.cert_verify(conn, request.url, verify, cert) 11:55:59 url = self.request_url(request, proxies) 11:55:59 self.add_headers( 11:55:59 request, 11:55:59 stream=stream, 11:55:59 timeout=timeout, 11:55:59 verify=verify, 11:55:59 cert=cert, 11:55:59 proxies=proxies, 11:55:59 ) 11:55:59 11:55:59 chunked = not (request.body is None or "Content-Length" in request.headers) 11:55:59 11:55:59 if isinstance(timeout, tuple): 11:55:59 try: 11:55:59 connect, read = timeout 11:55:59 timeout = TimeoutSauce(connect=connect, read=read) 11:55:59 except ValueError: 11:55:59 raise ValueError( 11:55:59 f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 11:55:59 f"or a single float to set both timeouts to the same value." 
11:55:59 ) 11:55:59 elif isinstance(timeout, TimeoutSauce): 11:55:59 pass 11:55:59 else: 11:55:59 timeout = TimeoutSauce(connect=timeout, read=timeout) 11:55:59 11:55:59 try: 11:55:59 > resp = conn.urlopen( 11:55:59 method=request.method, 11:55:59 url=url, 11:55:59 body=request.body, 11:55:59 headers=request.headers, 11:55:59 redirect=False, 11:55:59 assert_same_host=False, 11:55:59 preload_content=False, 11:55:59 decode_content=False, 11:55:59 retries=self.max_retries, 11:55:59 timeout=timeout, 11:55:59 chunked=chunked, 11:55:59 ) 11:55:59 11:55:59 ../.tox/tests_tapi/lib/python3.11/site-packages/requests/adapters.py:667: 11:55:59 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 11:55:59 ../.tox/tests_tapi/lib/python3.11/site-packages/urllib3/connectionpool.py:841: in urlopen 11:55:59 retries = retries.increment( 11:55:59 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 11:55:59 11:55:59 self = Retry(total=0, connect=None, read=False, redirect=None, status=None) 11:55:59 method = 'PUT' 11:55:59 url = '/rests/data/network-topology:network-topology/topology=topology-netconf/node=SPDR-SC1' 11:55:59 response = None 11:55:59 error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused') 11:55:59 _pool = 11:55:59 _stacktrace = 11:55:59 11:55:59 def increment( 11:55:59 self, 11:55:59 method: str | None = None, 11:55:59 url: str | None = None, 11:55:59 response: BaseHTTPResponse | None = None, 11:55:59 error: Exception | None = None, 11:55:59 _pool: ConnectionPool | None = None, 11:55:59 _stacktrace: TracebackType | None = None, 11:55:59 ) -> Self: 11:55:59 """Return a new Retry object with incremented retry counters. 11:55:59 11:55:59 :param response: A response object, or None, if the server did not 11:55:59 return a response. 11:55:59 :type response: :class:`~urllib3.response.BaseHTTPResponse` 11:55:59 :param Exception error: An error encountered during the request, or 11:55:59 None if the response was received successfully. 11:55:59 11:55:59 :return: A new ``Retry`` object. 11:55:59 """ 11:55:59 if self.total is False and error: 11:55:59 # Disabled, indicate to re-raise the error. 11:55:59 raise reraise(type(error), error, _stacktrace) 11:55:59 11:55:59 total = self.total 11:55:59 if total is not None: 11:55:59 total -= 1 11:55:59 11:55:59 connect = self.connect 11:55:59 read = self.read 11:55:59 redirect = self.redirect 11:55:59 status_count = self.status 11:55:59 other = self.other 11:55:59 cause = "unknown" 11:55:59 status = None 11:55:59 redirect_location = None 11:55:59 11:55:59 if error and self._is_connection_error(error): 11:55:59 # Connect retry? 11:55:59 if connect is False: 11:55:59 raise reraise(type(error), error, _stacktrace) 11:55:59 elif connect is not None: 11:55:59 connect -= 1 11:55:59 11:55:59 elif error and self._is_read_error(error): 11:55:59 # Read retry? 11:55:59 if read is False or method is None or not self._is_method_retryable(method): 11:55:59 raise reraise(type(error), error, _stacktrace) 11:55:59 elif read is not None: 11:55:59 read -= 1 11:55:59 11:55:59 elif error: 11:55:59 # Other retry? 11:55:59 if other is not None: 11:55:59 other -= 1 11:55:59 11:55:59 elif response and response.get_redirect_location(): 11:55:59 # Redirect retry? 
11:55:59 if redirect is not None: 11:55:59 redirect -= 1 11:55:59 cause = "too many redirects" 11:55:59 response_redirect_location = response.get_redirect_location() 11:55:59 if response_redirect_location: 11:55:59 redirect_location = response_redirect_location 11:55:59 status = response.status 11:55:59 11:55:59 else: 11:55:59 # Incrementing because of a server error like a 500 in 11:55:59 # status_forcelist and the given method is in the allowed_methods 11:55:59 cause = ResponseError.GENERIC_ERROR 11:55:59 if response and response.status: 11:55:59 if status_count is not None: 11:55:59 status_count -= 1 11:55:59 cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status) 11:55:59 status = response.status 11:55:59 11:55:59 history = self.history + ( 11:55:59 RequestHistory(method, url, error, status, redirect_location), 11:55:59 ) 11:55:59 11:55:59 new_retry = self.new( 11:55:59 total=total, 11:55:59 connect=connect, 11:55:59 read=read, 11:55:59 redirect=redirect, 11:55:59 status=status_count, 11:55:59 other=other, 11:55:59 history=history, 11:55:59 ) 11:55:59 11:55:59 if new_retry.is_exhausted(): 11:55:59 reason = error or ResponseError(cause) 11:55:59 > raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type] 11:55:59 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 11:55:59 E urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=8183): Max retries exceeded with url: /rests/data/network-topology:network-topology/topology=topology-netconf/node=SPDR-SC1 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 11:55:59 11:55:59 ../.tox/tests_tapi/lib/python3.11/site-packages/urllib3/util/retry.py:519: MaxRetryError 11:55:59 11:55:59 During handling of the above exception, another exception occurred: 11:55:59 11:55:59 self = 11:55:59 11:55:59 def test_21_connect_spdr_sc1(self): 11:55:59 > response = test_utils.mount_device("SPDR-SC1", ('spdrc', self.NODE_VERSION)) 11:55:59 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 11:55:59 11:55:59 transportpce_tests/tapi/test01_abstracted_topology.py:376: 11:55:59 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 11:55:59 transportpce_tests/common/test_utils.py:362: in mount_device 11:55:59 response = put_request(url[RESTCONF_VERSION].format('{}', node), body) 11:55:59 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 11:55:59 transportpce_tests/common/test_utils.py:124: in put_request 11:55:59 return requests.request( 11:55:59 ../.tox/tests_tapi/lib/python3.11/site-packages/requests/api.py:59: in request 11:55:59 return session.request(method=method, url=url, **kwargs) 11:55:59 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 11:55:59 ../.tox/tests_tapi/lib/python3.11/site-packages/requests/sessions.py:589: in request 11:55:59 resp = self.send(prep, **send_kwargs) 11:55:59 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 11:55:59 ../.tox/tests_tapi/lib/python3.11/site-packages/requests/sessions.py:703: in send 11:55:59 r = adapter.send(request, **kwargs) 11:55:59 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 11:55:59 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 11:55:59 11:55:59 self = 11:55:59 request = , stream = False 11:55:59 timeout = Timeout(connect=30, read=30, total=None), verify = True, cert = None 11:55:59 proxies = OrderedDict() 11:55:59 11:55:59 def send( 11:55:59 self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 11:55:59 ): 11:55:59 """Sends 
PreparedRequest object. Returns Response object. 11:55:59 11:55:59 :param request: The :class:`PreparedRequest ` being sent. 11:55:59 :param stream: (optional) Whether to stream the request content. 11:55:59 :param timeout: (optional) How long to wait for the server to send 11:55:59 data before giving up, as a float, or a :ref:`(connect timeout, 11:55:59 read timeout) ` tuple. 11:55:59 :type timeout: float or tuple or urllib3 Timeout object 11:55:59 :param verify: (optional) Either a boolean, in which case it controls whether 11:55:59 we verify the server's TLS certificate, or a string, in which case it 11:55:59 must be a path to a CA bundle to use 11:55:59 :param cert: (optional) Any user-provided SSL certificate to be trusted. 11:55:59 :param proxies: (optional) The proxies dictionary to apply to the request. 11:55:59 :rtype: requests.Response 11:55:59 """ 11:55:59 11:55:59 try: 11:55:59 conn = self.get_connection_with_tls_context( 11:55:59 request, verify, proxies=proxies, cert=cert 11:55:59 ) 11:55:59 except LocationValueError as e: 11:55:59 raise InvalidURL(e, request=request) 11:55:59 11:55:59 self.cert_verify(conn, request.url, verify, cert) 11:55:59 url = self.request_url(request, proxies) 11:55:59 self.add_headers( 11:55:59 request, 11:55:59 stream=stream, 11:55:59 timeout=timeout, 11:55:59 verify=verify, 11:55:59 cert=cert, 11:55:59 proxies=proxies, 11:55:59 ) 11:55:59 11:55:59 chunked = not (request.body is None or "Content-Length" in request.headers) 11:55:59 11:55:59 if isinstance(timeout, tuple): 11:55:59 try: 11:55:59 connect, read = timeout 11:55:59 timeout = TimeoutSauce(connect=connect, read=read) 11:55:59 except ValueError: 11:55:59 raise ValueError( 11:55:59 f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 11:55:59 f"or a single float to set both timeouts to the same value." 11:55:59 ) 11:55:59 elif isinstance(timeout, TimeoutSauce): 11:55:59 pass 11:55:59 else: 11:55:59 timeout = TimeoutSauce(connect=timeout, read=timeout) 11:55:59 11:55:59 try: 11:55:59 resp = conn.urlopen( 11:55:59 method=request.method, 11:55:59 url=url, 11:55:59 body=request.body, 11:55:59 headers=request.headers, 11:55:59 redirect=False, 11:55:59 assert_same_host=False, 11:55:59 preload_content=False, 11:55:59 decode_content=False, 11:55:59 retries=self.max_retries, 11:55:59 timeout=timeout, 11:55:59 chunked=chunked, 11:55:59 ) 11:55:59 11:55:59 except (ProtocolError, OSError) as err: 11:55:59 raise ConnectionError(err, request=request) 11:55:59 11:55:59 except MaxRetryError as e: 11:55:59 if isinstance(e.reason, ConnectTimeoutError): 11:55:59 # TODO: Remove this in 3.0.0: see #2811 11:55:59 if not isinstance(e.reason, NewConnectionError): 11:55:59 raise ConnectTimeout(e, request=request) 11:55:59 11:55:59 if isinstance(e.reason, ResponseError): 11:55:59 raise RetryError(e, request=request) 11:55:59 11:55:59 if isinstance(e.reason, _ProxyError): 11:55:59 raise ProxyError(e, request=request) 11:55:59 11:55:59 if isinstance(e.reason, _SSLError): 11:55:59 # This branch is for urllib3 v1.22 and later. 
11:55:59 raise SSLError(e, request=request) 11:55:59 11:55:59 > raise ConnectionError(e, request=request) 11:55:59 E requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=8183): Max retries exceeded with url: /rests/data/network-topology:network-topology/topology=topology-netconf/node=SPDR-SC1 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 11:55:59 11:55:59 ../.tox/tests_tapi/lib/python3.11/site-packages/requests/adapters.py:700: ConnectionError 11:55:59 ----------------------------- Captured stdout call ----------------------------- 11:55:59 execution of test_21_connect_spdr_sc1 11:55:59 ____________ TransportTapitesting.test_22_check_tapi_topology_T100G ____________ 11:55:59 11:55:59 self = 11:55:59 11:55:59 def _new_conn(self) -> socket.socket: 11:55:59 """Establish a socket connection and set nodelay settings on it. 11:55:59 11:55:59 :return: New socket connection. 11:55:59 """ 11:55:59 try: 11:55:59 > sock = connection.create_connection( 11:55:59 (self._dns_host, self.port), 11:55:59 self.timeout, 11:55:59 source_address=self.source_address, 11:55:59 socket_options=self.socket_options, 11:55:59 ) 11:55:59 11:55:59 ../.tox/tests_tapi/lib/python3.11/site-packages/urllib3/connection.py:198: 11:55:59 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 11:55:59 ../.tox/tests_tapi/lib/python3.11/site-packages/urllib3/util/connection.py:85: in create_connection 11:55:59 raise err 11:55:59 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 11:55:59 11:55:59 address = ('localhost', 8183), timeout = 30, source_address = None 11:55:59 socket_options = [(6, 1, 1)] 11:55:59 11:55:59 def create_connection( 11:55:59 address: tuple[str, int], 11:55:59 timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 11:55:59 source_address: tuple[str, int] | None = None, 11:55:59 socket_options: _TYPE_SOCKET_OPTIONS | None = None, 11:55:59 ) -> socket.socket: 11:55:59 """Connect to *address* and return the socket object. 11:55:59 11:55:59 Convenience function. Connect to *address* (a 2-tuple ``(host, 11:55:59 port)``) and return the socket object. Passing the optional 11:55:59 *timeout* parameter will set the timeout on the socket instance 11:55:59 before attempting to connect. If no *timeout* is supplied, the 11:55:59 global default timeout setting returned by :func:`socket.getdefaulttimeout` 11:55:59 is used. If *source_address* is set it must be a tuple of (host, port) 11:55:59 for the socket to bind as a source address before making the connection. 11:55:59 An host of '' or port 0 tells the OS to use the default. 11:55:59 """ 11:55:59 11:55:59 host, port = address 11:55:59 if host.startswith("["): 11:55:59 host = host.strip("[]") 11:55:59 err = None 11:55:59 11:55:59 # Using the value from allowed_gai_family() in the context of getaddrinfo lets 11:55:59 # us select whether to work with IPv4 DNS records, IPv6 records, or both. 11:55:59 # The original create_connection function always returns all records. 
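The test_21 failure above bottoms out in a refused TCP connection: nothing is listening on localhost:8183, so urllib3 raises NewConnectionError, the Retry(total=0) policy exhausts immediately, and requests surfaces the chain as requests.exceptions.ConnectionError. A minimal sketch of that requests-level path, using the URL, port and admin:admin credentials visible in the trace (the empty JSON body is only a placeholder for the netconf-node payload that test_utils.mount_device actually builds):

import requests

# Sketch only: reproduce the connection-refused path seen in test_21.
url = ("http://localhost:8183/rests/data/network-topology:network-topology/"
       "topology=topology-netconf/node=SPDR-SC1")
try:
    # json={} is a placeholder body; credentials come from the Basic auth header in the trace
    requests.put(url, json={}, auth=("admin", "admin"), timeout=(30, 30))
except requests.exceptions.ConnectionError as exc:
    print("controller unreachable:", exc)  # wraps [Errno 111] Connection refused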
11:55:59 family = allowed_gai_family() 11:55:59 11:55:59 try: 11:55:59 host.encode("idna") 11:55:59 except UnicodeError: 11:55:59 raise LocationParseError(f"'{host}', label empty or too long") from None 11:55:59 11:55:59 for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM): 11:55:59 af, socktype, proto, canonname, sa = res 11:55:59 sock = None 11:55:59 try: 11:55:59 sock = socket.socket(af, socktype, proto) 11:55:59 11:55:59 # If provided, set socket level options before connecting. 11:55:59 _set_socket_options(sock, socket_options) 11:55:59 11:55:59 if timeout is not _DEFAULT_TIMEOUT: 11:55:59 sock.settimeout(timeout) 11:55:59 if source_address: 11:55:59 sock.bind(source_address) 11:55:59 > sock.connect(sa) 11:55:59 E ConnectionRefusedError: [Errno 111] Connection refused 11:55:59 11:55:59 ../.tox/tests_tapi/lib/python3.11/site-packages/urllib3/util/connection.py:73: ConnectionRefusedError 11:55:59 11:55:59 The above exception was the direct cause of the following exception: 11:55:59 11:55:59 self = 11:55:59 method = 'POST', url = '/rests/operations/tapi-topology:get-topology-details' 11:55:59 body = '{"input": {"topology-id": "cf51c729-3699-308a-a7d0-594c6a62ebbb"}}' 11:55:59 headers = {'User-Agent': 'python-requests/2.32.4', 'Accept-Encoding': 'gzip, deflate', 'Accept': 'application/json', 'Connection': 'keep-alive', 'Content-Type': 'application/json', 'Content-Length': '66', 'Authorization': 'Basic YWRtaW46YWRtaW4='} 11:55:59 retries = Retry(total=0, connect=None, read=False, redirect=None, status=None) 11:55:59 redirect = False, assert_same_host = False 11:55:59 timeout = Timeout(connect=30, read=30, total=None), pool_timeout = None 11:55:59 release_conn = False, chunked = False, body_pos = None, preload_content = False 11:55:59 decode_content = False, response_kw = {} 11:55:59 parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/rests/operations/tapi-topology:get-topology-details', query=None, fragment=None) 11:55:59 destination_scheme = None, conn = None, release_this_conn = True 11:55:59 http_tunnel_required = False, err = None, clean_exit = False 11:55:59 11:55:59 def urlopen( # type: ignore[override] 11:55:59 self, 11:55:59 method: str, 11:55:59 url: str, 11:55:59 body: _TYPE_BODY | None = None, 11:55:59 headers: typing.Mapping[str, str] | None = None, 11:55:59 retries: Retry | bool | int | None = None, 11:55:59 redirect: bool = True, 11:55:59 assert_same_host: bool = True, 11:55:59 timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 11:55:59 pool_timeout: int | None = None, 11:55:59 release_conn: bool | None = None, 11:55:59 chunked: bool = False, 11:55:59 body_pos: _TYPE_BODY_POSITION | None = None, 11:55:59 preload_content: bool = True, 11:55:59 decode_content: bool = True, 11:55:59 **response_kw: typing.Any, 11:55:59 ) -> BaseHTTPResponse: 11:55:59 """ 11:55:59 Get a connection from the pool and perform an HTTP request. This is the 11:55:59 lowest level call for making a request, so you'll need to specify all 11:55:59 the raw details. 11:55:59 11:55:59 .. note:: 11:55:59 11:55:59 More commonly, it's appropriate to use a convenience method 11:55:59 such as :meth:`request`. 11:55:59 11:55:59 .. note:: 11:55:59 11:55:59 `release_conn` will only behave as expected if 11:55:59 `preload_content=False` because we want to make 11:55:59 `preload_content=False` the default behaviour someday soon without 11:55:59 breaking backwards compatibility. 11:55:59 11:55:59 :param method: 11:55:59 HTTP request method (such as GET, POST, PUT, etc.) 
11:55:59 11:55:59 :param url: 11:55:59 The URL to perform the request on. 11:55:59 11:55:59 :param body: 11:55:59 Data to send in the request body, either :class:`str`, :class:`bytes`, 11:55:59 an iterable of :class:`str`/:class:`bytes`, or a file-like object. 11:55:59 11:55:59 :param headers: 11:55:59 Dictionary of custom headers to send, such as User-Agent, 11:55:59 If-None-Match, etc. If None, pool headers are used. If provided, 11:55:59 these headers completely replace any pool-specific headers. 11:55:59 11:55:59 :param retries: 11:55:59 Configure the number of retries to allow before raising a 11:55:59 :class:`~urllib3.exceptions.MaxRetryError` exception. 11:55:59 11:55:59 If ``None`` (default) will retry 3 times, see ``Retry.DEFAULT``. Pass a 11:55:59 :class:`~urllib3.util.retry.Retry` object for fine-grained control 11:55:59 over different types of retries. 11:55:59 Pass an integer number to retry connection errors that many times, 11:55:59 but no other types of errors. Pass zero to never retry. 11:55:59 11:55:59 If ``False``, then retries are disabled and any exception is raised 11:55:59 immediately. Also, instead of raising a MaxRetryError on redirects, 11:55:59 the redirect response will be returned. 11:55:59 11:55:59 :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. 11:55:59 11:55:59 :param redirect: 11:55:59 If True, automatically handle redirects (status codes 301, 302, 11:55:59 303, 307, 308). Each redirect counts as a retry. Disabling retries 11:55:59 will disable redirect, too. 11:55:59 11:55:59 :param assert_same_host: 11:55:59 If ``True``, will make sure that the host of the pool requests is 11:55:59 consistent else will raise HostChangedError. When ``False``, you can 11:55:59 use the pool on an HTTP proxy and request foreign hosts. 11:55:59 11:55:59 :param timeout: 11:55:59 If specified, overrides the default timeout for this one 11:55:59 request. It may be a float (in seconds) or an instance of 11:55:59 :class:`urllib3.util.Timeout`. 11:55:59 11:55:59 :param pool_timeout: 11:55:59 If set and the pool is set to block=True, then this method will 11:55:59 block for ``pool_timeout`` seconds and raise EmptyPoolError if no 11:55:59 connection is available within the time period. 11:55:59 11:55:59 :param bool preload_content: 11:55:59 If True, the response's body will be preloaded into memory. 11:55:59 11:55:59 :param bool decode_content: 11:55:59 If True, will attempt to decode the body based on the 11:55:59 'content-encoding' header. 11:55:59 11:55:59 :param release_conn: 11:55:59 If False, then the urlopen call will not release the connection 11:55:59 back into the pool once a response is received (but will release if 11:55:59 you read the entire contents of the response such as when 11:55:59 `preload_content=True`). This is useful if you're not preloading 11:55:59 the response's content immediately. You will need to call 11:55:59 ``r.release_conn()`` on the response ``r`` to return the connection 11:55:59 back into the pool. If None, it takes the value of ``preload_content`` 11:55:59 which defaults to ``True``. 11:55:59 11:55:59 :param bool chunked: 11:55:59 If True, urllib3 will send the body using chunked transfer 11:55:59 encoding. Otherwise, urllib3 will send the body using the standard 11:55:59 content-length form. Defaults to False. 11:55:59 11:55:59 :param int body_pos: 11:55:59 Position to seek to in file-like body in the event of a retry or 11:55:59 redirect. 
Typically this won't need to be set because urllib3 will 11:55:59 auto-populate the value when needed. 11:55:59 """ 11:55:59 parsed_url = parse_url(url) 11:55:59 destination_scheme = parsed_url.scheme 11:55:59 11:55:59 if headers is None: 11:55:59 headers = self.headers 11:55:59 11:55:59 if not isinstance(retries, Retry): 11:55:59 retries = Retry.from_int(retries, redirect=redirect, default=self.retries) 11:55:59 11:55:59 if release_conn is None: 11:55:59 release_conn = preload_content 11:55:59 11:55:59 # Check host 11:55:59 if assert_same_host and not self.is_same_host(url): 11:55:59 raise HostChangedError(self, url, retries) 11:55:59 11:55:59 # Ensure that the URL we're connecting to is properly encoded 11:55:59 if url.startswith("/"): 11:55:59 url = to_str(_encode_target(url)) 11:55:59 else: 11:55:59 url = to_str(parsed_url.url) 11:55:59 11:55:59 conn = None 11:55:59 11:55:59 # Track whether `conn` needs to be released before 11:55:59 # returning/raising/recursing. Update this variable if necessary, and 11:55:59 # leave `release_conn` constant throughout the function. That way, if 11:55:59 # the function recurses, the original value of `release_conn` will be 11:55:59 # passed down into the recursive call, and its value will be respected. 11:55:59 # 11:55:59 # See issue #651 [1] for details. 11:55:59 # 11:55:59 # [1] 11:55:59 release_this_conn = release_conn 11:55:59 11:55:59 http_tunnel_required = connection_requires_http_tunnel( 11:55:59 self.proxy, self.proxy_config, destination_scheme 11:55:59 ) 11:55:59 11:55:59 # Merge the proxy headers. Only done when not using HTTP CONNECT. We 11:55:59 # have to copy the headers dict so we can safely change it without those 11:55:59 # changes being reflected in anyone else's copy. 11:55:59 if not http_tunnel_required: 11:55:59 headers = headers.copy() # type: ignore[attr-defined] 11:55:59 headers.update(self.proxy_headers) # type: ignore[union-attr] 11:55:59 11:55:59 # Must keep the exception bound to a separate variable or else Python 3 11:55:59 # complains about UnboundLocalError. 11:55:59 err = None 11:55:59 11:55:59 # Keep track of whether we cleanly exited the except block. This 11:55:59 # ensures we do proper cleanup in finally. 11:55:59 clean_exit = False 11:55:59 11:55:59 # Rewind body position, if needed. Record current position 11:55:59 # for future rewinds in the event of a redirect/retry. 11:55:59 body_pos = set_file_position(body, body_pos) 11:55:59 11:55:59 try: 11:55:59 # Request a connection from the queue. 11:55:59 timeout_obj = self._get_timeout(timeout) 11:55:59 conn = self._get_conn(timeout=pool_timeout) 11:55:59 11:55:59 conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] 11:55:59 11:55:59 # Is this a closed/new connection that requires CONNECT tunnelling? 11:55:59 if self.proxy is not None and http_tunnel_required and conn.is_closed: 11:55:59 try: 11:55:59 self._prepare_proxy(conn) 11:55:59 except (BaseSSLError, OSError, SocketTimeout) as e: 11:55:59 self._raise_timeout( 11:55:59 err=e, url=self.proxy.url, timeout_value=conn.timeout 11:55:59 ) 11:55:59 raise 11:55:59 11:55:59 # If we're going to release the connection in ``finally:``, then 11:55:59 # the response doesn't need to know about the connection. Otherwise 11:55:59 # it will also try to release it and we'll have a double-release 11:55:59 # mess. 
11:55:59 response_conn = conn if not release_conn else None 11:55:59 11:55:59 # Make the request on the HTTPConnection object 11:55:59 > response = self._make_request( 11:55:59 conn, 11:55:59 method, 11:55:59 url, 11:55:59 timeout=timeout_obj, 11:55:59 body=body, 11:55:59 headers=headers, 11:55:59 chunked=chunked, 11:55:59 retries=retries, 11:55:59 response_conn=response_conn, 11:55:59 preload_content=preload_content, 11:55:59 decode_content=decode_content, 11:55:59 **response_kw, 11:55:59 ) 11:55:59 11:55:59 ../.tox/tests_tapi/lib/python3.11/site-packages/urllib3/connectionpool.py:787: 11:55:59 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 11:55:59 ../.tox/tests_tapi/lib/python3.11/site-packages/urllib3/connectionpool.py:493: in _make_request 11:55:59 conn.request( 11:55:59 ../.tox/tests_tapi/lib/python3.11/site-packages/urllib3/connection.py:494: in request 11:55:59 self.endheaders() 11:55:59 /opt/pyenv/versions/3.11.10/lib/python3.11/http/client.py:1298: in endheaders 11:55:59 self._send_output(message_body, encode_chunked=encode_chunked) 11:55:59 /opt/pyenv/versions/3.11.10/lib/python3.11/http/client.py:1058: in _send_output 11:55:59 self.send(msg) 11:55:59 /opt/pyenv/versions/3.11.10/lib/python3.11/http/client.py:996: in send 11:55:59 self.connect() 11:55:59 ../.tox/tests_tapi/lib/python3.11/site-packages/urllib3/connection.py:325: in connect 11:55:59 self.sock = self._new_conn() 11:55:59 ^^^^^^^^^^^^^^^^ 11:55:59 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 11:55:59 11:55:59 self = 11:55:59 11:55:59 def _new_conn(self) -> socket.socket: 11:55:59 """Establish a socket connection and set nodelay settings on it. 11:55:59 11:55:59 :return: New socket connection. 11:55:59 """ 11:55:59 try: 11:55:59 sock = connection.create_connection( 11:55:59 (self._dns_host, self.port), 11:55:59 self.timeout, 11:55:59 source_address=self.source_address, 11:55:59 socket_options=self.socket_options, 11:55:59 ) 11:55:59 except socket.gaierror as e: 11:55:59 raise NameResolutionError(self.host, self, e) from e 11:55:59 except SocketTimeout as e: 11:55:59 raise ConnectTimeoutError( 11:55:59 self, 11:55:59 f"Connection to {self.host} timed out. (connect timeout={self.timeout})", 11:55:59 ) from e 11:55:59 11:55:59 except OSError as e: 11:55:59 > raise NewConnectionError( 11:55:59 self, f"Failed to establish a new connection: {e}" 11:55:59 ) from e 11:55:59 E urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused 11:55:59 11:55:59 ../.tox/tests_tapi/lib/python3.11/site-packages/urllib3/connection.py:213: NewConnectionError 11:55:59 11:55:59 The above exception was the direct cause of the following exception: 11:55:59 11:55:59 self = 11:55:59 request = , stream = False 11:55:59 timeout = Timeout(connect=30, read=30, total=None), verify = True, cert = None 11:55:59 proxies = OrderedDict() 11:55:59 11:55:59 def send( 11:55:59 self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 11:55:59 ): 11:55:59 """Sends PreparedRequest object. Returns Response object. 11:55:59 11:55:59 :param request: The :class:`PreparedRequest ` being sent. 11:55:59 :param stream: (optional) Whether to stream the request content. 11:55:59 :param timeout: (optional) How long to wait for the server to send 11:55:59 data before giving up, as a float, or a :ref:`(connect timeout, 11:55:59 read timeout) ` tuple. 
11:55:59 :type timeout: float or tuple or urllib3 Timeout object 11:55:59 :param verify: (optional) Either a boolean, in which case it controls whether 11:55:59 we verify the server's TLS certificate, or a string, in which case it 11:55:59 must be a path to a CA bundle to use 11:55:59 :param cert: (optional) Any user-provided SSL certificate to be trusted. 11:55:59 :param proxies: (optional) The proxies dictionary to apply to the request. 11:55:59 :rtype: requests.Response 11:55:59 """ 11:55:59 11:55:59 try: 11:55:59 conn = self.get_connection_with_tls_context( 11:55:59 request, verify, proxies=proxies, cert=cert 11:55:59 ) 11:55:59 except LocationValueError as e: 11:55:59 raise InvalidURL(e, request=request) 11:55:59 11:55:59 self.cert_verify(conn, request.url, verify, cert) 11:55:59 url = self.request_url(request, proxies) 11:55:59 self.add_headers( 11:55:59 request, 11:55:59 stream=stream, 11:55:59 timeout=timeout, 11:55:59 verify=verify, 11:55:59 cert=cert, 11:55:59 proxies=proxies, 11:55:59 ) 11:55:59 11:55:59 chunked = not (request.body is None or "Content-Length" in request.headers) 11:55:59 11:55:59 if isinstance(timeout, tuple): 11:55:59 try: 11:55:59 connect, read = timeout 11:55:59 timeout = TimeoutSauce(connect=connect, read=read) 11:55:59 except ValueError: 11:55:59 raise ValueError( 11:55:59 f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 11:55:59 f"or a single float to set both timeouts to the same value." 11:55:59 ) 11:55:59 elif isinstance(timeout, TimeoutSauce): 11:55:59 pass 11:55:59 else: 11:55:59 timeout = TimeoutSauce(connect=timeout, read=timeout) 11:55:59 11:55:59 try: 11:55:59 > resp = conn.urlopen( 11:55:59 method=request.method, 11:55:59 url=url, 11:55:59 body=request.body, 11:55:59 headers=request.headers, 11:55:59 redirect=False, 11:55:59 assert_same_host=False, 11:55:59 preload_content=False, 11:55:59 decode_content=False, 11:55:59 retries=self.max_retries, 11:55:59 timeout=timeout, 11:55:59 chunked=chunked, 11:55:59 ) 11:55:59 11:55:59 ../.tox/tests_tapi/lib/python3.11/site-packages/requests/adapters.py:667: 11:55:59 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 11:55:59 ../.tox/tests_tapi/lib/python3.11/site-packages/urllib3/connectionpool.py:841: in urlopen 11:55:59 retries = retries.increment( 11:55:59 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 11:55:59 11:55:59 self = Retry(total=0, connect=None, read=False, redirect=None, status=None) 11:55:59 method = 'POST', url = '/rests/operations/tapi-topology:get-topology-details' 11:55:59 response = None 11:55:59 error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused') 11:55:59 _pool = 11:55:59 _stacktrace = 11:55:59 11:55:59 def increment( 11:55:59 self, 11:55:59 method: str | None = None, 11:55:59 url: str | None = None, 11:55:59 response: BaseHTTPResponse | None = None, 11:55:59 error: Exception | None = None, 11:55:59 _pool: ConnectionPool | None = None, 11:55:59 _stacktrace: TracebackType | None = None, 11:55:59 ) -> Self: 11:55:59 """Return a new Retry object with incremented retry counters. 11:55:59 11:55:59 :param response: A response object, or None, if the server did not 11:55:59 return a response. 11:55:59 :type response: :class:`~urllib3.response.BaseHTTPResponse` 11:55:59 :param Exception error: An error encountered during the request, or 11:55:59 None if the response was received successfully. 11:55:59 11:55:59 :return: A new ``Retry`` object. 
11:55:59 """ 11:55:59 if self.total is False and error: 11:55:59 # Disabled, indicate to re-raise the error. 11:55:59 raise reraise(type(error), error, _stacktrace) 11:55:59 11:55:59 total = self.total 11:55:59 if total is not None: 11:55:59 total -= 1 11:55:59 11:55:59 connect = self.connect 11:55:59 read = self.read 11:55:59 redirect = self.redirect 11:55:59 status_count = self.status 11:55:59 other = self.other 11:55:59 cause = "unknown" 11:55:59 status = None 11:55:59 redirect_location = None 11:55:59 11:55:59 if error and self._is_connection_error(error): 11:55:59 # Connect retry? 11:55:59 if connect is False: 11:55:59 raise reraise(type(error), error, _stacktrace) 11:55:59 elif connect is not None: 11:55:59 connect -= 1 11:55:59 11:55:59 elif error and self._is_read_error(error): 11:55:59 # Read retry? 11:55:59 if read is False or method is None or not self._is_method_retryable(method): 11:55:59 raise reraise(type(error), error, _stacktrace) 11:55:59 elif read is not None: 11:55:59 read -= 1 11:55:59 11:55:59 elif error: 11:55:59 # Other retry? 11:55:59 if other is not None: 11:55:59 other -= 1 11:55:59 11:55:59 elif response and response.get_redirect_location(): 11:55:59 # Redirect retry? 11:55:59 if redirect is not None: 11:55:59 redirect -= 1 11:55:59 cause = "too many redirects" 11:55:59 response_redirect_location = response.get_redirect_location() 11:55:59 if response_redirect_location: 11:55:59 redirect_location = response_redirect_location 11:55:59 status = response.status 11:55:59 11:55:59 else: 11:55:59 # Incrementing because of a server error like a 500 in 11:55:59 # status_forcelist and the given method is in the allowed_methods 11:55:59 cause = ResponseError.GENERIC_ERROR 11:55:59 if response and response.status: 11:55:59 if status_count is not None: 11:55:59 status_count -= 1 11:55:59 cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status) 11:55:59 status = response.status 11:55:59 11:55:59 history = self.history + ( 11:55:59 RequestHistory(method, url, error, status, redirect_location), 11:55:59 ) 11:55:59 11:55:59 new_retry = self.new( 11:55:59 total=total, 11:55:59 connect=connect, 11:55:59 read=read, 11:55:59 redirect=redirect, 11:55:59 status=status_count, 11:55:59 other=other, 11:55:59 history=history, 11:55:59 ) 11:55:59 11:55:59 if new_retry.is_exhausted(): 11:55:59 reason = error or ResponseError(cause) 11:55:59 > raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type] 11:55:59 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 11:55:59 E urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=8183): Max retries exceeded with url: /rests/operations/tapi-topology:get-topology-details (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 11:55:59 11:55:59 ../.tox/tests_tapi/lib/python3.11/site-packages/urllib3/util/retry.py:519: MaxRetryError 11:55:59 11:55:59 During handling of the above exception, another exception occurred: 11:55:59 11:55:59 self = 11:55:59 11:55:59 def test_22_check_tapi_topology_T100G(self): 11:55:59 > self.test_18_check_tapi_topology_T100G() 11:55:59 11:55:59 transportpce_tests/tapi/test01_abstracted_topology.py:381: 11:55:59 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 11:55:59 transportpce_tests/tapi/test01_abstracted_topology.py:342: in test_18_check_tapi_topology_T100G 11:55:59 response = test_utils.transportpce_api_rpc_request( 11:55:59 transportpce_tests/common/test_utils.py:729: in 
transportpce_api_rpc_request 11:55:59 response = post_request(url, data) 11:55:59 ^^^^^^^^^^^^^^^^^^^^^^^ 11:55:59 transportpce_tests/common/test_utils.py:142: in post_request 11:55:59 return requests.request( 11:55:59 ../.tox/tests_tapi/lib/python3.11/site-packages/requests/api.py:59: in request 11:55:59 return session.request(method=method, url=url, **kwargs) 11:55:59 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 11:55:59 ../.tox/tests_tapi/lib/python3.11/site-packages/requests/sessions.py:589: in request 11:55:59 resp = self.send(prep, **send_kwargs) 11:55:59 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 11:55:59 ../.tox/tests_tapi/lib/python3.11/site-packages/requests/sessions.py:703: in send 11:55:59 r = adapter.send(request, **kwargs) 11:55:59 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 11:55:59 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 11:55:59 11:55:59 self = 11:55:59 request = , stream = False 11:55:59 timeout = Timeout(connect=30, read=30, total=None), verify = True, cert = None 11:55:59 proxies = OrderedDict() 11:55:59 11:55:59 def send( 11:55:59 self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 11:55:59 ): 11:55:59 """Sends PreparedRequest object. Returns Response object. 11:55:59 11:55:59 :param request: The :class:`PreparedRequest ` being sent. 11:55:59 :param stream: (optional) Whether to stream the request content. 11:55:59 :param timeout: (optional) How long to wait for the server to send 11:55:59 data before giving up, as a float, or a :ref:`(connect timeout, 11:55:59 read timeout) ` tuple. 11:55:59 :type timeout: float or tuple or urllib3 Timeout object 11:55:59 :param verify: (optional) Either a boolean, in which case it controls whether 11:55:59 we verify the server's TLS certificate, or a string, in which case it 11:55:59 must be a path to a CA bundle to use 11:55:59 :param cert: (optional) Any user-provided SSL certificate to be trusted. 11:55:59 :param proxies: (optional) The proxies dictionary to apply to the request. 11:55:59 :rtype: requests.Response 11:55:59 """ 11:55:59 11:55:59 try: 11:55:59 conn = self.get_connection_with_tls_context( 11:55:59 request, verify, proxies=proxies, cert=cert 11:55:59 ) 11:55:59 except LocationValueError as e: 11:55:59 raise InvalidURL(e, request=request) 11:55:59 11:55:59 self.cert_verify(conn, request.url, verify, cert) 11:55:59 url = self.request_url(request, proxies) 11:55:59 self.add_headers( 11:55:59 request, 11:55:59 stream=stream, 11:55:59 timeout=timeout, 11:55:59 verify=verify, 11:55:59 cert=cert, 11:55:59 proxies=proxies, 11:55:59 ) 11:55:59 11:55:59 chunked = not (request.body is None or "Content-Length" in request.headers) 11:55:59 11:55:59 if isinstance(timeout, tuple): 11:55:59 try: 11:55:59 connect, read = timeout 11:55:59 timeout = TimeoutSauce(connect=connect, read=read) 11:55:59 except ValueError: 11:55:59 raise ValueError( 11:55:59 f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 11:55:59 f"or a single float to set both timeouts to the same value." 
11:55:59 ) 11:55:59 elif isinstance(timeout, TimeoutSauce): 11:55:59 pass 11:55:59 else: 11:55:59 timeout = TimeoutSauce(connect=timeout, read=timeout) 11:55:59 11:55:59 try: 11:55:59 resp = conn.urlopen( 11:55:59 method=request.method, 11:55:59 url=url, 11:55:59 body=request.body, 11:55:59 headers=request.headers, 11:55:59 redirect=False, 11:55:59 assert_same_host=False, 11:55:59 preload_content=False, 11:55:59 decode_content=False, 11:55:59 retries=self.max_retries, 11:55:59 timeout=timeout, 11:55:59 chunked=chunked, 11:55:59 ) 11:55:59 11:55:59 except (ProtocolError, OSError) as err: 11:55:59 raise ConnectionError(err, request=request) 11:55:59 11:55:59 except MaxRetryError as e: 11:55:59 if isinstance(e.reason, ConnectTimeoutError): 11:55:59 # TODO: Remove this in 3.0.0: see #2811 11:55:59 if not isinstance(e.reason, NewConnectionError): 11:55:59 raise ConnectTimeout(e, request=request) 11:55:59 11:55:59 if isinstance(e.reason, ResponseError): 11:55:59 raise RetryError(e, request=request) 11:55:59 11:55:59 if isinstance(e.reason, _ProxyError): 11:55:59 raise ProxyError(e, request=request) 11:55:59 11:55:59 if isinstance(e.reason, _SSLError): 11:55:59 # This branch is for urllib3 v1.22 and later. 11:55:59 raise SSLError(e, request=request) 11:55:59 11:55:59 > raise ConnectionError(e, request=request) 11:55:59 E requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=8183): Max retries exceeded with url: /rests/operations/tapi-topology:get-topology-details (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 11:55:59 11:55:59 ../.tox/tests_tapi/lib/python3.11/site-packages/requests/adapters.py:700: ConnectionError 11:55:59 ----------------------------- Captured stdout call ----------------------------- 11:55:59 execution of test_22_check_tapi_topology_T100G 11:55:59 _____________ TransportTapitesting.test_23_check_tapi_topology_T0 ______________ 11:55:59 11:55:59 self = 11:55:59 11:55:59 def _new_conn(self) -> socket.socket: 11:55:59 """Establish a socket connection and set nodelay settings on it. 11:55:59 11:55:59 :return: New socket connection. 11:55:59 """ 11:55:59 try: 11:55:59 > sock = connection.create_connection( 11:55:59 (self._dns_host, self.port), 11:55:59 self.timeout, 11:55:59 source_address=self.source_address, 11:55:59 socket_options=self.socket_options, 11:55:59 ) 11:55:59 11:55:59 ../.tox/tests_tapi/lib/python3.11/site-packages/urllib3/connection.py:198: 11:55:59 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 11:55:59 ../.tox/tests_tapi/lib/python3.11/site-packages/urllib3/util/connection.py:85: in create_connection 11:55:59 raise err 11:55:59 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 11:55:59 11:55:59 address = ('localhost', 8183), timeout = 30, source_address = None 11:55:59 socket_options = [(6, 1, 1)] 11:55:59 11:55:59 def create_connection( 11:55:59 address: tuple[str, int], 11:55:59 timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 11:55:59 source_address: tuple[str, int] | None = None, 11:55:59 socket_options: _TYPE_SOCKET_OPTIONS | None = None, 11:55:59 ) -> socket.socket: 11:55:59 """Connect to *address* and return the socket object. 11:55:59 11:55:59 Convenience function. Connect to *address* (a 2-tuple ``(host, 11:55:59 port)``) and return the socket object. Passing the optional 11:55:59 *timeout* parameter will set the timeout on the socket instance 11:55:59 before attempting to connect. 
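Every trace in this run shows the same retry policy, Retry(total=0, connect=None, read=False, redirect=None, status=None): the total budget is zero, so the first NewConnectionError exhausts it and urlopen() raises MaxRetryError instead of retrying. A small sketch of that behaviour at the urllib3 level, assuming the urllib3 v2 API installed in this tox environment and reusing the host, port and RPC path from the failing test:

import urllib3
from urllib3.util.retry import Retry

# Sketch only: with total=0, the first connection error exhausts the retry budget.
pool = urllib3.HTTPConnectionPool("localhost", 8183,
                                  timeout=urllib3.Timeout(connect=30, read=30))
try:
    pool.urlopen("POST", "/rests/operations/tapi-topology:get-topology-details",
                 retries=Retry(total=0, connect=None, read=False,
                               redirect=None, status=None))
except urllib3.exceptions.MaxRetryError as exc:
    print("gave up after the first attempt:", exc.reason)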
If no *timeout* is supplied, the 11:55:59 global default timeout setting returned by :func:`socket.getdefaulttimeout` 11:55:59 is used. If *source_address* is set it must be a tuple of (host, port) 11:55:59 for the socket to bind as a source address before making the connection. 11:55:59 An host of '' or port 0 tells the OS to use the default. 11:55:59 """ 11:55:59 11:55:59 host, port = address 11:55:59 if host.startswith("["): 11:55:59 host = host.strip("[]") 11:55:59 err = None 11:55:59 11:55:59 # Using the value from allowed_gai_family() in the context of getaddrinfo lets 11:55:59 # us select whether to work with IPv4 DNS records, IPv6 records, or both. 11:55:59 # The original create_connection function always returns all records. 11:55:59 family = allowed_gai_family() 11:55:59 11:55:59 try: 11:55:59 host.encode("idna") 11:55:59 except UnicodeError: 11:55:59 raise LocationParseError(f"'{host}', label empty or too long") from None 11:55:59 11:55:59 for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM): 11:55:59 af, socktype, proto, canonname, sa = res 11:55:59 sock = None 11:55:59 try: 11:55:59 sock = socket.socket(af, socktype, proto) 11:55:59 11:55:59 # If provided, set socket level options before connecting. 11:55:59 _set_socket_options(sock, socket_options) 11:55:59 11:55:59 if timeout is not _DEFAULT_TIMEOUT: 11:55:59 sock.settimeout(timeout) 11:55:59 if source_address: 11:55:59 sock.bind(source_address) 11:55:59 > sock.connect(sa) 11:55:59 E ConnectionRefusedError: [Errno 111] Connection refused 11:55:59 11:55:59 ../.tox/tests_tapi/lib/python3.11/site-packages/urllib3/util/connection.py:73: ConnectionRefusedError 11:55:59 11:55:59 The above exception was the direct cause of the following exception: 11:55:59 11:55:59 self = 11:55:59 method = 'POST', url = '/rests/operations/tapi-topology:get-topology-details' 11:55:59 body = '{"input": {"topology-id": "747c670e-7a07-3dab-b379-5b1cd17402a3"}}' 11:55:59 headers = {'User-Agent': 'python-requests/2.32.4', 'Accept-Encoding': 'gzip, deflate', 'Accept': 'application/json', 'Connection': 'keep-alive', 'Content-Type': 'application/json', 'Content-Length': '66', 'Authorization': 'Basic YWRtaW46YWRtaW4='} 11:55:59 retries = Retry(total=0, connect=None, read=False, redirect=None, status=None) 11:55:59 redirect = False, assert_same_host = False 11:55:59 timeout = Timeout(connect=30, read=30, total=None), pool_timeout = None 11:55:59 release_conn = False, chunked = False, body_pos = None, preload_content = False 11:55:59 decode_content = False, response_kw = {} 11:55:59 parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/rests/operations/tapi-topology:get-topology-details', query=None, fragment=None) 11:55:59 destination_scheme = None, conn = None, release_this_conn = True 11:55:59 http_tunnel_required = False, err = None, clean_exit = False 11:55:59 11:55:59 def urlopen( # type: ignore[override] 11:55:59 self, 11:55:59 method: str, 11:55:59 url: str, 11:55:59 body: _TYPE_BODY | None = None, 11:55:59 headers: typing.Mapping[str, str] | None = None, 11:55:59 retries: Retry | bool | int | None = None, 11:55:59 redirect: bool = True, 11:55:59 assert_same_host: bool = True, 11:55:59 timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 11:55:59 pool_timeout: int | None = None, 11:55:59 release_conn: bool | None = None, 11:55:59 chunked: bool = False, 11:55:59 body_pos: _TYPE_BODY_POSITION | None = None, 11:55:59 preload_content: bool = True, 11:55:59 decode_content: bool = True, 11:55:59 **response_kw: typing.Any, 11:55:59 ) -> 
BaseHTTPResponse: 11:55:59 """ 11:55:59 Get a connection from the pool and perform an HTTP request. This is the 11:55:59 lowest level call for making a request, so you'll need to specify all 11:55:59 the raw details. 11:55:59 11:55:59 .. note:: 11:55:59 11:55:59 More commonly, it's appropriate to use a convenience method 11:55:59 such as :meth:`request`. 11:55:59 11:55:59 .. note:: 11:55:59 11:55:59 `release_conn` will only behave as expected if 11:55:59 `preload_content=False` because we want to make 11:55:59 `preload_content=False` the default behaviour someday soon without 11:55:59 breaking backwards compatibility. 11:55:59 11:55:59 :param method: 11:55:59 HTTP request method (such as GET, POST, PUT, etc.) 11:55:59 11:55:59 :param url: 11:55:59 The URL to perform the request on. 11:55:59 11:55:59 :param body: 11:55:59 Data to send in the request body, either :class:`str`, :class:`bytes`, 11:55:59 an iterable of :class:`str`/:class:`bytes`, or a file-like object. 11:55:59 11:55:59 :param headers: 11:55:59 Dictionary of custom headers to send, such as User-Agent, 11:55:59 If-None-Match, etc. If None, pool headers are used. If provided, 11:55:59 these headers completely replace any pool-specific headers. 11:55:59 11:55:59 :param retries: 11:55:59 Configure the number of retries to allow before raising a 11:55:59 :class:`~urllib3.exceptions.MaxRetryError` exception. 11:55:59 11:55:59 If ``None`` (default) will retry 3 times, see ``Retry.DEFAULT``. Pass a 11:55:59 :class:`~urllib3.util.retry.Retry` object for fine-grained control 11:55:59 over different types of retries. 11:55:59 Pass an integer number to retry connection errors that many times, 11:55:59 but no other types of errors. Pass zero to never retry. 11:55:59 11:55:59 If ``False``, then retries are disabled and any exception is raised 11:55:59 immediately. Also, instead of raising a MaxRetryError on redirects, 11:55:59 the redirect response will be returned. 11:55:59 11:55:59 :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. 11:55:59 11:55:59 :param redirect: 11:55:59 If True, automatically handle redirects (status codes 301, 302, 11:55:59 303, 307, 308). Each redirect counts as a retry. Disabling retries 11:55:59 will disable redirect, too. 11:55:59 11:55:59 :param assert_same_host: 11:55:59 If ``True``, will make sure that the host of the pool requests is 11:55:59 consistent else will raise HostChangedError. When ``False``, you can 11:55:59 use the pool on an HTTP proxy and request foreign hosts. 11:55:59 11:55:59 :param timeout: 11:55:59 If specified, overrides the default timeout for this one 11:55:59 request. It may be a float (in seconds) or an instance of 11:55:59 :class:`urllib3.util.Timeout`. 11:55:59 11:55:59 :param pool_timeout: 11:55:59 If set and the pool is set to block=True, then this method will 11:55:59 block for ``pool_timeout`` seconds and raise EmptyPoolError if no 11:55:59 connection is available within the time period. 11:55:59 11:55:59 :param bool preload_content: 11:55:59 If True, the response's body will be preloaded into memory. 11:55:59 11:55:59 :param bool decode_content: 11:55:59 If True, will attempt to decode the body based on the 11:55:59 'content-encoding' header. 11:55:59 11:55:59 :param release_conn: 11:55:59 If False, then the urlopen call will not release the connection 11:55:59 back into the pool once a response is received (but will release if 11:55:59 you read the entire contents of the response such as when 11:55:59 `preload_content=True`). 
This is useful if you're not preloading 11:55:59 the response's content immediately. You will need to call 11:55:59 ``r.release_conn()`` on the response ``r`` to return the connection 11:55:59 back into the pool. If None, it takes the value of ``preload_content`` 11:55:59 which defaults to ``True``. 11:55:59 11:55:59 :param bool chunked: 11:55:59 If True, urllib3 will send the body using chunked transfer 11:55:59 encoding. Otherwise, urllib3 will send the body using the standard 11:55:59 content-length form. Defaults to False. 11:55:59 11:55:59 :param int body_pos: 11:55:59 Position to seek to in file-like body in the event of a retry or 11:55:59 redirect. Typically this won't need to be set because urllib3 will 11:55:59 auto-populate the value when needed. 11:55:59 """ 11:55:59 parsed_url = parse_url(url) 11:55:59 destination_scheme = parsed_url.scheme 11:55:59 11:55:59 if headers is None: 11:55:59 headers = self.headers 11:55:59 11:55:59 if not isinstance(retries, Retry): 11:55:59 retries = Retry.from_int(retries, redirect=redirect, default=self.retries) 11:55:59 11:55:59 if release_conn is None: 11:55:59 release_conn = preload_content 11:55:59 11:55:59 # Check host 11:55:59 if assert_same_host and not self.is_same_host(url): 11:55:59 raise HostChangedError(self, url, retries) 11:55:59 11:55:59 # Ensure that the URL we're connecting to is properly encoded 11:55:59 if url.startswith("/"): 11:55:59 url = to_str(_encode_target(url)) 11:55:59 else: 11:55:59 url = to_str(parsed_url.url) 11:55:59 11:55:59 conn = None 11:55:59 11:55:59 # Track whether `conn` needs to be released before 11:55:59 # returning/raising/recursing. Update this variable if necessary, and 11:55:59 # leave `release_conn` constant throughout the function. That way, if 11:55:59 # the function recurses, the original value of `release_conn` will be 11:55:59 # passed down into the recursive call, and its value will be respected. 11:55:59 # 11:55:59 # See issue #651 [1] for details. 11:55:59 # 11:55:59 # [1] 11:55:59 release_this_conn = release_conn 11:55:59 11:55:59 http_tunnel_required = connection_requires_http_tunnel( 11:55:59 self.proxy, self.proxy_config, destination_scheme 11:55:59 ) 11:55:59 11:55:59 # Merge the proxy headers. Only done when not using HTTP CONNECT. We 11:55:59 # have to copy the headers dict so we can safely change it without those 11:55:59 # changes being reflected in anyone else's copy. 11:55:59 if not http_tunnel_required: 11:55:59 headers = headers.copy() # type: ignore[attr-defined] 11:55:59 headers.update(self.proxy_headers) # type: ignore[union-attr] 11:55:59 11:55:59 # Must keep the exception bound to a separate variable or else Python 3 11:55:59 # complains about UnboundLocalError. 11:55:59 err = None 11:55:59 11:55:59 # Keep track of whether we cleanly exited the except block. This 11:55:59 # ensures we do proper cleanup in finally. 11:55:59 clean_exit = False 11:55:59 11:55:59 # Rewind body position, if needed. Record current position 11:55:59 # for future rewinds in the event of a redirect/retry. 11:55:59 body_pos = set_file_position(body, body_pos) 11:55:59 11:55:59 try: 11:55:59 # Request a connection from the queue. 11:55:59 timeout_obj = self._get_timeout(timeout) 11:55:59 conn = self._get_conn(timeout=pool_timeout) 11:55:59 11:55:59 conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] 11:55:59 11:55:59 # Is this a closed/new connection that requires CONNECT tunnelling? 
11:55:59 if self.proxy is not None and http_tunnel_required and conn.is_closed: 11:55:59 try: 11:55:59 self._prepare_proxy(conn) 11:55:59 except (BaseSSLError, OSError, SocketTimeout) as e: 11:55:59 self._raise_timeout( 11:55:59 err=e, url=self.proxy.url, timeout_value=conn.timeout 11:55:59 ) 11:55:59 raise 11:55:59 11:55:59 # If we're going to release the connection in ``finally:``, then 11:55:59 # the response doesn't need to know about the connection. Otherwise 11:55:59 # it will also try to release it and we'll have a double-release 11:55:59 # mess. 11:55:59 response_conn = conn if not release_conn else None 11:55:59 11:55:59 # Make the request on the HTTPConnection object 11:55:59 > response = self._make_request( 11:55:59 conn, 11:55:59 method, 11:55:59 url, 11:55:59 timeout=timeout_obj, 11:55:59 body=body, 11:55:59 headers=headers, 11:55:59 chunked=chunked, 11:55:59 retries=retries, 11:55:59 response_conn=response_conn, 11:55:59 preload_content=preload_content, 11:55:59 decode_content=decode_content, 11:55:59 **response_kw, 11:55:59 ) 11:55:59 11:55:59 ../.tox/tests_tapi/lib/python3.11/site-packages/urllib3/connectionpool.py:787: 11:55:59 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 11:55:59 ../.tox/tests_tapi/lib/python3.11/site-packages/urllib3/connectionpool.py:493: in _make_request 11:55:59 conn.request( 11:55:59 ../.tox/tests_tapi/lib/python3.11/site-packages/urllib3/connection.py:494: in request 11:55:59 self.endheaders() 11:55:59 /opt/pyenv/versions/3.11.10/lib/python3.11/http/client.py:1298: in endheaders 11:55:59 self._send_output(message_body, encode_chunked=encode_chunked) 11:55:59 /opt/pyenv/versions/3.11.10/lib/python3.11/http/client.py:1058: in _send_output 11:55:59 self.send(msg) 11:55:59 /opt/pyenv/versions/3.11.10/lib/python3.11/http/client.py:996: in send 11:55:59 self.connect() 11:55:59 ../.tox/tests_tapi/lib/python3.11/site-packages/urllib3/connection.py:325: in connect 11:55:59 self.sock = self._new_conn() 11:55:59 ^^^^^^^^^^^^^^^^ 11:55:59 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 11:55:59 11:55:59 self = 11:55:59 11:55:59 def _new_conn(self) -> socket.socket: 11:55:59 """Establish a socket connection and set nodelay settings on it. 11:55:59 11:55:59 :return: New socket connection. 11:55:59 """ 11:55:59 try: 11:55:59 sock = connection.create_connection( 11:55:59 (self._dns_host, self.port), 11:55:59 self.timeout, 11:55:59 source_address=self.source_address, 11:55:59 socket_options=self.socket_options, 11:55:59 ) 11:55:59 except socket.gaierror as e: 11:55:59 raise NameResolutionError(self.host, self, e) from e 11:55:59 except SocketTimeout as e: 11:55:59 raise ConnectTimeoutError( 11:55:59 self, 11:55:59 f"Connection to {self.host} timed out. 
(connect timeout={self.timeout})", 11:55:59 ) from e 11:55:59 11:55:59 except OSError as e: 11:55:59 > raise NewConnectionError( 11:55:59 self, f"Failed to establish a new connection: {e}" 11:55:59 ) from e 11:55:59 E urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused 11:55:59 11:55:59 ../.tox/tests_tapi/lib/python3.11/site-packages/urllib3/connection.py:213: NewConnectionError 11:55:59 11:55:59 The above exception was the direct cause of the following exception: 11:55:59 11:55:59 self = 11:55:59 request = , stream = False 11:55:59 timeout = Timeout(connect=30, read=30, total=None), verify = True, cert = None 11:55:59 proxies = OrderedDict() 11:55:59 11:55:59 def send( 11:55:59 self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 11:55:59 ): 11:55:59 """Sends PreparedRequest object. Returns Response object. 11:55:59 11:55:59 :param request: The :class:`PreparedRequest ` being sent. 11:55:59 :param stream: (optional) Whether to stream the request content. 11:55:59 :param timeout: (optional) How long to wait for the server to send 11:55:59 data before giving up, as a float, or a :ref:`(connect timeout, 11:55:59 read timeout) ` tuple. 11:55:59 :type timeout: float or tuple or urllib3 Timeout object 11:55:59 :param verify: (optional) Either a boolean, in which case it controls whether 11:55:59 we verify the server's TLS certificate, or a string, in which case it 11:55:59 must be a path to a CA bundle to use 11:55:59 :param cert: (optional) Any user-provided SSL certificate to be trusted. 11:55:59 :param proxies: (optional) The proxies dictionary to apply to the request. 11:55:59 :rtype: requests.Response 11:55:59 """ 11:55:59 11:55:59 try: 11:55:59 conn = self.get_connection_with_tls_context( 11:55:59 request, verify, proxies=proxies, cert=cert 11:55:59 ) 11:55:59 except LocationValueError as e: 11:55:59 raise InvalidURL(e, request=request) 11:55:59 11:55:59 self.cert_verify(conn, request.url, verify, cert) 11:55:59 url = self.request_url(request, proxies) 11:55:59 self.add_headers( 11:55:59 request, 11:55:59 stream=stream, 11:55:59 timeout=timeout, 11:55:59 verify=verify, 11:55:59 cert=cert, 11:55:59 proxies=proxies, 11:55:59 ) 11:55:59 11:55:59 chunked = not (request.body is None or "Content-Length" in request.headers) 11:55:59 11:55:59 if isinstance(timeout, tuple): 11:55:59 try: 11:55:59 connect, read = timeout 11:55:59 timeout = TimeoutSauce(connect=connect, read=read) 11:55:59 except ValueError: 11:55:59 raise ValueError( 11:55:59 f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 11:55:59 f"or a single float to set both timeouts to the same value." 
11:55:59 ) 11:55:59 elif isinstance(timeout, TimeoutSauce): 11:55:59 pass 11:55:59 else: 11:55:59 timeout = TimeoutSauce(connect=timeout, read=timeout) 11:55:59 11:55:59 try: 11:55:59 > resp = conn.urlopen( 11:55:59 method=request.method, 11:55:59 url=url, 11:55:59 body=request.body, 11:55:59 headers=request.headers, 11:55:59 redirect=False, 11:55:59 assert_same_host=False, 11:55:59 preload_content=False, 11:55:59 decode_content=False, 11:55:59 retries=self.max_retries, 11:55:59 timeout=timeout, 11:55:59 chunked=chunked, 11:55:59 ) 11:55:59 11:55:59 ../.tox/tests_tapi/lib/python3.11/site-packages/requests/adapters.py:667: 11:55:59 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 11:55:59 ../.tox/tests_tapi/lib/python3.11/site-packages/urllib3/connectionpool.py:841: in urlopen 11:55:59 retries = retries.increment( 11:55:59 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 11:55:59 11:55:59 self = Retry(total=0, connect=None, read=False, redirect=None, status=None) 11:55:59 method = 'POST', url = '/rests/operations/tapi-topology:get-topology-details' 11:55:59 response = None 11:55:59 error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused') 11:55:59 _pool = 11:55:59 _stacktrace = 11:55:59 11:55:59 def increment( 11:55:59 self, 11:55:59 method: str | None = None, 11:55:59 url: str | None = None, 11:55:59 response: BaseHTTPResponse | None = None, 11:55:59 error: Exception | None = None, 11:55:59 _pool: ConnectionPool | None = None, 11:55:59 _stacktrace: TracebackType | None = None, 11:55:59 ) -> Self: 11:55:59 """Return a new Retry object with incremented retry counters. 11:55:59 11:55:59 :param response: A response object, or None, if the server did not 11:55:59 return a response. 11:55:59 :type response: :class:`~urllib3.response.BaseHTTPResponse` 11:55:59 :param Exception error: An error encountered during the request, or 11:55:59 None if the response was received successfully. 11:55:59 11:55:59 :return: A new ``Retry`` object. 11:55:59 """ 11:55:59 if self.total is False and error: 11:55:59 # Disabled, indicate to re-raise the error. 11:55:59 raise reraise(type(error), error, _stacktrace) 11:55:59 11:55:59 total = self.total 11:55:59 if total is not None: 11:55:59 total -= 1 11:55:59 11:55:59 connect = self.connect 11:55:59 read = self.read 11:55:59 redirect = self.redirect 11:55:59 status_count = self.status 11:55:59 other = self.other 11:55:59 cause = "unknown" 11:55:59 status = None 11:55:59 redirect_location = None 11:55:59 11:55:59 if error and self._is_connection_error(error): 11:55:59 # Connect retry? 11:55:59 if connect is False: 11:55:59 raise reraise(type(error), error, _stacktrace) 11:55:59 elif connect is not None: 11:55:59 connect -= 1 11:55:59 11:55:59 elif error and self._is_read_error(error): 11:55:59 # Read retry? 11:55:59 if read is False or method is None or not self._is_method_retryable(method): 11:55:59 raise reraise(type(error), error, _stacktrace) 11:55:59 elif read is not None: 11:55:59 read -= 1 11:55:59 11:55:59 elif error: 11:55:59 # Other retry? 11:55:59 if other is not None: 11:55:59 other -= 1 11:55:59 11:55:59 elif response and response.get_redirect_location(): 11:55:59 # Redirect retry? 
11:55:59 if redirect is not None: 11:55:59 redirect -= 1 11:55:59 cause = "too many redirects" 11:55:59 response_redirect_location = response.get_redirect_location() 11:55:59 if response_redirect_location: 11:55:59 redirect_location = response_redirect_location 11:55:59 status = response.status 11:55:59 11:55:59 else: 11:55:59 # Incrementing because of a server error like a 500 in 11:55:59 # status_forcelist and the given method is in the allowed_methods 11:55:59 cause = ResponseError.GENERIC_ERROR 11:55:59 if response and response.status: 11:55:59 if status_count is not None: 11:55:59 status_count -= 1 11:55:59 cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status) 11:55:59 status = response.status 11:55:59 11:55:59 history = self.history + ( 11:55:59 RequestHistory(method, url, error, status, redirect_location), 11:55:59 ) 11:55:59 11:55:59 new_retry = self.new( 11:55:59 total=total, 11:55:59 connect=connect, 11:55:59 read=read, 11:55:59 redirect=redirect, 11:55:59 status=status_count, 11:55:59 other=other, 11:55:59 history=history, 11:55:59 ) 11:55:59 11:55:59 if new_retry.is_exhausted(): 11:55:59 reason = error or ResponseError(cause) 11:55:59 > raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type] 11:55:59 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 11:55:59 E urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=8183): Max retries exceeded with url: /rests/operations/tapi-topology:get-topology-details (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 11:55:59 11:55:59 ../.tox/tests_tapi/lib/python3.11/site-packages/urllib3/util/retry.py:519: MaxRetryError 11:55:59 11:55:59 During handling of the above exception, another exception occurred: 11:55:59 11:55:59 self = 11:55:59 11:55:59 def test_23_check_tapi_topology_T0(self): 11:55:59 > self.test_19_check_tapi_topology_T0() 11:55:59 11:55:59 transportpce_tests/tapi/test01_abstracted_topology.py:384: 11:55:59 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 11:55:59 transportpce_tests/tapi/test01_abstracted_topology.py:356: in test_19_check_tapi_topology_T0 11:55:59 response = test_utils.transportpce_api_rpc_request( 11:55:59 transportpce_tests/common/test_utils.py:729: in transportpce_api_rpc_request 11:55:59 response = post_request(url, data) 11:55:59 ^^^^^^^^^^^^^^^^^^^^^^^ 11:55:59 transportpce_tests/common/test_utils.py:142: in post_request 11:55:59 return requests.request( 11:55:59 ../.tox/tests_tapi/lib/python3.11/site-packages/requests/api.py:59: in request 11:55:59 return session.request(method=method, url=url, **kwargs) 11:55:59 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 11:55:59 ../.tox/tests_tapi/lib/python3.11/site-packages/requests/sessions.py:589: in request 11:55:59 resp = self.send(prep, **send_kwargs) 11:55:59 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 11:55:59 ../.tox/tests_tapi/lib/python3.11/site-packages/requests/sessions.py:703: in send 11:55:59 r = adapter.send(request, **kwargs) 11:55:59 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 11:55:59 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 11:55:59 11:55:59 self = 11:55:59 request = , stream = False 11:55:59 timeout = Timeout(connect=30, read=30, total=None), verify = True, cert = None 11:55:59 proxies = OrderedDict() 11:55:59 11:55:59 def send( 11:55:59 self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 11:55:59 ): 11:55:59 """Sends PreparedRequest object. 
Returns Response object. 11:55:59 11:55:59 :param request: The :class:`PreparedRequest ` being sent. 11:55:59 :param stream: (optional) Whether to stream the request content. 11:55:59 :param timeout: (optional) How long to wait for the server to send 11:55:59 data before giving up, as a float, or a :ref:`(connect timeout, 11:55:59 read timeout) ` tuple. 11:55:59 :type timeout: float or tuple or urllib3 Timeout object 11:55:59 :param verify: (optional) Either a boolean, in which case it controls whether 11:55:59 we verify the server's TLS certificate, or a string, in which case it 11:55:59 must be a path to a CA bundle to use 11:55:59 :param cert: (optional) Any user-provided SSL certificate to be trusted. 11:55:59 :param proxies: (optional) The proxies dictionary to apply to the request. 11:55:59 :rtype: requests.Response 11:55:59 """ 11:55:59 11:55:59 try: 11:55:59 conn = self.get_connection_with_tls_context( 11:55:59 request, verify, proxies=proxies, cert=cert 11:55:59 ) 11:55:59 except LocationValueError as e: 11:55:59 raise InvalidURL(e, request=request) 11:55:59 11:55:59 self.cert_verify(conn, request.url, verify, cert) 11:55:59 url = self.request_url(request, proxies) 11:55:59 self.add_headers( 11:55:59 request, 11:55:59 stream=stream, 11:55:59 timeout=timeout, 11:55:59 verify=verify, 11:55:59 cert=cert, 11:55:59 proxies=proxies, 11:55:59 ) 11:55:59 11:55:59 chunked = not (request.body is None or "Content-Length" in request.headers) 11:55:59 11:55:59 if isinstance(timeout, tuple): 11:55:59 try: 11:55:59 connect, read = timeout 11:55:59 timeout = TimeoutSauce(connect=connect, read=read) 11:55:59 except ValueError: 11:55:59 raise ValueError( 11:55:59 f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 11:55:59 f"or a single float to set both timeouts to the same value." 11:55:59 ) 11:55:59 elif isinstance(timeout, TimeoutSauce): 11:55:59 pass 11:55:59 else: 11:55:59 timeout = TimeoutSauce(connect=timeout, read=timeout) 11:55:59 11:55:59 try: 11:55:59 resp = conn.urlopen( 11:55:59 method=request.method, 11:55:59 url=url, 11:55:59 body=request.body, 11:55:59 headers=request.headers, 11:55:59 redirect=False, 11:55:59 assert_same_host=False, 11:55:59 preload_content=False, 11:55:59 decode_content=False, 11:55:59 retries=self.max_retries, 11:55:59 timeout=timeout, 11:55:59 chunked=chunked, 11:55:59 ) 11:55:59 11:55:59 except (ProtocolError, OSError) as err: 11:55:59 raise ConnectionError(err, request=request) 11:55:59 11:55:59 except MaxRetryError as e: 11:55:59 if isinstance(e.reason, ConnectTimeoutError): 11:55:59 # TODO: Remove this in 3.0.0: see #2811 11:55:59 if not isinstance(e.reason, NewConnectionError): 11:55:59 raise ConnectTimeout(e, request=request) 11:55:59 11:55:59 if isinstance(e.reason, ResponseError): 11:55:59 raise RetryError(e, request=request) 11:55:59 11:55:59 if isinstance(e.reason, _ProxyError): 11:55:59 raise ProxyError(e, request=request) 11:55:59 11:55:59 if isinstance(e.reason, _SSLError): 11:55:59 # This branch is for urllib3 v1.22 and later. 
11:55:59 raise SSLError(e, request=request) 11:55:59 11:55:59 > raise ConnectionError(e, request=request) 11:55:59 E requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=8183): Max retries exceeded with url: /rests/operations/tapi-topology:get-topology-details (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 11:55:59 11:55:59 ../.tox/tests_tapi/lib/python3.11/site-packages/requests/adapters.py:700: ConnectionError 11:55:59 ----------------------------- Captured stdout call ----------------------------- 11:55:59 execution of test_23_check_tapi_topology_T0 11:55:59 _________ TransportTapitesting.test_24_connect_sprda_n1_to_roadma_pp2 __________ 11:55:59 11:55:59 self = 11:55:59 11:55:59 def _new_conn(self) -> socket.socket: 11:55:59 """Establish a socket connection and set nodelay settings on it. 11:55:59 11:55:59 :return: New socket connection. 11:55:59 """ 11:55:59 try: 11:55:59 > sock = connection.create_connection( 11:55:59 (self._dns_host, self.port), 11:55:59 self.timeout, 11:55:59 source_address=self.source_address, 11:55:59 socket_options=self.socket_options, 11:55:59 ) 11:55:59 11:55:59 ../.tox/tests_tapi/lib/python3.11/site-packages/urllib3/connection.py:198: 11:55:59 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 11:55:59 ../.tox/tests_tapi/lib/python3.11/site-packages/urllib3/util/connection.py:85: in create_connection 11:55:59 raise err 11:55:59 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 11:55:59 11:55:59 address = ('localhost', 8183), timeout = 30, source_address = None 11:55:59 socket_options = [(6, 1, 1)] 11:55:59 11:55:59 def create_connection( 11:55:59 address: tuple[str, int], 11:55:59 timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 11:55:59 source_address: tuple[str, int] | None = None, 11:55:59 socket_options: _TYPE_SOCKET_OPTIONS | None = None, 11:55:59 ) -> socket.socket: 11:55:59 """Connect to *address* and return the socket object. 11:55:59 11:55:59 Convenience function. Connect to *address* (a 2-tuple ``(host, 11:55:59 port)``) and return the socket object. Passing the optional 11:55:59 *timeout* parameter will set the timeout on the socket instance 11:55:59 before attempting to connect. If no *timeout* is supplied, the 11:55:59 global default timeout setting returned by :func:`socket.getdefaulttimeout` 11:55:59 is used. If *source_address* is set it must be a tuple of (host, port) 11:55:59 for the socket to bind as a source address before making the connection. 11:55:59 An host of '' or port 0 tells the OS to use the default. 11:55:59 """ 11:55:59 11:55:59 host, port = address 11:55:59 if host.startswith("["): 11:55:59 host = host.strip("[]") 11:55:59 err = None 11:55:59 11:55:59 # Using the value from allowed_gai_family() in the context of getaddrinfo lets 11:55:59 # us select whether to work with IPv4 DNS records, IPv6 records, or both. 11:55:59 # The original create_connection function always returns all records. 11:55:59 family = allowed_gai_family() 11:55:59 11:55:59 try: 11:55:59 host.encode("idna") 11:55:59 except UnicodeError: 11:55:59 raise LocationParseError(f"'{host}', label empty or too long") from None 11:55:59 11:55:59 for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM): 11:55:59 af, socktype, proto, canonname, sa = res 11:55:59 sock = None 11:55:59 try: 11:55:59 sock = socket.socket(af, socktype, proto) 11:55:59 11:55:59 # If provided, set socket level options before connecting. 
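As the create_connection() frames above show, the root cause of test_21 through test_24 is the same ECONNREFUSED (errno 111) on localhost:8183. A hypothetical pre-flight check along those lines, using only the standard socket module (the helper name and its use ahead of the suite are assumptions, not part of transportpce_tests):

import errno
import socket

def restconf_reachable(host="localhost", port=8183, timeout=5.0):
    # Sketch only: bare socket connect mirroring urllib3's create_connection().
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError as exc:
        if exc.errno == errno.ECONNREFUSED:
            print(f"{host}:{port} refused the connection (errno 111)")
        return False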
11:55:59 _set_socket_options(sock, socket_options) 11:55:59 11:55:59 if timeout is not _DEFAULT_TIMEOUT: 11:55:59 sock.settimeout(timeout) 11:55:59 if source_address: 11:55:59 sock.bind(source_address) 11:55:59 > sock.connect(sa) 11:55:59 E ConnectionRefusedError: [Errno 111] Connection refused 11:55:59 11:55:59 ../.tox/tests_tapi/lib/python3.11/site-packages/urllib3/util/connection.py:73: ConnectionRefusedError 11:55:59 11:55:59 The above exception was the direct cause of the following exception: 11:55:59 11:55:59 self = 11:55:59 method = 'POST' 11:55:59 url = '/rests/operations/transportpce-networkutils:init-xpdr-rdm-links' 11:55:59 body = '{"input": {"links-input": {"xpdr-node": "SPDR-SA1", "xpdr-num": "1", "network-num": "1", "rdm-node": "ROADM-A1", "srg-num": "1", "termination-point-num": "SRG1-PP2-TXRX"}}}' 11:55:59 headers = {'User-Agent': 'python-requests/2.32.4', 'Accept-Encoding': 'gzip, deflate', 'Accept': 'application/json', 'Connection': 'keep-alive', 'Content-Type': 'application/json', 'Content-Length': '172', 'Authorization': 'Basic YWRtaW46YWRtaW4='} 11:55:59 retries = Retry(total=0, connect=None, read=False, redirect=None, status=None) 11:55:59 redirect = False, assert_same_host = False 11:55:59 timeout = Timeout(connect=30, read=30, total=None), pool_timeout = None 11:55:59 release_conn = False, chunked = False, body_pos = None, preload_content = False 11:55:59 decode_content = False, response_kw = {} 11:55:59 parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/rests/operations/transportpce-networkutils:init-xpdr-rdm-links', query=None, fragment=None) 11:55:59 destination_scheme = None, conn = None, release_this_conn = True 11:55:59 http_tunnel_required = False, err = None, clean_exit = False 11:55:59 11:55:59 def urlopen( # type: ignore[override] 11:55:59 self, 11:55:59 method: str, 11:55:59 url: str, 11:55:59 body: _TYPE_BODY | None = None, 11:55:59 headers: typing.Mapping[str, str] | None = None, 11:55:59 retries: Retry | bool | int | None = None, 11:55:59 redirect: bool = True, 11:55:59 assert_same_host: bool = True, 11:55:59 timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 11:55:59 pool_timeout: int | None = None, 11:55:59 release_conn: bool | None = None, 11:55:59 chunked: bool = False, 11:55:59 body_pos: _TYPE_BODY_POSITION | None = None, 11:55:59 preload_content: bool = True, 11:55:59 decode_content: bool = True, 11:55:59 **response_kw: typing.Any, 11:55:59 ) -> BaseHTTPResponse: 11:55:59 """ 11:55:59 Get a connection from the pool and perform an HTTP request. This is the 11:55:59 lowest level call for making a request, so you'll need to specify all 11:55:59 the raw details. 11:55:59 11:55:59 .. note:: 11:55:59 11:55:59 More commonly, it's appropriate to use a convenience method 11:55:59 such as :meth:`request`. 11:55:59 11:55:59 .. note:: 11:55:59 11:55:59 `release_conn` will only behave as expected if 11:55:59 `preload_content=False` because we want to make 11:55:59 `preload_content=False` the default behaviour someday soon without 11:55:59 breaking backwards compatibility. 11:55:59 11:55:59 :param method: 11:55:59 HTTP request method (such as GET, POST, PUT, etc.) 11:55:59 11:55:59 :param url: 11:55:59 The URL to perform the request on. 11:55:59 11:55:59 :param body: 11:55:59 Data to send in the request body, either :class:`str`, :class:`bytes`, 11:55:59 an iterable of :class:`str`/:class:`bytes`, or a file-like object. 11:55:59 11:55:59 :param headers: 11:55:59 Dictionary of custom headers to send, such as User-Agent, 11:55:59 If-None-Match, etc. 
If None, pool headers are used. If provided, 11:55:59 these headers completely replace any pool-specific headers. 11:55:59 11:55:59 :param retries: 11:55:59 Configure the number of retries to allow before raising a 11:55:59 :class:`~urllib3.exceptions.MaxRetryError` exception. 11:55:59 11:55:59 If ``None`` (default) will retry 3 times, see ``Retry.DEFAULT``. Pass a 11:55:59 :class:`~urllib3.util.retry.Retry` object for fine-grained control 11:55:59 over different types of retries. 11:55:59 Pass an integer number to retry connection errors that many times, 11:55:59 but no other types of errors. Pass zero to never retry. 11:55:59 11:55:59 If ``False``, then retries are disabled and any exception is raised 11:55:59 immediately. Also, instead of raising a MaxRetryError on redirects, 11:55:59 the redirect response will be returned. 11:55:59 11:55:59 :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. 11:55:59 11:55:59 :param redirect: 11:55:59 If True, automatically handle redirects (status codes 301, 302, 11:55:59 303, 307, 308). Each redirect counts as a retry. Disabling retries 11:55:59 will disable redirect, too. 11:55:59 11:55:59 :param assert_same_host: 11:55:59 If ``True``, will make sure that the host of the pool requests is 11:55:59 consistent else will raise HostChangedError. When ``False``, you can 11:55:59 use the pool on an HTTP proxy and request foreign hosts. 11:55:59 11:55:59 :param timeout: 11:55:59 If specified, overrides the default timeout for this one 11:55:59 request. It may be a float (in seconds) or an instance of 11:55:59 :class:`urllib3.util.Timeout`. 11:55:59 11:55:59 :param pool_timeout: 11:55:59 If set and the pool is set to block=True, then this method will 11:55:59 block for ``pool_timeout`` seconds and raise EmptyPoolError if no 11:55:59 connection is available within the time period. 11:55:59 11:55:59 :param bool preload_content: 11:55:59 If True, the response's body will be preloaded into memory. 11:55:59 11:55:59 :param bool decode_content: 11:55:59 If True, will attempt to decode the body based on the 11:55:59 'content-encoding' header. 11:55:59 11:55:59 :param release_conn: 11:55:59 If False, then the urlopen call will not release the connection 11:55:59 back into the pool once a response is received (but will release if 11:55:59 you read the entire contents of the response such as when 11:55:59 `preload_content=True`). This is useful if you're not preloading 11:55:59 the response's content immediately. You will need to call 11:55:59 ``r.release_conn()`` on the response ``r`` to return the connection 11:55:59 back into the pool. If None, it takes the value of ``preload_content`` 11:55:59 which defaults to ``True``. 11:55:59 11:55:59 :param bool chunked: 11:55:59 If True, urllib3 will send the body using chunked transfer 11:55:59 encoding. Otherwise, urllib3 will send the body using the standard 11:55:59 content-length form. Defaults to False. 11:55:59 11:55:59 :param int body_pos: 11:55:59 Position to seek to in file-like body in the event of a retry or 11:55:59 redirect. Typically this won't need to be set because urllib3 will 11:55:59 auto-populate the value when needed. 
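The :param retries: text quoted above is urllib3's documentation; in this run the adapter hands it Retry(total=0, connect=None, read=False, ...), so the very first refused connection is fatal. A minimal sketch, assuming one wanted the test client to retry while the controller finishes starting (the Session, backoff values and mounted prefix below are illustrative, not taken from the test suite):

import requests
from requests.adapters import HTTPAdapter
from urllib3.util.retry import Retry

# Illustrative: give connection errors a retry budget with backoff instead of the
# Retry(total=0) the default adapter passes in the frames above.
session = requests.Session()
retry = Retry(total=5, connect=5, backoff_factor=0.5,
              allowed_methods=None)  # None retries every verb, POST included
session.mount("http://localhost:8183/", HTTPAdapter(max_retries=retry))
# session.post(..., json=..., auth=("admin", "admin"), timeout=(30, 30))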
11:55:59 """ 11:55:59 parsed_url = parse_url(url) 11:55:59 destination_scheme = parsed_url.scheme 11:55:59 11:55:59 if headers is None: 11:55:59 headers = self.headers 11:55:59 11:55:59 if not isinstance(retries, Retry): 11:55:59 retries = Retry.from_int(retries, redirect=redirect, default=self.retries) 11:55:59 11:55:59 if release_conn is None: 11:55:59 release_conn = preload_content 11:55:59 11:55:59 # Check host 11:55:59 if assert_same_host and not self.is_same_host(url): 11:55:59 raise HostChangedError(self, url, retries) 11:55:59 11:55:59 # Ensure that the URL we're connecting to is properly encoded 11:55:59 if url.startswith("/"): 11:55:59 url = to_str(_encode_target(url)) 11:55:59 else: 11:55:59 url = to_str(parsed_url.url) 11:55:59 11:55:59 conn = None 11:55:59 11:55:59 # Track whether `conn` needs to be released before 11:55:59 # returning/raising/recursing. Update this variable if necessary, and 11:55:59 # leave `release_conn` constant throughout the function. That way, if 11:55:59 # the function recurses, the original value of `release_conn` will be 11:55:59 # passed down into the recursive call, and its value will be respected. 11:55:59 # 11:55:59 # See issue #651 [1] for details. 11:55:59 # 11:55:59 # [1] 11:55:59 release_this_conn = release_conn 11:55:59 11:55:59 http_tunnel_required = connection_requires_http_tunnel( 11:55:59 self.proxy, self.proxy_config, destination_scheme 11:55:59 ) 11:55:59 11:55:59 # Merge the proxy headers. Only done when not using HTTP CONNECT. We 11:55:59 # have to copy the headers dict so we can safely change it without those 11:55:59 # changes being reflected in anyone else's copy. 11:55:59 if not http_tunnel_required: 11:55:59 headers = headers.copy() # type: ignore[attr-defined] 11:55:59 headers.update(self.proxy_headers) # type: ignore[union-attr] 11:55:59 11:55:59 # Must keep the exception bound to a separate variable or else Python 3 11:55:59 # complains about UnboundLocalError. 11:55:59 err = None 11:55:59 11:55:59 # Keep track of whether we cleanly exited the except block. This 11:55:59 # ensures we do proper cleanup in finally. 11:55:59 clean_exit = False 11:55:59 11:55:59 # Rewind body position, if needed. Record current position 11:55:59 # for future rewinds in the event of a redirect/retry. 11:55:59 body_pos = set_file_position(body, body_pos) 11:55:59 11:55:59 try: 11:55:59 # Request a connection from the queue. 11:55:59 timeout_obj = self._get_timeout(timeout) 11:55:59 conn = self._get_conn(timeout=pool_timeout) 11:55:59 11:55:59 conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] 11:55:59 11:55:59 # Is this a closed/new connection that requires CONNECT tunnelling? 11:55:59 if self.proxy is not None and http_tunnel_required and conn.is_closed: 11:55:59 try: 11:55:59 self._prepare_proxy(conn) 11:55:59 except (BaseSSLError, OSError, SocketTimeout) as e: 11:55:59 self._raise_timeout( 11:55:59 err=e, url=self.proxy.url, timeout_value=conn.timeout 11:55:59 ) 11:55:59 raise 11:55:59 11:55:59 # If we're going to release the connection in ``finally:``, then 11:55:59 # the response doesn't need to know about the connection. Otherwise 11:55:59 # it will also try to release it and we'll have a double-release 11:55:59 # mess. 
11:55:59 response_conn = conn if not release_conn else None 11:55:59 11:55:59 # Make the request on the HTTPConnection object 11:55:59 > response = self._make_request( 11:55:59 conn, 11:55:59 method, 11:55:59 url, 11:55:59 timeout=timeout_obj, 11:55:59 body=body, 11:55:59 headers=headers, 11:55:59 chunked=chunked, 11:55:59 retries=retries, 11:55:59 response_conn=response_conn, 11:55:59 preload_content=preload_content, 11:55:59 decode_content=decode_content, 11:55:59 **response_kw, 11:55:59 ) 11:55:59 11:55:59 ../.tox/tests_tapi/lib/python3.11/site-packages/urllib3/connectionpool.py:787: 11:55:59 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 11:55:59 ../.tox/tests_tapi/lib/python3.11/site-packages/urllib3/connectionpool.py:493: in _make_request 11:55:59 conn.request( 11:55:59 ../.tox/tests_tapi/lib/python3.11/site-packages/urllib3/connection.py:494: in request 11:55:59 self.endheaders() 11:55:59 /opt/pyenv/versions/3.11.10/lib/python3.11/http/client.py:1298: in endheaders 11:55:59 self._send_output(message_body, encode_chunked=encode_chunked) 11:55:59 /opt/pyenv/versions/3.11.10/lib/python3.11/http/client.py:1058: in _send_output 11:55:59 self.send(msg) 11:55:59 /opt/pyenv/versions/3.11.10/lib/python3.11/http/client.py:996: in send 11:55:59 self.connect() 11:55:59 ../.tox/tests_tapi/lib/python3.11/site-packages/urllib3/connection.py:325: in connect 11:55:59 self.sock = self._new_conn() 11:55:59 ^^^^^^^^^^^^^^^^ 11:55:59 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 11:55:59 11:55:59 self = 11:55:59 11:55:59 def _new_conn(self) -> socket.socket: 11:55:59 """Establish a socket connection and set nodelay settings on it. 11:55:59 11:55:59 :return: New socket connection. 11:55:59 """ 11:55:59 try: 11:55:59 sock = connection.create_connection( 11:55:59 (self._dns_host, self.port), 11:55:59 self.timeout, 11:55:59 source_address=self.source_address, 11:55:59 socket_options=self.socket_options, 11:55:59 ) 11:55:59 except socket.gaierror as e: 11:55:59 raise NameResolutionError(self.host, self, e) from e 11:55:59 except SocketTimeout as e: 11:55:59 raise ConnectTimeoutError( 11:55:59 self, 11:55:59 f"Connection to {self.host} timed out. (connect timeout={self.timeout})", 11:55:59 ) from e 11:55:59 11:55:59 except OSError as e: 11:55:59 > raise NewConnectionError( 11:55:59 self, f"Failed to establish a new connection: {e}" 11:55:59 ) from e 11:55:59 E urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused 11:55:59 11:55:59 ../.tox/tests_tapi/lib/python3.11/site-packages/urllib3/connection.py:213: NewConnectionError 11:55:59 11:55:59 The above exception was the direct cause of the following exception: 11:55:59 11:55:59 self = 11:55:59 request = , stream = False 11:55:59 timeout = Timeout(connect=30, read=30, total=None), verify = True, cert = None 11:55:59 proxies = OrderedDict() 11:55:59 11:55:59 def send( 11:55:59 self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 11:55:59 ): 11:55:59 """Sends PreparedRequest object. Returns Response object. 11:55:59 11:55:59 :param request: The :class:`PreparedRequest ` being sent. 11:55:59 :param stream: (optional) Whether to stream the request content. 11:55:59 :param timeout: (optional) How long to wait for the server to send 11:55:59 data before giving up, as a float, or a :ref:`(connect timeout, 11:55:59 read timeout) ` tuple. 
11:55:59 :type timeout: float or tuple or urllib3 Timeout object 11:55:59 :param verify: (optional) Either a boolean, in which case it controls whether 11:55:59 we verify the server's TLS certificate, or a string, in which case it 11:55:59 must be a path to a CA bundle to use 11:55:59 :param cert: (optional) Any user-provided SSL certificate to be trusted. 11:55:59 :param proxies: (optional) The proxies dictionary to apply to the request. 11:55:59 :rtype: requests.Response 11:55:59 """ 11:55:59 11:55:59 try: 11:55:59 conn = self.get_connection_with_tls_context( 11:55:59 request, verify, proxies=proxies, cert=cert 11:55:59 ) 11:55:59 except LocationValueError as e: 11:55:59 raise InvalidURL(e, request=request) 11:55:59 11:55:59 self.cert_verify(conn, request.url, verify, cert) 11:55:59 url = self.request_url(request, proxies) 11:55:59 self.add_headers( 11:55:59 request, 11:55:59 stream=stream, 11:55:59 timeout=timeout, 11:55:59 verify=verify, 11:55:59 cert=cert, 11:55:59 proxies=proxies, 11:55:59 ) 11:55:59 11:55:59 chunked = not (request.body is None or "Content-Length" in request.headers) 11:55:59 11:55:59 if isinstance(timeout, tuple): 11:55:59 try: 11:55:59 connect, read = timeout 11:55:59 timeout = TimeoutSauce(connect=connect, read=read) 11:55:59 except ValueError: 11:55:59 raise ValueError( 11:55:59 f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 11:55:59 f"or a single float to set both timeouts to the same value." 11:55:59 ) 11:55:59 elif isinstance(timeout, TimeoutSauce): 11:55:59 pass 11:55:59 else: 11:55:59 timeout = TimeoutSauce(connect=timeout, read=timeout) 11:55:59 11:55:59 try: 11:55:59 > resp = conn.urlopen( 11:55:59 method=request.method, 11:55:59 url=url, 11:55:59 body=request.body, 11:55:59 headers=request.headers, 11:55:59 redirect=False, 11:55:59 assert_same_host=False, 11:55:59 preload_content=False, 11:55:59 decode_content=False, 11:55:59 retries=self.max_retries, 11:55:59 timeout=timeout, 11:55:59 chunked=chunked, 11:55:59 ) 11:55:59 11:55:59 ../.tox/tests_tapi/lib/python3.11/site-packages/requests/adapters.py:667: 11:55:59 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 11:55:59 ../.tox/tests_tapi/lib/python3.11/site-packages/urllib3/connectionpool.py:841: in urlopen 11:55:59 retries = retries.increment( 11:55:59 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 11:55:59 11:55:59 self = Retry(total=0, connect=None, read=False, redirect=None, status=None) 11:55:59 method = 'POST' 11:55:59 url = '/rests/operations/transportpce-networkutils:init-xpdr-rdm-links' 11:55:59 response = None 11:55:59 error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused') 11:55:59 _pool = 11:55:59 _stacktrace = 11:55:59 11:55:59 def increment( 11:55:59 self, 11:55:59 method: str | None = None, 11:55:59 url: str | None = None, 11:55:59 response: BaseHTTPResponse | None = None, 11:55:59 error: Exception | None = None, 11:55:59 _pool: ConnectionPool | None = None, 11:55:59 _stacktrace: TracebackType | None = None, 11:55:59 ) -> Self: 11:55:59 """Return a new Retry object with incremented retry counters. 11:55:59 11:55:59 :param response: A response object, or None, if the server did not 11:55:59 return a response. 11:55:59 :type response: :class:`~urllib3.response.BaseHTTPResponse` 11:55:59 :param Exception error: An error encountered during the request, or 11:55:59 None if the response was received successfully. 11:55:59 11:55:59 :return: A new ``Retry`` object. 
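The increment() implementation quoted above is where the MaxRetryError at urllib3/util/retry.py:519 comes from. A minimal sketch that reproduces that exhaustion path in isolation, using the same Retry values as the traceback; the plain OSError merely stands in for urllib3's NewConnectionError:

from urllib3.util.retry import Retry
from urllib3.exceptions import MaxRetryError

# Same Retry object as in the traceback: total=0 means one attempt, no retries.
retry = Retry(total=0, connect=None, read=False, redirect=None, status=None)
try:
    retry.increment(method="POST",
                    url="/rests/operations/transportpce-networkutils:init-xpdr-rdm-links",
                    error=OSError("[Errno 111] Connection refused"))
except MaxRetryError as exc:
    print(exc)  # "Max retries exceeded with url: ... (Caused by ...)"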
11:55:59 """ 11:55:59 if self.total is False and error: 11:55:59 # Disabled, indicate to re-raise the error. 11:55:59 raise reraise(type(error), error, _stacktrace) 11:55:59 11:55:59 total = self.total 11:55:59 if total is not None: 11:55:59 total -= 1 11:55:59 11:55:59 connect = self.connect 11:55:59 read = self.read 11:55:59 redirect = self.redirect 11:55:59 status_count = self.status 11:55:59 other = self.other 11:55:59 cause = "unknown" 11:55:59 status = None 11:55:59 redirect_location = None 11:55:59 11:55:59 if error and self._is_connection_error(error): 11:55:59 # Connect retry? 11:55:59 if connect is False: 11:55:59 raise reraise(type(error), error, _stacktrace) 11:55:59 elif connect is not None: 11:55:59 connect -= 1 11:55:59 11:55:59 elif error and self._is_read_error(error): 11:55:59 # Read retry? 11:55:59 if read is False or method is None or not self._is_method_retryable(method): 11:55:59 raise reraise(type(error), error, _stacktrace) 11:55:59 elif read is not None: 11:55:59 read -= 1 11:55:59 11:55:59 elif error: 11:55:59 # Other retry? 11:55:59 if other is not None: 11:55:59 other -= 1 11:55:59 11:55:59 elif response and response.get_redirect_location(): 11:55:59 # Redirect retry? 11:55:59 if redirect is not None: 11:55:59 redirect -= 1 11:55:59 cause = "too many redirects" 11:55:59 response_redirect_location = response.get_redirect_location() 11:55:59 if response_redirect_location: 11:55:59 redirect_location = response_redirect_location 11:55:59 status = response.status 11:55:59 11:55:59 else: 11:55:59 # Incrementing because of a server error like a 500 in 11:55:59 # status_forcelist and the given method is in the allowed_methods 11:55:59 cause = ResponseError.GENERIC_ERROR 11:55:59 if response and response.status: 11:55:59 if status_count is not None: 11:55:59 status_count -= 1 11:55:59 cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status) 11:55:59 status = response.status 11:55:59 11:55:59 history = self.history + ( 11:55:59 RequestHistory(method, url, error, status, redirect_location), 11:55:59 ) 11:55:59 11:55:59 new_retry = self.new( 11:55:59 total=total, 11:55:59 connect=connect, 11:55:59 read=read, 11:55:59 redirect=redirect, 11:55:59 status=status_count, 11:55:59 other=other, 11:55:59 history=history, 11:55:59 ) 11:55:59 11:55:59 if new_retry.is_exhausted(): 11:55:59 reason = error or ResponseError(cause) 11:55:59 > raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type] 11:55:59 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 11:55:59 E urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=8183): Max retries exceeded with url: /rests/operations/transportpce-networkutils:init-xpdr-rdm-links (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 11:55:59 11:55:59 ../.tox/tests_tapi/lib/python3.11/site-packages/urllib3/util/retry.py:519: MaxRetryError 11:55:59 11:55:59 During handling of the above exception, another exception occurred: 11:55:59 11:55:59 self = 11:55:59 11:55:59 def test_24_connect_sprda_n1_to_roadma_pp2(self): 11:55:59 > response = test_utils.transportpce_api_rpc_request( 11:55:59 'transportpce-networkutils', 'init-xpdr-rdm-links', 11:55:59 {'links-input': {'xpdr-node': 'SPDR-SA1', 'xpdr-num': '1', 'network-num': '1', 11:55:59 'rdm-node': 'ROADM-A1', 'srg-num': '1', 'termination-point-num': 'SRG1-PP2-TXRX'}}) 11:55:59 11:55:59 transportpce_tests/tapi/test01_abstracted_topology.py:387: 11:55:59 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 11:55:59 transportpce_tests/common/test_utils.py:729: in transportpce_api_rpc_request 11:55:59 response = post_request(url, data) 11:55:59 ^^^^^^^^^^^^^^^^^^^^^^^ 11:55:59 transportpce_tests/common/test_utils.py:142: in post_request 11:55:59 return requests.request( 11:55:59 ../.tox/tests_tapi/lib/python3.11/site-packages/requests/api.py:59: in request 11:55:59 return session.request(method=method, url=url, **kwargs) 11:55:59 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 11:55:59 ../.tox/tests_tapi/lib/python3.11/site-packages/requests/sessions.py:589: in request 11:55:59 resp = self.send(prep, **send_kwargs) 11:55:59 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 11:55:59 ../.tox/tests_tapi/lib/python3.11/site-packages/requests/sessions.py:703: in send 11:55:59 r = adapter.send(request, **kwargs) 11:55:59 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 11:55:59 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 11:55:59 11:55:59 self = 11:55:59 request = , stream = False 11:55:59 timeout = Timeout(connect=30, read=30, total=None), verify = True, cert = None 11:55:59 proxies = OrderedDict() 11:55:59 11:55:59 def send( 11:55:59 self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 11:55:59 ): 11:55:59 """Sends PreparedRequest object. Returns Response object. 11:55:59 11:55:59 :param request: The :class:`PreparedRequest ` being sent. 11:55:59 :param stream: (optional) Whether to stream the request content. 11:55:59 :param timeout: (optional) How long to wait for the server to send 11:55:59 data before giving up, as a float, or a :ref:`(connect timeout, 11:55:59 read timeout) ` tuple. 11:55:59 :type timeout: float or tuple or urllib3 Timeout object 11:55:59 :param verify: (optional) Either a boolean, in which case it controls whether 11:55:59 we verify the server's TLS certificate, or a string, in which case it 11:55:59 must be a path to a CA bundle to use 11:55:59 :param cert: (optional) Any user-provided SSL certificate to be trusted. 11:55:59 :param proxies: (optional) The proxies dictionary to apply to the request. 11:55:59 :rtype: requests.Response 11:55:59 """ 11:55:59 11:55:59 try: 11:55:59 conn = self.get_connection_with_tls_context( 11:55:59 request, verify, proxies=proxies, cert=cert 11:55:59 ) 11:55:59 except LocationValueError as e: 11:55:59 raise InvalidURL(e, request=request) 11:55:59 11:55:59 self.cert_verify(conn, request.url, verify, cert) 11:55:59 url = self.request_url(request, proxies) 11:55:59 self.add_headers( 11:55:59 request, 11:55:59 stream=stream, 11:55:59 timeout=timeout, 11:55:59 verify=verify, 11:55:59 cert=cert, 11:55:59 proxies=proxies, 11:55:59 ) 11:55:59 11:55:59 chunked = not (request.body is None or "Content-Length" in request.headers) 11:55:59 11:55:59 if isinstance(timeout, tuple): 11:55:59 try: 11:55:59 connect, read = timeout 11:55:59 timeout = TimeoutSauce(connect=connect, read=read) 11:55:59 except ValueError: 11:55:59 raise ValueError( 11:55:59 f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 11:55:59 f"or a single float to set both timeouts to the same value." 
11:55:59 ) 11:55:59 elif isinstance(timeout, TimeoutSauce): 11:55:59 pass 11:55:59 else: 11:55:59 timeout = TimeoutSauce(connect=timeout, read=timeout) 11:55:59 11:55:59 try: 11:55:59 resp = conn.urlopen( 11:55:59 method=request.method, 11:55:59 url=url, 11:55:59 body=request.body, 11:55:59 headers=request.headers, 11:55:59 redirect=False, 11:55:59 assert_same_host=False, 11:55:59 preload_content=False, 11:55:59 decode_content=False, 11:55:59 retries=self.max_retries, 11:55:59 timeout=timeout, 11:55:59 chunked=chunked, 11:55:59 ) 11:55:59 11:55:59 except (ProtocolError, OSError) as err: 11:55:59 raise ConnectionError(err, request=request) 11:55:59 11:55:59 except MaxRetryError as e: 11:55:59 if isinstance(e.reason, ConnectTimeoutError): 11:55:59 # TODO: Remove this in 3.0.0: see #2811 11:55:59 if not isinstance(e.reason, NewConnectionError): 11:55:59 raise ConnectTimeout(e, request=request) 11:55:59 11:55:59 if isinstance(e.reason, ResponseError): 11:55:59 raise RetryError(e, request=request) 11:55:59 11:55:59 if isinstance(e.reason, _ProxyError): 11:55:59 raise ProxyError(e, request=request) 11:55:59 11:55:59 if isinstance(e.reason, _SSLError): 11:55:59 # This branch is for urllib3 v1.22 and later. 11:55:59 raise SSLError(e, request=request) 11:55:59 11:55:59 > raise ConnectionError(e, request=request) 11:55:59 E requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=8183): Max retries exceeded with url: /rests/operations/transportpce-networkutils:init-xpdr-rdm-links (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 11:55:59 11:55:59 ../.tox/tests_tapi/lib/python3.11/site-packages/requests/adapters.py:700: ConnectionError 11:55:59 ----------------------------- Captured stdout call ----------------------------- 11:55:59 execution of test_24_connect_sprda_n1_to_roadma_pp2 11:55:59 _________ TransportTapitesting.test_25_connect_roadma_pp2_to_spdra_n1 __________ 11:55:59 11:55:59 self = 11:55:59 11:55:59 def _new_conn(self) -> socket.socket: 11:55:59 """Establish a socket connection and set nodelay settings on it. 11:55:59 11:55:59 :return: New socket connection. 11:55:59 """ 11:55:59 try: 11:55:59 > sock = connection.create_connection( 11:55:59 (self._dns_host, self.port), 11:55:59 self.timeout, 11:55:59 source_address=self.source_address, 11:55:59 socket_options=self.socket_options, 11:55:59 ) 11:55:59 11:55:59 ../.tox/tests_tapi/lib/python3.11/site-packages/urllib3/connection.py:198: 11:55:59 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 11:55:59 ../.tox/tests_tapi/lib/python3.11/site-packages/urllib3/util/connection.py:85: in create_connection 11:55:59 raise err 11:55:59 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 11:55:59 11:55:59 address = ('localhost', 8183), timeout = 30, source_address = None 11:55:59 socket_options = [(6, 1, 1)] 11:55:59 11:55:59 def create_connection( 11:55:59 address: tuple[str, int], 11:55:59 timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 11:55:59 source_address: tuple[str, int] | None = None, 11:55:59 socket_options: _TYPE_SOCKET_OPTIONS | None = None, 11:55:59 ) -> socket.socket: 11:55:59 """Connect to *address* and return the socket object. 11:55:59 11:55:59 Convenience function. Connect to *address* (a 2-tuple ``(host, 11:55:59 port)``) and return the socket object. Passing the optional 11:55:59 *timeout* parameter will set the timeout on the socket instance 11:55:59 before attempting to connect. 
If no *timeout* is supplied, the 11:55:59 global default timeout setting returned by :func:`socket.getdefaulttimeout` 11:55:59 is used. If *source_address* is set it must be a tuple of (host, port) 11:55:59 for the socket to bind as a source address before making the connection. 11:55:59 An host of '' or port 0 tells the OS to use the default. 11:55:59 """ 11:55:59 11:55:59 host, port = address 11:55:59 if host.startswith("["): 11:55:59 host = host.strip("[]") 11:55:59 err = None 11:55:59 11:55:59 # Using the value from allowed_gai_family() in the context of getaddrinfo lets 11:55:59 # us select whether to work with IPv4 DNS records, IPv6 records, or both. 11:55:59 # The original create_connection function always returns all records. 11:55:59 family = allowed_gai_family() 11:55:59 11:55:59 try: 11:55:59 host.encode("idna") 11:55:59 except UnicodeError: 11:55:59 raise LocationParseError(f"'{host}', label empty or too long") from None 11:55:59 11:55:59 for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM): 11:55:59 af, socktype, proto, canonname, sa = res 11:55:59 sock = None 11:55:59 try: 11:55:59 sock = socket.socket(af, socktype, proto) 11:55:59 11:55:59 # If provided, set socket level options before connecting. 11:55:59 _set_socket_options(sock, socket_options) 11:55:59 11:55:59 if timeout is not _DEFAULT_TIMEOUT: 11:55:59 sock.settimeout(timeout) 11:55:59 if source_address: 11:55:59 sock.bind(source_address) 11:55:59 > sock.connect(sa) 11:55:59 E ConnectionRefusedError: [Errno 111] Connection refused 11:55:59 11:55:59 ../.tox/tests_tapi/lib/python3.11/site-packages/urllib3/util/connection.py:73: ConnectionRefusedError 11:55:59 11:55:59 The above exception was the direct cause of the following exception: 11:55:59 11:55:59 self = 11:55:59 method = 'POST' 11:55:59 url = '/rests/operations/transportpce-networkutils:init-rdm-xpdr-links' 11:55:59 body = '{"input": {"links-input": {"xpdr-node": "SPDR-SA1", "xpdr-num": "1", "network-num": "1", "rdm-node": "ROADM-A1", "srg-num": "1", "termination-point-num": "SRG1-PP2-TXRX"}}}' 11:55:59 headers = {'User-Agent': 'python-requests/2.32.4', 'Accept-Encoding': 'gzip, deflate', 'Accept': 'application/json', 'Connection': 'keep-alive', 'Content-Type': 'application/json', 'Content-Length': '172', 'Authorization': 'Basic YWRtaW46YWRtaW4='} 11:55:59 retries = Retry(total=0, connect=None, read=False, redirect=None, status=None) 11:55:59 redirect = False, assert_same_host = False 11:55:59 timeout = Timeout(connect=30, read=30, total=None), pool_timeout = None 11:55:59 release_conn = False, chunked = False, body_pos = None, preload_content = False 11:55:59 decode_content = False, response_kw = {} 11:55:59 parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/rests/operations/transportpce-networkutils:init-rdm-xpdr-links', query=None, fragment=None) 11:55:59 destination_scheme = None, conn = None, release_this_conn = True 11:55:59 http_tunnel_required = False, err = None, clean_exit = False 11:55:59 11:55:59 def urlopen( # type: ignore[override] 11:55:59 self, 11:55:59 method: str, 11:55:59 url: str, 11:55:59 body: _TYPE_BODY | None = None, 11:55:59 headers: typing.Mapping[str, str] | None = None, 11:55:59 retries: Retry | bool | int | None = None, 11:55:59 redirect: bool = True, 11:55:59 assert_same_host: bool = True, 11:55:59 timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 11:55:59 pool_timeout: int | None = None, 11:55:59 release_conn: bool | None = None, 11:55:59 chunked: bool = False, 11:55:59 body_pos: _TYPE_BODY_POSITION | 
None = None, 11:55:59 preload_content: bool = True, 11:55:59 decode_content: bool = True, 11:55:59 **response_kw: typing.Any, 11:55:59 ) -> BaseHTTPResponse: 11:55:59 """ 11:55:59 Get a connection from the pool and perform an HTTP request. This is the 11:55:59 lowest level call for making a request, so you'll need to specify all 11:55:59 the raw details. 11:55:59 11:55:59 .. note:: 11:55:59 11:55:59 More commonly, it's appropriate to use a convenience method 11:55:59 such as :meth:`request`. 11:55:59 11:55:59 .. note:: 11:55:59 11:55:59 `release_conn` will only behave as expected if 11:55:59 `preload_content=False` because we want to make 11:55:59 `preload_content=False` the default behaviour someday soon without 11:55:59 breaking backwards compatibility. 11:55:59 11:55:59 :param method: 11:55:59 HTTP request method (such as GET, POST, PUT, etc.) 11:55:59 11:55:59 :param url: 11:55:59 The URL to perform the request on. 11:55:59 11:55:59 :param body: 11:55:59 Data to send in the request body, either :class:`str`, :class:`bytes`, 11:55:59 an iterable of :class:`str`/:class:`bytes`, or a file-like object. 11:55:59 11:55:59 :param headers: 11:55:59 Dictionary of custom headers to send, such as User-Agent, 11:55:59 If-None-Match, etc. If None, pool headers are used. If provided, 11:55:59 these headers completely replace any pool-specific headers. 11:55:59 11:55:59 :param retries: 11:55:59 Configure the number of retries to allow before raising a 11:55:59 :class:`~urllib3.exceptions.MaxRetryError` exception. 11:55:59 11:55:59 If ``None`` (default) will retry 3 times, see ``Retry.DEFAULT``. Pass a 11:55:59 :class:`~urllib3.util.retry.Retry` object for fine-grained control 11:55:59 over different types of retries. 11:55:59 Pass an integer number to retry connection errors that many times, 11:55:59 but no other types of errors. Pass zero to never retry. 11:55:59 11:55:59 If ``False``, then retries are disabled and any exception is raised 11:55:59 immediately. Also, instead of raising a MaxRetryError on redirects, 11:55:59 the redirect response will be returned. 11:55:59 11:55:59 :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. 11:55:59 11:55:59 :param redirect: 11:55:59 If True, automatically handle redirects (status codes 301, 302, 11:55:59 303, 307, 308). Each redirect counts as a retry. Disabling retries 11:55:59 will disable redirect, too. 11:55:59 11:55:59 :param assert_same_host: 11:55:59 If ``True``, will make sure that the host of the pool requests is 11:55:59 consistent else will raise HostChangedError. When ``False``, you can 11:55:59 use the pool on an HTTP proxy and request foreign hosts. 11:55:59 11:55:59 :param timeout: 11:55:59 If specified, overrides the default timeout for this one 11:55:59 request. It may be a float (in seconds) or an instance of 11:55:59 :class:`urllib3.util.Timeout`. 11:55:59 11:55:59 :param pool_timeout: 11:55:59 If set and the pool is set to block=True, then this method will 11:55:59 block for ``pool_timeout`` seconds and raise EmptyPoolError if no 11:55:59 connection is available within the time period. 11:55:59 11:55:59 :param bool preload_content: 11:55:59 If True, the response's body will be preloaded into memory. 11:55:59 11:55:59 :param bool decode_content: 11:55:59 If True, will attempt to decode the body based on the 11:55:59 'content-encoding' header. 
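The timeout = Timeout(connect=30, read=30, total=None) values visible in these frames are what requests builds from a (connect, read) pair. A small sketch of that object, constructed directly with urllib3 (values copied from the log):

from urllib3.util import Timeout

# In requests, timeout=(30, 30) on a call means connect=30s, read=30s; the adapter
# converts it into the urllib3 Timeout object shown in the frames above.
timeout = Timeout(connect=30, read=30)   # total stays None
print(timeout.connect_timeout, timeout.read_timeout)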
11:55:59 11:55:59 :param release_conn: 11:55:59 If False, then the urlopen call will not release the connection 11:55:59 back into the pool once a response is received (but will release if 11:55:59 you read the entire contents of the response such as when 11:55:59 `preload_content=True`). This is useful if you're not preloading 11:55:59 the response's content immediately. You will need to call 11:55:59 ``r.release_conn()`` on the response ``r`` to return the connection 11:55:59 back into the pool. If None, it takes the value of ``preload_content`` 11:55:59 which defaults to ``True``. 11:55:59 11:55:59 :param bool chunked: 11:55:59 If True, urllib3 will send the body using chunked transfer 11:55:59 encoding. Otherwise, urllib3 will send the body using the standard 11:55:59 content-length form. Defaults to False. 11:55:59 11:55:59 :param int body_pos: 11:55:59 Position to seek to in file-like body in the event of a retry or 11:55:59 redirect. Typically this won't need to be set because urllib3 will 11:55:59 auto-populate the value when needed. 11:55:59 """ 11:55:59 parsed_url = parse_url(url) 11:55:59 destination_scheme = parsed_url.scheme 11:55:59 11:55:59 if headers is None: 11:55:59 headers = self.headers 11:55:59 11:55:59 if not isinstance(retries, Retry): 11:55:59 retries = Retry.from_int(retries, redirect=redirect, default=self.retries) 11:55:59 11:55:59 if release_conn is None: 11:55:59 release_conn = preload_content 11:55:59 11:55:59 # Check host 11:55:59 if assert_same_host and not self.is_same_host(url): 11:55:59 raise HostChangedError(self, url, retries) 11:55:59 11:55:59 # Ensure that the URL we're connecting to is properly encoded 11:55:59 if url.startswith("/"): 11:55:59 url = to_str(_encode_target(url)) 11:55:59 else: 11:55:59 url = to_str(parsed_url.url) 11:55:59 11:55:59 conn = None 11:55:59 11:55:59 # Track whether `conn` needs to be released before 11:55:59 # returning/raising/recursing. Update this variable if necessary, and 11:55:59 # leave `release_conn` constant throughout the function. That way, if 11:55:59 # the function recurses, the original value of `release_conn` will be 11:55:59 # passed down into the recursive call, and its value will be respected. 11:55:59 # 11:55:59 # See issue #651 [1] for details. 11:55:59 # 11:55:59 # [1] 11:55:59 release_this_conn = release_conn 11:55:59 11:55:59 http_tunnel_required = connection_requires_http_tunnel( 11:55:59 self.proxy, self.proxy_config, destination_scheme 11:55:59 ) 11:55:59 11:55:59 # Merge the proxy headers. Only done when not using HTTP CONNECT. We 11:55:59 # have to copy the headers dict so we can safely change it without those 11:55:59 # changes being reflected in anyone else's copy. 11:55:59 if not http_tunnel_required: 11:55:59 headers = headers.copy() # type: ignore[attr-defined] 11:55:59 headers.update(self.proxy_headers) # type: ignore[union-attr] 11:55:59 11:55:59 # Must keep the exception bound to a separate variable or else Python 3 11:55:59 # complains about UnboundLocalError. 11:55:59 err = None 11:55:59 11:55:59 # Keep track of whether we cleanly exited the except block. This 11:55:59 # ensures we do proper cleanup in finally. 11:55:59 clean_exit = False 11:55:59 11:55:59 # Rewind body position, if needed. Record current position 11:55:59 # for future rewinds in the event of a redirect/retry. 11:55:59 body_pos = set_file_position(body, body_pos) 11:55:59 11:55:59 try: 11:55:59 # Request a connection from the queue. 
11:55:59 timeout_obj = self._get_timeout(timeout) 11:55:59 conn = self._get_conn(timeout=pool_timeout) 11:55:59 11:55:59 conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] 11:55:59 11:55:59 # Is this a closed/new connection that requires CONNECT tunnelling? 11:55:59 if self.proxy is not None and http_tunnel_required and conn.is_closed: 11:55:59 try: 11:55:59 self._prepare_proxy(conn) 11:55:59 except (BaseSSLError, OSError, SocketTimeout) as e: 11:55:59 self._raise_timeout( 11:55:59 err=e, url=self.proxy.url, timeout_value=conn.timeout 11:55:59 ) 11:55:59 raise 11:55:59 11:55:59 # If we're going to release the connection in ``finally:``, then 11:55:59 # the response doesn't need to know about the connection. Otherwise 11:55:59 # it will also try to release it and we'll have a double-release 11:55:59 # mess. 11:55:59 response_conn = conn if not release_conn else None 11:55:59 11:55:59 # Make the request on the HTTPConnection object 11:55:59 > response = self._make_request( 11:55:59 conn, 11:55:59 method, 11:55:59 url, 11:55:59 timeout=timeout_obj, 11:55:59 body=body, 11:55:59 headers=headers, 11:55:59 chunked=chunked, 11:55:59 retries=retries, 11:55:59 response_conn=response_conn, 11:55:59 preload_content=preload_content, 11:55:59 decode_content=decode_content, 11:55:59 **response_kw, 11:55:59 ) 11:55:59 11:55:59 ../.tox/tests_tapi/lib/python3.11/site-packages/urllib3/connectionpool.py:787: 11:55:59 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 11:55:59 ../.tox/tests_tapi/lib/python3.11/site-packages/urllib3/connectionpool.py:493: in _make_request 11:55:59 conn.request( 11:55:59 ../.tox/tests_tapi/lib/python3.11/site-packages/urllib3/connection.py:494: in request 11:55:59 self.endheaders() 11:55:59 /opt/pyenv/versions/3.11.10/lib/python3.11/http/client.py:1298: in endheaders 11:55:59 self._send_output(message_body, encode_chunked=encode_chunked) 11:55:59 /opt/pyenv/versions/3.11.10/lib/python3.11/http/client.py:1058: in _send_output 11:55:59 self.send(msg) 11:55:59 /opt/pyenv/versions/3.11.10/lib/python3.11/http/client.py:996: in send 11:55:59 self.connect() 11:55:59 ../.tox/tests_tapi/lib/python3.11/site-packages/urllib3/connection.py:325: in connect 11:55:59 self.sock = self._new_conn() 11:55:59 ^^^^^^^^^^^^^^^^ 11:55:59 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 11:55:59 11:55:59 self = 11:55:59 11:55:59 def _new_conn(self) -> socket.socket: 11:55:59 """Establish a socket connection and set nodelay settings on it. 11:55:59 11:55:59 :return: New socket connection. 11:55:59 """ 11:55:59 try: 11:55:59 sock = connection.create_connection( 11:55:59 (self._dns_host, self.port), 11:55:59 self.timeout, 11:55:59 source_address=self.source_address, 11:55:59 socket_options=self.socket_options, 11:55:59 ) 11:55:59 except socket.gaierror as e: 11:55:59 raise NameResolutionError(self.host, self, e) from e 11:55:59 except SocketTimeout as e: 11:55:59 raise ConnectTimeoutError( 11:55:59 self, 11:55:59 f"Connection to {self.host} timed out. 
(connect timeout={self.timeout})", 11:55:59 ) from e 11:55:59 11:55:59 except OSError as e: 11:55:59 > raise NewConnectionError( 11:55:59 self, f"Failed to establish a new connection: {e}" 11:55:59 ) from e 11:55:59 E urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused 11:55:59 11:55:59 ../.tox/tests_tapi/lib/python3.11/site-packages/urllib3/connection.py:213: NewConnectionError 11:55:59 11:55:59 The above exception was the direct cause of the following exception: 11:55:59 11:55:59 self = 11:55:59 request = , stream = False 11:55:59 timeout = Timeout(connect=30, read=30, total=None), verify = True, cert = None 11:55:59 proxies = OrderedDict() 11:55:59 11:55:59 def send( 11:55:59 self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 11:55:59 ): 11:55:59 """Sends PreparedRequest object. Returns Response object. 11:55:59 11:55:59 :param request: The :class:`PreparedRequest ` being sent. 11:55:59 :param stream: (optional) Whether to stream the request content. 11:55:59 :param timeout: (optional) How long to wait for the server to send 11:55:59 data before giving up, as a float, or a :ref:`(connect timeout, 11:55:59 read timeout) ` tuple. 11:55:59 :type timeout: float or tuple or urllib3 Timeout object 11:55:59 :param verify: (optional) Either a boolean, in which case it controls whether 11:55:59 we verify the server's TLS certificate, or a string, in which case it 11:55:59 must be a path to a CA bundle to use 11:55:59 :param cert: (optional) Any user-provided SSL certificate to be trusted. 11:55:59 :param proxies: (optional) The proxies dictionary to apply to the request. 11:55:59 :rtype: requests.Response 11:55:59 """ 11:55:59 11:55:59 try: 11:55:59 conn = self.get_connection_with_tls_context( 11:55:59 request, verify, proxies=proxies, cert=cert 11:55:59 ) 11:55:59 except LocationValueError as e: 11:55:59 raise InvalidURL(e, request=request) 11:55:59 11:55:59 self.cert_verify(conn, request.url, verify, cert) 11:55:59 url = self.request_url(request, proxies) 11:55:59 self.add_headers( 11:55:59 request, 11:55:59 stream=stream, 11:55:59 timeout=timeout, 11:55:59 verify=verify, 11:55:59 cert=cert, 11:55:59 proxies=proxies, 11:55:59 ) 11:55:59 11:55:59 chunked = not (request.body is None or "Content-Length" in request.headers) 11:55:59 11:55:59 if isinstance(timeout, tuple): 11:55:59 try: 11:55:59 connect, read = timeout 11:55:59 timeout = TimeoutSauce(connect=connect, read=read) 11:55:59 except ValueError: 11:55:59 raise ValueError( 11:55:59 f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 11:55:59 f"or a single float to set both timeouts to the same value." 
11:55:59 ) 11:55:59 elif isinstance(timeout, TimeoutSauce): 11:55:59 pass 11:55:59 else: 11:55:59 timeout = TimeoutSauce(connect=timeout, read=timeout) 11:55:59 11:55:59 try: 11:55:59 > resp = conn.urlopen( 11:55:59 method=request.method, 11:55:59 url=url, 11:55:59 body=request.body, 11:55:59 headers=request.headers, 11:55:59 redirect=False, 11:55:59 assert_same_host=False, 11:55:59 preload_content=False, 11:55:59 decode_content=False, 11:55:59 retries=self.max_retries, 11:55:59 timeout=timeout, 11:55:59 chunked=chunked, 11:55:59 ) 11:55:59 11:55:59 ../.tox/tests_tapi/lib/python3.11/site-packages/requests/adapters.py:667: 11:55:59 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 11:55:59 ../.tox/tests_tapi/lib/python3.11/site-packages/urllib3/connectionpool.py:841: in urlopen 11:55:59 retries = retries.increment( 11:55:59 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 11:55:59 11:55:59 self = Retry(total=0, connect=None, read=False, redirect=None, status=None) 11:55:59 method = 'POST' 11:55:59 url = '/rests/operations/transportpce-networkutils:init-rdm-xpdr-links' 11:55:59 response = None 11:55:59 error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused') 11:55:59 _pool = 11:55:59 _stacktrace = 11:55:59 11:55:59 def increment( 11:55:59 self, 11:55:59 method: str | None = None, 11:55:59 url: str | None = None, 11:55:59 response: BaseHTTPResponse | None = None, 11:55:59 error: Exception | None = None, 11:55:59 _pool: ConnectionPool | None = None, 11:55:59 _stacktrace: TracebackType | None = None, 11:55:59 ) -> Self: 11:55:59 """Return a new Retry object with incremented retry counters. 11:55:59 11:55:59 :param response: A response object, or None, if the server did not 11:55:59 return a response. 11:55:59 :type response: :class:`~urllib3.response.BaseHTTPResponse` 11:55:59 :param Exception error: An error encountered during the request, or 11:55:59 None if the response was received successfully. 11:55:59 11:55:59 :return: A new ``Retry`` object. 11:55:59 """ 11:55:59 if self.total is False and error: 11:55:59 # Disabled, indicate to re-raise the error. 11:55:59 raise reraise(type(error), error, _stacktrace) 11:55:59 11:55:59 total = self.total 11:55:59 if total is not None: 11:55:59 total -= 1 11:55:59 11:55:59 connect = self.connect 11:55:59 read = self.read 11:55:59 redirect = self.redirect 11:55:59 status_count = self.status 11:55:59 other = self.other 11:55:59 cause = "unknown" 11:55:59 status = None 11:55:59 redirect_location = None 11:55:59 11:55:59 if error and self._is_connection_error(error): 11:55:59 # Connect retry? 11:55:59 if connect is False: 11:55:59 raise reraise(type(error), error, _stacktrace) 11:55:59 elif connect is not None: 11:55:59 connect -= 1 11:55:59 11:55:59 elif error and self._is_read_error(error): 11:55:59 # Read retry? 11:55:59 if read is False or method is None or not self._is_method_retryable(method): 11:55:59 raise reraise(type(error), error, _stacktrace) 11:55:59 elif read is not None: 11:55:59 read -= 1 11:55:59 11:55:59 elif error: 11:55:59 # Other retry? 11:55:59 if other is not None: 11:55:59 other -= 1 11:55:59 11:55:59 elif response and response.get_redirect_location(): 11:55:59 # Redirect retry? 
11:55:59 if redirect is not None: 11:55:59 redirect -= 1 11:55:59 cause = "too many redirects" 11:55:59 response_redirect_location = response.get_redirect_location() 11:55:59 if response_redirect_location: 11:55:59 redirect_location = response_redirect_location 11:55:59 status = response.status 11:55:59 11:55:59 else: 11:55:59 # Incrementing because of a server error like a 500 in 11:55:59 # status_forcelist and the given method is in the allowed_methods 11:55:59 cause = ResponseError.GENERIC_ERROR 11:55:59 if response and response.status: 11:55:59 if status_count is not None: 11:55:59 status_count -= 1 11:55:59 cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status) 11:55:59 status = response.status 11:55:59 11:55:59 history = self.history + ( 11:55:59 RequestHistory(method, url, error, status, redirect_location), 11:55:59 ) 11:55:59 11:55:59 new_retry = self.new( 11:55:59 total=total, 11:55:59 connect=connect, 11:55:59 read=read, 11:55:59 redirect=redirect, 11:55:59 status=status_count, 11:55:59 other=other, 11:55:59 history=history, 11:55:59 ) 11:55:59 11:55:59 if new_retry.is_exhausted(): 11:55:59 reason = error or ResponseError(cause) 11:55:59 > raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type] 11:55:59 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 11:55:59 E urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=8183): Max retries exceeded with url: /rests/operations/transportpce-networkutils:init-rdm-xpdr-links (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 11:55:59 11:55:59 ../.tox/tests_tapi/lib/python3.11/site-packages/urllib3/util/retry.py:519: MaxRetryError 11:55:59 11:55:59 During handling of the above exception, another exception occurred: 11:55:59 11:55:59 self = 11:55:59 11:55:59 def test_25_connect_roadma_pp2_to_spdra_n1(self): 11:55:59 > response = test_utils.transportpce_api_rpc_request( 11:55:59 'transportpce-networkutils', 'init-rdm-xpdr-links', 11:55:59 {'links-input': {'xpdr-node': 'SPDR-SA1', 'xpdr-num': '1', 'network-num': '1', 11:55:59 'rdm-node': 'ROADM-A1', 'srg-num': '1', 'termination-point-num': 'SRG1-PP2-TXRX'}}) 11:55:59 11:55:59 transportpce_tests/tapi/test01_abstracted_topology.py:397: 11:55:59 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 11:55:59 transportpce_tests/common/test_utils.py:729: in transportpce_api_rpc_request 11:55:59 response = post_request(url, data) 11:55:59 ^^^^^^^^^^^^^^^^^^^^^^^ 11:55:59 transportpce_tests/common/test_utils.py:142: in post_request 11:55:59 return requests.request( 11:55:59 ../.tox/tests_tapi/lib/python3.11/site-packages/requests/api.py:59: in request 11:55:59 return session.request(method=method, url=url, **kwargs) 11:55:59 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 11:55:59 ../.tox/tests_tapi/lib/python3.11/site-packages/requests/sessions.py:589: in request 11:55:59 resp = self.send(prep, **send_kwargs) 11:55:59 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 11:55:59 ../.tox/tests_tapi/lib/python3.11/site-packages/requests/sessions.py:703: in send 11:55:59 r = adapter.send(request, **kwargs) 11:55:59 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 11:55:59 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 11:55:59 11:55:59 self = 11:55:59 request = , stream = False 11:55:59 timeout = Timeout(connect=30, read=30, total=None), verify = True, cert = None 11:55:59 proxies = OrderedDict() 11:55:59 11:55:59 def send( 11:55:59 self, request, stream=False, 
timeout=None, verify=True, cert=None, proxies=None 11:55:59 ): 11:55:59 """Sends PreparedRequest object. Returns Response object. 11:55:59 11:55:59 :param request: The :class:`PreparedRequest ` being sent. 11:55:59 :param stream: (optional) Whether to stream the request content. 11:55:59 :param timeout: (optional) How long to wait for the server to send 11:55:59 data before giving up, as a float, or a :ref:`(connect timeout, 11:55:59 read timeout) ` tuple. 11:55:59 :type timeout: float or tuple or urllib3 Timeout object 11:55:59 :param verify: (optional) Either a boolean, in which case it controls whether 11:55:59 we verify the server's TLS certificate, or a string, in which case it 11:55:59 must be a path to a CA bundle to use 11:55:59 :param cert: (optional) Any user-provided SSL certificate to be trusted. 11:55:59 :param proxies: (optional) The proxies dictionary to apply to the request. 11:55:59 :rtype: requests.Response 11:55:59 """ 11:55:59 11:55:59 try: 11:55:59 conn = self.get_connection_with_tls_context( 11:55:59 request, verify, proxies=proxies, cert=cert 11:55:59 ) 11:55:59 except LocationValueError as e: 11:55:59 raise InvalidURL(e, request=request) 11:55:59 11:55:59 self.cert_verify(conn, request.url, verify, cert) 11:55:59 url = self.request_url(request, proxies) 11:55:59 self.add_headers( 11:55:59 request, 11:55:59 stream=stream, 11:55:59 timeout=timeout, 11:55:59 verify=verify, 11:55:59 cert=cert, 11:55:59 proxies=proxies, 11:55:59 ) 11:55:59 11:55:59 chunked = not (request.body is None or "Content-Length" in request.headers) 11:55:59 11:55:59 if isinstance(timeout, tuple): 11:55:59 try: 11:55:59 connect, read = timeout 11:55:59 timeout = TimeoutSauce(connect=connect, read=read) 11:55:59 except ValueError: 11:55:59 raise ValueError( 11:55:59 f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 11:55:59 f"or a single float to set both timeouts to the same value." 11:55:59 ) 11:55:59 elif isinstance(timeout, TimeoutSauce): 11:55:59 pass 11:55:59 else: 11:55:59 timeout = TimeoutSauce(connect=timeout, read=timeout) 11:55:59 11:55:59 try: 11:55:59 resp = conn.urlopen( 11:55:59 method=request.method, 11:55:59 url=url, 11:55:59 body=request.body, 11:55:59 headers=request.headers, 11:55:59 redirect=False, 11:55:59 assert_same_host=False, 11:55:59 preload_content=False, 11:55:59 decode_content=False, 11:55:59 retries=self.max_retries, 11:55:59 timeout=timeout, 11:55:59 chunked=chunked, 11:55:59 ) 11:55:59 11:55:59 except (ProtocolError, OSError) as err: 11:55:59 raise ConnectionError(err, request=request) 11:55:59 11:55:59 except MaxRetryError as e: 11:55:59 if isinstance(e.reason, ConnectTimeoutError): 11:55:59 # TODO: Remove this in 3.0.0: see #2811 11:55:59 if not isinstance(e.reason, NewConnectionError): 11:55:59 raise ConnectTimeout(e, request=request) 11:55:59 11:55:59 if isinstance(e.reason, ResponseError): 11:55:59 raise RetryError(e, request=request) 11:55:59 11:55:59 if isinstance(e.reason, _ProxyError): 11:55:59 raise ProxyError(e, request=request) 11:55:59 11:55:59 if isinstance(e.reason, _SSLError): 11:55:59 # This branch is for urllib3 v1.22 and later. 
11:55:59 raise SSLError(e, request=request) 11:55:59 11:55:59 > raise ConnectionError(e, request=request) 11:55:59 E requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=8183): Max retries exceeded with url: /rests/operations/transportpce-networkutils:init-rdm-xpdr-links (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 11:55:59 11:55:59 ../.tox/tests_tapi/lib/python3.11/site-packages/requests/adapters.py:700: ConnectionError 11:55:59 ----------------------------- Captured stdout call ----------------------------- 11:55:59 execution of test_25_connect_roadma_pp2_to_spdra_n1 11:55:59 _________ TransportTapitesting.test_26_connect_sprdc_n1_to_roadmc_pp2 __________ 11:55:59 11:55:59 self = 11:55:59 11:55:59 def _new_conn(self) -> socket.socket: 11:55:59 """Establish a socket connection and set nodelay settings on it. 11:55:59 11:55:59 :return: New socket connection. 11:55:59 """ 11:55:59 try: 11:55:59 > sock = connection.create_connection( 11:55:59 (self._dns_host, self.port), 11:55:59 self.timeout, 11:55:59 source_address=self.source_address, 11:55:59 socket_options=self.socket_options, 11:55:59 ) 11:55:59 11:55:59 ../.tox/tests_tapi/lib/python3.11/site-packages/urllib3/connection.py:198: 11:55:59 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 11:55:59 ../.tox/tests_tapi/lib/python3.11/site-packages/urllib3/util/connection.py:85: in create_connection 11:55:59 raise err 11:55:59 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 11:55:59 11:55:59 address = ('localhost', 8183), timeout = 30, source_address = None 11:55:59 socket_options = [(6, 1, 1)] 11:55:59 11:55:59 def create_connection( 11:55:59 address: tuple[str, int], 11:55:59 timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 11:55:59 source_address: tuple[str, int] | None = None, 11:55:59 socket_options: _TYPE_SOCKET_OPTIONS | None = None, 11:55:59 ) -> socket.socket: 11:55:59 """Connect to *address* and return the socket object. 11:55:59 11:55:59 Convenience function. Connect to *address* (a 2-tuple ``(host, 11:55:59 port)``) and return the socket object. Passing the optional 11:55:59 *timeout* parameter will set the timeout on the socket instance 11:55:59 before attempting to connect. If no *timeout* is supplied, the 11:55:59 global default timeout setting returned by :func:`socket.getdefaulttimeout` 11:55:59 is used. If *source_address* is set it must be a tuple of (host, port) 11:55:59 for the socket to bind as a source address before making the connection. 11:55:59 An host of '' or port 0 tells the OS to use the default. 11:55:59 """ 11:55:59 11:55:59 host, port = address 11:55:59 if host.startswith("["): 11:55:59 host = host.strip("[]") 11:55:59 err = None 11:55:59 11:55:59 # Using the value from allowed_gai_family() in the context of getaddrinfo lets 11:55:59 # us select whether to work with IPv4 DNS records, IPv6 records, or both. 11:55:59 # The original create_connection function always returns all records. 
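Every test in this block fails the same way: create_connection() gets [Errno 111] Connection refused because nothing is listening on localhost:8183. A standalone probe along these lines (a hypothetical helper, not part of the suite) reproduces the condition without going through requests:

import socket

# Hypothetical pre-flight check: succeeds only if something accepts TCP
# connections on the RESTCONF port the tests target (localhost:8183).
def restconf_port_open(host="localhost", port=8183, timeout=5.0):
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except ConnectionRefusedError:      # [Errno 111], no listener, as in this run
        return False
    except OSError:                     # timeouts, unreachable host, etc.
        return False

print(restconf_port_open())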
11:55:59 family = allowed_gai_family() 11:55:59 11:55:59 try: 11:55:59 host.encode("idna") 11:55:59 except UnicodeError: 11:55:59 raise LocationParseError(f"'{host}', label empty or too long") from None 11:55:59 11:55:59 for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM): 11:55:59 af, socktype, proto, canonname, sa = res 11:55:59 sock = None 11:55:59 try: 11:55:59 sock = socket.socket(af, socktype, proto) 11:55:59 11:55:59 # If provided, set socket level options before connecting. 11:55:59 _set_socket_options(sock, socket_options) 11:55:59 11:55:59 if timeout is not _DEFAULT_TIMEOUT: 11:55:59 sock.settimeout(timeout) 11:55:59 if source_address: 11:55:59 sock.bind(source_address) 11:55:59 > sock.connect(sa) 11:55:59 E ConnectionRefusedError: [Errno 111] Connection refused 11:55:59 11:55:59 ../.tox/tests_tapi/lib/python3.11/site-packages/urllib3/util/connection.py:73: ConnectionRefusedError 11:55:59 11:55:59 The above exception was the direct cause of the following exception: 11:55:59 11:55:59 self = 11:55:59 method = 'POST' 11:55:59 url = '/rests/operations/transportpce-networkutils:init-xpdr-rdm-links' 11:55:59 body = '{"input": {"links-input": {"xpdr-node": "SPDR-SC1", "xpdr-num": "1", "network-num": "1", "rdm-node": "ROADM-C1", "srg-num": "1", "termination-point-num": "SRG1-PP2-TXRX"}}}' 11:55:59 headers = {'User-Agent': 'python-requests/2.32.4', 'Accept-Encoding': 'gzip, deflate', 'Accept': 'application/json', 'Connection': 'keep-alive', 'Content-Type': 'application/json', 'Content-Length': '172', 'Authorization': 'Basic YWRtaW46YWRtaW4='} 11:55:59 retries = Retry(total=0, connect=None, read=False, redirect=None, status=None) 11:55:59 redirect = False, assert_same_host = False 11:55:59 timeout = Timeout(connect=30, read=30, total=None), pool_timeout = None 11:55:59 release_conn = False, chunked = False, body_pos = None, preload_content = False 11:55:59 decode_content = False, response_kw = {} 11:55:59 parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/rests/operations/transportpce-networkutils:init-xpdr-rdm-links', query=None, fragment=None) 11:55:59 destination_scheme = None, conn = None, release_this_conn = True 11:55:59 http_tunnel_required = False, err = None, clean_exit = False 11:55:59 11:55:59 def urlopen( # type: ignore[override] 11:55:59 self, 11:55:59 method: str, 11:55:59 url: str, 11:55:59 body: _TYPE_BODY | None = None, 11:55:59 headers: typing.Mapping[str, str] | None = None, 11:55:59 retries: Retry | bool | int | None = None, 11:55:59 redirect: bool = True, 11:55:59 assert_same_host: bool = True, 11:55:59 timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 11:55:59 pool_timeout: int | None = None, 11:55:59 release_conn: bool | None = None, 11:55:59 chunked: bool = False, 11:55:59 body_pos: _TYPE_BODY_POSITION | None = None, 11:55:59 preload_content: bool = True, 11:55:59 decode_content: bool = True, 11:55:59 **response_kw: typing.Any, 11:55:59 ) -> BaseHTTPResponse: 11:55:59 """ 11:55:59 Get a connection from the pool and perform an HTTP request. This is the 11:55:59 lowest level call for making a request, so you'll need to specify all 11:55:59 the raw details. 11:55:59 11:55:59 .. note:: 11:55:59 11:55:59 More commonly, it's appropriate to use a convenience method 11:55:59 such as :meth:`request`. 11:55:59 11:55:59 .. 
note:: 11:55:59 11:55:59 `release_conn` will only behave as expected if 11:55:59 `preload_content=False` because we want to make 11:55:59 `preload_content=False` the default behaviour someday soon without 11:55:59 breaking backwards compatibility. 11:55:59 11:55:59 :param method: 11:55:59 HTTP request method (such as GET, POST, PUT, etc.) 11:55:59 11:55:59 :param url: 11:55:59 The URL to perform the request on. 11:55:59 11:55:59 :param body: 11:55:59 Data to send in the request body, either :class:`str`, :class:`bytes`, 11:55:59 an iterable of :class:`str`/:class:`bytes`, or a file-like object. 11:55:59 11:55:59 :param headers: 11:55:59 Dictionary of custom headers to send, such as User-Agent, 11:55:59 If-None-Match, etc. If None, pool headers are used. If provided, 11:55:59 these headers completely replace any pool-specific headers. 11:55:59 11:55:59 :param retries: 11:55:59 Configure the number of retries to allow before raising a 11:55:59 :class:`~urllib3.exceptions.MaxRetryError` exception. 11:55:59 11:55:59 If ``None`` (default) will retry 3 times, see ``Retry.DEFAULT``. Pass a 11:55:59 :class:`~urllib3.util.retry.Retry` object for fine-grained control 11:55:59 over different types of retries. 11:55:59 Pass an integer number to retry connection errors that many times, 11:55:59 but no other types of errors. Pass zero to never retry. 11:55:59 11:55:59 If ``False``, then retries are disabled and any exception is raised 11:55:59 immediately. Also, instead of raising a MaxRetryError on redirects, 11:55:59 the redirect response will be returned. 11:55:59 11:55:59 :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. 11:55:59 11:55:59 :param redirect: 11:55:59 If True, automatically handle redirects (status codes 301, 302, 11:55:59 303, 307, 308). Each redirect counts as a retry. Disabling retries 11:55:59 will disable redirect, too. 11:55:59 11:55:59 :param assert_same_host: 11:55:59 If ``True``, will make sure that the host of the pool requests is 11:55:59 consistent else will raise HostChangedError. When ``False``, you can 11:55:59 use the pool on an HTTP proxy and request foreign hosts. 11:55:59 11:55:59 :param timeout: 11:55:59 If specified, overrides the default timeout for this one 11:55:59 request. It may be a float (in seconds) or an instance of 11:55:59 :class:`urllib3.util.Timeout`. 11:55:59 11:55:59 :param pool_timeout: 11:55:59 If set and the pool is set to block=True, then this method will 11:55:59 block for ``pool_timeout`` seconds and raise EmptyPoolError if no 11:55:59 connection is available within the time period. 11:55:59 11:55:59 :param bool preload_content: 11:55:59 If True, the response's body will be preloaded into memory. 11:55:59 11:55:59 :param bool decode_content: 11:55:59 If True, will attempt to decode the body based on the 11:55:59 'content-encoding' header. 11:55:59 11:55:59 :param release_conn: 11:55:59 If False, then the urlopen call will not release the connection 11:55:59 back into the pool once a response is received (but will release if 11:55:59 you read the entire contents of the response such as when 11:55:59 `preload_content=True`). This is useful if you're not preloading 11:55:59 the response's content immediately. You will need to call 11:55:59 ``r.release_conn()`` on the response ``r`` to return the connection 11:55:59 back into the pool. If None, it takes the value of ``preload_content`` 11:55:59 which defaults to ``True``. 
11:55:59 11:55:59 :param bool chunked: 11:55:59 If True, urllib3 will send the body using chunked transfer 11:55:59 encoding. Otherwise, urllib3 will send the body using the standard 11:55:59 content-length form. Defaults to False. 11:55:59 11:55:59 :param int body_pos: 11:55:59 Position to seek to in file-like body in the event of a retry or 11:55:59 redirect. Typically this won't need to be set because urllib3 will 11:55:59 auto-populate the value when needed. 11:55:59 """ 11:55:59 parsed_url = parse_url(url) 11:55:59 destination_scheme = parsed_url.scheme 11:55:59 11:55:59 if headers is None: 11:55:59 headers = self.headers 11:55:59 11:55:59 if not isinstance(retries, Retry): 11:55:59 retries = Retry.from_int(retries, redirect=redirect, default=self.retries) 11:55:59 11:55:59 if release_conn is None: 11:55:59 release_conn = preload_content 11:55:59 11:55:59 # Check host 11:55:59 if assert_same_host and not self.is_same_host(url): 11:55:59 raise HostChangedError(self, url, retries) 11:55:59 11:55:59 # Ensure that the URL we're connecting to is properly encoded 11:55:59 if url.startswith("/"): 11:55:59 url = to_str(_encode_target(url)) 11:55:59 else: 11:55:59 url = to_str(parsed_url.url) 11:55:59 11:55:59 conn = None 11:55:59 11:55:59 # Track whether `conn` needs to be released before 11:55:59 # returning/raising/recursing. Update this variable if necessary, and 11:55:59 # leave `release_conn` constant throughout the function. That way, if 11:55:59 # the function recurses, the original value of `release_conn` will be 11:55:59 # passed down into the recursive call, and its value will be respected. 11:55:59 # 11:55:59 # See issue #651 [1] for details. 11:55:59 # 11:55:59 # [1] 11:55:59 release_this_conn = release_conn 11:55:59 11:55:59 http_tunnel_required = connection_requires_http_tunnel( 11:55:59 self.proxy, self.proxy_config, destination_scheme 11:55:59 ) 11:55:59 11:55:59 # Merge the proxy headers. Only done when not using HTTP CONNECT. We 11:55:59 # have to copy the headers dict so we can safely change it without those 11:55:59 # changes being reflected in anyone else's copy. 11:55:59 if not http_tunnel_required: 11:55:59 headers = headers.copy() # type: ignore[attr-defined] 11:55:59 headers.update(self.proxy_headers) # type: ignore[union-attr] 11:55:59 11:55:59 # Must keep the exception bound to a separate variable or else Python 3 11:55:59 # complains about UnboundLocalError. 11:55:59 err = None 11:55:59 11:55:59 # Keep track of whether we cleanly exited the except block. This 11:55:59 # ensures we do proper cleanup in finally. 11:55:59 clean_exit = False 11:55:59 11:55:59 # Rewind body position, if needed. Record current position 11:55:59 # for future rewinds in the event of a redirect/retry. 11:55:59 body_pos = set_file_position(body, body_pos) 11:55:59 11:55:59 try: 11:55:59 # Request a connection from the queue. 11:55:59 timeout_obj = self._get_timeout(timeout) 11:55:59 conn = self._get_conn(timeout=pool_timeout) 11:55:59 11:55:59 conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] 11:55:59 11:55:59 # Is this a closed/new connection that requires CONNECT tunnelling? 
11:55:59 if self.proxy is not None and http_tunnel_required and conn.is_closed: 11:55:59 try: 11:55:59 self._prepare_proxy(conn) 11:55:59 except (BaseSSLError, OSError, SocketTimeout) as e: 11:55:59 self._raise_timeout( 11:55:59 err=e, url=self.proxy.url, timeout_value=conn.timeout 11:55:59 ) 11:55:59 raise 11:55:59 11:55:59 # If we're going to release the connection in ``finally:``, then 11:55:59 # the response doesn't need to know about the connection. Otherwise 11:55:59 # it will also try to release it and we'll have a double-release 11:55:59 # mess. 11:55:59 response_conn = conn if not release_conn else None 11:55:59 11:55:59 # Make the request on the HTTPConnection object 11:55:59 > response = self._make_request( 11:55:59 conn, 11:55:59 method, 11:55:59 url, 11:55:59 timeout=timeout_obj, 11:55:59 body=body, 11:55:59 headers=headers, 11:55:59 chunked=chunked, 11:55:59 retries=retries, 11:55:59 response_conn=response_conn, 11:55:59 preload_content=preload_content, 11:55:59 decode_content=decode_content, 11:55:59 **response_kw, 11:55:59 ) 11:55:59 11:55:59 ../.tox/tests_tapi/lib/python3.11/site-packages/urllib3/connectionpool.py:787: 11:55:59 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 11:55:59 ../.tox/tests_tapi/lib/python3.11/site-packages/urllib3/connectionpool.py:493: in _make_request 11:55:59 conn.request( 11:55:59 ../.tox/tests_tapi/lib/python3.11/site-packages/urllib3/connection.py:494: in request 11:55:59 self.endheaders() 11:55:59 /opt/pyenv/versions/3.11.10/lib/python3.11/http/client.py:1298: in endheaders 11:55:59 self._send_output(message_body, encode_chunked=encode_chunked) 11:55:59 /opt/pyenv/versions/3.11.10/lib/python3.11/http/client.py:1058: in _send_output 11:55:59 self.send(msg) 11:55:59 /opt/pyenv/versions/3.11.10/lib/python3.11/http/client.py:996: in send 11:55:59 self.connect() 11:55:59 ../.tox/tests_tapi/lib/python3.11/site-packages/urllib3/connection.py:325: in connect 11:55:59 self.sock = self._new_conn() 11:55:59 ^^^^^^^^^^^^^^^^ 11:55:59 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 11:55:59 11:55:59 self = 11:55:59 11:55:59 def _new_conn(self) -> socket.socket: 11:55:59 """Establish a socket connection and set nodelay settings on it. 11:55:59 11:55:59 :return: New socket connection. 11:55:59 """ 11:55:59 try: 11:55:59 sock = connection.create_connection( 11:55:59 (self._dns_host, self.port), 11:55:59 self.timeout, 11:55:59 source_address=self.source_address, 11:55:59 socket_options=self.socket_options, 11:55:59 ) 11:55:59 except socket.gaierror as e: 11:55:59 raise NameResolutionError(self.host, self, e) from e 11:55:59 except SocketTimeout as e: 11:55:59 raise ConnectTimeoutError( 11:55:59 self, 11:55:59 f"Connection to {self.host} timed out. 
(connect timeout={self.timeout})", 11:55:59 ) from e 11:55:59 11:55:59 except OSError as e: 11:55:59 > raise NewConnectionError( 11:55:59 self, f"Failed to establish a new connection: {e}" 11:55:59 ) from e 11:55:59 E urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused 11:55:59 11:55:59 ../.tox/tests_tapi/lib/python3.11/site-packages/urllib3/connection.py:213: NewConnectionError 11:55:59 11:55:59 The above exception was the direct cause of the following exception: 11:55:59 11:55:59 self = 11:55:59 request = , stream = False 11:55:59 timeout = Timeout(connect=30, read=30, total=None), verify = True, cert = None 11:55:59 proxies = OrderedDict() 11:55:59 11:55:59 def send( 11:55:59 self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 11:55:59 ): 11:55:59 """Sends PreparedRequest object. Returns Response object. 11:55:59 11:55:59 :param request: The :class:`PreparedRequest ` being sent. 11:55:59 :param stream: (optional) Whether to stream the request content. 11:55:59 :param timeout: (optional) How long to wait for the server to send 11:55:59 data before giving up, as a float, or a :ref:`(connect timeout, 11:55:59 read timeout) ` tuple. 11:55:59 :type timeout: float or tuple or urllib3 Timeout object 11:55:59 :param verify: (optional) Either a boolean, in which case it controls whether 11:55:59 we verify the server's TLS certificate, or a string, in which case it 11:55:59 must be a path to a CA bundle to use 11:55:59 :param cert: (optional) Any user-provided SSL certificate to be trusted. 11:55:59 :param proxies: (optional) The proxies dictionary to apply to the request. 11:55:59 :rtype: requests.Response 11:55:59 """ 11:55:59 11:55:59 try: 11:55:59 conn = self.get_connection_with_tls_context( 11:55:59 request, verify, proxies=proxies, cert=cert 11:55:59 ) 11:55:59 except LocationValueError as e: 11:55:59 raise InvalidURL(e, request=request) 11:55:59 11:55:59 self.cert_verify(conn, request.url, verify, cert) 11:55:59 url = self.request_url(request, proxies) 11:55:59 self.add_headers( 11:55:59 request, 11:55:59 stream=stream, 11:55:59 timeout=timeout, 11:55:59 verify=verify, 11:55:59 cert=cert, 11:55:59 proxies=proxies, 11:55:59 ) 11:55:59 11:55:59 chunked = not (request.body is None or "Content-Length" in request.headers) 11:55:59 11:55:59 if isinstance(timeout, tuple): 11:55:59 try: 11:55:59 connect, read = timeout 11:55:59 timeout = TimeoutSauce(connect=connect, read=read) 11:55:59 except ValueError: 11:55:59 raise ValueError( 11:55:59 f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 11:55:59 f"or a single float to set both timeouts to the same value." 
11:55:59 ) 11:55:59 elif isinstance(timeout, TimeoutSauce): 11:55:59 pass 11:55:59 else: 11:55:59 timeout = TimeoutSauce(connect=timeout, read=timeout) 11:55:59 11:55:59 try: 11:55:59 > resp = conn.urlopen( 11:55:59 method=request.method, 11:55:59 url=url, 11:55:59 body=request.body, 11:55:59 headers=request.headers, 11:55:59 redirect=False, 11:55:59 assert_same_host=False, 11:55:59 preload_content=False, 11:55:59 decode_content=False, 11:55:59 retries=self.max_retries, 11:55:59 timeout=timeout, 11:55:59 chunked=chunked, 11:55:59 ) 11:55:59 11:55:59 ../.tox/tests_tapi/lib/python3.11/site-packages/requests/adapters.py:667: 11:55:59 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 11:55:59 ../.tox/tests_tapi/lib/python3.11/site-packages/urllib3/connectionpool.py:841: in urlopen 11:55:59 retries = retries.increment( 11:55:59 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 11:55:59 11:55:59 self = Retry(total=0, connect=None, read=False, redirect=None, status=None) 11:55:59 method = 'POST' 11:55:59 url = '/rests/operations/transportpce-networkutils:init-xpdr-rdm-links' 11:55:59 response = None 11:55:59 error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused') 11:55:59 _pool = 11:55:59 _stacktrace = 11:55:59 11:55:59 def increment( 11:55:59 self, 11:55:59 method: str | None = None, 11:55:59 url: str | None = None, 11:55:59 response: BaseHTTPResponse | None = None, 11:55:59 error: Exception | None = None, 11:55:59 _pool: ConnectionPool | None = None, 11:55:59 _stacktrace: TracebackType | None = None, 11:55:59 ) -> Self: 11:55:59 """Return a new Retry object with incremented retry counters. 11:55:59 11:55:59 :param response: A response object, or None, if the server did not 11:55:59 return a response. 11:55:59 :type response: :class:`~urllib3.response.BaseHTTPResponse` 11:55:59 :param Exception error: An error encountered during the request, or 11:55:59 None if the response was received successfully. 11:55:59 11:55:59 :return: A new ``Retry`` object. 11:55:59 """ 11:55:59 if self.total is False and error: 11:55:59 # Disabled, indicate to re-raise the error. 11:55:59 raise reraise(type(error), error, _stacktrace) 11:55:59 11:55:59 total = self.total 11:55:59 if total is not None: 11:55:59 total -= 1 11:55:59 11:55:59 connect = self.connect 11:55:59 read = self.read 11:55:59 redirect = self.redirect 11:55:59 status_count = self.status 11:55:59 other = self.other 11:55:59 cause = "unknown" 11:55:59 status = None 11:55:59 redirect_location = None 11:55:59 11:55:59 if error and self._is_connection_error(error): 11:55:59 # Connect retry? 11:55:59 if connect is False: 11:55:59 raise reraise(type(error), error, _stacktrace) 11:55:59 elif connect is not None: 11:55:59 connect -= 1 11:55:59 11:55:59 elif error and self._is_read_error(error): 11:55:59 # Read retry? 11:55:59 if read is False or method is None or not self._is_method_retryable(method): 11:55:59 raise reraise(type(error), error, _stacktrace) 11:55:59 elif read is not None: 11:55:59 read -= 1 11:55:59 11:55:59 elif error: 11:55:59 # Other retry? 11:55:59 if other is not None: 11:55:59 other -= 1 11:55:59 11:55:59 elif response and response.get_redirect_location(): 11:55:59 # Redirect retry? 
11:55:59 if redirect is not None: 11:55:59 redirect -= 1 11:55:59 cause = "too many redirects" 11:55:59 response_redirect_location = response.get_redirect_location() 11:55:59 if response_redirect_location: 11:55:59 redirect_location = response_redirect_location 11:55:59 status = response.status 11:55:59 11:55:59 else: 11:55:59 # Incrementing because of a server error like a 500 in 11:55:59 # status_forcelist and the given method is in the allowed_methods 11:55:59 cause = ResponseError.GENERIC_ERROR 11:55:59 if response and response.status: 11:55:59 if status_count is not None: 11:55:59 status_count -= 1 11:55:59 cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status) 11:55:59 status = response.status 11:55:59 11:55:59 history = self.history + ( 11:55:59 RequestHistory(method, url, error, status, redirect_location), 11:55:59 ) 11:55:59 11:55:59 new_retry = self.new( 11:55:59 total=total, 11:55:59 connect=connect, 11:55:59 read=read, 11:55:59 redirect=redirect, 11:55:59 status=status_count, 11:55:59 other=other, 11:55:59 history=history, 11:55:59 ) 11:55:59 11:55:59 if new_retry.is_exhausted(): 11:55:59 reason = error or ResponseError(cause) 11:55:59 > raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type] 11:55:59 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 11:55:59 E urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=8183): Max retries exceeded with url: /rests/operations/transportpce-networkutils:init-xpdr-rdm-links (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 11:55:59 11:55:59 ../.tox/tests_tapi/lib/python3.11/site-packages/urllib3/util/retry.py:519: MaxRetryError 11:55:59 11:55:59 During handling of the above exception, another exception occurred: 11:55:59 11:55:59 self = 11:55:59 11:55:59 def test_26_connect_sprdc_n1_to_roadmc_pp2(self): 11:55:59 > response = test_utils.transportpce_api_rpc_request( 11:55:59 'transportpce-networkutils', 'init-xpdr-rdm-links', 11:55:59 {'links-input': {'xpdr-node': 'SPDR-SC1', 'xpdr-num': '1', 'network-num': '1', 11:55:59 'rdm-node': 'ROADM-C1', 'srg-num': '1', 'termination-point-num': 'SRG1-PP2-TXRX'}}) 11:55:59 11:55:59 transportpce_tests/tapi/test01_abstracted_topology.py:407: 11:55:59 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 11:55:59 transportpce_tests/common/test_utils.py:729: in transportpce_api_rpc_request 11:55:59 response = post_request(url, data) 11:55:59 ^^^^^^^^^^^^^^^^^^^^^^^ 11:55:59 transportpce_tests/common/test_utils.py:142: in post_request 11:55:59 return requests.request( 11:55:59 ../.tox/tests_tapi/lib/python3.11/site-packages/requests/api.py:59: in request 11:55:59 return session.request(method=method, url=url, **kwargs) 11:55:59 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 11:55:59 ../.tox/tests_tapi/lib/python3.11/site-packages/requests/sessions.py:589: in request 11:55:59 resp = self.send(prep, **send_kwargs) 11:55:59 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 11:55:59 ../.tox/tests_tapi/lib/python3.11/site-packages/requests/sessions.py:703: in send 11:55:59 r = adapter.send(request, **kwargs) 11:55:59 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 11:55:59 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 11:55:59 11:55:59 self = 11:55:59 request = , stream = False 11:55:59 timeout = Timeout(connect=30, read=30, total=None), verify = True, cert = None 11:55:59 proxies = OrderedDict() 11:55:59 11:55:59 def send( 11:55:59 self, request, stream=False, 
timeout=None, verify=True, cert=None, proxies=None 11:55:59 ): 11:55:59 """Sends PreparedRequest object. Returns Response object. 11:55:59 11:55:59 :param request: The :class:`PreparedRequest ` being sent. 11:55:59 :param stream: (optional) Whether to stream the request content. 11:55:59 :param timeout: (optional) How long to wait for the server to send 11:55:59 data before giving up, as a float, or a :ref:`(connect timeout, 11:55:59 read timeout) ` tuple. 11:55:59 :type timeout: float or tuple or urllib3 Timeout object 11:55:59 :param verify: (optional) Either a boolean, in which case it controls whether 11:55:59 we verify the server's TLS certificate, or a string, in which case it 11:55:59 must be a path to a CA bundle to use 11:55:59 :param cert: (optional) Any user-provided SSL certificate to be trusted. 11:55:59 :param proxies: (optional) The proxies dictionary to apply to the request. 11:55:59 :rtype: requests.Response 11:55:59 """ 11:55:59 11:55:59 try: 11:55:59 conn = self.get_connection_with_tls_context( 11:55:59 request, verify, proxies=proxies, cert=cert 11:55:59 ) 11:55:59 except LocationValueError as e: 11:55:59 raise InvalidURL(e, request=request) 11:55:59 11:55:59 self.cert_verify(conn, request.url, verify, cert) 11:55:59 url = self.request_url(request, proxies) 11:55:59 self.add_headers( 11:55:59 request, 11:55:59 stream=stream, 11:55:59 timeout=timeout, 11:55:59 verify=verify, 11:55:59 cert=cert, 11:55:59 proxies=proxies, 11:55:59 ) 11:55:59 11:55:59 chunked = not (request.body is None or "Content-Length" in request.headers) 11:55:59 11:55:59 if isinstance(timeout, tuple): 11:55:59 try: 11:55:59 connect, read = timeout 11:55:59 timeout = TimeoutSauce(connect=connect, read=read) 11:55:59 except ValueError: 11:55:59 raise ValueError( 11:55:59 f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 11:55:59 f"or a single float to set both timeouts to the same value." 11:55:59 ) 11:55:59 elif isinstance(timeout, TimeoutSauce): 11:55:59 pass 11:55:59 else: 11:55:59 timeout = TimeoutSauce(connect=timeout, read=timeout) 11:55:59 11:55:59 try: 11:55:59 resp = conn.urlopen( 11:55:59 method=request.method, 11:55:59 url=url, 11:55:59 body=request.body, 11:55:59 headers=request.headers, 11:55:59 redirect=False, 11:55:59 assert_same_host=False, 11:55:59 preload_content=False, 11:55:59 decode_content=False, 11:55:59 retries=self.max_retries, 11:55:59 timeout=timeout, 11:55:59 chunked=chunked, 11:55:59 ) 11:55:59 11:55:59 except (ProtocolError, OSError) as err: 11:55:59 raise ConnectionError(err, request=request) 11:55:59 11:55:59 except MaxRetryError as e: 11:55:59 if isinstance(e.reason, ConnectTimeoutError): 11:55:59 # TODO: Remove this in 3.0.0: see #2811 11:55:59 if not isinstance(e.reason, NewConnectionError): 11:55:59 raise ConnectTimeout(e, request=request) 11:55:59 11:55:59 if isinstance(e.reason, ResponseError): 11:55:59 raise RetryError(e, request=request) 11:55:59 11:55:59 if isinstance(e.reason, _ProxyError): 11:55:59 raise ProxyError(e, request=request) 11:55:59 11:55:59 if isinstance(e.reason, _SSLError): 11:55:59 # This branch is for urllib3 v1.22 and later. 
11:55:59 raise SSLError(e, request=request) 11:55:59 11:55:59 > raise ConnectionError(e, request=request) 11:55:59 E requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=8183): Max retries exceeded with url: /rests/operations/transportpce-networkutils:init-xpdr-rdm-links (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 11:55:59 11:55:59 ../.tox/tests_tapi/lib/python3.11/site-packages/requests/adapters.py:700: ConnectionError 11:55:59 ----------------------------- Captured stdout call ----------------------------- 11:55:59 execution of test_26_connect_sprdc_n1_to_roadmc_pp2 11:55:59 _________ TransportTapitesting.test_27_connect_roadmc_pp2_to_spdrc_n1 __________ 11:55:59 11:55:59 self = 11:55:59 11:55:59 def _new_conn(self) -> socket.socket: 11:55:59 """Establish a socket connection and set nodelay settings on it. 11:55:59 11:55:59 :return: New socket connection. 11:55:59 """ 11:55:59 try: 11:55:59 > sock = connection.create_connection( 11:55:59 (self._dns_host, self.port), 11:55:59 self.timeout, 11:55:59 source_address=self.source_address, 11:55:59 socket_options=self.socket_options, 11:55:59 ) 11:55:59 11:55:59 ../.tox/tests_tapi/lib/python3.11/site-packages/urllib3/connection.py:198: 11:55:59 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 11:55:59 ../.tox/tests_tapi/lib/python3.11/site-packages/urllib3/util/connection.py:85: in create_connection 11:55:59 raise err 11:55:59 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 11:55:59 11:55:59 address = ('localhost', 8183), timeout = 30, source_address = None 11:55:59 socket_options = [(6, 1, 1)] 11:55:59 11:55:59 def create_connection( 11:55:59 address: tuple[str, int], 11:55:59 timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 11:55:59 source_address: tuple[str, int] | None = None, 11:55:59 socket_options: _TYPE_SOCKET_OPTIONS | None = None, 11:55:59 ) -> socket.socket: 11:55:59 """Connect to *address* and return the socket object. 11:55:59 11:55:59 Convenience function. Connect to *address* (a 2-tuple ``(host, 11:55:59 port)``) and return the socket object. Passing the optional 11:55:59 *timeout* parameter will set the timeout on the socket instance 11:55:59 before attempting to connect. If no *timeout* is supplied, the 11:55:59 global default timeout setting returned by :func:`socket.getdefaulttimeout` 11:55:59 is used. If *source_address* is set it must be a tuple of (host, port) 11:55:59 for the socket to bind as a source address before making the connection. 11:55:59 An host of '' or port 0 tells the OS to use the default. 11:55:59 """ 11:55:59 11:55:59 host, port = address 11:55:59 if host.startswith("["): 11:55:59 host = host.strip("[]") 11:55:59 err = None 11:55:59 11:55:59 # Using the value from allowed_gai_family() in the context of getaddrinfo lets 11:55:59 # us select whether to work with IPv4 DNS records, IPv6 records, or both. 11:55:59 # The original create_connection function always returns all records. 
11:55:59 family = allowed_gai_family() 11:55:59 11:55:59 try: 11:55:59 host.encode("idna") 11:55:59 except UnicodeError: 11:55:59 raise LocationParseError(f"'{host}', label empty or too long") from None 11:55:59 11:55:59 for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM): 11:55:59 af, socktype, proto, canonname, sa = res 11:55:59 sock = None 11:55:59 try: 11:55:59 sock = socket.socket(af, socktype, proto) 11:55:59 11:55:59 # If provided, set socket level options before connecting. 11:55:59 _set_socket_options(sock, socket_options) 11:55:59 11:55:59 if timeout is not _DEFAULT_TIMEOUT: 11:55:59 sock.settimeout(timeout) 11:55:59 if source_address: 11:55:59 sock.bind(source_address) 11:55:59 > sock.connect(sa) 11:55:59 E ConnectionRefusedError: [Errno 111] Connection refused 11:55:59 11:55:59 ../.tox/tests_tapi/lib/python3.11/site-packages/urllib3/util/connection.py:73: ConnectionRefusedError 11:55:59 11:55:59 The above exception was the direct cause of the following exception: 11:55:59 11:55:59 self = 11:55:59 method = 'POST' 11:55:59 url = '/rests/operations/transportpce-networkutils:init-rdm-xpdr-links' 11:55:59 body = '{"input": {"links-input": {"xpdr-node": "SPDR-SC1", "xpdr-num": "1", "network-num": "1", "rdm-node": "ROADM-C1", "srg-num": "1", "termination-point-num": "SRG1-PP2-TXRX"}}}' 11:55:59 headers = {'User-Agent': 'python-requests/2.32.4', 'Accept-Encoding': 'gzip, deflate', 'Accept': 'application/json', 'Connection': 'keep-alive', 'Content-Type': 'application/json', 'Content-Length': '172', 'Authorization': 'Basic YWRtaW46YWRtaW4='} 11:55:59 retries = Retry(total=0, connect=None, read=False, redirect=None, status=None) 11:55:59 redirect = False, assert_same_host = False 11:55:59 timeout = Timeout(connect=30, read=30, total=None), pool_timeout = None 11:55:59 release_conn = False, chunked = False, body_pos = None, preload_content = False 11:55:59 decode_content = False, response_kw = {} 11:55:59 parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/rests/operations/transportpce-networkutils:init-rdm-xpdr-links', query=None, fragment=None) 11:55:59 destination_scheme = None, conn = None, release_this_conn = True 11:55:59 http_tunnel_required = False, err = None, clean_exit = False 11:55:59 11:55:59 def urlopen( # type: ignore[override] 11:55:59 self, 11:55:59 method: str, 11:55:59 url: str, 11:55:59 body: _TYPE_BODY | None = None, 11:55:59 headers: typing.Mapping[str, str] | None = None, 11:55:59 retries: Retry | bool | int | None = None, 11:55:59 redirect: bool = True, 11:55:59 assert_same_host: bool = True, 11:55:59 timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 11:55:59 pool_timeout: int | None = None, 11:55:59 release_conn: bool | None = None, 11:55:59 chunked: bool = False, 11:55:59 body_pos: _TYPE_BODY_POSITION | None = None, 11:55:59 preload_content: bool = True, 11:55:59 decode_content: bool = True, 11:55:59 **response_kw: typing.Any, 11:55:59 ) -> BaseHTTPResponse: 11:55:59 """ 11:55:59 Get a connection from the pool and perform an HTTP request. This is the 11:55:59 lowest level call for making a request, so you'll need to specify all 11:55:59 the raw details. 11:55:59 11:55:59 .. note:: 11:55:59 11:55:59 More commonly, it's appropriate to use a convenience method 11:55:59 such as :meth:`request`. 11:55:59 11:55:59 .. 
note:: 11:55:59 11:55:59 `release_conn` will only behave as expected if 11:55:59 `preload_content=False` because we want to make 11:55:59 `preload_content=False` the default behaviour someday soon without 11:55:59 breaking backwards compatibility. 11:55:59 11:55:59 :param method: 11:55:59 HTTP request method (such as GET, POST, PUT, etc.) 11:55:59 11:55:59 :param url: 11:55:59 The URL to perform the request on. 11:55:59 11:55:59 :param body: 11:55:59 Data to send in the request body, either :class:`str`, :class:`bytes`, 11:55:59 an iterable of :class:`str`/:class:`bytes`, or a file-like object. 11:55:59 11:55:59 :param headers: 11:55:59 Dictionary of custom headers to send, such as User-Agent, 11:55:59 If-None-Match, etc. If None, pool headers are used. If provided, 11:55:59 these headers completely replace any pool-specific headers. 11:55:59 11:55:59 :param retries: 11:55:59 Configure the number of retries to allow before raising a 11:55:59 :class:`~urllib3.exceptions.MaxRetryError` exception. 11:55:59 11:55:59 If ``None`` (default) will retry 3 times, see ``Retry.DEFAULT``. Pass a 11:55:59 :class:`~urllib3.util.retry.Retry` object for fine-grained control 11:55:59 over different types of retries. 11:55:59 Pass an integer number to retry connection errors that many times, 11:55:59 but no other types of errors. Pass zero to never retry. 11:55:59 11:55:59 If ``False``, then retries are disabled and any exception is raised 11:55:59 immediately. Also, instead of raising a MaxRetryError on redirects, 11:55:59 the redirect response will be returned. 11:55:59 11:55:59 :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. 11:55:59 11:55:59 :param redirect: 11:55:59 If True, automatically handle redirects (status codes 301, 302, 11:55:59 303, 307, 308). Each redirect counts as a retry. Disabling retries 11:55:59 will disable redirect, too. 11:55:59 11:55:59 :param assert_same_host: 11:55:59 If ``True``, will make sure that the host of the pool requests is 11:55:59 consistent else will raise HostChangedError. When ``False``, you can 11:55:59 use the pool on an HTTP proxy and request foreign hosts. 11:55:59 11:55:59 :param timeout: 11:55:59 If specified, overrides the default timeout for this one 11:55:59 request. It may be a float (in seconds) or an instance of 11:55:59 :class:`urllib3.util.Timeout`. 11:55:59 11:55:59 :param pool_timeout: 11:55:59 If set and the pool is set to block=True, then this method will 11:55:59 block for ``pool_timeout`` seconds and raise EmptyPoolError if no 11:55:59 connection is available within the time period. 11:55:59 11:55:59 :param bool preload_content: 11:55:59 If True, the response's body will be preloaded into memory. 11:55:59 11:55:59 :param bool decode_content: 11:55:59 If True, will attempt to decode the body based on the 11:55:59 'content-encoding' header. 11:55:59 11:55:59 :param release_conn: 11:55:59 If False, then the urlopen call will not release the connection 11:55:59 back into the pool once a response is received (but will release if 11:55:59 you read the entire contents of the response such as when 11:55:59 `preload_content=True`). This is useful if you're not preloading 11:55:59 the response's content immediately. You will need to call 11:55:59 ``r.release_conn()`` on the response ``r`` to return the connection 11:55:59 back into the pool. If None, it takes the value of ``preload_content`` 11:55:59 which defaults to ``True``. 
11:55:59 11:55:59 :param bool chunked: 11:55:59 If True, urllib3 will send the body using chunked transfer 11:55:59 encoding. Otherwise, urllib3 will send the body using the standard 11:55:59 content-length form. Defaults to False. 11:55:59 11:55:59 :param int body_pos: 11:55:59 Position to seek to in file-like body in the event of a retry or 11:55:59 redirect. Typically this won't need to be set because urllib3 will 11:55:59 auto-populate the value when needed. 11:55:59 """ 11:55:59 parsed_url = parse_url(url) 11:55:59 destination_scheme = parsed_url.scheme 11:55:59 11:55:59 if headers is None: 11:55:59 headers = self.headers 11:55:59 11:55:59 if not isinstance(retries, Retry): 11:55:59 retries = Retry.from_int(retries, redirect=redirect, default=self.retries) 11:55:59 11:55:59 if release_conn is None: 11:55:59 release_conn = preload_content 11:55:59 11:55:59 # Check host 11:55:59 if assert_same_host and not self.is_same_host(url): 11:55:59 raise HostChangedError(self, url, retries) 11:55:59 11:55:59 # Ensure that the URL we're connecting to is properly encoded 11:55:59 if url.startswith("/"): 11:55:59 url = to_str(_encode_target(url)) 11:55:59 else: 11:55:59 url = to_str(parsed_url.url) 11:55:59 11:55:59 conn = None 11:55:59 11:55:59 # Track whether `conn` needs to be released before 11:55:59 # returning/raising/recursing. Update this variable if necessary, and 11:55:59 # leave `release_conn` constant throughout the function. That way, if 11:55:59 # the function recurses, the original value of `release_conn` will be 11:55:59 # passed down into the recursive call, and its value will be respected. 11:55:59 # 11:55:59 # See issue #651 [1] for details. 11:55:59 # 11:55:59 # [1] 11:55:59 release_this_conn = release_conn 11:55:59 11:55:59 http_tunnel_required = connection_requires_http_tunnel( 11:55:59 self.proxy, self.proxy_config, destination_scheme 11:55:59 ) 11:55:59 11:55:59 # Merge the proxy headers. Only done when not using HTTP CONNECT. We 11:55:59 # have to copy the headers dict so we can safely change it without those 11:55:59 # changes being reflected in anyone else's copy. 11:55:59 if not http_tunnel_required: 11:55:59 headers = headers.copy() # type: ignore[attr-defined] 11:55:59 headers.update(self.proxy_headers) # type: ignore[union-attr] 11:55:59 11:55:59 # Must keep the exception bound to a separate variable or else Python 3 11:55:59 # complains about UnboundLocalError. 11:55:59 err = None 11:55:59 11:55:59 # Keep track of whether we cleanly exited the except block. This 11:55:59 # ensures we do proper cleanup in finally. 11:55:59 clean_exit = False 11:55:59 11:55:59 # Rewind body position, if needed. Record current position 11:55:59 # for future rewinds in the event of a redirect/retry. 11:55:59 body_pos = set_file_position(body, body_pos) 11:55:59 11:55:59 try: 11:55:59 # Request a connection from the queue. 11:55:59 timeout_obj = self._get_timeout(timeout) 11:55:59 conn = self._get_conn(timeout=pool_timeout) 11:55:59 11:55:59 conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] 11:55:59 11:55:59 # Is this a closed/new connection that requires CONNECT tunnelling? 
11:55:59 if self.proxy is not None and http_tunnel_required and conn.is_closed: 11:55:59 try: 11:55:59 self._prepare_proxy(conn) 11:55:59 except (BaseSSLError, OSError, SocketTimeout) as e: 11:55:59 self._raise_timeout( 11:55:59 err=e, url=self.proxy.url, timeout_value=conn.timeout 11:55:59 ) 11:55:59 raise 11:55:59 11:55:59 # If we're going to release the connection in ``finally:``, then 11:55:59 # the response doesn't need to know about the connection. Otherwise 11:55:59 # it will also try to release it and we'll have a double-release 11:55:59 # mess. 11:55:59 response_conn = conn if not release_conn else None 11:55:59 11:55:59 # Make the request on the HTTPConnection object 11:55:59 > response = self._make_request( 11:55:59 conn, 11:55:59 method, 11:55:59 url, 11:55:59 timeout=timeout_obj, 11:55:59 body=body, 11:55:59 headers=headers, 11:55:59 chunked=chunked, 11:55:59 retries=retries, 11:55:59 response_conn=response_conn, 11:55:59 preload_content=preload_content, 11:55:59 decode_content=decode_content, 11:55:59 **response_kw, 11:55:59 ) 11:55:59 11:55:59 ../.tox/tests_tapi/lib/python3.11/site-packages/urllib3/connectionpool.py:787: 11:55:59 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 11:55:59 ../.tox/tests_tapi/lib/python3.11/site-packages/urllib3/connectionpool.py:493: in _make_request 11:55:59 conn.request( 11:55:59 ../.tox/tests_tapi/lib/python3.11/site-packages/urllib3/connection.py:494: in request 11:55:59 self.endheaders() 11:55:59 /opt/pyenv/versions/3.11.10/lib/python3.11/http/client.py:1298: in endheaders 11:55:59 self._send_output(message_body, encode_chunked=encode_chunked) 11:55:59 /opt/pyenv/versions/3.11.10/lib/python3.11/http/client.py:1058: in _send_output 11:55:59 self.send(msg) 11:55:59 /opt/pyenv/versions/3.11.10/lib/python3.11/http/client.py:996: in send 11:55:59 self.connect() 11:55:59 ../.tox/tests_tapi/lib/python3.11/site-packages/urllib3/connection.py:325: in connect 11:55:59 self.sock = self._new_conn() 11:55:59 ^^^^^^^^^^^^^^^^ 11:55:59 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 11:55:59 11:55:59 self = 11:55:59 11:55:59 def _new_conn(self) -> socket.socket: 11:55:59 """Establish a socket connection and set nodelay settings on it. 11:55:59 11:55:59 :return: New socket connection. 11:55:59 """ 11:55:59 try: 11:55:59 sock = connection.create_connection( 11:55:59 (self._dns_host, self.port), 11:55:59 self.timeout, 11:55:59 source_address=self.source_address, 11:55:59 socket_options=self.socket_options, 11:55:59 ) 11:55:59 except socket.gaierror as e: 11:55:59 raise NameResolutionError(self.host, self, e) from e 11:55:59 except SocketTimeout as e: 11:55:59 raise ConnectTimeoutError( 11:55:59 self, 11:55:59 f"Connection to {self.host} timed out. 
(connect timeout={self.timeout})", 11:55:59 ) from e 11:55:59 11:55:59 except OSError as e: 11:55:59 > raise NewConnectionError( 11:55:59 self, f"Failed to establish a new connection: {e}" 11:55:59 ) from e 11:55:59 E urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused 11:55:59 11:55:59 ../.tox/tests_tapi/lib/python3.11/site-packages/urllib3/connection.py:213: NewConnectionError 11:55:59 11:55:59 The above exception was the direct cause of the following exception: 11:55:59 11:55:59 self = 11:55:59 request = , stream = False 11:55:59 timeout = Timeout(connect=30, read=30, total=None), verify = True, cert = None 11:55:59 proxies = OrderedDict() 11:55:59 11:55:59 def send( 11:55:59 self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 11:55:59 ): 11:55:59 """Sends PreparedRequest object. Returns Response object. 11:55:59 11:55:59 :param request: The :class:`PreparedRequest ` being sent. 11:55:59 :param stream: (optional) Whether to stream the request content. 11:55:59 :param timeout: (optional) How long to wait for the server to send 11:55:59 data before giving up, as a float, or a :ref:`(connect timeout, 11:55:59 read timeout) ` tuple. 11:55:59 :type timeout: float or tuple or urllib3 Timeout object 11:55:59 :param verify: (optional) Either a boolean, in which case it controls whether 11:55:59 we verify the server's TLS certificate, or a string, in which case it 11:55:59 must be a path to a CA bundle to use 11:55:59 :param cert: (optional) Any user-provided SSL certificate to be trusted. 11:55:59 :param proxies: (optional) The proxies dictionary to apply to the request. 11:55:59 :rtype: requests.Response 11:55:59 """ 11:55:59 11:55:59 try: 11:55:59 conn = self.get_connection_with_tls_context( 11:55:59 request, verify, proxies=proxies, cert=cert 11:55:59 ) 11:55:59 except LocationValueError as e: 11:55:59 raise InvalidURL(e, request=request) 11:55:59 11:55:59 self.cert_verify(conn, request.url, verify, cert) 11:55:59 url = self.request_url(request, proxies) 11:55:59 self.add_headers( 11:55:59 request, 11:55:59 stream=stream, 11:55:59 timeout=timeout, 11:55:59 verify=verify, 11:55:59 cert=cert, 11:55:59 proxies=proxies, 11:55:59 ) 11:55:59 11:55:59 chunked = not (request.body is None or "Content-Length" in request.headers) 11:55:59 11:55:59 if isinstance(timeout, tuple): 11:55:59 try: 11:55:59 connect, read = timeout 11:55:59 timeout = TimeoutSauce(connect=connect, read=read) 11:55:59 except ValueError: 11:55:59 raise ValueError( 11:55:59 f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 11:55:59 f"or a single float to set both timeouts to the same value." 
11:55:59 ) 11:55:59 elif isinstance(timeout, TimeoutSauce): 11:55:59 pass 11:55:59 else: 11:55:59 timeout = TimeoutSauce(connect=timeout, read=timeout) 11:55:59 11:55:59 try: 11:55:59 > resp = conn.urlopen( 11:55:59 method=request.method, 11:55:59 url=url, 11:55:59 body=request.body, 11:55:59 headers=request.headers, 11:55:59 redirect=False, 11:55:59 assert_same_host=False, 11:55:59 preload_content=False, 11:55:59 decode_content=False, 11:55:59 retries=self.max_retries, 11:55:59 timeout=timeout, 11:55:59 chunked=chunked, 11:55:59 ) 11:55:59 11:55:59 ../.tox/tests_tapi/lib/python3.11/site-packages/requests/adapters.py:667: 11:55:59 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 11:55:59 ../.tox/tests_tapi/lib/python3.11/site-packages/urllib3/connectionpool.py:841: in urlopen 11:55:59 retries = retries.increment( 11:55:59 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 11:55:59 11:55:59 self = Retry(total=0, connect=None, read=False, redirect=None, status=None) 11:55:59 method = 'POST' 11:55:59 url = '/rests/operations/transportpce-networkutils:init-rdm-xpdr-links' 11:55:59 response = None 11:55:59 error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused') 11:55:59 _pool = 11:55:59 _stacktrace = 11:55:59 11:55:59 def increment( 11:55:59 self, 11:55:59 method: str | None = None, 11:55:59 url: str | None = None, 11:55:59 response: BaseHTTPResponse | None = None, 11:55:59 error: Exception | None = None, 11:55:59 _pool: ConnectionPool | None = None, 11:55:59 _stacktrace: TracebackType | None = None, 11:55:59 ) -> Self: 11:55:59 """Return a new Retry object with incremented retry counters. 11:55:59 11:55:59 :param response: A response object, or None, if the server did not 11:55:59 return a response. 11:55:59 :type response: :class:`~urllib3.response.BaseHTTPResponse` 11:55:59 :param Exception error: An error encountered during the request, or 11:55:59 None if the response was received successfully. 11:55:59 11:55:59 :return: A new ``Retry`` object. 11:55:59 """ 11:55:59 if self.total is False and error: 11:55:59 # Disabled, indicate to re-raise the error. 11:55:59 raise reraise(type(error), error, _stacktrace) 11:55:59 11:55:59 total = self.total 11:55:59 if total is not None: 11:55:59 total -= 1 11:55:59 11:55:59 connect = self.connect 11:55:59 read = self.read 11:55:59 redirect = self.redirect 11:55:59 status_count = self.status 11:55:59 other = self.other 11:55:59 cause = "unknown" 11:55:59 status = None 11:55:59 redirect_location = None 11:55:59 11:55:59 if error and self._is_connection_error(error): 11:55:59 # Connect retry? 11:55:59 if connect is False: 11:55:59 raise reraise(type(error), error, _stacktrace) 11:55:59 elif connect is not None: 11:55:59 connect -= 1 11:55:59 11:55:59 elif error and self._is_read_error(error): 11:55:59 # Read retry? 11:55:59 if read is False or method is None or not self._is_method_retryable(method): 11:55:59 raise reraise(type(error), error, _stacktrace) 11:55:59 elif read is not None: 11:55:59 read -= 1 11:55:59 11:55:59 elif error: 11:55:59 # Other retry? 11:55:59 if other is not None: 11:55:59 other -= 1 11:55:59 11:55:59 elif response and response.get_redirect_location(): 11:55:59 # Redirect retry? 
11:55:59 if redirect is not None: 11:55:59 redirect -= 1 11:55:59 cause = "too many redirects" 11:55:59 response_redirect_location = response.get_redirect_location() 11:55:59 if response_redirect_location: 11:55:59 redirect_location = response_redirect_location 11:55:59 status = response.status 11:55:59 11:55:59 else: 11:55:59 # Incrementing because of a server error like a 500 in 11:55:59 # status_forcelist and the given method is in the allowed_methods 11:55:59 cause = ResponseError.GENERIC_ERROR 11:55:59 if response and response.status: 11:55:59 if status_count is not None: 11:55:59 status_count -= 1 11:55:59 cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status) 11:55:59 status = response.status 11:55:59 11:55:59 history = self.history + ( 11:55:59 RequestHistory(method, url, error, status, redirect_location), 11:55:59 ) 11:55:59 11:55:59 new_retry = self.new( 11:55:59 total=total, 11:55:59 connect=connect, 11:55:59 read=read, 11:55:59 redirect=redirect, 11:55:59 status=status_count, 11:55:59 other=other, 11:55:59 history=history, 11:55:59 ) 11:55:59 11:55:59 if new_retry.is_exhausted(): 11:55:59 reason = error or ResponseError(cause) 11:55:59 > raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type] 11:55:59 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 11:55:59 E urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=8183): Max retries exceeded with url: /rests/operations/transportpce-networkutils:init-rdm-xpdr-links (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 11:55:59 11:55:59 ../.tox/tests_tapi/lib/python3.11/site-packages/urllib3/util/retry.py:519: MaxRetryError 11:55:59 11:55:59 During handling of the above exception, another exception occurred: 11:55:59 11:55:59 self = 11:55:59 11:55:59 def test_27_connect_roadmc_pp2_to_spdrc_n1(self): 11:55:59 > response = test_utils.transportpce_api_rpc_request( 11:55:59 'transportpce-networkutils', 'init-rdm-xpdr-links', 11:55:59 {'links-input': {'xpdr-node': 'SPDR-SC1', 'xpdr-num': '1', 'network-num': '1', 11:55:59 'rdm-node': 'ROADM-C1', 'srg-num': '1', 'termination-point-num': 'SRG1-PP2-TXRX'}}) 11:55:59 11:55:59 transportpce_tests/tapi/test01_abstracted_topology.py:417: 11:55:59 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 11:55:59 transportpce_tests/common/test_utils.py:729: in transportpce_api_rpc_request 11:55:59 response = post_request(url, data) 11:55:59 ^^^^^^^^^^^^^^^^^^^^^^^ 11:55:59 transportpce_tests/common/test_utils.py:142: in post_request 11:55:59 return requests.request( 11:55:59 ../.tox/tests_tapi/lib/python3.11/site-packages/requests/api.py:59: in request 11:55:59 return session.request(method=method, url=url, **kwargs) 11:55:59 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 11:55:59 ../.tox/tests_tapi/lib/python3.11/site-packages/requests/sessions.py:589: in request 11:55:59 resp = self.send(prep, **send_kwargs) 11:55:59 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 11:55:59 ../.tox/tests_tapi/lib/python3.11/site-packages/requests/sessions.py:703: in send 11:55:59 r = adapter.send(request, **kwargs) 11:55:59 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 11:55:59 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 11:55:59 11:55:59 self = 11:55:59 request = , stream = False 11:55:59 timeout = Timeout(connect=30, read=30, total=None), verify = True, cert = None 11:55:59 proxies = OrderedDict() 11:55:59 11:55:59 def send( 11:55:59 self, request, stream=False, 
timeout=None, verify=True, cert=None, proxies=None 11:55:59 ): 11:55:59 """Sends PreparedRequest object. Returns Response object. 11:55:59 11:55:59 :param request: The :class:`PreparedRequest ` being sent. 11:55:59 :param stream: (optional) Whether to stream the request content. 11:55:59 :param timeout: (optional) How long to wait for the server to send 11:55:59 data before giving up, as a float, or a :ref:`(connect timeout, 11:55:59 read timeout) ` tuple. 11:55:59 :type timeout: float or tuple or urllib3 Timeout object 11:55:59 :param verify: (optional) Either a boolean, in which case it controls whether 11:55:59 we verify the server's TLS certificate, or a string, in which case it 11:55:59 must be a path to a CA bundle to use 11:55:59 :param cert: (optional) Any user-provided SSL certificate to be trusted. 11:55:59 :param proxies: (optional) The proxies dictionary to apply to the request. 11:55:59 :rtype: requests.Response 11:55:59 """ 11:55:59 11:55:59 try: 11:55:59 conn = self.get_connection_with_tls_context( 11:55:59 request, verify, proxies=proxies, cert=cert 11:55:59 ) 11:55:59 except LocationValueError as e: 11:55:59 raise InvalidURL(e, request=request) 11:55:59 11:55:59 self.cert_verify(conn, request.url, verify, cert) 11:55:59 url = self.request_url(request, proxies) 11:55:59 self.add_headers( 11:55:59 request, 11:55:59 stream=stream, 11:55:59 timeout=timeout, 11:55:59 verify=verify, 11:55:59 cert=cert, 11:55:59 proxies=proxies, 11:55:59 ) 11:55:59 11:55:59 chunked = not (request.body is None or "Content-Length" in request.headers) 11:55:59 11:55:59 if isinstance(timeout, tuple): 11:55:59 try: 11:55:59 connect, read = timeout 11:55:59 timeout = TimeoutSauce(connect=connect, read=read) 11:55:59 except ValueError: 11:55:59 raise ValueError( 11:55:59 f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 11:55:59 f"or a single float to set both timeouts to the same value." 11:55:59 ) 11:55:59 elif isinstance(timeout, TimeoutSauce): 11:55:59 pass 11:55:59 else: 11:55:59 timeout = TimeoutSauce(connect=timeout, read=timeout) 11:55:59 11:55:59 try: 11:55:59 resp = conn.urlopen( 11:55:59 method=request.method, 11:55:59 url=url, 11:55:59 body=request.body, 11:55:59 headers=request.headers, 11:55:59 redirect=False, 11:55:59 assert_same_host=False, 11:55:59 preload_content=False, 11:55:59 decode_content=False, 11:55:59 retries=self.max_retries, 11:55:59 timeout=timeout, 11:55:59 chunked=chunked, 11:55:59 ) 11:55:59 11:55:59 except (ProtocolError, OSError) as err: 11:55:59 raise ConnectionError(err, request=request) 11:55:59 11:55:59 except MaxRetryError as e: 11:55:59 if isinstance(e.reason, ConnectTimeoutError): 11:55:59 # TODO: Remove this in 3.0.0: see #2811 11:55:59 if not isinstance(e.reason, NewConnectionError): 11:55:59 raise ConnectTimeout(e, request=request) 11:55:59 11:55:59 if isinstance(e.reason, ResponseError): 11:55:59 raise RetryError(e, request=request) 11:55:59 11:55:59 if isinstance(e.reason, _ProxyError): 11:55:59 raise ProxyError(e, request=request) 11:55:59 11:55:59 if isinstance(e.reason, _SSLError): 11:55:59 # This branch is for urllib3 v1.22 and later. 
11:55:59 raise SSLError(e, request=request) 11:55:59 11:55:59 > raise ConnectionError(e, request=request) 11:55:59 E requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=8183): Max retries exceeded with url: /rests/operations/transportpce-networkutils:init-rdm-xpdr-links (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 11:55:59 11:55:59 ../.tox/tests_tapi/lib/python3.11/site-packages/requests/adapters.py:700: ConnectionError 11:55:59 ----------------------------- Captured stdout call ----------------------------- 11:55:59 execution of test_27_connect_roadmc_pp2_to_spdrc_n1 11:55:59 ____________ TransportTapitesting.test_28_check_tapi_topology_T100G ____________ 11:55:59 11:55:59 self = 11:55:59 11:55:59 def _new_conn(self) -> socket.socket: 11:55:59 """Establish a socket connection and set nodelay settings on it. 11:55:59 11:55:59 :return: New socket connection. 11:55:59 """ 11:55:59 try: 11:55:59 > sock = connection.create_connection( 11:55:59 (self._dns_host, self.port), 11:55:59 self.timeout, 11:55:59 source_address=self.source_address, 11:55:59 socket_options=self.socket_options, 11:55:59 ) 11:55:59 11:55:59 ../.tox/tests_tapi/lib/python3.11/site-packages/urllib3/connection.py:198: 11:55:59 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 11:55:59 ../.tox/tests_tapi/lib/python3.11/site-packages/urllib3/util/connection.py:85: in create_connection 11:55:59 raise err 11:55:59 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 11:55:59 11:55:59 address = ('localhost', 8183), timeout = 30, source_address = None 11:55:59 socket_options = [(6, 1, 1)] 11:55:59 11:55:59 def create_connection( 11:55:59 address: tuple[str, int], 11:55:59 timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 11:55:59 source_address: tuple[str, int] | None = None, 11:55:59 socket_options: _TYPE_SOCKET_OPTIONS | None = None, 11:55:59 ) -> socket.socket: 11:55:59 """Connect to *address* and return the socket object. 11:55:59 11:55:59 Convenience function. Connect to *address* (a 2-tuple ``(host, 11:55:59 port)``) and return the socket object. Passing the optional 11:55:59 *timeout* parameter will set the timeout on the socket instance 11:55:59 before attempting to connect. If no *timeout* is supplied, the 11:55:59 global default timeout setting returned by :func:`socket.getdefaulttimeout` 11:55:59 is used. If *source_address* is set it must be a tuple of (host, port) 11:55:59 for the socket to bind as a source address before making the connection. 11:55:59 An host of '' or port 0 tells the OS to use the default. 11:55:59 """ 11:55:59 11:55:59 host, port = address 11:55:59 if host.startswith("["): 11:55:59 host = host.strip("[]") 11:55:59 err = None 11:55:59 11:55:59 # Using the value from allowed_gai_family() in the context of getaddrinfo lets 11:55:59 # us select whether to work with IPv4 DNS records, IPv6 records, or both. 11:55:59 # The original create_connection function always returns all records. 
11:55:59 family = allowed_gai_family() 11:55:59 11:55:59 try: 11:55:59 host.encode("idna") 11:55:59 except UnicodeError: 11:55:59 raise LocationParseError(f"'{host}', label empty or too long") from None 11:55:59 11:55:59 for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM): 11:55:59 af, socktype, proto, canonname, sa = res 11:55:59 sock = None 11:55:59 try: 11:55:59 sock = socket.socket(af, socktype, proto) 11:55:59 11:55:59 # If provided, set socket level options before connecting. 11:55:59 _set_socket_options(sock, socket_options) 11:55:59 11:55:59 if timeout is not _DEFAULT_TIMEOUT: 11:55:59 sock.settimeout(timeout) 11:55:59 if source_address: 11:55:59 sock.bind(source_address) 11:55:59 > sock.connect(sa) 11:55:59 E ConnectionRefusedError: [Errno 111] Connection refused 11:55:59 11:55:59 ../.tox/tests_tapi/lib/python3.11/site-packages/urllib3/util/connection.py:73: ConnectionRefusedError 11:55:59 11:55:59 The above exception was the direct cause of the following exception: 11:55:59 11:55:59 self = 11:55:59 method = 'POST', url = '/rests/operations/tapi-topology:get-topology-details' 11:55:59 body = '{"input": {"topology-id": "cf51c729-3699-308a-a7d0-594c6a62ebbb"}}' 11:55:59 headers = {'User-Agent': 'python-requests/2.32.4', 'Accept-Encoding': 'gzip, deflate', 'Accept': 'application/json', 'Connection': 'keep-alive', 'Content-Type': 'application/json', 'Content-Length': '66', 'Authorization': 'Basic YWRtaW46YWRtaW4='} 11:55:59 retries = Retry(total=0, connect=None, read=False, redirect=None, status=None) 11:55:59 redirect = False, assert_same_host = False 11:55:59 timeout = Timeout(connect=30, read=30, total=None), pool_timeout = None 11:55:59 release_conn = False, chunked = False, body_pos = None, preload_content = False 11:55:59 decode_content = False, response_kw = {} 11:55:59 parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/rests/operations/tapi-topology:get-topology-details', query=None, fragment=None) 11:55:59 destination_scheme = None, conn = None, release_this_conn = True 11:55:59 http_tunnel_required = False, err = None, clean_exit = False 11:55:59 11:55:59 def urlopen( # type: ignore[override] 11:55:59 self, 11:55:59 method: str, 11:55:59 url: str, 11:55:59 body: _TYPE_BODY | None = None, 11:55:59 headers: typing.Mapping[str, str] | None = None, 11:55:59 retries: Retry | bool | int | None = None, 11:55:59 redirect: bool = True, 11:55:59 assert_same_host: bool = True, 11:55:59 timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 11:55:59 pool_timeout: int | None = None, 11:55:59 release_conn: bool | None = None, 11:55:59 chunked: bool = False, 11:55:59 body_pos: _TYPE_BODY_POSITION | None = None, 11:55:59 preload_content: bool = True, 11:55:59 decode_content: bool = True, 11:55:59 **response_kw: typing.Any, 11:55:59 ) -> BaseHTTPResponse: 11:55:59 """ 11:55:59 Get a connection from the pool and perform an HTTP request. This is the 11:55:59 lowest level call for making a request, so you'll need to specify all 11:55:59 the raw details. 11:55:59 11:55:59 .. note:: 11:55:59 11:55:59 More commonly, it's appropriate to use a convenience method 11:55:59 such as :meth:`request`. 11:55:59 11:55:59 .. note:: 11:55:59 11:55:59 `release_conn` will only behave as expected if 11:55:59 `preload_content=False` because we want to make 11:55:59 `preload_content=False` the default behaviour someday soon without 11:55:59 breaking backwards compatibility. 11:55:59 11:55:59 :param method: 11:55:59 HTTP request method (such as GET, POST, PUT, etc.) 
11:55:59 11:55:59 :param url: 11:55:59 The URL to perform the request on. 11:55:59 11:55:59 :param body: 11:55:59 Data to send in the request body, either :class:`str`, :class:`bytes`, 11:55:59 an iterable of :class:`str`/:class:`bytes`, or a file-like object. 11:55:59 11:55:59 :param headers: 11:55:59 Dictionary of custom headers to send, such as User-Agent, 11:55:59 If-None-Match, etc. If None, pool headers are used. If provided, 11:55:59 these headers completely replace any pool-specific headers. 11:55:59 11:55:59 :param retries: 11:55:59 Configure the number of retries to allow before raising a 11:55:59 :class:`~urllib3.exceptions.MaxRetryError` exception. 11:55:59 11:55:59 If ``None`` (default) will retry 3 times, see ``Retry.DEFAULT``. Pass a 11:55:59 :class:`~urllib3.util.retry.Retry` object for fine-grained control 11:55:59 over different types of retries. 11:55:59 Pass an integer number to retry connection errors that many times, 11:55:59 but no other types of errors. Pass zero to never retry. 11:55:59 11:55:59 If ``False``, then retries are disabled and any exception is raised 11:55:59 immediately. Also, instead of raising a MaxRetryError on redirects, 11:55:59 the redirect response will be returned. 11:55:59 11:55:59 :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. 11:55:59 11:55:59 :param redirect: 11:55:59 If True, automatically handle redirects (status codes 301, 302, 11:55:59 303, 307, 308). Each redirect counts as a retry. Disabling retries 11:55:59 will disable redirect, too. 11:55:59 11:55:59 :param assert_same_host: 11:55:59 If ``True``, will make sure that the host of the pool requests is 11:55:59 consistent else will raise HostChangedError. When ``False``, you can 11:55:59 use the pool on an HTTP proxy and request foreign hosts. 11:55:59 11:55:59 :param timeout: 11:55:59 If specified, overrides the default timeout for this one 11:55:59 request. It may be a float (in seconds) or an instance of 11:55:59 :class:`urllib3.util.Timeout`. 11:55:59 11:55:59 :param pool_timeout: 11:55:59 If set and the pool is set to block=True, then this method will 11:55:59 block for ``pool_timeout`` seconds and raise EmptyPoolError if no 11:55:59 connection is available within the time period. 11:55:59 11:55:59 :param bool preload_content: 11:55:59 If True, the response's body will be preloaded into memory. 11:55:59 11:55:59 :param bool decode_content: 11:55:59 If True, will attempt to decode the body based on the 11:55:59 'content-encoding' header. 11:55:59 11:55:59 :param release_conn: 11:55:59 If False, then the urlopen call will not release the connection 11:55:59 back into the pool once a response is received (but will release if 11:55:59 you read the entire contents of the response such as when 11:55:59 `preload_content=True`). This is useful if you're not preloading 11:55:59 the response's content immediately. You will need to call 11:55:59 ``r.release_conn()`` on the response ``r`` to return the connection 11:55:59 back into the pool. If None, it takes the value of ``preload_content`` 11:55:59 which defaults to ``True``. 11:55:59 11:55:59 :param bool chunked: 11:55:59 If True, urllib3 will send the body using chunked transfer 11:55:59 encoding. Otherwise, urllib3 will send the body using the standard 11:55:59 content-length form. Defaults to False. 11:55:59 11:55:59 :param int body_pos: 11:55:59 Position to seek to in file-like body in the event of a retry or 11:55:59 redirect. 
Typically this won't need to be set because urllib3 will 11:55:59 auto-populate the value when needed. 11:55:59 """ 11:55:59 parsed_url = parse_url(url) 11:55:59 destination_scheme = parsed_url.scheme 11:55:59 11:55:59 if headers is None: 11:55:59 headers = self.headers 11:55:59 11:55:59 if not isinstance(retries, Retry): 11:55:59 retries = Retry.from_int(retries, redirect=redirect, default=self.retries) 11:55:59 11:55:59 if release_conn is None: 11:55:59 release_conn = preload_content 11:55:59 11:55:59 # Check host 11:55:59 if assert_same_host and not self.is_same_host(url): 11:55:59 raise HostChangedError(self, url, retries) 11:55:59 11:55:59 # Ensure that the URL we're connecting to is properly encoded 11:55:59 if url.startswith("/"): 11:55:59 url = to_str(_encode_target(url)) 11:55:59 else: 11:55:59 url = to_str(parsed_url.url) 11:55:59 11:55:59 conn = None 11:55:59 11:55:59 # Track whether `conn` needs to be released before 11:55:59 # returning/raising/recursing. Update this variable if necessary, and 11:55:59 # leave `release_conn` constant throughout the function. That way, if 11:55:59 # the function recurses, the original value of `release_conn` will be 11:55:59 # passed down into the recursive call, and its value will be respected. 11:55:59 # 11:55:59 # See issue #651 [1] for details. 11:55:59 # 11:55:59 # [1] 11:55:59 release_this_conn = release_conn 11:55:59 11:55:59 http_tunnel_required = connection_requires_http_tunnel( 11:55:59 self.proxy, self.proxy_config, destination_scheme 11:55:59 ) 11:55:59 11:55:59 # Merge the proxy headers. Only done when not using HTTP CONNECT. We 11:55:59 # have to copy the headers dict so we can safely change it without those 11:55:59 # changes being reflected in anyone else's copy. 11:55:59 if not http_tunnel_required: 11:55:59 headers = headers.copy() # type: ignore[attr-defined] 11:55:59 headers.update(self.proxy_headers) # type: ignore[union-attr] 11:55:59 11:55:59 # Must keep the exception bound to a separate variable or else Python 3 11:55:59 # complains about UnboundLocalError. 11:55:59 err = None 11:55:59 11:55:59 # Keep track of whether we cleanly exited the except block. This 11:55:59 # ensures we do proper cleanup in finally. 11:55:59 clean_exit = False 11:55:59 11:55:59 # Rewind body position, if needed. Record current position 11:55:59 # for future rewinds in the event of a redirect/retry. 11:55:59 body_pos = set_file_position(body, body_pos) 11:55:59 11:55:59 try: 11:55:59 # Request a connection from the queue. 11:55:59 timeout_obj = self._get_timeout(timeout) 11:55:59 conn = self._get_conn(timeout=pool_timeout) 11:55:59 11:55:59 conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] 11:55:59 11:55:59 # Is this a closed/new connection that requires CONNECT tunnelling? 11:55:59 if self.proxy is not None and http_tunnel_required and conn.is_closed: 11:55:59 try: 11:55:59 self._prepare_proxy(conn) 11:55:59 except (BaseSSLError, OSError, SocketTimeout) as e: 11:55:59 self._raise_timeout( 11:55:59 err=e, url=self.proxy.url, timeout_value=conn.timeout 11:55:59 ) 11:55:59 raise 11:55:59 11:55:59 # If we're going to release the connection in ``finally:``, then 11:55:59 # the response doesn't need to know about the connection. Otherwise 11:55:59 # it will also try to release it and we'll have a double-release 11:55:59 # mess. 
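The Retry(total=0, connect=None, read=False, ...) object shown in the captured parameters is what requests builds from its default max_retries, so the first refused connection is surfaced immediately. A hedged sketch of how a caller could opt into a small retry budget by mounting its own adapter (illustrative only; the captured run uses the default of zero retries):

    import requests
    from requests.adapters import HTTPAdapter
    from urllib3.util.retry import Retry

    # Retry connection errors a few times with exponential back-off instead of
    # failing on the first refused connection.
    session = requests.Session()
    retry = Retry(total=3, connect=3, backoff_factor=0.5)
    session.mount("http://", HTTPAdapter(max_retries=retry))
    # session.post(...) would now retry transient connection failures.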
11:55:59 response_conn = conn if not release_conn else None 11:55:59 11:55:59 # Make the request on the HTTPConnection object 11:55:59 > response = self._make_request( 11:55:59 conn, 11:55:59 method, 11:55:59 url, 11:55:59 timeout=timeout_obj, 11:55:59 body=body, 11:55:59 headers=headers, 11:55:59 chunked=chunked, 11:55:59 retries=retries, 11:55:59 response_conn=response_conn, 11:55:59 preload_content=preload_content, 11:55:59 decode_content=decode_content, 11:55:59 **response_kw, 11:55:59 ) 11:55:59 11:55:59 ../.tox/tests_tapi/lib/python3.11/site-packages/urllib3/connectionpool.py:787: 11:55:59 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 11:55:59 ../.tox/tests_tapi/lib/python3.11/site-packages/urllib3/connectionpool.py:493: in _make_request 11:55:59 conn.request( 11:55:59 ../.tox/tests_tapi/lib/python3.11/site-packages/urllib3/connection.py:494: in request 11:55:59 self.endheaders() 11:55:59 /opt/pyenv/versions/3.11.10/lib/python3.11/http/client.py:1298: in endheaders 11:55:59 self._send_output(message_body, encode_chunked=encode_chunked) 11:55:59 /opt/pyenv/versions/3.11.10/lib/python3.11/http/client.py:1058: in _send_output 11:55:59 self.send(msg) 11:55:59 /opt/pyenv/versions/3.11.10/lib/python3.11/http/client.py:996: in send 11:55:59 self.connect() 11:55:59 ../.tox/tests_tapi/lib/python3.11/site-packages/urllib3/connection.py:325: in connect 11:55:59 self.sock = self._new_conn() 11:55:59 ^^^^^^^^^^^^^^^^ 11:55:59 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 11:55:59 11:55:59 self = 11:55:59 11:55:59 def _new_conn(self) -> socket.socket: 11:55:59 """Establish a socket connection and set nodelay settings on it. 11:55:59 11:55:59 :return: New socket connection. 11:55:59 """ 11:55:59 try: 11:55:59 sock = connection.create_connection( 11:55:59 (self._dns_host, self.port), 11:55:59 self.timeout, 11:55:59 source_address=self.source_address, 11:55:59 socket_options=self.socket_options, 11:55:59 ) 11:55:59 except socket.gaierror as e: 11:55:59 raise NameResolutionError(self.host, self, e) from e 11:55:59 except SocketTimeout as e: 11:55:59 raise ConnectTimeoutError( 11:55:59 self, 11:55:59 f"Connection to {self.host} timed out. (connect timeout={self.timeout})", 11:55:59 ) from e 11:55:59 11:55:59 except OSError as e: 11:55:59 > raise NewConnectionError( 11:55:59 self, f"Failed to establish a new connection: {e}" 11:55:59 ) from e 11:55:59 E urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused 11:55:59 11:55:59 ../.tox/tests_tapi/lib/python3.11/site-packages/urllib3/connection.py:213: NewConnectionError 11:55:59 11:55:59 The above exception was the direct cause of the following exception: 11:55:59 11:55:59 self = 11:55:59 request = , stream = False 11:55:59 timeout = Timeout(connect=30, read=30, total=None), verify = True, cert = None 11:55:59 proxies = OrderedDict() 11:55:59 11:55:59 def send( 11:55:59 self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 11:55:59 ): 11:55:59 """Sends PreparedRequest object. Returns Response object. 11:55:59 11:55:59 :param request: The :class:`PreparedRequest ` being sent. 11:55:59 :param stream: (optional) Whether to stream the request content. 11:55:59 :param timeout: (optional) How long to wait for the server to send 11:55:59 data before giving up, as a float, or a :ref:`(connect timeout, 11:55:59 read timeout) ` tuple. 
11:55:59 :type timeout: float or tuple or urllib3 Timeout object 11:55:59 :param verify: (optional) Either a boolean, in which case it controls whether 11:55:59 we verify the server's TLS certificate, or a string, in which case it 11:55:59 must be a path to a CA bundle to use 11:55:59 :param cert: (optional) Any user-provided SSL certificate to be trusted. 11:55:59 :param proxies: (optional) The proxies dictionary to apply to the request. 11:55:59 :rtype: requests.Response 11:55:59 """ 11:55:59 11:55:59 try: 11:55:59 conn = self.get_connection_with_tls_context( 11:55:59 request, verify, proxies=proxies, cert=cert 11:55:59 ) 11:55:59 except LocationValueError as e: 11:55:59 raise InvalidURL(e, request=request) 11:55:59 11:55:59 self.cert_verify(conn, request.url, verify, cert) 11:55:59 url = self.request_url(request, proxies) 11:55:59 self.add_headers( 11:55:59 request, 11:55:59 stream=stream, 11:55:59 timeout=timeout, 11:55:59 verify=verify, 11:55:59 cert=cert, 11:55:59 proxies=proxies, 11:55:59 ) 11:55:59 11:55:59 chunked = not (request.body is None or "Content-Length" in request.headers) 11:55:59 11:55:59 if isinstance(timeout, tuple): 11:55:59 try: 11:55:59 connect, read = timeout 11:55:59 timeout = TimeoutSauce(connect=connect, read=read) 11:55:59 except ValueError: 11:55:59 raise ValueError( 11:55:59 f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 11:55:59 f"or a single float to set both timeouts to the same value." 11:55:59 ) 11:55:59 elif isinstance(timeout, TimeoutSauce): 11:55:59 pass 11:55:59 else: 11:55:59 timeout = TimeoutSauce(connect=timeout, read=timeout) 11:55:59 11:55:59 try: 11:55:59 > resp = conn.urlopen( 11:55:59 method=request.method, 11:55:59 url=url, 11:55:59 body=request.body, 11:55:59 headers=request.headers, 11:55:59 redirect=False, 11:55:59 assert_same_host=False, 11:55:59 preload_content=False, 11:55:59 decode_content=False, 11:55:59 retries=self.max_retries, 11:55:59 timeout=timeout, 11:55:59 chunked=chunked, 11:55:59 ) 11:55:59 11:55:59 ../.tox/tests_tapi/lib/python3.11/site-packages/requests/adapters.py:667: 11:55:59 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 11:55:59 ../.tox/tests_tapi/lib/python3.11/site-packages/urllib3/connectionpool.py:841: in urlopen 11:55:59 retries = retries.increment( 11:55:59 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 11:55:59 11:55:59 self = Retry(total=0, connect=None, read=False, redirect=None, status=None) 11:55:59 method = 'POST', url = '/rests/operations/tapi-topology:get-topology-details' 11:55:59 response = None 11:55:59 error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused') 11:55:59 _pool = 11:55:59 _stacktrace = 11:55:59 11:55:59 def increment( 11:55:59 self, 11:55:59 method: str | None = None, 11:55:59 url: str | None = None, 11:55:59 response: BaseHTTPResponse | None = None, 11:55:59 error: Exception | None = None, 11:55:59 _pool: ConnectionPool | None = None, 11:55:59 _stacktrace: TracebackType | None = None, 11:55:59 ) -> Self: 11:55:59 """Return a new Retry object with incremented retry counters. 11:55:59 11:55:59 :param response: A response object, or None, if the server did not 11:55:59 return a response. 11:55:59 :type response: :class:`~urllib3.response.BaseHTTPResponse` 11:55:59 :param Exception error: An error encountered during the request, or 11:55:59 None if the response was received successfully. 11:55:59 11:55:59 :return: A new ``Retry`` object. 
11:55:59 """ 11:55:59 if self.total is False and error: 11:55:59 # Disabled, indicate to re-raise the error. 11:55:59 raise reraise(type(error), error, _stacktrace) 11:55:59 11:55:59 total = self.total 11:55:59 if total is not None: 11:55:59 total -= 1 11:55:59 11:55:59 connect = self.connect 11:55:59 read = self.read 11:55:59 redirect = self.redirect 11:55:59 status_count = self.status 11:55:59 other = self.other 11:55:59 cause = "unknown" 11:55:59 status = None 11:55:59 redirect_location = None 11:55:59 11:55:59 if error and self._is_connection_error(error): 11:55:59 # Connect retry? 11:55:59 if connect is False: 11:55:59 raise reraise(type(error), error, _stacktrace) 11:55:59 elif connect is not None: 11:55:59 connect -= 1 11:55:59 11:55:59 elif error and self._is_read_error(error): 11:55:59 # Read retry? 11:55:59 if read is False or method is None or not self._is_method_retryable(method): 11:55:59 raise reraise(type(error), error, _stacktrace) 11:55:59 elif read is not None: 11:55:59 read -= 1 11:55:59 11:55:59 elif error: 11:55:59 # Other retry? 11:55:59 if other is not None: 11:55:59 other -= 1 11:55:59 11:55:59 elif response and response.get_redirect_location(): 11:55:59 # Redirect retry? 11:55:59 if redirect is not None: 11:55:59 redirect -= 1 11:55:59 cause = "too many redirects" 11:55:59 response_redirect_location = response.get_redirect_location() 11:55:59 if response_redirect_location: 11:55:59 redirect_location = response_redirect_location 11:55:59 status = response.status 11:55:59 11:55:59 else: 11:55:59 # Incrementing because of a server error like a 500 in 11:55:59 # status_forcelist and the given method is in the allowed_methods 11:55:59 cause = ResponseError.GENERIC_ERROR 11:55:59 if response and response.status: 11:55:59 if status_count is not None: 11:55:59 status_count -= 1 11:55:59 cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status) 11:55:59 status = response.status 11:55:59 11:55:59 history = self.history + ( 11:55:59 RequestHistory(method, url, error, status, redirect_location), 11:55:59 ) 11:55:59 11:55:59 new_retry = self.new( 11:55:59 total=total, 11:55:59 connect=connect, 11:55:59 read=read, 11:55:59 redirect=redirect, 11:55:59 status=status_count, 11:55:59 other=other, 11:55:59 history=history, 11:55:59 ) 11:55:59 11:55:59 if new_retry.is_exhausted(): 11:55:59 reason = error or ResponseError(cause) 11:55:59 > raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type] 11:55:59 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 11:55:59 E urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=8183): Max retries exceeded with url: /rests/operations/tapi-topology:get-topology-details (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 11:55:59 11:55:59 ../.tox/tests_tapi/lib/python3.11/site-packages/urllib3/util/retry.py:519: MaxRetryError 11:55:59 11:55:59 During handling of the above exception, another exception occurred: 11:55:59 11:55:59 self = 11:55:59 11:55:59 def test_28_check_tapi_topology_T100G(self): 11:55:59 > self.test_18_check_tapi_topology_T100G() 11:55:59 11:55:59 transportpce_tests/tapi/test01_abstracted_topology.py:427: 11:55:59 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 11:55:59 transportpce_tests/tapi/test01_abstracted_topology.py:342: in test_18_check_tapi_topology_T100G 11:55:59 response = test_utils.transportpce_api_rpc_request( 11:55:59 transportpce_tests/common/test_utils.py:729: in 
transportpce_api_rpc_request 11:55:59 response = post_request(url, data) 11:55:59 ^^^^^^^^^^^^^^^^^^^^^^^ 11:55:59 transportpce_tests/common/test_utils.py:142: in post_request 11:55:59 return requests.request( 11:55:59 ../.tox/tests_tapi/lib/python3.11/site-packages/requests/api.py:59: in request 11:55:59 return session.request(method=method, url=url, **kwargs) 11:55:59 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 11:55:59 ../.tox/tests_tapi/lib/python3.11/site-packages/requests/sessions.py:589: in request 11:55:59 resp = self.send(prep, **send_kwargs) 11:55:59 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 11:55:59 ../.tox/tests_tapi/lib/python3.11/site-packages/requests/sessions.py:703: in send 11:55:59 r = adapter.send(request, **kwargs) 11:55:59 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 11:55:59 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 11:55:59 11:55:59 self = 11:55:59 request = , stream = False 11:55:59 timeout = Timeout(connect=30, read=30, total=None), verify = True, cert = None 11:55:59 proxies = OrderedDict() 11:55:59 11:55:59 def send( 11:55:59 self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 11:55:59 ): 11:55:59 """Sends PreparedRequest object. Returns Response object. 11:55:59 11:55:59 :param request: The :class:`PreparedRequest ` being sent. 11:55:59 :param stream: (optional) Whether to stream the request content. 11:55:59 :param timeout: (optional) How long to wait for the server to send 11:55:59 data before giving up, as a float, or a :ref:`(connect timeout, 11:55:59 read timeout) ` tuple. 11:55:59 :type timeout: float or tuple or urllib3 Timeout object 11:55:59 :param verify: (optional) Either a boolean, in which case it controls whether 11:55:59 we verify the server's TLS certificate, or a string, in which case it 11:55:59 must be a path to a CA bundle to use 11:55:59 :param cert: (optional) Any user-provided SSL certificate to be trusted. 11:55:59 :param proxies: (optional) The proxies dictionary to apply to the request. 11:55:59 :rtype: requests.Response 11:55:59 """ 11:55:59 11:55:59 try: 11:55:59 conn = self.get_connection_with_tls_context( 11:55:59 request, verify, proxies=proxies, cert=cert 11:55:59 ) 11:55:59 except LocationValueError as e: 11:55:59 raise InvalidURL(e, request=request) 11:55:59 11:55:59 self.cert_verify(conn, request.url, verify, cert) 11:55:59 url = self.request_url(request, proxies) 11:55:59 self.add_headers( 11:55:59 request, 11:55:59 stream=stream, 11:55:59 timeout=timeout, 11:55:59 verify=verify, 11:55:59 cert=cert, 11:55:59 proxies=proxies, 11:55:59 ) 11:55:59 11:55:59 chunked = not (request.body is None or "Content-Length" in request.headers) 11:55:59 11:55:59 if isinstance(timeout, tuple): 11:55:59 try: 11:55:59 connect, read = timeout 11:55:59 timeout = TimeoutSauce(connect=connect, read=read) 11:55:59 except ValueError: 11:55:59 raise ValueError( 11:55:59 f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 11:55:59 f"or a single float to set both timeouts to the same value." 
11:55:59 ) 11:55:59 elif isinstance(timeout, TimeoutSauce): 11:55:59 pass 11:55:59 else: 11:55:59 timeout = TimeoutSauce(connect=timeout, read=timeout) 11:55:59 11:55:59 try: 11:55:59 resp = conn.urlopen( 11:55:59 method=request.method, 11:55:59 url=url, 11:55:59 body=request.body, 11:55:59 headers=request.headers, 11:55:59 redirect=False, 11:55:59 assert_same_host=False, 11:55:59 preload_content=False, 11:55:59 decode_content=False, 11:55:59 retries=self.max_retries, 11:55:59 timeout=timeout, 11:55:59 chunked=chunked, 11:55:59 ) 11:55:59 11:55:59 except (ProtocolError, OSError) as err: 11:55:59 raise ConnectionError(err, request=request) 11:55:59 11:55:59 except MaxRetryError as e: 11:55:59 if isinstance(e.reason, ConnectTimeoutError): 11:55:59 # TODO: Remove this in 3.0.0: see #2811 11:55:59 if not isinstance(e.reason, NewConnectionError): 11:55:59 raise ConnectTimeout(e, request=request) 11:55:59 11:55:59 if isinstance(e.reason, ResponseError): 11:55:59 raise RetryError(e, request=request) 11:55:59 11:55:59 if isinstance(e.reason, _ProxyError): 11:55:59 raise ProxyError(e, request=request) 11:55:59 11:55:59 if isinstance(e.reason, _SSLError): 11:55:59 # This branch is for urllib3 v1.22 and later. 11:55:59 raise SSLError(e, request=request) 11:55:59 11:55:59 > raise ConnectionError(e, request=request) 11:55:59 E requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=8183): Max retries exceeded with url: /rests/operations/tapi-topology:get-topology-details (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 11:55:59 11:55:59 ../.tox/tests_tapi/lib/python3.11/site-packages/requests/adapters.py:700: ConnectionError 11:55:59 ----------------------------- Captured stdout call ----------------------------- 11:55:59 execution of test_28_check_tapi_topology_T100G 11:55:59 _____________ TransportTapitesting.test_29_check_tapi_topology_T0 ______________ 11:55:59 11:55:59 self = 11:55:59 11:55:59 def _new_conn(self) -> socket.socket: 11:55:59 """Establish a socket connection and set nodelay settings on it. 11:55:59 11:55:59 :return: New socket connection. 11:55:59 """ 11:55:59 try: 11:55:59 > sock = connection.create_connection( 11:55:59 (self._dns_host, self.port), 11:55:59 self.timeout, 11:55:59 source_address=self.source_address, 11:55:59 socket_options=self.socket_options, 11:55:59 ) 11:55:59 11:55:59 ../.tox/tests_tapi/lib/python3.11/site-packages/urllib3/connection.py:198: 11:55:59 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 11:55:59 ../.tox/tests_tapi/lib/python3.11/site-packages/urllib3/util/connection.py:85: in create_connection 11:55:59 raise err 11:55:59 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 11:55:59 11:55:59 address = ('localhost', 8183), timeout = 30, source_address = None 11:55:59 socket_options = [(6, 1, 1)] 11:55:59 11:55:59 def create_connection( 11:55:59 address: tuple[str, int], 11:55:59 timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 11:55:59 source_address: tuple[str, int] | None = None, 11:55:59 socket_options: _TYPE_SOCKET_OPTIONS | None = None, 11:55:59 ) -> socket.socket: 11:55:59 """Connect to *address* and return the socket object. 11:55:59 11:55:59 Convenience function. Connect to *address* (a 2-tuple ``(host, 11:55:59 port)``) and return the socket object. Passing the optional 11:55:59 *timeout* parameter will set the timeout on the socket instance 11:55:59 before attempting to connect. 
If no *timeout* is supplied, the 11:55:59 global default timeout setting returned by :func:`socket.getdefaulttimeout` 11:55:59 is used. If *source_address* is set it must be a tuple of (host, port) 11:55:59 for the socket to bind as a source address before making the connection. 11:55:59 An host of '' or port 0 tells the OS to use the default. 11:55:59 """ 11:55:59 11:55:59 host, port = address 11:55:59 if host.startswith("["): 11:55:59 host = host.strip("[]") 11:55:59 err = None 11:55:59 11:55:59 # Using the value from allowed_gai_family() in the context of getaddrinfo lets 11:55:59 # us select whether to work with IPv4 DNS records, IPv6 records, or both. 11:55:59 # The original create_connection function always returns all records. 11:55:59 family = allowed_gai_family() 11:55:59 11:55:59 try: 11:55:59 host.encode("idna") 11:55:59 except UnicodeError: 11:55:59 raise LocationParseError(f"'{host}', label empty or too long") from None 11:55:59 11:55:59 for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM): 11:55:59 af, socktype, proto, canonname, sa = res 11:55:59 sock = None 11:55:59 try: 11:55:59 sock = socket.socket(af, socktype, proto) 11:55:59 11:55:59 # If provided, set socket level options before connecting. 11:55:59 _set_socket_options(sock, socket_options) 11:55:59 11:55:59 if timeout is not _DEFAULT_TIMEOUT: 11:55:59 sock.settimeout(timeout) 11:55:59 if source_address: 11:55:59 sock.bind(source_address) 11:55:59 > sock.connect(sa) 11:55:59 E ConnectionRefusedError: [Errno 111] Connection refused 11:55:59 11:55:59 ../.tox/tests_tapi/lib/python3.11/site-packages/urllib3/util/connection.py:73: ConnectionRefusedError 11:55:59 11:55:59 The above exception was the direct cause of the following exception: 11:55:59 11:55:59 self = 11:55:59 method = 'POST', url = '/rests/operations/tapi-topology:get-topology-details' 11:55:59 body = '{"input": {"topology-id": "747c670e-7a07-3dab-b379-5b1cd17402a3"}}' 11:55:59 headers = {'User-Agent': 'python-requests/2.32.4', 'Accept-Encoding': 'gzip, deflate', 'Accept': 'application/json', 'Connection': 'keep-alive', 'Content-Type': 'application/json', 'Content-Length': '66', 'Authorization': 'Basic YWRtaW46YWRtaW4='} 11:55:59 retries = Retry(total=0, connect=None, read=False, redirect=None, status=None) 11:55:59 redirect = False, assert_same_host = False 11:55:59 timeout = Timeout(connect=30, read=30, total=None), pool_timeout = None 11:55:59 release_conn = False, chunked = False, body_pos = None, preload_content = False 11:55:59 decode_content = False, response_kw = {} 11:55:59 parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/rests/operations/tapi-topology:get-topology-details', query=None, fragment=None) 11:55:59 destination_scheme = None, conn = None, release_this_conn = True 11:55:59 http_tunnel_required = False, err = None, clean_exit = False 11:55:59 11:55:59 def urlopen( # type: ignore[override] 11:55:59 self, 11:55:59 method: str, 11:55:59 url: str, 11:55:59 body: _TYPE_BODY | None = None, 11:55:59 headers: typing.Mapping[str, str] | None = None, 11:55:59 retries: Retry | bool | int | None = None, 11:55:59 redirect: bool = True, 11:55:59 assert_same_host: bool = True, 11:55:59 timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 11:55:59 pool_timeout: int | None = None, 11:55:59 release_conn: bool | None = None, 11:55:59 chunked: bool = False, 11:55:59 body_pos: _TYPE_BODY_POSITION | None = None, 11:55:59 preload_content: bool = True, 11:55:59 decode_content: bool = True, 11:55:59 **response_kw: typing.Any, 11:55:59 ) -> 
BaseHTTPResponse: 11:55:59 """ 11:55:59 Get a connection from the pool and perform an HTTP request. This is the 11:55:59 lowest level call for making a request, so you'll need to specify all 11:55:59 the raw details. 11:55:59 11:55:59 .. note:: 11:55:59 11:55:59 More commonly, it's appropriate to use a convenience method 11:55:59 such as :meth:`request`. 11:55:59 11:55:59 .. note:: 11:55:59 11:55:59 `release_conn` will only behave as expected if 11:55:59 `preload_content=False` because we want to make 11:55:59 `preload_content=False` the default behaviour someday soon without 11:55:59 breaking backwards compatibility. 11:55:59 11:55:59 :param method: 11:55:59 HTTP request method (such as GET, POST, PUT, etc.) 11:55:59 11:55:59 :param url: 11:55:59 The URL to perform the request on. 11:55:59 11:55:59 :param body: 11:55:59 Data to send in the request body, either :class:`str`, :class:`bytes`, 11:55:59 an iterable of :class:`str`/:class:`bytes`, or a file-like object. 11:55:59 11:55:59 :param headers: 11:55:59 Dictionary of custom headers to send, such as User-Agent, 11:55:59 If-None-Match, etc. If None, pool headers are used. If provided, 11:55:59 these headers completely replace any pool-specific headers. 11:55:59 11:55:59 :param retries: 11:55:59 Configure the number of retries to allow before raising a 11:55:59 :class:`~urllib3.exceptions.MaxRetryError` exception. 11:55:59 11:55:59 If ``None`` (default) will retry 3 times, see ``Retry.DEFAULT``. Pass a 11:55:59 :class:`~urllib3.util.retry.Retry` object for fine-grained control 11:55:59 over different types of retries. 11:55:59 Pass an integer number to retry connection errors that many times, 11:55:59 but no other types of errors. Pass zero to never retry. 11:55:59 11:55:59 If ``False``, then retries are disabled and any exception is raised 11:55:59 immediately. Also, instead of raising a MaxRetryError on redirects, 11:55:59 the redirect response will be returned. 11:55:59 11:55:59 :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. 11:55:59 11:55:59 :param redirect: 11:55:59 If True, automatically handle redirects (status codes 301, 302, 11:55:59 303, 307, 308). Each redirect counts as a retry. Disabling retries 11:55:59 will disable redirect, too. 11:55:59 11:55:59 :param assert_same_host: 11:55:59 If ``True``, will make sure that the host of the pool requests is 11:55:59 consistent else will raise HostChangedError. When ``False``, you can 11:55:59 use the pool on an HTTP proxy and request foreign hosts. 11:55:59 11:55:59 :param timeout: 11:55:59 If specified, overrides the default timeout for this one 11:55:59 request. It may be a float (in seconds) or an instance of 11:55:59 :class:`urllib3.util.Timeout`. 11:55:59 11:55:59 :param pool_timeout: 11:55:59 If set and the pool is set to block=True, then this method will 11:55:59 block for ``pool_timeout`` seconds and raise EmptyPoolError if no 11:55:59 connection is available within the time period. 11:55:59 11:55:59 :param bool preload_content: 11:55:59 If True, the response's body will be preloaded into memory. 11:55:59 11:55:59 :param bool decode_content: 11:55:59 If True, will attempt to decode the body based on the 11:55:59 'content-encoding' header. 11:55:59 11:55:59 :param release_conn: 11:55:59 If False, then the urlopen call will not release the connection 11:55:59 back into the pool once a response is received (but will release if 11:55:59 you read the entire contents of the response such as when 11:55:59 `preload_content=True`). 
This is useful if you're not preloading 11:55:59 the response's content immediately. You will need to call 11:55:59 ``r.release_conn()`` on the response ``r`` to return the connection 11:55:59 back into the pool. If None, it takes the value of ``preload_content`` 11:55:59 which defaults to ``True``. 11:55:59 11:55:59 :param bool chunked: 11:55:59 If True, urllib3 will send the body using chunked transfer 11:55:59 encoding. Otherwise, urllib3 will send the body using the standard 11:55:59 content-length form. Defaults to False. 11:55:59 11:55:59 :param int body_pos: 11:55:59 Position to seek to in file-like body in the event of a retry or 11:55:59 redirect. Typically this won't need to be set because urllib3 will 11:55:59 auto-populate the value when needed. 11:55:59 """ 11:55:59 parsed_url = parse_url(url) 11:55:59 destination_scheme = parsed_url.scheme 11:55:59 11:55:59 if headers is None: 11:55:59 headers = self.headers 11:55:59 11:55:59 if not isinstance(retries, Retry): 11:55:59 retries = Retry.from_int(retries, redirect=redirect, default=self.retries) 11:55:59 11:55:59 if release_conn is None: 11:55:59 release_conn = preload_content 11:55:59 11:55:59 # Check host 11:55:59 if assert_same_host and not self.is_same_host(url): 11:55:59 raise HostChangedError(self, url, retries) 11:55:59 11:55:59 # Ensure that the URL we're connecting to is properly encoded 11:55:59 if url.startswith("/"): 11:55:59 url = to_str(_encode_target(url)) 11:55:59 else: 11:55:59 url = to_str(parsed_url.url) 11:55:59 11:55:59 conn = None 11:55:59 11:55:59 # Track whether `conn` needs to be released before 11:55:59 # returning/raising/recursing. Update this variable if necessary, and 11:55:59 # leave `release_conn` constant throughout the function. That way, if 11:55:59 # the function recurses, the original value of `release_conn` will be 11:55:59 # passed down into the recursive call, and its value will be respected. 11:55:59 # 11:55:59 # See issue #651 [1] for details. 11:55:59 # 11:55:59 # [1] 11:55:59 release_this_conn = release_conn 11:55:59 11:55:59 http_tunnel_required = connection_requires_http_tunnel( 11:55:59 self.proxy, self.proxy_config, destination_scheme 11:55:59 ) 11:55:59 11:55:59 # Merge the proxy headers. Only done when not using HTTP CONNECT. We 11:55:59 # have to copy the headers dict so we can safely change it without those 11:55:59 # changes being reflected in anyone else's copy. 11:55:59 if not http_tunnel_required: 11:55:59 headers = headers.copy() # type: ignore[attr-defined] 11:55:59 headers.update(self.proxy_headers) # type: ignore[union-attr] 11:55:59 11:55:59 # Must keep the exception bound to a separate variable or else Python 3 11:55:59 # complains about UnboundLocalError. 11:55:59 err = None 11:55:59 11:55:59 # Keep track of whether we cleanly exited the except block. This 11:55:59 # ensures we do proper cleanup in finally. 11:55:59 clean_exit = False 11:55:59 11:55:59 # Rewind body position, if needed. Record current position 11:55:59 # for future rewinds in the event of a redirect/retry. 11:55:59 body_pos = set_file_position(body, body_pos) 11:55:59 11:55:59 try: 11:55:59 # Request a connection from the queue. 11:55:59 timeout_obj = self._get_timeout(timeout) 11:55:59 conn = self._get_conn(timeout=pool_timeout) 11:55:59 11:55:59 conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] 11:55:59 11:55:59 # Is this a closed/new connection that requires CONNECT tunnelling? 
11:55:59 if self.proxy is not None and http_tunnel_required and conn.is_closed: 11:55:59 try: 11:55:59 self._prepare_proxy(conn) 11:55:59 except (BaseSSLError, OSError, SocketTimeout) as e: 11:55:59 self._raise_timeout( 11:55:59 err=e, url=self.proxy.url, timeout_value=conn.timeout 11:55:59 ) 11:55:59 raise 11:55:59 11:55:59 # If we're going to release the connection in ``finally:``, then 11:55:59 # the response doesn't need to know about the connection. Otherwise 11:55:59 # it will also try to release it and we'll have a double-release 11:55:59 # mess. 11:55:59 response_conn = conn if not release_conn else None 11:55:59 11:55:59 # Make the request on the HTTPConnection object 11:55:59 > response = self._make_request( 11:55:59 conn, 11:55:59 method, 11:55:59 url, 11:55:59 timeout=timeout_obj, 11:55:59 body=body, 11:55:59 headers=headers, 11:55:59 chunked=chunked, 11:55:59 retries=retries, 11:55:59 response_conn=response_conn, 11:55:59 preload_content=preload_content, 11:55:59 decode_content=decode_content, 11:55:59 **response_kw, 11:55:59 ) 11:55:59 11:55:59 ../.tox/tests_tapi/lib/python3.11/site-packages/urllib3/connectionpool.py:787: 11:55:59 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 11:55:59 ../.tox/tests_tapi/lib/python3.11/site-packages/urllib3/connectionpool.py:493: in _make_request 11:55:59 conn.request( 11:55:59 ../.tox/tests_tapi/lib/python3.11/site-packages/urllib3/connection.py:494: in request 11:55:59 self.endheaders() 11:55:59 /opt/pyenv/versions/3.11.10/lib/python3.11/http/client.py:1298: in endheaders 11:55:59 self._send_output(message_body, encode_chunked=encode_chunked) 11:55:59 /opt/pyenv/versions/3.11.10/lib/python3.11/http/client.py:1058: in _send_output 11:55:59 self.send(msg) 11:55:59 /opt/pyenv/versions/3.11.10/lib/python3.11/http/client.py:996: in send 11:55:59 self.connect() 11:55:59 ../.tox/tests_tapi/lib/python3.11/site-packages/urllib3/connection.py:325: in connect 11:55:59 self.sock = self._new_conn() 11:55:59 ^^^^^^^^^^^^^^^^ 11:55:59 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 11:55:59 11:55:59 self = 11:55:59 11:55:59 def _new_conn(self) -> socket.socket: 11:55:59 """Establish a socket connection and set nodelay settings on it. 11:55:59 11:55:59 :return: New socket connection. 11:55:59 """ 11:55:59 try: 11:55:59 sock = connection.create_connection( 11:55:59 (self._dns_host, self.port), 11:55:59 self.timeout, 11:55:59 source_address=self.source_address, 11:55:59 socket_options=self.socket_options, 11:55:59 ) 11:55:59 except socket.gaierror as e: 11:55:59 raise NameResolutionError(self.host, self, e) from e 11:55:59 except SocketTimeout as e: 11:55:59 raise ConnectTimeoutError( 11:55:59 self, 11:55:59 f"Connection to {self.host} timed out. 
(connect timeout={self.timeout})", 11:55:59 ) from e 11:55:59 11:55:59 except OSError as e: 11:55:59 > raise NewConnectionError( 11:55:59 self, f"Failed to establish a new connection: {e}" 11:55:59 ) from e 11:55:59 E urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused 11:55:59 11:55:59 ../.tox/tests_tapi/lib/python3.11/site-packages/urllib3/connection.py:213: NewConnectionError 11:55:59 11:55:59 The above exception was the direct cause of the following exception: 11:55:59 11:55:59 self = 11:55:59 request = , stream = False 11:55:59 timeout = Timeout(connect=30, read=30, total=None), verify = True, cert = None 11:55:59 proxies = OrderedDict() 11:55:59 11:55:59 def send( 11:55:59 self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 11:55:59 ): 11:55:59 """Sends PreparedRequest object. Returns Response object. 11:55:59 11:55:59 :param request: The :class:`PreparedRequest ` being sent. 11:55:59 :param stream: (optional) Whether to stream the request content. 11:55:59 :param timeout: (optional) How long to wait for the server to send 11:55:59 data before giving up, as a float, or a :ref:`(connect timeout, 11:55:59 read timeout) ` tuple. 11:55:59 :type timeout: float or tuple or urllib3 Timeout object 11:55:59 :param verify: (optional) Either a boolean, in which case it controls whether 11:55:59 we verify the server's TLS certificate, or a string, in which case it 11:55:59 must be a path to a CA bundle to use 11:55:59 :param cert: (optional) Any user-provided SSL certificate to be trusted. 11:55:59 :param proxies: (optional) The proxies dictionary to apply to the request. 11:55:59 :rtype: requests.Response 11:55:59 """ 11:55:59 11:55:59 try: 11:55:59 conn = self.get_connection_with_tls_context( 11:55:59 request, verify, proxies=proxies, cert=cert 11:55:59 ) 11:55:59 except LocationValueError as e: 11:55:59 raise InvalidURL(e, request=request) 11:55:59 11:55:59 self.cert_verify(conn, request.url, verify, cert) 11:55:59 url = self.request_url(request, proxies) 11:55:59 self.add_headers( 11:55:59 request, 11:55:59 stream=stream, 11:55:59 timeout=timeout, 11:55:59 verify=verify, 11:55:59 cert=cert, 11:55:59 proxies=proxies, 11:55:59 ) 11:55:59 11:55:59 chunked = not (request.body is None or "Content-Length" in request.headers) 11:55:59 11:55:59 if isinstance(timeout, tuple): 11:55:59 try: 11:55:59 connect, read = timeout 11:55:59 timeout = TimeoutSauce(connect=connect, read=read) 11:55:59 except ValueError: 11:55:59 raise ValueError( 11:55:59 f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 11:55:59 f"or a single float to set both timeouts to the same value." 
11:55:59 ) 11:55:59 elif isinstance(timeout, TimeoutSauce): 11:55:59 pass 11:55:59 else: 11:55:59 timeout = TimeoutSauce(connect=timeout, read=timeout) 11:55:59 11:55:59 try: 11:55:59 > resp = conn.urlopen( 11:55:59 method=request.method, 11:55:59 url=url, 11:55:59 body=request.body, 11:55:59 headers=request.headers, 11:55:59 redirect=False, 11:55:59 assert_same_host=False, 11:55:59 preload_content=False, 11:55:59 decode_content=False, 11:55:59 retries=self.max_retries, 11:55:59 timeout=timeout, 11:55:59 chunked=chunked, 11:55:59 ) 11:55:59 11:55:59 ../.tox/tests_tapi/lib/python3.11/site-packages/requests/adapters.py:667: 11:55:59 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 11:55:59 ../.tox/tests_tapi/lib/python3.11/site-packages/urllib3/connectionpool.py:841: in urlopen 11:55:59 retries = retries.increment( 11:55:59 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 11:55:59 11:55:59 self = Retry(total=0, connect=None, read=False, redirect=None, status=None) 11:55:59 method = 'POST', url = '/rests/operations/tapi-topology:get-topology-details' 11:55:59 response = None 11:55:59 error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused') 11:55:59 _pool = 11:55:59 _stacktrace = 11:55:59 11:55:59 def increment( 11:55:59 self, 11:55:59 method: str | None = None, 11:55:59 url: str | None = None, 11:55:59 response: BaseHTTPResponse | None = None, 11:55:59 error: Exception | None = None, 11:55:59 _pool: ConnectionPool | None = None, 11:55:59 _stacktrace: TracebackType | None = None, 11:55:59 ) -> Self: 11:55:59 """Return a new Retry object with incremented retry counters. 11:55:59 11:55:59 :param response: A response object, or None, if the server did not 11:55:59 return a response. 11:55:59 :type response: :class:`~urllib3.response.BaseHTTPResponse` 11:55:59 :param Exception error: An error encountered during the request, or 11:55:59 None if the response was received successfully. 11:55:59 11:55:59 :return: A new ``Retry`` object. 11:55:59 """ 11:55:59 if self.total is False and error: 11:55:59 # Disabled, indicate to re-raise the error. 11:55:59 raise reraise(type(error), error, _stacktrace) 11:55:59 11:55:59 total = self.total 11:55:59 if total is not None: 11:55:59 total -= 1 11:55:59 11:55:59 connect = self.connect 11:55:59 read = self.read 11:55:59 redirect = self.redirect 11:55:59 status_count = self.status 11:55:59 other = self.other 11:55:59 cause = "unknown" 11:55:59 status = None 11:55:59 redirect_location = None 11:55:59 11:55:59 if error and self._is_connection_error(error): 11:55:59 # Connect retry? 11:55:59 if connect is False: 11:55:59 raise reraise(type(error), error, _stacktrace) 11:55:59 elif connect is not None: 11:55:59 connect -= 1 11:55:59 11:55:59 elif error and self._is_read_error(error): 11:55:59 # Read retry? 11:55:59 if read is False or method is None or not self._is_method_retryable(method): 11:55:59 raise reraise(type(error), error, _stacktrace) 11:55:59 elif read is not None: 11:55:59 read -= 1 11:55:59 11:55:59 elif error: 11:55:59 # Other retry? 11:55:59 if other is not None: 11:55:59 other -= 1 11:55:59 11:55:59 elif response and response.get_redirect_location(): 11:55:59 # Redirect retry? 
11:55:59 if redirect is not None: 11:55:59 redirect -= 1 11:55:59 cause = "too many redirects" 11:55:59 response_redirect_location = response.get_redirect_location() 11:55:59 if response_redirect_location: 11:55:59 redirect_location = response_redirect_location 11:55:59 status = response.status 11:55:59 11:55:59 else: 11:55:59 # Incrementing because of a server error like a 500 in 11:55:59 # status_forcelist and the given method is in the allowed_methods 11:55:59 cause = ResponseError.GENERIC_ERROR 11:55:59 if response and response.status: 11:55:59 if status_count is not None: 11:55:59 status_count -= 1 11:55:59 cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status) 11:55:59 status = response.status 11:55:59 11:55:59 history = self.history + ( 11:55:59 RequestHistory(method, url, error, status, redirect_location), 11:55:59 ) 11:55:59 11:55:59 new_retry = self.new( 11:55:59 total=total, 11:55:59 connect=connect, 11:55:59 read=read, 11:55:59 redirect=redirect, 11:55:59 status=status_count, 11:55:59 other=other, 11:55:59 history=history, 11:55:59 ) 11:55:59 11:55:59 if new_retry.is_exhausted(): 11:55:59 reason = error or ResponseError(cause) 11:55:59 > raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type] 11:55:59 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 11:55:59 E urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=8183): Max retries exceeded with url: /rests/operations/tapi-topology:get-topology-details (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 11:55:59 11:55:59 ../.tox/tests_tapi/lib/python3.11/site-packages/urllib3/util/retry.py:519: MaxRetryError 11:55:59 11:55:59 During handling of the above exception, another exception occurred: 11:55:59 11:55:59 self = 11:55:59 11:55:59 def test_29_check_tapi_topology_T0(self): 11:55:59 self.tapi_topo["topology-id"] = test_utils.T0_MULTILAYER_TOPO_UUID 11:55:59 > response = test_utils.transportpce_api_rpc_request( 11:55:59 'tapi-topology', 'get-topology-details', self.tapi_topo) 11:55:59 11:55:59 transportpce_tests/tapi/test01_abstracted_topology.py:431: 11:55:59 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 11:55:59 transportpce_tests/common/test_utils.py:729: in transportpce_api_rpc_request 11:55:59 response = post_request(url, data) 11:55:59 ^^^^^^^^^^^^^^^^^^^^^^^ 11:55:59 transportpce_tests/common/test_utils.py:142: in post_request 11:55:59 return requests.request( 11:55:59 ../.tox/tests_tapi/lib/python3.11/site-packages/requests/api.py:59: in request 11:55:59 return session.request(method=method, url=url, **kwargs) 11:55:59 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 11:55:59 ../.tox/tests_tapi/lib/python3.11/site-packages/requests/sessions.py:589: in request 11:55:59 resp = self.send(prep, **send_kwargs) 11:55:59 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 11:55:59 ../.tox/tests_tapi/lib/python3.11/site-packages/requests/sessions.py:703: in send 11:55:59 r = adapter.send(request, **kwargs) 11:55:59 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 11:55:59 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 11:55:59 11:55:59 self = 11:55:59 request = , stream = False 11:55:59 timeout = Timeout(connect=30, read=30, total=None), verify = True, cert = None 11:55:59 proxies = OrderedDict() 11:55:59 11:55:59 def send( 11:55:59 self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 11:55:59 ): 11:55:59 """Sends PreparedRequest object. 
Returns Response object. 11:55:59 11:55:59 :param request: The :class:`PreparedRequest ` being sent. 11:55:59 :param stream: (optional) Whether to stream the request content. 11:55:59 :param timeout: (optional) How long to wait for the server to send 11:55:59 data before giving up, as a float, or a :ref:`(connect timeout, 11:55:59 read timeout) ` tuple. 11:55:59 :type timeout: float or tuple or urllib3 Timeout object 11:55:59 :param verify: (optional) Either a boolean, in which case it controls whether 11:55:59 we verify the server's TLS certificate, or a string, in which case it 11:55:59 must be a path to a CA bundle to use 11:55:59 :param cert: (optional) Any user-provided SSL certificate to be trusted. 11:55:59 :param proxies: (optional) The proxies dictionary to apply to the request. 11:55:59 :rtype: requests.Response 11:55:59 """ 11:55:59 11:55:59 try: 11:55:59 conn = self.get_connection_with_tls_context( 11:55:59 request, verify, proxies=proxies, cert=cert 11:55:59 ) 11:55:59 except LocationValueError as e: 11:55:59 raise InvalidURL(e, request=request) 11:55:59 11:55:59 self.cert_verify(conn, request.url, verify, cert) 11:55:59 url = self.request_url(request, proxies) 11:55:59 self.add_headers( 11:55:59 request, 11:55:59 stream=stream, 11:55:59 timeout=timeout, 11:55:59 verify=verify, 11:55:59 cert=cert, 11:55:59 proxies=proxies, 11:55:59 ) 11:55:59 11:55:59 chunked = not (request.body is None or "Content-Length" in request.headers) 11:55:59 11:55:59 if isinstance(timeout, tuple): 11:55:59 try: 11:55:59 connect, read = timeout 11:55:59 timeout = TimeoutSauce(connect=connect, read=read) 11:55:59 except ValueError: 11:55:59 raise ValueError( 11:55:59 f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 11:55:59 f"or a single float to set both timeouts to the same value." 11:55:59 ) 11:55:59 elif isinstance(timeout, TimeoutSauce): 11:55:59 pass 11:55:59 else: 11:55:59 timeout = TimeoutSauce(connect=timeout, read=timeout) 11:55:59 11:55:59 try: 11:55:59 resp = conn.urlopen( 11:55:59 method=request.method, 11:55:59 url=url, 11:55:59 body=request.body, 11:55:59 headers=request.headers, 11:55:59 redirect=False, 11:55:59 assert_same_host=False, 11:55:59 preload_content=False, 11:55:59 decode_content=False, 11:55:59 retries=self.max_retries, 11:55:59 timeout=timeout, 11:55:59 chunked=chunked, 11:55:59 ) 11:55:59 11:55:59 except (ProtocolError, OSError) as err: 11:55:59 raise ConnectionError(err, request=request) 11:55:59 11:55:59 except MaxRetryError as e: 11:55:59 if isinstance(e.reason, ConnectTimeoutError): 11:55:59 # TODO: Remove this in 3.0.0: see #2811 11:55:59 if not isinstance(e.reason, NewConnectionError): 11:55:59 raise ConnectTimeout(e, request=request) 11:55:59 11:55:59 if isinstance(e.reason, ResponseError): 11:55:59 raise RetryError(e, request=request) 11:55:59 11:55:59 if isinstance(e.reason, _ProxyError): 11:55:59 raise ProxyError(e, request=request) 11:55:59 11:55:59 if isinstance(e.reason, _SSLError): 11:55:59 # This branch is for urllib3 v1.22 and later. 
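As the adapter code above shows, a MaxRetryError whose reason is a NewConnectionError is re-raised as requests.exceptions.ConnectionError, which is the exception each test ultimately reports. A sketch of catching it around the same RPC call (topology UUID taken from the captured request body; admin/admin credentials assumed from the Basic Authorization header):

    import requests

    try:
        requests.post(
            "http://localhost:8183/rests/operations/tapi-topology:get-topology-details",
            json={"input": {"topology-id": "747c670e-7a07-3dab-b379-5b1cd17402a3"}},
            auth=("admin", "admin"),
            timeout=(30, 30),  # connect/read timeouts matching the traceback
        )
    except requests.exceptions.ConnectionError as exc:
        # Raised here because the underlying MaxRetryError wraps a refused connection.
        print(f"controller unreachable: {exc}")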
11:55:59 raise SSLError(e, request=request) 11:55:59 11:55:59 > raise ConnectionError(e, request=request) 11:55:59 E requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=8183): Max retries exceeded with url: /rests/operations/tapi-topology:get-topology-details (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 11:55:59 11:55:59 ../.tox/tests_tapi/lib/python3.11/site-packages/requests/adapters.py:700: ConnectionError 11:55:59 ----------------------------- Captured stdout call ----------------------------- 11:55:59 execution of test_29_check_tapi_topology_T0 11:55:59 _______________ TransportTapitesting.test_30_add_oms_attributes ________________ 11:55:59 11:55:59 self = 11:55:59 11:55:59 def _new_conn(self) -> socket.socket: 11:55:59 """Establish a socket connection and set nodelay settings on it. 11:55:59 11:55:59 :return: New socket connection. 11:55:59 """ 11:55:59 try: 11:55:59 > sock = connection.create_connection( 11:55:59 (self._dns_host, self.port), 11:55:59 self.timeout, 11:55:59 source_address=self.source_address, 11:55:59 socket_options=self.socket_options, 11:55:59 ) 11:55:59 11:55:59 ../.tox/tests_tapi/lib/python3.11/site-packages/urllib3/connection.py:198: 11:55:59 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 11:55:59 ../.tox/tests_tapi/lib/python3.11/site-packages/urllib3/util/connection.py:85: in create_connection 11:55:59 raise err 11:55:59 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 11:55:59 11:55:59 address = ('localhost', 8183), timeout = 30, source_address = None 11:55:59 socket_options = [(6, 1, 1)] 11:55:59 11:55:59 def create_connection( 11:55:59 address: tuple[str, int], 11:55:59 timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 11:55:59 source_address: tuple[str, int] | None = None, 11:55:59 socket_options: _TYPE_SOCKET_OPTIONS | None = None, 11:55:59 ) -> socket.socket: 11:55:59 """Connect to *address* and return the socket object. 11:55:59 11:55:59 Convenience function. Connect to *address* (a 2-tuple ``(host, 11:55:59 port)``) and return the socket object. Passing the optional 11:55:59 *timeout* parameter will set the timeout on the socket instance 11:55:59 before attempting to connect. If no *timeout* is supplied, the 11:55:59 global default timeout setting returned by :func:`socket.getdefaulttimeout` 11:55:59 is used. If *source_address* is set it must be a tuple of (host, port) 11:55:59 for the socket to bind as a source address before making the connection. 11:55:59 An host of '' or port 0 tells the OS to use the default. 11:55:59 """ 11:55:59 11:55:59 host, port = address 11:55:59 if host.startswith("["): 11:55:59 host = host.strip("[]") 11:55:59 err = None 11:55:59 11:55:59 # Using the value from allowed_gai_family() in the context of getaddrinfo lets 11:55:59 # us select whether to work with IPv4 DNS records, IPv6 records, or both. 11:55:59 # The original create_connection function always returns all records. 11:55:59 family = allowed_gai_family() 11:55:59 11:55:59 try: 11:55:59 host.encode("idna") 11:55:59 except UnicodeError: 11:55:59 raise LocationParseError(f"'{host}', label empty or too long") from None 11:55:59 11:55:59 for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM): 11:55:59 af, socktype, proto, canonname, sa = res 11:55:59 sock = None 11:55:59 try: 11:55:59 sock = socket.socket(af, socktype, proto) 11:55:59 11:55:59 # If provided, set socket level options before connecting. 
11:55:59 _set_socket_options(sock, socket_options) 11:55:59 11:55:59 if timeout is not _DEFAULT_TIMEOUT: 11:55:59 sock.settimeout(timeout) 11:55:59 if source_address: 11:55:59 sock.bind(source_address) 11:55:59 > sock.connect(sa) 11:55:59 E ConnectionRefusedError: [Errno 111] Connection refused 11:55:59 11:55:59 ../.tox/tests_tapi/lib/python3.11/site-packages/urllib3/util/connection.py:73: ConnectionRefusedError 11:55:59 11:55:59 The above exception was the direct cause of the following exception: 11:55:59 11:55:59 self = 11:55:59 method = 'PUT' 11:55:59 url = '/rests/data/ietf-network:networks/network=openroadm-topology/ietf-network-topology:link=ROADM-A1-DEG2-DEG2-TTP-TXRXtoROADM-C1-DEG1-DEG1-TTP-TXRX/org-openroadm-network-topology:OMS-attributes/span' 11:55:59 body = '{"span": {"auto-spanloss": "true", "spanloss-base": 11.4, "spanloss-current": 12, "engineered-spanloss": 12.2, "link-concatenation": [{"SRLG-Id": 0, "fiber-type": "smf", "SRLG-length": 100000, "pmd": 0.5}]}}' 11:55:59 headers = {'User-Agent': 'python-requests/2.32.4', 'Accept-Encoding': 'gzip, deflate', 'Accept': 'application/json', 'Connection': 'keep-alive', 'Content-Type': 'application/json', 'Content-Length': '207', 'Authorization': 'Basic YWRtaW46YWRtaW4='} 11:55:59 retries = Retry(total=0, connect=None, read=False, redirect=None, status=None) 11:55:59 redirect = False, assert_same_host = False 11:55:59 timeout = Timeout(connect=30, read=30, total=None), pool_timeout = None 11:55:59 release_conn = False, chunked = False, body_pos = None, preload_content = False 11:55:59 decode_content = False, response_kw = {} 11:55:59 parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/rests/data/ietf-network:networks/network=openroadm-topology/i...2-TTP-TXRXtoROADM-C1-DEG1-DEG1-TTP-TXRX/org-openroadm-network-topology:OMS-attributes/span', query=None, fragment=None) 11:55:59 destination_scheme = None, conn = None, release_this_conn = True 11:55:59 http_tunnel_required = False, err = None, clean_exit = False 11:55:59 11:55:59 def urlopen( # type: ignore[override] 11:55:59 self, 11:55:59 method: str, 11:55:59 url: str, 11:55:59 body: _TYPE_BODY | None = None, 11:55:59 headers: typing.Mapping[str, str] | None = None, 11:55:59 retries: Retry | bool | int | None = None, 11:55:59 redirect: bool = True, 11:55:59 assert_same_host: bool = True, 11:55:59 timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 11:55:59 pool_timeout: int | None = None, 11:55:59 release_conn: bool | None = None, 11:55:59 chunked: bool = False, 11:55:59 body_pos: _TYPE_BODY_POSITION | None = None, 11:55:59 preload_content: bool = True, 11:55:59 decode_content: bool = True, 11:55:59 **response_kw: typing.Any, 11:55:59 ) -> BaseHTTPResponse: 11:55:59 """ 11:55:59 Get a connection from the pool and perform an HTTP request. This is the 11:55:59 lowest level call for making a request, so you'll need to specify all 11:55:59 the raw details. 11:55:59 11:55:59 .. note:: 11:55:59 11:55:59 More commonly, it's appropriate to use a convenience method 11:55:59 such as :meth:`request`. 11:55:59 11:55:59 .. note:: 11:55:59 11:55:59 `release_conn` will only behave as expected if 11:55:59 `preload_content=False` because we want to make 11:55:59 `preload_content=False` the default behaviour someday soon without 11:55:59 breaking backwards compatibility. 11:55:59 11:55:59 :param method: 11:55:59 HTTP request method (such as GET, POST, PUT, etc.) 11:55:59 11:55:59 :param url: 11:55:59 The URL to perform the request on. 
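The failing request in test_30_add_oms_attributes is a plain RESTCONF PUT; the URL and JSON body below are taken from the captured parameters above, and the admin/admin credentials are assumed from the Authorization header. A sketch of the same call:

    import requests

    url = (
        "http://localhost:8183/rests/data/ietf-network:networks/"
        "network=openroadm-topology/ietf-network-topology:"
        "link=ROADM-A1-DEG2-DEG2-TTP-TXRXtoROADM-C1-DEG1-DEG1-TTP-TXRX/"
        "org-openroadm-network-topology:OMS-attributes/span"
    )
    span = {
        "span": {
            "auto-spanloss": "true",
            "spanloss-base": 11.4,
            "spanloss-current": 12,
            "engineered-spanloss": 12.2,
            "link-concatenation": [
                {"SRLG-Id": 0, "fiber-type": "smf", "SRLG-length": 100000, "pmd": 0.5}
            ],
        }
    }
    # The PUT can only succeed if the controller is actually listening on 8183.
    response = requests.put(
        url,
        json=span,
        auth=("admin", "admin"),
        headers={"Content-Type": "application/json", "Accept": "application/json"},
        timeout=(30, 30),
    )
    print(response.status_code)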
11:55:59 11:55:59 :param body: 11:55:59 Data to send in the request body, either :class:`str`, :class:`bytes`, 11:55:59 an iterable of :class:`str`/:class:`bytes`, or a file-like object. 11:55:59 11:55:59 :param headers: 11:55:59 Dictionary of custom headers to send, such as User-Agent, 11:55:59 If-None-Match, etc. If None, pool headers are used. If provided, 11:55:59 these headers completely replace any pool-specific headers. 11:55:59 11:55:59 :param retries: 11:55:59 Configure the number of retries to allow before raising a 11:55:59 :class:`~urllib3.exceptions.MaxRetryError` exception. 11:55:59 11:55:59 If ``None`` (default) will retry 3 times, see ``Retry.DEFAULT``. Pass a 11:55:59 :class:`~urllib3.util.retry.Retry` object for fine-grained control 11:55:59 over different types of retries. 11:55:59 Pass an integer number to retry connection errors that many times, 11:55:59 but no other types of errors. Pass zero to never retry. 11:55:59 11:55:59 If ``False``, then retries are disabled and any exception is raised 11:55:59 immediately. Also, instead of raising a MaxRetryError on redirects, 11:55:59 the redirect response will be returned. 11:55:59 11:55:59 :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. 11:55:59 11:55:59 :param redirect: 11:55:59 If True, automatically handle redirects (status codes 301, 302, 11:55:59 303, 307, 308). Each redirect counts as a retry. Disabling retries 11:55:59 will disable redirect, too. 11:55:59 11:55:59 :param assert_same_host: 11:55:59 If ``True``, will make sure that the host of the pool requests is 11:55:59 consistent else will raise HostChangedError. When ``False``, you can 11:55:59 use the pool on an HTTP proxy and request foreign hosts. 11:55:59 11:55:59 :param timeout: 11:55:59 If specified, overrides the default timeout for this one 11:55:59 request. It may be a float (in seconds) or an instance of 11:55:59 :class:`urllib3.util.Timeout`. 11:55:59 11:55:59 :param pool_timeout: 11:55:59 If set and the pool is set to block=True, then this method will 11:55:59 block for ``pool_timeout`` seconds and raise EmptyPoolError if no 11:55:59 connection is available within the time period. 11:55:59 11:55:59 :param bool preload_content: 11:55:59 If True, the response's body will be preloaded into memory. 11:55:59 11:55:59 :param bool decode_content: 11:55:59 If True, will attempt to decode the body based on the 11:55:59 'content-encoding' header. 11:55:59 11:55:59 :param release_conn: 11:55:59 If False, then the urlopen call will not release the connection 11:55:59 back into the pool once a response is received (but will release if 11:55:59 you read the entire contents of the response such as when 11:55:59 `preload_content=True`). This is useful if you're not preloading 11:55:59 the response's content immediately. You will need to call 11:55:59 ``r.release_conn()`` on the response ``r`` to return the connection 11:55:59 back into the pool. If None, it takes the value of ``preload_content`` 11:55:59 which defaults to ``True``. 11:55:59 11:55:59 :param bool chunked: 11:55:59 If True, urllib3 will send the body using chunked transfer 11:55:59 encoding. Otherwise, urllib3 will send the body using the standard 11:55:59 content-length form. Defaults to False. 11:55:59 11:55:59 :param int body_pos: 11:55:59 Position to seek to in file-like body in the event of a retry or 11:55:59 redirect. Typically this won't need to be set because urllib3 will 11:55:59 auto-populate the value when needed. 
11:55:59 """ 11:55:59 parsed_url = parse_url(url) 11:55:59 destination_scheme = parsed_url.scheme 11:55:59 11:55:59 if headers is None: 11:55:59 headers = self.headers 11:55:59 11:55:59 if not isinstance(retries, Retry): 11:55:59 retries = Retry.from_int(retries, redirect=redirect, default=self.retries) 11:55:59 11:55:59 if release_conn is None: 11:55:59 release_conn = preload_content 11:55:59 11:55:59 # Check host 11:55:59 if assert_same_host and not self.is_same_host(url): 11:55:59 raise HostChangedError(self, url, retries) 11:55:59 11:55:59 # Ensure that the URL we're connecting to is properly encoded 11:55:59 if url.startswith("/"): 11:55:59 url = to_str(_encode_target(url)) 11:55:59 else: 11:55:59 url = to_str(parsed_url.url) 11:55:59 11:55:59 conn = None 11:55:59 11:55:59 # Track whether `conn` needs to be released before 11:55:59 # returning/raising/recursing. Update this variable if necessary, and 11:55:59 # leave `release_conn` constant throughout the function. That way, if 11:55:59 # the function recurses, the original value of `release_conn` will be 11:55:59 # passed down into the recursive call, and its value will be respected. 11:55:59 # 11:55:59 # See issue #651 [1] for details. 11:55:59 # 11:55:59 # [1] 11:55:59 release_this_conn = release_conn 11:55:59 11:55:59 http_tunnel_required = connection_requires_http_tunnel( 11:55:59 self.proxy, self.proxy_config, destination_scheme 11:55:59 ) 11:55:59 11:55:59 # Merge the proxy headers. Only done when not using HTTP CONNECT. We 11:55:59 # have to copy the headers dict so we can safely change it without those 11:55:59 # changes being reflected in anyone else's copy. 11:55:59 if not http_tunnel_required: 11:55:59 headers = headers.copy() # type: ignore[attr-defined] 11:55:59 headers.update(self.proxy_headers) # type: ignore[union-attr] 11:55:59 11:55:59 # Must keep the exception bound to a separate variable or else Python 3 11:55:59 # complains about UnboundLocalError. 11:55:59 err = None 11:55:59 11:55:59 # Keep track of whether we cleanly exited the except block. This 11:55:59 # ensures we do proper cleanup in finally. 11:55:59 clean_exit = False 11:55:59 11:55:59 # Rewind body position, if needed. Record current position 11:55:59 # for future rewinds in the event of a redirect/retry. 11:55:59 body_pos = set_file_position(body, body_pos) 11:55:59 11:55:59 try: 11:55:59 # Request a connection from the queue. 11:55:59 timeout_obj = self._get_timeout(timeout) 11:55:59 conn = self._get_conn(timeout=pool_timeout) 11:55:59 11:55:59 conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] 11:55:59 11:55:59 # Is this a closed/new connection that requires CONNECT tunnelling? 11:55:59 if self.proxy is not None and http_tunnel_required and conn.is_closed: 11:55:59 try: 11:55:59 self._prepare_proxy(conn) 11:55:59 except (BaseSSLError, OSError, SocketTimeout) as e: 11:55:59 self._raise_timeout( 11:55:59 err=e, url=self.proxy.url, timeout_value=conn.timeout 11:55:59 ) 11:55:59 raise 11:55:59 11:55:59 # If we're going to release the connection in ``finally:``, then 11:55:59 # the response doesn't need to know about the connection. Otherwise 11:55:59 # it will also try to release it and we'll have a double-release 11:55:59 # mess. 
11:55:59 response_conn = conn if not release_conn else None 11:55:59 11:55:59 # Make the request on the HTTPConnection object 11:55:59 > response = self._make_request( 11:55:59 conn, 11:55:59 method, 11:55:59 url, 11:55:59 timeout=timeout_obj, 11:55:59 body=body, 11:55:59 headers=headers, 11:55:59 chunked=chunked, 11:55:59 retries=retries, 11:55:59 response_conn=response_conn, 11:55:59 preload_content=preload_content, 11:55:59 decode_content=decode_content, 11:55:59 **response_kw, 11:55:59 ) 11:55:59 11:55:59 ../.tox/tests_tapi/lib/python3.11/site-packages/urllib3/connectionpool.py:787: 11:55:59 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 11:55:59 ../.tox/tests_tapi/lib/python3.11/site-packages/urllib3/connectionpool.py:493: in _make_request 11:55:59 conn.request( 11:55:59 ../.tox/tests_tapi/lib/python3.11/site-packages/urllib3/connection.py:494: in request 11:55:59 self.endheaders() 11:55:59 /opt/pyenv/versions/3.11.10/lib/python3.11/http/client.py:1298: in endheaders 11:55:59 self._send_output(message_body, encode_chunked=encode_chunked) 11:55:59 /opt/pyenv/versions/3.11.10/lib/python3.11/http/client.py:1058: in _send_output 11:55:59 self.send(msg) 11:55:59 /opt/pyenv/versions/3.11.10/lib/python3.11/http/client.py:996: in send 11:55:59 self.connect() 11:55:59 ../.tox/tests_tapi/lib/python3.11/site-packages/urllib3/connection.py:325: in connect 11:55:59 self.sock = self._new_conn() 11:55:59 ^^^^^^^^^^^^^^^^ 11:55:59 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 11:55:59 11:55:59 self = 11:55:59 11:55:59 def _new_conn(self) -> socket.socket: 11:55:59 """Establish a socket connection and set nodelay settings on it. 11:55:59 11:55:59 :return: New socket connection. 11:55:59 """ 11:55:59 try: 11:55:59 sock = connection.create_connection( 11:55:59 (self._dns_host, self.port), 11:55:59 self.timeout, 11:55:59 source_address=self.source_address, 11:55:59 socket_options=self.socket_options, 11:55:59 ) 11:55:59 except socket.gaierror as e: 11:55:59 raise NameResolutionError(self.host, self, e) from e 11:55:59 except SocketTimeout as e: 11:55:59 raise ConnectTimeoutError( 11:55:59 self, 11:55:59 f"Connection to {self.host} timed out. (connect timeout={self.timeout})", 11:55:59 ) from e 11:55:59 11:55:59 except OSError as e: 11:55:59 > raise NewConnectionError( 11:55:59 self, f"Failed to establish a new connection: {e}" 11:55:59 ) from e 11:55:59 E urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused 11:55:59 11:55:59 ../.tox/tests_tapi/lib/python3.11/site-packages/urllib3/connection.py:213: NewConnectionError 11:55:59 11:55:59 The above exception was the direct cause of the following exception: 11:55:59 11:55:59 self = 11:55:59 request = , stream = False 11:55:59 timeout = Timeout(connect=30, read=30, total=None), verify = True, cert = None 11:55:59 proxies = OrderedDict() 11:55:59 11:55:59 def send( 11:55:59 self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 11:55:59 ): 11:55:59 """Sends PreparedRequest object. Returns Response object. 11:55:59 11:55:59 :param request: The :class:`PreparedRequest ` being sent. 11:55:59 :param stream: (optional) Whether to stream the request content. 11:55:59 :param timeout: (optional) How long to wait for the server to send 11:55:59 data before giving up, as a float, or a :ref:`(connect timeout, 11:55:59 read timeout) ` tuple. 
11:55:59 :type timeout: float or tuple or urllib3 Timeout object 11:55:59 :param verify: (optional) Either a boolean, in which case it controls whether 11:55:59 we verify the server's TLS certificate, or a string, in which case it 11:55:59 must be a path to a CA bundle to use 11:55:59 :param cert: (optional) Any user-provided SSL certificate to be trusted. 11:55:59 :param proxies: (optional) The proxies dictionary to apply to the request. 11:55:59 :rtype: requests.Response 11:55:59 """ 11:55:59 11:55:59 try: 11:55:59 conn = self.get_connection_with_tls_context( 11:55:59 request, verify, proxies=proxies, cert=cert 11:55:59 ) 11:55:59 except LocationValueError as e: 11:55:59 raise InvalidURL(e, request=request) 11:55:59 11:55:59 self.cert_verify(conn, request.url, verify, cert) 11:55:59 url = self.request_url(request, proxies) 11:55:59 self.add_headers( 11:55:59 request, 11:55:59 stream=stream, 11:55:59 timeout=timeout, 11:55:59 verify=verify, 11:55:59 cert=cert, 11:55:59 proxies=proxies, 11:55:59 ) 11:55:59 11:55:59 chunked = not (request.body is None or "Content-Length" in request.headers) 11:55:59 11:55:59 if isinstance(timeout, tuple): 11:55:59 try: 11:55:59 connect, read = timeout 11:55:59 timeout = TimeoutSauce(connect=connect, read=read) 11:55:59 except ValueError: 11:55:59 raise ValueError( 11:55:59 f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 11:55:59 f"or a single float to set both timeouts to the same value." 11:55:59 ) 11:55:59 elif isinstance(timeout, TimeoutSauce): 11:55:59 pass 11:55:59 else: 11:55:59 timeout = TimeoutSauce(connect=timeout, read=timeout) 11:55:59 11:55:59 try: 11:55:59 > resp = conn.urlopen( 11:55:59 method=request.method, 11:55:59 url=url, 11:55:59 body=request.body, 11:55:59 headers=request.headers, 11:55:59 redirect=False, 11:55:59 assert_same_host=False, 11:55:59 preload_content=False, 11:55:59 decode_content=False, 11:55:59 retries=self.max_retries, 11:55:59 timeout=timeout, 11:55:59 chunked=chunked, 11:55:59 ) 11:55:59 11:55:59 ../.tox/tests_tapi/lib/python3.11/site-packages/requests/adapters.py:667: 11:55:59 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 11:55:59 ../.tox/tests_tapi/lib/python3.11/site-packages/urllib3/connectionpool.py:841: in urlopen 11:55:59 retries = retries.increment( 11:55:59 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 11:55:59 11:55:59 self = Retry(total=0, connect=None, read=False, redirect=None, status=None) 11:55:59 method = 'PUT' 11:55:59 url = '/rests/data/ietf-network:networks/network=openroadm-topology/ietf-network-topology:link=ROADM-A1-DEG2-DEG2-TTP-TXRXtoROADM-C1-DEG1-DEG1-TTP-TXRX/org-openroadm-network-topology:OMS-attributes/span' 11:55:59 response = None 11:55:59 error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused') 11:55:59 _pool = 11:55:59 _stacktrace = 11:55:59 11:55:59 def increment( 11:55:59 self, 11:55:59 method: str | None = None, 11:55:59 url: str | None = None, 11:55:59 response: BaseHTTPResponse | None = None, 11:55:59 error: Exception | None = None, 11:55:59 _pool: ConnectionPool | None = None, 11:55:59 _stacktrace: TracebackType | None = None, 11:55:59 ) -> Self: 11:55:59 """Return a new Retry object with incremented retry counters. 11:55:59 11:55:59 :param response: A response object, or None, if the server did not 11:55:59 return a response. 
11:55:59 :type response: :class:`~urllib3.response.BaseHTTPResponse` 11:55:59 :param Exception error: An error encountered during the request, or 11:55:59 None if the response was received successfully. 11:55:59 11:55:59 :return: A new ``Retry`` object. 11:55:59 """ 11:55:59 if self.total is False and error: 11:55:59 # Disabled, indicate to re-raise the error. 11:55:59 raise reraise(type(error), error, _stacktrace) 11:55:59 11:55:59 total = self.total 11:55:59 if total is not None: 11:55:59 total -= 1 11:55:59 11:55:59 connect = self.connect 11:55:59 read = self.read 11:55:59 redirect = self.redirect 11:55:59 status_count = self.status 11:55:59 other = self.other 11:55:59 cause = "unknown" 11:55:59 status = None 11:55:59 redirect_location = None 11:55:59 11:55:59 if error and self._is_connection_error(error): 11:55:59 # Connect retry? 11:55:59 if connect is False: 11:55:59 raise reraise(type(error), error, _stacktrace) 11:55:59 elif connect is not None: 11:55:59 connect -= 1 11:55:59 11:55:59 elif error and self._is_read_error(error): 11:55:59 # Read retry? 11:55:59 if read is False or method is None or not self._is_method_retryable(method): 11:55:59 raise reraise(type(error), error, _stacktrace) 11:55:59 elif read is not None: 11:55:59 read -= 1 11:55:59 11:55:59 elif error: 11:55:59 # Other retry? 11:55:59 if other is not None: 11:55:59 other -= 1 11:55:59 11:55:59 elif response and response.get_redirect_location(): 11:55:59 # Redirect retry? 11:55:59 if redirect is not None: 11:55:59 redirect -= 1 11:55:59 cause = "too many redirects" 11:55:59 response_redirect_location = response.get_redirect_location() 11:55:59 if response_redirect_location: 11:55:59 redirect_location = response_redirect_location 11:55:59 status = response.status 11:55:59 11:55:59 else: 11:55:59 # Incrementing because of a server error like a 500 in 11:55:59 # status_forcelist and the given method is in the allowed_methods 11:55:59 cause = ResponseError.GENERIC_ERROR 11:55:59 if response and response.status: 11:55:59 if status_count is not None: 11:55:59 status_count -= 1 11:55:59 cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status) 11:55:59 status = response.status 11:55:59 11:55:59 history = self.history + ( 11:55:59 RequestHistory(method, url, error, status, redirect_location), 11:55:59 ) 11:55:59 11:55:59 new_retry = self.new( 11:55:59 total=total, 11:55:59 connect=connect, 11:55:59 read=read, 11:55:59 redirect=redirect, 11:55:59 status=status_count, 11:55:59 other=other, 11:55:59 history=history, 11:55:59 ) 11:55:59 11:55:59 if new_retry.is_exhausted(): 11:55:59 reason = error or ResponseError(cause) 11:55:59 > raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type] 11:55:59 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 11:55:59 E urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=8183): Max retries exceeded with url: /rests/data/ietf-network:networks/network=openroadm-topology/ietf-network-topology:link=ROADM-A1-DEG2-DEG2-TTP-TXRXtoROADM-C1-DEG1-DEG1-TTP-TXRX/org-openroadm-network-topology:OMS-attributes/span (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 11:55:59 11:55:59 ../.tox/tests_tapi/lib/python3.11/site-packages/urllib3/util/retry.py:519: MaxRetryError 11:55:59 11:55:59 During handling of the above exception, another exception occurred: 11:55:59 11:55:59 self = 11:55:59 11:55:59 def test_30_add_oms_attributes(self): 11:55:59 # Config ROADMA-ROADMC oms-attributes 11:55:59 data 
= {"span": { 11:55:59 "auto-spanloss": "true", 11:55:59 "spanloss-base": 11.4, 11:55:59 "spanloss-current": 12, 11:55:59 "engineered-spanloss": 12.2, 11:55:59 "link-concatenation": [{ 11:55:59 "SRLG-Id": 0, 11:55:59 "fiber-type": "smf", 11:55:59 "SRLG-length": 100000, 11:55:59 "pmd": 0.5}]}} 11:55:59 > response = test_utils.add_oms_attr_request( 11:55:59 "ROADM-A1-DEG2-DEG2-TTP-TXRXtoROADM-C1-DEG1-DEG1-TTP-TXRX", data) 11:55:59 11:55:59 transportpce_tests/tapi/test01_abstracted_topology.py:457: 11:55:59 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 11:55:59 transportpce_tests/common/test_utils.py:603: in add_oms_attr_request 11:55:59 response = put_request(url2.format('{}', network, link), oms_attr) 11:55:59 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 11:55:59 transportpce_tests/common/test_utils.py:124: in put_request 11:55:59 return requests.request( 11:55:59 ../.tox/tests_tapi/lib/python3.11/site-packages/requests/api.py:59: in request 11:55:59 return session.request(method=method, url=url, **kwargs) 11:55:59 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 11:55:59 ../.tox/tests_tapi/lib/python3.11/site-packages/requests/sessions.py:589: in request 11:55:59 resp = self.send(prep, **send_kwargs) 11:55:59 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 11:55:59 ../.tox/tests_tapi/lib/python3.11/site-packages/requests/sessions.py:703: in send 11:55:59 r = adapter.send(request, **kwargs) 11:55:59 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 11:55:59 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 11:55:59 11:55:59 self = 11:55:59 request = , stream = False 11:55:59 timeout = Timeout(connect=30, read=30, total=None), verify = True, cert = None 11:55:59 proxies = OrderedDict() 11:55:59 11:55:59 def send( 11:55:59 self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 11:55:59 ): 11:55:59 """Sends PreparedRequest object. Returns Response object. 11:55:59 11:55:59 :param request: The :class:`PreparedRequest ` being sent. 11:55:59 :param stream: (optional) Whether to stream the request content. 11:55:59 :param timeout: (optional) How long to wait for the server to send 11:55:59 data before giving up, as a float, or a :ref:`(connect timeout, 11:55:59 read timeout) ` tuple. 11:55:59 :type timeout: float or tuple or urllib3 Timeout object 11:55:59 :param verify: (optional) Either a boolean, in which case it controls whether 11:55:59 we verify the server's TLS certificate, or a string, in which case it 11:55:59 must be a path to a CA bundle to use 11:55:59 :param cert: (optional) Any user-provided SSL certificate to be trusted. 11:55:59 :param proxies: (optional) The proxies dictionary to apply to the request. 
11:55:59 :rtype: requests.Response 11:55:59 """ 11:55:59 11:55:59 try: 11:55:59 conn = self.get_connection_with_tls_context( 11:55:59 request, verify, proxies=proxies, cert=cert 11:55:59 ) 11:55:59 except LocationValueError as e: 11:55:59 raise InvalidURL(e, request=request) 11:55:59 11:55:59 self.cert_verify(conn, request.url, verify, cert) 11:55:59 url = self.request_url(request, proxies) 11:55:59 self.add_headers( 11:55:59 request, 11:55:59 stream=stream, 11:55:59 timeout=timeout, 11:55:59 verify=verify, 11:55:59 cert=cert, 11:55:59 proxies=proxies, 11:55:59 ) 11:55:59 11:55:59 chunked = not (request.body is None or "Content-Length" in request.headers) 11:55:59 11:55:59 if isinstance(timeout, tuple): 11:55:59 try: 11:55:59 connect, read = timeout 11:55:59 timeout = TimeoutSauce(connect=connect, read=read) 11:55:59 except ValueError: 11:55:59 raise ValueError( 11:55:59 f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 11:55:59 f"or a single float to set both timeouts to the same value." 11:55:59 ) 11:55:59 elif isinstance(timeout, TimeoutSauce): 11:55:59 pass 11:55:59 else: 11:55:59 timeout = TimeoutSauce(connect=timeout, read=timeout) 11:55:59 11:55:59 try: 11:55:59 resp = conn.urlopen( 11:55:59 method=request.method, 11:55:59 url=url, 11:55:59 body=request.body, 11:55:59 headers=request.headers, 11:55:59 redirect=False, 11:55:59 assert_same_host=False, 11:55:59 preload_content=False, 11:55:59 decode_content=False, 11:55:59 retries=self.max_retries, 11:55:59 timeout=timeout, 11:55:59 chunked=chunked, 11:55:59 ) 11:55:59 11:55:59 except (ProtocolError, OSError) as err: 11:55:59 raise ConnectionError(err, request=request) 11:55:59 11:55:59 except MaxRetryError as e: 11:55:59 if isinstance(e.reason, ConnectTimeoutError): 11:55:59 # TODO: Remove this in 3.0.0: see #2811 11:55:59 if not isinstance(e.reason, NewConnectionError): 11:55:59 raise ConnectTimeout(e, request=request) 11:55:59 11:55:59 if isinstance(e.reason, ResponseError): 11:55:59 raise RetryError(e, request=request) 11:55:59 11:55:59 if isinstance(e.reason, _ProxyError): 11:55:59 raise ProxyError(e, request=request) 11:55:59 11:55:59 if isinstance(e.reason, _SSLError): 11:55:59 # This branch is for urllib3 v1.22 and later. 11:55:59 raise SSLError(e, request=request) 11:55:59 11:55:59 > raise ConnectionError(e, request=request) 11:55:59 E requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=8183): Max retries exceeded with url: /rests/data/ietf-network:networks/network=openroadm-topology/ietf-network-topology:link=ROADM-A1-DEG2-DEG2-TTP-TXRXtoROADM-C1-DEG1-DEG1-TTP-TXRX/org-openroadm-network-topology:OMS-attributes/span (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 11:55:59 11:55:59 ../.tox/tests_tapi/lib/python3.11/site-packages/requests/adapters.py:700: ConnectionError 11:55:59 ----------------------------- Captured stdout call ----------------------------- 11:55:59 execution of test_30_add_oms_attributes 11:55:59 _____________ TransportTapitesting.test_31_create_OCH_OTU4_service _____________ 11:55:59 11:55:59 self = 11:55:59 11:55:59 def _new_conn(self) -> socket.socket: 11:55:59 """Establish a socket connection and set nodelay settings on it. 11:55:59 11:55:59 :return: New socket connection. 
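# Roughly what _new_conn() reduces to in this run (a stdlib sketch, not
# urllib3's own create_connection helper): a plain TCP connect to the RESTCONF
# port. With nothing listening on localhost:8183, the OS answers with
# ECONNREFUSED, which urllib3 then wraps into the NewConnectionError reported
# below.
import socket

try:
    sock = socket.create_connection(("localhost", 8183), timeout=30)
    sock.close()
except ConnectionRefusedError as exc:
    print(f"RESTCONF endpoint unreachable: {exc}")  # [Errno 111] Connection refused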
11:55:59 """ 11:55:59 try: 11:55:59 > sock = connection.create_connection( 11:55:59 (self._dns_host, self.port), 11:55:59 self.timeout, 11:55:59 source_address=self.source_address, 11:55:59 socket_options=self.socket_options, 11:55:59 ) 11:55:59 11:55:59 ../.tox/tests_tapi/lib/python3.11/site-packages/urllib3/connection.py:198: 11:55:59 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 11:55:59 ../.tox/tests_tapi/lib/python3.11/site-packages/urllib3/util/connection.py:85: in create_connection 11:55:59 raise err 11:55:59 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 11:55:59 11:55:59 address = ('localhost', 8183), timeout = 30, source_address = None 11:55:59 socket_options = [(6, 1, 1)] 11:55:59 11:55:59 def create_connection( 11:55:59 address: tuple[str, int], 11:55:59 timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 11:55:59 source_address: tuple[str, int] | None = None, 11:55:59 socket_options: _TYPE_SOCKET_OPTIONS | None = None, 11:55:59 ) -> socket.socket: 11:55:59 """Connect to *address* and return the socket object. 11:55:59 11:55:59 Convenience function. Connect to *address* (a 2-tuple ``(host, 11:55:59 port)``) and return the socket object. Passing the optional 11:55:59 *timeout* parameter will set the timeout on the socket instance 11:55:59 before attempting to connect. If no *timeout* is supplied, the 11:55:59 global default timeout setting returned by :func:`socket.getdefaulttimeout` 11:55:59 is used. If *source_address* is set it must be a tuple of (host, port) 11:55:59 for the socket to bind as a source address before making the connection. 11:55:59 An host of '' or port 0 tells the OS to use the default. 11:55:59 """ 11:55:59 11:55:59 host, port = address 11:55:59 if host.startswith("["): 11:55:59 host = host.strip("[]") 11:55:59 err = None 11:55:59 11:55:59 # Using the value from allowed_gai_family() in the context of getaddrinfo lets 11:55:59 # us select whether to work with IPv4 DNS records, IPv6 records, or both. 11:55:59 # The original create_connection function always returns all records. 11:55:59 family = allowed_gai_family() 11:55:59 11:55:59 try: 11:55:59 host.encode("idna") 11:55:59 except UnicodeError: 11:55:59 raise LocationParseError(f"'{host}', label empty or too long") from None 11:55:59 11:55:59 for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM): 11:55:59 af, socktype, proto, canonname, sa = res 11:55:59 sock = None 11:55:59 try: 11:55:59 sock = socket.socket(af, socktype, proto) 11:55:59 11:55:59 # If provided, set socket level options before connecting. 
11:55:59 _set_socket_options(sock, socket_options) 11:55:59 11:55:59 if timeout is not _DEFAULT_TIMEOUT: 11:55:59 sock.settimeout(timeout) 11:55:59 if source_address: 11:55:59 sock.bind(source_address) 11:55:59 > sock.connect(sa) 11:55:59 E ConnectionRefusedError: [Errno 111] Connection refused 11:55:59 11:55:59 ../.tox/tests_tapi/lib/python3.11/site-packages/urllib3/util/connection.py:73: ConnectionRefusedError 11:55:59 11:55:59 The above exception was the direct cause of the following exception: 11:55:59 11:55:59 self = 11:55:59 method = 'POST', url = '/rests/operations/org-openroadm-service:service-create' 11:55:59 body = '{"input": {"sdnc-request-header": {"request-id": "request-1", "rpc-action": "service-create", "request-system-id": "a...shelf": "00"}, "index": 0}], "optic-type": "gray"}, "due-date": "2018-06-15T00:00:01Z", "operator-contact": "pw1234"}}' 11:55:59 headers = {'User-Agent': 'python-requests/2.32.4', 'Accept-Encoding': 'gzip, deflate', 'Accept': 'application/json', 'Connection': 'keep-alive', 'Content-Type': 'application/json', 'Content-Length': '1994', 'Authorization': 'Basic YWRtaW46YWRtaW4='} 11:55:59 retries = Retry(total=0, connect=None, read=False, redirect=None, status=None) 11:55:59 redirect = False, assert_same_host = False 11:55:59 timeout = Timeout(connect=30, read=30, total=None), pool_timeout = None 11:55:59 release_conn = False, chunked = False, body_pos = None, preload_content = False 11:55:59 decode_content = False, response_kw = {} 11:55:59 parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/rests/operations/org-openroadm-service:service-create', query=None, fragment=None) 11:55:59 destination_scheme = None, conn = None, release_this_conn = True 11:55:59 http_tunnel_required = False, err = None, clean_exit = False 11:55:59 11:55:59 def urlopen( # type: ignore[override] 11:55:59 self, 11:55:59 method: str, 11:55:59 url: str, 11:55:59 body: _TYPE_BODY | None = None, 11:55:59 headers: typing.Mapping[str, str] | None = None, 11:55:59 retries: Retry | bool | int | None = None, 11:55:59 redirect: bool = True, 11:55:59 assert_same_host: bool = True, 11:55:59 timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 11:55:59 pool_timeout: int | None = None, 11:55:59 release_conn: bool | None = None, 11:55:59 chunked: bool = False, 11:55:59 body_pos: _TYPE_BODY_POSITION | None = None, 11:55:59 preload_content: bool = True, 11:55:59 decode_content: bool = True, 11:55:59 **response_kw: typing.Any, 11:55:59 ) -> BaseHTTPResponse: 11:55:59 """ 11:55:59 Get a connection from the pool and perform an HTTP request. This is the 11:55:59 lowest level call for making a request, so you'll need to specify all 11:55:59 the raw details. 11:55:59 11:55:59 .. note:: 11:55:59 11:55:59 More commonly, it's appropriate to use a convenience method 11:55:59 such as :meth:`request`. 11:55:59 11:55:59 .. note:: 11:55:59 11:55:59 `release_conn` will only behave as expected if 11:55:59 `preload_content=False` because we want to make 11:55:59 `preload_content=False` the default behaviour someday soon without 11:55:59 breaking backwards compatibility. 11:55:59 11:55:59 :param method: 11:55:59 HTTP request method (such as GET, POST, PUT, etc.) 11:55:59 11:55:59 :param url: 11:55:59 The URL to perform the request on. 11:55:59 11:55:59 :param body: 11:55:59 Data to send in the request body, either :class:`str`, :class:`bytes`, 11:55:59 an iterable of :class:`str`/:class:`bytes`, or a file-like object. 
11:55:59 11:55:59 :param headers: 11:55:59 Dictionary of custom headers to send, such as User-Agent, 11:55:59 If-None-Match, etc. If None, pool headers are used. If provided, 11:55:59 these headers completely replace any pool-specific headers. 11:55:59 11:55:59 :param retries: 11:55:59 Configure the number of retries to allow before raising a 11:55:59 :class:`~urllib3.exceptions.MaxRetryError` exception. 11:55:59 11:55:59 If ``None`` (default) will retry 3 times, see ``Retry.DEFAULT``. Pass a 11:55:59 :class:`~urllib3.util.retry.Retry` object for fine-grained control 11:55:59 over different types of retries. 11:55:59 Pass an integer number to retry connection errors that many times, 11:55:59 but no other types of errors. Pass zero to never retry. 11:55:59 11:55:59 If ``False``, then retries are disabled and any exception is raised 11:55:59 immediately. Also, instead of raising a MaxRetryError on redirects, 11:55:59 the redirect response will be returned. 11:55:59 11:55:59 :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. 11:55:59 11:55:59 :param redirect: 11:55:59 If True, automatically handle redirects (status codes 301, 302, 11:55:59 303, 307, 308). Each redirect counts as a retry. Disabling retries 11:55:59 will disable redirect, too. 11:55:59 11:55:59 :param assert_same_host: 11:55:59 If ``True``, will make sure that the host of the pool requests is 11:55:59 consistent else will raise HostChangedError. When ``False``, you can 11:55:59 use the pool on an HTTP proxy and request foreign hosts. 11:55:59 11:55:59 :param timeout: 11:55:59 If specified, overrides the default timeout for this one 11:55:59 request. It may be a float (in seconds) or an instance of 11:55:59 :class:`urllib3.util.Timeout`. 11:55:59 11:55:59 :param pool_timeout: 11:55:59 If set and the pool is set to block=True, then this method will 11:55:59 block for ``pool_timeout`` seconds and raise EmptyPoolError if no 11:55:59 connection is available within the time period. 11:55:59 11:55:59 :param bool preload_content: 11:55:59 If True, the response's body will be preloaded into memory. 11:55:59 11:55:59 :param bool decode_content: 11:55:59 If True, will attempt to decode the body based on the 11:55:59 'content-encoding' header. 11:55:59 11:55:59 :param release_conn: 11:55:59 If False, then the urlopen call will not release the connection 11:55:59 back into the pool once a response is received (but will release if 11:55:59 you read the entire contents of the response such as when 11:55:59 `preload_content=True`). This is useful if you're not preloading 11:55:59 the response's content immediately. You will need to call 11:55:59 ``r.release_conn()`` on the response ``r`` to return the connection 11:55:59 back into the pool. If None, it takes the value of ``preload_content`` 11:55:59 which defaults to ``True``. 11:55:59 11:55:59 :param bool chunked: 11:55:59 If True, urllib3 will send the body using chunked transfer 11:55:59 encoding. Otherwise, urllib3 will send the body using the standard 11:55:59 content-length form. Defaults to False. 11:55:59 11:55:59 :param int body_pos: 11:55:59 Position to seek to in file-like body in the event of a retry or 11:55:59 redirect. Typically this won't need to be set because urllib3 will 11:55:59 auto-populate the value when needed. 
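# Sketch of how the ``retries`` values documented above can be built (assumed
# public urllib3/requests API, not code captured from this run). The
# Retry(total=0, connect=None, read=False, redirect=None, status=None) visible
# in these frames is what requests' HTTPAdapter installs for its default
# max_retries=0: no retry budget, so the first connection error is re-raised
# straight away as MaxRetryError.
from urllib3 import PoolManager
from urllib3.util.retry import Retry

no_retries = Retry(total=0, connect=None, read=False, redirect=None, status=None)
pool = PoolManager(retries=no_retries)   # fail fast, matching this test run
# PoolManager(retries=False)  -> never retry, return redirect responses as-is
# PoolManager(retries=3)      -> retry connection errors up to three times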
11:55:59 """ 11:55:59 parsed_url = parse_url(url) 11:55:59 destination_scheme = parsed_url.scheme 11:55:59 11:55:59 if headers is None: 11:55:59 headers = self.headers 11:55:59 11:55:59 if not isinstance(retries, Retry): 11:55:59 retries = Retry.from_int(retries, redirect=redirect, default=self.retries) 11:55:59 11:55:59 if release_conn is None: 11:55:59 release_conn = preload_content 11:55:59 11:55:59 # Check host 11:55:59 if assert_same_host and not self.is_same_host(url): 11:55:59 raise HostChangedError(self, url, retries) 11:55:59 11:55:59 # Ensure that the URL we're connecting to is properly encoded 11:55:59 if url.startswith("/"): 11:55:59 url = to_str(_encode_target(url)) 11:55:59 else: 11:55:59 url = to_str(parsed_url.url) 11:55:59 11:55:59 conn = None 11:55:59 11:55:59 # Track whether `conn` needs to be released before 11:55:59 # returning/raising/recursing. Update this variable if necessary, and 11:55:59 # leave `release_conn` constant throughout the function. That way, if 11:55:59 # the function recurses, the original value of `release_conn` will be 11:55:59 # passed down into the recursive call, and its value will be respected. 11:55:59 # 11:55:59 # See issue #651 [1] for details. 11:55:59 # 11:55:59 # [1] 11:55:59 release_this_conn = release_conn 11:55:59 11:55:59 http_tunnel_required = connection_requires_http_tunnel( 11:55:59 self.proxy, self.proxy_config, destination_scheme 11:55:59 ) 11:55:59 11:55:59 # Merge the proxy headers. Only done when not using HTTP CONNECT. We 11:55:59 # have to copy the headers dict so we can safely change it without those 11:55:59 # changes being reflected in anyone else's copy. 11:55:59 if not http_tunnel_required: 11:55:59 headers = headers.copy() # type: ignore[attr-defined] 11:55:59 headers.update(self.proxy_headers) # type: ignore[union-attr] 11:55:59 11:55:59 # Must keep the exception bound to a separate variable or else Python 3 11:55:59 # complains about UnboundLocalError. 11:55:59 err = None 11:55:59 11:55:59 # Keep track of whether we cleanly exited the except block. This 11:55:59 # ensures we do proper cleanup in finally. 11:55:59 clean_exit = False 11:55:59 11:55:59 # Rewind body position, if needed. Record current position 11:55:59 # for future rewinds in the event of a redirect/retry. 11:55:59 body_pos = set_file_position(body, body_pos) 11:55:59 11:55:59 try: 11:55:59 # Request a connection from the queue. 11:55:59 timeout_obj = self._get_timeout(timeout) 11:55:59 conn = self._get_conn(timeout=pool_timeout) 11:55:59 11:55:59 conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] 11:55:59 11:55:59 # Is this a closed/new connection that requires CONNECT tunnelling? 11:55:59 if self.proxy is not None and http_tunnel_required and conn.is_closed: 11:55:59 try: 11:55:59 self._prepare_proxy(conn) 11:55:59 except (BaseSSLError, OSError, SocketTimeout) as e: 11:55:59 self._raise_timeout( 11:55:59 err=e, url=self.proxy.url, timeout_value=conn.timeout 11:55:59 ) 11:55:59 raise 11:55:59 11:55:59 # If we're going to release the connection in ``finally:``, then 11:55:59 # the response doesn't need to know about the connection. Otherwise 11:55:59 # it will also try to release it and we'll have a double-release 11:55:59 # mess. 
11:55:59 response_conn = conn if not release_conn else None 11:55:59 11:55:59 # Make the request on the HTTPConnection object 11:55:59 > response = self._make_request( 11:55:59 conn, 11:55:59 method, 11:55:59 url, 11:55:59 timeout=timeout_obj, 11:55:59 body=body, 11:55:59 headers=headers, 11:55:59 chunked=chunked, 11:55:59 retries=retries, 11:55:59 response_conn=response_conn, 11:55:59 preload_content=preload_content, 11:55:59 decode_content=decode_content, 11:55:59 **response_kw, 11:55:59 ) 11:55:59 11:55:59 ../.tox/tests_tapi/lib/python3.11/site-packages/urllib3/connectionpool.py:787: 11:55:59 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 11:55:59 ../.tox/tests_tapi/lib/python3.11/site-packages/urllib3/connectionpool.py:493: in _make_request 11:55:59 conn.request( 11:55:59 ../.tox/tests_tapi/lib/python3.11/site-packages/urllib3/connection.py:494: in request 11:55:59 self.endheaders() 11:55:59 /opt/pyenv/versions/3.11.10/lib/python3.11/http/client.py:1298: in endheaders 11:55:59 self._send_output(message_body, encode_chunked=encode_chunked) 11:55:59 /opt/pyenv/versions/3.11.10/lib/python3.11/http/client.py:1058: in _send_output 11:55:59 self.send(msg) 11:55:59 /opt/pyenv/versions/3.11.10/lib/python3.11/http/client.py:996: in send 11:55:59 self.connect() 11:55:59 ../.tox/tests_tapi/lib/python3.11/site-packages/urllib3/connection.py:325: in connect 11:55:59 self.sock = self._new_conn() 11:55:59 ^^^^^^^^^^^^^^^^ 11:55:59 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 11:55:59 11:55:59 self = 11:55:59 11:55:59 def _new_conn(self) -> socket.socket: 11:55:59 """Establish a socket connection and set nodelay settings on it. 11:55:59 11:55:59 :return: New socket connection. 11:55:59 """ 11:55:59 try: 11:55:59 sock = connection.create_connection( 11:55:59 (self._dns_host, self.port), 11:55:59 self.timeout, 11:55:59 source_address=self.source_address, 11:55:59 socket_options=self.socket_options, 11:55:59 ) 11:55:59 except socket.gaierror as e: 11:55:59 raise NameResolutionError(self.host, self, e) from e 11:55:59 except SocketTimeout as e: 11:55:59 raise ConnectTimeoutError( 11:55:59 self, 11:55:59 f"Connection to {self.host} timed out. (connect timeout={self.timeout})", 11:55:59 ) from e 11:55:59 11:55:59 except OSError as e: 11:55:59 > raise NewConnectionError( 11:55:59 self, f"Failed to establish a new connection: {e}" 11:55:59 ) from e 11:55:59 E urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused 11:55:59 11:55:59 ../.tox/tests_tapi/lib/python3.11/site-packages/urllib3/connection.py:213: NewConnectionError 11:55:59 11:55:59 The above exception was the direct cause of the following exception: 11:55:59 11:55:59 self = 11:55:59 request = , stream = False 11:55:59 timeout = Timeout(connect=30, read=30, total=None), verify = True, cert = None 11:55:59 proxies = OrderedDict() 11:55:59 11:55:59 def send( 11:55:59 self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 11:55:59 ): 11:55:59 """Sends PreparedRequest object. Returns Response object. 11:55:59 11:55:59 :param request: The :class:`PreparedRequest ` being sent. 11:55:59 :param stream: (optional) Whether to stream the request content. 11:55:59 :param timeout: (optional) How long to wait for the server to send 11:55:59 data before giving up, as a float, or a :ref:`(connect timeout, 11:55:59 read timeout) ` tuple. 
11:55:59 :type timeout: float or tuple or urllib3 Timeout object 11:55:59 :param verify: (optional) Either a boolean, in which case it controls whether 11:55:59 we verify the server's TLS certificate, or a string, in which case it 11:55:59 must be a path to a CA bundle to use 11:55:59 :param cert: (optional) Any user-provided SSL certificate to be trusted. 11:55:59 :param proxies: (optional) The proxies dictionary to apply to the request. 11:55:59 :rtype: requests.Response 11:55:59 """ 11:55:59 11:55:59 try: 11:55:59 conn = self.get_connection_with_tls_context( 11:55:59 request, verify, proxies=proxies, cert=cert 11:55:59 ) 11:55:59 except LocationValueError as e: 11:55:59 raise InvalidURL(e, request=request) 11:55:59 11:55:59 self.cert_verify(conn, request.url, verify, cert) 11:55:59 url = self.request_url(request, proxies) 11:55:59 self.add_headers( 11:55:59 request, 11:55:59 stream=stream, 11:55:59 timeout=timeout, 11:55:59 verify=verify, 11:55:59 cert=cert, 11:55:59 proxies=proxies, 11:55:59 ) 11:55:59 11:55:59 chunked = not (request.body is None or "Content-Length" in request.headers) 11:55:59 11:55:59 if isinstance(timeout, tuple): 11:55:59 try: 11:55:59 connect, read = timeout 11:55:59 timeout = TimeoutSauce(connect=connect, read=read) 11:55:59 except ValueError: 11:55:59 raise ValueError( 11:55:59 f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 11:55:59 f"or a single float to set both timeouts to the same value." 11:55:59 ) 11:55:59 elif isinstance(timeout, TimeoutSauce): 11:55:59 pass 11:55:59 else: 11:55:59 timeout = TimeoutSauce(connect=timeout, read=timeout) 11:55:59 11:55:59 try: 11:55:59 > resp = conn.urlopen( 11:55:59 method=request.method, 11:55:59 url=url, 11:55:59 body=request.body, 11:55:59 headers=request.headers, 11:55:59 redirect=False, 11:55:59 assert_same_host=False, 11:55:59 preload_content=False, 11:55:59 decode_content=False, 11:55:59 retries=self.max_retries, 11:55:59 timeout=timeout, 11:55:59 chunked=chunked, 11:55:59 ) 11:55:59 11:55:59 ../.tox/tests_tapi/lib/python3.11/site-packages/requests/adapters.py:667: 11:55:59 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 11:55:59 ../.tox/tests_tapi/lib/python3.11/site-packages/urllib3/connectionpool.py:841: in urlopen 11:55:59 retries = retries.increment( 11:55:59 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 11:55:59 11:55:59 self = Retry(total=0, connect=None, read=False, redirect=None, status=None) 11:55:59 method = 'POST', url = '/rests/operations/org-openroadm-service:service-create' 11:55:59 response = None 11:55:59 error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused') 11:55:59 _pool = 11:55:59 _stacktrace = 11:55:59 11:55:59 def increment( 11:55:59 self, 11:55:59 method: str | None = None, 11:55:59 url: str | None = None, 11:55:59 response: BaseHTTPResponse | None = None, 11:55:59 error: Exception | None = None, 11:55:59 _pool: ConnectionPool | None = None, 11:55:59 _stacktrace: TracebackType | None = None, 11:55:59 ) -> Self: 11:55:59 """Return a new Retry object with incremented retry counters. 11:55:59 11:55:59 :param response: A response object, or None, if the server did not 11:55:59 return a response. 11:55:59 :type response: :class:`~urllib3.response.BaseHTTPResponse` 11:55:59 :param Exception error: An error encountered during the request, or 11:55:59 None if the response was received successfully. 11:55:59 11:55:59 :return: A new ``Retry`` object. 
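# Sketch of the bookkeeping increment() performs with the policy seen here
# (assumed urllib3 v2 public API; the NewConnectionError arguments are
# placeholders): with total=0 there is no budget left after a single
# connection error, so the freshly created Retry is exhausted and
# MaxRetryError is raised instead of retrying.
from urllib3.exceptions import MaxRetryError, NewConnectionError
from urllib3.util.retry import Retry

policy = Retry(total=0, connect=None, read=False, redirect=None, status=None)
try:
    policy.increment(
        method="POST",
        url="/rests/operations/org-openroadm-service:service-create",
        error=NewConnectionError(None, "Failed to establish a new connection"),
    )
except MaxRetryError as exc:
    print(exc)  # Max retries exceeded with url: /rests/operations/...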
11:55:59 """ 11:55:59 if self.total is False and error: 11:55:59 # Disabled, indicate to re-raise the error. 11:55:59 raise reraise(type(error), error, _stacktrace) 11:55:59 11:55:59 total = self.total 11:55:59 if total is not None: 11:55:59 total -= 1 11:55:59 11:55:59 connect = self.connect 11:55:59 read = self.read 11:55:59 redirect = self.redirect 11:55:59 status_count = self.status 11:55:59 other = self.other 11:55:59 cause = "unknown" 11:55:59 status = None 11:55:59 redirect_location = None 11:55:59 11:55:59 if error and self._is_connection_error(error): 11:55:59 # Connect retry? 11:55:59 if connect is False: 11:55:59 raise reraise(type(error), error, _stacktrace) 11:55:59 elif connect is not None: 11:55:59 connect -= 1 11:55:59 11:55:59 elif error and self._is_read_error(error): 11:55:59 # Read retry? 11:55:59 if read is False or method is None or not self._is_method_retryable(method): 11:55:59 raise reraise(type(error), error, _stacktrace) 11:55:59 elif read is not None: 11:55:59 read -= 1 11:55:59 11:55:59 elif error: 11:55:59 # Other retry? 11:55:59 if other is not None: 11:55:59 other -= 1 11:55:59 11:55:59 elif response and response.get_redirect_location(): 11:55:59 # Redirect retry? 11:55:59 if redirect is not None: 11:55:59 redirect -= 1 11:55:59 cause = "too many redirects" 11:55:59 response_redirect_location = response.get_redirect_location() 11:55:59 if response_redirect_location: 11:55:59 redirect_location = response_redirect_location 11:55:59 status = response.status 11:55:59 11:55:59 else: 11:55:59 # Incrementing because of a server error like a 500 in 11:55:59 # status_forcelist and the given method is in the allowed_methods 11:55:59 cause = ResponseError.GENERIC_ERROR 11:55:59 if response and response.status: 11:55:59 if status_count is not None: 11:55:59 status_count -= 1 11:55:59 cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status) 11:55:59 status = response.status 11:55:59 11:55:59 history = self.history + ( 11:55:59 RequestHistory(method, url, error, status, redirect_location), 11:55:59 ) 11:55:59 11:55:59 new_retry = self.new( 11:55:59 total=total, 11:55:59 connect=connect, 11:55:59 read=read, 11:55:59 redirect=redirect, 11:55:59 status=status_count, 11:55:59 other=other, 11:55:59 history=history, 11:55:59 ) 11:55:59 11:55:59 if new_retry.is_exhausted(): 11:55:59 reason = error or ResponseError(cause) 11:55:59 > raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type] 11:55:59 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 11:55:59 E urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=8183): Max retries exceeded with url: /rests/operations/org-openroadm-service:service-create (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 11:55:59 11:55:59 ../.tox/tests_tapi/lib/python3.11/site-packages/urllib3/util/retry.py:519: MaxRetryError 11:55:59 11:55:59 During handling of the above exception, another exception occurred: 11:55:59 11:55:59 self = 11:55:59 11:55:59 def test_31_create_OCH_OTU4_service(self): 11:55:59 > response = test_utils.transportpce_api_rpc_request( 11:55:59 'org-openroadm-service', 'service-create', 11:55:59 self.cr_serv_input_data) 11:55:59 11:55:59 transportpce_tests/tapi/test01_abstracted_topology.py:476: 11:55:59 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 11:55:59 transportpce_tests/common/test_utils.py:729: in transportpce_api_rpc_request 11:55:59 response = post_request(url, data) 
11:55:59 ^^^^^^^^^^^^^^^^^^^^^^^ 11:55:59 transportpce_tests/common/test_utils.py:142: in post_request 11:55:59 return requests.request( 11:55:59 ../.tox/tests_tapi/lib/python3.11/site-packages/requests/api.py:59: in request 11:55:59 return session.request(method=method, url=url, **kwargs) 11:55:59 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 11:55:59 ../.tox/tests_tapi/lib/python3.11/site-packages/requests/sessions.py:589: in request 11:55:59 resp = self.send(prep, **send_kwargs) 11:55:59 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 11:55:59 ../.tox/tests_tapi/lib/python3.11/site-packages/requests/sessions.py:703: in send 11:55:59 r = adapter.send(request, **kwargs) 11:55:59 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 11:55:59 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 11:55:59 11:55:59 self = 11:55:59 request = , stream = False 11:55:59 timeout = Timeout(connect=30, read=30, total=None), verify = True, cert = None 11:55:59 proxies = OrderedDict() 11:55:59 11:55:59 def send( 11:55:59 self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 11:55:59 ): 11:55:59 """Sends PreparedRequest object. Returns Response object. 11:55:59 11:55:59 :param request: The :class:`PreparedRequest ` being sent. 11:55:59 :param stream: (optional) Whether to stream the request content. 11:55:59 :param timeout: (optional) How long to wait for the server to send 11:55:59 data before giving up, as a float, or a :ref:`(connect timeout, 11:55:59 read timeout) ` tuple. 11:55:59 :type timeout: float or tuple or urllib3 Timeout object 11:55:59 :param verify: (optional) Either a boolean, in which case it controls whether 11:55:59 we verify the server's TLS certificate, or a string, in which case it 11:55:59 must be a path to a CA bundle to use 11:55:59 :param cert: (optional) Any user-provided SSL certificate to be trusted. 11:55:59 :param proxies: (optional) The proxies dictionary to apply to the request. 11:55:59 :rtype: requests.Response 11:55:59 """ 11:55:59 11:55:59 try: 11:55:59 conn = self.get_connection_with_tls_context( 11:55:59 request, verify, proxies=proxies, cert=cert 11:55:59 ) 11:55:59 except LocationValueError as e: 11:55:59 raise InvalidURL(e, request=request) 11:55:59 11:55:59 self.cert_verify(conn, request.url, verify, cert) 11:55:59 url = self.request_url(request, proxies) 11:55:59 self.add_headers( 11:55:59 request, 11:55:59 stream=stream, 11:55:59 timeout=timeout, 11:55:59 verify=verify, 11:55:59 cert=cert, 11:55:59 proxies=proxies, 11:55:59 ) 11:55:59 11:55:59 chunked = not (request.body is None or "Content-Length" in request.headers) 11:55:59 11:55:59 if isinstance(timeout, tuple): 11:55:59 try: 11:55:59 connect, read = timeout 11:55:59 timeout = TimeoutSauce(connect=connect, read=read) 11:55:59 except ValueError: 11:55:59 raise ValueError( 11:55:59 f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 11:55:59 f"or a single float to set both timeouts to the same value." 
11:55:59 ) 11:55:59 elif isinstance(timeout, TimeoutSauce): 11:55:59 pass 11:55:59 else: 11:55:59 timeout = TimeoutSauce(connect=timeout, read=timeout) 11:55:59 11:55:59 try: 11:55:59 resp = conn.urlopen( 11:55:59 method=request.method, 11:55:59 url=url, 11:55:59 body=request.body, 11:55:59 headers=request.headers, 11:55:59 redirect=False, 11:55:59 assert_same_host=False, 11:55:59 preload_content=False, 11:55:59 decode_content=False, 11:55:59 retries=self.max_retries, 11:55:59 timeout=timeout, 11:55:59 chunked=chunked, 11:55:59 ) 11:55:59 11:55:59 except (ProtocolError, OSError) as err: 11:55:59 raise ConnectionError(err, request=request) 11:55:59 11:55:59 except MaxRetryError as e: 11:55:59 if isinstance(e.reason, ConnectTimeoutError): 11:55:59 # TODO: Remove this in 3.0.0: see #2811 11:55:59 if not isinstance(e.reason, NewConnectionError): 11:55:59 raise ConnectTimeout(e, request=request) 11:55:59 11:55:59 if isinstance(e.reason, ResponseError): 11:55:59 raise RetryError(e, request=request) 11:55:59 11:55:59 if isinstance(e.reason, _ProxyError): 11:55:59 raise ProxyError(e, request=request) 11:55:59 11:55:59 if isinstance(e.reason, _SSLError): 11:55:59 # This branch is for urllib3 v1.22 and later. 11:55:59 raise SSLError(e, request=request) 11:55:59 11:55:59 > raise ConnectionError(e, request=request) 11:55:59 E requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=8183): Max retries exceeded with url: /rests/operations/org-openroadm-service:service-create (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 11:55:59 11:55:59 ../.tox/tests_tapi/lib/python3.11/site-packages/requests/adapters.py:700: ConnectionError 11:55:59 ----------------------------- Captured stdout call ----------------------------- 11:55:59 execution of test_31_create_OCH_OTU4_service 11:55:59 _____________ TransportTapitesting.test_32_check_tapi_topology_T0 ______________ 11:55:59 11:55:59 self = 11:55:59 11:55:59 def _new_conn(self) -> socket.socket: 11:55:59 """Establish a socket connection and set nodelay settings on it. 11:55:59 11:55:59 :return: New socket connection. 11:55:59 """ 11:55:59 try: 11:55:59 > sock = connection.create_connection( 11:55:59 (self._dns_host, self.port), 11:55:59 self.timeout, 11:55:59 source_address=self.source_address, 11:55:59 socket_options=self.socket_options, 11:55:59 ) 11:55:59 11:55:59 ../.tox/tests_tapi/lib/python3.11/site-packages/urllib3/connection.py:198: 11:55:59 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 11:55:59 ../.tox/tests_tapi/lib/python3.11/site-packages/urllib3/util/connection.py:85: in create_connection 11:55:59 raise err 11:55:59 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 11:55:59 11:55:59 address = ('localhost', 8183), timeout = 30, source_address = None 11:55:59 socket_options = [(6, 1, 1)] 11:55:59 11:55:59 def create_connection( 11:55:59 address: tuple[str, int], 11:55:59 timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 11:55:59 source_address: tuple[str, int] | None = None, 11:55:59 socket_options: _TYPE_SOCKET_OPTIONS | None = None, 11:55:59 ) -> socket.socket: 11:55:59 """Connect to *address* and return the socket object. 11:55:59 11:55:59 Convenience function. Connect to *address* (a 2-tuple ``(host, 11:55:59 port)``) and return the socket object. Passing the optional 11:55:59 *timeout* parameter will set the timeout on the socket instance 11:55:59 before attempting to connect. 
If no *timeout* is supplied, the 11:55:59 global default timeout setting returned by :func:`socket.getdefaulttimeout` 11:55:59 is used. If *source_address* is set it must be a tuple of (host, port) 11:55:59 for the socket to bind as a source address before making the connection. 11:55:59 An host of '' or port 0 tells the OS to use the default. 11:55:59 """ 11:55:59 11:55:59 host, port = address 11:55:59 if host.startswith("["): 11:55:59 host = host.strip("[]") 11:55:59 err = None 11:55:59 11:55:59 # Using the value from allowed_gai_family() in the context of getaddrinfo lets 11:55:59 # us select whether to work with IPv4 DNS records, IPv6 records, or both. 11:55:59 # The original create_connection function always returns all records. 11:55:59 family = allowed_gai_family() 11:55:59 11:55:59 try: 11:55:59 host.encode("idna") 11:55:59 except UnicodeError: 11:55:59 raise LocationParseError(f"'{host}', label empty or too long") from None 11:55:59 11:55:59 for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM): 11:55:59 af, socktype, proto, canonname, sa = res 11:55:59 sock = None 11:55:59 try: 11:55:59 sock = socket.socket(af, socktype, proto) 11:55:59 11:55:59 # If provided, set socket level options before connecting. 11:55:59 _set_socket_options(sock, socket_options) 11:55:59 11:55:59 if timeout is not _DEFAULT_TIMEOUT: 11:55:59 sock.settimeout(timeout) 11:55:59 if source_address: 11:55:59 sock.bind(source_address) 11:55:59 > sock.connect(sa) 11:55:59 E ConnectionRefusedError: [Errno 111] Connection refused 11:55:59 11:55:59 ../.tox/tests_tapi/lib/python3.11/site-packages/urllib3/util/connection.py:73: ConnectionRefusedError 11:55:59 11:55:59 The above exception was the direct cause of the following exception: 11:55:59 11:55:59 self = 11:55:59 method = 'POST', url = '/rests/operations/tapi-topology:get-topology-details' 11:55:59 body = '{"input": {"topology-id": "747c670e-7a07-3dab-b379-5b1cd17402a3"}}' 11:55:59 headers = {'User-Agent': 'python-requests/2.32.4', 'Accept-Encoding': 'gzip, deflate', 'Accept': 'application/json', 'Connection': 'keep-alive', 'Content-Type': 'application/json', 'Content-Length': '66', 'Authorization': 'Basic YWRtaW46YWRtaW4='} 11:55:59 retries = Retry(total=0, connect=None, read=False, redirect=None, status=None) 11:55:59 redirect = False, assert_same_host = False 11:55:59 timeout = Timeout(connect=30, read=30, total=None), pool_timeout = None 11:55:59 release_conn = False, chunked = False, body_pos = None, preload_content = False 11:55:59 decode_content = False, response_kw = {} 11:55:59 parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/rests/operations/tapi-topology:get-topology-details', query=None, fragment=None) 11:55:59 destination_scheme = None, conn = None, release_this_conn = True 11:55:59 http_tunnel_required = False, err = None, clean_exit = False 11:55:59 11:55:59 def urlopen( # type: ignore[override] 11:55:59 self, 11:55:59 method: str, 11:55:59 url: str, 11:55:59 body: _TYPE_BODY | None = None, 11:55:59 headers: typing.Mapping[str, str] | None = None, 11:55:59 retries: Retry | bool | int | None = None, 11:55:59 redirect: bool = True, 11:55:59 assert_same_host: bool = True, 11:55:59 timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 11:55:59 pool_timeout: int | None = None, 11:55:59 release_conn: bool | None = None, 11:55:59 chunked: bool = False, 11:55:59 body_pos: _TYPE_BODY_POSITION | None = None, 11:55:59 preload_content: bool = True, 11:55:59 decode_content: bool = True, 11:55:59 **response_kw: typing.Any, 11:55:59 ) -> 
BaseHTTPResponse: 11:55:59 """ 11:55:59 Get a connection from the pool and perform an HTTP request. This is the 11:55:59 lowest level call for making a request, so you'll need to specify all 11:55:59 the raw details. 11:55:59 11:55:59 .. note:: 11:55:59 11:55:59 More commonly, it's appropriate to use a convenience method 11:55:59 such as :meth:`request`. 11:55:59 11:55:59 .. note:: 11:55:59 11:55:59 `release_conn` will only behave as expected if 11:55:59 `preload_content=False` because we want to make 11:55:59 `preload_content=False` the default behaviour someday soon without 11:55:59 breaking backwards compatibility. 11:55:59 11:55:59 :param method: 11:55:59 HTTP request method (such as GET, POST, PUT, etc.) 11:55:59 11:55:59 :param url: 11:55:59 The URL to perform the request on. 11:55:59 11:55:59 :param body: 11:55:59 Data to send in the request body, either :class:`str`, :class:`bytes`, 11:55:59 an iterable of :class:`str`/:class:`bytes`, or a file-like object. 11:55:59 11:55:59 :param headers: 11:55:59 Dictionary of custom headers to send, such as User-Agent, 11:55:59 If-None-Match, etc. If None, pool headers are used. If provided, 11:55:59 these headers completely replace any pool-specific headers. 11:55:59 11:55:59 :param retries: 11:55:59 Configure the number of retries to allow before raising a 11:55:59 :class:`~urllib3.exceptions.MaxRetryError` exception. 11:55:59 11:55:59 If ``None`` (default) will retry 3 times, see ``Retry.DEFAULT``. Pass a 11:55:59 :class:`~urllib3.util.retry.Retry` object for fine-grained control 11:55:59 over different types of retries. 11:55:59 Pass an integer number to retry connection errors that many times, 11:55:59 but no other types of errors. Pass zero to never retry. 11:55:59 11:55:59 If ``False``, then retries are disabled and any exception is raised 11:55:59 immediately. Also, instead of raising a MaxRetryError on redirects, 11:55:59 the redirect response will be returned. 11:55:59 11:55:59 :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. 11:55:59 11:55:59 :param redirect: 11:55:59 If True, automatically handle redirects (status codes 301, 302, 11:55:59 303, 307, 308). Each redirect counts as a retry. Disabling retries 11:55:59 will disable redirect, too. 11:55:59 11:55:59 :param assert_same_host: 11:55:59 If ``True``, will make sure that the host of the pool requests is 11:55:59 consistent else will raise HostChangedError. When ``False``, you can 11:55:59 use the pool on an HTTP proxy and request foreign hosts. 11:55:59 11:55:59 :param timeout: 11:55:59 If specified, overrides the default timeout for this one 11:55:59 request. It may be a float (in seconds) or an instance of 11:55:59 :class:`urllib3.util.Timeout`. 11:55:59 11:55:59 :param pool_timeout: 11:55:59 If set and the pool is set to block=True, then this method will 11:55:59 block for ``pool_timeout`` seconds and raise EmptyPoolError if no 11:55:59 connection is available within the time period. 11:55:59 11:55:59 :param bool preload_content: 11:55:59 If True, the response's body will be preloaded into memory. 11:55:59 11:55:59 :param bool decode_content: 11:55:59 If True, will attempt to decode the body based on the 11:55:59 'content-encoding' header. 11:55:59 11:55:59 :param release_conn: 11:55:59 If False, then the urlopen call will not release the connection 11:55:59 back into the pool once a response is received (but will release if 11:55:59 you read the entire contents of the response such as when 11:55:59 `preload_content=True`). 
This is useful if you're not preloading 11:55:59 the response's content immediately. You will need to call 11:55:59 ``r.release_conn()`` on the response ``r`` to return the connection 11:55:59 back into the pool. If None, it takes the value of ``preload_content`` 11:55:59 which defaults to ``True``. 11:55:59 11:55:59 :param bool chunked: 11:55:59 If True, urllib3 will send the body using chunked transfer 11:55:59 encoding. Otherwise, urllib3 will send the body using the standard 11:55:59 content-length form. Defaults to False. 11:55:59 11:55:59 :param int body_pos: 11:55:59 Position to seek to in file-like body in the event of a retry or 11:55:59 redirect. Typically this won't need to be set because urllib3 will 11:55:59 auto-populate the value when needed. 11:55:59 """ 11:55:59 parsed_url = parse_url(url) 11:55:59 destination_scheme = parsed_url.scheme 11:55:59 11:55:59 if headers is None: 11:55:59 headers = self.headers 11:55:59 11:55:59 if not isinstance(retries, Retry): 11:55:59 retries = Retry.from_int(retries, redirect=redirect, default=self.retries) 11:55:59 11:55:59 if release_conn is None: 11:55:59 release_conn = preload_content 11:55:59 11:55:59 # Check host 11:55:59 if assert_same_host and not self.is_same_host(url): 11:55:59 raise HostChangedError(self, url, retries) 11:55:59 11:55:59 # Ensure that the URL we're connecting to is properly encoded 11:55:59 if url.startswith("/"): 11:55:59 url = to_str(_encode_target(url)) 11:55:59 else: 11:55:59 url = to_str(parsed_url.url) 11:55:59 11:55:59 conn = None 11:55:59 11:55:59 # Track whether `conn` needs to be released before 11:55:59 # returning/raising/recursing. Update this variable if necessary, and 11:55:59 # leave `release_conn` constant throughout the function. That way, if 11:55:59 # the function recurses, the original value of `release_conn` will be 11:55:59 # passed down into the recursive call, and its value will be respected. 11:55:59 # 11:55:59 # See issue #651 [1] for details. 11:55:59 # 11:55:59 # [1] 11:55:59 release_this_conn = release_conn 11:55:59 11:55:59 http_tunnel_required = connection_requires_http_tunnel( 11:55:59 self.proxy, self.proxy_config, destination_scheme 11:55:59 ) 11:55:59 11:55:59 # Merge the proxy headers. Only done when not using HTTP CONNECT. We 11:55:59 # have to copy the headers dict so we can safely change it without those 11:55:59 # changes being reflected in anyone else's copy. 11:55:59 if not http_tunnel_required: 11:55:59 headers = headers.copy() # type: ignore[attr-defined] 11:55:59 headers.update(self.proxy_headers) # type: ignore[union-attr] 11:55:59 11:55:59 # Must keep the exception bound to a separate variable or else Python 3 11:55:59 # complains about UnboundLocalError. 11:55:59 err = None 11:55:59 11:55:59 # Keep track of whether we cleanly exited the except block. This 11:55:59 # ensures we do proper cleanup in finally. 11:55:59 clean_exit = False 11:55:59 11:55:59 # Rewind body position, if needed. Record current position 11:55:59 # for future rewinds in the event of a redirect/retry. 11:55:59 body_pos = set_file_position(body, body_pos) 11:55:59 11:55:59 try: 11:55:59 # Request a connection from the queue. 11:55:59 timeout_obj = self._get_timeout(timeout) 11:55:59 conn = self._get_conn(timeout=pool_timeout) 11:55:59 11:55:59 conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] 11:55:59 11:55:59 # Is this a closed/new connection that requires CONNECT tunnelling? 
11:55:59 if self.proxy is not None and http_tunnel_required and conn.is_closed: 11:55:59 try: 11:55:59 self._prepare_proxy(conn) 11:55:59 except (BaseSSLError, OSError, SocketTimeout) as e: 11:55:59 self._raise_timeout( 11:55:59 err=e, url=self.proxy.url, timeout_value=conn.timeout 11:55:59 ) 11:55:59 raise 11:55:59 11:55:59 # If we're going to release the connection in ``finally:``, then 11:55:59 # the response doesn't need to know about the connection. Otherwise 11:55:59 # it will also try to release it and we'll have a double-release 11:55:59 # mess. 11:55:59 response_conn = conn if not release_conn else None 11:55:59 11:55:59 # Make the request on the HTTPConnection object 11:55:59 > response = self._make_request( 11:55:59 conn, 11:55:59 method, 11:55:59 url, 11:55:59 timeout=timeout_obj, 11:55:59 body=body, 11:55:59 headers=headers, 11:55:59 chunked=chunked, 11:55:59 retries=retries, 11:55:59 response_conn=response_conn, 11:55:59 preload_content=preload_content, 11:55:59 decode_content=decode_content, 11:55:59 **response_kw, 11:55:59 ) 11:55:59 11:55:59 ../.tox/tests_tapi/lib/python3.11/site-packages/urllib3/connectionpool.py:787: 11:55:59 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 11:55:59 ../.tox/tests_tapi/lib/python3.11/site-packages/urllib3/connectionpool.py:493: in _make_request 11:55:59 conn.request( 11:55:59 ../.tox/tests_tapi/lib/python3.11/site-packages/urllib3/connection.py:494: in request 11:55:59 self.endheaders() 11:55:59 /opt/pyenv/versions/3.11.10/lib/python3.11/http/client.py:1298: in endheaders 11:55:59 self._send_output(message_body, encode_chunked=encode_chunked) 11:55:59 /opt/pyenv/versions/3.11.10/lib/python3.11/http/client.py:1058: in _send_output 11:55:59 self.send(msg) 11:55:59 /opt/pyenv/versions/3.11.10/lib/python3.11/http/client.py:996: in send 11:55:59 self.connect() 11:55:59 ../.tox/tests_tapi/lib/python3.11/site-packages/urllib3/connection.py:325: in connect 11:55:59 self.sock = self._new_conn() 11:55:59 ^^^^^^^^^^^^^^^^ 11:55:59 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 11:55:59 11:55:59 self = 11:55:59 11:55:59 def _new_conn(self) -> socket.socket: 11:55:59 """Establish a socket connection and set nodelay settings on it. 11:55:59 11:55:59 :return: New socket connection. 11:55:59 """ 11:55:59 try: 11:55:59 sock = connection.create_connection( 11:55:59 (self._dns_host, self.port), 11:55:59 self.timeout, 11:55:59 source_address=self.source_address, 11:55:59 socket_options=self.socket_options, 11:55:59 ) 11:55:59 except socket.gaierror as e: 11:55:59 raise NameResolutionError(self.host, self, e) from e 11:55:59 except SocketTimeout as e: 11:55:59 raise ConnectTimeoutError( 11:55:59 self, 11:55:59 f"Connection to {self.host} timed out. 
(connect timeout={self.timeout})", 11:55:59 ) from e 11:55:59 11:55:59 except OSError as e: 11:55:59 > raise NewConnectionError( 11:55:59 self, f"Failed to establish a new connection: {e}" 11:55:59 ) from e 11:55:59 E urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused 11:55:59 11:55:59 ../.tox/tests_tapi/lib/python3.11/site-packages/urllib3/connection.py:213: NewConnectionError 11:55:59 11:55:59 The above exception was the direct cause of the following exception: 11:55:59 11:55:59 self = 11:55:59 request = , stream = False 11:55:59 timeout = Timeout(connect=30, read=30, total=None), verify = True, cert = None 11:55:59 proxies = OrderedDict() 11:55:59 11:55:59 def send( 11:55:59 self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 11:55:59 ): 11:55:59 """Sends PreparedRequest object. Returns Response object. 11:55:59 11:55:59 :param request: The :class:`PreparedRequest ` being sent. 11:55:59 :param stream: (optional) Whether to stream the request content. 11:55:59 :param timeout: (optional) How long to wait for the server to send 11:55:59 data before giving up, as a float, or a :ref:`(connect timeout, 11:55:59 read timeout) ` tuple. 11:55:59 :type timeout: float or tuple or urllib3 Timeout object 11:55:59 :param verify: (optional) Either a boolean, in which case it controls whether 11:55:59 we verify the server's TLS certificate, or a string, in which case it 11:55:59 must be a path to a CA bundle to use 11:55:59 :param cert: (optional) Any user-provided SSL certificate to be trusted. 11:55:59 :param proxies: (optional) The proxies dictionary to apply to the request. 11:55:59 :rtype: requests.Response 11:55:59 """ 11:55:59 11:55:59 try: 11:55:59 conn = self.get_connection_with_tls_context( 11:55:59 request, verify, proxies=proxies, cert=cert 11:55:59 ) 11:55:59 except LocationValueError as e: 11:55:59 raise InvalidURL(e, request=request) 11:55:59 11:55:59 self.cert_verify(conn, request.url, verify, cert) 11:55:59 url = self.request_url(request, proxies) 11:55:59 self.add_headers( 11:55:59 request, 11:55:59 stream=stream, 11:55:59 timeout=timeout, 11:55:59 verify=verify, 11:55:59 cert=cert, 11:55:59 proxies=proxies, 11:55:59 ) 11:55:59 11:55:59 chunked = not (request.body is None or "Content-Length" in request.headers) 11:55:59 11:55:59 if isinstance(timeout, tuple): 11:55:59 try: 11:55:59 connect, read = timeout 11:55:59 timeout = TimeoutSauce(connect=connect, read=read) 11:55:59 except ValueError: 11:55:59 raise ValueError( 11:55:59 f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 11:55:59 f"or a single float to set both timeouts to the same value." 
11:55:59 ) 11:55:59 elif isinstance(timeout, TimeoutSauce): 11:55:59 pass 11:55:59 else: 11:55:59 timeout = TimeoutSauce(connect=timeout, read=timeout) 11:55:59 11:55:59 try: 11:55:59 > resp = conn.urlopen( 11:55:59 method=request.method, 11:55:59 url=url, 11:55:59 body=request.body, 11:55:59 headers=request.headers, 11:55:59 redirect=False, 11:55:59 assert_same_host=False, 11:55:59 preload_content=False, 11:55:59 decode_content=False, 11:55:59 retries=self.max_retries, 11:55:59 timeout=timeout, 11:55:59 chunked=chunked, 11:55:59 ) 11:55:59 11:55:59 ../.tox/tests_tapi/lib/python3.11/site-packages/requests/adapters.py:667: 11:55:59 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 11:55:59 ../.tox/tests_tapi/lib/python3.11/site-packages/urllib3/connectionpool.py:841: in urlopen 11:55:59 retries = retries.increment( 11:55:59 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 11:55:59 11:55:59 self = Retry(total=0, connect=None, read=False, redirect=None, status=None) 11:55:59 method = 'POST', url = '/rests/operations/tapi-topology:get-topology-details' 11:55:59 response = None 11:55:59 error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused') 11:55:59 _pool = 11:55:59 _stacktrace = 11:55:59 11:55:59 def increment( 11:55:59 self, 11:55:59 method: str | None = None, 11:55:59 url: str | None = None, 11:55:59 response: BaseHTTPResponse | None = None, 11:55:59 error: Exception | None = None, 11:55:59 _pool: ConnectionPool | None = None, 11:55:59 _stacktrace: TracebackType | None = None, 11:55:59 ) -> Self: 11:55:59 """Return a new Retry object with incremented retry counters. 11:55:59 11:55:59 :param response: A response object, or None, if the server did not 11:55:59 return a response. 11:55:59 :type response: :class:`~urllib3.response.BaseHTTPResponse` 11:55:59 :param Exception error: An error encountered during the request, or 11:55:59 None if the response was received successfully. 11:55:59 11:55:59 :return: A new ``Retry`` object. 11:55:59 """ 11:55:59 if self.total is False and error: 11:55:59 # Disabled, indicate to re-raise the error. 11:55:59 raise reraise(type(error), error, _stacktrace) 11:55:59 11:55:59 total = self.total 11:55:59 if total is not None: 11:55:59 total -= 1 11:55:59 11:55:59 connect = self.connect 11:55:59 read = self.read 11:55:59 redirect = self.redirect 11:55:59 status_count = self.status 11:55:59 other = self.other 11:55:59 cause = "unknown" 11:55:59 status = None 11:55:59 redirect_location = None 11:55:59 11:55:59 if error and self._is_connection_error(error): 11:55:59 # Connect retry? 11:55:59 if connect is False: 11:55:59 raise reraise(type(error), error, _stacktrace) 11:55:59 elif connect is not None: 11:55:59 connect -= 1 11:55:59 11:55:59 elif error and self._is_read_error(error): 11:55:59 # Read retry? 11:55:59 if read is False or method is None or not self._is_method_retryable(method): 11:55:59 raise reraise(type(error), error, _stacktrace) 11:55:59 elif read is not None: 11:55:59 read -= 1 11:55:59 11:55:59 elif error: 11:55:59 # Other retry? 11:55:59 if other is not None: 11:55:59 other -= 1 11:55:59 11:55:59 elif response and response.get_redirect_location(): 11:55:59 # Redirect retry? 
11:55:59 if redirect is not None: 11:55:59 redirect -= 1 11:55:59 cause = "too many redirects" 11:55:59 response_redirect_location = response.get_redirect_location() 11:55:59 if response_redirect_location: 11:55:59 redirect_location = response_redirect_location 11:55:59 status = response.status 11:55:59 11:55:59 else: 11:55:59 # Incrementing because of a server error like a 500 in 11:55:59 # status_forcelist and the given method is in the allowed_methods 11:55:59 cause = ResponseError.GENERIC_ERROR 11:55:59 if response and response.status: 11:55:59 if status_count is not None: 11:55:59 status_count -= 1 11:55:59 cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status) 11:55:59 status = response.status 11:55:59 11:55:59 history = self.history + ( 11:55:59 RequestHistory(method, url, error, status, redirect_location), 11:55:59 ) 11:55:59 11:55:59 new_retry = self.new( 11:55:59 total=total, 11:55:59 connect=connect, 11:55:59 read=read, 11:55:59 redirect=redirect, 11:55:59 status=status_count, 11:55:59 other=other, 11:55:59 history=history, 11:55:59 ) 11:55:59 11:55:59 if new_retry.is_exhausted(): 11:55:59 reason = error or ResponseError(cause) 11:55:59 > raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type] 11:55:59 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 11:55:59 E urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=8183): Max retries exceeded with url: /rests/operations/tapi-topology:get-topology-details (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 11:55:59 11:55:59 ../.tox/tests_tapi/lib/python3.11/site-packages/urllib3/util/retry.py:519: MaxRetryError 11:55:59 11:55:59 During handling of the above exception, another exception occurred: 11:55:59 11:55:59 self = 11:55:59 11:55:59 def test_32_check_tapi_topology_T0(self): 11:55:59 self.tapi_topo["topology-id"] = test_utils.T0_MULTILAYER_TOPO_UUID 11:55:59 > response = test_utils.transportpce_api_rpc_request( 11:55:59 'tapi-topology', 'get-topology-details', self.tapi_topo) 11:55:59 11:55:59 transportpce_tests/tapi/test01_abstracted_topology.py:486: 11:55:59 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 11:55:59 transportpce_tests/common/test_utils.py:729: in transportpce_api_rpc_request 11:55:59 response = post_request(url, data) 11:55:59 ^^^^^^^^^^^^^^^^^^^^^^^ 11:55:59 transportpce_tests/common/test_utils.py:142: in post_request 11:55:59 return requests.request( 11:55:59 ../.tox/tests_tapi/lib/python3.11/site-packages/requests/api.py:59: in request 11:55:59 return session.request(method=method, url=url, **kwargs) 11:55:59 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 11:55:59 ../.tox/tests_tapi/lib/python3.11/site-packages/requests/sessions.py:589: in request 11:55:59 resp = self.send(prep, **send_kwargs) 11:55:59 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 11:55:59 ../.tox/tests_tapi/lib/python3.11/site-packages/requests/sessions.py:703: in send 11:55:59 r = adapter.send(request, **kwargs) 11:55:59 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 11:55:59 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 11:55:59 11:55:59 self = 11:55:59 request = , stream = False 11:55:59 timeout = Timeout(connect=30, read=30, total=None), verify = True, cert = None 11:55:59 proxies = OrderedDict() 11:55:59 11:55:59 def send( 11:55:59 self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 11:55:59 ): 11:55:59 """Sends PreparedRequest object. 
Returns Response object. 11:55:59 11:55:59 :param request: The :class:`PreparedRequest ` being sent. 11:55:59 :param stream: (optional) Whether to stream the request content. 11:55:59 :param timeout: (optional) How long to wait for the server to send 11:55:59 data before giving up, as a float, or a :ref:`(connect timeout, 11:55:59 read timeout) ` tuple. 11:55:59 :type timeout: float or tuple or urllib3 Timeout object 11:55:59 :param verify: (optional) Either a boolean, in which case it controls whether 11:55:59 we verify the server's TLS certificate, or a string, in which case it 11:55:59 must be a path to a CA bundle to use 11:55:59 :param cert: (optional) Any user-provided SSL certificate to be trusted. 11:55:59 :param proxies: (optional) The proxies dictionary to apply to the request. 11:55:59 :rtype: requests.Response 11:55:59 """ 11:55:59 11:55:59 try: 11:55:59 conn = self.get_connection_with_tls_context( 11:55:59 request, verify, proxies=proxies, cert=cert 11:55:59 ) 11:55:59 except LocationValueError as e: 11:55:59 raise InvalidURL(e, request=request) 11:55:59 11:55:59 self.cert_verify(conn, request.url, verify, cert) 11:55:59 url = self.request_url(request, proxies) 11:55:59 self.add_headers( 11:55:59 request, 11:55:59 stream=stream, 11:55:59 timeout=timeout, 11:55:59 verify=verify, 11:55:59 cert=cert, 11:55:59 proxies=proxies, 11:55:59 ) 11:55:59 11:55:59 chunked = not (request.body is None or "Content-Length" in request.headers) 11:55:59 11:55:59 if isinstance(timeout, tuple): 11:55:59 try: 11:55:59 connect, read = timeout 11:55:59 timeout = TimeoutSauce(connect=connect, read=read) 11:55:59 except ValueError: 11:55:59 raise ValueError( 11:55:59 f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 11:55:59 f"or a single float to set both timeouts to the same value." 11:55:59 ) 11:55:59 elif isinstance(timeout, TimeoutSauce): 11:55:59 pass 11:55:59 else: 11:55:59 timeout = TimeoutSauce(connect=timeout, read=timeout) 11:55:59 11:55:59 try: 11:55:59 resp = conn.urlopen( 11:55:59 method=request.method, 11:55:59 url=url, 11:55:59 body=request.body, 11:55:59 headers=request.headers, 11:55:59 redirect=False, 11:55:59 assert_same_host=False, 11:55:59 preload_content=False, 11:55:59 decode_content=False, 11:55:59 retries=self.max_retries, 11:55:59 timeout=timeout, 11:55:59 chunked=chunked, 11:55:59 ) 11:55:59 11:55:59 except (ProtocolError, OSError) as err: 11:55:59 raise ConnectionError(err, request=request) 11:55:59 11:55:59 except MaxRetryError as e: 11:55:59 if isinstance(e.reason, ConnectTimeoutError): 11:55:59 # TODO: Remove this in 3.0.0: see #2811 11:55:59 if not isinstance(e.reason, NewConnectionError): 11:55:59 raise ConnectTimeout(e, request=request) 11:55:59 11:55:59 if isinstance(e.reason, ResponseError): 11:55:59 raise RetryError(e, request=request) 11:55:59 11:55:59 if isinstance(e.reason, _ProxyError): 11:55:59 raise ProxyError(e, request=request) 11:55:59 11:55:59 if isinstance(e.reason, _SSLError): 11:55:59 # This branch is for urllib3 v1.22 and later. 
11:55:59                 raise SSLError(e, request=request)
11:55:59
11:55:59 >           raise ConnectionError(e, request=request)
11:55:59 E           requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=8183): Max retries exceeded with url: /rests/operations/tapi-topology:get-topology-details (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused'))
11:55:59
11:55:59 ../.tox/tests_tapi/lib/python3.11/site-packages/requests/adapters.py:700: ConnectionError
11:55:59 ----------------------------- Captured stdout call -----------------------------
11:55:59 execution of test_32_check_tapi_topology_T0
11:55:59 _______________ TransportTapitesting.test_33_create_ODU4_service _______________
11:55:59
11:55:59 self =
11:55:59
11:55:59     def _new_conn(self) -> socket.socket:
11:55:59         """Establish a socket connection and set nodelay settings on it.
11:55:59
11:55:59         :return: New socket connection.
11:55:59         """
11:55:59         try:
11:55:59 >           sock = connection.create_connection(
11:55:59                 (self._dns_host, self.port),
11:55:59                 self.timeout,
11:55:59                 source_address=self.source_address,
11:55:59                 socket_options=self.socket_options,
11:55:59             )
11:55:59
11:55:59 ../.tox/tests_tapi/lib/python3.11/site-packages/urllib3/connection.py:198:
11:55:59 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
11:55:59 ../.tox/tests_tapi/lib/python3.11/site-packages/urllib3/util/connection.py:85: in create_connection
11:55:59     raise err
11:55:59 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
11:55:59
11:55:59 address = ('localhost', 8183), timeout = 30, source_address = None
11:55:59 socket_options = [(6, 1, 1)]
11:55:59
11:55:59     def create_connection(
11:55:59         address: tuple[str, int],
11:55:59         timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT,
11:55:59         source_address: tuple[str, int] | None = None,
11:55:59         socket_options: _TYPE_SOCKET_OPTIONS | None = None,
11:55:59     ) -> socket.socket:
11:55:59         """Connect to *address* and return the socket object.
11:55:59
11:55:59         Convenience function.  Connect to *address* (a 2-tuple ``(host,
11:55:59         port)``) and return the socket object.  Passing the optional
11:55:59         *timeout* parameter will set the timeout on the socket instance
11:55:59         before attempting to connect.  If no *timeout* is supplied, the
11:55:59         global default timeout setting returned by :func:`socket.getdefaulttimeout`
11:55:59         is used.  If *source_address* is set it must be a tuple of (host, port)
11:55:59         for the socket to bind as a source address before making the connection.
11:55:59         An host of '' or port 0 tells the OS to use the default.
11:55:59         """
11:55:59
11:55:59         host, port = address
11:55:59         if host.startswith("["):
11:55:59             host = host.strip("[]")
11:55:59         err = None
11:55:59
11:55:59         # Using the value from allowed_gai_family() in the context of getaddrinfo lets
11:55:59         # us select whether to work with IPv4 DNS records, IPv6 records, or both.
11:55:59         # The original create_connection function always returns all records.
11:55:59         family = allowed_gai_family()
11:55:59
11:55:59         try:
11:55:59             host.encode("idna")
11:55:59         except UnicodeError:
11:55:59             raise LocationParseError(f"'{host}', label empty or too long") from None
11:55:59
11:55:59         for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM):
11:55:59             af, socktype, proto, canonname, sa = res
11:55:59             sock = None
11:55:59             try:
11:55:59                 sock = socket.socket(af, socktype, proto)
11:55:59
11:55:59                 # If provided, set socket level options before connecting.
11:55:59 _set_socket_options(sock, socket_options) 11:55:59 11:55:59 if timeout is not _DEFAULT_TIMEOUT: 11:55:59 sock.settimeout(timeout) 11:55:59 if source_address: 11:55:59 sock.bind(source_address) 11:55:59 > sock.connect(sa) 11:55:59 E ConnectionRefusedError: [Errno 111] Connection refused 11:55:59 11:55:59 ../.tox/tests_tapi/lib/python3.11/site-packages/urllib3/util/connection.py:73: ConnectionRefusedError 11:55:59 11:55:59 The above exception was the direct cause of the following exception: 11:55:59 11:55:59 self = 11:55:59 method = 'POST', url = '/rests/operations/org-openroadm-service:service-create' 11:55:59 body = '{"input": {"sdnc-request-header": {"request-id": "request-1", "rpc-action": "service-create", "request-system-id": "a...vice-rate": "org-openroadm-otn-common-types:ODU4"}, "due-date": "2018-06-15T00:00:01Z", "operator-contact": "pw1234"}}' 11:55:59 headers = {'User-Agent': 'python-requests/2.32.4', 'Accept-Encoding': 'gzip, deflate', 'Accept': 'application/json', 'Connection': 'keep-alive', 'Content-Type': 'application/json', 'Content-Length': '1990', 'Authorization': 'Basic YWRtaW46YWRtaW4='} 11:55:59 retries = Retry(total=0, connect=None, read=False, redirect=None, status=None) 11:55:59 redirect = False, assert_same_host = False 11:55:59 timeout = Timeout(connect=30, read=30, total=None), pool_timeout = None 11:55:59 release_conn = False, chunked = False, body_pos = None, preload_content = False 11:55:59 decode_content = False, response_kw = {} 11:55:59 parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/rests/operations/org-openroadm-service:service-create', query=None, fragment=None) 11:55:59 destination_scheme = None, conn = None, release_this_conn = True 11:55:59 http_tunnel_required = False, err = None, clean_exit = False 11:55:59 11:55:59 def urlopen( # type: ignore[override] 11:55:59 self, 11:55:59 method: str, 11:55:59 url: str, 11:55:59 body: _TYPE_BODY | None = None, 11:55:59 headers: typing.Mapping[str, str] | None = None, 11:55:59 retries: Retry | bool | int | None = None, 11:55:59 redirect: bool = True, 11:55:59 assert_same_host: bool = True, 11:55:59 timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 11:55:59 pool_timeout: int | None = None, 11:55:59 release_conn: bool | None = None, 11:55:59 chunked: bool = False, 11:55:59 body_pos: _TYPE_BODY_POSITION | None = None, 11:55:59 preload_content: bool = True, 11:55:59 decode_content: bool = True, 11:55:59 **response_kw: typing.Any, 11:55:59 ) -> BaseHTTPResponse: 11:55:59 """ 11:55:59 Get a connection from the pool and perform an HTTP request. This is the 11:55:59 lowest level call for making a request, so you'll need to specify all 11:55:59 the raw details. 11:55:59 11:55:59 .. note:: 11:55:59 11:55:59 More commonly, it's appropriate to use a convenience method 11:55:59 such as :meth:`request`. 11:55:59 11:55:59 .. note:: 11:55:59 11:55:59 `release_conn` will only behave as expected if 11:55:59 `preload_content=False` because we want to make 11:55:59 `preload_content=False` the default behaviour someday soon without 11:55:59 breaking backwards compatibility. 11:55:59 11:55:59 :param method: 11:55:59 HTTP request method (such as GET, POST, PUT, etc.) 11:55:59 11:55:59 :param url: 11:55:59 The URL to perform the request on. 11:55:59 11:55:59 :param body: 11:55:59 Data to send in the request body, either :class:`str`, :class:`bytes`, 11:55:59 an iterable of :class:`str`/:class:`bytes`, or a file-like object. 
11:55:59 11:55:59 :param headers: 11:55:59 Dictionary of custom headers to send, such as User-Agent, 11:55:59 If-None-Match, etc. If None, pool headers are used. If provided, 11:55:59 these headers completely replace any pool-specific headers. 11:55:59 11:55:59 :param retries: 11:55:59 Configure the number of retries to allow before raising a 11:55:59 :class:`~urllib3.exceptions.MaxRetryError` exception. 11:55:59 11:55:59 If ``None`` (default) will retry 3 times, see ``Retry.DEFAULT``. Pass a 11:55:59 :class:`~urllib3.util.retry.Retry` object for fine-grained control 11:55:59 over different types of retries. 11:55:59 Pass an integer number to retry connection errors that many times, 11:55:59 but no other types of errors. Pass zero to never retry. 11:55:59 11:55:59 If ``False``, then retries are disabled and any exception is raised 11:55:59 immediately. Also, instead of raising a MaxRetryError on redirects, 11:55:59 the redirect response will be returned. 11:55:59 11:55:59 :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. 11:55:59 11:55:59 :param redirect: 11:55:59 If True, automatically handle redirects (status codes 301, 302, 11:55:59 303, 307, 308). Each redirect counts as a retry. Disabling retries 11:55:59 will disable redirect, too. 11:55:59 11:55:59 :param assert_same_host: 11:55:59 If ``True``, will make sure that the host of the pool requests is 11:55:59 consistent else will raise HostChangedError. When ``False``, you can 11:55:59 use the pool on an HTTP proxy and request foreign hosts. 11:55:59 11:55:59 :param timeout: 11:55:59 If specified, overrides the default timeout for this one 11:55:59 request. It may be a float (in seconds) or an instance of 11:55:59 :class:`urllib3.util.Timeout`. 11:55:59 11:55:59 :param pool_timeout: 11:55:59 If set and the pool is set to block=True, then this method will 11:55:59 block for ``pool_timeout`` seconds and raise EmptyPoolError if no 11:55:59 connection is available within the time period. 11:55:59 11:55:59 :param bool preload_content: 11:55:59 If True, the response's body will be preloaded into memory. 11:55:59 11:55:59 :param bool decode_content: 11:55:59 If True, will attempt to decode the body based on the 11:55:59 'content-encoding' header. 11:55:59 11:55:59 :param release_conn: 11:55:59 If False, then the urlopen call will not release the connection 11:55:59 back into the pool once a response is received (but will release if 11:55:59 you read the entire contents of the response such as when 11:55:59 `preload_content=True`). This is useful if you're not preloading 11:55:59 the response's content immediately. You will need to call 11:55:59 ``r.release_conn()`` on the response ``r`` to return the connection 11:55:59 back into the pool. If None, it takes the value of ``preload_content`` 11:55:59 which defaults to ``True``. 11:55:59 11:55:59 :param bool chunked: 11:55:59 If True, urllib3 will send the body using chunked transfer 11:55:59 encoding. Otherwise, urllib3 will send the body using the standard 11:55:59 content-length form. Defaults to False. 11:55:59 11:55:59 :param int body_pos: 11:55:59 Position to seek to in file-like body in the event of a retry or 11:55:59 redirect. Typically this won't need to be set because urllib3 will 11:55:59 auto-populate the value when needed. 
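The retries and timeout parameters described in the docstring above accept urllib3's Retry and Timeout helpers directly. A minimal sketch of calling urlopen() with both, assuming a urllib3 v2-style API; the host and port mirror the connection pool in this trace, while the path is purely illustrative:

    from urllib3 import HTTPConnectionPool
    from urllib3.util.retry import Retry
    from urllib3.util.timeout import Timeout

    # Retry connection-setup errors up to 3 times, but never replay a request
    # once its body may already have been sent (read=False).
    retry = Retry(total=3, connect=3, read=False)

    # Separate connect/read budgets, like the Timeout(connect=30, read=30,
    # total=None) objects appearing throughout this trace.
    timeout = Timeout(connect=30, read=30)

    pool = HTTPConnectionPool("localhost", 8183, maxsize=1)
    # Raises MaxRetryError once the Retry budget is exhausted
    # (for example when nothing is listening on the port).
    response = pool.urlopen("GET", "/rests/data", retries=retry, timeout=timeout)
    print(response.status)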
11:55:59 """ 11:55:59 parsed_url = parse_url(url) 11:55:59 destination_scheme = parsed_url.scheme 11:55:59 11:55:59 if headers is None: 11:55:59 headers = self.headers 11:55:59 11:55:59 if not isinstance(retries, Retry): 11:55:59 retries = Retry.from_int(retries, redirect=redirect, default=self.retries) 11:55:59 11:55:59 if release_conn is None: 11:55:59 release_conn = preload_content 11:55:59 11:55:59 # Check host 11:55:59 if assert_same_host and not self.is_same_host(url): 11:55:59 raise HostChangedError(self, url, retries) 11:55:59 11:55:59 # Ensure that the URL we're connecting to is properly encoded 11:55:59 if url.startswith("/"): 11:55:59 url = to_str(_encode_target(url)) 11:55:59 else: 11:55:59 url = to_str(parsed_url.url) 11:55:59 11:55:59 conn = None 11:55:59 11:55:59 # Track whether `conn` needs to be released before 11:55:59 # returning/raising/recursing. Update this variable if necessary, and 11:55:59 # leave `release_conn` constant throughout the function. That way, if 11:55:59 # the function recurses, the original value of `release_conn` will be 11:55:59 # passed down into the recursive call, and its value will be respected. 11:55:59 # 11:55:59 # See issue #651 [1] for details. 11:55:59 # 11:55:59 # [1] 11:55:59 release_this_conn = release_conn 11:55:59 11:55:59 http_tunnel_required = connection_requires_http_tunnel( 11:55:59 self.proxy, self.proxy_config, destination_scheme 11:55:59 ) 11:55:59 11:55:59 # Merge the proxy headers. Only done when not using HTTP CONNECT. We 11:55:59 # have to copy the headers dict so we can safely change it without those 11:55:59 # changes being reflected in anyone else's copy. 11:55:59 if not http_tunnel_required: 11:55:59 headers = headers.copy() # type: ignore[attr-defined] 11:55:59 headers.update(self.proxy_headers) # type: ignore[union-attr] 11:55:59 11:55:59 # Must keep the exception bound to a separate variable or else Python 3 11:55:59 # complains about UnboundLocalError. 11:55:59 err = None 11:55:59 11:55:59 # Keep track of whether we cleanly exited the except block. This 11:55:59 # ensures we do proper cleanup in finally. 11:55:59 clean_exit = False 11:55:59 11:55:59 # Rewind body position, if needed. Record current position 11:55:59 # for future rewinds in the event of a redirect/retry. 11:55:59 body_pos = set_file_position(body, body_pos) 11:55:59 11:55:59 try: 11:55:59 # Request a connection from the queue. 11:55:59 timeout_obj = self._get_timeout(timeout) 11:55:59 conn = self._get_conn(timeout=pool_timeout) 11:55:59 11:55:59 conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] 11:55:59 11:55:59 # Is this a closed/new connection that requires CONNECT tunnelling? 11:55:59 if self.proxy is not None and http_tunnel_required and conn.is_closed: 11:55:59 try: 11:55:59 self._prepare_proxy(conn) 11:55:59 except (BaseSSLError, OSError, SocketTimeout) as e: 11:55:59 self._raise_timeout( 11:55:59 err=e, url=self.proxy.url, timeout_value=conn.timeout 11:55:59 ) 11:55:59 raise 11:55:59 11:55:59 # If we're going to release the connection in ``finally:``, then 11:55:59 # the response doesn't need to know about the connection. Otherwise 11:55:59 # it will also try to release it and we'll have a double-release 11:55:59 # mess. 
11:55:59 response_conn = conn if not release_conn else None 11:55:59 11:55:59 # Make the request on the HTTPConnection object 11:55:59 > response = self._make_request( 11:55:59 conn, 11:55:59 method, 11:55:59 url, 11:55:59 timeout=timeout_obj, 11:55:59 body=body, 11:55:59 headers=headers, 11:55:59 chunked=chunked, 11:55:59 retries=retries, 11:55:59 response_conn=response_conn, 11:55:59 preload_content=preload_content, 11:55:59 decode_content=decode_content, 11:55:59 **response_kw, 11:55:59 ) 11:55:59 11:55:59 ../.tox/tests_tapi/lib/python3.11/site-packages/urllib3/connectionpool.py:787: 11:55:59 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 11:55:59 ../.tox/tests_tapi/lib/python3.11/site-packages/urllib3/connectionpool.py:493: in _make_request 11:55:59 conn.request( 11:55:59 ../.tox/tests_tapi/lib/python3.11/site-packages/urllib3/connection.py:494: in request 11:55:59 self.endheaders() 11:55:59 /opt/pyenv/versions/3.11.10/lib/python3.11/http/client.py:1298: in endheaders 11:55:59 self._send_output(message_body, encode_chunked=encode_chunked) 11:55:59 /opt/pyenv/versions/3.11.10/lib/python3.11/http/client.py:1058: in _send_output 11:55:59 self.send(msg) 11:55:59 /opt/pyenv/versions/3.11.10/lib/python3.11/http/client.py:996: in send 11:55:59 self.connect() 11:55:59 ../.tox/tests_tapi/lib/python3.11/site-packages/urllib3/connection.py:325: in connect 11:55:59 self.sock = self._new_conn() 11:55:59 ^^^^^^^^^^^^^^^^ 11:55:59 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 11:55:59 11:55:59 self = 11:55:59 11:55:59 def _new_conn(self) -> socket.socket: 11:55:59 """Establish a socket connection and set nodelay settings on it. 11:55:59 11:55:59 :return: New socket connection. 11:55:59 """ 11:55:59 try: 11:55:59 sock = connection.create_connection( 11:55:59 (self._dns_host, self.port), 11:55:59 self.timeout, 11:55:59 source_address=self.source_address, 11:55:59 socket_options=self.socket_options, 11:55:59 ) 11:55:59 except socket.gaierror as e: 11:55:59 raise NameResolutionError(self.host, self, e) from e 11:55:59 except SocketTimeout as e: 11:55:59 raise ConnectTimeoutError( 11:55:59 self, 11:55:59 f"Connection to {self.host} timed out. (connect timeout={self.timeout})", 11:55:59 ) from e 11:55:59 11:55:59 except OSError as e: 11:55:59 > raise NewConnectionError( 11:55:59 self, f"Failed to establish a new connection: {e}" 11:55:59 ) from e 11:55:59 E urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused 11:55:59 11:55:59 ../.tox/tests_tapi/lib/python3.11/site-packages/urllib3/connection.py:213: NewConnectionError 11:55:59 11:55:59 The above exception was the direct cause of the following exception: 11:55:59 11:55:59 self = 11:55:59 request = , stream = False 11:55:59 timeout = Timeout(connect=30, read=30, total=None), verify = True, cert = None 11:55:59 proxies = OrderedDict() 11:55:59 11:55:59 def send( 11:55:59 self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 11:55:59 ): 11:55:59 """Sends PreparedRequest object. Returns Response object. 11:55:59 11:55:59 :param request: The :class:`PreparedRequest ` being sent. 11:55:59 :param stream: (optional) Whether to stream the request content. 11:55:59 :param timeout: (optional) How long to wait for the server to send 11:55:59 data before giving up, as a float, or a :ref:`(connect timeout, 11:55:59 read timeout) ` tuple. 
11:55:59 :type timeout: float or tuple or urllib3 Timeout object 11:55:59 :param verify: (optional) Either a boolean, in which case it controls whether 11:55:59 we verify the server's TLS certificate, or a string, in which case it 11:55:59 must be a path to a CA bundle to use 11:55:59 :param cert: (optional) Any user-provided SSL certificate to be trusted. 11:55:59 :param proxies: (optional) The proxies dictionary to apply to the request. 11:55:59 :rtype: requests.Response 11:55:59 """ 11:55:59 11:55:59 try: 11:55:59 conn = self.get_connection_with_tls_context( 11:55:59 request, verify, proxies=proxies, cert=cert 11:55:59 ) 11:55:59 except LocationValueError as e: 11:55:59 raise InvalidURL(e, request=request) 11:55:59 11:55:59 self.cert_verify(conn, request.url, verify, cert) 11:55:59 url = self.request_url(request, proxies) 11:55:59 self.add_headers( 11:55:59 request, 11:55:59 stream=stream, 11:55:59 timeout=timeout, 11:55:59 verify=verify, 11:55:59 cert=cert, 11:55:59 proxies=proxies, 11:55:59 ) 11:55:59 11:55:59 chunked = not (request.body is None or "Content-Length" in request.headers) 11:55:59 11:55:59 if isinstance(timeout, tuple): 11:55:59 try: 11:55:59 connect, read = timeout 11:55:59 timeout = TimeoutSauce(connect=connect, read=read) 11:55:59 except ValueError: 11:55:59 raise ValueError( 11:55:59 f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 11:55:59 f"or a single float to set both timeouts to the same value." 11:55:59 ) 11:55:59 elif isinstance(timeout, TimeoutSauce): 11:55:59 pass 11:55:59 else: 11:55:59 timeout = TimeoutSauce(connect=timeout, read=timeout) 11:55:59 11:55:59 try: 11:55:59 > resp = conn.urlopen( 11:55:59 method=request.method, 11:55:59 url=url, 11:55:59 body=request.body, 11:55:59 headers=request.headers, 11:55:59 redirect=False, 11:55:59 assert_same_host=False, 11:55:59 preload_content=False, 11:55:59 decode_content=False, 11:55:59 retries=self.max_retries, 11:55:59 timeout=timeout, 11:55:59 chunked=chunked, 11:55:59 ) 11:55:59 11:55:59 ../.tox/tests_tapi/lib/python3.11/site-packages/requests/adapters.py:667: 11:55:59 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 11:55:59 ../.tox/tests_tapi/lib/python3.11/site-packages/urllib3/connectionpool.py:841: in urlopen 11:55:59 retries = retries.increment( 11:55:59 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 11:55:59 11:55:59 self = Retry(total=0, connect=None, read=False, redirect=None, status=None) 11:55:59 method = 'POST', url = '/rests/operations/org-openroadm-service:service-create' 11:55:59 response = None 11:55:59 error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused') 11:55:59 _pool = 11:55:59 _stacktrace = 11:55:59 11:55:59 def increment( 11:55:59 self, 11:55:59 method: str | None = None, 11:55:59 url: str | None = None, 11:55:59 response: BaseHTTPResponse | None = None, 11:55:59 error: Exception | None = None, 11:55:59 _pool: ConnectionPool | None = None, 11:55:59 _stacktrace: TracebackType | None = None, 11:55:59 ) -> Self: 11:55:59 """Return a new Retry object with incremented retry counters. 11:55:59 11:55:59 :param response: A response object, or None, if the server did not 11:55:59 return a response. 11:55:59 :type response: :class:`~urllib3.response.BaseHTTPResponse` 11:55:59 :param Exception error: An error encountered during the request, or 11:55:59 None if the response was received successfully. 11:55:59 11:55:59 :return: A new ``Retry`` object. 
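This increment() logic is what turns a single refused connection into the MaxRetryError reported below: requests hands urllib3 a Retry(total=0, connect=None, read=False, ...) object, as visible in this trace, so the very first failure already exhausts the budget. A minimal sketch of that mechanism, using an illustrative URL and error:

    from urllib3.util.retry import Retry
    from urllib3.exceptions import MaxRetryError

    retry = Retry(total=0, read=False)   # same shape as the Retry objects in this trace
    print(retry.is_exhausted())          # False: total is 0, not yet negative

    try:
        # increment() builds a new Retry with total == -1; because that new
        # object is exhausted, it raises MaxRetryError instead of returning.
        retry.increment(method="POST", url="/example", error=OSError("refused"))
    except MaxRetryError as exc:
        print(exc.reason)                # the original OSError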
11:55:59 """ 11:55:59 if self.total is False and error: 11:55:59 # Disabled, indicate to re-raise the error. 11:55:59 raise reraise(type(error), error, _stacktrace) 11:55:59 11:55:59 total = self.total 11:55:59 if total is not None: 11:55:59 total -= 1 11:55:59 11:55:59 connect = self.connect 11:55:59 read = self.read 11:55:59 redirect = self.redirect 11:55:59 status_count = self.status 11:55:59 other = self.other 11:55:59 cause = "unknown" 11:55:59 status = None 11:55:59 redirect_location = None 11:55:59 11:55:59 if error and self._is_connection_error(error): 11:55:59 # Connect retry? 11:55:59 if connect is False: 11:55:59 raise reraise(type(error), error, _stacktrace) 11:55:59 elif connect is not None: 11:55:59 connect -= 1 11:55:59 11:55:59 elif error and self._is_read_error(error): 11:55:59 # Read retry? 11:55:59 if read is False or method is None or not self._is_method_retryable(method): 11:55:59 raise reraise(type(error), error, _stacktrace) 11:55:59 elif read is not None: 11:55:59 read -= 1 11:55:59 11:55:59 elif error: 11:55:59 # Other retry? 11:55:59 if other is not None: 11:55:59 other -= 1 11:55:59 11:55:59 elif response and response.get_redirect_location(): 11:55:59 # Redirect retry? 11:55:59 if redirect is not None: 11:55:59 redirect -= 1 11:55:59 cause = "too many redirects" 11:55:59 response_redirect_location = response.get_redirect_location() 11:55:59 if response_redirect_location: 11:55:59 redirect_location = response_redirect_location 11:55:59 status = response.status 11:55:59 11:55:59 else: 11:55:59 # Incrementing because of a server error like a 500 in 11:55:59 # status_forcelist and the given method is in the allowed_methods 11:55:59 cause = ResponseError.GENERIC_ERROR 11:55:59 if response and response.status: 11:55:59 if status_count is not None: 11:55:59 status_count -= 1 11:55:59 cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status) 11:55:59 status = response.status 11:55:59 11:55:59 history = self.history + ( 11:55:59 RequestHistory(method, url, error, status, redirect_location), 11:55:59 ) 11:55:59 11:55:59 new_retry = self.new( 11:55:59 total=total, 11:55:59 connect=connect, 11:55:59 read=read, 11:55:59 redirect=redirect, 11:55:59 status=status_count, 11:55:59 other=other, 11:55:59 history=history, 11:55:59 ) 11:55:59 11:55:59 if new_retry.is_exhausted(): 11:55:59 reason = error or ResponseError(cause) 11:55:59 > raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type] 11:55:59 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 11:55:59 E urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=8183): Max retries exceeded with url: /rests/operations/org-openroadm-service:service-create (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 11:55:59 11:55:59 ../.tox/tests_tapi/lib/python3.11/site-packages/urllib3/util/retry.py:519: MaxRetryError 11:55:59 11:55:59 During handling of the above exception, another exception occurred: 11:55:59 11:55:59 self = 11:55:59 11:55:59 def test_33_create_ODU4_service(self): 11:55:59 self.cr_serv_input_data["service-name"] = "service1-ODU4" 11:55:59 self.cr_serv_input_data["service-a-end"]["service-format"] = "ODU" 11:55:59 del self.cr_serv_input_data["service-a-end"]["otu-service-rate"] 11:55:59 self.cr_serv_input_data["service-a-end"]["odu-service-rate"] = "org-openroadm-otn-common-types:ODU4" 11:55:59 self.cr_serv_input_data["service-z-end"]["service-format"] = "ODU" 11:55:59 del 
self.cr_serv_input_data["service-z-end"]["otu-service-rate"] 11:55:59 self.cr_serv_input_data["service-z-end"]["odu-service-rate"] = "org-openroadm-otn-common-types:ODU4" 11:55:59 11:55:59 > response = test_utils.transportpce_api_rpc_request( 11:55:59 'org-openroadm-service', 'service-create', 11:55:59 self.cr_serv_input_data) 11:55:59 11:55:59 transportpce_tests/tapi/test01_abstracted_topology.py:515: 11:55:59 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 11:55:59 transportpce_tests/common/test_utils.py:729: in transportpce_api_rpc_request 11:55:59 response = post_request(url, data) 11:55:59 ^^^^^^^^^^^^^^^^^^^^^^^ 11:55:59 transportpce_tests/common/test_utils.py:142: in post_request 11:55:59 return requests.request( 11:55:59 ../.tox/tests_tapi/lib/python3.11/site-packages/requests/api.py:59: in request 11:55:59 return session.request(method=method, url=url, **kwargs) 11:55:59 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 11:55:59 ../.tox/tests_tapi/lib/python3.11/site-packages/requests/sessions.py:589: in request 11:55:59 resp = self.send(prep, **send_kwargs) 11:55:59 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 11:55:59 ../.tox/tests_tapi/lib/python3.11/site-packages/requests/sessions.py:703: in send 11:55:59 r = adapter.send(request, **kwargs) 11:55:59 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 11:55:59 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 11:55:59 11:55:59 self = 11:55:59 request = , stream = False 11:55:59 timeout = Timeout(connect=30, read=30, total=None), verify = True, cert = None 11:55:59 proxies = OrderedDict() 11:55:59 11:55:59 def send( 11:55:59 self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 11:55:59 ): 11:55:59 """Sends PreparedRequest object. Returns Response object. 11:55:59 11:55:59 :param request: The :class:`PreparedRequest ` being sent. 11:55:59 :param stream: (optional) Whether to stream the request content. 11:55:59 :param timeout: (optional) How long to wait for the server to send 11:55:59 data before giving up, as a float, or a :ref:`(connect timeout, 11:55:59 read timeout) ` tuple. 11:55:59 :type timeout: float or tuple or urllib3 Timeout object 11:55:59 :param verify: (optional) Either a boolean, in which case it controls whether 11:55:59 we verify the server's TLS certificate, or a string, in which case it 11:55:59 must be a path to a CA bundle to use 11:55:59 :param cert: (optional) Any user-provided SSL certificate to be trusted. 11:55:59 :param proxies: (optional) The proxies dictionary to apply to the request. 11:55:59 :rtype: requests.Response 11:55:59 """ 11:55:59 11:55:59 try: 11:55:59 conn = self.get_connection_with_tls_context( 11:55:59 request, verify, proxies=proxies, cert=cert 11:55:59 ) 11:55:59 except LocationValueError as e: 11:55:59 raise InvalidURL(e, request=request) 11:55:59 11:55:59 self.cert_verify(conn, request.url, verify, cert) 11:55:59 url = self.request_url(request, proxies) 11:55:59 self.add_headers( 11:55:59 request, 11:55:59 stream=stream, 11:55:59 timeout=timeout, 11:55:59 verify=verify, 11:55:59 cert=cert, 11:55:59 proxies=proxies, 11:55:59 ) 11:55:59 11:55:59 chunked = not (request.body is None or "Content-Length" in request.headers) 11:55:59 11:55:59 if isinstance(timeout, tuple): 11:55:59 try: 11:55:59 connect, read = timeout 11:55:59 timeout = TimeoutSauce(connect=connect, read=read) 11:55:59 except ValueError: 11:55:59 raise ValueError( 11:55:59 f"Invalid timeout {timeout}. 
Pass a (connect, read) timeout tuple, " 11:55:59 f"or a single float to set both timeouts to the same value." 11:55:59 ) 11:55:59 elif isinstance(timeout, TimeoutSauce): 11:55:59 pass 11:55:59 else: 11:55:59 timeout = TimeoutSauce(connect=timeout, read=timeout) 11:55:59 11:55:59 try: 11:55:59 resp = conn.urlopen( 11:55:59 method=request.method, 11:55:59 url=url, 11:55:59 body=request.body, 11:55:59 headers=request.headers, 11:55:59 redirect=False, 11:55:59 assert_same_host=False, 11:55:59 preload_content=False, 11:55:59 decode_content=False, 11:55:59 retries=self.max_retries, 11:55:59 timeout=timeout, 11:55:59 chunked=chunked, 11:55:59 ) 11:55:59 11:55:59 except (ProtocolError, OSError) as err: 11:55:59 raise ConnectionError(err, request=request) 11:55:59 11:55:59 except MaxRetryError as e: 11:55:59 if isinstance(e.reason, ConnectTimeoutError): 11:55:59 # TODO: Remove this in 3.0.0: see #2811 11:55:59 if not isinstance(e.reason, NewConnectionError): 11:55:59 raise ConnectTimeout(e, request=request) 11:55:59 11:55:59 if isinstance(e.reason, ResponseError): 11:55:59 raise RetryError(e, request=request) 11:55:59 11:55:59 if isinstance(e.reason, _ProxyError): 11:55:59 raise ProxyError(e, request=request) 11:55:59 11:55:59 if isinstance(e.reason, _SSLError): 11:55:59 # This branch is for urllib3 v1.22 and later. 11:55:59 raise SSLError(e, request=request) 11:55:59 11:55:59 > raise ConnectionError(e, request=request) 11:55:59 E requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=8183): Max retries exceeded with url: /rests/operations/org-openroadm-service:service-create (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 11:55:59 11:55:59 ../.tox/tests_tapi/lib/python3.11/site-packages/requests/adapters.py:700: ConnectionError 11:55:59 ----------------------------- Captured stdout call ----------------------------- 11:55:59 execution of test_33_create_ODU4_service 11:55:59 _____________ TransportTapitesting.test_34_check_tapi_topology_T0 ______________ 11:55:59 11:55:59 self = 11:55:59 11:55:59 def _new_conn(self) -> socket.socket: 11:55:59 """Establish a socket connection and set nodelay settings on it. 11:55:59 11:55:59 :return: New socket connection. 11:55:59 """ 11:55:59 try: 11:55:59 > sock = connection.create_connection( 11:55:59 (self._dns_host, self.port), 11:55:59 self.timeout, 11:55:59 source_address=self.source_address, 11:55:59 socket_options=self.socket_options, 11:55:59 ) 11:55:59 11:55:59 ../.tox/tests_tapi/lib/python3.11/site-packages/urllib3/connection.py:198: 11:55:59 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 11:55:59 ../.tox/tests_tapi/lib/python3.11/site-packages/urllib3/util/connection.py:85: in create_connection 11:55:59 raise err 11:55:59 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 11:55:59 11:55:59 address = ('localhost', 8183), timeout = 30, source_address = None 11:55:59 socket_options = [(6, 1, 1)] 11:55:59 11:55:59 def create_connection( 11:55:59 address: tuple[str, int], 11:55:59 timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 11:55:59 source_address: tuple[str, int] | None = None, 11:55:59 socket_options: _TYPE_SOCKET_OPTIONS | None = None, 11:55:59 ) -> socket.socket: 11:55:59 """Connect to *address* and return the socket object. 11:55:59 11:55:59 Convenience function. Connect to *address* (a 2-tuple ``(host, 11:55:59 port)``) and return the socket object. 
Passing the optional 11:55:59 *timeout* parameter will set the timeout on the socket instance 11:55:59 before attempting to connect. If no *timeout* is supplied, the 11:55:59 global default timeout setting returned by :func:`socket.getdefaulttimeout` 11:55:59 is used. If *source_address* is set it must be a tuple of (host, port) 11:55:59 for the socket to bind as a source address before making the connection. 11:55:59 An host of '' or port 0 tells the OS to use the default. 11:55:59 """ 11:55:59 11:55:59 host, port = address 11:55:59 if host.startswith("["): 11:55:59 host = host.strip("[]") 11:55:59 err = None 11:55:59 11:55:59 # Using the value from allowed_gai_family() in the context of getaddrinfo lets 11:55:59 # us select whether to work with IPv4 DNS records, IPv6 records, or both. 11:55:59 # The original create_connection function always returns all records. 11:55:59 family = allowed_gai_family() 11:55:59 11:55:59 try: 11:55:59 host.encode("idna") 11:55:59 except UnicodeError: 11:55:59 raise LocationParseError(f"'{host}', label empty or too long") from None 11:55:59 11:55:59 for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM): 11:55:59 af, socktype, proto, canonname, sa = res 11:55:59 sock = None 11:55:59 try: 11:55:59 sock = socket.socket(af, socktype, proto) 11:55:59 11:55:59 # If provided, set socket level options before connecting. 11:55:59 _set_socket_options(sock, socket_options) 11:55:59 11:55:59 if timeout is not _DEFAULT_TIMEOUT: 11:55:59 sock.settimeout(timeout) 11:55:59 if source_address: 11:55:59 sock.bind(source_address) 11:55:59 > sock.connect(sa) 11:55:59 E ConnectionRefusedError: [Errno 111] Connection refused 11:55:59 11:55:59 ../.tox/tests_tapi/lib/python3.11/site-packages/urllib3/util/connection.py:73: ConnectionRefusedError 11:55:59 11:55:59 The above exception was the direct cause of the following exception: 11:55:59 11:55:59 self = 11:55:59 method = 'POST', url = '/rests/operations/tapi-topology:get-topology-details' 11:55:59 body = '{"input": {"topology-id": "747c670e-7a07-3dab-b379-5b1cd17402a3"}}' 11:55:59 headers = {'User-Agent': 'python-requests/2.32.4', 'Accept-Encoding': 'gzip, deflate', 'Accept': 'application/json', 'Connection': 'keep-alive', 'Content-Type': 'application/json', 'Content-Length': '66', 'Authorization': 'Basic YWRtaW46YWRtaW4='} 11:55:59 retries = Retry(total=0, connect=None, read=False, redirect=None, status=None) 11:55:59 redirect = False, assert_same_host = False 11:55:59 timeout = Timeout(connect=30, read=30, total=None), pool_timeout = None 11:55:59 release_conn = False, chunked = False, body_pos = None, preload_content = False 11:55:59 decode_content = False, response_kw = {} 11:55:59 parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/rests/operations/tapi-topology:get-topology-details', query=None, fragment=None) 11:55:59 destination_scheme = None, conn = None, release_this_conn = True 11:55:59 http_tunnel_required = False, err = None, clean_exit = False 11:55:59 11:55:59 def urlopen( # type: ignore[override] 11:55:59 self, 11:55:59 method: str, 11:55:59 url: str, 11:55:59 body: _TYPE_BODY | None = None, 11:55:59 headers: typing.Mapping[str, str] | None = None, 11:55:59 retries: Retry | bool | int | None = None, 11:55:59 redirect: bool = True, 11:55:59 assert_same_host: bool = True, 11:55:59 timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 11:55:59 pool_timeout: int | None = None, 11:55:59 release_conn: bool | None = None, 11:55:59 chunked: bool = False, 11:55:59 body_pos: _TYPE_BODY_POSITION | None = 
None, 11:55:59 preload_content: bool = True, 11:55:59 decode_content: bool = True, 11:55:59 **response_kw: typing.Any, 11:55:59 ) -> BaseHTTPResponse: 11:55:59 """ 11:55:59 Get a connection from the pool and perform an HTTP request. This is the 11:55:59 lowest level call for making a request, so you'll need to specify all 11:55:59 the raw details. 11:55:59 11:55:59 .. note:: 11:55:59 11:55:59 More commonly, it's appropriate to use a convenience method 11:55:59 such as :meth:`request`. 11:55:59 11:55:59 .. note:: 11:55:59 11:55:59 `release_conn` will only behave as expected if 11:55:59 `preload_content=False` because we want to make 11:55:59 `preload_content=False` the default behaviour someday soon without 11:55:59 breaking backwards compatibility. 11:55:59 11:55:59 :param method: 11:55:59 HTTP request method (such as GET, POST, PUT, etc.) 11:55:59 11:55:59 :param url: 11:55:59 The URL to perform the request on. 11:55:59 11:55:59 :param body: 11:55:59 Data to send in the request body, either :class:`str`, :class:`bytes`, 11:55:59 an iterable of :class:`str`/:class:`bytes`, or a file-like object. 11:55:59 11:55:59 :param headers: 11:55:59 Dictionary of custom headers to send, such as User-Agent, 11:55:59 If-None-Match, etc. If None, pool headers are used. If provided, 11:55:59 these headers completely replace any pool-specific headers. 11:55:59 11:55:59 :param retries: 11:55:59 Configure the number of retries to allow before raising a 11:55:59 :class:`~urllib3.exceptions.MaxRetryError` exception. 11:55:59 11:55:59 If ``None`` (default) will retry 3 times, see ``Retry.DEFAULT``. Pass a 11:55:59 :class:`~urllib3.util.retry.Retry` object for fine-grained control 11:55:59 over different types of retries. 11:55:59 Pass an integer number to retry connection errors that many times, 11:55:59 but no other types of errors. Pass zero to never retry. 11:55:59 11:55:59 If ``False``, then retries are disabled and any exception is raised 11:55:59 immediately. Also, instead of raising a MaxRetryError on redirects, 11:55:59 the redirect response will be returned. 11:55:59 11:55:59 :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. 11:55:59 11:55:59 :param redirect: 11:55:59 If True, automatically handle redirects (status codes 301, 302, 11:55:59 303, 307, 308). Each redirect counts as a retry. Disabling retries 11:55:59 will disable redirect, too. 11:55:59 11:55:59 :param assert_same_host: 11:55:59 If ``True``, will make sure that the host of the pool requests is 11:55:59 consistent else will raise HostChangedError. When ``False``, you can 11:55:59 use the pool on an HTTP proxy and request foreign hosts. 11:55:59 11:55:59 :param timeout: 11:55:59 If specified, overrides the default timeout for this one 11:55:59 request. It may be a float (in seconds) or an instance of 11:55:59 :class:`urllib3.util.Timeout`. 11:55:59 11:55:59 :param pool_timeout: 11:55:59 If set and the pool is set to block=True, then this method will 11:55:59 block for ``pool_timeout`` seconds and raise EmptyPoolError if no 11:55:59 connection is available within the time period. 11:55:59 11:55:59 :param bool preload_content: 11:55:59 If True, the response's body will be preloaded into memory. 11:55:59 11:55:59 :param bool decode_content: 11:55:59 If True, will attempt to decode the body based on the 11:55:59 'content-encoding' header. 
11:55:59 11:55:59 :param release_conn: 11:55:59 If False, then the urlopen call will not release the connection 11:55:59 back into the pool once a response is received (but will release if 11:55:59 you read the entire contents of the response such as when 11:55:59 `preload_content=True`). This is useful if you're not preloading 11:55:59 the response's content immediately. You will need to call 11:55:59 ``r.release_conn()`` on the response ``r`` to return the connection 11:55:59 back into the pool. If None, it takes the value of ``preload_content`` 11:55:59 which defaults to ``True``. 11:55:59 11:55:59 :param bool chunked: 11:55:59 If True, urllib3 will send the body using chunked transfer 11:55:59 encoding. Otherwise, urllib3 will send the body using the standard 11:55:59 content-length form. Defaults to False. 11:55:59 11:55:59 :param int body_pos: 11:55:59 Position to seek to in file-like body in the event of a retry or 11:55:59 redirect. Typically this won't need to be set because urllib3 will 11:55:59 auto-populate the value when needed. 11:55:59 """ 11:55:59 parsed_url = parse_url(url) 11:55:59 destination_scheme = parsed_url.scheme 11:55:59 11:55:59 if headers is None: 11:55:59 headers = self.headers 11:55:59 11:55:59 if not isinstance(retries, Retry): 11:55:59 retries = Retry.from_int(retries, redirect=redirect, default=self.retries) 11:55:59 11:55:59 if release_conn is None: 11:55:59 release_conn = preload_content 11:55:59 11:55:59 # Check host 11:55:59 if assert_same_host and not self.is_same_host(url): 11:55:59 raise HostChangedError(self, url, retries) 11:55:59 11:55:59 # Ensure that the URL we're connecting to is properly encoded 11:55:59 if url.startswith("/"): 11:55:59 url = to_str(_encode_target(url)) 11:55:59 else: 11:55:59 url = to_str(parsed_url.url) 11:55:59 11:55:59 conn = None 11:55:59 11:55:59 # Track whether `conn` needs to be released before 11:55:59 # returning/raising/recursing. Update this variable if necessary, and 11:55:59 # leave `release_conn` constant throughout the function. That way, if 11:55:59 # the function recurses, the original value of `release_conn` will be 11:55:59 # passed down into the recursive call, and its value will be respected. 11:55:59 # 11:55:59 # See issue #651 [1] for details. 11:55:59 # 11:55:59 # [1] 11:55:59 release_this_conn = release_conn 11:55:59 11:55:59 http_tunnel_required = connection_requires_http_tunnel( 11:55:59 self.proxy, self.proxy_config, destination_scheme 11:55:59 ) 11:55:59 11:55:59 # Merge the proxy headers. Only done when not using HTTP CONNECT. We 11:55:59 # have to copy the headers dict so we can safely change it without those 11:55:59 # changes being reflected in anyone else's copy. 11:55:59 if not http_tunnel_required: 11:55:59 headers = headers.copy() # type: ignore[attr-defined] 11:55:59 headers.update(self.proxy_headers) # type: ignore[union-attr] 11:55:59 11:55:59 # Must keep the exception bound to a separate variable or else Python 3 11:55:59 # complains about UnboundLocalError. 11:55:59 err = None 11:55:59 11:55:59 # Keep track of whether we cleanly exited the except block. This 11:55:59 # ensures we do proper cleanup in finally. 11:55:59 clean_exit = False 11:55:59 11:55:59 # Rewind body position, if needed. Record current position 11:55:59 # for future rewinds in the event of a redirect/retry. 11:55:59 body_pos = set_file_position(body, body_pos) 11:55:59 11:55:59 try: 11:55:59 # Request a connection from the queue. 
11:55:59 timeout_obj = self._get_timeout(timeout) 11:55:59 conn = self._get_conn(timeout=pool_timeout) 11:55:59 11:55:59 conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] 11:55:59 11:55:59 # Is this a closed/new connection that requires CONNECT tunnelling? 11:55:59 if self.proxy is not None and http_tunnel_required and conn.is_closed: 11:55:59 try: 11:55:59 self._prepare_proxy(conn) 11:55:59 except (BaseSSLError, OSError, SocketTimeout) as e: 11:55:59 self._raise_timeout( 11:55:59 err=e, url=self.proxy.url, timeout_value=conn.timeout 11:55:59 ) 11:55:59 raise 11:55:59 11:55:59 # If we're going to release the connection in ``finally:``, then 11:55:59 # the response doesn't need to know about the connection. Otherwise 11:55:59 # it will also try to release it and we'll have a double-release 11:55:59 # mess. 11:55:59 response_conn = conn if not release_conn else None 11:55:59 11:55:59 # Make the request on the HTTPConnection object 11:55:59 > response = self._make_request( 11:55:59 conn, 11:55:59 method, 11:55:59 url, 11:55:59 timeout=timeout_obj, 11:55:59 body=body, 11:55:59 headers=headers, 11:55:59 chunked=chunked, 11:55:59 retries=retries, 11:55:59 response_conn=response_conn, 11:55:59 preload_content=preload_content, 11:55:59 decode_content=decode_content, 11:55:59 **response_kw, 11:55:59 ) 11:55:59 11:55:59 ../.tox/tests_tapi/lib/python3.11/site-packages/urllib3/connectionpool.py:787: 11:55:59 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 11:55:59 ../.tox/tests_tapi/lib/python3.11/site-packages/urllib3/connectionpool.py:493: in _make_request 11:55:59 conn.request( 11:55:59 ../.tox/tests_tapi/lib/python3.11/site-packages/urllib3/connection.py:494: in request 11:55:59 self.endheaders() 11:55:59 /opt/pyenv/versions/3.11.10/lib/python3.11/http/client.py:1298: in endheaders 11:55:59 self._send_output(message_body, encode_chunked=encode_chunked) 11:55:59 /opt/pyenv/versions/3.11.10/lib/python3.11/http/client.py:1058: in _send_output 11:55:59 self.send(msg) 11:55:59 /opt/pyenv/versions/3.11.10/lib/python3.11/http/client.py:996: in send 11:55:59 self.connect() 11:55:59 ../.tox/tests_tapi/lib/python3.11/site-packages/urllib3/connection.py:325: in connect 11:55:59 self.sock = self._new_conn() 11:55:59 ^^^^^^^^^^^^^^^^ 11:55:59 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 11:55:59 11:55:59 self = 11:55:59 11:55:59 def _new_conn(self) -> socket.socket: 11:55:59 """Establish a socket connection and set nodelay settings on it. 11:55:59 11:55:59 :return: New socket connection. 11:55:59 """ 11:55:59 try: 11:55:59 sock = connection.create_connection( 11:55:59 (self._dns_host, self.port), 11:55:59 self.timeout, 11:55:59 source_address=self.source_address, 11:55:59 socket_options=self.socket_options, 11:55:59 ) 11:55:59 except socket.gaierror as e: 11:55:59 raise NameResolutionError(self.host, self, e) from e 11:55:59 except SocketTimeout as e: 11:55:59 raise ConnectTimeoutError( 11:55:59 self, 11:55:59 f"Connection to {self.host} timed out. 
(connect timeout={self.timeout})", 11:55:59 ) from e 11:55:59 11:55:59 except OSError as e: 11:55:59 > raise NewConnectionError( 11:55:59 self, f"Failed to establish a new connection: {e}" 11:55:59 ) from e 11:55:59 E urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused 11:55:59 11:55:59 ../.tox/tests_tapi/lib/python3.11/site-packages/urllib3/connection.py:213: NewConnectionError 11:55:59 11:55:59 The above exception was the direct cause of the following exception: 11:55:59 11:55:59 self = 11:55:59 request = , stream = False 11:55:59 timeout = Timeout(connect=30, read=30, total=None), verify = True, cert = None 11:55:59 proxies = OrderedDict() 11:55:59 11:55:59 def send( 11:55:59 self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 11:55:59 ): 11:55:59 """Sends PreparedRequest object. Returns Response object. 11:55:59 11:55:59 :param request: The :class:`PreparedRequest ` being sent. 11:55:59 :param stream: (optional) Whether to stream the request content. 11:55:59 :param timeout: (optional) How long to wait for the server to send 11:55:59 data before giving up, as a float, or a :ref:`(connect timeout, 11:55:59 read timeout) ` tuple. 11:55:59 :type timeout: float or tuple or urllib3 Timeout object 11:55:59 :param verify: (optional) Either a boolean, in which case it controls whether 11:55:59 we verify the server's TLS certificate, or a string, in which case it 11:55:59 must be a path to a CA bundle to use 11:55:59 :param cert: (optional) Any user-provided SSL certificate to be trusted. 11:55:59 :param proxies: (optional) The proxies dictionary to apply to the request. 11:55:59 :rtype: requests.Response 11:55:59 """ 11:55:59 11:55:59 try: 11:55:59 conn = self.get_connection_with_tls_context( 11:55:59 request, verify, proxies=proxies, cert=cert 11:55:59 ) 11:55:59 except LocationValueError as e: 11:55:59 raise InvalidURL(e, request=request) 11:55:59 11:55:59 self.cert_verify(conn, request.url, verify, cert) 11:55:59 url = self.request_url(request, proxies) 11:55:59 self.add_headers( 11:55:59 request, 11:55:59 stream=stream, 11:55:59 timeout=timeout, 11:55:59 verify=verify, 11:55:59 cert=cert, 11:55:59 proxies=proxies, 11:55:59 ) 11:55:59 11:55:59 chunked = not (request.body is None or "Content-Length" in request.headers) 11:55:59 11:55:59 if isinstance(timeout, tuple): 11:55:59 try: 11:55:59 connect, read = timeout 11:55:59 timeout = TimeoutSauce(connect=connect, read=read) 11:55:59 except ValueError: 11:55:59 raise ValueError( 11:55:59 f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 11:55:59 f"or a single float to set both timeouts to the same value." 
11:55:59 ) 11:55:59 elif isinstance(timeout, TimeoutSauce): 11:55:59 pass 11:55:59 else: 11:55:59 timeout = TimeoutSauce(connect=timeout, read=timeout) 11:55:59 11:55:59 try: 11:55:59 > resp = conn.urlopen( 11:55:59 method=request.method, 11:55:59 url=url, 11:55:59 body=request.body, 11:55:59 headers=request.headers, 11:55:59 redirect=False, 11:55:59 assert_same_host=False, 11:55:59 preload_content=False, 11:55:59 decode_content=False, 11:55:59 retries=self.max_retries, 11:55:59 timeout=timeout, 11:55:59 chunked=chunked, 11:55:59 ) 11:55:59 11:55:59 ../.tox/tests_tapi/lib/python3.11/site-packages/requests/adapters.py:667: 11:55:59 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 11:55:59 ../.tox/tests_tapi/lib/python3.11/site-packages/urllib3/connectionpool.py:841: in urlopen 11:55:59 retries = retries.increment( 11:55:59 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 11:55:59 11:55:59 self = Retry(total=0, connect=None, read=False, redirect=None, status=None) 11:55:59 method = 'POST', url = '/rests/operations/tapi-topology:get-topology-details' 11:55:59 response = None 11:55:59 error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused') 11:55:59 _pool = 11:55:59 _stacktrace = 11:55:59 11:55:59 def increment( 11:55:59 self, 11:55:59 method: str | None = None, 11:55:59 url: str | None = None, 11:55:59 response: BaseHTTPResponse | None = None, 11:55:59 error: Exception | None = None, 11:55:59 _pool: ConnectionPool | None = None, 11:55:59 _stacktrace: TracebackType | None = None, 11:55:59 ) -> Self: 11:55:59 """Return a new Retry object with incremented retry counters. 11:55:59 11:55:59 :param response: A response object, or None, if the server did not 11:55:59 return a response. 11:55:59 :type response: :class:`~urllib3.response.BaseHTTPResponse` 11:55:59 :param Exception error: An error encountered during the request, or 11:55:59 None if the response was received successfully. 11:55:59 11:55:59 :return: A new ``Retry`` object. 11:55:59 """ 11:55:59 if self.total is False and error: 11:55:59 # Disabled, indicate to re-raise the error. 11:55:59 raise reraise(type(error), error, _stacktrace) 11:55:59 11:55:59 total = self.total 11:55:59 if total is not None: 11:55:59 total -= 1 11:55:59 11:55:59 connect = self.connect 11:55:59 read = self.read 11:55:59 redirect = self.redirect 11:55:59 status_count = self.status 11:55:59 other = self.other 11:55:59 cause = "unknown" 11:55:59 status = None 11:55:59 redirect_location = None 11:55:59 11:55:59 if error and self._is_connection_error(error): 11:55:59 # Connect retry? 11:55:59 if connect is False: 11:55:59 raise reraise(type(error), error, _stacktrace) 11:55:59 elif connect is not None: 11:55:59 connect -= 1 11:55:59 11:55:59 elif error and self._is_read_error(error): 11:55:59 # Read retry? 11:55:59 if read is False or method is None or not self._is_method_retryable(method): 11:55:59 raise reraise(type(error), error, _stacktrace) 11:55:59 elif read is not None: 11:55:59 read -= 1 11:55:59 11:55:59 elif error: 11:55:59 # Other retry? 11:55:59 if other is not None: 11:55:59 other -= 1 11:55:59 11:55:59 elif response and response.get_redirect_location(): 11:55:59 # Redirect retry? 
11:55:59 if redirect is not None: 11:55:59 redirect -= 1 11:55:59 cause = "too many redirects" 11:55:59 response_redirect_location = response.get_redirect_location() 11:55:59 if response_redirect_location: 11:55:59 redirect_location = response_redirect_location 11:55:59 status = response.status 11:55:59 11:55:59 else: 11:55:59 # Incrementing because of a server error like a 500 in 11:55:59 # status_forcelist and the given method is in the allowed_methods 11:55:59 cause = ResponseError.GENERIC_ERROR 11:55:59 if response and response.status: 11:55:59 if status_count is not None: 11:55:59 status_count -= 1 11:55:59 cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status) 11:55:59 status = response.status 11:55:59 11:55:59 history = self.history + ( 11:55:59 RequestHistory(method, url, error, status, redirect_location), 11:55:59 ) 11:55:59 11:55:59 new_retry = self.new( 11:55:59 total=total, 11:55:59 connect=connect, 11:55:59 read=read, 11:55:59 redirect=redirect, 11:55:59 status=status_count, 11:55:59 other=other, 11:55:59 history=history, 11:55:59 ) 11:55:59 11:55:59 if new_retry.is_exhausted(): 11:55:59 reason = error or ResponseError(cause) 11:55:59 > raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type] 11:55:59 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 11:55:59 E urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=8183): Max retries exceeded with url: /rests/operations/tapi-topology:get-topology-details (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 11:55:59 11:55:59 ../.tox/tests_tapi/lib/python3.11/site-packages/urllib3/util/retry.py:519: MaxRetryError 11:55:59 11:55:59 During handling of the above exception, another exception occurred: 11:55:59 11:55:59 self = 11:55:59 11:55:59 def test_34_check_tapi_topology_T0(self): 11:55:59 self.tapi_topo["topology-id"] = test_utils.T0_MULTILAYER_TOPO_UUID 11:55:59 > response = test_utils.transportpce_api_rpc_request( 11:55:59 'tapi-topology', 'get-topology-details', self.tapi_topo) 11:55:59 11:55:59 transportpce_tests/tapi/test01_abstracted_topology.py:525: 11:55:59 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 11:55:59 transportpce_tests/common/test_utils.py:729: in transportpce_api_rpc_request 11:55:59 response = post_request(url, data) 11:55:59 ^^^^^^^^^^^^^^^^^^^^^^^ 11:55:59 transportpce_tests/common/test_utils.py:142: in post_request 11:55:59 return requests.request( 11:55:59 ../.tox/tests_tapi/lib/python3.11/site-packages/requests/api.py:59: in request 11:55:59 return session.request(method=method, url=url, **kwargs) 11:55:59 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 11:55:59 ../.tox/tests_tapi/lib/python3.11/site-packages/requests/sessions.py:589: in request 11:55:59 resp = self.send(prep, **send_kwargs) 11:55:59 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 11:55:59 ../.tox/tests_tapi/lib/python3.11/site-packages/requests/sessions.py:703: in send 11:55:59 r = adapter.send(request, **kwargs) 11:55:59 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 11:55:59 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 11:55:59 11:55:59 self = 11:55:59 request = , stream = False 11:55:59 timeout = Timeout(connect=30, read=30, total=None), verify = True, cert = None 11:55:59 proxies = OrderedDict() 11:55:59 11:55:59 def send( 11:55:59 self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 11:55:59 ): 11:55:59 """Sends PreparedRequest object. 
Returns Response object. 11:55:59 11:55:59 :param request: The :class:`PreparedRequest ` being sent. 11:55:59 :param stream: (optional) Whether to stream the request content. 11:55:59 :param timeout: (optional) How long to wait for the server to send 11:55:59 data before giving up, as a float, or a :ref:`(connect timeout, 11:55:59 read timeout) ` tuple. 11:55:59 :type timeout: float or tuple or urllib3 Timeout object 11:55:59 :param verify: (optional) Either a boolean, in which case it controls whether 11:55:59 we verify the server's TLS certificate, or a string, in which case it 11:55:59 must be a path to a CA bundle to use 11:55:59 :param cert: (optional) Any user-provided SSL certificate to be trusted. 11:55:59 :param proxies: (optional) The proxies dictionary to apply to the request. 11:55:59 :rtype: requests.Response 11:55:59 """ 11:55:59 11:55:59 try: 11:55:59 conn = self.get_connection_with_tls_context( 11:55:59 request, verify, proxies=proxies, cert=cert 11:55:59 ) 11:55:59 except LocationValueError as e: 11:55:59 raise InvalidURL(e, request=request) 11:55:59 11:55:59 self.cert_verify(conn, request.url, verify, cert) 11:55:59 url = self.request_url(request, proxies) 11:55:59 self.add_headers( 11:55:59 request, 11:55:59 stream=stream, 11:55:59 timeout=timeout, 11:55:59 verify=verify, 11:55:59 cert=cert, 11:55:59 proxies=proxies, 11:55:59 ) 11:55:59 11:55:59 chunked = not (request.body is None or "Content-Length" in request.headers) 11:55:59 11:55:59 if isinstance(timeout, tuple): 11:55:59 try: 11:55:59 connect, read = timeout 11:55:59 timeout = TimeoutSauce(connect=connect, read=read) 11:55:59 except ValueError: 11:55:59 raise ValueError( 11:55:59 f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 11:55:59 f"or a single float to set both timeouts to the same value." 11:55:59 ) 11:55:59 elif isinstance(timeout, TimeoutSauce): 11:55:59 pass 11:55:59 else: 11:55:59 timeout = TimeoutSauce(connect=timeout, read=timeout) 11:55:59 11:55:59 try: 11:55:59 resp = conn.urlopen( 11:55:59 method=request.method, 11:55:59 url=url, 11:55:59 body=request.body, 11:55:59 headers=request.headers, 11:55:59 redirect=False, 11:55:59 assert_same_host=False, 11:55:59 preload_content=False, 11:55:59 decode_content=False, 11:55:59 retries=self.max_retries, 11:55:59 timeout=timeout, 11:55:59 chunked=chunked, 11:55:59 ) 11:55:59 11:55:59 except (ProtocolError, OSError) as err: 11:55:59 raise ConnectionError(err, request=request) 11:55:59 11:55:59 except MaxRetryError as e: 11:55:59 if isinstance(e.reason, ConnectTimeoutError): 11:55:59 # TODO: Remove this in 3.0.0: see #2811 11:55:59 if not isinstance(e.reason, NewConnectionError): 11:55:59 raise ConnectTimeout(e, request=request) 11:55:59 11:55:59 if isinstance(e.reason, ResponseError): 11:55:59 raise RetryError(e, request=request) 11:55:59 11:55:59 if isinstance(e.reason, _ProxyError): 11:55:59 raise ProxyError(e, request=request) 11:55:59 11:55:59 if isinstance(e.reason, _SSLError): 11:55:59 # This branch is for urllib3 v1.22 and later. 
11:55:59 raise SSLError(e, request=request) 11:55:59 11:55:59 > raise ConnectionError(e, request=request) 11:55:59 E requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=8183): Max retries exceeded with url: /rests/operations/tapi-topology:get-topology-details (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 11:55:59 11:55:59 ../.tox/tests_tapi/lib/python3.11/site-packages/requests/adapters.py:700: ConnectionError 11:55:59 ----------------------------- Captured stdout call ----------------------------- 11:55:59 execution of test_34_check_tapi_topology_T0 11:55:59 ________ TransportTapitesting.test_35_connect_sprda_2_n2_to_roadma_pp3 _________ 11:55:59 11:55:59 self = 11:55:59 11:55:59 def _new_conn(self) -> socket.socket: 11:55:59 """Establish a socket connection and set nodelay settings on it. 11:55:59 11:55:59 :return: New socket connection. 11:55:59 """ 11:55:59 try: 11:55:59 > sock = connection.create_connection( 11:55:59 (self._dns_host, self.port), 11:55:59 self.timeout, 11:55:59 source_address=self.source_address, 11:55:59 socket_options=self.socket_options, 11:55:59 ) 11:55:59 11:55:59 ../.tox/tests_tapi/lib/python3.11/site-packages/urllib3/connection.py:198: 11:55:59 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 11:55:59 ../.tox/tests_tapi/lib/python3.11/site-packages/urllib3/util/connection.py:85: in create_connection 11:55:59 raise err 11:55:59 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 11:55:59 11:55:59 address = ('localhost', 8183), timeout = 30, source_address = None 11:55:59 socket_options = [(6, 1, 1)] 11:55:59 11:55:59 def create_connection( 11:55:59 address: tuple[str, int], 11:55:59 timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 11:55:59 source_address: tuple[str, int] | None = None, 11:55:59 socket_options: _TYPE_SOCKET_OPTIONS | None = None, 11:55:59 ) -> socket.socket: 11:55:59 """Connect to *address* and return the socket object. 11:55:59 11:55:59 Convenience function. Connect to *address* (a 2-tuple ``(host, 11:55:59 port)``) and return the socket object. Passing the optional 11:55:59 *timeout* parameter will set the timeout on the socket instance 11:55:59 before attempting to connect. If no *timeout* is supplied, the 11:55:59 global default timeout setting returned by :func:`socket.getdefaulttimeout` 11:55:59 is used. If *source_address* is set it must be a tuple of (host, port) 11:55:59 for the socket to bind as a source address before making the connection. 11:55:59 An host of '' or port 0 tells the OS to use the default. 11:55:59 """ 11:55:59 11:55:59 host, port = address 11:55:59 if host.startswith("["): 11:55:59 host = host.strip("[]") 11:55:59 err = None 11:55:59 11:55:59 # Using the value from allowed_gai_family() in the context of getaddrinfo lets 11:55:59 # us select whether to work with IPv4 DNS records, IPv6 records, or both. 11:55:59 # The original create_connection function always returns all records. 11:55:59 family = allowed_gai_family() 11:55:59 11:55:59 try: 11:55:59 host.encode("idna") 11:55:59 except UnicodeError: 11:55:59 raise LocationParseError(f"'{host}', label empty or too long") from None 11:55:59 11:55:59 for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM): 11:55:59 af, socktype, proto, canonname, sa = res 11:55:59 sock = None 11:55:59 try: 11:55:59 sock = socket.socket(af, socktype, proto) 11:55:59 11:55:59 # If provided, set socket level options before connecting. 
11:55:59 _set_socket_options(sock, socket_options) 11:55:59 11:55:59 if timeout is not _DEFAULT_TIMEOUT: 11:55:59 sock.settimeout(timeout) 11:55:59 if source_address: 11:55:59 sock.bind(source_address) 11:55:59 > sock.connect(sa) 11:55:59 E ConnectionRefusedError: [Errno 111] Connection refused 11:55:59 11:55:59 ../.tox/tests_tapi/lib/python3.11/site-packages/urllib3/util/connection.py:73: ConnectionRefusedError 11:55:59 11:55:59 The above exception was the direct cause of the following exception: 11:55:59 11:55:59 self = 11:55:59 method = 'POST' 11:55:59 url = '/rests/operations/transportpce-networkutils:init-xpdr-rdm-links' 11:55:59 body = '{"input": {"links-input": {"xpdr-node": "SPDR-SA1", "xpdr-num": "2", "network-num": "2", "rdm-node": "ROADM-A1", "srg-num": "1", "termination-point-num": "SRG1-PP3-TXRX"}}}' 11:55:59 headers = {'User-Agent': 'python-requests/2.32.4', 'Accept-Encoding': 'gzip, deflate', 'Accept': 'application/json', 'Connection': 'keep-alive', 'Content-Type': 'application/json', 'Content-Length': '172', 'Authorization': 'Basic YWRtaW46YWRtaW4='} 11:55:59 retries = Retry(total=0, connect=None, read=False, redirect=None, status=None) 11:55:59 redirect = False, assert_same_host = False 11:55:59 timeout = Timeout(connect=30, read=30, total=None), pool_timeout = None 11:55:59 release_conn = False, chunked = False, body_pos = None, preload_content = False 11:55:59 decode_content = False, response_kw = {} 11:55:59 parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/rests/operations/transportpce-networkutils:init-xpdr-rdm-links', query=None, fragment=None) 11:55:59 destination_scheme = None, conn = None, release_this_conn = True 11:55:59 http_tunnel_required = False, err = None, clean_exit = False 11:55:59 11:55:59 def urlopen( # type: ignore[override] 11:55:59 self, 11:55:59 method: str, 11:55:59 url: str, 11:55:59 body: _TYPE_BODY | None = None, 11:55:59 headers: typing.Mapping[str, str] | None = None, 11:55:59 retries: Retry | bool | int | None = None, 11:55:59 redirect: bool = True, 11:55:59 assert_same_host: bool = True, 11:55:59 timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 11:55:59 pool_timeout: int | None = None, 11:55:59 release_conn: bool | None = None, 11:55:59 chunked: bool = False, 11:55:59 body_pos: _TYPE_BODY_POSITION | None = None, 11:55:59 preload_content: bool = True, 11:55:59 decode_content: bool = True, 11:55:59 **response_kw: typing.Any, 11:55:59 ) -> BaseHTTPResponse: 11:55:59 """ 11:55:59 Get a connection from the pool and perform an HTTP request. This is the 11:55:59 lowest level call for making a request, so you'll need to specify all 11:55:59 the raw details. 11:55:59 11:55:59 .. note:: 11:55:59 11:55:59 More commonly, it's appropriate to use a convenience method 11:55:59 such as :meth:`request`. 11:55:59 11:55:59 .. note:: 11:55:59 11:55:59 `release_conn` will only behave as expected if 11:55:59 `preload_content=False` because we want to make 11:55:59 `preload_content=False` the default behaviour someday soon without 11:55:59 breaking backwards compatibility. 11:55:59 11:55:59 :param method: 11:55:59 HTTP request method (such as GET, POST, PUT, etc.) 11:55:59 11:55:59 :param url: 11:55:59 The URL to perform the request on. 11:55:59 11:55:59 :param body: 11:55:59 Data to send in the request body, either :class:`str`, :class:`bytes`, 11:55:59 an iterable of :class:`str`/:class:`bytes`, or a file-like object. 11:55:59 11:55:59 :param headers: 11:55:59 Dictionary of custom headers to send, such as User-Agent, 11:55:59 If-None-Match, etc. 
If None, pool headers are used. If provided, these headers completely replace any pool-specific headers.
11:55:59 [... remainder of HTTPConnectionPool.urlopen() docstring and body omitted - identical to the test_34 traceback above; it fails at `response = self._make_request(...)` ...]
11:55:59
11:55:59 ../.tox/tests_tapi/lib/python3.11/site-packages/urllib3/connectionpool.py:787:
11:55:59 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
11:55:59 ../.tox/tests_tapi/lib/python3.11/site-packages/urllib3/connectionpool.py:493: in _make_request
11:55:59 ../.tox/tests_tapi/lib/python3.11/site-packages/urllib3/connection.py:494: in request
11:55:59 /opt/pyenv/versions/3.11.10/lib/python3.11/http/client.py:1298: in endheaders
11:55:59 /opt/pyenv/versions/3.11.10/lib/python3.11/http/client.py:1058: in _send_output
11:55:59 /opt/pyenv/versions/3.11.10/lib/python3.11/http/client.py:996: in send
11:55:59 ../.tox/tests_tapi/lib/python3.11/site-packages/urllib3/connection.py:325: in connect
11:55:59 [... HTTPConnection._new_conn() source omitted - identical to the test_34 traceback above ...]
11:55:59 E urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused
11:55:59
11:55:59 ../.tox/tests_tapi/lib/python3.11/site-packages/urllib3/connection.py:213: NewConnectionError
11:55:59
11:55:59 The above exception was the direct cause of the following exception:
11:55:59
11:55:59 self =
11:55:59 request = , stream = False
11:55:59 timeout = Timeout(connect=30, read=30, total=None), verify = True, cert = None
11:55:59 proxies = OrderedDict()
11:55:59 [... HTTPAdapter.send() signature, docstring and body omitted - identical to the test_34 traceback above; it fails at `resp = conn.urlopen(...)` ...]
11:55:59
11:55:59 ../.tox/tests_tapi/lib/python3.11/site-packages/requests/adapters.py:667:
11:55:59 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
11:55:59 ../.tox/tests_tapi/lib/python3.11/site-packages/urllib3/connectionpool.py:841: in urlopen
11:55:59 retries = retries.increment(
11:55:59 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
11:55:59
11:55:59 self = Retry(total=0, connect=None, read=False, redirect=None, status=None)
11:55:59 method = 'POST'
11:55:59 url = '/rests/operations/transportpce-networkutils:init-xpdr-rdm-links'
11:55:59 response = None
11:55:59 error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')
11:55:59 _pool =
11:55:59 _stacktrace =
11:55:59 [... Retry.increment() signature, docstring and body omitted - identical to the test_34 traceback above ...]
11:55:59 > raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type]
11:55:59 E urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=8183): Max retries exceeded with url: /rests/operations/transportpce-networkutils:init-xpdr-rdm-links (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused'))
11:55:59
11:55:59 ../.tox/tests_tapi/lib/python3.11/site-packages/urllib3/util/retry.py:519: MaxRetryError
11:55:59
11:55:59 During handling of the above exception, another exception occurred:
11:55:59
11:55:59 self =
11:55:59
11:55:59 def test_35_connect_sprda_2_n2_to_roadma_pp3(self):
11:55:59 > response = test_utils.transportpce_api_rpc_request(
11:55:59 'transportpce-networkutils', 'init-xpdr-rdm-links',
11:55:59 {'links-input': {'xpdr-node': 'SPDR-SA1', 'xpdr-num': '2', 'network-num': '2',
11:55:59 'rdm-node': 'ROADM-A1', 'srg-num': '1', 'termination-point-num': 'SRG1-PP3-TXRX'}})
11:55:59
11:55:59 transportpce_tests/tapi/test01_abstracted_topology.py:549:
11:55:59 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
11:55:59 transportpce_tests/common/test_utils.py:729: in transportpce_api_rpc_request
11:55:59 response = post_request(url, data)
11:55:59 transportpce_tests/common/test_utils.py:142: in post_request
11:55:59 return requests.request(
11:55:59 ../.tox/tests_tapi/lib/python3.11/site-packages/requests/api.py:59: in request
11:55:59 return session.request(method=method, url=url, **kwargs)
11:55:59 ../.tox/tests_tapi/lib/python3.11/site-packages/requests/sessions.py:589: in request
11:55:59 resp = self.send(prep, **send_kwargs)
11:55:59 ../.tox/tests_tapi/lib/python3.11/site-packages/requests/sessions.py:703: in send
11:55:59 r = adapter.send(request, **kwargs)
11:55:59 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
11:55:59
11:55:59 self =
11:55:59 request = , stream = False
11:55:59 timeout = Timeout(connect=30, read=30, total=None), verify = True, cert = None
11:55:59 proxies = OrderedDict()
11:55:59 [... HTTPAdapter.send() signature, docstring and body start omitted - identical to the test_34 traceback above ...]
11:55:59 if isinstance(timeout, tuple):
11:55:59 try:
11:55:59 connect, read = timeout
11:55:59 timeout = TimeoutSauce(connect=connect, read=read)
11:55:59 except ValueError:
11:55:59 raise ValueError(
11:55:59 f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, "
11:55:59 f"or a single float to set both timeouts to the same value."
11:55:59 ) 11:55:59 elif isinstance(timeout, TimeoutSauce): 11:55:59 pass 11:55:59 else: 11:55:59 timeout = TimeoutSauce(connect=timeout, read=timeout) 11:55:59 11:55:59 try: 11:55:59 resp = conn.urlopen( 11:55:59 method=request.method, 11:55:59 url=url, 11:55:59 body=request.body, 11:55:59 headers=request.headers, 11:55:59 redirect=False, 11:55:59 assert_same_host=False, 11:55:59 preload_content=False, 11:55:59 decode_content=False, 11:55:59 retries=self.max_retries, 11:55:59 timeout=timeout, 11:55:59 chunked=chunked, 11:55:59 ) 11:55:59 11:55:59 except (ProtocolError, OSError) as err: 11:55:59 raise ConnectionError(err, request=request) 11:55:59 11:55:59 except MaxRetryError as e: 11:55:59 if isinstance(e.reason, ConnectTimeoutError): 11:55:59 # TODO: Remove this in 3.0.0: see #2811 11:55:59 if not isinstance(e.reason, NewConnectionError): 11:55:59 raise ConnectTimeout(e, request=request) 11:55:59 11:55:59 if isinstance(e.reason, ResponseError): 11:55:59 raise RetryError(e, request=request) 11:55:59 11:55:59 if isinstance(e.reason, _ProxyError): 11:55:59 raise ProxyError(e, request=request) 11:55:59 11:55:59 if isinstance(e.reason, _SSLError): 11:55:59 # This branch is for urllib3 v1.22 and later. 11:55:59 raise SSLError(e, request=request) 11:55:59 11:55:59 > raise ConnectionError(e, request=request) 11:55:59 E requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=8183): Max retries exceeded with url: /rests/operations/transportpce-networkutils:init-xpdr-rdm-links (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 11:55:59 11:55:59 ../.tox/tests_tapi/lib/python3.11/site-packages/requests/adapters.py:700: ConnectionError 11:55:59 ----------------------------- Captured stdout call ----------------------------- 11:55:59 execution of test_35_connect_sprda_2_n2_to_roadma_pp3 11:55:59 ________ TransportTapitesting.test_36_connect_roadma_pp3_to_spdra_2_n2 _________ 11:55:59 11:55:59 self = 11:55:59 11:55:59 def _new_conn(self) -> socket.socket: 11:55:59 """Establish a socket connection and set nodelay settings on it. 11:55:59 11:55:59 :return: New socket connection. 11:55:59 """ 11:55:59 try: 11:55:59 > sock = connection.create_connection( 11:55:59 (self._dns_host, self.port), 11:55:59 self.timeout, 11:55:59 source_address=self.source_address, 11:55:59 socket_options=self.socket_options, 11:55:59 ) 11:55:59 11:55:59 ../.tox/tests_tapi/lib/python3.11/site-packages/urllib3/connection.py:198: 11:55:59 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 11:55:59 ../.tox/tests_tapi/lib/python3.11/site-packages/urllib3/util/connection.py:85: in create_connection 11:55:59 raise err 11:55:59 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 11:55:59 11:55:59 address = ('localhost', 8183), timeout = 30, source_address = None 11:55:59 socket_options = [(6, 1, 1)] 11:55:59 11:55:59 def create_connection( 11:55:59 address: tuple[str, int], 11:55:59 timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 11:55:59 source_address: tuple[str, int] | None = None, 11:55:59 socket_options: _TYPE_SOCKET_OPTIONS | None = None, 11:55:59 ) -> socket.socket: 11:55:59 """Connect to *address* and return the socket object. 11:55:59 11:55:59 Convenience function. Connect to *address* (a 2-tuple ``(host, 11:55:59 port)``) and return the socket object. Passing the optional 11:55:59 *timeout* parameter will set the timeout on the socket instance 11:55:59 before attempting to connect. 
If no *timeout* is supplied, the global default timeout setting returned by :func:`socket.getdefaulttimeout` is used.
11:55:59 [... remainder of create_connection() docstring and body omitted - identical to the test_34 traceback above ...]
11:55:59 > sock.connect(sa)
11:55:59 E ConnectionRefusedError: [Errno 111] Connection refused
11:55:59
11:55:59 ../.tox/tests_tapi/lib/python3.11/site-packages/urllib3/util/connection.py:73: ConnectionRefusedError
11:55:59
11:55:59 The above exception was the direct cause of the following exception:
11:55:59
11:55:59 self =
11:55:59 method = 'POST'
11:55:59 url = '/rests/operations/transportpce-networkutils:init-rdm-xpdr-links'
11:55:59 body = '{"input": {"links-input": {"xpdr-node": "SPDR-SA1", "xpdr-num": "2", "network-num": "2", "rdm-node": "ROADM-A1", "srg-num": "1", "termination-point-num": "SRG1-PP3-TXRX"}}}'
11:55:59 headers = {'User-Agent': 'python-requests/2.32.4', 'Accept-Encoding': 'gzip, deflate', 'Accept': 'application/json', 'Connection': 'keep-alive', 'Content-Type': 'application/json', 'Content-Length': '172', 'Authorization': 'Basic YWRtaW46YWRtaW4='}
11:55:59 retries = Retry(total=0, connect=None, read=False, redirect=None, status=None)
11:55:59 redirect = False, assert_same_host = False
11:55:59 timeout = Timeout(connect=30, read=30, total=None), pool_timeout = None
11:55:59 release_conn = False, chunked = False, body_pos = None, preload_content = False
11:55:59 decode_content = False, response_kw = {}
11:55:59 parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/rests/operations/transportpce-networkutils:init-rdm-xpdr-links', query=None, fragment=None)
11:55:59 destination_scheme = None, conn = None, release_this_conn = True
11:55:59 http_tunnel_required = False, err = None, clean_exit = False
11:55:59 [... HTTPConnectionPool.urlopen() signature, docstring and body omitted - identical to the test_34 traceback above; it fails at `response = self._make_request(...)` ...]
11:55:59
11:55:59 ../.tox/tests_tapi/lib/python3.11/site-packages/urllib3/connectionpool.py:787:
11:55:59 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
11:55:59 ../.tox/tests_tapi/lib/python3.11/site-packages/urllib3/connectionpool.py:493: in _make_request
11:55:59 ../.tox/tests_tapi/lib/python3.11/site-packages/urllib3/connection.py:494: in request
11:55:59 /opt/pyenv/versions/3.11.10/lib/python3.11/http/client.py:1298: in endheaders
11:55:59 /opt/pyenv/versions/3.11.10/lib/python3.11/http/client.py:1058: in _send_output
11:55:59 /opt/pyenv/versions/3.11.10/lib/python3.11/http/client.py:996: in send
11:55:59 ../.tox/tests_tapi/lib/python3.11/site-packages/urllib3/connection.py:325: in connect
11:55:59 [... HTTPConnection._new_conn() source omitted - identical to the test_34 traceback above ...]
11:55:59 E urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused
11:55:59
11:55:59 ../.tox/tests_tapi/lib/python3.11/site-packages/urllib3/connection.py:213: NewConnectionError
11:55:59
11:55:59 The above exception was the direct cause of the following exception:
11:55:59
11:55:59 self =
11:55:59 request = , stream = False
11:55:59 timeout = Timeout(connect=30, read=30, total=None), verify = True, cert = None
11:55:59 proxies = OrderedDict()
11:55:59 [... HTTPAdapter.send() signature, docstring and body start omitted - identical to the test_34 traceback above ...]
11:55:59 if isinstance(timeout, tuple):
11:55:59 try:
11:55:59 connect, read = timeout
11:55:59 timeout = TimeoutSauce(connect=connect, read=read)
11:55:59 except ValueError:
11:55:59 raise ValueError(
11:55:59 f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, "
11:55:59 f"or a single float to set both timeouts to the same value."
11:55:59 ) 11:55:59 elif isinstance(timeout, TimeoutSauce): 11:55:59 pass 11:55:59 else: 11:55:59 timeout = TimeoutSauce(connect=timeout, read=timeout) 11:55:59 11:55:59 try: 11:55:59 > resp = conn.urlopen( 11:55:59 method=request.method, 11:55:59 url=url, 11:55:59 body=request.body, 11:55:59 headers=request.headers, 11:55:59 redirect=False, 11:55:59 assert_same_host=False, 11:55:59 preload_content=False, 11:55:59 decode_content=False, 11:55:59 retries=self.max_retries, 11:55:59 timeout=timeout, 11:55:59 chunked=chunked, 11:55:59 ) 11:55:59 11:55:59 ../.tox/tests_tapi/lib/python3.11/site-packages/requests/adapters.py:667: 11:55:59 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 11:55:59 ../.tox/tests_tapi/lib/python3.11/site-packages/urllib3/connectionpool.py:841: in urlopen 11:55:59 retries = retries.increment( 11:55:59 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 11:55:59 11:55:59 self = Retry(total=0, connect=None, read=False, redirect=None, status=None) 11:55:59 method = 'POST' 11:55:59 url = '/rests/operations/transportpce-networkutils:init-rdm-xpdr-links' 11:55:59 response = None 11:55:59 error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused') 11:55:59 _pool = 11:55:59 _stacktrace = 11:55:59 11:55:59 def increment( 11:55:59 self, 11:55:59 method: str | None = None, 11:55:59 url: str | None = None, 11:55:59 response: BaseHTTPResponse | None = None, 11:55:59 error: Exception | None = None, 11:55:59 _pool: ConnectionPool | None = None, 11:55:59 _stacktrace: TracebackType | None = None, 11:55:59 ) -> Self: 11:55:59 """Return a new Retry object with incremented retry counters. 11:55:59 11:55:59 :param response: A response object, or None, if the server did not 11:55:59 return a response. 11:55:59 :type response: :class:`~urllib3.response.BaseHTTPResponse` 11:55:59 :param Exception error: An error encountered during the request, or 11:55:59 None if the response was received successfully. 11:55:59 11:55:59 :return: A new ``Retry`` object. 11:55:59 """ 11:55:59 if self.total is False and error: 11:55:59 # Disabled, indicate to re-raise the error. 11:55:59 raise reraise(type(error), error, _stacktrace) 11:55:59 11:55:59 total = self.total 11:55:59 if total is not None: 11:55:59 total -= 1 11:55:59 11:55:59 connect = self.connect 11:55:59 read = self.read 11:55:59 redirect = self.redirect 11:55:59 status_count = self.status 11:55:59 other = self.other 11:55:59 cause = "unknown" 11:55:59 status = None 11:55:59 redirect_location = None 11:55:59 11:55:59 if error and self._is_connection_error(error): 11:55:59 # Connect retry? 11:55:59 if connect is False: 11:55:59 raise reraise(type(error), error, _stacktrace) 11:55:59 elif connect is not None: 11:55:59 connect -= 1 11:55:59 11:55:59 elif error and self._is_read_error(error): 11:55:59 # Read retry? 11:55:59 if read is False or method is None or not self._is_method_retryable(method): 11:55:59 raise reraise(type(error), error, _stacktrace) 11:55:59 elif read is not None: 11:55:59 read -= 1 11:55:59 11:55:59 elif error: 11:55:59 # Other retry? 11:55:59 if other is not None: 11:55:59 other -= 1 11:55:59 11:55:59 elif response and response.get_redirect_location(): 11:55:59 # Redirect retry? 
11:55:59 if redirect is not None: 11:55:59 redirect -= 1 11:55:59 cause = "too many redirects" 11:55:59 response_redirect_location = response.get_redirect_location() 11:55:59 if response_redirect_location: 11:55:59 redirect_location = response_redirect_location 11:55:59 status = response.status 11:55:59 11:55:59 else: 11:55:59 # Incrementing because of a server error like a 500 in 11:55:59 # status_forcelist and the given method is in the allowed_methods 11:55:59 cause = ResponseError.GENERIC_ERROR 11:55:59 if response and response.status: 11:55:59 if status_count is not None: 11:55:59 status_count -= 1 11:55:59 cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status) 11:55:59 status = response.status 11:55:59 11:55:59 history = self.history + ( 11:55:59 RequestHistory(method, url, error, status, redirect_location), 11:55:59 ) 11:55:59 11:55:59 new_retry = self.new( 11:55:59 total=total, 11:55:59 connect=connect, 11:55:59 read=read, 11:55:59 redirect=redirect, 11:55:59 status=status_count, 11:55:59 other=other, 11:55:59 history=history, 11:55:59 ) 11:55:59 11:55:59 if new_retry.is_exhausted(): 11:55:59 reason = error or ResponseError(cause) 11:55:59 > raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type] 11:55:59 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 11:55:59 E urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=8183): Max retries exceeded with url: /rests/operations/transportpce-networkutils:init-rdm-xpdr-links (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 11:55:59 11:55:59 ../.tox/tests_tapi/lib/python3.11/site-packages/urllib3/util/retry.py:519: MaxRetryError 11:55:59 11:55:59 During handling of the above exception, another exception occurred: 11:55:59 11:55:59 self = 11:55:59 11:55:59 def test_36_connect_roadma_pp3_to_spdra_2_n2(self): 11:55:59 > response = test_utils.transportpce_api_rpc_request( 11:55:59 'transportpce-networkutils', 'init-rdm-xpdr-links', 11:55:59 {'links-input': {'xpdr-node': 'SPDR-SA1', 'xpdr-num': '2', 'network-num': '2', 11:55:59 'rdm-node': 'ROADM-A1', 'srg-num': '1', 'termination-point-num': 'SRG1-PP3-TXRX'}}) 11:55:59 11:55:59 transportpce_tests/tapi/test01_abstracted_topology.py:559: 11:55:59 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 11:55:59 transportpce_tests/common/test_utils.py:729: in transportpce_api_rpc_request 11:55:59 response = post_request(url, data) 11:55:59 ^^^^^^^^^^^^^^^^^^^^^^^ 11:55:59 transportpce_tests/common/test_utils.py:142: in post_request 11:55:59 return requests.request( 11:55:59 ../.tox/tests_tapi/lib/python3.11/site-packages/requests/api.py:59: in request 11:55:59 return session.request(method=method, url=url, **kwargs) 11:55:59 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 11:55:59 ../.tox/tests_tapi/lib/python3.11/site-packages/requests/sessions.py:589: in request 11:55:59 resp = self.send(prep, **send_kwargs) 11:55:59 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 11:55:59 ../.tox/tests_tapi/lib/python3.11/site-packages/requests/sessions.py:703: in send 11:55:59 r = adapter.send(request, **kwargs) 11:55:59 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 11:55:59 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 11:55:59 11:55:59 self = 11:55:59 request = , stream = False 11:55:59 timeout = Timeout(connect=30, read=30, total=None), verify = True, cert = None 11:55:59 proxies = OrderedDict() 11:55:59 11:55:59 def send( 11:55:59 self, request, stream=False, 
timeout=None, verify=True, cert=None, proxies=None 11:55:59 ): 11:55:59 """Sends PreparedRequest object. Returns Response object. 11:55:59 11:55:59 :param request: The :class:`PreparedRequest ` being sent. 11:55:59 :param stream: (optional) Whether to stream the request content. 11:55:59 :param timeout: (optional) How long to wait for the server to send 11:55:59 data before giving up, as a float, or a :ref:`(connect timeout, 11:55:59 read timeout) ` tuple. 11:55:59 :type timeout: float or tuple or urllib3 Timeout object 11:55:59 :param verify: (optional) Either a boolean, in which case it controls whether 11:55:59 we verify the server's TLS certificate, or a string, in which case it 11:55:59 must be a path to a CA bundle to use 11:55:59 :param cert: (optional) Any user-provided SSL certificate to be trusted. 11:55:59 :param proxies: (optional) The proxies dictionary to apply to the request. 11:55:59 :rtype: requests.Response 11:55:59 """ 11:55:59 11:55:59 try: 11:55:59 conn = self.get_connection_with_tls_context( 11:55:59 request, verify, proxies=proxies, cert=cert 11:55:59 ) 11:55:59 except LocationValueError as e: 11:55:59 raise InvalidURL(e, request=request) 11:55:59 11:55:59 self.cert_verify(conn, request.url, verify, cert) 11:55:59 url = self.request_url(request, proxies) 11:55:59 self.add_headers( 11:55:59 request, 11:55:59 stream=stream, 11:55:59 timeout=timeout, 11:55:59 verify=verify, 11:55:59 cert=cert, 11:55:59 proxies=proxies, 11:55:59 ) 11:55:59 11:55:59 chunked = not (request.body is None or "Content-Length" in request.headers) 11:55:59 11:55:59 if isinstance(timeout, tuple): 11:55:59 try: 11:55:59 connect, read = timeout 11:55:59 timeout = TimeoutSauce(connect=connect, read=read) 11:55:59 except ValueError: 11:55:59 raise ValueError( 11:55:59 f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 11:55:59 f"or a single float to set both timeouts to the same value." 11:55:59 ) 11:55:59 elif isinstance(timeout, TimeoutSauce): 11:55:59 pass 11:55:59 else: 11:55:59 timeout = TimeoutSauce(connect=timeout, read=timeout) 11:55:59 11:55:59 try: 11:55:59 resp = conn.urlopen( 11:55:59 method=request.method, 11:55:59 url=url, 11:55:59 body=request.body, 11:55:59 headers=request.headers, 11:55:59 redirect=False, 11:55:59 assert_same_host=False, 11:55:59 preload_content=False, 11:55:59 decode_content=False, 11:55:59 retries=self.max_retries, 11:55:59 timeout=timeout, 11:55:59 chunked=chunked, 11:55:59 ) 11:55:59 11:55:59 except (ProtocolError, OSError) as err: 11:55:59 raise ConnectionError(err, request=request) 11:55:59 11:55:59 except MaxRetryError as e: 11:55:59 if isinstance(e.reason, ConnectTimeoutError): 11:55:59 # TODO: Remove this in 3.0.0: see #2811 11:55:59 if not isinstance(e.reason, NewConnectionError): 11:55:59 raise ConnectTimeout(e, request=request) 11:55:59 11:55:59 if isinstance(e.reason, ResponseError): 11:55:59 raise RetryError(e, request=request) 11:55:59 11:55:59 if isinstance(e.reason, _ProxyError): 11:55:59 raise ProxyError(e, request=request) 11:55:59 11:55:59 if isinstance(e.reason, _SSLError): 11:55:59 # This branch is for urllib3 v1.22 and later. 
11:55:59 raise SSLError(e, request=request) 11:55:59 11:55:59 > raise ConnectionError(e, request=request) 11:55:59 E requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=8183): Max retries exceeded with url: /rests/operations/transportpce-networkutils:init-rdm-xpdr-links (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 11:55:59 11:55:59 ../.tox/tests_tapi/lib/python3.11/site-packages/requests/adapters.py:700: ConnectionError 11:55:59 ----------------------------- Captured stdout call ----------------------------- 11:55:59 execution of test_36_connect_roadma_pp3_to_spdra_2_n2 11:55:59 _____________ TransportTapitesting.test_37_check_tapi_topology_T0 ______________ 11:55:59 11:55:59 self = 11:55:59 11:55:59 def _new_conn(self) -> socket.socket: 11:55:59 """Establish a socket connection and set nodelay settings on it. 11:55:59 11:55:59 :return: New socket connection. 11:55:59 """ 11:55:59 try: 11:55:59 > sock = connection.create_connection( 11:55:59 (self._dns_host, self.port), 11:55:59 self.timeout, 11:55:59 source_address=self.source_address, 11:55:59 socket_options=self.socket_options, 11:55:59 ) 11:55:59 11:55:59 ../.tox/tests_tapi/lib/python3.11/site-packages/urllib3/connection.py:198: 11:55:59 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 11:55:59 ../.tox/tests_tapi/lib/python3.11/site-packages/urllib3/util/connection.py:85: in create_connection 11:55:59 raise err 11:55:59 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 11:55:59 11:55:59 address = ('localhost', 8183), timeout = 30, source_address = None 11:55:59 socket_options = [(6, 1, 1)] 11:55:59 11:55:59 def create_connection( 11:55:59 address: tuple[str, int], 11:55:59 timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 11:55:59 source_address: tuple[str, int] | None = None, 11:55:59 socket_options: _TYPE_SOCKET_OPTIONS | None = None, 11:55:59 ) -> socket.socket: 11:55:59 """Connect to *address* and return the socket object. 11:55:59 11:55:59 Convenience function. Connect to *address* (a 2-tuple ``(host, 11:55:59 port)``) and return the socket object. Passing the optional 11:55:59 *timeout* parameter will set the timeout on the socket instance 11:55:59 before attempting to connect. If no *timeout* is supplied, the 11:55:59 global default timeout setting returned by :func:`socket.getdefaulttimeout` 11:55:59 is used. If *source_address* is set it must be a tuple of (host, port) 11:55:59 for the socket to bind as a source address before making the connection. 11:55:59 An host of '' or port 0 tells the OS to use the default. 11:55:59 """ 11:55:59 11:55:59 host, port = address 11:55:59 if host.startswith("["): 11:55:59 host = host.strip("[]") 11:55:59 err = None 11:55:59 11:55:59 # Using the value from allowed_gai_family() in the context of getaddrinfo lets 11:55:59 # us select whether to work with IPv4 DNS records, IPv6 records, or both. 11:55:59 # The original create_connection function always returns all records. 
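The ConnectionRefusedError raised inside create_connection() above simply means that nothing is listening on localhost:8183 (the RESTCONF port these tests target) at the moment the RPC is sent. A minimal stdlib sketch to confirm that independently of urllib3; the helper name is illustrative, host and port are taken from the traceback:

import socket

def port_is_open(host: str, port: int, timeout: float = 2.0) -> bool:
    # True only if a TCP connection to (host, port) can be established.
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        # ConnectionRefusedError ([Errno 111]) is a subclass of OSError.
        return False

# Expected to print False while the controller behind these tests is down.
print(port_is_open("localhost", 8183))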
11:55:59 family = allowed_gai_family() 11:55:59 11:55:59 try: 11:55:59 host.encode("idna") 11:55:59 except UnicodeError: 11:55:59 raise LocationParseError(f"'{host}', label empty or too long") from None 11:55:59 11:55:59 for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM): 11:55:59 af, socktype, proto, canonname, sa = res 11:55:59 sock = None 11:55:59 try: 11:55:59 sock = socket.socket(af, socktype, proto) 11:55:59 11:55:59 # If provided, set socket level options before connecting. 11:55:59 _set_socket_options(sock, socket_options) 11:55:59 11:55:59 if timeout is not _DEFAULT_TIMEOUT: 11:55:59 sock.settimeout(timeout) 11:55:59 if source_address: 11:55:59 sock.bind(source_address) 11:55:59 > sock.connect(sa) 11:55:59 E ConnectionRefusedError: [Errno 111] Connection refused 11:55:59 11:55:59 ../.tox/tests_tapi/lib/python3.11/site-packages/urllib3/util/connection.py:73: ConnectionRefusedError 11:55:59 11:55:59 The above exception was the direct cause of the following exception: 11:55:59 11:55:59 self = 11:55:59 method = 'POST', url = '/rests/operations/tapi-topology:get-topology-details' 11:55:59 body = '{"input": {"topology-id": "747c670e-7a07-3dab-b379-5b1cd17402a3"}}' 11:55:59 headers = {'User-Agent': 'python-requests/2.32.4', 'Accept-Encoding': 'gzip, deflate', 'Accept': 'application/json', 'Connection': 'keep-alive', 'Content-Type': 'application/json', 'Content-Length': '66', 'Authorization': 'Basic YWRtaW46YWRtaW4='} 11:55:59 retries = Retry(total=0, connect=None, read=False, redirect=None, status=None) 11:55:59 redirect = False, assert_same_host = False 11:55:59 timeout = Timeout(connect=30, read=30, total=None), pool_timeout = None 11:55:59 release_conn = False, chunked = False, body_pos = None, preload_content = False 11:55:59 decode_content = False, response_kw = {} 11:55:59 parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/rests/operations/tapi-topology:get-topology-details', query=None, fragment=None) 11:55:59 destination_scheme = None, conn = None, release_this_conn = True 11:55:59 http_tunnel_required = False, err = None, clean_exit = False 11:55:59 11:55:59 def urlopen( # type: ignore[override] 11:55:59 self, 11:55:59 method: str, 11:55:59 url: str, 11:55:59 body: _TYPE_BODY | None = None, 11:55:59 headers: typing.Mapping[str, str] | None = None, 11:55:59 retries: Retry | bool | int | None = None, 11:55:59 redirect: bool = True, 11:55:59 assert_same_host: bool = True, 11:55:59 timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 11:55:59 pool_timeout: int | None = None, 11:55:59 release_conn: bool | None = None, 11:55:59 chunked: bool = False, 11:55:59 body_pos: _TYPE_BODY_POSITION | None = None, 11:55:59 preload_content: bool = True, 11:55:59 decode_content: bool = True, 11:55:59 **response_kw: typing.Any, 11:55:59 ) -> BaseHTTPResponse: 11:55:59 """ 11:55:59 Get a connection from the pool and perform an HTTP request. This is the 11:55:59 lowest level call for making a request, so you'll need to specify all 11:55:59 the raw details. 11:55:59 11:55:59 .. note:: 11:55:59 11:55:59 More commonly, it's appropriate to use a convenience method 11:55:59 such as :meth:`request`. 11:55:59 11:55:59 .. note:: 11:55:59 11:55:59 `release_conn` will only behave as expected if 11:55:59 `preload_content=False` because we want to make 11:55:59 `preload_content=False` the default behaviour someday soon without 11:55:59 breaking backwards compatibility. 11:55:59 11:55:59 :param method: 11:55:59 HTTP request method (such as GET, POST, PUT, etc.) 
11:55:59 11:55:59 :param url: 11:55:59 The URL to perform the request on. 11:55:59 11:55:59 :param body: 11:55:59 Data to send in the request body, either :class:`str`, :class:`bytes`, 11:55:59 an iterable of :class:`str`/:class:`bytes`, or a file-like object. 11:55:59 11:55:59 :param headers: 11:55:59 Dictionary of custom headers to send, such as User-Agent, 11:55:59 If-None-Match, etc. If None, pool headers are used. If provided, 11:55:59 these headers completely replace any pool-specific headers. 11:55:59 11:55:59 :param retries: 11:55:59 Configure the number of retries to allow before raising a 11:55:59 :class:`~urllib3.exceptions.MaxRetryError` exception. 11:55:59 11:55:59 If ``None`` (default) will retry 3 times, see ``Retry.DEFAULT``. Pass a 11:55:59 :class:`~urllib3.util.retry.Retry` object for fine-grained control 11:55:59 over different types of retries. 11:55:59 Pass an integer number to retry connection errors that many times, 11:55:59 but no other types of errors. Pass zero to never retry. 11:55:59 11:55:59 If ``False``, then retries are disabled and any exception is raised 11:55:59 immediately. Also, instead of raising a MaxRetryError on redirects, 11:55:59 the redirect response will be returned. 11:55:59 11:55:59 :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. 11:55:59 11:55:59 :param redirect: 11:55:59 If True, automatically handle redirects (status codes 301, 302, 11:55:59 303, 307, 308). Each redirect counts as a retry. Disabling retries 11:55:59 will disable redirect, too. 11:55:59 11:55:59 :param assert_same_host: 11:55:59 If ``True``, will make sure that the host of the pool requests is 11:55:59 consistent else will raise HostChangedError. When ``False``, you can 11:55:59 use the pool on an HTTP proxy and request foreign hosts. 11:55:59 11:55:59 :param timeout: 11:55:59 If specified, overrides the default timeout for this one 11:55:59 request. It may be a float (in seconds) or an instance of 11:55:59 :class:`urllib3.util.Timeout`. 11:55:59 11:55:59 :param pool_timeout: 11:55:59 If set and the pool is set to block=True, then this method will 11:55:59 block for ``pool_timeout`` seconds and raise EmptyPoolError if no 11:55:59 connection is available within the time period. 11:55:59 11:55:59 :param bool preload_content: 11:55:59 If True, the response's body will be preloaded into memory. 11:55:59 11:55:59 :param bool decode_content: 11:55:59 If True, will attempt to decode the body based on the 11:55:59 'content-encoding' header. 11:55:59 11:55:59 :param release_conn: 11:55:59 If False, then the urlopen call will not release the connection 11:55:59 back into the pool once a response is received (but will release if 11:55:59 you read the entire contents of the response such as when 11:55:59 `preload_content=True`). This is useful if you're not preloading 11:55:59 the response's content immediately. You will need to call 11:55:59 ``r.release_conn()`` on the response ``r`` to return the connection 11:55:59 back into the pool. If None, it takes the value of ``preload_content`` 11:55:59 which defaults to ``True``. 11:55:59 11:55:59 :param bool chunked: 11:55:59 If True, urllib3 will send the body using chunked transfer 11:55:59 encoding. Otherwise, urllib3 will send the body using the standard 11:55:59 content-length form. Defaults to False. 11:55:59 11:55:59 :param int body_pos: 11:55:59 Position to seek to in file-like body in the event of a retry or 11:55:59 redirect. 
Typically this won't need to be set because urllib3 will 11:55:59 auto-populate the value when needed. 11:55:59 """ 11:55:59 parsed_url = parse_url(url) 11:55:59 destination_scheme = parsed_url.scheme 11:55:59 11:55:59 if headers is None: 11:55:59 headers = self.headers 11:55:59 11:55:59 if not isinstance(retries, Retry): 11:55:59 retries = Retry.from_int(retries, redirect=redirect, default=self.retries) 11:55:59 11:55:59 if release_conn is None: 11:55:59 release_conn = preload_content 11:55:59 11:55:59 # Check host 11:55:59 if assert_same_host and not self.is_same_host(url): 11:55:59 raise HostChangedError(self, url, retries) 11:55:59 11:55:59 # Ensure that the URL we're connecting to is properly encoded 11:55:59 if url.startswith("/"): 11:55:59 url = to_str(_encode_target(url)) 11:55:59 else: 11:55:59 url = to_str(parsed_url.url) 11:55:59 11:55:59 conn = None 11:55:59 11:55:59 # Track whether `conn` needs to be released before 11:55:59 # returning/raising/recursing. Update this variable if necessary, and 11:55:59 # leave `release_conn` constant throughout the function. That way, if 11:55:59 # the function recurses, the original value of `release_conn` will be 11:55:59 # passed down into the recursive call, and its value will be respected. 11:55:59 # 11:55:59 # See issue #651 [1] for details. 11:55:59 # 11:55:59 # [1] 11:55:59 release_this_conn = release_conn 11:55:59 11:55:59 http_tunnel_required = connection_requires_http_tunnel( 11:55:59 self.proxy, self.proxy_config, destination_scheme 11:55:59 ) 11:55:59 11:55:59 # Merge the proxy headers. Only done when not using HTTP CONNECT. We 11:55:59 # have to copy the headers dict so we can safely change it without those 11:55:59 # changes being reflected in anyone else's copy. 11:55:59 if not http_tunnel_required: 11:55:59 headers = headers.copy() # type: ignore[attr-defined] 11:55:59 headers.update(self.proxy_headers) # type: ignore[union-attr] 11:55:59 11:55:59 # Must keep the exception bound to a separate variable or else Python 3 11:55:59 # complains about UnboundLocalError. 11:55:59 err = None 11:55:59 11:55:59 # Keep track of whether we cleanly exited the except block. This 11:55:59 # ensures we do proper cleanup in finally. 11:55:59 clean_exit = False 11:55:59 11:55:59 # Rewind body position, if needed. Record current position 11:55:59 # for future rewinds in the event of a redirect/retry. 11:55:59 body_pos = set_file_position(body, body_pos) 11:55:59 11:55:59 try: 11:55:59 # Request a connection from the queue. 11:55:59 timeout_obj = self._get_timeout(timeout) 11:55:59 conn = self._get_conn(timeout=pool_timeout) 11:55:59 11:55:59 conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] 11:55:59 11:55:59 # Is this a closed/new connection that requires CONNECT tunnelling? 11:55:59 if self.proxy is not None and http_tunnel_required and conn.is_closed: 11:55:59 try: 11:55:59 self._prepare_proxy(conn) 11:55:59 except (BaseSSLError, OSError, SocketTimeout) as e: 11:55:59 self._raise_timeout( 11:55:59 err=e, url=self.proxy.url, timeout_value=conn.timeout 11:55:59 ) 11:55:59 raise 11:55:59 11:55:59 # If we're going to release the connection in ``finally:``, then 11:55:59 # the response doesn't need to know about the connection. Otherwise 11:55:59 # it will also try to release it and we'll have a double-release 11:55:59 # mess. 
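The Retry object visible in these frames, Retry(total=0, connect=None, read=False, redirect=None, status=None), matches requests' default adapter retries: a single connection error exhausts the budget and Retry.increment() raises MaxRetryError straight away. A small sketch of the same behaviour driven through urllib3 directly; the URL is taken from the log, and the empty body is a placeholder rather than the test's real links-input:

import urllib3
from urllib3.util.retry import Retry

# Mirrors requests' default Retry(0, read=False): no retry on refused connections.
http = urllib3.PoolManager(retries=Retry(total=0, read=False))
try:
    http.request(
        "POST",
        "http://localhost:8183/rests/operations/transportpce-networkutils:init-rdm-xpdr-links",
        body=b"{}",  # placeholder payload, not the real RPC input
        headers={"Content-Type": "application/json"},
    )
except urllib3.exceptions.MaxRetryError as exc:
    print(exc.reason)  # NewConnectionError(... [Errno 111] Connection refused)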
11:55:59 response_conn = conn if not release_conn else None 11:55:59 11:55:59 # Make the request on the HTTPConnection object 11:55:59 > response = self._make_request( 11:55:59 conn, 11:55:59 method, 11:55:59 url, 11:55:59 timeout=timeout_obj, 11:55:59 body=body, 11:55:59 headers=headers, 11:55:59 chunked=chunked, 11:55:59 retries=retries, 11:55:59 response_conn=response_conn, 11:55:59 preload_content=preload_content, 11:55:59 decode_content=decode_content, 11:55:59 **response_kw, 11:55:59 ) 11:55:59 11:55:59 ../.tox/tests_tapi/lib/python3.11/site-packages/urllib3/connectionpool.py:787: 11:55:59 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 11:55:59 ../.tox/tests_tapi/lib/python3.11/site-packages/urllib3/connectionpool.py:493: in _make_request 11:55:59 conn.request( 11:55:59 ../.tox/tests_tapi/lib/python3.11/site-packages/urllib3/connection.py:494: in request 11:55:59 self.endheaders() 11:55:59 /opt/pyenv/versions/3.11.10/lib/python3.11/http/client.py:1298: in endheaders 11:55:59 self._send_output(message_body, encode_chunked=encode_chunked) 11:55:59 /opt/pyenv/versions/3.11.10/lib/python3.11/http/client.py:1058: in _send_output 11:55:59 self.send(msg) 11:55:59 /opt/pyenv/versions/3.11.10/lib/python3.11/http/client.py:996: in send 11:55:59 self.connect() 11:55:59 ../.tox/tests_tapi/lib/python3.11/site-packages/urllib3/connection.py:325: in connect 11:55:59 self.sock = self._new_conn() 11:55:59 ^^^^^^^^^^^^^^^^ 11:55:59 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 11:55:59 11:55:59 self = 11:55:59 11:55:59 def _new_conn(self) -> socket.socket: 11:55:59 """Establish a socket connection and set nodelay settings on it. 11:55:59 11:55:59 :return: New socket connection. 11:55:59 """ 11:55:59 try: 11:55:59 sock = connection.create_connection( 11:55:59 (self._dns_host, self.port), 11:55:59 self.timeout, 11:55:59 source_address=self.source_address, 11:55:59 socket_options=self.socket_options, 11:55:59 ) 11:55:59 except socket.gaierror as e: 11:55:59 raise NameResolutionError(self.host, self, e) from e 11:55:59 except SocketTimeout as e: 11:55:59 raise ConnectTimeoutError( 11:55:59 self, 11:55:59 f"Connection to {self.host} timed out. (connect timeout={self.timeout})", 11:55:59 ) from e 11:55:59 11:55:59 except OSError as e: 11:55:59 > raise NewConnectionError( 11:55:59 self, f"Failed to establish a new connection: {e}" 11:55:59 ) from e 11:55:59 E urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused 11:55:59 11:55:59 ../.tox/tests_tapi/lib/python3.11/site-packages/urllib3/connection.py:213: NewConnectionError 11:55:59 11:55:59 The above exception was the direct cause of the following exception: 11:55:59 11:55:59 self = 11:55:59 request = , stream = False 11:55:59 timeout = Timeout(connect=30, read=30, total=None), verify = True, cert = None 11:55:59 proxies = OrderedDict() 11:55:59 11:55:59 def send( 11:55:59 self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 11:55:59 ): 11:55:59 """Sends PreparedRequest object. Returns Response object. 11:55:59 11:55:59 :param request: The :class:`PreparedRequest ` being sent. 11:55:59 :param stream: (optional) Whether to stream the request content. 11:55:59 :param timeout: (optional) How long to wait for the server to send 11:55:59 data before giving up, as a float, or a :ref:`(connect timeout, 11:55:59 read timeout) ` tuple. 
11:55:59 :type timeout: float or tuple or urllib3 Timeout object 11:55:59 :param verify: (optional) Either a boolean, in which case it controls whether 11:55:59 we verify the server's TLS certificate, or a string, in which case it 11:55:59 must be a path to a CA bundle to use 11:55:59 :param cert: (optional) Any user-provided SSL certificate to be trusted. 11:55:59 :param proxies: (optional) The proxies dictionary to apply to the request. 11:55:59 :rtype: requests.Response 11:55:59 """ 11:55:59 11:55:59 try: 11:55:59 conn = self.get_connection_with_tls_context( 11:55:59 request, verify, proxies=proxies, cert=cert 11:55:59 ) 11:55:59 except LocationValueError as e: 11:55:59 raise InvalidURL(e, request=request) 11:55:59 11:55:59 self.cert_verify(conn, request.url, verify, cert) 11:55:59 url = self.request_url(request, proxies) 11:55:59 self.add_headers( 11:55:59 request, 11:55:59 stream=stream, 11:55:59 timeout=timeout, 11:55:59 verify=verify, 11:55:59 cert=cert, 11:55:59 proxies=proxies, 11:55:59 ) 11:55:59 11:55:59 chunked = not (request.body is None or "Content-Length" in request.headers) 11:55:59 11:55:59 if isinstance(timeout, tuple): 11:55:59 try: 11:55:59 connect, read = timeout 11:55:59 timeout = TimeoutSauce(connect=connect, read=read) 11:55:59 except ValueError: 11:55:59 raise ValueError( 11:55:59 f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 11:55:59 f"or a single float to set both timeouts to the same value." 11:55:59 ) 11:55:59 elif isinstance(timeout, TimeoutSauce): 11:55:59 pass 11:55:59 else: 11:55:59 timeout = TimeoutSauce(connect=timeout, read=timeout) 11:55:59 11:55:59 try: 11:55:59 > resp = conn.urlopen( 11:55:59 method=request.method, 11:55:59 url=url, 11:55:59 body=request.body, 11:55:59 headers=request.headers, 11:55:59 redirect=False, 11:55:59 assert_same_host=False, 11:55:59 preload_content=False, 11:55:59 decode_content=False, 11:55:59 retries=self.max_retries, 11:55:59 timeout=timeout, 11:55:59 chunked=chunked, 11:55:59 ) 11:55:59 11:55:59 ../.tox/tests_tapi/lib/python3.11/site-packages/requests/adapters.py:667: 11:55:59 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 11:55:59 ../.tox/tests_tapi/lib/python3.11/site-packages/urllib3/connectionpool.py:841: in urlopen 11:55:59 retries = retries.increment( 11:55:59 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 11:55:59 11:55:59 self = Retry(total=0, connect=None, read=False, redirect=None, status=None) 11:55:59 method = 'POST', url = '/rests/operations/tapi-topology:get-topology-details' 11:55:59 response = None 11:55:59 error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused') 11:55:59 _pool = 11:55:59 _stacktrace = 11:55:59 11:55:59 def increment( 11:55:59 self, 11:55:59 method: str | None = None, 11:55:59 url: str | None = None, 11:55:59 response: BaseHTTPResponse | None = None, 11:55:59 error: Exception | None = None, 11:55:59 _pool: ConnectionPool | None = None, 11:55:59 _stacktrace: TracebackType | None = None, 11:55:59 ) -> Self: 11:55:59 """Return a new Retry object with incremented retry counters. 11:55:59 11:55:59 :param response: A response object, or None, if the server did not 11:55:59 return a response. 11:55:59 :type response: :class:`~urllib3.response.BaseHTTPResponse` 11:55:59 :param Exception error: An error encountered during the request, or 11:55:59 None if the response was received successfully. 11:55:59 11:55:59 :return: A new ``Retry`` object. 
11:55:59 """ 11:55:59 if self.total is False and error: 11:55:59 # Disabled, indicate to re-raise the error. 11:55:59 raise reraise(type(error), error, _stacktrace) 11:55:59 11:55:59 total = self.total 11:55:59 if total is not None: 11:55:59 total -= 1 11:55:59 11:55:59 connect = self.connect 11:55:59 read = self.read 11:55:59 redirect = self.redirect 11:55:59 status_count = self.status 11:55:59 other = self.other 11:55:59 cause = "unknown" 11:55:59 status = None 11:55:59 redirect_location = None 11:55:59 11:55:59 if error and self._is_connection_error(error): 11:55:59 # Connect retry? 11:55:59 if connect is False: 11:55:59 raise reraise(type(error), error, _stacktrace) 11:55:59 elif connect is not None: 11:55:59 connect -= 1 11:55:59 11:55:59 elif error and self._is_read_error(error): 11:55:59 # Read retry? 11:55:59 if read is False or method is None or not self._is_method_retryable(method): 11:55:59 raise reraise(type(error), error, _stacktrace) 11:55:59 elif read is not None: 11:55:59 read -= 1 11:55:59 11:55:59 elif error: 11:55:59 # Other retry? 11:55:59 if other is not None: 11:55:59 other -= 1 11:55:59 11:55:59 elif response and response.get_redirect_location(): 11:55:59 # Redirect retry? 11:55:59 if redirect is not None: 11:55:59 redirect -= 1 11:55:59 cause = "too many redirects" 11:55:59 response_redirect_location = response.get_redirect_location() 11:55:59 if response_redirect_location: 11:55:59 redirect_location = response_redirect_location 11:55:59 status = response.status 11:55:59 11:55:59 else: 11:55:59 # Incrementing because of a server error like a 500 in 11:55:59 # status_forcelist and the given method is in the allowed_methods 11:55:59 cause = ResponseError.GENERIC_ERROR 11:55:59 if response and response.status: 11:55:59 if status_count is not None: 11:55:59 status_count -= 1 11:55:59 cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status) 11:55:59 status = response.status 11:55:59 11:55:59 history = self.history + ( 11:55:59 RequestHistory(method, url, error, status, redirect_location), 11:55:59 ) 11:55:59 11:55:59 new_retry = self.new( 11:55:59 total=total, 11:55:59 connect=connect, 11:55:59 read=read, 11:55:59 redirect=redirect, 11:55:59 status=status_count, 11:55:59 other=other, 11:55:59 history=history, 11:55:59 ) 11:55:59 11:55:59 if new_retry.is_exhausted(): 11:55:59 reason = error or ResponseError(cause) 11:55:59 > raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type] 11:55:59 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 11:55:59 E urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=8183): Max retries exceeded with url: /rests/operations/tapi-topology:get-topology-details (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 11:55:59 11:55:59 ../.tox/tests_tapi/lib/python3.11/site-packages/urllib3/util/retry.py:519: MaxRetryError 11:55:59 11:55:59 During handling of the above exception, another exception occurred: 11:55:59 11:55:59 self = 11:55:59 11:55:59 def test_37_check_tapi_topology_T0(self): 11:55:59 self.tapi_topo["topology-id"] = test_utils.T0_MULTILAYER_TOPO_UUID 11:55:59 > response = test_utils.transportpce_api_rpc_request( 11:55:59 'tapi-topology', 'get-topology-details', self.tapi_topo) 11:55:59 11:55:59 transportpce_tests/tapi/test01_abstracted_topology.py:570: 11:55:59 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 11:55:59 transportpce_tests/common/test_utils.py:729: in 
transportpce_api_rpc_request 11:55:59 response = post_request(url, data) 11:55:59 ^^^^^^^^^^^^^^^^^^^^^^^ 11:55:59 transportpce_tests/common/test_utils.py:142: in post_request 11:55:59 return requests.request( 11:55:59 ../.tox/tests_tapi/lib/python3.11/site-packages/requests/api.py:59: in request 11:55:59 return session.request(method=method, url=url, **kwargs) 11:55:59 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 11:55:59 ../.tox/tests_tapi/lib/python3.11/site-packages/requests/sessions.py:589: in request 11:55:59 resp = self.send(prep, **send_kwargs) 11:55:59 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 11:55:59 ../.tox/tests_tapi/lib/python3.11/site-packages/requests/sessions.py:703: in send 11:55:59 r = adapter.send(request, **kwargs) 11:55:59 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 11:55:59 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 11:55:59 11:55:59 self = 11:55:59 request = , stream = False 11:55:59 timeout = Timeout(connect=30, read=30, total=None), verify = True, cert = None 11:55:59 proxies = OrderedDict() 11:55:59 11:55:59 def send( 11:55:59 self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 11:55:59 ): 11:55:59 """Sends PreparedRequest object. Returns Response object. 11:55:59 11:55:59 :param request: The :class:`PreparedRequest ` being sent. 11:55:59 :param stream: (optional) Whether to stream the request content. 11:55:59 :param timeout: (optional) How long to wait for the server to send 11:55:59 data before giving up, as a float, or a :ref:`(connect timeout, 11:55:59 read timeout) ` tuple. 11:55:59 :type timeout: float or tuple or urllib3 Timeout object 11:55:59 :param verify: (optional) Either a boolean, in which case it controls whether 11:55:59 we verify the server's TLS certificate, or a string, in which case it 11:55:59 must be a path to a CA bundle to use 11:55:59 :param cert: (optional) Any user-provided SSL certificate to be trusted. 11:55:59 :param proxies: (optional) The proxies dictionary to apply to the request. 11:55:59 :rtype: requests.Response 11:55:59 """ 11:55:59 11:55:59 try: 11:55:59 conn = self.get_connection_with_tls_context( 11:55:59 request, verify, proxies=proxies, cert=cert 11:55:59 ) 11:55:59 except LocationValueError as e: 11:55:59 raise InvalidURL(e, request=request) 11:55:59 11:55:59 self.cert_verify(conn, request.url, verify, cert) 11:55:59 url = self.request_url(request, proxies) 11:55:59 self.add_headers( 11:55:59 request, 11:55:59 stream=stream, 11:55:59 timeout=timeout, 11:55:59 verify=verify, 11:55:59 cert=cert, 11:55:59 proxies=proxies, 11:55:59 ) 11:55:59 11:55:59 chunked = not (request.body is None or "Content-Length" in request.headers) 11:55:59 11:55:59 if isinstance(timeout, tuple): 11:55:59 try: 11:55:59 connect, read = timeout 11:55:59 timeout = TimeoutSauce(connect=connect, read=read) 11:55:59 except ValueError: 11:55:59 raise ValueError( 11:55:59 f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 11:55:59 f"or a single float to set both timeouts to the same value." 
11:55:59 ) 11:55:59 elif isinstance(timeout, TimeoutSauce): 11:55:59 pass 11:55:59 else: 11:55:59 timeout = TimeoutSauce(connect=timeout, read=timeout) 11:55:59 11:55:59 try: 11:55:59 resp = conn.urlopen( 11:55:59 method=request.method, 11:55:59 url=url, 11:55:59 body=request.body, 11:55:59 headers=request.headers, 11:55:59 redirect=False, 11:55:59 assert_same_host=False, 11:55:59 preload_content=False, 11:55:59 decode_content=False, 11:55:59 retries=self.max_retries, 11:55:59 timeout=timeout, 11:55:59 chunked=chunked, 11:55:59 ) 11:55:59 11:55:59 except (ProtocolError, OSError) as err: 11:55:59 raise ConnectionError(err, request=request) 11:55:59 11:55:59 except MaxRetryError as e: 11:55:59 if isinstance(e.reason, ConnectTimeoutError): 11:55:59 # TODO: Remove this in 3.0.0: see #2811 11:55:59 if not isinstance(e.reason, NewConnectionError): 11:55:59 raise ConnectTimeout(e, request=request) 11:55:59 11:55:59 if isinstance(e.reason, ResponseError): 11:55:59 raise RetryError(e, request=request) 11:55:59 11:55:59 if isinstance(e.reason, _ProxyError): 11:55:59 raise ProxyError(e, request=request) 11:55:59 11:55:59 if isinstance(e.reason, _SSLError): 11:55:59 # This branch is for urllib3 v1.22 and later. 11:55:59 raise SSLError(e, request=request) 11:55:59 11:55:59 > raise ConnectionError(e, request=request) 11:55:59 E requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=8183): Max retries exceeded with url: /rests/operations/tapi-topology:get-topology-details (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 11:55:59 11:55:59 ../.tox/tests_tapi/lib/python3.11/site-packages/requests/adapters.py:700: ConnectionError 11:55:59 ----------------------------- Captured stdout call ----------------------------- 11:55:59 execution of test_37_check_tapi_topology_T0 11:55:59 _______________ TransportTapitesting.test_38_delete_ODU4_service _______________ 11:55:59 11:55:59 self = 11:55:59 11:55:59 def _new_conn(self) -> socket.socket: 11:55:59 """Establish a socket connection and set nodelay settings on it. 11:55:59 11:55:59 :return: New socket connection. 11:55:59 """ 11:55:59 try: 11:55:59 > sock = connection.create_connection( 11:55:59 (self._dns_host, self.port), 11:55:59 self.timeout, 11:55:59 source_address=self.source_address, 11:55:59 socket_options=self.socket_options, 11:55:59 ) 11:55:59 11:55:59 ../.tox/tests_tapi/lib/python3.11/site-packages/urllib3/connection.py:198: 11:55:59 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 11:55:59 ../.tox/tests_tapi/lib/python3.11/site-packages/urllib3/util/connection.py:85: in create_connection 11:55:59 raise err 11:55:59 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 11:55:59 11:55:59 address = ('localhost', 8183), timeout = 30, source_address = None 11:55:59 socket_options = [(6, 1, 1)] 11:55:59 11:55:59 def create_connection( 11:55:59 address: tuple[str, int], 11:55:59 timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 11:55:59 source_address: tuple[str, int] | None = None, 11:55:59 socket_options: _TYPE_SOCKET_OPTIONS | None = None, 11:55:59 ) -> socket.socket: 11:55:59 """Connect to *address* and return the socket object. 11:55:59 11:55:59 Convenience function. Connect to *address* (a 2-tuple ``(host, 11:55:59 port)``) and return the socket object. Passing the optional 11:55:59 *timeout* parameter will set the timeout on the socket instance 11:55:59 before attempting to connect. 
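The Timeout(connect=30, read=30, total=None) objects shown throughout these frames come from the 30-second timeout the test helpers pass to requests (whether as a single float or a (connect, read) tuple, both end up as this object); the TimeoutSauce seen in HTTPAdapter.send above appears to be urllib3's Timeout under an alias. A two-line sketch of that conversion:

from urllib3.util import Timeout

# A (connect, read) tuple such as (30, 30) maps onto urllib3's Timeout object.
connect, read = (30, 30)
t = Timeout(connect=connect, read=read)
print(t.connect_timeout, t.read_timeout)  # 30 30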
If no *timeout* is supplied, the 11:55:59 global default timeout setting returned by :func:`socket.getdefaulttimeout` 11:55:59 is used. If *source_address* is set it must be a tuple of (host, port) 11:55:59 for the socket to bind as a source address before making the connection. 11:55:59 An host of '' or port 0 tells the OS to use the default. 11:55:59 """ 11:55:59 11:55:59 host, port = address 11:55:59 if host.startswith("["): 11:55:59 host = host.strip("[]") 11:55:59 err = None 11:55:59 11:55:59 # Using the value from allowed_gai_family() in the context of getaddrinfo lets 11:55:59 # us select whether to work with IPv4 DNS records, IPv6 records, or both. 11:55:59 # The original create_connection function always returns all records. 11:55:59 family = allowed_gai_family() 11:55:59 11:55:59 try: 11:55:59 host.encode("idna") 11:55:59 except UnicodeError: 11:55:59 raise LocationParseError(f"'{host}', label empty or too long") from None 11:55:59 11:55:59 for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM): 11:55:59 af, socktype, proto, canonname, sa = res 11:55:59 sock = None 11:55:59 try: 11:55:59 sock = socket.socket(af, socktype, proto) 11:55:59 11:55:59 # If provided, set socket level options before connecting. 11:55:59 _set_socket_options(sock, socket_options) 11:55:59 11:55:59 if timeout is not _DEFAULT_TIMEOUT: 11:55:59 sock.settimeout(timeout) 11:55:59 if source_address: 11:55:59 sock.bind(source_address) 11:55:59 > sock.connect(sa) 11:55:59 E ConnectionRefusedError: [Errno 111] Connection refused 11:55:59 11:55:59 ../.tox/tests_tapi/lib/python3.11/site-packages/urllib3/util/connection.py:73: ConnectionRefusedError 11:55:59 11:55:59 The above exception was the direct cause of the following exception: 11:55:59 11:55:59 self = 11:55:59 method = 'POST', url = '/rests/operations/org-openroadm-service:service-delete' 11:55:59 body = '{"input": {"sdnc-request-header": {"request-id": "e3028bae-a90f-4ddd-a83f-cf224eba0e58", "rpc-action": "service-delet...85/NotificationServer/notify"}, "service-delete-req-info": {"service-name": "service1-ODU4", "tail-retention": "no"}}}' 11:55:59 headers = {'User-Agent': 'python-requests/2.32.4', 'Accept-Encoding': 'gzip, deflate', 'Accept': 'application/json', 'Connection': 'keep-alive', 'Content-Type': 'application/json', 'Content-Length': '311', 'Authorization': 'Basic YWRtaW46YWRtaW4='} 11:55:59 retries = Retry(total=0, connect=None, read=False, redirect=None, status=None) 11:55:59 redirect = False, assert_same_host = False 11:55:59 timeout = Timeout(connect=30, read=30, total=None), pool_timeout = None 11:55:59 release_conn = False, chunked = False, body_pos = None, preload_content = False 11:55:59 decode_content = False, response_kw = {} 11:55:59 parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/rests/operations/org-openroadm-service:service-delete', query=None, fragment=None) 11:55:59 destination_scheme = None, conn = None, release_this_conn = True 11:55:59 http_tunnel_required = False, err = None, clean_exit = False 11:55:59 11:55:59 def urlopen( # type: ignore[override] 11:55:59 self, 11:55:59 method: str, 11:55:59 url: str, 11:55:59 body: _TYPE_BODY | None = None, 11:55:59 headers: typing.Mapping[str, str] | None = None, 11:55:59 retries: Retry | bool | int | None = None, 11:55:59 redirect: bool = True, 11:55:59 assert_same_host: bool = True, 11:55:59 timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 11:55:59 pool_timeout: int | None = None, 11:55:59 release_conn: bool | None = None, 11:55:59 chunked: bool = False, 
11:55:59 body_pos: _TYPE_BODY_POSITION | None = None, 11:55:59 preload_content: bool = True, 11:55:59 decode_content: bool = True, 11:55:59 **response_kw: typing.Any, 11:55:59 ) -> BaseHTTPResponse: 11:55:59 """ 11:55:59 Get a connection from the pool and perform an HTTP request. This is the 11:55:59 lowest level call for making a request, so you'll need to specify all 11:55:59 the raw details. 11:55:59 11:55:59 .. note:: 11:55:59 11:55:59 More commonly, it's appropriate to use a convenience method 11:55:59 such as :meth:`request`. 11:55:59 11:55:59 .. note:: 11:55:59 11:55:59 `release_conn` will only behave as expected if 11:55:59 `preload_content=False` because we want to make 11:55:59 `preload_content=False` the default behaviour someday soon without 11:55:59 breaking backwards compatibility. 11:55:59 11:55:59 :param method: 11:55:59 HTTP request method (such as GET, POST, PUT, etc.) 11:55:59 11:55:59 :param url: 11:55:59 The URL to perform the request on. 11:55:59 11:55:59 :param body: 11:55:59 Data to send in the request body, either :class:`str`, :class:`bytes`, 11:55:59 an iterable of :class:`str`/:class:`bytes`, or a file-like object. 11:55:59 11:55:59 :param headers: 11:55:59 Dictionary of custom headers to send, such as User-Agent, 11:55:59 If-None-Match, etc. If None, pool headers are used. If provided, 11:55:59 these headers completely replace any pool-specific headers. 11:55:59 11:55:59 :param retries: 11:55:59 Configure the number of retries to allow before raising a 11:55:59 :class:`~urllib3.exceptions.MaxRetryError` exception. 11:55:59 11:55:59 If ``None`` (default) will retry 3 times, see ``Retry.DEFAULT``. Pass a 11:55:59 :class:`~urllib3.util.retry.Retry` object for fine-grained control 11:55:59 over different types of retries. 11:55:59 Pass an integer number to retry connection errors that many times, 11:55:59 but no other types of errors. Pass zero to never retry. 11:55:59 11:55:59 If ``False``, then retries are disabled and any exception is raised 11:55:59 immediately. Also, instead of raising a MaxRetryError on redirects, 11:55:59 the redirect response will be returned. 11:55:59 11:55:59 :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. 11:55:59 11:55:59 :param redirect: 11:55:59 If True, automatically handle redirects (status codes 301, 302, 11:55:59 303, 307, 308). Each redirect counts as a retry. Disabling retries 11:55:59 will disable redirect, too. 11:55:59 11:55:59 :param assert_same_host: 11:55:59 If ``True``, will make sure that the host of the pool requests is 11:55:59 consistent else will raise HostChangedError. When ``False``, you can 11:55:59 use the pool on an HTTP proxy and request foreign hosts. 11:55:59 11:55:59 :param timeout: 11:55:59 If specified, overrides the default timeout for this one 11:55:59 request. It may be a float (in seconds) or an instance of 11:55:59 :class:`urllib3.util.Timeout`. 11:55:59 11:55:59 :param pool_timeout: 11:55:59 If set and the pool is set to block=True, then this method will 11:55:59 block for ``pool_timeout`` seconds and raise EmptyPoolError if no 11:55:59 connection is available within the time period. 11:55:59 11:55:59 :param bool preload_content: 11:55:59 If True, the response's body will be preloaded into memory. 11:55:59 11:55:59 :param bool decode_content: 11:55:59 If True, will attempt to decode the body based on the 11:55:59 'content-encoding' header. 
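At the requests level all of this surfaces as the requests.exceptions.ConnectionError recorded for each test. Roughly what transportpce_api_rpc_request ends up doing for test_37, reduced to a plain requests call; the URL, topology UUID and admin/admin credentials are taken from the captured request above, while the helper's exact internals are not shown in this log:

import requests

try:
    requests.post(
        "http://localhost:8183/rests/operations/tapi-topology:get-topology-details",
        json={"input": {"topology-id": "747c670e-7a07-3dab-b379-5b1cd17402a3"}},
        auth=("admin", "admin"),   # matches the Basic YWRtaW46YWRtaW4= header above
        timeout=(30, 30),
    )
except requests.exceptions.ConnectionError as exc:
    # urllib3's MaxRetryError(NewConnectionError) is re-raised here by the adapter.
    print(exc)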
11:55:59 11:55:59 :param release_conn: 11:55:59 If False, then the urlopen call will not release the connection 11:55:59 back into the pool once a response is received (but will release if 11:55:59 you read the entire contents of the response such as when 11:55:59 `preload_content=True`). This is useful if you're not preloading 11:55:59 the response's content immediately. You will need to call 11:55:59 ``r.release_conn()`` on the response ``r`` to return the connection 11:55:59 back into the pool. If None, it takes the value of ``preload_content`` 11:55:59 which defaults to ``True``. 11:55:59 11:55:59 :param bool chunked: 11:55:59 If True, urllib3 will send the body using chunked transfer 11:55:59 encoding. Otherwise, urllib3 will send the body using the standard 11:55:59 content-length form. Defaults to False. 11:55:59 11:55:59 :param int body_pos: 11:55:59 Position to seek to in file-like body in the event of a retry or 11:55:59 redirect. Typically this won't need to be set because urllib3 will 11:55:59 auto-populate the value when needed. 11:55:59 """ 11:55:59 parsed_url = parse_url(url) 11:55:59 destination_scheme = parsed_url.scheme 11:55:59 11:55:59 if headers is None: 11:55:59 headers = self.headers 11:55:59 11:55:59 if not isinstance(retries, Retry): 11:55:59 retries = Retry.from_int(retries, redirect=redirect, default=self.retries) 11:55:59 11:55:59 if release_conn is None: 11:55:59 release_conn = preload_content 11:55:59 11:55:59 # Check host 11:55:59 if assert_same_host and not self.is_same_host(url): 11:55:59 raise HostChangedError(self, url, retries) 11:55:59 11:55:59 # Ensure that the URL we're connecting to is properly encoded 11:55:59 if url.startswith("/"): 11:55:59 url = to_str(_encode_target(url)) 11:55:59 else: 11:55:59 url = to_str(parsed_url.url) 11:55:59 11:55:59 conn = None 11:55:59 11:55:59 # Track whether `conn` needs to be released before 11:55:59 # returning/raising/recursing. Update this variable if necessary, and 11:55:59 # leave `release_conn` constant throughout the function. That way, if 11:55:59 # the function recurses, the original value of `release_conn` will be 11:55:59 # passed down into the recursive call, and its value will be respected. 11:55:59 # 11:55:59 # See issue #651 [1] for details. 11:55:59 # 11:55:59 # [1] 11:55:59 release_this_conn = release_conn 11:55:59 11:55:59 http_tunnel_required = connection_requires_http_tunnel( 11:55:59 self.proxy, self.proxy_config, destination_scheme 11:55:59 ) 11:55:59 11:55:59 # Merge the proxy headers. Only done when not using HTTP CONNECT. We 11:55:59 # have to copy the headers dict so we can safely change it without those 11:55:59 # changes being reflected in anyone else's copy. 11:55:59 if not http_tunnel_required: 11:55:59 headers = headers.copy() # type: ignore[attr-defined] 11:55:59 headers.update(self.proxy_headers) # type: ignore[union-attr] 11:55:59 11:55:59 # Must keep the exception bound to a separate variable or else Python 3 11:55:59 # complains about UnboundLocalError. 11:55:59 err = None 11:55:59 11:55:59 # Keep track of whether we cleanly exited the except block. This 11:55:59 # ensures we do proper cleanup in finally. 11:55:59 clean_exit = False 11:55:59 11:55:59 # Rewind body position, if needed. Record current position 11:55:59 # for future rewinds in the event of a redirect/retry. 11:55:59 body_pos = set_file_position(body, body_pos) 11:55:59 11:55:59 try: 11:55:59 # Request a connection from the queue. 
11:55:59 timeout_obj = self._get_timeout(timeout) 11:55:59 conn = self._get_conn(timeout=pool_timeout) 11:55:59 11:55:59 conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] 11:55:59 11:55:59 # Is this a closed/new connection that requires CONNECT tunnelling? 11:55:59 if self.proxy is not None and http_tunnel_required and conn.is_closed: 11:55:59 try: 11:55:59 self._prepare_proxy(conn) 11:55:59 except (BaseSSLError, OSError, SocketTimeout) as e: 11:55:59 self._raise_timeout( 11:55:59 err=e, url=self.proxy.url, timeout_value=conn.timeout 11:55:59 ) 11:55:59 raise 11:55:59 11:55:59 # If we're going to release the connection in ``finally:``, then 11:55:59 # the response doesn't need to know about the connection. Otherwise 11:55:59 # it will also try to release it and we'll have a double-release 11:55:59 # mess. 11:55:59 response_conn = conn if not release_conn else None 11:55:59 11:55:59 # Make the request on the HTTPConnection object 11:55:59 > response = self._make_request( 11:55:59 conn, 11:55:59 method, 11:55:59 url, 11:55:59 timeout=timeout_obj, 11:55:59 body=body, 11:55:59 headers=headers, 11:55:59 chunked=chunked, 11:55:59 retries=retries, 11:55:59 response_conn=response_conn, 11:55:59 preload_content=preload_content, 11:55:59 decode_content=decode_content, 11:55:59 **response_kw, 11:55:59 ) 11:55:59 11:55:59 ../.tox/tests_tapi/lib/python3.11/site-packages/urllib3/connectionpool.py:787: 11:55:59 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 11:55:59 ../.tox/tests_tapi/lib/python3.11/site-packages/urllib3/connectionpool.py:493: in _make_request 11:55:59 conn.request( 11:55:59 ../.tox/tests_tapi/lib/python3.11/site-packages/urllib3/connection.py:494: in request 11:55:59 self.endheaders() 11:55:59 /opt/pyenv/versions/3.11.10/lib/python3.11/http/client.py:1298: in endheaders 11:55:59 self._send_output(message_body, encode_chunked=encode_chunked) 11:55:59 /opt/pyenv/versions/3.11.10/lib/python3.11/http/client.py:1058: in _send_output 11:55:59 self.send(msg) 11:55:59 /opt/pyenv/versions/3.11.10/lib/python3.11/http/client.py:996: in send 11:55:59 self.connect() 11:55:59 ../.tox/tests_tapi/lib/python3.11/site-packages/urllib3/connection.py:325: in connect 11:55:59 self.sock = self._new_conn() 11:55:59 ^^^^^^^^^^^^^^^^ 11:55:59 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 11:55:59 11:55:59 self = 11:55:59 11:55:59 def _new_conn(self) -> socket.socket: 11:55:59 """Establish a socket connection and set nodelay settings on it. 11:55:59 11:55:59 :return: New socket connection. 11:55:59 """ 11:55:59 try: 11:55:59 sock = connection.create_connection( 11:55:59 (self._dns_host, self.port), 11:55:59 self.timeout, 11:55:59 source_address=self.source_address, 11:55:59 socket_options=self.socket_options, 11:55:59 ) 11:55:59 except socket.gaierror as e: 11:55:59 raise NameResolutionError(self.host, self, e) from e 11:55:59 except SocketTimeout as e: 11:55:59 raise ConnectTimeoutError( 11:55:59 self, 11:55:59 f"Connection to {self.host} timed out. 
(connect timeout={self.timeout})", 11:55:59 ) from e 11:55:59 11:55:59 except OSError as e: 11:55:59 > raise NewConnectionError( 11:55:59 self, f"Failed to establish a new connection: {e}" 11:55:59 ) from e 11:55:59 E urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused 11:55:59 11:55:59 ../.tox/tests_tapi/lib/python3.11/site-packages/urllib3/connection.py:213: NewConnectionError 11:55:59 11:55:59 The above exception was the direct cause of the following exception: 11:55:59 11:55:59 self = 11:55:59 request = , stream = False 11:55:59 timeout = Timeout(connect=30, read=30, total=None), verify = True, cert = None 11:55:59 proxies = OrderedDict() 11:55:59 11:55:59 def send( 11:55:59 self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 11:55:59 ): 11:55:59 """Sends PreparedRequest object. Returns Response object. 11:55:59 11:55:59 :param request: The :class:`PreparedRequest ` being sent. 11:55:59 :param stream: (optional) Whether to stream the request content. 11:55:59 :param timeout: (optional) How long to wait for the server to send 11:55:59 data before giving up, as a float, or a :ref:`(connect timeout, 11:55:59 read timeout) ` tuple. 11:55:59 :type timeout: float or tuple or urllib3 Timeout object 11:55:59 :param verify: (optional) Either a boolean, in which case it controls whether 11:55:59 we verify the server's TLS certificate, or a string, in which case it 11:55:59 must be a path to a CA bundle to use 11:55:59 :param cert: (optional) Any user-provided SSL certificate to be trusted. 11:55:59 :param proxies: (optional) The proxies dictionary to apply to the request. 11:55:59 :rtype: requests.Response 11:55:59 """ 11:55:59 11:55:59 try: 11:55:59 conn = self.get_connection_with_tls_context( 11:55:59 request, verify, proxies=proxies, cert=cert 11:55:59 ) 11:55:59 except LocationValueError as e: 11:55:59 raise InvalidURL(e, request=request) 11:55:59 11:55:59 self.cert_verify(conn, request.url, verify, cert) 11:55:59 url = self.request_url(request, proxies) 11:55:59 self.add_headers( 11:55:59 request, 11:55:59 stream=stream, 11:55:59 timeout=timeout, 11:55:59 verify=verify, 11:55:59 cert=cert, 11:55:59 proxies=proxies, 11:55:59 ) 11:55:59 11:55:59 chunked = not (request.body is None or "Content-Length" in request.headers) 11:55:59 11:55:59 if isinstance(timeout, tuple): 11:55:59 try: 11:55:59 connect, read = timeout 11:55:59 timeout = TimeoutSauce(connect=connect, read=read) 11:55:59 except ValueError: 11:55:59 raise ValueError( 11:55:59 f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 11:55:59 f"or a single float to set both timeouts to the same value." 
11:55:59 ) 11:55:59 elif isinstance(timeout, TimeoutSauce): 11:55:59 pass 11:55:59 else: 11:55:59 timeout = TimeoutSauce(connect=timeout, read=timeout) 11:55:59 11:55:59 try: 11:55:59 > resp = conn.urlopen( 11:55:59 method=request.method, 11:55:59 url=url, 11:55:59 body=request.body, 11:55:59 headers=request.headers, 11:55:59 redirect=False, 11:55:59 assert_same_host=False, 11:55:59 preload_content=False, 11:55:59 decode_content=False, 11:55:59 retries=self.max_retries, 11:55:59 timeout=timeout, 11:55:59 chunked=chunked, 11:55:59 ) 11:55:59 11:55:59 ../.tox/tests_tapi/lib/python3.11/site-packages/requests/adapters.py:667: 11:55:59 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 11:55:59 ../.tox/tests_tapi/lib/python3.11/site-packages/urllib3/connectionpool.py:841: in urlopen 11:55:59 retries = retries.increment( 11:55:59 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 11:55:59 11:55:59 self = Retry(total=0, connect=None, read=False, redirect=None, status=None) 11:55:59 method = 'POST', url = '/rests/operations/org-openroadm-service:service-delete' 11:55:59 response = None 11:55:59 error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused') 11:55:59 _pool = 11:55:59 _stacktrace = 11:55:59 11:55:59 def increment( 11:55:59 self, 11:55:59 method: str | None = None, 11:55:59 url: str | None = None, 11:55:59 response: BaseHTTPResponse | None = None, 11:55:59 error: Exception | None = None, 11:55:59 _pool: ConnectionPool | None = None, 11:55:59 _stacktrace: TracebackType | None = None, 11:55:59 ) -> Self: 11:55:59 """Return a new Retry object with incremented retry counters. 11:55:59 11:55:59 :param response: A response object, or None, if the server did not 11:55:59 return a response. 11:55:59 :type response: :class:`~urllib3.response.BaseHTTPResponse` 11:55:59 :param Exception error: An error encountered during the request, or 11:55:59 None if the response was received successfully. 11:55:59 11:55:59 :return: A new ``Retry`` object. 11:55:59 """ 11:55:59 if self.total is False and error: 11:55:59 # Disabled, indicate to re-raise the error. 11:55:59 raise reraise(type(error), error, _stacktrace) 11:55:59 11:55:59 total = self.total 11:55:59 if total is not None: 11:55:59 total -= 1 11:55:59 11:55:59 connect = self.connect 11:55:59 read = self.read 11:55:59 redirect = self.redirect 11:55:59 status_count = self.status 11:55:59 other = self.other 11:55:59 cause = "unknown" 11:55:59 status = None 11:55:59 redirect_location = None 11:55:59 11:55:59 if error and self._is_connection_error(error): 11:55:59 # Connect retry? 11:55:59 if connect is False: 11:55:59 raise reraise(type(error), error, _stacktrace) 11:55:59 elif connect is not None: 11:55:59 connect -= 1 11:55:59 11:55:59 elif error and self._is_read_error(error): 11:55:59 # Read retry? 11:55:59 if read is False or method is None or not self._is_method_retryable(method): 11:55:59 raise reraise(type(error), error, _stacktrace) 11:55:59 elif read is not None: 11:55:59 read -= 1 11:55:59 11:55:59 elif error: 11:55:59 # Other retry? 11:55:59 if other is not None: 11:55:59 other -= 1 11:55:59 11:55:59 elif response and response.get_redirect_location(): 11:55:59 # Redirect retry? 
11:55:59 if redirect is not None: 11:55:59 redirect -= 1 11:55:59 cause = "too many redirects" 11:55:59 response_redirect_location = response.get_redirect_location() 11:55:59 if response_redirect_location: 11:55:59 redirect_location = response_redirect_location 11:55:59 status = response.status 11:55:59 11:55:59 else: 11:55:59 # Incrementing because of a server error like a 500 in 11:55:59 # status_forcelist and the given method is in the allowed_methods 11:55:59 cause = ResponseError.GENERIC_ERROR 11:55:59 if response and response.status: 11:55:59 if status_count is not None: 11:55:59 status_count -= 1 11:55:59 cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status) 11:55:59 status = response.status 11:55:59 11:55:59 history = self.history + ( 11:55:59 RequestHistory(method, url, error, status, redirect_location), 11:55:59 ) 11:55:59 11:55:59 new_retry = self.new( 11:55:59 total=total, 11:55:59 connect=connect, 11:55:59 read=read, 11:55:59 redirect=redirect, 11:55:59 status=status_count, 11:55:59 other=other, 11:55:59 history=history, 11:55:59 ) 11:55:59 11:55:59 if new_retry.is_exhausted(): 11:55:59 reason = error or ResponseError(cause) 11:55:59 > raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type] 11:55:59 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 11:55:59 E urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=8183): Max retries exceeded with url: /rests/operations/org-openroadm-service:service-delete (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 11:55:59 11:55:59 ../.tox/tests_tapi/lib/python3.11/site-packages/urllib3/util/retry.py:519: MaxRetryError 11:55:59 11:55:59 During handling of the above exception, another exception occurred: 11:55:59 11:55:59 self = 11:55:59 11:55:59 def test_38_delete_ODU4_service(self): 11:55:59 self.del_serv_input_data["service-delete-req-info"]["service-name"] = "service1-ODU4" 11:55:59 > response = test_utils.transportpce_api_rpc_request( 11:55:59 'org-openroadm-service', 'service-delete', 11:55:59 self.del_serv_input_data) 11:55:59 11:55:59 transportpce_tests/tapi/test01_abstracted_topology.py:588: 11:55:59 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 11:55:59 transportpce_tests/common/test_utils.py:729: in transportpce_api_rpc_request 11:55:59 response = post_request(url, data) 11:55:59 ^^^^^^^^^^^^^^^^^^^^^^^ 11:55:59 transportpce_tests/common/test_utils.py:142: in post_request 11:55:59 return requests.request( 11:55:59 ../.tox/tests_tapi/lib/python3.11/site-packages/requests/api.py:59: in request 11:55:59 return session.request(method=method, url=url, **kwargs) 11:55:59 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 11:55:59 ../.tox/tests_tapi/lib/python3.11/site-packages/requests/sessions.py:589: in request 11:55:59 resp = self.send(prep, **send_kwargs) 11:55:59 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 11:55:59 ../.tox/tests_tapi/lib/python3.11/site-packages/requests/sessions.py:703: in send 11:55:59 r = adapter.send(request, **kwargs) 11:55:59 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 11:55:59 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 11:55:59 11:55:59 self = 11:55:59 request = , stream = False 11:55:59 timeout = Timeout(connect=30, read=30, total=None), verify = True, cert = None 11:55:59 proxies = OrderedDict() 11:55:59 11:55:59 def send( 11:55:59 self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 11:55:59 ): 11:55:59 """Sends 
PreparedRequest object. Returns Response object. 11:55:59 11:55:59 :param request: The :class:`PreparedRequest ` being sent. 11:55:59 :param stream: (optional) Whether to stream the request content. 11:55:59 :param timeout: (optional) How long to wait for the server to send 11:55:59 data before giving up, as a float, or a :ref:`(connect timeout, 11:55:59 read timeout) ` tuple. 11:55:59 :type timeout: float or tuple or urllib3 Timeout object 11:55:59 :param verify: (optional) Either a boolean, in which case it controls whether 11:55:59 we verify the server's TLS certificate, or a string, in which case it 11:55:59 must be a path to a CA bundle to use 11:55:59 :param cert: (optional) Any user-provided SSL certificate to be trusted. 11:55:59 :param proxies: (optional) The proxies dictionary to apply to the request. 11:55:59 :rtype: requests.Response 11:55:59 """ 11:55:59 11:55:59 try: 11:55:59 conn = self.get_connection_with_tls_context( 11:55:59 request, verify, proxies=proxies, cert=cert 11:55:59 ) 11:55:59 except LocationValueError as e: 11:55:59 raise InvalidURL(e, request=request) 11:55:59 11:55:59 self.cert_verify(conn, request.url, verify, cert) 11:55:59 url = self.request_url(request, proxies) 11:55:59 self.add_headers( 11:55:59 request, 11:55:59 stream=stream, 11:55:59 timeout=timeout, 11:55:59 verify=verify, 11:55:59 cert=cert, 11:55:59 proxies=proxies, 11:55:59 ) 11:55:59 11:55:59 chunked = not (request.body is None or "Content-Length" in request.headers) 11:55:59 11:55:59 if isinstance(timeout, tuple): 11:55:59 try: 11:55:59 connect, read = timeout 11:55:59 timeout = TimeoutSauce(connect=connect, read=read) 11:55:59 except ValueError: 11:55:59 raise ValueError( 11:55:59 f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 11:55:59 f"or a single float to set both timeouts to the same value." 11:55:59 ) 11:55:59 elif isinstance(timeout, TimeoutSauce): 11:55:59 pass 11:55:59 else: 11:55:59 timeout = TimeoutSauce(connect=timeout, read=timeout) 11:55:59 11:55:59 try: 11:55:59 resp = conn.urlopen( 11:55:59 method=request.method, 11:55:59 url=url, 11:55:59 body=request.body, 11:55:59 headers=request.headers, 11:55:59 redirect=False, 11:55:59 assert_same_host=False, 11:55:59 preload_content=False, 11:55:59 decode_content=False, 11:55:59 retries=self.max_retries, 11:55:59 timeout=timeout, 11:55:59 chunked=chunked, 11:55:59 ) 11:55:59 11:55:59 except (ProtocolError, OSError) as err: 11:55:59 raise ConnectionError(err, request=request) 11:55:59 11:55:59 except MaxRetryError as e: 11:55:59 if isinstance(e.reason, ConnectTimeoutError): 11:55:59 # TODO: Remove this in 3.0.0: see #2811 11:55:59 if not isinstance(e.reason, NewConnectionError): 11:55:59 raise ConnectTimeout(e, request=request) 11:55:59 11:55:59 if isinstance(e.reason, ResponseError): 11:55:59 raise RetryError(e, request=request) 11:55:59 11:55:59 if isinstance(e.reason, _ProxyError): 11:55:59 raise ProxyError(e, request=request) 11:55:59 11:55:59 if isinstance(e.reason, _SSLError): 11:55:59 # This branch is for urllib3 v1.22 and later. 
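[editor's note] None of the specialised branches above match a plain refused connection, so the MaxRetryError falls through to the requests.exceptions.ConnectionError raised just below. A hypothetical wrapper around the helper seen in these tracebacks (the try/except is not in the suite; the import path is inferred from the file path transportpce_tests/common/test_utils.py) could report that once with a clearer message:

import requests
from transportpce_tests.common import test_utils

def rpc_or_fail(module, rpc, payload):
    # Chain seen in this log: ECONNREFUSED (errno 111) -> NewConnectionError
    # -> MaxRetryError -> requests.exceptions.ConnectionError.
    try:
        return test_utils.transportpce_api_rpc_request(module, rpc, payload)
    except requests.exceptions.ConnectionError as exc:
        raise RuntimeError("RESTCONF on localhost:8183 is unreachable") from exc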
11:55:59 raise SSLError(e, request=request) 11:55:59 11:55:59 > raise ConnectionError(e, request=request) 11:55:59 E requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=8183): Max retries exceeded with url: /rests/operations/org-openroadm-service:service-delete (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 11:55:59 11:55:59 ../.tox/tests_tapi/lib/python3.11/site-packages/requests/adapters.py:700: ConnectionError 11:55:59 ----------------------------- Captured stdout call ----------------------------- 11:55:59 execution of test_38_delete_ODU4_service 11:55:59 _____________ TransportTapitesting.test_39_delete_OCH_OTU4_service _____________ 11:55:59 11:55:59 self = 11:55:59 11:55:59 def _new_conn(self) -> socket.socket: 11:55:59 """Establish a socket connection and set nodelay settings on it. 11:55:59 11:55:59 :return: New socket connection. 11:55:59 """ 11:55:59 try: 11:55:59 > sock = connection.create_connection( 11:55:59 (self._dns_host, self.port), 11:55:59 self.timeout, 11:55:59 source_address=self.source_address, 11:55:59 socket_options=self.socket_options, 11:55:59 ) 11:55:59 11:55:59 ../.tox/tests_tapi/lib/python3.11/site-packages/urllib3/connection.py:198: 11:55:59 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 11:55:59 ../.tox/tests_tapi/lib/python3.11/site-packages/urllib3/util/connection.py:85: in create_connection 11:55:59 raise err 11:55:59 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 11:55:59 11:55:59 address = ('localhost', 8183), timeout = 30, source_address = None 11:55:59 socket_options = [(6, 1, 1)] 11:55:59 11:55:59 def create_connection( 11:55:59 address: tuple[str, int], 11:55:59 timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 11:55:59 source_address: tuple[str, int] | None = None, 11:55:59 socket_options: _TYPE_SOCKET_OPTIONS | None = None, 11:55:59 ) -> socket.socket: 11:55:59 """Connect to *address* and return the socket object. 11:55:59 11:55:59 Convenience function. Connect to *address* (a 2-tuple ``(host, 11:55:59 port)``) and return the socket object. Passing the optional 11:55:59 *timeout* parameter will set the timeout on the socket instance 11:55:59 before attempting to connect. If no *timeout* is supplied, the 11:55:59 global default timeout setting returned by :func:`socket.getdefaulttimeout` 11:55:59 is used. If *source_address* is set it must be a tuple of (host, port) 11:55:59 for the socket to bind as a source address before making the connection. 11:55:59 An host of '' or port 0 tells the OS to use the default. 11:55:59 """ 11:55:59 11:55:59 host, port = address 11:55:59 if host.startswith("["): 11:55:59 host = host.strip("[]") 11:55:59 err = None 11:55:59 11:55:59 # Using the value from allowed_gai_family() in the context of getaddrinfo lets 11:55:59 # us select whether to work with IPv4 DNS records, IPv6 records, or both. 11:55:59 # The original create_connection function always returns all records. 11:55:59 family = allowed_gai_family() 11:55:59 11:55:59 try: 11:55:59 host.encode("idna") 11:55:59 except UnicodeError: 11:55:59 raise LocationParseError(f"'{host}', label empty or too long") from None 11:55:59 11:55:59 for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM): 11:55:59 af, socktype, proto, canonname, sa = res 11:55:59 sock = None 11:55:59 try: 11:55:59 sock = socket.socket(af, socktype, proto) 11:55:59 11:55:59 # If provided, set socket level options before connecting. 
11:55:59 _set_socket_options(sock, socket_options) 11:55:59 11:55:59 if timeout is not _DEFAULT_TIMEOUT: 11:55:59 sock.settimeout(timeout) 11:55:59 if source_address: 11:55:59 sock.bind(source_address) 11:55:59 > sock.connect(sa) 11:55:59 E ConnectionRefusedError: [Errno 111] Connection refused 11:55:59 11:55:59 ../.tox/tests_tapi/lib/python3.11/site-packages/urllib3/util/connection.py:73: ConnectionRefusedError 11:55:59 11:55:59 The above exception was the direct cause of the following exception: 11:55:59 11:55:59 self = 11:55:59 method = 'POST', url = '/rests/operations/org-openroadm-service:service-delete' 11:55:59 body = '{"input": {"sdnc-request-header": {"request-id": "e3028bae-a90f-4ddd-a83f-cf224eba0e58", "rpc-action": "service-delet...otificationServer/notify"}, "service-delete-req-info": {"service-name": "service1-OCH-OTU4", "tail-retention": "no"}}}' 11:55:59 headers = {'User-Agent': 'python-requests/2.32.4', 'Accept-Encoding': 'gzip, deflate', 'Accept': 'application/json', 'Connection': 'keep-alive', 'Content-Type': 'application/json', 'Content-Length': '315', 'Authorization': 'Basic YWRtaW46YWRtaW4='} 11:55:59 retries = Retry(total=0, connect=None, read=False, redirect=None, status=None) 11:55:59 redirect = False, assert_same_host = False 11:55:59 timeout = Timeout(connect=30, read=30, total=None), pool_timeout = None 11:55:59 release_conn = False, chunked = False, body_pos = None, preload_content = False 11:55:59 decode_content = False, response_kw = {} 11:55:59 parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/rests/operations/org-openroadm-service:service-delete', query=None, fragment=None) 11:55:59 destination_scheme = None, conn = None, release_this_conn = True 11:55:59 http_tunnel_required = False, err = None, clean_exit = False 11:55:59 11:55:59 def urlopen( # type: ignore[override] 11:55:59 self, 11:55:59 method: str, 11:55:59 url: str, 11:55:59 body: _TYPE_BODY | None = None, 11:55:59 headers: typing.Mapping[str, str] | None = None, 11:55:59 retries: Retry | bool | int | None = None, 11:55:59 redirect: bool = True, 11:55:59 assert_same_host: bool = True, 11:55:59 timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 11:55:59 pool_timeout: int | None = None, 11:55:59 release_conn: bool | None = None, 11:55:59 chunked: bool = False, 11:55:59 body_pos: _TYPE_BODY_POSITION | None = None, 11:55:59 preload_content: bool = True, 11:55:59 decode_content: bool = True, 11:55:59 **response_kw: typing.Any, 11:55:59 ) -> BaseHTTPResponse: 11:55:59 """ 11:55:59 Get a connection from the pool and perform an HTTP request. This is the 11:55:59 lowest level call for making a request, so you'll need to specify all 11:55:59 the raw details. 11:55:59 11:55:59 .. note:: 11:55:59 11:55:59 More commonly, it's appropriate to use a convenience method 11:55:59 such as :meth:`request`. 11:55:59 11:55:59 .. note:: 11:55:59 11:55:59 `release_conn` will only behave as expected if 11:55:59 `preload_content=False` because we want to make 11:55:59 `preload_content=False` the default behaviour someday soon without 11:55:59 breaking backwards compatibility. 11:55:59 11:55:59 :param method: 11:55:59 HTTP request method (such as GET, POST, PUT, etc.) 11:55:59 11:55:59 :param url: 11:55:59 The URL to perform the request on. 11:55:59 11:55:59 :param body: 11:55:59 Data to send in the request body, either :class:`str`, :class:`bytes`, 11:55:59 an iterable of :class:`str`/:class:`bytes`, or a file-like object. 
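[editor's note] The root failure is the bare sock.connect() above: errno 111 means nothing is listening on localhost:8183 at this point in the run. The same check reduced to the standard library (illustrative only; the helper name is mine, not part of test_utils):

import socket

def restconf_listening(host="localhost", port=8183, timeout=5.0):
    # Mirrors what urllib3's create_connection() does before any HTTP is sent.
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:  # ConnectionRefusedError (errno 111) is an OSError subclass
        return False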
11:55:59 11:55:59 :param headers: 11:55:59 Dictionary of custom headers to send, such as User-Agent, 11:55:59 If-None-Match, etc. If None, pool headers are used. If provided, 11:55:59 these headers completely replace any pool-specific headers. 11:55:59 11:55:59 :param retries: 11:55:59 Configure the number of retries to allow before raising a 11:55:59 :class:`~urllib3.exceptions.MaxRetryError` exception. 11:55:59 11:55:59 If ``None`` (default) will retry 3 times, see ``Retry.DEFAULT``. Pass a 11:55:59 :class:`~urllib3.util.retry.Retry` object for fine-grained control 11:55:59 over different types of retries. 11:55:59 Pass an integer number to retry connection errors that many times, 11:55:59 but no other types of errors. Pass zero to never retry. 11:55:59 11:55:59 If ``False``, then retries are disabled and any exception is raised 11:55:59 immediately. Also, instead of raising a MaxRetryError on redirects, 11:55:59 the redirect response will be returned. 11:55:59 11:55:59 :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. 11:55:59 11:55:59 :param redirect: 11:55:59 If True, automatically handle redirects (status codes 301, 302, 11:55:59 303, 307, 308). Each redirect counts as a retry. Disabling retries 11:55:59 will disable redirect, too. 11:55:59 11:55:59 :param assert_same_host: 11:55:59 If ``True``, will make sure that the host of the pool requests is 11:55:59 consistent else will raise HostChangedError. When ``False``, you can 11:55:59 use the pool on an HTTP proxy and request foreign hosts. 11:55:59 11:55:59 :param timeout: 11:55:59 If specified, overrides the default timeout for this one 11:55:59 request. It may be a float (in seconds) or an instance of 11:55:59 :class:`urllib3.util.Timeout`. 11:55:59 11:55:59 :param pool_timeout: 11:55:59 If set and the pool is set to block=True, then this method will 11:55:59 block for ``pool_timeout`` seconds and raise EmptyPoolError if no 11:55:59 connection is available within the time period. 11:55:59 11:55:59 :param bool preload_content: 11:55:59 If True, the response's body will be preloaded into memory. 11:55:59 11:55:59 :param bool decode_content: 11:55:59 If True, will attempt to decode the body based on the 11:55:59 'content-encoding' header. 11:55:59 11:55:59 :param release_conn: 11:55:59 If False, then the urlopen call will not release the connection 11:55:59 back into the pool once a response is received (but will release if 11:55:59 you read the entire contents of the response such as when 11:55:59 `preload_content=True`). This is useful if you're not preloading 11:55:59 the response's content immediately. You will need to call 11:55:59 ``r.release_conn()`` on the response ``r`` to return the connection 11:55:59 back into the pool. If None, it takes the value of ``preload_content`` 11:55:59 which defaults to ``True``. 11:55:59 11:55:59 :param bool chunked: 11:55:59 If True, urllib3 will send the body using chunked transfer 11:55:59 encoding. Otherwise, urllib3 will send the body using the standard 11:55:59 content-length form. Defaults to False. 11:55:59 11:55:59 :param int body_pos: 11:55:59 Position to seek to in file-like body in the event of a retry or 11:55:59 redirect. Typically this won't need to be set because urllib3 will 11:55:59 auto-populate the value when needed. 
11:55:59 """ 11:55:59 parsed_url = parse_url(url) 11:55:59 destination_scheme = parsed_url.scheme 11:55:59 11:55:59 if headers is None: 11:55:59 headers = self.headers 11:55:59 11:55:59 if not isinstance(retries, Retry): 11:55:59 retries = Retry.from_int(retries, redirect=redirect, default=self.retries) 11:55:59 11:55:59 if release_conn is None: 11:55:59 release_conn = preload_content 11:55:59 11:55:59 # Check host 11:55:59 if assert_same_host and not self.is_same_host(url): 11:55:59 raise HostChangedError(self, url, retries) 11:55:59 11:55:59 # Ensure that the URL we're connecting to is properly encoded 11:55:59 if url.startswith("/"): 11:55:59 url = to_str(_encode_target(url)) 11:55:59 else: 11:55:59 url = to_str(parsed_url.url) 11:55:59 11:55:59 conn = None 11:55:59 11:55:59 # Track whether `conn` needs to be released before 11:55:59 # returning/raising/recursing. Update this variable if necessary, and 11:55:59 # leave `release_conn` constant throughout the function. That way, if 11:55:59 # the function recurses, the original value of `release_conn` will be 11:55:59 # passed down into the recursive call, and its value will be respected. 11:55:59 # 11:55:59 # See issue #651 [1] for details. 11:55:59 # 11:55:59 # [1] 11:55:59 release_this_conn = release_conn 11:55:59 11:55:59 http_tunnel_required = connection_requires_http_tunnel( 11:55:59 self.proxy, self.proxy_config, destination_scheme 11:55:59 ) 11:55:59 11:55:59 # Merge the proxy headers. Only done when not using HTTP CONNECT. We 11:55:59 # have to copy the headers dict so we can safely change it without those 11:55:59 # changes being reflected in anyone else's copy. 11:55:59 if not http_tunnel_required: 11:55:59 headers = headers.copy() # type: ignore[attr-defined] 11:55:59 headers.update(self.proxy_headers) # type: ignore[union-attr] 11:55:59 11:55:59 # Must keep the exception bound to a separate variable or else Python 3 11:55:59 # complains about UnboundLocalError. 11:55:59 err = None 11:55:59 11:55:59 # Keep track of whether we cleanly exited the except block. This 11:55:59 # ensures we do proper cleanup in finally. 11:55:59 clean_exit = False 11:55:59 11:55:59 # Rewind body position, if needed. Record current position 11:55:59 # for future rewinds in the event of a redirect/retry. 11:55:59 body_pos = set_file_position(body, body_pos) 11:55:59 11:55:59 try: 11:55:59 # Request a connection from the queue. 11:55:59 timeout_obj = self._get_timeout(timeout) 11:55:59 conn = self._get_conn(timeout=pool_timeout) 11:55:59 11:55:59 conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] 11:55:59 11:55:59 # Is this a closed/new connection that requires CONNECT tunnelling? 11:55:59 if self.proxy is not None and http_tunnel_required and conn.is_closed: 11:55:59 try: 11:55:59 self._prepare_proxy(conn) 11:55:59 except (BaseSSLError, OSError, SocketTimeout) as e: 11:55:59 self._raise_timeout( 11:55:59 err=e, url=self.proxy.url, timeout_value=conn.timeout 11:55:59 ) 11:55:59 raise 11:55:59 11:55:59 # If we're going to release the connection in ``finally:``, then 11:55:59 # the response doesn't need to know about the connection. Otherwise 11:55:59 # it will also try to release it and we'll have a double-release 11:55:59 # mess. 
11:55:59 response_conn = conn if not release_conn else None 11:55:59 11:55:59 # Make the request on the HTTPConnection object 11:55:59 > response = self._make_request( 11:55:59 conn, 11:55:59 method, 11:55:59 url, 11:55:59 timeout=timeout_obj, 11:55:59 body=body, 11:55:59 headers=headers, 11:55:59 chunked=chunked, 11:55:59 retries=retries, 11:55:59 response_conn=response_conn, 11:55:59 preload_content=preload_content, 11:55:59 decode_content=decode_content, 11:55:59 **response_kw, 11:55:59 ) 11:55:59 11:55:59 ../.tox/tests_tapi/lib/python3.11/site-packages/urllib3/connectionpool.py:787: 11:55:59 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 11:55:59 ../.tox/tests_tapi/lib/python3.11/site-packages/urllib3/connectionpool.py:493: in _make_request 11:55:59 conn.request( 11:55:59 ../.tox/tests_tapi/lib/python3.11/site-packages/urllib3/connection.py:494: in request 11:55:59 self.endheaders() 11:55:59 /opt/pyenv/versions/3.11.10/lib/python3.11/http/client.py:1298: in endheaders 11:55:59 self._send_output(message_body, encode_chunked=encode_chunked) 11:55:59 /opt/pyenv/versions/3.11.10/lib/python3.11/http/client.py:1058: in _send_output 11:55:59 self.send(msg) 11:55:59 /opt/pyenv/versions/3.11.10/lib/python3.11/http/client.py:996: in send 11:55:59 self.connect() 11:55:59 ../.tox/tests_tapi/lib/python3.11/site-packages/urllib3/connection.py:325: in connect 11:55:59 self.sock = self._new_conn() 11:55:59 ^^^^^^^^^^^^^^^^ 11:55:59 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 11:55:59 11:55:59 self = 11:55:59 11:55:59 def _new_conn(self) -> socket.socket: 11:55:59 """Establish a socket connection and set nodelay settings on it. 11:55:59 11:55:59 :return: New socket connection. 11:55:59 """ 11:55:59 try: 11:55:59 sock = connection.create_connection( 11:55:59 (self._dns_host, self.port), 11:55:59 self.timeout, 11:55:59 source_address=self.source_address, 11:55:59 socket_options=self.socket_options, 11:55:59 ) 11:55:59 except socket.gaierror as e: 11:55:59 raise NameResolutionError(self.host, self, e) from e 11:55:59 except SocketTimeout as e: 11:55:59 raise ConnectTimeoutError( 11:55:59 self, 11:55:59 f"Connection to {self.host} timed out. (connect timeout={self.timeout})", 11:55:59 ) from e 11:55:59 11:55:59 except OSError as e: 11:55:59 > raise NewConnectionError( 11:55:59 self, f"Failed to establish a new connection: {e}" 11:55:59 ) from e 11:55:59 E urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused 11:55:59 11:55:59 ../.tox/tests_tapi/lib/python3.11/site-packages/urllib3/connection.py:213: NewConnectionError 11:55:59 11:55:59 The above exception was the direct cause of the following exception: 11:55:59 11:55:59 self = 11:55:59 request = , stream = False 11:55:59 timeout = Timeout(connect=30, read=30, total=None), verify = True, cert = None 11:55:59 proxies = OrderedDict() 11:55:59 11:55:59 def send( 11:55:59 self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 11:55:59 ): 11:55:59 """Sends PreparedRequest object. Returns Response object. 11:55:59 11:55:59 :param request: The :class:`PreparedRequest ` being sent. 11:55:59 :param stream: (optional) Whether to stream the request content. 11:55:59 :param timeout: (optional) How long to wait for the server to send 11:55:59 data before giving up, as a float, or a :ref:`(connect timeout, 11:55:59 read timeout) ` tuple. 
11:55:59 :type timeout: float or tuple or urllib3 Timeout object 11:55:59 :param verify: (optional) Either a boolean, in which case it controls whether 11:55:59 we verify the server's TLS certificate, or a string, in which case it 11:55:59 must be a path to a CA bundle to use 11:55:59 :param cert: (optional) Any user-provided SSL certificate to be trusted. 11:55:59 :param proxies: (optional) The proxies dictionary to apply to the request. 11:55:59 :rtype: requests.Response 11:55:59 """ 11:55:59 11:55:59 try: 11:55:59 conn = self.get_connection_with_tls_context( 11:55:59 request, verify, proxies=proxies, cert=cert 11:55:59 ) 11:55:59 except LocationValueError as e: 11:55:59 raise InvalidURL(e, request=request) 11:55:59 11:55:59 self.cert_verify(conn, request.url, verify, cert) 11:55:59 url = self.request_url(request, proxies) 11:55:59 self.add_headers( 11:55:59 request, 11:55:59 stream=stream, 11:55:59 timeout=timeout, 11:55:59 verify=verify, 11:55:59 cert=cert, 11:55:59 proxies=proxies, 11:55:59 ) 11:55:59 11:55:59 chunked = not (request.body is None or "Content-Length" in request.headers) 11:55:59 11:55:59 if isinstance(timeout, tuple): 11:55:59 try: 11:55:59 connect, read = timeout 11:55:59 timeout = TimeoutSauce(connect=connect, read=read) 11:55:59 except ValueError: 11:55:59 raise ValueError( 11:55:59 f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 11:55:59 f"or a single float to set both timeouts to the same value." 11:55:59 ) 11:55:59 elif isinstance(timeout, TimeoutSauce): 11:55:59 pass 11:55:59 else: 11:55:59 timeout = TimeoutSauce(connect=timeout, read=timeout) 11:55:59 11:55:59 try: 11:55:59 > resp = conn.urlopen( 11:55:59 method=request.method, 11:55:59 url=url, 11:55:59 body=request.body, 11:55:59 headers=request.headers, 11:55:59 redirect=False, 11:55:59 assert_same_host=False, 11:55:59 preload_content=False, 11:55:59 decode_content=False, 11:55:59 retries=self.max_retries, 11:55:59 timeout=timeout, 11:55:59 chunked=chunked, 11:55:59 ) 11:55:59 11:55:59 ../.tox/tests_tapi/lib/python3.11/site-packages/requests/adapters.py:667: 11:55:59 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 11:55:59 ../.tox/tests_tapi/lib/python3.11/site-packages/urllib3/connectionpool.py:841: in urlopen 11:55:59 retries = retries.increment( 11:55:59 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 11:55:59 11:55:59 self = Retry(total=0, connect=None, read=False, redirect=None, status=None) 11:55:59 method = 'POST', url = '/rests/operations/org-openroadm-service:service-delete' 11:55:59 response = None 11:55:59 error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused') 11:55:59 _pool = 11:55:59 _stacktrace = 11:55:59 11:55:59 def increment( 11:55:59 self, 11:55:59 method: str | None = None, 11:55:59 url: str | None = None, 11:55:59 response: BaseHTTPResponse | None = None, 11:55:59 error: Exception | None = None, 11:55:59 _pool: ConnectionPool | None = None, 11:55:59 _stacktrace: TracebackType | None = None, 11:55:59 ) -> Self: 11:55:59 """Return a new Retry object with incremented retry counters. 11:55:59 11:55:59 :param response: A response object, or None, if the server did not 11:55:59 return a response. 11:55:59 :type response: :class:`~urllib3.response.BaseHTTPResponse` 11:55:59 :param Exception error: An error encountered during the request, or 11:55:59 None if the response was received successfully. 11:55:59 11:55:59 :return: A new ``Retry`` object. 
11:55:59 """ 11:55:59 if self.total is False and error: 11:55:59 # Disabled, indicate to re-raise the error. 11:55:59 raise reraise(type(error), error, _stacktrace) 11:55:59 11:55:59 total = self.total 11:55:59 if total is not None: 11:55:59 total -= 1 11:55:59 11:55:59 connect = self.connect 11:55:59 read = self.read 11:55:59 redirect = self.redirect 11:55:59 status_count = self.status 11:55:59 other = self.other 11:55:59 cause = "unknown" 11:55:59 status = None 11:55:59 redirect_location = None 11:55:59 11:55:59 if error and self._is_connection_error(error): 11:55:59 # Connect retry? 11:55:59 if connect is False: 11:55:59 raise reraise(type(error), error, _stacktrace) 11:55:59 elif connect is not None: 11:55:59 connect -= 1 11:55:59 11:55:59 elif error and self._is_read_error(error): 11:55:59 # Read retry? 11:55:59 if read is False or method is None or not self._is_method_retryable(method): 11:55:59 raise reraise(type(error), error, _stacktrace) 11:55:59 elif read is not None: 11:55:59 read -= 1 11:55:59 11:55:59 elif error: 11:55:59 # Other retry? 11:55:59 if other is not None: 11:55:59 other -= 1 11:55:59 11:55:59 elif response and response.get_redirect_location(): 11:55:59 # Redirect retry? 11:55:59 if redirect is not None: 11:55:59 redirect -= 1 11:55:59 cause = "too many redirects" 11:55:59 response_redirect_location = response.get_redirect_location() 11:55:59 if response_redirect_location: 11:55:59 redirect_location = response_redirect_location 11:55:59 status = response.status 11:55:59 11:55:59 else: 11:55:59 # Incrementing because of a server error like a 500 in 11:55:59 # status_forcelist and the given method is in the allowed_methods 11:55:59 cause = ResponseError.GENERIC_ERROR 11:55:59 if response and response.status: 11:55:59 if status_count is not None: 11:55:59 status_count -= 1 11:55:59 cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status) 11:55:59 status = response.status 11:55:59 11:55:59 history = self.history + ( 11:55:59 RequestHistory(method, url, error, status, redirect_location), 11:55:59 ) 11:55:59 11:55:59 new_retry = self.new( 11:55:59 total=total, 11:55:59 connect=connect, 11:55:59 read=read, 11:55:59 redirect=redirect, 11:55:59 status=status_count, 11:55:59 other=other, 11:55:59 history=history, 11:55:59 ) 11:55:59 11:55:59 if new_retry.is_exhausted(): 11:55:59 reason = error or ResponseError(cause) 11:55:59 > raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type] 11:55:59 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 11:55:59 E urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=8183): Max retries exceeded with url: /rests/operations/org-openroadm-service:service-delete (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 11:55:59 11:55:59 ../.tox/tests_tapi/lib/python3.11/site-packages/urllib3/util/retry.py:519: MaxRetryError 11:55:59 11:55:59 During handling of the above exception, another exception occurred: 11:55:59 11:55:59 self = 11:55:59 11:55:59 def test_39_delete_OCH_OTU4_service(self): 11:55:59 self.del_serv_input_data["service-delete-req-info"]["service-name"] = "service1-OCH-OTU4" 11:55:59 > response = test_utils.transportpce_api_rpc_request( 11:55:59 'org-openroadm-service', 'service-delete', 11:55:59 self.del_serv_input_data) 11:55:59 11:55:59 transportpce_tests/tapi/test01_abstracted_topology.py:598: 11:55:59 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 11:55:59 
transportpce_tests/common/test_utils.py:729: in transportpce_api_rpc_request 11:55:59 response = post_request(url, data) 11:55:59 ^^^^^^^^^^^^^^^^^^^^^^^ 11:55:59 transportpce_tests/common/test_utils.py:142: in post_request 11:55:59 return requests.request( 11:55:59 ../.tox/tests_tapi/lib/python3.11/site-packages/requests/api.py:59: in request 11:55:59 return session.request(method=method, url=url, **kwargs) 11:55:59 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 11:55:59 ../.tox/tests_tapi/lib/python3.11/site-packages/requests/sessions.py:589: in request 11:55:59 resp = self.send(prep, **send_kwargs) 11:55:59 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 11:55:59 ../.tox/tests_tapi/lib/python3.11/site-packages/requests/sessions.py:703: in send 11:55:59 r = adapter.send(request, **kwargs) 11:55:59 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 11:55:59 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 11:55:59 11:55:59 self = 11:55:59 request = , stream = False 11:55:59 timeout = Timeout(connect=30, read=30, total=None), verify = True, cert = None 11:55:59 proxies = OrderedDict() 11:55:59 11:55:59 def send( 11:55:59 self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 11:55:59 ): 11:55:59 """Sends PreparedRequest object. Returns Response object. 11:55:59 11:55:59 :param request: The :class:`PreparedRequest ` being sent. 11:55:59 :param stream: (optional) Whether to stream the request content. 11:55:59 :param timeout: (optional) How long to wait for the server to send 11:55:59 data before giving up, as a float, or a :ref:`(connect timeout, 11:55:59 read timeout) ` tuple. 11:55:59 :type timeout: float or tuple or urllib3 Timeout object 11:55:59 :param verify: (optional) Either a boolean, in which case it controls whether 11:55:59 we verify the server's TLS certificate, or a string, in which case it 11:55:59 must be a path to a CA bundle to use 11:55:59 :param cert: (optional) Any user-provided SSL certificate to be trusted. 11:55:59 :param proxies: (optional) The proxies dictionary to apply to the request. 11:55:59 :rtype: requests.Response 11:55:59 """ 11:55:59 11:55:59 try: 11:55:59 conn = self.get_connection_with_tls_context( 11:55:59 request, verify, proxies=proxies, cert=cert 11:55:59 ) 11:55:59 except LocationValueError as e: 11:55:59 raise InvalidURL(e, request=request) 11:55:59 11:55:59 self.cert_verify(conn, request.url, verify, cert) 11:55:59 url = self.request_url(request, proxies) 11:55:59 self.add_headers( 11:55:59 request, 11:55:59 stream=stream, 11:55:59 timeout=timeout, 11:55:59 verify=verify, 11:55:59 cert=cert, 11:55:59 proxies=proxies, 11:55:59 ) 11:55:59 11:55:59 chunked = not (request.body is None or "Content-Length" in request.headers) 11:55:59 11:55:59 if isinstance(timeout, tuple): 11:55:59 try: 11:55:59 connect, read = timeout 11:55:59 timeout = TimeoutSauce(connect=connect, read=read) 11:55:59 except ValueError: 11:55:59 raise ValueError( 11:55:59 f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 11:55:59 f"or a single float to set both timeouts to the same value." 
11:55:59 ) 11:55:59 elif isinstance(timeout, TimeoutSauce): 11:55:59 pass 11:55:59 else: 11:55:59 timeout = TimeoutSauce(connect=timeout, read=timeout) 11:55:59 11:55:59 try: 11:55:59 resp = conn.urlopen( 11:55:59 method=request.method, 11:55:59 url=url, 11:55:59 body=request.body, 11:55:59 headers=request.headers, 11:55:59 redirect=False, 11:55:59 assert_same_host=False, 11:55:59 preload_content=False, 11:55:59 decode_content=False, 11:55:59 retries=self.max_retries, 11:55:59 timeout=timeout, 11:55:59 chunked=chunked, 11:55:59 ) 11:55:59 11:55:59 except (ProtocolError, OSError) as err: 11:55:59 raise ConnectionError(err, request=request) 11:55:59 11:55:59 except MaxRetryError as e: 11:55:59 if isinstance(e.reason, ConnectTimeoutError): 11:55:59 # TODO: Remove this in 3.0.0: see #2811 11:55:59 if not isinstance(e.reason, NewConnectionError): 11:55:59 raise ConnectTimeout(e, request=request) 11:55:59 11:55:59 if isinstance(e.reason, ResponseError): 11:55:59 raise RetryError(e, request=request) 11:55:59 11:55:59 if isinstance(e.reason, _ProxyError): 11:55:59 raise ProxyError(e, request=request) 11:55:59 11:55:59 if isinstance(e.reason, _SSLError): 11:55:59 # This branch is for urllib3 v1.22 and later. 11:55:59 raise SSLError(e, request=request) 11:55:59 11:55:59 > raise ConnectionError(e, request=request) 11:55:59 E requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=8183): Max retries exceeded with url: /rests/operations/org-openroadm-service:service-delete (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 11:55:59 11:55:59 ../.tox/tests_tapi/lib/python3.11/site-packages/requests/adapters.py:700: ConnectionError 11:55:59 ----------------------------- Captured stdout call ----------------------------- 11:55:59 execution of test_39_delete_OCH_OTU4_service 11:55:59 _____________ TransportTapitesting.test_40_check_tapi_topology_T0 ______________ 11:55:59 11:55:59 self = 11:55:59 11:55:59 def _new_conn(self) -> socket.socket: 11:55:59 """Establish a socket connection and set nodelay settings on it. 11:55:59 11:55:59 :return: New socket connection. 11:55:59 """ 11:55:59 try: 11:55:59 > sock = connection.create_connection( 11:55:59 (self._dns_host, self.port), 11:55:59 self.timeout, 11:55:59 source_address=self.source_address, 11:55:59 socket_options=self.socket_options, 11:55:59 ) 11:55:59 11:55:59 ../.tox/tests_tapi/lib/python3.11/site-packages/urllib3/connection.py:198: 11:55:59 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 11:55:59 ../.tox/tests_tapi/lib/python3.11/site-packages/urllib3/util/connection.py:85: in create_connection 11:55:59 raise err 11:55:59 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 11:55:59 11:55:59 address = ('localhost', 8183), timeout = 30, source_address = None 11:55:59 socket_options = [(6, 1, 1)] 11:55:59 11:55:59 def create_connection( 11:55:59 address: tuple[str, int], 11:55:59 timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 11:55:59 source_address: tuple[str, int] | None = None, 11:55:59 socket_options: _TYPE_SOCKET_OPTIONS | None = None, 11:55:59 ) -> socket.socket: 11:55:59 """Connect to *address* and return the socket object. 11:55:59 11:55:59 Convenience function. Connect to *address* (a 2-tuple ``(host, 11:55:59 port)``) and return the socket object. Passing the optional 11:55:59 *timeout* parameter will set the timeout on the socket instance 11:55:59 before attempting to connect. 
If no *timeout* is supplied, the 11:55:59 global default timeout setting returned by :func:`socket.getdefaulttimeout` 11:55:59 is used. If *source_address* is set it must be a tuple of (host, port) 11:55:59 for the socket to bind as a source address before making the connection. 11:55:59 An host of '' or port 0 tells the OS to use the default. 11:55:59 """ 11:55:59 11:55:59 host, port = address 11:55:59 if host.startswith("["): 11:55:59 host = host.strip("[]") 11:55:59 err = None 11:55:59 11:55:59 # Using the value from allowed_gai_family() in the context of getaddrinfo lets 11:55:59 # us select whether to work with IPv4 DNS records, IPv6 records, or both. 11:55:59 # The original create_connection function always returns all records. 11:55:59 family = allowed_gai_family() 11:55:59 11:55:59 try: 11:55:59 host.encode("idna") 11:55:59 except UnicodeError: 11:55:59 raise LocationParseError(f"'{host}', label empty or too long") from None 11:55:59 11:55:59 for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM): 11:55:59 af, socktype, proto, canonname, sa = res 11:55:59 sock = None 11:55:59 try: 11:55:59 sock = socket.socket(af, socktype, proto) 11:55:59 11:55:59 # If provided, set socket level options before connecting. 11:55:59 _set_socket_options(sock, socket_options) 11:55:59 11:55:59 if timeout is not _DEFAULT_TIMEOUT: 11:55:59 sock.settimeout(timeout) 11:55:59 if source_address: 11:55:59 sock.bind(source_address) 11:55:59 > sock.connect(sa) 11:55:59 E ConnectionRefusedError: [Errno 111] Connection refused 11:55:59 11:55:59 ../.tox/tests_tapi/lib/python3.11/site-packages/urllib3/util/connection.py:73: ConnectionRefusedError 11:55:59 11:55:59 The above exception was the direct cause of the following exception: 11:55:59 11:55:59 self = 11:55:59 method = 'POST', url = '/rests/operations/tapi-topology:get-topology-details' 11:55:59 body = '{"input": {"topology-id": "747c670e-7a07-3dab-b379-5b1cd17402a3"}}' 11:55:59 headers = {'User-Agent': 'python-requests/2.32.4', 'Accept-Encoding': 'gzip, deflate', 'Accept': 'application/json', 'Connection': 'keep-alive', 'Content-Type': 'application/json', 'Content-Length': '66', 'Authorization': 'Basic YWRtaW46YWRtaW4='} 11:55:59 retries = Retry(total=0, connect=None, read=False, redirect=None, status=None) 11:55:59 redirect = False, assert_same_host = False 11:55:59 timeout = Timeout(connect=30, read=30, total=None), pool_timeout = None 11:55:59 release_conn = False, chunked = False, body_pos = None, preload_content = False 11:55:59 decode_content = False, response_kw = {} 11:55:59 parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/rests/operations/tapi-topology:get-topology-details', query=None, fragment=None) 11:55:59 destination_scheme = None, conn = None, release_this_conn = True 11:55:59 http_tunnel_required = False, err = None, clean_exit = False 11:55:59 11:55:59 def urlopen( # type: ignore[override] 11:55:59 self, 11:55:59 method: str, 11:55:59 url: str, 11:55:59 body: _TYPE_BODY | None = None, 11:55:59 headers: typing.Mapping[str, str] | None = None, 11:55:59 retries: Retry | bool | int | None = None, 11:55:59 redirect: bool = True, 11:55:59 assert_same_host: bool = True, 11:55:59 timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 11:55:59 pool_timeout: int | None = None, 11:55:59 release_conn: bool | None = None, 11:55:59 chunked: bool = False, 11:55:59 body_pos: _TYPE_BODY_POSITION | None = None, 11:55:59 preload_content: bool = True, 11:55:59 decode_content: bool = True, 11:55:59 **response_kw: typing.Any, 11:55:59 ) -> 
BaseHTTPResponse: 11:55:59 """ 11:55:59 Get a connection from the pool and perform an HTTP request. This is the 11:55:59 lowest level call for making a request, so you'll need to specify all 11:55:59 the raw details. 11:55:59 11:55:59 .. note:: 11:55:59 11:55:59 More commonly, it's appropriate to use a convenience method 11:55:59 such as :meth:`request`. 11:55:59 11:55:59 .. note:: 11:55:59 11:55:59 `release_conn` will only behave as expected if 11:55:59 `preload_content=False` because we want to make 11:55:59 `preload_content=False` the default behaviour someday soon without 11:55:59 breaking backwards compatibility. 11:55:59 11:55:59 :param method: 11:55:59 HTTP request method (such as GET, POST, PUT, etc.) 11:55:59 11:55:59 :param url: 11:55:59 The URL to perform the request on. 11:55:59 11:55:59 :param body: 11:55:59 Data to send in the request body, either :class:`str`, :class:`bytes`, 11:55:59 an iterable of :class:`str`/:class:`bytes`, or a file-like object. 11:55:59 11:55:59 :param headers: 11:55:59 Dictionary of custom headers to send, such as User-Agent, 11:55:59 If-None-Match, etc. If None, pool headers are used. If provided, 11:55:59 these headers completely replace any pool-specific headers. 11:55:59 11:55:59 :param retries: 11:55:59 Configure the number of retries to allow before raising a 11:55:59 :class:`~urllib3.exceptions.MaxRetryError` exception. 11:55:59 11:55:59 If ``None`` (default) will retry 3 times, see ``Retry.DEFAULT``. Pass a 11:55:59 :class:`~urllib3.util.retry.Retry` object for fine-grained control 11:55:59 over different types of retries. 11:55:59 Pass an integer number to retry connection errors that many times, 11:55:59 but no other types of errors. Pass zero to never retry. 11:55:59 11:55:59 If ``False``, then retries are disabled and any exception is raised 11:55:59 immediately. Also, instead of raising a MaxRetryError on redirects, 11:55:59 the redirect response will be returned. 11:55:59 11:55:59 :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. 11:55:59 11:55:59 :param redirect: 11:55:59 If True, automatically handle redirects (status codes 301, 302, 11:55:59 303, 307, 308). Each redirect counts as a retry. Disabling retries 11:55:59 will disable redirect, too. 11:55:59 11:55:59 :param assert_same_host: 11:55:59 If ``True``, will make sure that the host of the pool requests is 11:55:59 consistent else will raise HostChangedError. When ``False``, you can 11:55:59 use the pool on an HTTP proxy and request foreign hosts. 11:55:59 11:55:59 :param timeout: 11:55:59 If specified, overrides the default timeout for this one 11:55:59 request. It may be a float (in seconds) or an instance of 11:55:59 :class:`urllib3.util.Timeout`. 11:55:59 11:55:59 :param pool_timeout: 11:55:59 If set and the pool is set to block=True, then this method will 11:55:59 block for ``pool_timeout`` seconds and raise EmptyPoolError if no 11:55:59 connection is available within the time period. 11:55:59 11:55:59 :param bool preload_content: 11:55:59 If True, the response's body will be preloaded into memory. 11:55:59 11:55:59 :param bool decode_content: 11:55:59 If True, will attempt to decode the body based on the 11:55:59 'content-encoding' header. 11:55:59 11:55:59 :param release_conn: 11:55:59 If False, then the urlopen call will not release the connection 11:55:59 back into the pool once a response is received (but will release if 11:55:59 you read the entire contents of the response such as when 11:55:59 `preload_content=True`). 
This is useful if you're not preloading 11:55:59 the response's content immediately. You will need to call 11:55:59 ``r.release_conn()`` on the response ``r`` to return the connection 11:55:59 back into the pool. If None, it takes the value of ``preload_content`` 11:55:59 which defaults to ``True``. 11:55:59 11:55:59 :param bool chunked: 11:55:59 If True, urllib3 will send the body using chunked transfer 11:55:59 encoding. Otherwise, urllib3 will send the body using the standard 11:55:59 content-length form. Defaults to False. 11:55:59 11:55:59 :param int body_pos: 11:55:59 Position to seek to in file-like body in the event of a retry or 11:55:59 redirect. Typically this won't need to be set because urllib3 will 11:55:59 auto-populate the value when needed. 11:55:59 """ 11:55:59 parsed_url = parse_url(url) 11:55:59 destination_scheme = parsed_url.scheme 11:55:59 11:55:59 if headers is None: 11:55:59 headers = self.headers 11:55:59 11:55:59 if not isinstance(retries, Retry): 11:55:59 retries = Retry.from_int(retries, redirect=redirect, default=self.retries) 11:55:59 11:55:59 if release_conn is None: 11:55:59 release_conn = preload_content 11:55:59 11:55:59 # Check host 11:55:59 if assert_same_host and not self.is_same_host(url): 11:55:59 raise HostChangedError(self, url, retries) 11:55:59 11:55:59 # Ensure that the URL we're connecting to is properly encoded 11:55:59 if url.startswith("/"): 11:55:59 url = to_str(_encode_target(url)) 11:55:59 else: 11:55:59 url = to_str(parsed_url.url) 11:55:59 11:55:59 conn = None 11:55:59 11:55:59 # Track whether `conn` needs to be released before 11:55:59 # returning/raising/recursing. Update this variable if necessary, and 11:55:59 # leave `release_conn` constant throughout the function. That way, if 11:55:59 # the function recurses, the original value of `release_conn` will be 11:55:59 # passed down into the recursive call, and its value will be respected. 11:55:59 # 11:55:59 # See issue #651 [1] for details. 11:55:59 # 11:55:59 # [1] 11:55:59 release_this_conn = release_conn 11:55:59 11:55:59 http_tunnel_required = connection_requires_http_tunnel( 11:55:59 self.proxy, self.proxy_config, destination_scheme 11:55:59 ) 11:55:59 11:55:59 # Merge the proxy headers. Only done when not using HTTP CONNECT. We 11:55:59 # have to copy the headers dict so we can safely change it without those 11:55:59 # changes being reflected in anyone else's copy. 11:55:59 if not http_tunnel_required: 11:55:59 headers = headers.copy() # type: ignore[attr-defined] 11:55:59 headers.update(self.proxy_headers) # type: ignore[union-attr] 11:55:59 11:55:59 # Must keep the exception bound to a separate variable or else Python 3 11:55:59 # complains about UnboundLocalError. 11:55:59 err = None 11:55:59 11:55:59 # Keep track of whether we cleanly exited the except block. This 11:55:59 # ensures we do proper cleanup in finally. 11:55:59 clean_exit = False 11:55:59 11:55:59 # Rewind body position, if needed. Record current position 11:55:59 # for future rewinds in the event of a redirect/retry. 11:55:59 body_pos = set_file_position(body, body_pos) 11:55:59 11:55:59 try: 11:55:59 # Request a connection from the queue. 11:55:59 timeout_obj = self._get_timeout(timeout) 11:55:59 conn = self._get_conn(timeout=pool_timeout) 11:55:59 11:55:59 conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] 11:55:59 11:55:59 # Is this a closed/new connection that requires CONNECT tunnelling? 
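[editor's note] For reference, the urlopen() call documented above can be issued directly against urllib3 with the same parameters captured in the locals for test_40 (a sketch only; while nothing listens on 8183 it reproduces the same MaxRetryError):

import urllib3
from urllib3.util import Retry, Timeout

pool = urllib3.HTTPConnectionPool("localhost", 8183,
                                  timeout=Timeout(connect=30, read=30))
try:
    r = pool.urlopen(
        "POST", "/rests/operations/tapi-topology:get-topology-details",
        body='{"input": {"topology-id": "747c670e-7a07-3dab-b379-5b1cd17402a3"}}',
        headers={"Content-Type": "application/json",
                 "Authorization": "Basic YWRtaW46YWRtaW4="},
        retries=Retry(0, read=False),   # same policy as requests' adapter
        redirect=False, preload_content=False)
    print(r.status, r.read())
except urllib3.exceptions.MaxRetryError as err:
    print("connect failed:", err.reason)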
11:55:59 if self.proxy is not None and http_tunnel_required and conn.is_closed: 11:55:59 try: 11:55:59 self._prepare_proxy(conn) 11:55:59 except (BaseSSLError, OSError, SocketTimeout) as e: 11:55:59 self._raise_timeout( 11:55:59 err=e, url=self.proxy.url, timeout_value=conn.timeout 11:55:59 ) 11:55:59 raise 11:55:59 11:55:59 # If we're going to release the connection in ``finally:``, then 11:55:59 # the response doesn't need to know about the connection. Otherwise 11:55:59 # it will also try to release it and we'll have a double-release 11:55:59 # mess. 11:55:59 response_conn = conn if not release_conn else None 11:55:59 11:55:59 # Make the request on the HTTPConnection object 11:55:59 > response = self._make_request( 11:55:59 conn, 11:55:59 method, 11:55:59 url, 11:55:59 timeout=timeout_obj, 11:55:59 body=body, 11:55:59 headers=headers, 11:55:59 chunked=chunked, 11:55:59 retries=retries, 11:55:59 response_conn=response_conn, 11:55:59 preload_content=preload_content, 11:55:59 decode_content=decode_content, 11:55:59 **response_kw, 11:55:59 ) 11:55:59 11:55:59 ../.tox/tests_tapi/lib/python3.11/site-packages/urllib3/connectionpool.py:787: 11:55:59 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 11:55:59 ../.tox/tests_tapi/lib/python3.11/site-packages/urllib3/connectionpool.py:493: in _make_request 11:55:59 conn.request( 11:55:59 ../.tox/tests_tapi/lib/python3.11/site-packages/urllib3/connection.py:494: in request 11:55:59 self.endheaders() 11:55:59 /opt/pyenv/versions/3.11.10/lib/python3.11/http/client.py:1298: in endheaders 11:55:59 self._send_output(message_body, encode_chunked=encode_chunked) 11:55:59 /opt/pyenv/versions/3.11.10/lib/python3.11/http/client.py:1058: in _send_output 11:55:59 self.send(msg) 11:55:59 /opt/pyenv/versions/3.11.10/lib/python3.11/http/client.py:996: in send 11:55:59 self.connect() 11:55:59 ../.tox/tests_tapi/lib/python3.11/site-packages/urllib3/connection.py:325: in connect 11:55:59 self.sock = self._new_conn() 11:55:59 ^^^^^^^^^^^^^^^^ 11:55:59 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 11:55:59 11:55:59 self = 11:55:59 11:55:59 def _new_conn(self) -> socket.socket: 11:55:59 """Establish a socket connection and set nodelay settings on it. 11:55:59 11:55:59 :return: New socket connection. 11:55:59 """ 11:55:59 try: 11:55:59 sock = connection.create_connection( 11:55:59 (self._dns_host, self.port), 11:55:59 self.timeout, 11:55:59 source_address=self.source_address, 11:55:59 socket_options=self.socket_options, 11:55:59 ) 11:55:59 except socket.gaierror as e: 11:55:59 raise NameResolutionError(self.host, self, e) from e 11:55:59 except SocketTimeout as e: 11:55:59 raise ConnectTimeoutError( 11:55:59 self, 11:55:59 f"Connection to {self.host} timed out. 
(connect timeout={self.timeout})", 11:55:59 ) from e 11:55:59 11:55:59 except OSError as e: 11:55:59 > raise NewConnectionError( 11:55:59 self, f"Failed to establish a new connection: {e}" 11:55:59 ) from e 11:55:59 E urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused 11:55:59 11:55:59 ../.tox/tests_tapi/lib/python3.11/site-packages/urllib3/connection.py:213: NewConnectionError 11:55:59 11:55:59 The above exception was the direct cause of the following exception: 11:55:59 11:55:59 self = 11:55:59 request = , stream = False 11:55:59 timeout = Timeout(connect=30, read=30, total=None), verify = True, cert = None 11:55:59 proxies = OrderedDict() 11:55:59 11:55:59 def send( 11:55:59 self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 11:55:59 ): 11:55:59 """Sends PreparedRequest object. Returns Response object. 11:55:59 11:55:59 :param request: The :class:`PreparedRequest ` being sent. 11:55:59 :param stream: (optional) Whether to stream the request content. 11:55:59 :param timeout: (optional) How long to wait for the server to send 11:55:59 data before giving up, as a float, or a :ref:`(connect timeout, 11:55:59 read timeout) ` tuple. 11:55:59 :type timeout: float or tuple or urllib3 Timeout object 11:55:59 :param verify: (optional) Either a boolean, in which case it controls whether 11:55:59 we verify the server's TLS certificate, or a string, in which case it 11:55:59 must be a path to a CA bundle to use 11:55:59 :param cert: (optional) Any user-provided SSL certificate to be trusted. 11:55:59 :param proxies: (optional) The proxies dictionary to apply to the request. 11:55:59 :rtype: requests.Response 11:55:59 """ 11:55:59 11:55:59 try: 11:55:59 conn = self.get_connection_with_tls_context( 11:55:59 request, verify, proxies=proxies, cert=cert 11:55:59 ) 11:55:59 except LocationValueError as e: 11:55:59 raise InvalidURL(e, request=request) 11:55:59 11:55:59 self.cert_verify(conn, request.url, verify, cert) 11:55:59 url = self.request_url(request, proxies) 11:55:59 self.add_headers( 11:55:59 request, 11:55:59 stream=stream, 11:55:59 timeout=timeout, 11:55:59 verify=verify, 11:55:59 cert=cert, 11:55:59 proxies=proxies, 11:55:59 ) 11:55:59 11:55:59 chunked = not (request.body is None or "Content-Length" in request.headers) 11:55:59 11:55:59 if isinstance(timeout, tuple): 11:55:59 try: 11:55:59 connect, read = timeout 11:55:59 timeout = TimeoutSauce(connect=connect, read=read) 11:55:59 except ValueError: 11:55:59 raise ValueError( 11:55:59 f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 11:55:59 f"or a single float to set both timeouts to the same value." 
11:55:59 ) 11:55:59 elif isinstance(timeout, TimeoutSauce): 11:55:59 pass 11:55:59 else: 11:55:59 timeout = TimeoutSauce(connect=timeout, read=timeout) 11:55:59 11:55:59 try: 11:55:59 > resp = conn.urlopen( 11:55:59 method=request.method, 11:55:59 url=url, 11:55:59 body=request.body, 11:55:59 headers=request.headers, 11:55:59 redirect=False, 11:55:59 assert_same_host=False, 11:55:59 preload_content=False, 11:55:59 decode_content=False, 11:55:59 retries=self.max_retries, 11:55:59 timeout=timeout, 11:55:59 chunked=chunked, 11:55:59 ) 11:55:59 11:55:59 ../.tox/tests_tapi/lib/python3.11/site-packages/requests/adapters.py:667: 11:55:59 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 11:55:59 ../.tox/tests_tapi/lib/python3.11/site-packages/urllib3/connectionpool.py:841: in urlopen 11:55:59 retries = retries.increment( 11:55:59 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 11:55:59 11:55:59 self = Retry(total=0, connect=None, read=False, redirect=None, status=None) 11:55:59 method = 'POST', url = '/rests/operations/tapi-topology:get-topology-details' 11:55:59 response = None 11:55:59 error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused') 11:55:59 _pool = 11:55:59 _stacktrace = 11:55:59 11:55:59 def increment( 11:55:59 self, 11:55:59 method: str | None = None, 11:55:59 url: str | None = None, 11:55:59 response: BaseHTTPResponse | None = None, 11:55:59 error: Exception | None = None, 11:55:59 _pool: ConnectionPool | None = None, 11:55:59 _stacktrace: TracebackType | None = None, 11:55:59 ) -> Self: 11:55:59 """Return a new Retry object with incremented retry counters. 11:55:59 11:55:59 :param response: A response object, or None, if the server did not 11:55:59 return a response. 11:55:59 :type response: :class:`~urllib3.response.BaseHTTPResponse` 11:55:59 :param Exception error: An error encountered during the request, or 11:55:59 None if the response was received successfully. 11:55:59 11:55:59 :return: A new ``Retry`` object. 11:55:59 """ 11:55:59 if self.total is False and error: 11:55:59 # Disabled, indicate to re-raise the error. 11:55:59 raise reraise(type(error), error, _stacktrace) 11:55:59 11:55:59 total = self.total 11:55:59 if total is not None: 11:55:59 total -= 1 11:55:59 11:55:59 connect = self.connect 11:55:59 read = self.read 11:55:59 redirect = self.redirect 11:55:59 status_count = self.status 11:55:59 other = self.other 11:55:59 cause = "unknown" 11:55:59 status = None 11:55:59 redirect_location = None 11:55:59 11:55:59 if error and self._is_connection_error(error): 11:55:59 # Connect retry? 11:55:59 if connect is False: 11:55:59 raise reraise(type(error), error, _stacktrace) 11:55:59 elif connect is not None: 11:55:59 connect -= 1 11:55:59 11:55:59 elif error and self._is_read_error(error): 11:55:59 # Read retry? 11:55:59 if read is False or method is None or not self._is_method_retryable(method): 11:55:59 raise reraise(type(error), error, _stacktrace) 11:55:59 elif read is not None: 11:55:59 read -= 1 11:55:59 11:55:59 elif error: 11:55:59 # Other retry? 11:55:59 if other is not None: 11:55:59 other -= 1 11:55:59 11:55:59 elif response and response.get_redirect_location(): 11:55:59 # Redirect retry? 
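[editor's note] adapter.send() above turns the tuple/float timeout into the urllib3 Timeout seen in the locals, Timeout(connect=30, read=30, total=None). The same normalisation, standalone and illustrative only (the function name is mine):

from urllib3.util import Timeout as TimeoutSauce  # the alias requests itself uses

def normalize_timeout(timeout):
    if isinstance(timeout, tuple):
        connect, read = timeout
        return TimeoutSauce(connect=connect, read=read)
    if isinstance(timeout, TimeoutSauce):
        return timeout
    return TimeoutSauce(connect=timeout, read=timeout)

print(normalize_timeout(30))        # Timeout(connect=30, read=30, total=None)
print(normalize_timeout((30, 30)))  # same result, from a (connect, read) tuple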
11:55:59 if redirect is not None: 11:55:59 redirect -= 1 11:55:59 cause = "too many redirects" 11:55:59 response_redirect_location = response.get_redirect_location() 11:55:59 if response_redirect_location: 11:55:59 redirect_location = response_redirect_location 11:55:59 status = response.status 11:55:59 11:55:59 else: 11:55:59 # Incrementing because of a server error like a 500 in 11:55:59 # status_forcelist and the given method is in the allowed_methods 11:55:59 cause = ResponseError.GENERIC_ERROR 11:55:59 if response and response.status: 11:55:59 if status_count is not None: 11:55:59 status_count -= 1 11:55:59 cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status) 11:55:59 status = response.status 11:55:59 11:55:59 history = self.history + ( 11:55:59 RequestHistory(method, url, error, status, redirect_location), 11:55:59 ) 11:55:59 11:55:59 new_retry = self.new( 11:55:59 total=total, 11:55:59 connect=connect, 11:55:59 read=read, 11:55:59 redirect=redirect, 11:55:59 status=status_count, 11:55:59 other=other, 11:55:59 history=history, 11:55:59 ) 11:55:59 11:55:59 if new_retry.is_exhausted(): 11:55:59 reason = error or ResponseError(cause) 11:55:59 > raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type] 11:55:59 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 11:55:59 E urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=8183): Max retries exceeded with url: /rests/operations/tapi-topology:get-topology-details (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 11:55:59 11:55:59 ../.tox/tests_tapi/lib/python3.11/site-packages/urllib3/util/retry.py:519: MaxRetryError 11:55:59 11:55:59 During handling of the above exception, another exception occurred: 11:55:59 11:55:59 self = 11:55:59 11:55:59 def test_40_check_tapi_topology_T0(self): 11:55:59 self.tapi_topo["topology-id"] = test_utils.T0_MULTILAYER_TOPO_UUID 11:55:59 > response = test_utils.transportpce_api_rpc_request( 11:55:59 'tapi-topology', 'get-topology-details', self.tapi_topo) 11:55:59 11:55:59 transportpce_tests/tapi/test01_abstracted_topology.py:608: 11:55:59 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 11:55:59 transportpce_tests/common/test_utils.py:729: in transportpce_api_rpc_request 11:55:59 response = post_request(url, data) 11:55:59 ^^^^^^^^^^^^^^^^^^^^^^^ 11:55:59 transportpce_tests/common/test_utils.py:142: in post_request 11:55:59 return requests.request( 11:55:59 ../.tox/tests_tapi/lib/python3.11/site-packages/requests/api.py:59: in request 11:55:59 return session.request(method=method, url=url, **kwargs) 11:55:59 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 11:55:59 ../.tox/tests_tapi/lib/python3.11/site-packages/requests/sessions.py:589: in request 11:55:59 resp = self.send(prep, **send_kwargs) 11:55:59 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 11:55:59 ../.tox/tests_tapi/lib/python3.11/site-packages/requests/sessions.py:703: in send 11:55:59 r = adapter.send(request, **kwargs) 11:55:59 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 11:55:59 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 11:55:59 11:55:59 self = 11:55:59 request = , stream = False 11:55:59 timeout = Timeout(connect=30, read=30, total=None), verify = True, cert = None 11:55:59 proxies = OrderedDict() 11:55:59 11:55:59 def send( 11:55:59 self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 11:55:59 ): 11:55:59 """Sends PreparedRequest object. 
Returns Response object. 11:55:59 11:55:59 :param request: The :class:`PreparedRequest ` being sent. 11:55:59 :param stream: (optional) Whether to stream the request content. 11:55:59 :param timeout: (optional) How long to wait for the server to send 11:55:59 data before giving up, as a float, or a :ref:`(connect timeout, 11:55:59 read timeout) ` tuple. 11:55:59 :type timeout: float or tuple or urllib3 Timeout object 11:55:59 :param verify: (optional) Either a boolean, in which case it controls whether 11:55:59 we verify the server's TLS certificate, or a string, in which case it 11:55:59 must be a path to a CA bundle to use 11:55:59 :param cert: (optional) Any user-provided SSL certificate to be trusted. 11:55:59 :param proxies: (optional) The proxies dictionary to apply to the request. 11:55:59 :rtype: requests.Response 11:55:59 """ 11:55:59 11:55:59 try: 11:55:59 conn = self.get_connection_with_tls_context( 11:55:59 request, verify, proxies=proxies, cert=cert 11:55:59 ) 11:55:59 except LocationValueError as e: 11:55:59 raise InvalidURL(e, request=request) 11:55:59 11:55:59 self.cert_verify(conn, request.url, verify, cert) 11:55:59 url = self.request_url(request, proxies) 11:55:59 self.add_headers( 11:55:59 request, 11:55:59 stream=stream, 11:55:59 timeout=timeout, 11:55:59 verify=verify, 11:55:59 cert=cert, 11:55:59 proxies=proxies, 11:55:59 ) 11:55:59 11:55:59 chunked = not (request.body is None or "Content-Length" in request.headers) 11:55:59 11:55:59 if isinstance(timeout, tuple): 11:55:59 try: 11:55:59 connect, read = timeout 11:55:59 timeout = TimeoutSauce(connect=connect, read=read) 11:55:59 except ValueError: 11:55:59 raise ValueError( 11:55:59 f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 11:55:59 f"or a single float to set both timeouts to the same value." 11:55:59 ) 11:55:59 elif isinstance(timeout, TimeoutSauce): 11:55:59 pass 11:55:59 else: 11:55:59 timeout = TimeoutSauce(connect=timeout, read=timeout) 11:55:59 11:55:59 try: 11:55:59 resp = conn.urlopen( 11:55:59 method=request.method, 11:55:59 url=url, 11:55:59 body=request.body, 11:55:59 headers=request.headers, 11:55:59 redirect=False, 11:55:59 assert_same_host=False, 11:55:59 preload_content=False, 11:55:59 decode_content=False, 11:55:59 retries=self.max_retries, 11:55:59 timeout=timeout, 11:55:59 chunked=chunked, 11:55:59 ) 11:55:59 11:55:59 except (ProtocolError, OSError) as err: 11:55:59 raise ConnectionError(err, request=request) 11:55:59 11:55:59 except MaxRetryError as e: 11:55:59 if isinstance(e.reason, ConnectTimeoutError): 11:55:59 # TODO: Remove this in 3.0.0: see #2811 11:55:59 if not isinstance(e.reason, NewConnectionError): 11:55:59 raise ConnectTimeout(e, request=request) 11:55:59 11:55:59 if isinstance(e.reason, ResponseError): 11:55:59 raise RetryError(e, request=request) 11:55:59 11:55:59 if isinstance(e.reason, _ProxyError): 11:55:59 raise ProxyError(e, request=request) 11:55:59 11:55:59 if isinstance(e.reason, _SSLError): 11:55:59 # This branch is for urllib3 v1.22 and later. 
11:55:59 raise SSLError(e, request=request) 11:55:59 11:55:59 > raise ConnectionError(e, request=request) 11:55:59 E requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=8183): Max retries exceeded with url: /rests/operations/tapi-topology:get-topology-details (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 11:55:59 11:55:59 ../.tox/tests_tapi/lib/python3.11/site-packages/requests/adapters.py:700: ConnectionError 11:55:59 ----------------------------- Captured stdout call ----------------------------- 11:55:59 execution of test_40_check_tapi_topology_T0 11:55:59 _________ TransportTapitesting.test_41_disconnect_xponders_from_roadm __________ 11:55:59 11:55:59 self = 11:55:59 11:55:59 def _new_conn(self) -> socket.socket: 11:55:59 """Establish a socket connection and set nodelay settings on it. 11:55:59 11:55:59 :return: New socket connection. 11:55:59 """ 11:55:59 try: 11:55:59 > sock = connection.create_connection( 11:55:59 (self._dns_host, self.port), 11:55:59 self.timeout, 11:55:59 source_address=self.source_address, 11:55:59 socket_options=self.socket_options, 11:55:59 ) 11:55:59 11:55:59 ../.tox/tests_tapi/lib/python3.11/site-packages/urllib3/connection.py:198: 11:55:59 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 11:55:59 ../.tox/tests_tapi/lib/python3.11/site-packages/urllib3/util/connection.py:85: in create_connection 11:55:59 raise err 11:55:59 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 11:55:59 11:55:59 address = ('localhost', 8183), timeout = 30, source_address = None 11:55:59 socket_options = [(6, 1, 1)] 11:55:59 11:55:59 def create_connection( 11:55:59 address: tuple[str, int], 11:55:59 timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 11:55:59 source_address: tuple[str, int] | None = None, 11:55:59 socket_options: _TYPE_SOCKET_OPTIONS | None = None, 11:55:59 ) -> socket.socket: 11:55:59 """Connect to *address* and return the socket object. 11:55:59 11:55:59 Convenience function. Connect to *address* (a 2-tuple ``(host, 11:55:59 port)``) and return the socket object. Passing the optional 11:55:59 *timeout* parameter will set the timeout on the socket instance 11:55:59 before attempting to connect. If no *timeout* is supplied, the 11:55:59 global default timeout setting returned by :func:`socket.getdefaulttimeout` 11:55:59 is used. If *source_address* is set it must be a tuple of (host, port) 11:55:59 for the socket to bind as a source address before making the connection. 11:55:59 An host of '' or port 0 tells the OS to use the default. 11:55:59 """ 11:55:59 11:55:59 host, port = address 11:55:59 if host.startswith("["): 11:55:59 host = host.strip("[]") 11:55:59 err = None 11:55:59 11:55:59 # Using the value from allowed_gai_family() in the context of getaddrinfo lets 11:55:59 # us select whether to work with IPv4 DNS records, IPv6 records, or both. 11:55:59 # The original create_connection function always returns all records. 11:55:59 family = allowed_gai_family() 11:55:59 11:55:59 try: 11:55:59 host.encode("idna") 11:55:59 except UnicodeError: 11:55:59 raise LocationParseError(f"'{host}', label empty or too long") from None 11:55:59 11:55:59 for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM): 11:55:59 af, socktype, proto, canonname, sa = res 11:55:59 sock = None 11:55:59 try: 11:55:59 sock = socket.socket(af, socktype, proto) 11:55:59 11:55:59 # If provided, set socket level options before connecting. 
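# Every failure in this run shares one root cause: nothing is listening on
# localhost:8183, so the sock.connect(sa) call just below fails with
# ECONNREFUSED (errno 111). urllib3 wraps that in NewConnectionError and, because
# the adapter's max_retries is Retry(total=0, read=False), the first increment()
# already exhausts the budget and raises MaxRetryError, which requests re-raises
# as requests.exceptions.ConnectionError.
# A minimal, hedged sketch of a pre-flight reachability check (the helper name is
# illustrative and not part of the transportpce test suite):
import socket

def restconf_reachable(host: str = "localhost", port: int = 8183, timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to the RESTCONF endpoint can be opened."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        # Covers ConnectionRefusedError (errno 111) as well as connect timeouts.
        return False

# Such a check could let the suite skip the remaining TAPI tests early instead of
# accumulating one ConnectionError per test case.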
11:55:59 _set_socket_options(sock, socket_options) 11:55:59 11:55:59 if timeout is not _DEFAULT_TIMEOUT: 11:55:59 sock.settimeout(timeout) 11:55:59 if source_address: 11:55:59 sock.bind(source_address) 11:55:59 > sock.connect(sa) 11:55:59 E ConnectionRefusedError: [Errno 111] Connection refused 11:55:59 11:55:59 ../.tox/tests_tapi/lib/python3.11/site-packages/urllib3/util/connection.py:73: ConnectionRefusedError 11:55:59 11:55:59 The above exception was the direct cause of the following exception: 11:55:59 11:55:59 self = 11:55:59 method = 'GET' 11:55:59 url = '/rests/data/ietf-network:networks/network=openroadm-topology?content=config' 11:55:59 body = None 11:55:59 headers = {'User-Agent': 'python-requests/2.32.4', 'Accept-Encoding': 'gzip, deflate', 'Accept': 'application/json', 'Connection': 'keep-alive', 'Content-Type': 'application/json', 'Authorization': 'Basic YWRtaW46YWRtaW4='} 11:55:59 retries = Retry(total=0, connect=None, read=False, redirect=None, status=None) 11:55:59 redirect = False, assert_same_host = False 11:55:59 timeout = Timeout(connect=30, read=30, total=None), pool_timeout = None 11:55:59 release_conn = False, chunked = False, body_pos = None, preload_content = False 11:55:59 decode_content = False, response_kw = {} 11:55:59 parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/rests/data/ietf-network:networks/network=openroadm-topology', query='content=config', fragment=None) 11:55:59 destination_scheme = None, conn = None, release_this_conn = True 11:55:59 http_tunnel_required = False, err = None, clean_exit = False 11:55:59 11:55:59 def urlopen( # type: ignore[override] 11:55:59 self, 11:55:59 method: str, 11:55:59 url: str, 11:55:59 body: _TYPE_BODY | None = None, 11:55:59 headers: typing.Mapping[str, str] | None = None, 11:55:59 retries: Retry | bool | int | None = None, 11:55:59 redirect: bool = True, 11:55:59 assert_same_host: bool = True, 11:55:59 timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 11:55:59 pool_timeout: int | None = None, 11:55:59 release_conn: bool | None = None, 11:55:59 chunked: bool = False, 11:55:59 body_pos: _TYPE_BODY_POSITION | None = None, 11:55:59 preload_content: bool = True, 11:55:59 decode_content: bool = True, 11:55:59 **response_kw: typing.Any, 11:55:59 ) -> BaseHTTPResponse: 11:55:59 """ 11:55:59 Get a connection from the pool and perform an HTTP request. This is the 11:55:59 lowest level call for making a request, so you'll need to specify all 11:55:59 the raw details. 11:55:59 11:55:59 .. note:: 11:55:59 11:55:59 More commonly, it's appropriate to use a convenience method 11:55:59 such as :meth:`request`. 11:55:59 11:55:59 .. note:: 11:55:59 11:55:59 `release_conn` will only behave as expected if 11:55:59 `preload_content=False` because we want to make 11:55:59 `preload_content=False` the default behaviour someday soon without 11:55:59 breaking backwards compatibility. 11:55:59 11:55:59 :param method: 11:55:59 HTTP request method (such as GET, POST, PUT, etc.) 11:55:59 11:55:59 :param url: 11:55:59 The URL to perform the request on. 11:55:59 11:55:59 :param body: 11:55:59 Data to send in the request body, either :class:`str`, :class:`bytes`, 11:55:59 an iterable of :class:`str`/:class:`bytes`, or a file-like object. 11:55:59 11:55:59 :param headers: 11:55:59 Dictionary of custom headers to send, such as User-Agent, 11:55:59 If-None-Match, etc. If None, pool headers are used. If provided, 11:55:59 these headers completely replace any pool-specific headers. 
11:55:59 11:55:59 :param retries: 11:55:59 Configure the number of retries to allow before raising a 11:55:59 :class:`~urllib3.exceptions.MaxRetryError` exception. 11:55:59 11:55:59 If ``None`` (default) will retry 3 times, see ``Retry.DEFAULT``. Pass a 11:55:59 :class:`~urllib3.util.retry.Retry` object for fine-grained control 11:55:59 over different types of retries. 11:55:59 Pass an integer number to retry connection errors that many times, 11:55:59 but no other types of errors. Pass zero to never retry. 11:55:59 11:55:59 If ``False``, then retries are disabled and any exception is raised 11:55:59 immediately. Also, instead of raising a MaxRetryError on redirects, 11:55:59 the redirect response will be returned. 11:55:59 11:55:59 :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. 11:55:59 11:55:59 :param redirect: 11:55:59 If True, automatically handle redirects (status codes 301, 302, 11:55:59 303, 307, 308). Each redirect counts as a retry. Disabling retries 11:55:59 will disable redirect, too. 11:55:59 11:55:59 :param assert_same_host: 11:55:59 If ``True``, will make sure that the host of the pool requests is 11:55:59 consistent else will raise HostChangedError. When ``False``, you can 11:55:59 use the pool on an HTTP proxy and request foreign hosts. 11:55:59 11:55:59 :param timeout: 11:55:59 If specified, overrides the default timeout for this one 11:55:59 request. It may be a float (in seconds) or an instance of 11:55:59 :class:`urllib3.util.Timeout`. 11:55:59 11:55:59 :param pool_timeout: 11:55:59 If set and the pool is set to block=True, then this method will 11:55:59 block for ``pool_timeout`` seconds and raise EmptyPoolError if no 11:55:59 connection is available within the time period. 11:55:59 11:55:59 :param bool preload_content: 11:55:59 If True, the response's body will be preloaded into memory. 11:55:59 11:55:59 :param bool decode_content: 11:55:59 If True, will attempt to decode the body based on the 11:55:59 'content-encoding' header. 11:55:59 11:55:59 :param release_conn: 11:55:59 If False, then the urlopen call will not release the connection 11:55:59 back into the pool once a response is received (but will release if 11:55:59 you read the entire contents of the response such as when 11:55:59 `preload_content=True`). This is useful if you're not preloading 11:55:59 the response's content immediately. You will need to call 11:55:59 ``r.release_conn()`` on the response ``r`` to return the connection 11:55:59 back into the pool. If None, it takes the value of ``preload_content`` 11:55:59 which defaults to ``True``. 11:55:59 11:55:59 :param bool chunked: 11:55:59 If True, urllib3 will send the body using chunked transfer 11:55:59 encoding. Otherwise, urllib3 will send the body using the standard 11:55:59 content-length form. Defaults to False. 11:55:59 11:55:59 :param int body_pos: 11:55:59 Position to seek to in file-like body in the event of a retry or 11:55:59 redirect. Typically this won't need to be set because urllib3 will 11:55:59 auto-populate the value when needed. 
11:55:59 """ 11:55:59 parsed_url = parse_url(url) 11:55:59 destination_scheme = parsed_url.scheme 11:55:59 11:55:59 if headers is None: 11:55:59 headers = self.headers 11:55:59 11:55:59 if not isinstance(retries, Retry): 11:55:59 retries = Retry.from_int(retries, redirect=redirect, default=self.retries) 11:55:59 11:55:59 if release_conn is None: 11:55:59 release_conn = preload_content 11:55:59 11:55:59 # Check host 11:55:59 if assert_same_host and not self.is_same_host(url): 11:55:59 raise HostChangedError(self, url, retries) 11:55:59 11:55:59 # Ensure that the URL we're connecting to is properly encoded 11:55:59 if url.startswith("/"): 11:55:59 url = to_str(_encode_target(url)) 11:55:59 else: 11:55:59 url = to_str(parsed_url.url) 11:55:59 11:55:59 conn = None 11:55:59 11:55:59 # Track whether `conn` needs to be released before 11:55:59 # returning/raising/recursing. Update this variable if necessary, and 11:55:59 # leave `release_conn` constant throughout the function. That way, if 11:55:59 # the function recurses, the original value of `release_conn` will be 11:55:59 # passed down into the recursive call, and its value will be respected. 11:55:59 # 11:55:59 # See issue #651 [1] for details. 11:55:59 # 11:55:59 # [1] 11:55:59 release_this_conn = release_conn 11:55:59 11:55:59 http_tunnel_required = connection_requires_http_tunnel( 11:55:59 self.proxy, self.proxy_config, destination_scheme 11:55:59 ) 11:55:59 11:55:59 # Merge the proxy headers. Only done when not using HTTP CONNECT. We 11:55:59 # have to copy the headers dict so we can safely change it without those 11:55:59 # changes being reflected in anyone else's copy. 11:55:59 if not http_tunnel_required: 11:55:59 headers = headers.copy() # type: ignore[attr-defined] 11:55:59 headers.update(self.proxy_headers) # type: ignore[union-attr] 11:55:59 11:55:59 # Must keep the exception bound to a separate variable or else Python 3 11:55:59 # complains about UnboundLocalError. 11:55:59 err = None 11:55:59 11:55:59 # Keep track of whether we cleanly exited the except block. This 11:55:59 # ensures we do proper cleanup in finally. 11:55:59 clean_exit = False 11:55:59 11:55:59 # Rewind body position, if needed. Record current position 11:55:59 # for future rewinds in the event of a redirect/retry. 11:55:59 body_pos = set_file_position(body, body_pos) 11:55:59 11:55:59 try: 11:55:59 # Request a connection from the queue. 11:55:59 timeout_obj = self._get_timeout(timeout) 11:55:59 conn = self._get_conn(timeout=pool_timeout) 11:55:59 11:55:59 conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] 11:55:59 11:55:59 # Is this a closed/new connection that requires CONNECT tunnelling? 11:55:59 if self.proxy is not None and http_tunnel_required and conn.is_closed: 11:55:59 try: 11:55:59 self._prepare_proxy(conn) 11:55:59 except (BaseSSLError, OSError, SocketTimeout) as e: 11:55:59 self._raise_timeout( 11:55:59 err=e, url=self.proxy.url, timeout_value=conn.timeout 11:55:59 ) 11:55:59 raise 11:55:59 11:55:59 # If we're going to release the connection in ``finally:``, then 11:55:59 # the response doesn't need to know about the connection. Otherwise 11:55:59 # it will also try to release it and we'll have a double-release 11:55:59 # mess. 
11:55:59 response_conn = conn if not release_conn else None 11:55:59 11:55:59 # Make the request on the HTTPConnection object 11:55:59 > response = self._make_request( 11:55:59 conn, 11:55:59 method, 11:55:59 url, 11:55:59 timeout=timeout_obj, 11:55:59 body=body, 11:55:59 headers=headers, 11:55:59 chunked=chunked, 11:55:59 retries=retries, 11:55:59 response_conn=response_conn, 11:55:59 preload_content=preload_content, 11:55:59 decode_content=decode_content, 11:55:59 **response_kw, 11:55:59 ) 11:55:59 11:55:59 ../.tox/tests_tapi/lib/python3.11/site-packages/urllib3/connectionpool.py:787: 11:55:59 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 11:55:59 ../.tox/tests_tapi/lib/python3.11/site-packages/urllib3/connectionpool.py:493: in _make_request 11:55:59 conn.request( 11:55:59 ../.tox/tests_tapi/lib/python3.11/site-packages/urllib3/connection.py:494: in request 11:55:59 self.endheaders() 11:55:59 /opt/pyenv/versions/3.11.10/lib/python3.11/http/client.py:1298: in endheaders 11:55:59 self._send_output(message_body, encode_chunked=encode_chunked) 11:55:59 /opt/pyenv/versions/3.11.10/lib/python3.11/http/client.py:1058: in _send_output 11:55:59 self.send(msg) 11:55:59 /opt/pyenv/versions/3.11.10/lib/python3.11/http/client.py:996: in send 11:55:59 self.connect() 11:55:59 ../.tox/tests_tapi/lib/python3.11/site-packages/urllib3/connection.py:325: in connect 11:55:59 self.sock = self._new_conn() 11:55:59 ^^^^^^^^^^^^^^^^ 11:55:59 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 11:55:59 11:55:59 self = 11:55:59 11:55:59 def _new_conn(self) -> socket.socket: 11:55:59 """Establish a socket connection and set nodelay settings on it. 11:55:59 11:55:59 :return: New socket connection. 11:55:59 """ 11:55:59 try: 11:55:59 sock = connection.create_connection( 11:55:59 (self._dns_host, self.port), 11:55:59 self.timeout, 11:55:59 source_address=self.source_address, 11:55:59 socket_options=self.socket_options, 11:55:59 ) 11:55:59 except socket.gaierror as e: 11:55:59 raise NameResolutionError(self.host, self, e) from e 11:55:59 except SocketTimeout as e: 11:55:59 raise ConnectTimeoutError( 11:55:59 self, 11:55:59 f"Connection to {self.host} timed out. (connect timeout={self.timeout})", 11:55:59 ) from e 11:55:59 11:55:59 except OSError as e: 11:55:59 > raise NewConnectionError( 11:55:59 self, f"Failed to establish a new connection: {e}" 11:55:59 ) from e 11:55:59 E urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused 11:55:59 11:55:59 ../.tox/tests_tapi/lib/python3.11/site-packages/urllib3/connection.py:213: NewConnectionError 11:55:59 11:55:59 The above exception was the direct cause of the following exception: 11:55:59 11:55:59 self = 11:55:59 request = , stream = False 11:55:59 timeout = Timeout(connect=30, read=30, total=None), verify = True, cert = None 11:55:59 proxies = OrderedDict() 11:55:59 11:55:59 def send( 11:55:59 self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 11:55:59 ): 11:55:59 """Sends PreparedRequest object. Returns Response object. 11:55:59 11:55:59 :param request: The :class:`PreparedRequest ` being sent. 11:55:59 :param stream: (optional) Whether to stream the request content. 11:55:59 :param timeout: (optional) How long to wait for the server to send 11:55:59 data before giving up, as a float, or a :ref:`(connect timeout, 11:55:59 read timeout) ` tuple. 
11:55:59 :type timeout: float or tuple or urllib3 Timeout object 11:55:59 :param verify: (optional) Either a boolean, in which case it controls whether 11:55:59 we verify the server's TLS certificate, or a string, in which case it 11:55:59 must be a path to a CA bundle to use 11:55:59 :param cert: (optional) Any user-provided SSL certificate to be trusted. 11:55:59 :param proxies: (optional) The proxies dictionary to apply to the request. 11:55:59 :rtype: requests.Response 11:55:59 """ 11:55:59 11:55:59 try: 11:55:59 conn = self.get_connection_with_tls_context( 11:55:59 request, verify, proxies=proxies, cert=cert 11:55:59 ) 11:55:59 except LocationValueError as e: 11:55:59 raise InvalidURL(e, request=request) 11:55:59 11:55:59 self.cert_verify(conn, request.url, verify, cert) 11:55:59 url = self.request_url(request, proxies) 11:55:59 self.add_headers( 11:55:59 request, 11:55:59 stream=stream, 11:55:59 timeout=timeout, 11:55:59 verify=verify, 11:55:59 cert=cert, 11:55:59 proxies=proxies, 11:55:59 ) 11:55:59 11:55:59 chunked = not (request.body is None or "Content-Length" in request.headers) 11:55:59 11:55:59 if isinstance(timeout, tuple): 11:55:59 try: 11:55:59 connect, read = timeout 11:55:59 timeout = TimeoutSauce(connect=connect, read=read) 11:55:59 except ValueError: 11:55:59 raise ValueError( 11:55:59 f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 11:55:59 f"or a single float to set both timeouts to the same value." 11:55:59 ) 11:55:59 elif isinstance(timeout, TimeoutSauce): 11:55:59 pass 11:55:59 else: 11:55:59 timeout = TimeoutSauce(connect=timeout, read=timeout) 11:55:59 11:55:59 try: 11:55:59 > resp = conn.urlopen( 11:55:59 method=request.method, 11:55:59 url=url, 11:55:59 body=request.body, 11:55:59 headers=request.headers, 11:55:59 redirect=False, 11:55:59 assert_same_host=False, 11:55:59 preload_content=False, 11:55:59 decode_content=False, 11:55:59 retries=self.max_retries, 11:55:59 timeout=timeout, 11:55:59 chunked=chunked, 11:55:59 ) 11:55:59 11:55:59 ../.tox/tests_tapi/lib/python3.11/site-packages/requests/adapters.py:667: 11:55:59 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 11:55:59 ../.tox/tests_tapi/lib/python3.11/site-packages/urllib3/connectionpool.py:841: in urlopen 11:55:59 retries = retries.increment( 11:55:59 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 11:55:59 11:55:59 self = Retry(total=0, connect=None, read=False, redirect=None, status=None) 11:55:59 method = 'GET' 11:55:59 url = '/rests/data/ietf-network:networks/network=openroadm-topology?content=config' 11:55:59 response = None 11:55:59 error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused') 11:55:59 _pool = 11:55:59 _stacktrace = 11:55:59 11:55:59 def increment( 11:55:59 self, 11:55:59 method: str | None = None, 11:55:59 url: str | None = None, 11:55:59 response: BaseHTTPResponse | None = None, 11:55:59 error: Exception | None = None, 11:55:59 _pool: ConnectionPool | None = None, 11:55:59 _stacktrace: TracebackType | None = None, 11:55:59 ) -> Self: 11:55:59 """Return a new Retry object with incremented retry counters. 11:55:59 11:55:59 :param response: A response object, or None, if the server did not 11:55:59 return a response. 11:55:59 :type response: :class:`~urllib3.response.BaseHTTPResponse` 11:55:59 :param Exception error: An error encountered during the request, or 11:55:59 None if the response was received successfully. 
11:55:59 11:55:59 :return: A new ``Retry`` object. 11:55:59 """ 11:55:59 if self.total is False and error: 11:55:59 # Disabled, indicate to re-raise the error. 11:55:59 raise reraise(type(error), error, _stacktrace) 11:55:59 11:55:59 total = self.total 11:55:59 if total is not None: 11:55:59 total -= 1 11:55:59 11:55:59 connect = self.connect 11:55:59 read = self.read 11:55:59 redirect = self.redirect 11:55:59 status_count = self.status 11:55:59 other = self.other 11:55:59 cause = "unknown" 11:55:59 status = None 11:55:59 redirect_location = None 11:55:59 11:55:59 if error and self._is_connection_error(error): 11:55:59 # Connect retry? 11:55:59 if connect is False: 11:55:59 raise reraise(type(error), error, _stacktrace) 11:55:59 elif connect is not None: 11:55:59 connect -= 1 11:55:59 11:55:59 elif error and self._is_read_error(error): 11:55:59 # Read retry? 11:55:59 if read is False or method is None or not self._is_method_retryable(method): 11:55:59 raise reraise(type(error), error, _stacktrace) 11:55:59 elif read is not None: 11:55:59 read -= 1 11:55:59 11:55:59 elif error: 11:55:59 # Other retry? 11:55:59 if other is not None: 11:55:59 other -= 1 11:55:59 11:55:59 elif response and response.get_redirect_location(): 11:55:59 # Redirect retry? 11:55:59 if redirect is not None: 11:55:59 redirect -= 1 11:55:59 cause = "too many redirects" 11:55:59 response_redirect_location = response.get_redirect_location() 11:55:59 if response_redirect_location: 11:55:59 redirect_location = response_redirect_location 11:55:59 status = response.status 11:55:59 11:55:59 else: 11:55:59 # Incrementing because of a server error like a 500 in 11:55:59 # status_forcelist and the given method is in the allowed_methods 11:55:59 cause = ResponseError.GENERIC_ERROR 11:55:59 if response and response.status: 11:55:59 if status_count is not None: 11:55:59 status_count -= 1 11:55:59 cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status) 11:55:59 status = response.status 11:55:59 11:55:59 history = self.history + ( 11:55:59 RequestHistory(method, url, error, status, redirect_location), 11:55:59 ) 11:55:59 11:55:59 new_retry = self.new( 11:55:59 total=total, 11:55:59 connect=connect, 11:55:59 read=read, 11:55:59 redirect=redirect, 11:55:59 status=status_count, 11:55:59 other=other, 11:55:59 history=history, 11:55:59 ) 11:55:59 11:55:59 if new_retry.is_exhausted(): 11:55:59 reason = error or ResponseError(cause) 11:55:59 > raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type] 11:55:59 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 11:55:59 E urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=8183): Max retries exceeded with url: /rests/data/ietf-network:networks/network=openroadm-topology?content=config (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 11:55:59 11:55:59 ../.tox/tests_tapi/lib/python3.11/site-packages/urllib3/util/retry.py:519: MaxRetryError 11:55:59 11:55:59 During handling of the above exception, another exception occurred: 11:55:59 11:55:59 self = 11:55:59 11:55:59 def test_41_disconnect_xponders_from_roadm(self): 11:55:59 > response = test_utils.get_ietf_network_request('openroadm-topology', 'config') 11:55:59 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 11:55:59 11:55:59 transportpce_tests/tapi/test01_abstracted_topology.py:619: 11:55:59 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 11:55:59 
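# test_41 fails on the same unreachable endpoint while reading the configured
# openroadm-topology. The frames below show get_ietf_network_request resolving to
# a plain requests GET against the RESTCONF data tree. A standalone sketch of the
# equivalent call, built only from values visible in this log (the function name
# is illustrative, not the suite's helper):
import requests

def get_openroadm_topology_config(base_url: str = "http://localhost:8183/rests/data") -> requests.Response:
    """GET the config view of openroadm-topology from the ietf-network data tree."""
    return requests.get(
        f"{base_url}/ietf-network:networks/network=openroadm-topology",
        params={"content": "config"},
        headers={"Accept": "application/json", "Content-Type": "application/json"},
        auth=("admin", "admin"),  # matches the Basic YWRtaW46YWRtaW4= header in the log
        timeout=(30, 30),         # matches Timeout(connect=30, read=30) above
    )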
transportpce_tests/common/test_utils.py:540: in get_ietf_network_request 11:55:59 response = get_request(url[RESTCONF_VERSION].format(*format_args)) 11:55:59 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 11:55:59 transportpce_tests/common/test_utils.py:116: in get_request 11:55:59 return requests.request( 11:55:59 ../.tox/tests_tapi/lib/python3.11/site-packages/requests/api.py:59: in request 11:55:59 return session.request(method=method, url=url, **kwargs) 11:55:59 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 11:55:59 ../.tox/tests_tapi/lib/python3.11/site-packages/requests/sessions.py:589: in request 11:55:59 resp = self.send(prep, **send_kwargs) 11:55:59 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 11:55:59 ../.tox/tests_tapi/lib/python3.11/site-packages/requests/sessions.py:703: in send 11:55:59 r = adapter.send(request, **kwargs) 11:55:59 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 11:55:59 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 11:55:59 11:55:59 self = 11:55:59 request = , stream = False 11:55:59 timeout = Timeout(connect=30, read=30, total=None), verify = True, cert = None 11:55:59 proxies = OrderedDict() 11:55:59 11:55:59 def send( 11:55:59 self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 11:55:59 ): 11:55:59 """Sends PreparedRequest object. Returns Response object. 11:55:59 11:55:59 :param request: The :class:`PreparedRequest ` being sent. 11:55:59 :param stream: (optional) Whether to stream the request content. 11:55:59 :param timeout: (optional) How long to wait for the server to send 11:55:59 data before giving up, as a float, or a :ref:`(connect timeout, 11:55:59 read timeout) ` tuple. 11:55:59 :type timeout: float or tuple or urllib3 Timeout object 11:55:59 :param verify: (optional) Either a boolean, in which case it controls whether 11:55:59 we verify the server's TLS certificate, or a string, in which case it 11:55:59 must be a path to a CA bundle to use 11:55:59 :param cert: (optional) Any user-provided SSL certificate to be trusted. 11:55:59 :param proxies: (optional) The proxies dictionary to apply to the request. 11:55:59 :rtype: requests.Response 11:55:59 """ 11:55:59 11:55:59 try: 11:55:59 conn = self.get_connection_with_tls_context( 11:55:59 request, verify, proxies=proxies, cert=cert 11:55:59 ) 11:55:59 except LocationValueError as e: 11:55:59 raise InvalidURL(e, request=request) 11:55:59 11:55:59 self.cert_verify(conn, request.url, verify, cert) 11:55:59 url = self.request_url(request, proxies) 11:55:59 self.add_headers( 11:55:59 request, 11:55:59 stream=stream, 11:55:59 timeout=timeout, 11:55:59 verify=verify, 11:55:59 cert=cert, 11:55:59 proxies=proxies, 11:55:59 ) 11:55:59 11:55:59 chunked = not (request.body is None or "Content-Length" in request.headers) 11:55:59 11:55:59 if isinstance(timeout, tuple): 11:55:59 try: 11:55:59 connect, read = timeout 11:55:59 timeout = TimeoutSauce(connect=connect, read=read) 11:55:59 except ValueError: 11:55:59 raise ValueError( 11:55:59 f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 11:55:59 f"or a single float to set both timeouts to the same value." 
11:55:59 ) 11:55:59 elif isinstance(timeout, TimeoutSauce): 11:55:59 pass 11:55:59 else: 11:55:59 timeout = TimeoutSauce(connect=timeout, read=timeout) 11:55:59 11:55:59 try: 11:55:59 resp = conn.urlopen( 11:55:59 method=request.method, 11:55:59 url=url, 11:55:59 body=request.body, 11:55:59 headers=request.headers, 11:55:59 redirect=False, 11:55:59 assert_same_host=False, 11:55:59 preload_content=False, 11:55:59 decode_content=False, 11:55:59 retries=self.max_retries, 11:55:59 timeout=timeout, 11:55:59 chunked=chunked, 11:55:59 ) 11:55:59 11:55:59 except (ProtocolError, OSError) as err: 11:55:59 raise ConnectionError(err, request=request) 11:55:59 11:55:59 except MaxRetryError as e: 11:55:59 if isinstance(e.reason, ConnectTimeoutError): 11:55:59 # TODO: Remove this in 3.0.0: see #2811 11:55:59 if not isinstance(e.reason, NewConnectionError): 11:55:59 raise ConnectTimeout(e, request=request) 11:55:59 11:55:59 if isinstance(e.reason, ResponseError): 11:55:59 raise RetryError(e, request=request) 11:55:59 11:55:59 if isinstance(e.reason, _ProxyError): 11:55:59 raise ProxyError(e, request=request) 11:55:59 11:55:59 if isinstance(e.reason, _SSLError): 11:55:59 # This branch is for urllib3 v1.22 and later. 11:55:59 raise SSLError(e, request=request) 11:55:59 11:55:59 > raise ConnectionError(e, request=request) 11:55:59 E requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=8183): Max retries exceeded with url: /rests/data/ietf-network:networks/network=openroadm-topology?content=config (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 11:55:59 11:55:59 ../.tox/tests_tapi/lib/python3.11/site-packages/requests/adapters.py:700: ConnectionError 11:55:59 ----------------------------- Captured stdout call ----------------------------- 11:55:59 execution of test_41_disconnect_xponders_from_roadm 11:55:59 _____________ TransportTapitesting.test_42_check_tapi_topology_T0 ______________ 11:55:59 11:55:59 self = 11:55:59 11:55:59 def _new_conn(self) -> socket.socket: 11:55:59 """Establish a socket connection and set nodelay settings on it. 11:55:59 11:55:59 :return: New socket connection. 11:55:59 """ 11:55:59 try: 11:55:59 > sock = connection.create_connection( 11:55:59 (self._dns_host, self.port), 11:55:59 self.timeout, 11:55:59 source_address=self.source_address, 11:55:59 socket_options=self.socket_options, 11:55:59 ) 11:55:59 11:55:59 ../.tox/tests_tapi/lib/python3.11/site-packages/urllib3/connection.py:198: 11:55:59 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 11:55:59 ../.tox/tests_tapi/lib/python3.11/site-packages/urllib3/util/connection.py:85: in create_connection 11:55:59 raise err 11:55:59 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 11:55:59 11:55:59 address = ('localhost', 8183), timeout = 30, source_address = None 11:55:59 socket_options = [(6, 1, 1)] 11:55:59 11:55:59 def create_connection( 11:55:59 address: tuple[str, int], 11:55:59 timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 11:55:59 source_address: tuple[str, int] | None = None, 11:55:59 socket_options: _TYPE_SOCKET_OPTIONS | None = None, 11:55:59 ) -> socket.socket: 11:55:59 """Connect to *address* and return the socket object. 11:55:59 11:55:59 Convenience function. Connect to *address* (a 2-tuple ``(host, 11:55:59 port)``) and return the socket object. Passing the optional 11:55:59 *timeout* parameter will set the timeout on the socket instance 11:55:59 before attempting to connect. 
If no *timeout* is supplied, the 11:55:59 global default timeout setting returned by :func:`socket.getdefaulttimeout` 11:55:59 is used. If *source_address* is set it must be a tuple of (host, port) 11:55:59 for the socket to bind as a source address before making the connection. 11:55:59 An host of '' or port 0 tells the OS to use the default. 11:55:59 """ 11:55:59 11:55:59 host, port = address 11:55:59 if host.startswith("["): 11:55:59 host = host.strip("[]") 11:55:59 err = None 11:55:59 11:55:59 # Using the value from allowed_gai_family() in the context of getaddrinfo lets 11:55:59 # us select whether to work with IPv4 DNS records, IPv6 records, or both. 11:55:59 # The original create_connection function always returns all records. 11:55:59 family = allowed_gai_family() 11:55:59 11:55:59 try: 11:55:59 host.encode("idna") 11:55:59 except UnicodeError: 11:55:59 raise LocationParseError(f"'{host}', label empty or too long") from None 11:55:59 11:55:59 for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM): 11:55:59 af, socktype, proto, canonname, sa = res 11:55:59 sock = None 11:55:59 try: 11:55:59 sock = socket.socket(af, socktype, proto) 11:55:59 11:55:59 # If provided, set socket level options before connecting. 11:55:59 _set_socket_options(sock, socket_options) 11:55:59 11:55:59 if timeout is not _DEFAULT_TIMEOUT: 11:55:59 sock.settimeout(timeout) 11:55:59 if source_address: 11:55:59 sock.bind(source_address) 11:55:59 > sock.connect(sa) 11:55:59 E ConnectionRefusedError: [Errno 111] Connection refused 11:55:59 11:55:59 ../.tox/tests_tapi/lib/python3.11/site-packages/urllib3/util/connection.py:73: ConnectionRefusedError 11:55:59 11:55:59 The above exception was the direct cause of the following exception: 11:55:59 11:55:59 self = 11:55:59 method = 'POST', url = '/rests/operations/tapi-topology:get-topology-details' 11:55:59 body = '{"input": {"topology-id": "747c670e-7a07-3dab-b379-5b1cd17402a3"}}' 11:55:59 headers = {'User-Agent': 'python-requests/2.32.4', 'Accept-Encoding': 'gzip, deflate', 'Accept': 'application/json', 'Connection': 'keep-alive', 'Content-Type': 'application/json', 'Content-Length': '66', 'Authorization': 'Basic YWRtaW46YWRtaW4='} 11:55:59 retries = Retry(total=0, connect=None, read=False, redirect=None, status=None) 11:55:59 redirect = False, assert_same_host = False 11:55:59 timeout = Timeout(connect=30, read=30, total=None), pool_timeout = None 11:55:59 release_conn = False, chunked = False, body_pos = None, preload_content = False 11:55:59 decode_content = False, response_kw = {} 11:55:59 parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/rests/operations/tapi-topology:get-topology-details', query=None, fragment=None) 11:55:59 destination_scheme = None, conn = None, release_this_conn = True 11:55:59 http_tunnel_required = False, err = None, clean_exit = False 11:55:59 11:55:59 def urlopen( # type: ignore[override] 11:55:59 self, 11:55:59 method: str, 11:55:59 url: str, 11:55:59 body: _TYPE_BODY | None = None, 11:55:59 headers: typing.Mapping[str, str] | None = None, 11:55:59 retries: Retry | bool | int | None = None, 11:55:59 redirect: bool = True, 11:55:59 assert_same_host: bool = True, 11:55:59 timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 11:55:59 pool_timeout: int | None = None, 11:55:59 release_conn: bool | None = None, 11:55:59 chunked: bool = False, 11:55:59 body_pos: _TYPE_BODY_POSITION | None = None, 11:55:59 preload_content: bool = True, 11:55:59 decode_content: bool = True, 11:55:59 **response_kw: typing.Any, 11:55:59 ) -> 
BaseHTTPResponse: 11:55:59 """ 11:55:59 Get a connection from the pool and perform an HTTP request. This is the 11:55:59 lowest level call for making a request, so you'll need to specify all 11:55:59 the raw details. 11:55:59 11:55:59 .. note:: 11:55:59 11:55:59 More commonly, it's appropriate to use a convenience method 11:55:59 such as :meth:`request`. 11:55:59 11:55:59 .. note:: 11:55:59 11:55:59 `release_conn` will only behave as expected if 11:55:59 `preload_content=False` because we want to make 11:55:59 `preload_content=False` the default behaviour someday soon without 11:55:59 breaking backwards compatibility. 11:55:59 11:55:59 :param method: 11:55:59 HTTP request method (such as GET, POST, PUT, etc.) 11:55:59 11:55:59 :param url: 11:55:59 The URL to perform the request on. 11:55:59 11:55:59 :param body: 11:55:59 Data to send in the request body, either :class:`str`, :class:`bytes`, 11:55:59 an iterable of :class:`str`/:class:`bytes`, or a file-like object. 11:55:59 11:55:59 :param headers: 11:55:59 Dictionary of custom headers to send, such as User-Agent, 11:55:59 If-None-Match, etc. If None, pool headers are used. If provided, 11:55:59 these headers completely replace any pool-specific headers. 11:55:59 11:55:59 :param retries: 11:55:59 Configure the number of retries to allow before raising a 11:55:59 :class:`~urllib3.exceptions.MaxRetryError` exception. 11:55:59 11:55:59 If ``None`` (default) will retry 3 times, see ``Retry.DEFAULT``. Pass a 11:55:59 :class:`~urllib3.util.retry.Retry` object for fine-grained control 11:55:59 over different types of retries. 11:55:59 Pass an integer number to retry connection errors that many times, 11:55:59 but no other types of errors. Pass zero to never retry. 11:55:59 11:55:59 If ``False``, then retries are disabled and any exception is raised 11:55:59 immediately. Also, instead of raising a MaxRetryError on redirects, 11:55:59 the redirect response will be returned. 11:55:59 11:55:59 :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. 11:55:59 11:55:59 :param redirect: 11:55:59 If True, automatically handle redirects (status codes 301, 302, 11:55:59 303, 307, 308). Each redirect counts as a retry. Disabling retries 11:55:59 will disable redirect, too. 11:55:59 11:55:59 :param assert_same_host: 11:55:59 If ``True``, will make sure that the host of the pool requests is 11:55:59 consistent else will raise HostChangedError. When ``False``, you can 11:55:59 use the pool on an HTTP proxy and request foreign hosts. 11:55:59 11:55:59 :param timeout: 11:55:59 If specified, overrides the default timeout for this one 11:55:59 request. It may be a float (in seconds) or an instance of 11:55:59 :class:`urllib3.util.Timeout`. 11:55:59 11:55:59 :param pool_timeout: 11:55:59 If set and the pool is set to block=True, then this method will 11:55:59 block for ``pool_timeout`` seconds and raise EmptyPoolError if no 11:55:59 connection is available within the time period. 11:55:59 11:55:59 :param bool preload_content: 11:55:59 If True, the response's body will be preloaded into memory. 11:55:59 11:55:59 :param bool decode_content: 11:55:59 If True, will attempt to decode the body based on the 11:55:59 'content-encoding' header. 11:55:59 11:55:59 :param release_conn: 11:55:59 If False, then the urlopen call will not release the connection 11:55:59 back into the pool once a response is received (but will release if 11:55:59 you read the entire contents of the response such as when 11:55:59 `preload_content=True`). 
This is useful if you're not preloading 11:55:59 the response's content immediately. You will need to call 11:55:59 ``r.release_conn()`` on the response ``r`` to return the connection 11:55:59 back into the pool. If None, it takes the value of ``preload_content`` 11:55:59 which defaults to ``True``. 11:55:59 11:55:59 :param bool chunked: 11:55:59 If True, urllib3 will send the body using chunked transfer 11:55:59 encoding. Otherwise, urllib3 will send the body using the standard 11:55:59 content-length form. Defaults to False. 11:55:59 11:55:59 :param int body_pos: 11:55:59 Position to seek to in file-like body in the event of a retry or 11:55:59 redirect. Typically this won't need to be set because urllib3 will 11:55:59 auto-populate the value when needed. 11:55:59 """ 11:55:59 parsed_url = parse_url(url) 11:55:59 destination_scheme = parsed_url.scheme 11:55:59 11:55:59 if headers is None: 11:55:59 headers = self.headers 11:55:59 11:55:59 if not isinstance(retries, Retry): 11:55:59 retries = Retry.from_int(retries, redirect=redirect, default=self.retries) 11:55:59 11:55:59 if release_conn is None: 11:55:59 release_conn = preload_content 11:55:59 11:55:59 # Check host 11:55:59 if assert_same_host and not self.is_same_host(url): 11:55:59 raise HostChangedError(self, url, retries) 11:55:59 11:55:59 # Ensure that the URL we're connecting to is properly encoded 11:55:59 if url.startswith("/"): 11:55:59 url = to_str(_encode_target(url)) 11:55:59 else: 11:55:59 url = to_str(parsed_url.url) 11:55:59 11:55:59 conn = None 11:55:59 11:55:59 # Track whether `conn` needs to be released before 11:55:59 # returning/raising/recursing. Update this variable if necessary, and 11:55:59 # leave `release_conn` constant throughout the function. That way, if 11:55:59 # the function recurses, the original value of `release_conn` will be 11:55:59 # passed down into the recursive call, and its value will be respected. 11:55:59 # 11:55:59 # See issue #651 [1] for details. 11:55:59 # 11:55:59 # [1] 11:55:59 release_this_conn = release_conn 11:55:59 11:55:59 http_tunnel_required = connection_requires_http_tunnel( 11:55:59 self.proxy, self.proxy_config, destination_scheme 11:55:59 ) 11:55:59 11:55:59 # Merge the proxy headers. Only done when not using HTTP CONNECT. We 11:55:59 # have to copy the headers dict so we can safely change it without those 11:55:59 # changes being reflected in anyone else's copy. 11:55:59 if not http_tunnel_required: 11:55:59 headers = headers.copy() # type: ignore[attr-defined] 11:55:59 headers.update(self.proxy_headers) # type: ignore[union-attr] 11:55:59 11:55:59 # Must keep the exception bound to a separate variable or else Python 3 11:55:59 # complains about UnboundLocalError. 11:55:59 err = None 11:55:59 11:55:59 # Keep track of whether we cleanly exited the except block. This 11:55:59 # ensures we do proper cleanup in finally. 11:55:59 clean_exit = False 11:55:59 11:55:59 # Rewind body position, if needed. Record current position 11:55:59 # for future rewinds in the event of a redirect/retry. 11:55:59 body_pos = set_file_position(body, body_pos) 11:55:59 11:55:59 try: 11:55:59 # Request a connection from the queue. 11:55:59 timeout_obj = self._get_timeout(timeout) 11:55:59 conn = self._get_conn(timeout=pool_timeout) 11:55:59 11:55:59 conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] 11:55:59 11:55:59 # Is this a closed/new connection that requires CONNECT tunnelling? 
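# test_42 posts the tapi-topology:get-topology-details RPC with the T0 multilayer
# topology UUID (747c670e-7a07-3dab-b379-5b1cd17402a3) shown in the request body
# above. A standalone sketch of that RPC call, assuming only the URL, credentials
# and timeout values visible in this log (the function name is illustrative):
import requests

def get_topology_details(topology_uuid: str,
                         base_url: str = "http://localhost:8183/rests/operations") -> requests.Response:
    """POST the tapi-topology:get-topology-details RPC and return the response."""
    return requests.post(
        f"{base_url}/tapi-topology:get-topology-details",
        json={"input": {"topology-id": topology_uuid}},
        headers={"Accept": "application/json", "Content-Type": "application/json"},
        auth=("admin", "admin"),
        timeout=(30, 30),
    )

# e.g. get_topology_details("747c670e-7a07-3dab-b379-5b1cd17402a3") reproduces the
# request that fails here with ConnectionError while the controller is down.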
11:55:59 if self.proxy is not None and http_tunnel_required and conn.is_closed: 11:55:59 try: 11:55:59 self._prepare_proxy(conn) 11:55:59 except (BaseSSLError, OSError, SocketTimeout) as e: 11:55:59 self._raise_timeout( 11:55:59 err=e, url=self.proxy.url, timeout_value=conn.timeout 11:55:59 ) 11:55:59 raise 11:55:59 11:55:59 # If we're going to release the connection in ``finally:``, then 11:55:59 # the response doesn't need to know about the connection. Otherwise 11:55:59 # it will also try to release it and we'll have a double-release 11:55:59 # mess. 11:55:59 response_conn = conn if not release_conn else None 11:55:59 11:55:59 # Make the request on the HTTPConnection object 11:55:59 > response = self._make_request( 11:55:59 conn, 11:55:59 method, 11:55:59 url, 11:55:59 timeout=timeout_obj, 11:55:59 body=body, 11:55:59 headers=headers, 11:55:59 chunked=chunked, 11:55:59 retries=retries, 11:55:59 response_conn=response_conn, 11:55:59 preload_content=preload_content, 11:55:59 decode_content=decode_content, 11:55:59 **response_kw, 11:55:59 ) 11:55:59 11:55:59 ../.tox/tests_tapi/lib/python3.11/site-packages/urllib3/connectionpool.py:787: 11:55:59 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 11:55:59 ../.tox/tests_tapi/lib/python3.11/site-packages/urllib3/connectionpool.py:493: in _make_request 11:55:59 conn.request( 11:55:59 ../.tox/tests_tapi/lib/python3.11/site-packages/urllib3/connection.py:494: in request 11:55:59 self.endheaders() 11:55:59 /opt/pyenv/versions/3.11.10/lib/python3.11/http/client.py:1298: in endheaders 11:55:59 self._send_output(message_body, encode_chunked=encode_chunked) 11:55:59 /opt/pyenv/versions/3.11.10/lib/python3.11/http/client.py:1058: in _send_output 11:55:59 self.send(msg) 11:55:59 /opt/pyenv/versions/3.11.10/lib/python3.11/http/client.py:996: in send 11:55:59 self.connect() 11:55:59 ../.tox/tests_tapi/lib/python3.11/site-packages/urllib3/connection.py:325: in connect 11:55:59 self.sock = self._new_conn() 11:55:59 ^^^^^^^^^^^^^^^^ 11:55:59 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 11:55:59 11:55:59 self = 11:55:59 11:55:59 def _new_conn(self) -> socket.socket: 11:55:59 """Establish a socket connection and set nodelay settings on it. 11:55:59 11:55:59 :return: New socket connection. 11:55:59 """ 11:55:59 try: 11:55:59 sock = connection.create_connection( 11:55:59 (self._dns_host, self.port), 11:55:59 self.timeout, 11:55:59 source_address=self.source_address, 11:55:59 socket_options=self.socket_options, 11:55:59 ) 11:55:59 except socket.gaierror as e: 11:55:59 raise NameResolutionError(self.host, self, e) from e 11:55:59 except SocketTimeout as e: 11:55:59 raise ConnectTimeoutError( 11:55:59 self, 11:55:59 f"Connection to {self.host} timed out. 
(connect timeout={self.timeout})", 11:55:59 ) from e 11:55:59 11:55:59 except OSError as e: 11:55:59 > raise NewConnectionError( 11:55:59 self, f"Failed to establish a new connection: {e}" 11:55:59 ) from e 11:55:59 E urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused 11:55:59 11:55:59 ../.tox/tests_tapi/lib/python3.11/site-packages/urllib3/connection.py:213: NewConnectionError 11:55:59 11:55:59 The above exception was the direct cause of the following exception: 11:55:59 11:55:59 self = 11:55:59 request = , stream = False 11:55:59 timeout = Timeout(connect=30, read=30, total=None), verify = True, cert = None 11:55:59 proxies = OrderedDict() 11:55:59 11:55:59 def send( 11:55:59 self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 11:55:59 ): 11:55:59 """Sends PreparedRequest object. Returns Response object. 11:55:59 11:55:59 :param request: The :class:`PreparedRequest ` being sent. 11:55:59 :param stream: (optional) Whether to stream the request content. 11:55:59 :param timeout: (optional) How long to wait for the server to send 11:55:59 data before giving up, as a float, or a :ref:`(connect timeout, 11:55:59 read timeout) ` tuple. 11:55:59 :type timeout: float or tuple or urllib3 Timeout object 11:55:59 :param verify: (optional) Either a boolean, in which case it controls whether 11:55:59 we verify the server's TLS certificate, or a string, in which case it 11:55:59 must be a path to a CA bundle to use 11:55:59 :param cert: (optional) Any user-provided SSL certificate to be trusted. 11:55:59 :param proxies: (optional) The proxies dictionary to apply to the request. 11:55:59 :rtype: requests.Response 11:55:59 """ 11:55:59 11:55:59 try: 11:55:59 conn = self.get_connection_with_tls_context( 11:55:59 request, verify, proxies=proxies, cert=cert 11:55:59 ) 11:55:59 except LocationValueError as e: 11:55:59 raise InvalidURL(e, request=request) 11:55:59 11:55:59 self.cert_verify(conn, request.url, verify, cert) 11:55:59 url = self.request_url(request, proxies) 11:55:59 self.add_headers( 11:55:59 request, 11:55:59 stream=stream, 11:55:59 timeout=timeout, 11:55:59 verify=verify, 11:55:59 cert=cert, 11:55:59 proxies=proxies, 11:55:59 ) 11:55:59 11:55:59 chunked = not (request.body is None or "Content-Length" in request.headers) 11:55:59 11:55:59 if isinstance(timeout, tuple): 11:55:59 try: 11:55:59 connect, read = timeout 11:55:59 timeout = TimeoutSauce(connect=connect, read=read) 11:55:59 except ValueError: 11:55:59 raise ValueError( 11:55:59 f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 11:55:59 f"or a single float to set both timeouts to the same value." 
11:55:59 ) 11:55:59 elif isinstance(timeout, TimeoutSauce): 11:55:59 pass 11:55:59 else: 11:55:59 timeout = TimeoutSauce(connect=timeout, read=timeout) 11:55:59 11:55:59 try: 11:55:59 > resp = conn.urlopen( 11:55:59 method=request.method, 11:55:59 url=url, 11:55:59 body=request.body, 11:55:59 headers=request.headers, 11:55:59 redirect=False, 11:55:59 assert_same_host=False, 11:55:59 preload_content=False, 11:55:59 decode_content=False, 11:55:59 retries=self.max_retries, 11:55:59 timeout=timeout, 11:55:59 chunked=chunked, 11:55:59 ) 11:55:59 11:55:59 ../.tox/tests_tapi/lib/python3.11/site-packages/requests/adapters.py:667: 11:55:59 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 11:55:59 ../.tox/tests_tapi/lib/python3.11/site-packages/urllib3/connectionpool.py:841: in urlopen 11:55:59 retries = retries.increment( 11:55:59 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 11:55:59 11:55:59 self = Retry(total=0, connect=None, read=False, redirect=None, status=None) 11:55:59 method = 'POST', url = '/rests/operations/tapi-topology:get-topology-details' 11:55:59 response = None 11:55:59 error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused') 11:55:59 _pool = 11:55:59 _stacktrace = 11:55:59 11:55:59 def increment( 11:55:59 self, 11:55:59 method: str | None = None, 11:55:59 url: str | None = None, 11:55:59 response: BaseHTTPResponse | None = None, 11:55:59 error: Exception | None = None, 11:55:59 _pool: ConnectionPool | None = None, 11:55:59 _stacktrace: TracebackType | None = None, 11:55:59 ) -> Self: 11:55:59 """Return a new Retry object with incremented retry counters. 11:55:59 11:55:59 :param response: A response object, or None, if the server did not 11:55:59 return a response. 11:55:59 :type response: :class:`~urllib3.response.BaseHTTPResponse` 11:55:59 :param Exception error: An error encountered during the request, or 11:55:59 None if the response was received successfully. 11:55:59 11:55:59 :return: A new ``Retry`` object. 11:55:59 """ 11:55:59 if self.total is False and error: 11:55:59 # Disabled, indicate to re-raise the error. 11:55:59 raise reraise(type(error), error, _stacktrace) 11:55:59 11:55:59 total = self.total 11:55:59 if total is not None: 11:55:59 total -= 1 11:55:59 11:55:59 connect = self.connect 11:55:59 read = self.read 11:55:59 redirect = self.redirect 11:55:59 status_count = self.status 11:55:59 other = self.other 11:55:59 cause = "unknown" 11:55:59 status = None 11:55:59 redirect_location = None 11:55:59 11:55:59 if error and self._is_connection_error(error): 11:55:59 # Connect retry? 11:55:59 if connect is False: 11:55:59 raise reraise(type(error), error, _stacktrace) 11:55:59 elif connect is not None: 11:55:59 connect -= 1 11:55:59 11:55:59 elif error and self._is_read_error(error): 11:55:59 # Read retry? 11:55:59 if read is False or method is None or not self._is_method_retryable(method): 11:55:59 raise reraise(type(error), error, _stacktrace) 11:55:59 elif read is not None: 11:55:59 read -= 1 11:55:59 11:55:59 elif error: 11:55:59 # Other retry? 11:55:59 if other is not None: 11:55:59 other -= 1 11:55:59 11:55:59 elif response and response.get_redirect_location(): 11:55:59 # Redirect retry? 
11:55:59 if redirect is not None: 11:55:59 redirect -= 1 11:55:59 cause = "too many redirects" 11:55:59 response_redirect_location = response.get_redirect_location() 11:55:59 if response_redirect_location: 11:55:59 redirect_location = response_redirect_location 11:55:59 status = response.status 11:55:59 11:55:59 else: 11:55:59 # Incrementing because of a server error like a 500 in 11:55:59 # status_forcelist and the given method is in the allowed_methods 11:55:59 cause = ResponseError.GENERIC_ERROR 11:55:59 if response and response.status: 11:55:59 if status_count is not None: 11:55:59 status_count -= 1 11:55:59 cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status) 11:55:59 status = response.status 11:55:59 11:55:59 history = self.history + ( 11:55:59 RequestHistory(method, url, error, status, redirect_location), 11:55:59 ) 11:55:59 11:55:59 new_retry = self.new( 11:55:59 total=total, 11:55:59 connect=connect, 11:55:59 read=read, 11:55:59 redirect=redirect, 11:55:59 status=status_count, 11:55:59 other=other, 11:55:59 history=history, 11:55:59 ) 11:55:59 11:55:59 if new_retry.is_exhausted(): 11:55:59 reason = error or ResponseError(cause) 11:55:59 > raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type] 11:55:59 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 11:55:59 E urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=8183): Max retries exceeded with url: /rests/operations/tapi-topology:get-topology-details (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 11:55:59 11:55:59 ../.tox/tests_tapi/lib/python3.11/site-packages/urllib3/util/retry.py:519: MaxRetryError 11:55:59 11:55:59 During handling of the above exception, another exception occurred: 11:55:59 11:55:59 self = 11:55:59 11:55:59 def test_42_check_tapi_topology_T0(self): 11:55:59 self.tapi_topo["topology-id"] = test_utils.T0_MULTILAYER_TOPO_UUID 11:55:59 > response = test_utils.transportpce_api_rpc_request( 11:55:59 'tapi-topology', 'get-topology-details', self.tapi_topo) 11:55:59 11:55:59 transportpce_tests/tapi/test01_abstracted_topology.py:630: 11:55:59 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 11:55:59 transportpce_tests/common/test_utils.py:729: in transportpce_api_rpc_request 11:55:59 response = post_request(url, data) 11:55:59 ^^^^^^^^^^^^^^^^^^^^^^^ 11:55:59 transportpce_tests/common/test_utils.py:142: in post_request 11:55:59 return requests.request( 11:55:59 ../.tox/tests_tapi/lib/python3.11/site-packages/requests/api.py:59: in request 11:55:59 return session.request(method=method, url=url, **kwargs) 11:55:59 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 11:55:59 ../.tox/tests_tapi/lib/python3.11/site-packages/requests/sessions.py:589: in request 11:55:59 resp = self.send(prep, **send_kwargs) 11:55:59 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 11:55:59 ../.tox/tests_tapi/lib/python3.11/site-packages/requests/sessions.py:703: in send 11:55:59 r = adapter.send(request, **kwargs) 11:55:59 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 11:55:59 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 11:55:59 11:55:59 self = 11:55:59 request = , stream = False 11:55:59 timeout = Timeout(connect=30, read=30, total=None), verify = True, cert = None 11:55:59 proxies = OrderedDict() 11:55:59 11:55:59 def send( 11:55:59 self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 11:55:59 ): 11:55:59 """Sends PreparedRequest object. 
Returns Response object. 11:55:59 11:55:59 :param request: The :class:`PreparedRequest ` being sent. 11:55:59 :param stream: (optional) Whether to stream the request content. 11:55:59 :param timeout: (optional) How long to wait for the server to send 11:55:59 data before giving up, as a float, or a :ref:`(connect timeout, 11:55:59 read timeout) ` tuple. 11:55:59 :type timeout: float or tuple or urllib3 Timeout object 11:55:59 :param verify: (optional) Either a boolean, in which case it controls whether 11:55:59 we verify the server's TLS certificate, or a string, in which case it 11:55:59 must be a path to a CA bundle to use 11:55:59 :param cert: (optional) Any user-provided SSL certificate to be trusted. 11:55:59 :param proxies: (optional) The proxies dictionary to apply to the request. 11:55:59 :rtype: requests.Response 11:55:59 """ 11:55:59 11:55:59 try: 11:55:59 conn = self.get_connection_with_tls_context( 11:55:59 request, verify, proxies=proxies, cert=cert 11:55:59 ) 11:55:59 except LocationValueError as e: 11:55:59 raise InvalidURL(e, request=request) 11:55:59 11:55:59 self.cert_verify(conn, request.url, verify, cert) 11:55:59 url = self.request_url(request, proxies) 11:55:59 self.add_headers( 11:55:59 request, 11:55:59 stream=stream, 11:55:59 timeout=timeout, 11:55:59 verify=verify, 11:55:59 cert=cert, 11:55:59 proxies=proxies, 11:55:59 ) 11:55:59 11:55:59 chunked = not (request.body is None or "Content-Length" in request.headers) 11:55:59 11:55:59 if isinstance(timeout, tuple): 11:55:59 try: 11:55:59 connect, read = timeout 11:55:59 timeout = TimeoutSauce(connect=connect, read=read) 11:55:59 except ValueError: 11:55:59 raise ValueError( 11:55:59 f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 11:55:59 f"or a single float to set both timeouts to the same value." 11:55:59 ) 11:55:59 elif isinstance(timeout, TimeoutSauce): 11:55:59 pass 11:55:59 else: 11:55:59 timeout = TimeoutSauce(connect=timeout, read=timeout) 11:55:59 11:55:59 try: 11:55:59 resp = conn.urlopen( 11:55:59 method=request.method, 11:55:59 url=url, 11:55:59 body=request.body, 11:55:59 headers=request.headers, 11:55:59 redirect=False, 11:55:59 assert_same_host=False, 11:55:59 preload_content=False, 11:55:59 decode_content=False, 11:55:59 retries=self.max_retries, 11:55:59 timeout=timeout, 11:55:59 chunked=chunked, 11:55:59 ) 11:55:59 11:55:59 except (ProtocolError, OSError) as err: 11:55:59 raise ConnectionError(err, request=request) 11:55:59 11:55:59 except MaxRetryError as e: 11:55:59 if isinstance(e.reason, ConnectTimeoutError): 11:55:59 # TODO: Remove this in 3.0.0: see #2811 11:55:59 if not isinstance(e.reason, NewConnectionError): 11:55:59 raise ConnectTimeout(e, request=request) 11:55:59 11:55:59 if isinstance(e.reason, ResponseError): 11:55:59 raise RetryError(e, request=request) 11:55:59 11:55:59 if isinstance(e.reason, _ProxyError): 11:55:59 raise ProxyError(e, request=request) 11:55:59 11:55:59 if isinstance(e.reason, _SSLError): 11:55:59 # This branch is for urllib3 v1.22 and later. 
11:55:59 raise SSLError(e, request=request) 11:55:59 11:55:59 > raise ConnectionError(e, request=request) 11:55:59 E requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=8183): Max retries exceeded with url: /rests/operations/tapi-topology:get-topology-details (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 11:55:59 11:55:59 ../.tox/tests_tapi/lib/python3.11/site-packages/requests/adapters.py:700: ConnectionError 11:55:59 ----------------------------- Captured stdout call ----------------------------- 11:55:59 execution of test_42_check_tapi_topology_T0 11:55:59 _____________ TransportTapitesting.test_43_get_tapi_topology_T100G _____________ 11:55:59 11:55:59 self = 11:55:59 11:55:59 def _new_conn(self) -> socket.socket: 11:55:59 """Establish a socket connection and set nodelay settings on it. 11:55:59 11:55:59 :return: New socket connection. 11:55:59 """ 11:55:59 try: 11:55:59 > sock = connection.create_connection( 11:55:59 (self._dns_host, self.port), 11:55:59 self.timeout, 11:55:59 source_address=self.source_address, 11:55:59 socket_options=self.socket_options, 11:55:59 ) 11:55:59 11:55:59 ../.tox/tests_tapi/lib/python3.11/site-packages/urllib3/connection.py:198: 11:55:59 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 11:55:59 ../.tox/tests_tapi/lib/python3.11/site-packages/urllib3/util/connection.py:85: in create_connection 11:55:59 raise err 11:55:59 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 11:55:59 11:55:59 address = ('localhost', 8183), timeout = 30, source_address = None 11:55:59 socket_options = [(6, 1, 1)] 11:55:59 11:55:59 def create_connection( 11:55:59 address: tuple[str, int], 11:55:59 timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 11:55:59 source_address: tuple[str, int] | None = None, 11:55:59 socket_options: _TYPE_SOCKET_OPTIONS | None = None, 11:55:59 ) -> socket.socket: 11:55:59 """Connect to *address* and return the socket object. 11:55:59 11:55:59 Convenience function. Connect to *address* (a 2-tuple ``(host, 11:55:59 port)``) and return the socket object. Passing the optional 11:55:59 *timeout* parameter will set the timeout on the socket instance 11:55:59 before attempting to connect. If no *timeout* is supplied, the 11:55:59 global default timeout setting returned by :func:`socket.getdefaulttimeout` 11:55:59 is used. If *source_address* is set it must be a tuple of (host, port) 11:55:59 for the socket to bind as a source address before making the connection. 11:55:59 An host of '' or port 0 tells the OS to use the default. 11:55:59 """ 11:55:59 11:55:59 host, port = address 11:55:59 if host.startswith("["): 11:55:59 host = host.strip("[]") 11:55:59 err = None 11:55:59 11:55:59 # Using the value from allowed_gai_family() in the context of getaddrinfo lets 11:55:59 # us select whether to work with IPv4 DNS records, IPv6 records, or both. 11:55:59 # The original create_connection function always returns all records. 11:55:59 family = allowed_gai_family() 11:55:59 11:55:59 try: 11:55:59 host.encode("idna") 11:55:59 except UnicodeError: 11:55:59 raise LocationParseError(f"'{host}', label empty or too long") from None 11:55:59 11:55:59 for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM): 11:55:59 af, socktype, proto, canonname, sa = res 11:55:59 sock = None 11:55:59 try: 11:55:59 sock = socket.socket(af, socktype, proto) 11:55:59 11:55:59 # If provided, set socket level options before connecting. 
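The `create_connection()` source quoted in this traceback ultimately boils down to a plain `socket.connect()`, which is why every failure in this run surfaces as `[Errno 111] Connection refused`: nothing is listening on localhost:8183, the RESTCONF port the TAPI tests target. A minimal sketch of the same check with only the standard library, useful for confirming whether the controller is actually up before re-running the suite (the helper name and values are illustrative, not part of the test harness):

```python
import socket

# Hypothetical pre-flight check: attempt the same TCP connection the tests make.
# Port 8183 is the RESTCONF endpoint shown in the tracebacks of this log.
def controller_is_listening(host: str = "localhost", port: int = 8183,
                            timeout: float = 5.0) -> bool:
    try:
        # socket.create_connection() is the stdlib call urllib3's helper wraps;
        # it raises ConnectionRefusedError when no process listens on the port.
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

if __name__ == "__main__":
    print("controller reachable:", controller_is_listening())
```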
11:55:59 _set_socket_options(sock, socket_options) 11:55:59 11:55:59 if timeout is not _DEFAULT_TIMEOUT: 11:55:59 sock.settimeout(timeout) 11:55:59 if source_address: 11:55:59 sock.bind(source_address) 11:55:59 > sock.connect(sa) 11:55:59 E ConnectionRefusedError: [Errno 111] Connection refused 11:55:59 11:55:59 ../.tox/tests_tapi/lib/python3.11/site-packages/urllib3/util/connection.py:73: ConnectionRefusedError 11:55:59 11:55:59 The above exception was the direct cause of the following exception: 11:55:59 11:55:59 self = 11:55:59 method = 'POST', url = '/rests/operations/tapi-topology:get-topology-details' 11:55:59 body = '{"input": {"topology-id": "cf51c729-3699-308a-a7d0-594c6a62ebbb"}}' 11:55:59 headers = {'User-Agent': 'python-requests/2.32.4', 'Accept-Encoding': 'gzip, deflate', 'Accept': 'application/json', 'Connection': 'keep-alive', 'Content-Type': 'application/json', 'Content-Length': '66', 'Authorization': 'Basic YWRtaW46YWRtaW4='} 11:55:59 retries = Retry(total=0, connect=None, read=False, redirect=None, status=None) 11:55:59 redirect = False, assert_same_host = False 11:55:59 timeout = Timeout(connect=30, read=30, total=None), pool_timeout = None 11:55:59 release_conn = False, chunked = False, body_pos = None, preload_content = False 11:55:59 decode_content = False, response_kw = {} 11:55:59 parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/rests/operations/tapi-topology:get-topology-details', query=None, fragment=None) 11:55:59 destination_scheme = None, conn = None, release_this_conn = True 11:55:59 http_tunnel_required = False, err = None, clean_exit = False 11:55:59 11:55:59 def urlopen( # type: ignore[override] 11:55:59 self, 11:55:59 method: str, 11:55:59 url: str, 11:55:59 body: _TYPE_BODY | None = None, 11:55:59 headers: typing.Mapping[str, str] | None = None, 11:55:59 retries: Retry | bool | int | None = None, 11:55:59 redirect: bool = True, 11:55:59 assert_same_host: bool = True, 11:55:59 timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 11:55:59 pool_timeout: int | None = None, 11:55:59 release_conn: bool | None = None, 11:55:59 chunked: bool = False, 11:55:59 body_pos: _TYPE_BODY_POSITION | None = None, 11:55:59 preload_content: bool = True, 11:55:59 decode_content: bool = True, 11:55:59 **response_kw: typing.Any, 11:55:59 ) -> BaseHTTPResponse: 11:55:59 """ 11:55:59 Get a connection from the pool and perform an HTTP request. This is the 11:55:59 lowest level call for making a request, so you'll need to specify all 11:55:59 the raw details. 11:55:59 11:55:59 .. note:: 11:55:59 11:55:59 More commonly, it's appropriate to use a convenience method 11:55:59 such as :meth:`request`. 11:55:59 11:55:59 .. note:: 11:55:59 11:55:59 `release_conn` will only behave as expected if 11:55:59 `preload_content=False` because we want to make 11:55:59 `preload_content=False` the default behaviour someday soon without 11:55:59 breaking backwards compatibility. 11:55:59 11:55:59 :param method: 11:55:59 HTTP request method (such as GET, POST, PUT, etc.) 11:55:59 11:55:59 :param url: 11:55:59 The URL to perform the request on. 11:55:59 11:55:59 :param body: 11:55:59 Data to send in the request body, either :class:`str`, :class:`bytes`, 11:55:59 an iterable of :class:`str`/:class:`bytes`, or a file-like object. 11:55:59 11:55:59 :param headers: 11:55:59 Dictionary of custom headers to send, such as User-Agent, 11:55:59 If-None-Match, etc. If None, pool headers are used. If provided, 11:55:59 these headers completely replace any pool-specific headers. 
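The locals dumped above show exactly what the harness was sending when the connection was refused: a POST to `/rests/operations/tapi-topology:get-topology-details` with a JSON `topology-id` input, JSON Accept/Content-Type headers and Basic auth (admin/admin). The real harness goes through `test_utils.transportpce_api_rpc_request` and `post_request`; the sketch below is only an equivalent direct call with `requests`, with the URL layout and credentials taken from the request shown in the traceback:

```python
import requests

# Sketch of the request the test was attempting when the controller was down.
# Port 8183, admin/admin credentials and the RPC path come from the traceback
# locals above; this function is illustrative, not the harness API.
def get_topology_details(topology_id: str) -> requests.Response:
    url = "http://localhost:8183/rests/operations/tapi-topology:get-topology-details"
    return requests.post(
        url,
        json={"input": {"topology-id": topology_id}},
        headers={"Content-Type": "application/json", "Accept": "application/json"},
        auth=("admin", "admin"),
        timeout=(30, 30),  # matches Timeout(connect=30, read=30) in the locals
    )
```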
11:55:59 11:55:59 :param retries: 11:55:59 Configure the number of retries to allow before raising a 11:55:59 :class:`~urllib3.exceptions.MaxRetryError` exception. 11:55:59 11:55:59 If ``None`` (default) will retry 3 times, see ``Retry.DEFAULT``. Pass a 11:55:59 :class:`~urllib3.util.retry.Retry` object for fine-grained control 11:55:59 over different types of retries. 11:55:59 Pass an integer number to retry connection errors that many times, 11:55:59 but no other types of errors. Pass zero to never retry. 11:55:59 11:55:59 If ``False``, then retries are disabled and any exception is raised 11:55:59 immediately. Also, instead of raising a MaxRetryError on redirects, 11:55:59 the redirect response will be returned. 11:55:59 11:55:59 :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. 11:55:59 11:55:59 :param redirect: 11:55:59 If True, automatically handle redirects (status codes 301, 302, 11:55:59 303, 307, 308). Each redirect counts as a retry. Disabling retries 11:55:59 will disable redirect, too. 11:55:59 11:55:59 :param assert_same_host: 11:55:59 If ``True``, will make sure that the host of the pool requests is 11:55:59 consistent else will raise HostChangedError. When ``False``, you can 11:55:59 use the pool on an HTTP proxy and request foreign hosts. 11:55:59 11:55:59 :param timeout: 11:55:59 If specified, overrides the default timeout for this one 11:55:59 request. It may be a float (in seconds) or an instance of 11:55:59 :class:`urllib3.util.Timeout`. 11:55:59 11:55:59 :param pool_timeout: 11:55:59 If set and the pool is set to block=True, then this method will 11:55:59 block for ``pool_timeout`` seconds and raise EmptyPoolError if no 11:55:59 connection is available within the time period. 11:55:59 11:55:59 :param bool preload_content: 11:55:59 If True, the response's body will be preloaded into memory. 11:55:59 11:55:59 :param bool decode_content: 11:55:59 If True, will attempt to decode the body based on the 11:55:59 'content-encoding' header. 11:55:59 11:55:59 :param release_conn: 11:55:59 If False, then the urlopen call will not release the connection 11:55:59 back into the pool once a response is received (but will release if 11:55:59 you read the entire contents of the response such as when 11:55:59 `preload_content=True`). This is useful if you're not preloading 11:55:59 the response's content immediately. You will need to call 11:55:59 ``r.release_conn()`` on the response ``r`` to return the connection 11:55:59 back into the pool. If None, it takes the value of ``preload_content`` 11:55:59 which defaults to ``True``. 11:55:59 11:55:59 :param bool chunked: 11:55:59 If True, urllib3 will send the body using chunked transfer 11:55:59 encoding. Otherwise, urllib3 will send the body using the standard 11:55:59 content-length form. Defaults to False. 11:55:59 11:55:59 :param int body_pos: 11:55:59 Position to seek to in file-like body in the event of a retry or 11:55:59 redirect. Typically this won't need to be set because urllib3 will 11:55:59 auto-populate the value when needed. 
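The `retries` parameter documented above is what produces every `MaxRetryError` in this run: requests passes its adapter's `max_retries` straight into `urlopen()`, and the value visible in the locals, `Retry(total=0, connect=None, read=False, ...)`, is requests' default of zero retries, so the very first connection error exhausts the budget. A small sketch, assuming one wanted a client to retry transient connection errors instead of failing immediately, using the public requests/urllib3 APIs (policy values are illustrative):

```python
import requests
from requests.adapters import HTTPAdapter
from urllib3.util.retry import Retry

# Sketch only: mount an adapter with an explicit Retry policy instead of the
# default Retry(total=0, ...) visible in the traceback locals above.
retry_policy = Retry(
    total=3,                              # overall attempt budget
    connect=3,                            # retry refused/unreachable connections
    backoff_factor=0.5,                   # exponential backoff between attempts
    allowed_methods=["GET", "POST", "DELETE"],
)

session = requests.Session()
session.mount("http://", HTTPAdapter(max_retries=retry_policy))
# session.post(...) would now retry connection errors before raising ConnectionError.
```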
11:55:59 """ 11:55:59 parsed_url = parse_url(url) 11:55:59 destination_scheme = parsed_url.scheme 11:55:59 11:55:59 if headers is None: 11:55:59 headers = self.headers 11:55:59 11:55:59 if not isinstance(retries, Retry): 11:55:59 retries = Retry.from_int(retries, redirect=redirect, default=self.retries) 11:55:59 11:55:59 if release_conn is None: 11:55:59 release_conn = preload_content 11:55:59 11:55:59 # Check host 11:55:59 if assert_same_host and not self.is_same_host(url): 11:55:59 raise HostChangedError(self, url, retries) 11:55:59 11:55:59 # Ensure that the URL we're connecting to is properly encoded 11:55:59 if url.startswith("/"): 11:55:59 url = to_str(_encode_target(url)) 11:55:59 else: 11:55:59 url = to_str(parsed_url.url) 11:55:59 11:55:59 conn = None 11:55:59 11:55:59 # Track whether `conn` needs to be released before 11:55:59 # returning/raising/recursing. Update this variable if necessary, and 11:55:59 # leave `release_conn` constant throughout the function. That way, if 11:55:59 # the function recurses, the original value of `release_conn` will be 11:55:59 # passed down into the recursive call, and its value will be respected. 11:55:59 # 11:55:59 # See issue #651 [1] for details. 11:55:59 # 11:55:59 # [1] 11:55:59 release_this_conn = release_conn 11:55:59 11:55:59 http_tunnel_required = connection_requires_http_tunnel( 11:55:59 self.proxy, self.proxy_config, destination_scheme 11:55:59 ) 11:55:59 11:55:59 # Merge the proxy headers. Only done when not using HTTP CONNECT. We 11:55:59 # have to copy the headers dict so we can safely change it without those 11:55:59 # changes being reflected in anyone else's copy. 11:55:59 if not http_tunnel_required: 11:55:59 headers = headers.copy() # type: ignore[attr-defined] 11:55:59 headers.update(self.proxy_headers) # type: ignore[union-attr] 11:55:59 11:55:59 # Must keep the exception bound to a separate variable or else Python 3 11:55:59 # complains about UnboundLocalError. 11:55:59 err = None 11:55:59 11:55:59 # Keep track of whether we cleanly exited the except block. This 11:55:59 # ensures we do proper cleanup in finally. 11:55:59 clean_exit = False 11:55:59 11:55:59 # Rewind body position, if needed. Record current position 11:55:59 # for future rewinds in the event of a redirect/retry. 11:55:59 body_pos = set_file_position(body, body_pos) 11:55:59 11:55:59 try: 11:55:59 # Request a connection from the queue. 11:55:59 timeout_obj = self._get_timeout(timeout) 11:55:59 conn = self._get_conn(timeout=pool_timeout) 11:55:59 11:55:59 conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] 11:55:59 11:55:59 # Is this a closed/new connection that requires CONNECT tunnelling? 11:55:59 if self.proxy is not None and http_tunnel_required and conn.is_closed: 11:55:59 try: 11:55:59 self._prepare_proxy(conn) 11:55:59 except (BaseSSLError, OSError, SocketTimeout) as e: 11:55:59 self._raise_timeout( 11:55:59 err=e, url=self.proxy.url, timeout_value=conn.timeout 11:55:59 ) 11:55:59 raise 11:55:59 11:55:59 # If we're going to release the connection in ``finally:``, then 11:55:59 # the response doesn't need to know about the connection. Otherwise 11:55:59 # it will also try to release it and we'll have a double-release 11:55:59 # mess. 
11:55:59 response_conn = conn if not release_conn else None 11:55:59 11:55:59 # Make the request on the HTTPConnection object 11:55:59 > response = self._make_request( 11:55:59 conn, 11:55:59 method, 11:55:59 url, 11:55:59 timeout=timeout_obj, 11:55:59 body=body, 11:55:59 headers=headers, 11:55:59 chunked=chunked, 11:55:59 retries=retries, 11:55:59 response_conn=response_conn, 11:55:59 preload_content=preload_content, 11:55:59 decode_content=decode_content, 11:55:59 **response_kw, 11:55:59 ) 11:55:59 11:55:59 ../.tox/tests_tapi/lib/python3.11/site-packages/urllib3/connectionpool.py:787: 11:55:59 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 11:55:59 ../.tox/tests_tapi/lib/python3.11/site-packages/urllib3/connectionpool.py:493: in _make_request 11:55:59 conn.request( 11:55:59 ../.tox/tests_tapi/lib/python3.11/site-packages/urllib3/connection.py:494: in request 11:55:59 self.endheaders() 11:55:59 /opt/pyenv/versions/3.11.10/lib/python3.11/http/client.py:1298: in endheaders 11:55:59 self._send_output(message_body, encode_chunked=encode_chunked) 11:55:59 /opt/pyenv/versions/3.11.10/lib/python3.11/http/client.py:1058: in _send_output 11:55:59 self.send(msg) 11:55:59 /opt/pyenv/versions/3.11.10/lib/python3.11/http/client.py:996: in send 11:55:59 self.connect() 11:55:59 ../.tox/tests_tapi/lib/python3.11/site-packages/urllib3/connection.py:325: in connect 11:55:59 self.sock = self._new_conn() 11:55:59 ^^^^^^^^^^^^^^^^ 11:55:59 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 11:55:59 11:55:59 self = 11:55:59 11:55:59 def _new_conn(self) -> socket.socket: 11:55:59 """Establish a socket connection and set nodelay settings on it. 11:55:59 11:55:59 :return: New socket connection. 11:55:59 """ 11:55:59 try: 11:55:59 sock = connection.create_connection( 11:55:59 (self._dns_host, self.port), 11:55:59 self.timeout, 11:55:59 source_address=self.source_address, 11:55:59 socket_options=self.socket_options, 11:55:59 ) 11:55:59 except socket.gaierror as e: 11:55:59 raise NameResolutionError(self.host, self, e) from e 11:55:59 except SocketTimeout as e: 11:55:59 raise ConnectTimeoutError( 11:55:59 self, 11:55:59 f"Connection to {self.host} timed out. (connect timeout={self.timeout})", 11:55:59 ) from e 11:55:59 11:55:59 except OSError as e: 11:55:59 > raise NewConnectionError( 11:55:59 self, f"Failed to establish a new connection: {e}" 11:55:59 ) from e 11:55:59 E urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused 11:55:59 11:55:59 ../.tox/tests_tapi/lib/python3.11/site-packages/urllib3/connection.py:213: NewConnectionError 11:55:59 11:55:59 The above exception was the direct cause of the following exception: 11:55:59 11:55:59 self = 11:55:59 request = , stream = False 11:55:59 timeout = Timeout(connect=30, read=30, total=None), verify = True, cert = None 11:55:59 proxies = OrderedDict() 11:55:59 11:55:59 def send( 11:55:59 self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 11:55:59 ): 11:55:59 """Sends PreparedRequest object. Returns Response object. 11:55:59 11:55:59 :param request: The :class:`PreparedRequest ` being sent. 11:55:59 :param stream: (optional) Whether to stream the request content. 11:55:59 :param timeout: (optional) How long to wait for the server to send 11:55:59 data before giving up, as a float, or a :ref:`(connect timeout, 11:55:59 read timeout) ` tuple. 
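The `timeout` parameter described just above, and the `Timeout(connect=30, read=30, total=None)` value repeated in the locals, come from the `(connect, read)` normalisation in `HTTPAdapter.send()`: requests accepts either a single float or a tuple and converts it into a urllib3 `Timeout`. A short, purely illustrative sketch of the caller-side idiom and of the exceptions each phase can raise:

```python
import requests

# Illustrative only: the (connect, read) tuple reproduces the
# Timeout(connect=30, read=30, total=None) value seen in this log.
url = "http://localhost:8183/rests/operations/tapi-topology:get-topology-details"
try:
    requests.post(url, json={"input": {"topology-id": "..."}},
                  auth=("admin", "admin"), timeout=(30, 30))
except requests.exceptions.ConnectTimeout:
    print("connect phase exceeded 30 s")
except requests.exceptions.ReadTimeout:
    print("connected, but no response within 30 s")
except requests.exceptions.ConnectionError as exc:
    print("connection failed outright (e.g. refused):", exc)
```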
11:55:59 :type timeout: float or tuple or urllib3 Timeout object 11:55:59 :param verify: (optional) Either a boolean, in which case it controls whether 11:55:59 we verify the server's TLS certificate, or a string, in which case it 11:55:59 must be a path to a CA bundle to use 11:55:59 :param cert: (optional) Any user-provided SSL certificate to be trusted. 11:55:59 :param proxies: (optional) The proxies dictionary to apply to the request. 11:55:59 :rtype: requests.Response 11:55:59 """ 11:55:59 11:55:59 try: 11:55:59 conn = self.get_connection_with_tls_context( 11:55:59 request, verify, proxies=proxies, cert=cert 11:55:59 ) 11:55:59 except LocationValueError as e: 11:55:59 raise InvalidURL(e, request=request) 11:55:59 11:55:59 self.cert_verify(conn, request.url, verify, cert) 11:55:59 url = self.request_url(request, proxies) 11:55:59 self.add_headers( 11:55:59 request, 11:55:59 stream=stream, 11:55:59 timeout=timeout, 11:55:59 verify=verify, 11:55:59 cert=cert, 11:55:59 proxies=proxies, 11:55:59 ) 11:55:59 11:55:59 chunked = not (request.body is None or "Content-Length" in request.headers) 11:55:59 11:55:59 if isinstance(timeout, tuple): 11:55:59 try: 11:55:59 connect, read = timeout 11:55:59 timeout = TimeoutSauce(connect=connect, read=read) 11:55:59 except ValueError: 11:55:59 raise ValueError( 11:55:59 f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 11:55:59 f"or a single float to set both timeouts to the same value." 11:55:59 ) 11:55:59 elif isinstance(timeout, TimeoutSauce): 11:55:59 pass 11:55:59 else: 11:55:59 timeout = TimeoutSauce(connect=timeout, read=timeout) 11:55:59 11:55:59 try: 11:55:59 > resp = conn.urlopen( 11:55:59 method=request.method, 11:55:59 url=url, 11:55:59 body=request.body, 11:55:59 headers=request.headers, 11:55:59 redirect=False, 11:55:59 assert_same_host=False, 11:55:59 preload_content=False, 11:55:59 decode_content=False, 11:55:59 retries=self.max_retries, 11:55:59 timeout=timeout, 11:55:59 chunked=chunked, 11:55:59 ) 11:55:59 11:55:59 ../.tox/tests_tapi/lib/python3.11/site-packages/requests/adapters.py:667: 11:55:59 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 11:55:59 ../.tox/tests_tapi/lib/python3.11/site-packages/urllib3/connectionpool.py:841: in urlopen 11:55:59 retries = retries.increment( 11:55:59 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 11:55:59 11:55:59 self = Retry(total=0, connect=None, read=False, redirect=None, status=None) 11:55:59 method = 'POST', url = '/rests/operations/tapi-topology:get-topology-details' 11:55:59 response = None 11:55:59 error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused') 11:55:59 _pool = 11:55:59 _stacktrace = 11:55:59 11:55:59 def increment( 11:55:59 self, 11:55:59 method: str | None = None, 11:55:59 url: str | None = None, 11:55:59 response: BaseHTTPResponse | None = None, 11:55:59 error: Exception | None = None, 11:55:59 _pool: ConnectionPool | None = None, 11:55:59 _stacktrace: TracebackType | None = None, 11:55:59 ) -> Self: 11:55:59 """Return a new Retry object with incremented retry counters. 11:55:59 11:55:59 :param response: A response object, or None, if the server did not 11:55:59 return a response. 11:55:59 :type response: :class:`~urllib3.response.BaseHTTPResponse` 11:55:59 :param Exception error: An error encountered during the request, or 11:55:59 None if the response was received successfully. 11:55:59 11:55:59 :return: A new ``Retry`` object. 
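The `increment()` docstring above describes the mechanism behind each `MaxRetryError` in this run: every failed attempt returns a new `Retry` with decremented counters, and once any tracked counter drops below zero the next call raises. A minimal illustration against the public urllib3 API (the method, URL and error are illustrative, not taken from the harness):

```python
from urllib3.exceptions import ConnectTimeoutError, MaxRetryError
from urllib3.util.retry import Retry

retries = Retry(total=1)                      # one retry allowed
retries = retries.increment(
    method="POST", url="/rpc", error=ConnectTimeoutError()
)
print(retries.total, retries.is_exhausted())  # 0 False -- budget spent, not exceeded

try:
    retries.increment(method="POST", url="/rpc", error=ConnectTimeoutError())
except MaxRetryError as exc:                  # total drops to -1 -> exhausted
    print("gave up:", exc.reason)
```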
11:55:59 """ 11:55:59 if self.total is False and error: 11:55:59 # Disabled, indicate to re-raise the error. 11:55:59 raise reraise(type(error), error, _stacktrace) 11:55:59 11:55:59 total = self.total 11:55:59 if total is not None: 11:55:59 total -= 1 11:55:59 11:55:59 connect = self.connect 11:55:59 read = self.read 11:55:59 redirect = self.redirect 11:55:59 status_count = self.status 11:55:59 other = self.other 11:55:59 cause = "unknown" 11:55:59 status = None 11:55:59 redirect_location = None 11:55:59 11:55:59 if error and self._is_connection_error(error): 11:55:59 # Connect retry? 11:55:59 if connect is False: 11:55:59 raise reraise(type(error), error, _stacktrace) 11:55:59 elif connect is not None: 11:55:59 connect -= 1 11:55:59 11:55:59 elif error and self._is_read_error(error): 11:55:59 # Read retry? 11:55:59 if read is False or method is None or not self._is_method_retryable(method): 11:55:59 raise reraise(type(error), error, _stacktrace) 11:55:59 elif read is not None: 11:55:59 read -= 1 11:55:59 11:55:59 elif error: 11:55:59 # Other retry? 11:55:59 if other is not None: 11:55:59 other -= 1 11:55:59 11:55:59 elif response and response.get_redirect_location(): 11:55:59 # Redirect retry? 11:55:59 if redirect is not None: 11:55:59 redirect -= 1 11:55:59 cause = "too many redirects" 11:55:59 response_redirect_location = response.get_redirect_location() 11:55:59 if response_redirect_location: 11:55:59 redirect_location = response_redirect_location 11:55:59 status = response.status 11:55:59 11:55:59 else: 11:55:59 # Incrementing because of a server error like a 500 in 11:55:59 # status_forcelist and the given method is in the allowed_methods 11:55:59 cause = ResponseError.GENERIC_ERROR 11:55:59 if response and response.status: 11:55:59 if status_count is not None: 11:55:59 status_count -= 1 11:55:59 cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status) 11:55:59 status = response.status 11:55:59 11:55:59 history = self.history + ( 11:55:59 RequestHistory(method, url, error, status, redirect_location), 11:55:59 ) 11:55:59 11:55:59 new_retry = self.new( 11:55:59 total=total, 11:55:59 connect=connect, 11:55:59 read=read, 11:55:59 redirect=redirect, 11:55:59 status=status_count, 11:55:59 other=other, 11:55:59 history=history, 11:55:59 ) 11:55:59 11:55:59 if new_retry.is_exhausted(): 11:55:59 reason = error or ResponseError(cause) 11:55:59 > raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type] 11:55:59 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 11:55:59 E urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=8183): Max retries exceeded with url: /rests/operations/tapi-topology:get-topology-details (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 11:55:59 11:55:59 ../.tox/tests_tapi/lib/python3.11/site-packages/urllib3/util/retry.py:519: MaxRetryError 11:55:59 11:55:59 During handling of the above exception, another exception occurred: 11:55:59 11:55:59 self = 11:55:59 11:55:59 def test_43_get_tapi_topology_T100G(self): 11:55:59 self.tapi_topo["topology-id"] = test_utils.T100GE_UUID 11:55:59 > response = test_utils.transportpce_api_rpc_request( 11:55:59 'tapi-topology', 'get-topology-details', self.tapi_topo) 11:55:59 11:55:59 transportpce_tests/tapi/test01_abstracted_topology.py:644: 11:55:59 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 11:55:59 transportpce_tests/common/test_utils.py:729: in transportpce_api_rpc_request 11:55:59 
response = post_request(url, data) 11:55:59 ^^^^^^^^^^^^^^^^^^^^^^^ 11:55:59 transportpce_tests/common/test_utils.py:142: in post_request 11:55:59 return requests.request( 11:55:59 ../.tox/tests_tapi/lib/python3.11/site-packages/requests/api.py:59: in request 11:55:59 return session.request(method=method, url=url, **kwargs) 11:55:59 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 11:55:59 ../.tox/tests_tapi/lib/python3.11/site-packages/requests/sessions.py:589: in request 11:55:59 resp = self.send(prep, **send_kwargs) 11:55:59 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 11:55:59 ../.tox/tests_tapi/lib/python3.11/site-packages/requests/sessions.py:703: in send 11:55:59 r = adapter.send(request, **kwargs) 11:55:59 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 11:55:59 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 11:55:59 11:55:59 self = 11:55:59 request = , stream = False 11:55:59 timeout = Timeout(connect=30, read=30, total=None), verify = True, cert = None 11:55:59 proxies = OrderedDict() 11:55:59 11:55:59 def send( 11:55:59 self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 11:55:59 ): 11:55:59 """Sends PreparedRequest object. Returns Response object. 11:55:59 11:55:59 :param request: The :class:`PreparedRequest ` being sent. 11:55:59 :param stream: (optional) Whether to stream the request content. 11:55:59 :param timeout: (optional) How long to wait for the server to send 11:55:59 data before giving up, as a float, or a :ref:`(connect timeout, 11:55:59 read timeout) ` tuple. 11:55:59 :type timeout: float or tuple or urllib3 Timeout object 11:55:59 :param verify: (optional) Either a boolean, in which case it controls whether 11:55:59 we verify the server's TLS certificate, or a string, in which case it 11:55:59 must be a path to a CA bundle to use 11:55:59 :param cert: (optional) Any user-provided SSL certificate to be trusted. 11:55:59 :param proxies: (optional) The proxies dictionary to apply to the request. 11:55:59 :rtype: requests.Response 11:55:59 """ 11:55:59 11:55:59 try: 11:55:59 conn = self.get_connection_with_tls_context( 11:55:59 request, verify, proxies=proxies, cert=cert 11:55:59 ) 11:55:59 except LocationValueError as e: 11:55:59 raise InvalidURL(e, request=request) 11:55:59 11:55:59 self.cert_verify(conn, request.url, verify, cert) 11:55:59 url = self.request_url(request, proxies) 11:55:59 self.add_headers( 11:55:59 request, 11:55:59 stream=stream, 11:55:59 timeout=timeout, 11:55:59 verify=verify, 11:55:59 cert=cert, 11:55:59 proxies=proxies, 11:55:59 ) 11:55:59 11:55:59 chunked = not (request.body is None or "Content-Length" in request.headers) 11:55:59 11:55:59 if isinstance(timeout, tuple): 11:55:59 try: 11:55:59 connect, read = timeout 11:55:59 timeout = TimeoutSauce(connect=connect, read=read) 11:55:59 except ValueError: 11:55:59 raise ValueError( 11:55:59 f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 11:55:59 f"or a single float to set both timeouts to the same value." 
11:55:59 ) 11:55:59 elif isinstance(timeout, TimeoutSauce): 11:55:59 pass 11:55:59 else: 11:55:59 timeout = TimeoutSauce(connect=timeout, read=timeout) 11:55:59 11:55:59 try: 11:55:59 resp = conn.urlopen( 11:55:59 method=request.method, 11:55:59 url=url, 11:55:59 body=request.body, 11:55:59 headers=request.headers, 11:55:59 redirect=False, 11:55:59 assert_same_host=False, 11:55:59 preload_content=False, 11:55:59 decode_content=False, 11:55:59 retries=self.max_retries, 11:55:59 timeout=timeout, 11:55:59 chunked=chunked, 11:55:59 ) 11:55:59 11:55:59 except (ProtocolError, OSError) as err: 11:55:59 raise ConnectionError(err, request=request) 11:55:59 11:55:59 except MaxRetryError as e: 11:55:59 if isinstance(e.reason, ConnectTimeoutError): 11:55:59 # TODO: Remove this in 3.0.0: see #2811 11:55:59 if not isinstance(e.reason, NewConnectionError): 11:55:59 raise ConnectTimeout(e, request=request) 11:55:59 11:55:59 if isinstance(e.reason, ResponseError): 11:55:59 raise RetryError(e, request=request) 11:55:59 11:55:59 if isinstance(e.reason, _ProxyError): 11:55:59 raise ProxyError(e, request=request) 11:55:59 11:55:59 if isinstance(e.reason, _SSLError): 11:55:59 # This branch is for urllib3 v1.22 and later. 11:55:59 raise SSLError(e, request=request) 11:55:59 11:55:59 > raise ConnectionError(e, request=request) 11:55:59 E requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=8183): Max retries exceeded with url: /rests/operations/tapi-topology:get-topology-details (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 11:55:59 11:55:59 ../.tox/tests_tapi/lib/python3.11/site-packages/requests/adapters.py:700: ConnectionError 11:55:59 ----------------------------- Captured stdout call ----------------------------- 11:55:59 execution of test_43_get_tapi_topology_T100G 11:55:59 ________________ TransportTapitesting.test_44_disconnect_roadma ________________ 11:55:59 11:55:59 self = 11:55:59 11:55:59 def _new_conn(self) -> socket.socket: 11:55:59 """Establish a socket connection and set nodelay settings on it. 11:55:59 11:55:59 :return: New socket connection. 11:55:59 """ 11:55:59 try: 11:55:59 > sock = connection.create_connection( 11:55:59 (self._dns_host, self.port), 11:55:59 self.timeout, 11:55:59 source_address=self.source_address, 11:55:59 socket_options=self.socket_options, 11:55:59 ) 11:55:59 11:55:59 ../.tox/tests_tapi/lib/python3.11/site-packages/urllib3/connection.py:198: 11:55:59 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 11:55:59 ../.tox/tests_tapi/lib/python3.11/site-packages/urllib3/util/connection.py:85: in create_connection 11:55:59 raise err 11:55:59 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 11:55:59 11:55:59 address = ('localhost', 8183), timeout = 30, source_address = None 11:55:59 socket_options = [(6, 1, 1)] 11:55:59 11:55:59 def create_connection( 11:55:59 address: tuple[str, int], 11:55:59 timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 11:55:59 source_address: tuple[str, int] | None = None, 11:55:59 socket_options: _TYPE_SOCKET_OPTIONS | None = None, 11:55:59 ) -> socket.socket: 11:55:59 """Connect to *address* and return the socket object. 11:55:59 11:55:59 Convenience function. Connect to *address* (a 2-tuple ``(host, 11:55:59 port)``) and return the socket object. Passing the optional 11:55:59 *timeout* parameter will set the timeout on the socket instance 11:55:59 before attempting to connect. 
If no *timeout* is supplied, the 11:55:59 global default timeout setting returned by :func:`socket.getdefaulttimeout` 11:55:59 is used. If *source_address* is set it must be a tuple of (host, port) 11:55:59 for the socket to bind as a source address before making the connection. 11:55:59 An host of '' or port 0 tells the OS to use the default. 11:55:59 """ 11:55:59 11:55:59 host, port = address 11:55:59 if host.startswith("["): 11:55:59 host = host.strip("[]") 11:55:59 err = None 11:55:59 11:55:59 # Using the value from allowed_gai_family() in the context of getaddrinfo lets 11:55:59 # us select whether to work with IPv4 DNS records, IPv6 records, or both. 11:55:59 # The original create_connection function always returns all records. 11:55:59 family = allowed_gai_family() 11:55:59 11:55:59 try: 11:55:59 host.encode("idna") 11:55:59 except UnicodeError: 11:55:59 raise LocationParseError(f"'{host}', label empty or too long") from None 11:55:59 11:55:59 for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM): 11:55:59 af, socktype, proto, canonname, sa = res 11:55:59 sock = None 11:55:59 try: 11:55:59 sock = socket.socket(af, socktype, proto) 11:55:59 11:55:59 # If provided, set socket level options before connecting. 11:55:59 _set_socket_options(sock, socket_options) 11:55:59 11:55:59 if timeout is not _DEFAULT_TIMEOUT: 11:55:59 sock.settimeout(timeout) 11:55:59 if source_address: 11:55:59 sock.bind(source_address) 11:55:59 > sock.connect(sa) 11:55:59 E ConnectionRefusedError: [Errno 111] Connection refused 11:55:59 11:55:59 ../.tox/tests_tapi/lib/python3.11/site-packages/urllib3/util/connection.py:73: ConnectionRefusedError 11:55:59 11:55:59 The above exception was the direct cause of the following exception: 11:55:59 11:55:59 self = 11:55:59 method = 'DELETE' 11:55:59 url = '/rests/data/network-topology:network-topology/topology=topology-netconf/node=ROADM-A1' 11:55:59 body = None 11:55:59 headers = {'User-Agent': 'python-requests/2.32.4', 'Accept-Encoding': 'gzip, deflate', 'Accept': 'application/json', 'Connection': 'keep-alive', 'Content-Type': 'application/json', 'Content-Length': '0', 'Authorization': 'Basic YWRtaW46YWRtaW4='} 11:55:59 retries = Retry(total=0, connect=None, read=False, redirect=None, status=None) 11:55:59 redirect = False, assert_same_host = False 11:55:59 timeout = Timeout(connect=30, read=30, total=None), pool_timeout = None 11:55:59 release_conn = False, chunked = False, body_pos = None, preload_content = False 11:55:59 decode_content = False, response_kw = {} 11:55:59 parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/rests/data/network-topology:network-topology/topology=topology-netconf/node=ROADM-A1', query=None, fragment=None) 11:55:59 destination_scheme = None, conn = None, release_this_conn = True 11:55:59 http_tunnel_required = False, err = None, clean_exit = False 11:55:59 11:55:59 def urlopen( # type: ignore[override] 11:55:59 self, 11:55:59 method: str, 11:55:59 url: str, 11:55:59 body: _TYPE_BODY | None = None, 11:55:59 headers: typing.Mapping[str, str] | None = None, 11:55:59 retries: Retry | bool | int | None = None, 11:55:59 redirect: bool = True, 11:55:59 assert_same_host: bool = True, 11:55:59 timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 11:55:59 pool_timeout: int | None = None, 11:55:59 release_conn: bool | None = None, 11:55:59 chunked: bool = False, 11:55:59 body_pos: _TYPE_BODY_POSITION | None = None, 11:55:59 preload_content: bool = True, 11:55:59 decode_content: bool = True, 11:55:59 **response_kw: typing.Any, 
11:55:59 ) -> BaseHTTPResponse: 11:55:59 """ 11:55:59 Get a connection from the pool and perform an HTTP request. This is the 11:55:59 lowest level call for making a request, so you'll need to specify all 11:55:59 the raw details. 11:55:59 11:55:59 .. note:: 11:55:59 11:55:59 More commonly, it's appropriate to use a convenience method 11:55:59 such as :meth:`request`. 11:55:59 11:55:59 .. note:: 11:55:59 11:55:59 `release_conn` will only behave as expected if 11:55:59 `preload_content=False` because we want to make 11:55:59 `preload_content=False` the default behaviour someday soon without 11:55:59 breaking backwards compatibility. 11:55:59 11:55:59 :param method: 11:55:59 HTTP request method (such as GET, POST, PUT, etc.) 11:55:59 11:55:59 :param url: 11:55:59 The URL to perform the request on. 11:55:59 11:55:59 :param body: 11:55:59 Data to send in the request body, either :class:`str`, :class:`bytes`, 11:55:59 an iterable of :class:`str`/:class:`bytes`, or a file-like object. 11:55:59 11:55:59 :param headers: 11:55:59 Dictionary of custom headers to send, such as User-Agent, 11:55:59 If-None-Match, etc. If None, pool headers are used. If provided, 11:55:59 these headers completely replace any pool-specific headers. 11:55:59 11:55:59 :param retries: 11:55:59 Configure the number of retries to allow before raising a 11:55:59 :class:`~urllib3.exceptions.MaxRetryError` exception. 11:55:59 11:55:59 If ``None`` (default) will retry 3 times, see ``Retry.DEFAULT``. Pass a 11:55:59 :class:`~urllib3.util.retry.Retry` object for fine-grained control 11:55:59 over different types of retries. 11:55:59 Pass an integer number to retry connection errors that many times, 11:55:59 but no other types of errors. Pass zero to never retry. 11:55:59 11:55:59 If ``False``, then retries are disabled and any exception is raised 11:55:59 immediately. Also, instead of raising a MaxRetryError on redirects, 11:55:59 the redirect response will be returned. 11:55:59 11:55:59 :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. 11:55:59 11:55:59 :param redirect: 11:55:59 If True, automatically handle redirects (status codes 301, 302, 11:55:59 303, 307, 308). Each redirect counts as a retry. Disabling retries 11:55:59 will disable redirect, too. 11:55:59 11:55:59 :param assert_same_host: 11:55:59 If ``True``, will make sure that the host of the pool requests is 11:55:59 consistent else will raise HostChangedError. When ``False``, you can 11:55:59 use the pool on an HTTP proxy and request foreign hosts. 11:55:59 11:55:59 :param timeout: 11:55:59 If specified, overrides the default timeout for this one 11:55:59 request. It may be a float (in seconds) or an instance of 11:55:59 :class:`urllib3.util.Timeout`. 11:55:59 11:55:59 :param pool_timeout: 11:55:59 If set and the pool is set to block=True, then this method will 11:55:59 block for ``pool_timeout`` seconds and raise EmptyPoolError if no 11:55:59 connection is available within the time period. 11:55:59 11:55:59 :param bool preload_content: 11:55:59 If True, the response's body will be preloaded into memory. 11:55:59 11:55:59 :param bool decode_content: 11:55:59 If True, will attempt to decode the body based on the 11:55:59 'content-encoding' header. 11:55:59 11:55:59 :param release_conn: 11:55:59 If False, then the urlopen call will not release the connection 11:55:59 back into the pool once a response is received (but will release if 11:55:59 you read the entire contents of the response such as when 11:55:59 `preload_content=True`). 
This is useful if you're not preloading 11:55:59 the response's content immediately. You will need to call 11:55:59 ``r.release_conn()`` on the response ``r`` to return the connection 11:55:59 back into the pool. If None, it takes the value of ``preload_content`` 11:55:59 which defaults to ``True``. 11:55:59 11:55:59 :param bool chunked: 11:55:59 If True, urllib3 will send the body using chunked transfer 11:55:59 encoding. Otherwise, urllib3 will send the body using the standard 11:55:59 content-length form. Defaults to False. 11:55:59 11:55:59 :param int body_pos: 11:55:59 Position to seek to in file-like body in the event of a retry or 11:55:59 redirect. Typically this won't need to be set because urllib3 will 11:55:59 auto-populate the value when needed. 11:55:59 """ 11:55:59 parsed_url = parse_url(url) 11:55:59 destination_scheme = parsed_url.scheme 11:55:59 11:55:59 if headers is None: 11:55:59 headers = self.headers 11:55:59 11:55:59 if not isinstance(retries, Retry): 11:55:59 retries = Retry.from_int(retries, redirect=redirect, default=self.retries) 11:55:59 11:55:59 if release_conn is None: 11:55:59 release_conn = preload_content 11:55:59 11:55:59 # Check host 11:55:59 if assert_same_host and not self.is_same_host(url): 11:55:59 raise HostChangedError(self, url, retries) 11:55:59 11:55:59 # Ensure that the URL we're connecting to is properly encoded 11:55:59 if url.startswith("/"): 11:55:59 url = to_str(_encode_target(url)) 11:55:59 else: 11:55:59 url = to_str(parsed_url.url) 11:55:59 11:55:59 conn = None 11:55:59 11:55:59 # Track whether `conn` needs to be released before 11:55:59 # returning/raising/recursing. Update this variable if necessary, and 11:55:59 # leave `release_conn` constant throughout the function. That way, if 11:55:59 # the function recurses, the original value of `release_conn` will be 11:55:59 # passed down into the recursive call, and its value will be respected. 11:55:59 # 11:55:59 # See issue #651 [1] for details. 11:55:59 # 11:55:59 # [1] 11:55:59 release_this_conn = release_conn 11:55:59 11:55:59 http_tunnel_required = connection_requires_http_tunnel( 11:55:59 self.proxy, self.proxy_config, destination_scheme 11:55:59 ) 11:55:59 11:55:59 # Merge the proxy headers. Only done when not using HTTP CONNECT. We 11:55:59 # have to copy the headers dict so we can safely change it without those 11:55:59 # changes being reflected in anyone else's copy. 11:55:59 if not http_tunnel_required: 11:55:59 headers = headers.copy() # type: ignore[attr-defined] 11:55:59 headers.update(self.proxy_headers) # type: ignore[union-attr] 11:55:59 11:55:59 # Must keep the exception bound to a separate variable or else Python 3 11:55:59 # complains about UnboundLocalError. 11:55:59 err = None 11:55:59 11:55:59 # Keep track of whether we cleanly exited the except block. This 11:55:59 # ensures we do proper cleanup in finally. 11:55:59 clean_exit = False 11:55:59 11:55:59 # Rewind body position, if needed. Record current position 11:55:59 # for future rewinds in the event of a redirect/retry. 11:55:59 body_pos = set_file_position(body, body_pos) 11:55:59 11:55:59 try: 11:55:59 # Request a connection from the queue. 11:55:59 timeout_obj = self._get_timeout(timeout) 11:55:59 conn = self._get_conn(timeout=pool_timeout) 11:55:59 11:55:59 conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] 11:55:59 11:55:59 # Is this a closed/new connection that requires CONNECT tunnelling? 
11:55:59 if self.proxy is not None and http_tunnel_required and conn.is_closed: 11:55:59 try: 11:55:59 self._prepare_proxy(conn) 11:55:59 except (BaseSSLError, OSError, SocketTimeout) as e: 11:55:59 self._raise_timeout( 11:55:59 err=e, url=self.proxy.url, timeout_value=conn.timeout 11:55:59 ) 11:55:59 raise 11:55:59 11:55:59 # If we're going to release the connection in ``finally:``, then 11:55:59 # the response doesn't need to know about the connection. Otherwise 11:55:59 # it will also try to release it and we'll have a double-release 11:55:59 # mess. 11:55:59 response_conn = conn if not release_conn else None 11:55:59 11:55:59 # Make the request on the HTTPConnection object 11:55:59 > response = self._make_request( 11:55:59 conn, 11:55:59 method, 11:55:59 url, 11:55:59 timeout=timeout_obj, 11:55:59 body=body, 11:55:59 headers=headers, 11:55:59 chunked=chunked, 11:55:59 retries=retries, 11:55:59 response_conn=response_conn, 11:55:59 preload_content=preload_content, 11:55:59 decode_content=decode_content, 11:55:59 **response_kw, 11:55:59 ) 11:55:59 11:55:59 ../.tox/tests_tapi/lib/python3.11/site-packages/urllib3/connectionpool.py:787: 11:55:59 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 11:55:59 ../.tox/tests_tapi/lib/python3.11/site-packages/urllib3/connectionpool.py:493: in _make_request 11:55:59 conn.request( 11:55:59 ../.tox/tests_tapi/lib/python3.11/site-packages/urllib3/connection.py:494: in request 11:55:59 self.endheaders() 11:55:59 /opt/pyenv/versions/3.11.10/lib/python3.11/http/client.py:1298: in endheaders 11:55:59 self._send_output(message_body, encode_chunked=encode_chunked) 11:55:59 /opt/pyenv/versions/3.11.10/lib/python3.11/http/client.py:1058: in _send_output 11:55:59 self.send(msg) 11:55:59 /opt/pyenv/versions/3.11.10/lib/python3.11/http/client.py:996: in send 11:55:59 self.connect() 11:55:59 ../.tox/tests_tapi/lib/python3.11/site-packages/urllib3/connection.py:325: in connect 11:55:59 self.sock = self._new_conn() 11:55:59 ^^^^^^^^^^^^^^^^ 11:55:59 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 11:55:59 11:55:59 self = 11:55:59 11:55:59 def _new_conn(self) -> socket.socket: 11:55:59 """Establish a socket connection and set nodelay settings on it. 11:55:59 11:55:59 :return: New socket connection. 11:55:59 """ 11:55:59 try: 11:55:59 sock = connection.create_connection( 11:55:59 (self._dns_host, self.port), 11:55:59 self.timeout, 11:55:59 source_address=self.source_address, 11:55:59 socket_options=self.socket_options, 11:55:59 ) 11:55:59 except socket.gaierror as e: 11:55:59 raise NameResolutionError(self.host, self, e) from e 11:55:59 except SocketTimeout as e: 11:55:59 raise ConnectTimeoutError( 11:55:59 self, 11:55:59 f"Connection to {self.host} timed out. 
(connect timeout={self.timeout})", 11:55:59 ) from e 11:55:59 11:55:59 except OSError as e: 11:55:59 > raise NewConnectionError( 11:55:59 self, f"Failed to establish a new connection: {e}" 11:55:59 ) from e 11:55:59 E urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused 11:55:59 11:55:59 ../.tox/tests_tapi/lib/python3.11/site-packages/urllib3/connection.py:213: NewConnectionError 11:55:59 11:55:59 The above exception was the direct cause of the following exception: 11:55:59 11:55:59 self = 11:55:59 request = , stream = False 11:55:59 timeout = Timeout(connect=30, read=30, total=None), verify = True, cert = None 11:55:59 proxies = OrderedDict() 11:55:59 11:55:59 def send( 11:55:59 self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 11:55:59 ): 11:55:59 """Sends PreparedRequest object. Returns Response object. 11:55:59 11:55:59 :param request: The :class:`PreparedRequest ` being sent. 11:55:59 :param stream: (optional) Whether to stream the request content. 11:55:59 :param timeout: (optional) How long to wait for the server to send 11:55:59 data before giving up, as a float, or a :ref:`(connect timeout, 11:55:59 read timeout) ` tuple. 11:55:59 :type timeout: float or tuple or urllib3 Timeout object 11:55:59 :param verify: (optional) Either a boolean, in which case it controls whether 11:55:59 we verify the server's TLS certificate, or a string, in which case it 11:55:59 must be a path to a CA bundle to use 11:55:59 :param cert: (optional) Any user-provided SSL certificate to be trusted. 11:55:59 :param proxies: (optional) The proxies dictionary to apply to the request. 11:55:59 :rtype: requests.Response 11:55:59 """ 11:55:59 11:55:59 try: 11:55:59 conn = self.get_connection_with_tls_context( 11:55:59 request, verify, proxies=proxies, cert=cert 11:55:59 ) 11:55:59 except LocationValueError as e: 11:55:59 raise InvalidURL(e, request=request) 11:55:59 11:55:59 self.cert_verify(conn, request.url, verify, cert) 11:55:59 url = self.request_url(request, proxies) 11:55:59 self.add_headers( 11:55:59 request, 11:55:59 stream=stream, 11:55:59 timeout=timeout, 11:55:59 verify=verify, 11:55:59 cert=cert, 11:55:59 proxies=proxies, 11:55:59 ) 11:55:59 11:55:59 chunked = not (request.body is None or "Content-Length" in request.headers) 11:55:59 11:55:59 if isinstance(timeout, tuple): 11:55:59 try: 11:55:59 connect, read = timeout 11:55:59 timeout = TimeoutSauce(connect=connect, read=read) 11:55:59 except ValueError: 11:55:59 raise ValueError( 11:55:59 f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 11:55:59 f"or a single float to set both timeouts to the same value." 
11:55:59 ) 11:55:59 elif isinstance(timeout, TimeoutSauce): 11:55:59 pass 11:55:59 else: 11:55:59 timeout = TimeoutSauce(connect=timeout, read=timeout) 11:55:59 11:55:59 try: 11:55:59 > resp = conn.urlopen( 11:55:59 method=request.method, 11:55:59 url=url, 11:55:59 body=request.body, 11:55:59 headers=request.headers, 11:55:59 redirect=False, 11:55:59 assert_same_host=False, 11:55:59 preload_content=False, 11:55:59 decode_content=False, 11:55:59 retries=self.max_retries, 11:55:59 timeout=timeout, 11:55:59 chunked=chunked, 11:55:59 ) 11:55:59 11:55:59 ../.tox/tests_tapi/lib/python3.11/site-packages/requests/adapters.py:667: 11:55:59 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 11:55:59 ../.tox/tests_tapi/lib/python3.11/site-packages/urllib3/connectionpool.py:841: in urlopen 11:55:59 retries = retries.increment( 11:55:59 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 11:55:59 11:55:59 self = Retry(total=0, connect=None, read=False, redirect=None, status=None) 11:55:59 method = 'DELETE' 11:55:59 url = '/rests/data/network-topology:network-topology/topology=topology-netconf/node=ROADM-A1' 11:55:59 response = None 11:55:59 error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused') 11:55:59 _pool = 11:55:59 _stacktrace = 11:55:59 11:55:59 def increment( 11:55:59 self, 11:55:59 method: str | None = None, 11:55:59 url: str | None = None, 11:55:59 response: BaseHTTPResponse | None = None, 11:55:59 error: Exception | None = None, 11:55:59 _pool: ConnectionPool | None = None, 11:55:59 _stacktrace: TracebackType | None = None, 11:55:59 ) -> Self: 11:55:59 """Return a new Retry object with incremented retry counters. 11:55:59 11:55:59 :param response: A response object, or None, if the server did not 11:55:59 return a response. 11:55:59 :type response: :class:`~urllib3.response.BaseHTTPResponse` 11:55:59 :param Exception error: An error encountered during the request, or 11:55:59 None if the response was received successfully. 11:55:59 11:55:59 :return: A new ``Retry`` object. 11:55:59 """ 11:55:59 if self.total is False and error: 11:55:59 # Disabled, indicate to re-raise the error. 11:55:59 raise reraise(type(error), error, _stacktrace) 11:55:59 11:55:59 total = self.total 11:55:59 if total is not None: 11:55:59 total -= 1 11:55:59 11:55:59 connect = self.connect 11:55:59 read = self.read 11:55:59 redirect = self.redirect 11:55:59 status_count = self.status 11:55:59 other = self.other 11:55:59 cause = "unknown" 11:55:59 status = None 11:55:59 redirect_location = None 11:55:59 11:55:59 if error and self._is_connection_error(error): 11:55:59 # Connect retry? 11:55:59 if connect is False: 11:55:59 raise reraise(type(error), error, _stacktrace) 11:55:59 elif connect is not None: 11:55:59 connect -= 1 11:55:59 11:55:59 elif error and self._is_read_error(error): 11:55:59 # Read retry? 11:55:59 if read is False or method is None or not self._is_method_retryable(method): 11:55:59 raise reraise(type(error), error, _stacktrace) 11:55:59 elif read is not None: 11:55:59 read -= 1 11:55:59 11:55:59 elif error: 11:55:59 # Other retry? 11:55:59 if other is not None: 11:55:59 other -= 1 11:55:59 11:55:59 elif response and response.get_redirect_location(): 11:55:59 # Redirect retry? 
11:55:59 if redirect is not None: 11:55:59 redirect -= 1 11:55:59 cause = "too many redirects" 11:55:59 response_redirect_location = response.get_redirect_location() 11:55:59 if response_redirect_location: 11:55:59 redirect_location = response_redirect_location 11:55:59 status = response.status 11:55:59 11:55:59 else: 11:55:59 # Incrementing because of a server error like a 500 in 11:55:59 # status_forcelist and the given method is in the allowed_methods 11:55:59 cause = ResponseError.GENERIC_ERROR 11:55:59 if response and response.status: 11:55:59 if status_count is not None: 11:55:59 status_count -= 1 11:55:59 cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status) 11:55:59 status = response.status 11:55:59 11:55:59 history = self.history + ( 11:55:59 RequestHistory(method, url, error, status, redirect_location), 11:55:59 ) 11:55:59 11:55:59 new_retry = self.new( 11:55:59 total=total, 11:55:59 connect=connect, 11:55:59 read=read, 11:55:59 redirect=redirect, 11:55:59 status=status_count, 11:55:59 other=other, 11:55:59 history=history, 11:55:59 ) 11:55:59 11:55:59 if new_retry.is_exhausted(): 11:55:59 reason = error or ResponseError(cause) 11:55:59 > raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type] 11:55:59 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 11:55:59 E urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=8183): Max retries exceeded with url: /rests/data/network-topology:network-topology/topology=topology-netconf/node=ROADM-A1 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 11:55:59 11:55:59 ../.tox/tests_tapi/lib/python3.11/site-packages/urllib3/util/retry.py:519: MaxRetryError 11:55:59 11:55:59 During handling of the above exception, another exception occurred: 11:55:59 11:55:59 self = 11:55:59 11:55:59 def test_44_disconnect_roadma(self): 11:55:59 > response = test_utils.unmount_device("ROADM-A1") 11:55:59 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 11:55:59 11:55:59 transportpce_tests/tapi/test01_abstracted_topology.py:653: 11:55:59 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 11:55:59 transportpce_tests/common/test_utils.py:379: in unmount_device 11:55:59 response = delete_request(url[RESTCONF_VERSION].format('{}', node)) 11:55:59 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 11:55:59 transportpce_tests/common/test_utils.py:133: in delete_request 11:55:59 return requests.request( 11:55:59 ../.tox/tests_tapi/lib/python3.11/site-packages/requests/api.py:59: in request 11:55:59 return session.request(method=method, url=url, **kwargs) 11:55:59 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 11:55:59 ../.tox/tests_tapi/lib/python3.11/site-packages/requests/sessions.py:589: in request 11:55:59 resp = self.send(prep, **send_kwargs) 11:55:59 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 11:55:59 ../.tox/tests_tapi/lib/python3.11/site-packages/requests/sessions.py:703: in send 11:55:59 r = adapter.send(request, **kwargs) 11:55:59 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 11:55:59 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 11:55:59 11:55:59 self = 11:55:59 request = , stream = False 11:55:59 timeout = Timeout(connect=30, read=30, total=None), verify = True, cert = None 11:55:59 proxies = OrderedDict() 11:55:59 11:55:59 def send( 11:55:59 self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 11:55:59 ): 11:55:59 """Sends PreparedRequest object. Returns Response object. 
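test_44's traceback shows the teardown path: `test_utils.unmount_device("ROADM-A1")` calls `delete_request()`, which issues a RESTCONF DELETE against the topology-netconf mount point, and that request fails with the same refused connection. The sketch below is a hedged equivalent of that DELETE with plain `requests`; the URL and credentials are the ones visible in the locals further up, while the function body is illustrative rather than the harness implementation:

```python
import requests

# Sketch of the RESTCONF call behind test_utils.unmount_device(), based on the
# DELETE request shown in the traceback locals (not the harness code itself).
def unmount_device(node_id: str) -> requests.Response:
    url = (
        "http://localhost:8183/rests/data/network-topology:network-topology"
        f"/topology=topology-netconf/node={node_id}"
    )
    return requests.delete(url, auth=("admin", "admin"),
                           headers={"Accept": "application/json"},
                           timeout=(30, 30))
```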
11:55:59 11:55:59 :param request: The :class:`PreparedRequest ` being sent. 11:55:59 :param stream: (optional) Whether to stream the request content. 11:55:59 :param timeout: (optional) How long to wait for the server to send 11:55:59 data before giving up, as a float, or a :ref:`(connect timeout, 11:55:59 read timeout) ` tuple. 11:55:59 :type timeout: float or tuple or urllib3 Timeout object 11:55:59 :param verify: (optional) Either a boolean, in which case it controls whether 11:55:59 we verify the server's TLS certificate, or a string, in which case it 11:55:59 must be a path to a CA bundle to use 11:55:59 :param cert: (optional) Any user-provided SSL certificate to be trusted. 11:55:59 :param proxies: (optional) The proxies dictionary to apply to the request. 11:55:59 :rtype: requests.Response 11:55:59 """ 11:55:59 11:55:59 try: 11:55:59 conn = self.get_connection_with_tls_context( 11:55:59 request, verify, proxies=proxies, cert=cert 11:55:59 ) 11:55:59 except LocationValueError as e: 11:55:59 raise InvalidURL(e, request=request) 11:55:59 11:55:59 self.cert_verify(conn, request.url, verify, cert) 11:55:59 url = self.request_url(request, proxies) 11:55:59 self.add_headers( 11:55:59 request, 11:55:59 stream=stream, 11:55:59 timeout=timeout, 11:55:59 verify=verify, 11:55:59 cert=cert, 11:55:59 proxies=proxies, 11:55:59 ) 11:55:59 11:55:59 chunked = not (request.body is None or "Content-Length" in request.headers) 11:55:59 11:55:59 if isinstance(timeout, tuple): 11:55:59 try: 11:55:59 connect, read = timeout 11:55:59 timeout = TimeoutSauce(connect=connect, read=read) 11:55:59 except ValueError: 11:55:59 raise ValueError( 11:55:59 f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 11:55:59 f"or a single float to set both timeouts to the same value." 11:55:59 ) 11:55:59 elif isinstance(timeout, TimeoutSauce): 11:55:59 pass 11:55:59 else: 11:55:59 timeout = TimeoutSauce(connect=timeout, read=timeout) 11:55:59 11:55:59 try: 11:55:59 resp = conn.urlopen( 11:55:59 method=request.method, 11:55:59 url=url, 11:55:59 body=request.body, 11:55:59 headers=request.headers, 11:55:59 redirect=False, 11:55:59 assert_same_host=False, 11:55:59 preload_content=False, 11:55:59 decode_content=False, 11:55:59 retries=self.max_retries, 11:55:59 timeout=timeout, 11:55:59 chunked=chunked, 11:55:59 ) 11:55:59 11:55:59 except (ProtocolError, OSError) as err: 11:55:59 raise ConnectionError(err, request=request) 11:55:59 11:55:59 except MaxRetryError as e: 11:55:59 if isinstance(e.reason, ConnectTimeoutError): 11:55:59 # TODO: Remove this in 3.0.0: see #2811 11:55:59 if not isinstance(e.reason, NewConnectionError): 11:55:59 raise ConnectTimeout(e, request=request) 11:55:59 11:55:59 if isinstance(e.reason, ResponseError): 11:55:59 raise RetryError(e, request=request) 11:55:59 11:55:59 if isinstance(e.reason, _ProxyError): 11:55:59 raise ProxyError(e, request=request) 11:55:59 11:55:59 if isinstance(e.reason, _SSLError): 11:55:59 # This branch is for urllib3 v1.22 and later. 
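The `except MaxRetryError` block above is where urllib3's error taxonomy gets translated into requests' public exceptions, which is why these tests ultimately report `requests.exceptions.ConnectionError` rather than the underlying `NewConnectionError`. A sketch of how a caller can tell the translated cases apart (URL and payload are illustrative; subclass order matters because `ConnectTimeout`, `ProxyError` and `SSLError` all derive from `ConnectionError`):

```python
import requests

url = "http://localhost:8183/rests/operations/tapi-topology:get-topology-details"
try:
    requests.post(url, json={"input": {"topology-id": "..."}},
                  auth=("admin", "admin"), timeout=(30, 30))
except requests.exceptions.ConnectTimeout:
    print("connect timeout (urllib3 ConnectTimeoutError)")
except requests.exceptions.RetryError:
    print("retries exhausted on HTTP status (urllib3 ResponseError)")
except requests.exceptions.ProxyError:
    print("proxy failure")
except requests.exceptions.SSLError:
    print("TLS failure")
except requests.exceptions.ConnectionError as exc:
    print("generic connection failure, e.g. connection refused:", exc)
```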
11:55:59 raise SSLError(e, request=request) 11:55:59 11:55:59 > raise ConnectionError(e, request=request) 11:55:59 E requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=8183): Max retries exceeded with url: /rests/data/network-topology:network-topology/topology=topology-netconf/node=ROADM-A1 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 11:55:59 11:55:59 ../.tox/tests_tapi/lib/python3.11/site-packages/requests/adapters.py:700: ConnectionError 11:55:59 ----------------------------- Captured stdout call ----------------------------- 11:55:59 execution of test_44_disconnect_roadma 11:55:59 ________________ TransportTapitesting.test_45_disconnect_roadmc ________________ 11:55:59 11:55:59 self = 11:55:59 11:55:59 def _new_conn(self) -> socket.socket: 11:55:59 """Establish a socket connection and set nodelay settings on it. 11:55:59 11:55:59 :return: New socket connection. 11:55:59 """ 11:55:59 try: 11:55:59 > sock = connection.create_connection( 11:55:59 (self._dns_host, self.port), 11:55:59 self.timeout, 11:55:59 source_address=self.source_address, 11:55:59 socket_options=self.socket_options, 11:55:59 ) 11:55:59 11:55:59 ../.tox/tests_tapi/lib/python3.11/site-packages/urllib3/connection.py:198: 11:55:59 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 11:55:59 ../.tox/tests_tapi/lib/python3.11/site-packages/urllib3/util/connection.py:85: in create_connection 11:55:59 raise err 11:55:59 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 11:55:59 11:55:59 address = ('localhost', 8183), timeout = 30, source_address = None 11:55:59 socket_options = [(6, 1, 1)] 11:55:59 11:55:59 def create_connection( 11:55:59 address: tuple[str, int], 11:55:59 timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 11:55:59 source_address: tuple[str, int] | None = None, 11:55:59 socket_options: _TYPE_SOCKET_OPTIONS | None = None, 11:55:59 ) -> socket.socket: 11:55:59 """Connect to *address* and return the socket object. 11:55:59 11:55:59 Convenience function. Connect to *address* (a 2-tuple ``(host, 11:55:59 port)``) and return the socket object. Passing the optional 11:55:59 *timeout* parameter will set the timeout on the socket instance 11:55:59 before attempting to connect. If no *timeout* is supplied, the 11:55:59 global default timeout setting returned by :func:`socket.getdefaulttimeout` 11:55:59 is used. If *source_address* is set it must be a tuple of (host, port) 11:55:59 for the socket to bind as a source address before making the connection. 11:55:59 An host of '' or port 0 tells the OS to use the default. 11:55:59 """ 11:55:59 11:55:59 host, port = address 11:55:59 if host.startswith("["): 11:55:59 host = host.strip("[]") 11:55:59 err = None 11:55:59 11:55:59 # Using the value from allowed_gai_family() in the context of getaddrinfo lets 11:55:59 # us select whether to work with IPv4 DNS records, IPv6 records, or both. 11:55:59 # The original create_connection function always returns all records. 
11:55:59 family = allowed_gai_family() 11:55:59 11:55:59 try: 11:55:59 host.encode("idna") 11:55:59 except UnicodeError: 11:55:59 raise LocationParseError(f"'{host}', label empty or too long") from None 11:55:59 11:55:59 for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM): 11:55:59 af, socktype, proto, canonname, sa = res 11:55:59 sock = None 11:55:59 try: 11:55:59 sock = socket.socket(af, socktype, proto) 11:55:59 11:55:59 # If provided, set socket level options before connecting. 11:55:59 _set_socket_options(sock, socket_options) 11:55:59 11:55:59 if timeout is not _DEFAULT_TIMEOUT: 11:55:59 sock.settimeout(timeout) 11:55:59 if source_address: 11:55:59 sock.bind(source_address) 11:55:59 > sock.connect(sa) 11:55:59 E ConnectionRefusedError: [Errno 111] Connection refused 11:55:59 11:55:59 ../.tox/tests_tapi/lib/python3.11/site-packages/urllib3/util/connection.py:73: ConnectionRefusedError 11:55:59 11:55:59 The above exception was the direct cause of the following exception: 11:55:59 11:55:59 self = 11:55:59 method = 'DELETE' 11:55:59 url = '/rests/data/network-topology:network-topology/topology=topology-netconf/node=ROADM-C1' 11:55:59 body = None 11:55:59 headers = {'User-Agent': 'python-requests/2.32.4', 'Accept-Encoding': 'gzip, deflate', 'Accept': 'application/json', 'Connection': 'keep-alive', 'Content-Type': 'application/json', 'Content-Length': '0', 'Authorization': 'Basic YWRtaW46YWRtaW4='} 11:55:59 retries = Retry(total=0, connect=None, read=False, redirect=None, status=None) 11:55:59 redirect = False, assert_same_host = False 11:55:59 timeout = Timeout(connect=30, read=30, total=None), pool_timeout = None 11:55:59 release_conn = False, chunked = False, body_pos = None, preload_content = False 11:55:59 decode_content = False, response_kw = {} 11:55:59 parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/rests/data/network-topology:network-topology/topology=topology-netconf/node=ROADM-C1', query=None, fragment=None) 11:55:59 destination_scheme = None, conn = None, release_this_conn = True 11:55:59 http_tunnel_required = False, err = None, clean_exit = False 11:55:59 11:55:59 def urlopen( # type: ignore[override] 11:55:59 self, 11:55:59 method: str, 11:55:59 url: str, 11:55:59 body: _TYPE_BODY | None = None, 11:55:59 headers: typing.Mapping[str, str] | None = None, 11:55:59 retries: Retry | bool | int | None = None, 11:55:59 redirect: bool = True, 11:55:59 assert_same_host: bool = True, 11:55:59 timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 11:55:59 pool_timeout: int | None = None, 11:55:59 release_conn: bool | None = None, 11:55:59 chunked: bool = False, 11:55:59 body_pos: _TYPE_BODY_POSITION | None = None, 11:55:59 preload_content: bool = True, 11:55:59 decode_content: bool = True, 11:55:59 **response_kw: typing.Any, 11:55:59 ) -> BaseHTTPResponse: 11:55:59 """ 11:55:59 Get a connection from the pool and perform an HTTP request. This is the 11:55:59 lowest level call for making a request, so you'll need to specify all 11:55:59 the raw details. 11:55:59 11:55:59 .. note:: 11:55:59 11:55:59 More commonly, it's appropriate to use a convenience method 11:55:59 such as :meth:`request`. 11:55:59 11:55:59 .. note:: 11:55:59 11:55:59 `release_conn` will only behave as expected if 11:55:59 `preload_content=False` because we want to make 11:55:59 `preload_content=False` the default behaviour someday soon without 11:55:59 breaking backwards compatibility. 11:55:59 11:55:59 :param method: 11:55:59 HTTP request method (such as GET, POST, PUT, etc.) 
11:55:59 11:55:59 :param url: 11:55:59 The URL to perform the request on. 11:55:59 11:55:59 :param body: 11:55:59 Data to send in the request body, either :class:`str`, :class:`bytes`, 11:55:59 an iterable of :class:`str`/:class:`bytes`, or a file-like object. 11:55:59 11:55:59 :param headers: 11:55:59 Dictionary of custom headers to send, such as User-Agent, 11:55:59 If-None-Match, etc. If None, pool headers are used. If provided, 11:55:59 these headers completely replace any pool-specific headers. 11:55:59 11:55:59 :param retries: 11:55:59 Configure the number of retries to allow before raising a 11:55:59 :class:`~urllib3.exceptions.MaxRetryError` exception. 11:55:59 11:55:59 If ``None`` (default) will retry 3 times, see ``Retry.DEFAULT``. Pass a 11:55:59 :class:`~urllib3.util.retry.Retry` object for fine-grained control 11:55:59 over different types of retries. 11:55:59 Pass an integer number to retry connection errors that many times, 11:55:59 but no other types of errors. Pass zero to never retry. 11:55:59 11:55:59 If ``False``, then retries are disabled and any exception is raised 11:55:59 immediately. Also, instead of raising a MaxRetryError on redirects, 11:55:59 the redirect response will be returned. 11:55:59 11:55:59 :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. 11:55:59 11:55:59 :param redirect: 11:55:59 If True, automatically handle redirects (status codes 301, 302, 11:55:59 303, 307, 308). Each redirect counts as a retry. Disabling retries 11:55:59 will disable redirect, too. 11:55:59 11:55:59 :param assert_same_host: 11:55:59 If ``True``, will make sure that the host of the pool requests is 11:55:59 consistent else will raise HostChangedError. When ``False``, you can 11:55:59 use the pool on an HTTP proxy and request foreign hosts. 11:55:59 11:55:59 :param timeout: 11:55:59 If specified, overrides the default timeout for this one 11:55:59 request. It may be a float (in seconds) or an instance of 11:55:59 :class:`urllib3.util.Timeout`. 11:55:59 11:55:59 :param pool_timeout: 11:55:59 If set and the pool is set to block=True, then this method will 11:55:59 block for ``pool_timeout`` seconds and raise EmptyPoolError if no 11:55:59 connection is available within the time period. 11:55:59 11:55:59 :param bool preload_content: 11:55:59 If True, the response's body will be preloaded into memory. 11:55:59 11:55:59 :param bool decode_content: 11:55:59 If True, will attempt to decode the body based on the 11:55:59 'content-encoding' header. 11:55:59 11:55:59 :param release_conn: 11:55:59 If False, then the urlopen call will not release the connection 11:55:59 back into the pool once a response is received (but will release if 11:55:59 you read the entire contents of the response such as when 11:55:59 `preload_content=True`). This is useful if you're not preloading 11:55:59 the response's content immediately. You will need to call 11:55:59 ``r.release_conn()`` on the response ``r`` to return the connection 11:55:59 back into the pool. If None, it takes the value of ``preload_content`` 11:55:59 which defaults to ``True``. 11:55:59 11:55:59 :param bool chunked: 11:55:59 If True, urllib3 will send the body using chunked transfer 11:55:59 encoding. Otherwise, urllib3 will send the body using the standard 11:55:59 content-length form. Defaults to False. 11:55:59 11:55:59 :param int body_pos: 11:55:59 Position to seek to in file-like body in the event of a retry or 11:55:59 redirect. 
Typically this won't need to be set because urllib3 will 11:55:59 auto-populate the value when needed. 11:55:59 """ 11:55:59 parsed_url = parse_url(url) 11:55:59 destination_scheme = parsed_url.scheme 11:55:59 11:55:59 if headers is None: 11:55:59 headers = self.headers 11:55:59 11:55:59 if not isinstance(retries, Retry): 11:55:59 retries = Retry.from_int(retries, redirect=redirect, default=self.retries) 11:55:59 11:55:59 if release_conn is None: 11:55:59 release_conn = preload_content 11:55:59 11:55:59 # Check host 11:55:59 if assert_same_host and not self.is_same_host(url): 11:55:59 raise HostChangedError(self, url, retries) 11:55:59 11:55:59 # Ensure that the URL we're connecting to is properly encoded 11:55:59 if url.startswith("/"): 11:55:59 url = to_str(_encode_target(url)) 11:55:59 else: 11:55:59 url = to_str(parsed_url.url) 11:55:59 11:55:59 conn = None 11:55:59 11:55:59 # Track whether `conn` needs to be released before 11:55:59 # returning/raising/recursing. Update this variable if necessary, and 11:55:59 # leave `release_conn` constant throughout the function. That way, if 11:55:59 # the function recurses, the original value of `release_conn` will be 11:55:59 # passed down into the recursive call, and its value will be respected. 11:55:59 # 11:55:59 # See issue #651 [1] for details. 11:55:59 # 11:55:59 # [1] 11:55:59 release_this_conn = release_conn 11:55:59 11:55:59 http_tunnel_required = connection_requires_http_tunnel( 11:55:59 self.proxy, self.proxy_config, destination_scheme 11:55:59 ) 11:55:59 11:55:59 # Merge the proxy headers. Only done when not using HTTP CONNECT. We 11:55:59 # have to copy the headers dict so we can safely change it without those 11:55:59 # changes being reflected in anyone else's copy. 11:55:59 if not http_tunnel_required: 11:55:59 headers = headers.copy() # type: ignore[attr-defined] 11:55:59 headers.update(self.proxy_headers) # type: ignore[union-attr] 11:55:59 11:55:59 # Must keep the exception bound to a separate variable or else Python 3 11:55:59 # complains about UnboundLocalError. 11:55:59 err = None 11:55:59 11:55:59 # Keep track of whether we cleanly exited the except block. This 11:55:59 # ensures we do proper cleanup in finally. 11:55:59 clean_exit = False 11:55:59 11:55:59 # Rewind body position, if needed. Record current position 11:55:59 # for future rewinds in the event of a redirect/retry. 11:55:59 body_pos = set_file_position(body, body_pos) 11:55:59 11:55:59 try: 11:55:59 # Request a connection from the queue. 11:55:59 timeout_obj = self._get_timeout(timeout) 11:55:59 conn = self._get_conn(timeout=pool_timeout) 11:55:59 11:55:59 conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] 11:55:59 11:55:59 # Is this a closed/new connection that requires CONNECT tunnelling? 11:55:59 if self.proxy is not None and http_tunnel_required and conn.is_closed: 11:55:59 try: 11:55:59 self._prepare_proxy(conn) 11:55:59 except (BaseSSLError, OSError, SocketTimeout) as e: 11:55:59 self._raise_timeout( 11:55:59 err=e, url=self.proxy.url, timeout_value=conn.timeout 11:55:59 ) 11:55:59 raise 11:55:59 11:55:59 # If we're going to release the connection in ``finally:``, then 11:55:59 # the response doesn't need to know about the connection. Otherwise 11:55:59 # it will also try to release it and we'll have a double-release 11:55:59 # mess. 
11:55:59 response_conn = conn if not release_conn else None 11:55:59 11:55:59 # Make the request on the HTTPConnection object 11:55:59 > response = self._make_request( 11:55:59 conn, 11:55:59 method, 11:55:59 url, 11:55:59 timeout=timeout_obj, 11:55:59 body=body, 11:55:59 headers=headers, 11:55:59 chunked=chunked, 11:55:59 retries=retries, 11:55:59 response_conn=response_conn, 11:55:59 preload_content=preload_content, 11:55:59 decode_content=decode_content, 11:55:59 **response_kw, 11:55:59 ) 11:55:59 11:55:59 ../.tox/tests_tapi/lib/python3.11/site-packages/urllib3/connectionpool.py:787: 11:55:59 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 11:55:59 ../.tox/tests_tapi/lib/python3.11/site-packages/urllib3/connectionpool.py:493: in _make_request 11:55:59 conn.request( 11:55:59 ../.tox/tests_tapi/lib/python3.11/site-packages/urllib3/connection.py:494: in request 11:55:59 self.endheaders() 11:55:59 /opt/pyenv/versions/3.11.10/lib/python3.11/http/client.py:1298: in endheaders 11:55:59 self._send_output(message_body, encode_chunked=encode_chunked) 11:55:59 /opt/pyenv/versions/3.11.10/lib/python3.11/http/client.py:1058: in _send_output 11:55:59 self.send(msg) 11:55:59 /opt/pyenv/versions/3.11.10/lib/python3.11/http/client.py:996: in send 11:55:59 self.connect() 11:55:59 ../.tox/tests_tapi/lib/python3.11/site-packages/urllib3/connection.py:325: in connect 11:55:59 self.sock = self._new_conn() 11:55:59 ^^^^^^^^^^^^^^^^ 11:55:59 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 11:55:59 11:55:59 self = 11:55:59 11:55:59 def _new_conn(self) -> socket.socket: 11:55:59 """Establish a socket connection and set nodelay settings on it. 11:55:59 11:55:59 :return: New socket connection. 11:55:59 """ 11:55:59 try: 11:55:59 sock = connection.create_connection( 11:55:59 (self._dns_host, self.port), 11:55:59 self.timeout, 11:55:59 source_address=self.source_address, 11:55:59 socket_options=self.socket_options, 11:55:59 ) 11:55:59 except socket.gaierror as e: 11:55:59 raise NameResolutionError(self.host, self, e) from e 11:55:59 except SocketTimeout as e: 11:55:59 raise ConnectTimeoutError( 11:55:59 self, 11:55:59 f"Connection to {self.host} timed out. (connect timeout={self.timeout})", 11:55:59 ) from e 11:55:59 11:55:59 except OSError as e: 11:55:59 > raise NewConnectionError( 11:55:59 self, f"Failed to establish a new connection: {e}" 11:55:59 ) from e 11:55:59 E urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused 11:55:59 11:55:59 ../.tox/tests_tapi/lib/python3.11/site-packages/urllib3/connection.py:213: NewConnectionError 11:55:59 11:55:59 The above exception was the direct cause of the following exception: 11:55:59 11:55:59 self = 11:55:59 request = , stream = False 11:55:59 timeout = Timeout(connect=30, read=30, total=None), verify = True, cert = None 11:55:59 proxies = OrderedDict() 11:55:59 11:55:59 def send( 11:55:59 self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 11:55:59 ): 11:55:59 """Sends PreparedRequest object. Returns Response object. 11:55:59 11:55:59 :param request: The :class:`PreparedRequest ` being sent. 11:55:59 :param stream: (optional) Whether to stream the request content. 11:55:59 :param timeout: (optional) How long to wait for the server to send 11:55:59 data before giving up, as a float, or a :ref:`(connect timeout, 11:55:59 read timeout) ` tuple. 
11:55:59 :type timeout: float or tuple or urllib3 Timeout object 11:55:59 :param verify: (optional) Either a boolean, in which case it controls whether 11:55:59 we verify the server's TLS certificate, or a string, in which case it 11:55:59 must be a path to a CA bundle to use 11:55:59 :param cert: (optional) Any user-provided SSL certificate to be trusted. 11:55:59 :param proxies: (optional) The proxies dictionary to apply to the request. 11:55:59 :rtype: requests.Response 11:55:59 """ 11:55:59 11:55:59 try: 11:55:59 conn = self.get_connection_with_tls_context( 11:55:59 request, verify, proxies=proxies, cert=cert 11:55:59 ) 11:55:59 except LocationValueError as e: 11:55:59 raise InvalidURL(e, request=request) 11:55:59 11:55:59 self.cert_verify(conn, request.url, verify, cert) 11:55:59 url = self.request_url(request, proxies) 11:55:59 self.add_headers( 11:55:59 request, 11:55:59 stream=stream, 11:55:59 timeout=timeout, 11:55:59 verify=verify, 11:55:59 cert=cert, 11:55:59 proxies=proxies, 11:55:59 ) 11:55:59 11:55:59 chunked = not (request.body is None or "Content-Length" in request.headers) 11:55:59 11:55:59 if isinstance(timeout, tuple): 11:55:59 try: 11:55:59 connect, read = timeout 11:55:59 timeout = TimeoutSauce(connect=connect, read=read) 11:55:59 except ValueError: 11:55:59 raise ValueError( 11:55:59 f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 11:55:59 f"or a single float to set both timeouts to the same value." 11:55:59 ) 11:55:59 elif isinstance(timeout, TimeoutSauce): 11:55:59 pass 11:55:59 else: 11:55:59 timeout = TimeoutSauce(connect=timeout, read=timeout) 11:55:59 11:55:59 try: 11:55:59 > resp = conn.urlopen( 11:55:59 method=request.method, 11:55:59 url=url, 11:55:59 body=request.body, 11:55:59 headers=request.headers, 11:55:59 redirect=False, 11:55:59 assert_same_host=False, 11:55:59 preload_content=False, 11:55:59 decode_content=False, 11:55:59 retries=self.max_retries, 11:55:59 timeout=timeout, 11:55:59 chunked=chunked, 11:55:59 ) 11:55:59 11:55:59 ../.tox/tests_tapi/lib/python3.11/site-packages/requests/adapters.py:667: 11:55:59 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 11:55:59 ../.tox/tests_tapi/lib/python3.11/site-packages/urllib3/connectionpool.py:841: in urlopen 11:55:59 retries = retries.increment( 11:55:59 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 11:55:59 11:55:59 self = Retry(total=0, connect=None, read=False, redirect=None, status=None) 11:55:59 method = 'DELETE' 11:55:59 url = '/rests/data/network-topology:network-topology/topology=topology-netconf/node=ROADM-C1' 11:55:59 response = None 11:55:59 error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused') 11:55:59 _pool = 11:55:59 _stacktrace = 11:55:59 11:55:59 def increment( 11:55:59 self, 11:55:59 method: str | None = None, 11:55:59 url: str | None = None, 11:55:59 response: BaseHTTPResponse | None = None, 11:55:59 error: Exception | None = None, 11:55:59 _pool: ConnectionPool | None = None, 11:55:59 _stacktrace: TracebackType | None = None, 11:55:59 ) -> Self: 11:55:59 """Return a new Retry object with incremented retry counters. 11:55:59 11:55:59 :param response: A response object, or None, if the server did not 11:55:59 return a response. 11:55:59 :type response: :class:`~urllib3.response.BaseHTTPResponse` 11:55:59 :param Exception error: An error encountered during the request, or 11:55:59 None if the response was received successfully. 
11:55:59 11:55:59 :return: A new ``Retry`` object. 11:55:59 """ 11:55:59 if self.total is False and error: 11:55:59 # Disabled, indicate to re-raise the error. 11:55:59 raise reraise(type(error), error, _stacktrace) 11:55:59 11:55:59 total = self.total 11:55:59 if total is not None: 11:55:59 total -= 1 11:55:59 11:55:59 connect = self.connect 11:55:59 read = self.read 11:55:59 redirect = self.redirect 11:55:59 status_count = self.status 11:55:59 other = self.other 11:55:59 cause = "unknown" 11:55:59 status = None 11:55:59 redirect_location = None 11:55:59 11:55:59 if error and self._is_connection_error(error): 11:55:59 # Connect retry? 11:55:59 if connect is False: 11:55:59 raise reraise(type(error), error, _stacktrace) 11:55:59 elif connect is not None: 11:55:59 connect -= 1 11:55:59 11:55:59 elif error and self._is_read_error(error): 11:55:59 # Read retry? 11:55:59 if read is False or method is None or not self._is_method_retryable(method): 11:55:59 raise reraise(type(error), error, _stacktrace) 11:55:59 elif read is not None: 11:55:59 read -= 1 11:55:59 11:55:59 elif error: 11:55:59 # Other retry? 11:55:59 if other is not None: 11:55:59 other -= 1 11:55:59 11:55:59 elif response and response.get_redirect_location(): 11:55:59 # Redirect retry? 11:55:59 if redirect is not None: 11:55:59 redirect -= 1 11:55:59 cause = "too many redirects" 11:55:59 response_redirect_location = response.get_redirect_location() 11:55:59 if response_redirect_location: 11:55:59 redirect_location = response_redirect_location 11:55:59 status = response.status 11:55:59 11:55:59 else: 11:55:59 # Incrementing because of a server error like a 500 in 11:55:59 # status_forcelist and the given method is in the allowed_methods 11:55:59 cause = ResponseError.GENERIC_ERROR 11:55:59 if response and response.status: 11:55:59 if status_count is not None: 11:55:59 status_count -= 1 11:55:59 cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status) 11:55:59 status = response.status 11:55:59 11:55:59 history = self.history + ( 11:55:59 RequestHistory(method, url, error, status, redirect_location), 11:55:59 ) 11:55:59 11:55:59 new_retry = self.new( 11:55:59 total=total, 11:55:59 connect=connect, 11:55:59 read=read, 11:55:59 redirect=redirect, 11:55:59 status=status_count, 11:55:59 other=other, 11:55:59 history=history, 11:55:59 ) 11:55:59 11:55:59 if new_retry.is_exhausted(): 11:55:59 reason = error or ResponseError(cause) 11:55:59 > raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type] 11:55:59 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 11:55:59 E urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=8183): Max retries exceeded with url: /rests/data/network-topology:network-topology/topology=topology-netconf/node=ROADM-C1 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 11:55:59 11:55:59 ../.tox/tests_tapi/lib/python3.11/site-packages/urllib3/util/retry.py:519: MaxRetryError 11:55:59 11:55:59 During handling of the above exception, another exception occurred: 11:55:59 11:55:59 self = 11:55:59 11:55:59 def test_45_disconnect_roadmc(self): 11:55:59 > response = test_utils.unmount_device("ROADM-C1") 11:55:59 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 11:55:59 11:55:59 transportpce_tests/tapi/test01_abstracted_topology.py:657: 11:55:59 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 11:55:59 transportpce_tests/common/test_utils.py:379: in unmount_device 11:55:59 response = 
delete_request(url[RESTCONF_VERSION].format('{}', node)) 11:55:59 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 11:55:59 transportpce_tests/common/test_utils.py:133: in delete_request 11:55:59 return requests.request( 11:55:59 ../.tox/tests_tapi/lib/python3.11/site-packages/requests/api.py:59: in request 11:55:59 return session.request(method=method, url=url, **kwargs) 11:55:59 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 11:55:59 ../.tox/tests_tapi/lib/python3.11/site-packages/requests/sessions.py:589: in request 11:55:59 resp = self.send(prep, **send_kwargs) 11:55:59 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 11:55:59 ../.tox/tests_tapi/lib/python3.11/site-packages/requests/sessions.py:703: in send 11:55:59 r = adapter.send(request, **kwargs) 11:55:59 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 11:55:59 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 11:55:59 11:55:59 self = 11:55:59 request = , stream = False 11:55:59 timeout = Timeout(connect=30, read=30, total=None), verify = True, cert = None 11:55:59 proxies = OrderedDict() 11:55:59 11:55:59 def send( 11:55:59 self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 11:55:59 ): 11:55:59 """Sends PreparedRequest object. Returns Response object. 11:55:59 11:55:59 :param request: The :class:`PreparedRequest ` being sent. 11:55:59 :param stream: (optional) Whether to stream the request content. 11:55:59 :param timeout: (optional) How long to wait for the server to send 11:55:59 data before giving up, as a float, or a :ref:`(connect timeout, 11:55:59 read timeout) ` tuple. 11:55:59 :type timeout: float or tuple or urllib3 Timeout object 11:55:59 :param verify: (optional) Either a boolean, in which case it controls whether 11:55:59 we verify the server's TLS certificate, or a string, in which case it 11:55:59 must be a path to a CA bundle to use 11:55:59 :param cert: (optional) Any user-provided SSL certificate to be trusted. 11:55:59 :param proxies: (optional) The proxies dictionary to apply to the request. 11:55:59 :rtype: requests.Response 11:55:59 """ 11:55:59 11:55:59 try: 11:55:59 conn = self.get_connection_with_tls_context( 11:55:59 request, verify, proxies=proxies, cert=cert 11:55:59 ) 11:55:59 except LocationValueError as e: 11:55:59 raise InvalidURL(e, request=request) 11:55:59 11:55:59 self.cert_verify(conn, request.url, verify, cert) 11:55:59 url = self.request_url(request, proxies) 11:55:59 self.add_headers( 11:55:59 request, 11:55:59 stream=stream, 11:55:59 timeout=timeout, 11:55:59 verify=verify, 11:55:59 cert=cert, 11:55:59 proxies=proxies, 11:55:59 ) 11:55:59 11:55:59 chunked = not (request.body is None or "Content-Length" in request.headers) 11:55:59 11:55:59 if isinstance(timeout, tuple): 11:55:59 try: 11:55:59 connect, read = timeout 11:55:59 timeout = TimeoutSauce(connect=connect, read=read) 11:55:59 except ValueError: 11:55:59 raise ValueError( 11:55:59 f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 11:55:59 f"or a single float to set both timeouts to the same value." 
11:55:59 ) 11:55:59 elif isinstance(timeout, TimeoutSauce): 11:55:59 pass 11:55:59 else: 11:55:59 timeout = TimeoutSauce(connect=timeout, read=timeout) 11:55:59 11:55:59 try: 11:55:59 resp = conn.urlopen( 11:55:59 method=request.method, 11:55:59 url=url, 11:55:59 body=request.body, 11:55:59 headers=request.headers, 11:55:59 redirect=False, 11:55:59 assert_same_host=False, 11:55:59 preload_content=False, 11:55:59 decode_content=False, 11:55:59 retries=self.max_retries, 11:55:59 timeout=timeout, 11:55:59 chunked=chunked, 11:55:59 ) 11:55:59 11:55:59 except (ProtocolError, OSError) as err: 11:55:59 raise ConnectionError(err, request=request) 11:55:59 11:55:59 except MaxRetryError as e: 11:55:59 if isinstance(e.reason, ConnectTimeoutError): 11:55:59 # TODO: Remove this in 3.0.0: see #2811 11:55:59 if not isinstance(e.reason, NewConnectionError): 11:55:59 raise ConnectTimeout(e, request=request) 11:55:59 11:55:59 if isinstance(e.reason, ResponseError): 11:55:59 raise RetryError(e, request=request) 11:55:59 11:55:59 if isinstance(e.reason, _ProxyError): 11:55:59 raise ProxyError(e, request=request) 11:55:59 11:55:59 if isinstance(e.reason, _SSLError): 11:55:59 # This branch is for urllib3 v1.22 and later. 11:55:59 raise SSLError(e, request=request) 11:55:59 11:55:59 > raise ConnectionError(e, request=request) 11:55:59 E requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=8183): Max retries exceeded with url: /rests/data/network-topology:network-topology/topology=topology-netconf/node=ROADM-C1 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 11:55:59 11:55:59 ../.tox/tests_tapi/lib/python3.11/site-packages/requests/adapters.py:700: ConnectionError 11:55:59 ----------------------------- Captured stdout call ----------------------------- 11:55:59 execution of test_45_disconnect_roadmc 11:55:59 ________________ TransportTapitesting.test_46_check_tapi_topos _________________ 11:55:59 11:55:59 self = 11:55:59 11:55:59 def _new_conn(self) -> socket.socket: 11:55:59 """Establish a socket connection and set nodelay settings on it. 11:55:59 11:55:59 :return: New socket connection. 11:55:59 """ 11:55:59 try: 11:55:59 > sock = connection.create_connection( 11:55:59 (self._dns_host, self.port), 11:55:59 self.timeout, 11:55:59 source_address=self.source_address, 11:55:59 socket_options=self.socket_options, 11:55:59 ) 11:55:59 11:55:59 ../.tox/tests_tapi/lib/python3.11/site-packages/urllib3/connection.py:198: 11:55:59 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 11:55:59 ../.tox/tests_tapi/lib/python3.11/site-packages/urllib3/util/connection.py:85: in create_connection 11:55:59 raise err 11:55:59 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 11:55:59 11:55:59 address = ('localhost', 8183), timeout = 30, source_address = None 11:55:59 socket_options = [(6, 1, 1)] 11:55:59 11:55:59 def create_connection( 11:55:59 address: tuple[str, int], 11:55:59 timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 11:55:59 source_address: tuple[str, int] | None = None, 11:55:59 socket_options: _TYPE_SOCKET_OPTIONS | None = None, 11:55:59 ) -> socket.socket: 11:55:59 """Connect to *address* and return the socket object. 11:55:59 11:55:59 Convenience function. Connect to *address* (a 2-tuple ``(host, 11:55:59 port)``) and return the socket object. Passing the optional 11:55:59 *timeout* parameter will set the timeout on the socket instance 11:55:59 before attempting to connect. 
If no *timeout* is supplied, the 11:55:59 global default timeout setting returned by :func:`socket.getdefaulttimeout` 11:55:59 is used. If *source_address* is set it must be a tuple of (host, port) 11:55:59 for the socket to bind as a source address before making the connection. 11:55:59 An host of '' or port 0 tells the OS to use the default. 11:55:59 """ 11:55:59 11:55:59 host, port = address 11:55:59 if host.startswith("["): 11:55:59 host = host.strip("[]") 11:55:59 err = None 11:55:59 11:55:59 # Using the value from allowed_gai_family() in the context of getaddrinfo lets 11:55:59 # us select whether to work with IPv4 DNS records, IPv6 records, or both. 11:55:59 # The original create_connection function always returns all records. 11:55:59 family = allowed_gai_family() 11:55:59 11:55:59 try: 11:55:59 host.encode("idna") 11:55:59 except UnicodeError: 11:55:59 raise LocationParseError(f"'{host}', label empty or too long") from None 11:55:59 11:55:59 for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM): 11:55:59 af, socktype, proto, canonname, sa = res 11:55:59 sock = None 11:55:59 try: 11:55:59 sock = socket.socket(af, socktype, proto) 11:55:59 11:55:59 # If provided, set socket level options before connecting. 11:55:59 _set_socket_options(sock, socket_options) 11:55:59 11:55:59 if timeout is not _DEFAULT_TIMEOUT: 11:55:59 sock.settimeout(timeout) 11:55:59 if source_address: 11:55:59 sock.bind(source_address) 11:55:59 > sock.connect(sa) 11:55:59 E ConnectionRefusedError: [Errno 111] Connection refused 11:55:59 11:55:59 ../.tox/tests_tapi/lib/python3.11/site-packages/urllib3/util/connection.py:73: ConnectionRefusedError 11:55:59 11:55:59 The above exception was the direct cause of the following exception: 11:55:59 11:55:59 self = 11:55:59 method = 'POST', url = '/rests/operations/tapi-topology:get-topology-details' 11:55:59 body = '{"input": {"topology-id": "cf51c729-3699-308a-a7d0-594c6a62ebbb"}}' 11:55:59 headers = {'User-Agent': 'python-requests/2.32.4', 'Accept-Encoding': 'gzip, deflate', 'Accept': 'application/json', 'Connection': 'keep-alive', 'Content-Type': 'application/json', 'Content-Length': '66', 'Authorization': 'Basic YWRtaW46YWRtaW4='} 11:55:59 retries = Retry(total=0, connect=None, read=False, redirect=None, status=None) 11:55:59 redirect = False, assert_same_host = False 11:55:59 timeout = Timeout(connect=30, read=30, total=None), pool_timeout = None 11:55:59 release_conn = False, chunked = False, body_pos = None, preload_content = False 11:55:59 decode_content = False, response_kw = {} 11:55:59 parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/rests/operations/tapi-topology:get-topology-details', query=None, fragment=None) 11:55:59 destination_scheme = None, conn = None, release_this_conn = True 11:55:59 http_tunnel_required = False, err = None, clean_exit = False 11:55:59 11:55:59 def urlopen( # type: ignore[override] 11:55:59 self, 11:55:59 method: str, 11:55:59 url: str, 11:55:59 body: _TYPE_BODY | None = None, 11:55:59 headers: typing.Mapping[str, str] | None = None, 11:55:59 retries: Retry | bool | int | None = None, 11:55:59 redirect: bool = True, 11:55:59 assert_same_host: bool = True, 11:55:59 timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 11:55:59 pool_timeout: int | None = None, 11:55:59 release_conn: bool | None = None, 11:55:59 chunked: bool = False, 11:55:59 body_pos: _TYPE_BODY_POSITION | None = None, 11:55:59 preload_content: bool = True, 11:55:59 decode_content: bool = True, 11:55:59 **response_kw: typing.Any, 11:55:59 ) -> 
BaseHTTPResponse: 11:55:59 """ 11:55:59 Get a connection from the pool and perform an HTTP request. This is the 11:55:59 lowest level call for making a request, so you'll need to specify all 11:55:59 the raw details. 11:55:59 11:55:59 .. note:: 11:55:59 11:55:59 More commonly, it's appropriate to use a convenience method 11:55:59 such as :meth:`request`. 11:55:59 11:55:59 .. note:: 11:55:59 11:55:59 `release_conn` will only behave as expected if 11:55:59 `preload_content=False` because we want to make 11:55:59 `preload_content=False` the default behaviour someday soon without 11:55:59 breaking backwards compatibility. 11:55:59 11:55:59 :param method: 11:55:59 HTTP request method (such as GET, POST, PUT, etc.) 11:55:59 11:55:59 :param url: 11:55:59 The URL to perform the request on. 11:55:59 11:55:59 :param body: 11:55:59 Data to send in the request body, either :class:`str`, :class:`bytes`, 11:55:59 an iterable of :class:`str`/:class:`bytes`, or a file-like object. 11:55:59 11:55:59 :param headers: 11:55:59 Dictionary of custom headers to send, such as User-Agent, 11:55:59 If-None-Match, etc. If None, pool headers are used. If provided, 11:55:59 these headers completely replace any pool-specific headers. 11:55:59 11:55:59 :param retries: 11:55:59 Configure the number of retries to allow before raising a 11:55:59 :class:`~urllib3.exceptions.MaxRetryError` exception. 11:55:59 11:55:59 If ``None`` (default) will retry 3 times, see ``Retry.DEFAULT``. Pass a 11:55:59 :class:`~urllib3.util.retry.Retry` object for fine-grained control 11:55:59 over different types of retries. 11:55:59 Pass an integer number to retry connection errors that many times, 11:55:59 but no other types of errors. Pass zero to never retry. 11:55:59 11:55:59 If ``False``, then retries are disabled and any exception is raised 11:55:59 immediately. Also, instead of raising a MaxRetryError on redirects, 11:55:59 the redirect response will be returned. 11:55:59 11:55:59 :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. 11:55:59 11:55:59 :param redirect: 11:55:59 If True, automatically handle redirects (status codes 301, 302, 11:55:59 303, 307, 308). Each redirect counts as a retry. Disabling retries 11:55:59 will disable redirect, too. 11:55:59 11:55:59 :param assert_same_host: 11:55:59 If ``True``, will make sure that the host of the pool requests is 11:55:59 consistent else will raise HostChangedError. When ``False``, you can 11:55:59 use the pool on an HTTP proxy and request foreign hosts. 11:55:59 11:55:59 :param timeout: 11:55:59 If specified, overrides the default timeout for this one 11:55:59 request. It may be a float (in seconds) or an instance of 11:55:59 :class:`urllib3.util.Timeout`. 11:55:59 11:55:59 :param pool_timeout: 11:55:59 If set and the pool is set to block=True, then this method will 11:55:59 block for ``pool_timeout`` seconds and raise EmptyPoolError if no 11:55:59 connection is available within the time period. 11:55:59 11:55:59 :param bool preload_content: 11:55:59 If True, the response's body will be preloaded into memory. 11:55:59 11:55:59 :param bool decode_content: 11:55:59 If True, will attempt to decode the body based on the 11:55:59 'content-encoding' header. 11:55:59 11:55:59 :param release_conn: 11:55:59 If False, then the urlopen call will not release the connection 11:55:59 back into the pool once a response is received (but will release if 11:55:59 you read the entire contents of the response such as when 11:55:59 `preload_content=True`). 
This is useful if you're not preloading 11:55:59 the response's content immediately. You will need to call 11:55:59 ``r.release_conn()`` on the response ``r`` to return the connection 11:55:59 back into the pool. If None, it takes the value of ``preload_content`` 11:55:59 which defaults to ``True``. 11:55:59 11:55:59 :param bool chunked: 11:55:59 If True, urllib3 will send the body using chunked transfer 11:55:59 encoding. Otherwise, urllib3 will send the body using the standard 11:55:59 content-length form. Defaults to False. 11:55:59 11:55:59 :param int body_pos: 11:55:59 Position to seek to in file-like body in the event of a retry or 11:55:59 redirect. Typically this won't need to be set because urllib3 will 11:55:59 auto-populate the value when needed. 11:55:59 """ 11:55:59 parsed_url = parse_url(url) 11:55:59 destination_scheme = parsed_url.scheme 11:55:59 11:55:59 if headers is None: 11:55:59 headers = self.headers 11:55:59 11:55:59 if not isinstance(retries, Retry): 11:55:59 retries = Retry.from_int(retries, redirect=redirect, default=self.retries) 11:55:59 11:55:59 if release_conn is None: 11:55:59 release_conn = preload_content 11:55:59 11:55:59 # Check host 11:55:59 if assert_same_host and not self.is_same_host(url): 11:55:59 raise HostChangedError(self, url, retries) 11:55:59 11:55:59 # Ensure that the URL we're connecting to is properly encoded 11:55:59 if url.startswith("/"): 11:55:59 url = to_str(_encode_target(url)) 11:55:59 else: 11:55:59 url = to_str(parsed_url.url) 11:55:59 11:55:59 conn = None 11:55:59 11:55:59 # Track whether `conn` needs to be released before 11:55:59 # returning/raising/recursing. Update this variable if necessary, and 11:55:59 # leave `release_conn` constant throughout the function. That way, if 11:55:59 # the function recurses, the original value of `release_conn` will be 11:55:59 # passed down into the recursive call, and its value will be respected. 11:55:59 # 11:55:59 # See issue #651 [1] for details. 11:55:59 # 11:55:59 # [1] 11:55:59 release_this_conn = release_conn 11:55:59 11:55:59 http_tunnel_required = connection_requires_http_tunnel( 11:55:59 self.proxy, self.proxy_config, destination_scheme 11:55:59 ) 11:55:59 11:55:59 # Merge the proxy headers. Only done when not using HTTP CONNECT. We 11:55:59 # have to copy the headers dict so we can safely change it without those 11:55:59 # changes being reflected in anyone else's copy. 11:55:59 if not http_tunnel_required: 11:55:59 headers = headers.copy() # type: ignore[attr-defined] 11:55:59 headers.update(self.proxy_headers) # type: ignore[union-attr] 11:55:59 11:55:59 # Must keep the exception bound to a separate variable or else Python 3 11:55:59 # complains about UnboundLocalError. 11:55:59 err = None 11:55:59 11:55:59 # Keep track of whether we cleanly exited the except block. This 11:55:59 # ensures we do proper cleanup in finally. 11:55:59 clean_exit = False 11:55:59 11:55:59 # Rewind body position, if needed. Record current position 11:55:59 # for future rewinds in the event of a redirect/retry. 11:55:59 body_pos = set_file_position(body, body_pos) 11:55:59 11:55:59 try: 11:55:59 # Request a connection from the queue. 11:55:59 timeout_obj = self._get_timeout(timeout) 11:55:59 conn = self._get_conn(timeout=pool_timeout) 11:55:59 11:55:59 conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] 11:55:59 11:55:59 # Is this a closed/new connection that requires CONNECT tunnelling? 
11:55:59 if self.proxy is not None and http_tunnel_required and conn.is_closed: 11:55:59 try: 11:55:59 self._prepare_proxy(conn) 11:55:59 except (BaseSSLError, OSError, SocketTimeout) as e: 11:55:59 self._raise_timeout( 11:55:59 err=e, url=self.proxy.url, timeout_value=conn.timeout 11:55:59 ) 11:55:59 raise 11:55:59 11:55:59 # If we're going to release the connection in ``finally:``, then 11:55:59 # the response doesn't need to know about the connection. Otherwise 11:55:59 # it will also try to release it and we'll have a double-release 11:55:59 # mess. 11:55:59 response_conn = conn if not release_conn else None 11:55:59 11:55:59 # Make the request on the HTTPConnection object 11:55:59 > response = self._make_request( 11:55:59 conn, 11:55:59 method, 11:55:59 url, 11:55:59 timeout=timeout_obj, 11:55:59 body=body, 11:55:59 headers=headers, 11:55:59 chunked=chunked, 11:55:59 retries=retries, 11:55:59 response_conn=response_conn, 11:55:59 preload_content=preload_content, 11:55:59 decode_content=decode_content, 11:55:59 **response_kw, 11:55:59 ) 11:55:59 11:55:59 ../.tox/tests_tapi/lib/python3.11/site-packages/urllib3/connectionpool.py:787: 11:55:59 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 11:55:59 ../.tox/tests_tapi/lib/python3.11/site-packages/urllib3/connectionpool.py:493: in _make_request 11:55:59 conn.request( 11:55:59 ../.tox/tests_tapi/lib/python3.11/site-packages/urllib3/connection.py:494: in request 11:55:59 self.endheaders() 11:55:59 /opt/pyenv/versions/3.11.10/lib/python3.11/http/client.py:1298: in endheaders 11:55:59 self._send_output(message_body, encode_chunked=encode_chunked) 11:55:59 /opt/pyenv/versions/3.11.10/lib/python3.11/http/client.py:1058: in _send_output 11:55:59 self.send(msg) 11:55:59 /opt/pyenv/versions/3.11.10/lib/python3.11/http/client.py:996: in send 11:55:59 self.connect() 11:55:59 ../.tox/tests_tapi/lib/python3.11/site-packages/urllib3/connection.py:325: in connect 11:55:59 self.sock = self._new_conn() 11:55:59 ^^^^^^^^^^^^^^^^ 11:55:59 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 11:55:59 11:55:59 self = 11:55:59 11:55:59 def _new_conn(self) -> socket.socket: 11:55:59 """Establish a socket connection and set nodelay settings on it. 11:55:59 11:55:59 :return: New socket connection. 11:55:59 """ 11:55:59 try: 11:55:59 sock = connection.create_connection( 11:55:59 (self._dns_host, self.port), 11:55:59 self.timeout, 11:55:59 source_address=self.source_address, 11:55:59 socket_options=self.socket_options, 11:55:59 ) 11:55:59 except socket.gaierror as e: 11:55:59 raise NameResolutionError(self.host, self, e) from e 11:55:59 except SocketTimeout as e: 11:55:59 raise ConnectTimeoutError( 11:55:59 self, 11:55:59 f"Connection to {self.host} timed out. 
(connect timeout={self.timeout})", 11:55:59 ) from e 11:55:59 11:55:59 except OSError as e: 11:55:59 > raise NewConnectionError( 11:55:59 self, f"Failed to establish a new connection: {e}" 11:55:59 ) from e 11:55:59 E urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused 11:55:59 11:55:59 ../.tox/tests_tapi/lib/python3.11/site-packages/urllib3/connection.py:213: NewConnectionError 11:55:59 11:55:59 The above exception was the direct cause of the following exception: 11:55:59 11:55:59 self = 11:55:59 request = , stream = False 11:55:59 timeout = Timeout(connect=30, read=30, total=None), verify = True, cert = None 11:55:59 proxies = OrderedDict() 11:55:59 11:55:59 def send( 11:55:59 self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 11:55:59 ): 11:55:59 """Sends PreparedRequest object. Returns Response object. 11:55:59 11:55:59 :param request: The :class:`PreparedRequest ` being sent. 11:55:59 :param stream: (optional) Whether to stream the request content. 11:55:59 :param timeout: (optional) How long to wait for the server to send 11:55:59 data before giving up, as a float, or a :ref:`(connect timeout, 11:55:59 read timeout) ` tuple. 11:55:59 :type timeout: float or tuple or urllib3 Timeout object 11:55:59 :param verify: (optional) Either a boolean, in which case it controls whether 11:55:59 we verify the server's TLS certificate, or a string, in which case it 11:55:59 must be a path to a CA bundle to use 11:55:59 :param cert: (optional) Any user-provided SSL certificate to be trusted. 11:55:59 :param proxies: (optional) The proxies dictionary to apply to the request. 11:55:59 :rtype: requests.Response 11:55:59 """ 11:55:59 11:55:59 try: 11:55:59 conn = self.get_connection_with_tls_context( 11:55:59 request, verify, proxies=proxies, cert=cert 11:55:59 ) 11:55:59 except LocationValueError as e: 11:55:59 raise InvalidURL(e, request=request) 11:55:59 11:55:59 self.cert_verify(conn, request.url, verify, cert) 11:55:59 url = self.request_url(request, proxies) 11:55:59 self.add_headers( 11:55:59 request, 11:55:59 stream=stream, 11:55:59 timeout=timeout, 11:55:59 verify=verify, 11:55:59 cert=cert, 11:55:59 proxies=proxies, 11:55:59 ) 11:55:59 11:55:59 chunked = not (request.body is None or "Content-Length" in request.headers) 11:55:59 11:55:59 if isinstance(timeout, tuple): 11:55:59 try: 11:55:59 connect, read = timeout 11:55:59 timeout = TimeoutSauce(connect=connect, read=read) 11:55:59 except ValueError: 11:55:59 raise ValueError( 11:55:59 f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 11:55:59 f"or a single float to set both timeouts to the same value." 
11:55:59 ) 11:55:59 elif isinstance(timeout, TimeoutSauce): 11:55:59 pass 11:55:59 else: 11:55:59 timeout = TimeoutSauce(connect=timeout, read=timeout) 11:55:59 11:55:59 try: 11:55:59 > resp = conn.urlopen( 11:55:59 method=request.method, 11:55:59 url=url, 11:55:59 body=request.body, 11:55:59 headers=request.headers, 11:55:59 redirect=False, 11:55:59 assert_same_host=False, 11:55:59 preload_content=False, 11:55:59 decode_content=False, 11:55:59 retries=self.max_retries, 11:55:59 timeout=timeout, 11:55:59 chunked=chunked, 11:55:59 ) 11:55:59 11:55:59 ../.tox/tests_tapi/lib/python3.11/site-packages/requests/adapters.py:667: 11:55:59 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 11:55:59 ../.tox/tests_tapi/lib/python3.11/site-packages/urllib3/connectionpool.py:841: in urlopen 11:55:59 retries = retries.increment( 11:55:59 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 11:55:59 11:55:59 self = Retry(total=0, connect=None, read=False, redirect=None, status=None) 11:55:59 method = 'POST', url = '/rests/operations/tapi-topology:get-topology-details' 11:55:59 response = None 11:55:59 error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused') 11:55:59 _pool = 11:55:59 _stacktrace = 11:55:59 11:55:59 def increment( 11:55:59 self, 11:55:59 method: str | None = None, 11:55:59 url: str | None = None, 11:55:59 response: BaseHTTPResponse | None = None, 11:55:59 error: Exception | None = None, 11:55:59 _pool: ConnectionPool | None = None, 11:55:59 _stacktrace: TracebackType | None = None, 11:55:59 ) -> Self: 11:55:59 """Return a new Retry object with incremented retry counters. 11:55:59 11:55:59 :param response: A response object, or None, if the server did not 11:55:59 return a response. 11:55:59 :type response: :class:`~urllib3.response.BaseHTTPResponse` 11:55:59 :param Exception error: An error encountered during the request, or 11:55:59 None if the response was received successfully. 11:55:59 11:55:59 :return: A new ``Retry`` object. 11:55:59 """ 11:55:59 if self.total is False and error: 11:55:59 # Disabled, indicate to re-raise the error. 11:55:59 raise reraise(type(error), error, _stacktrace) 11:55:59 11:55:59 total = self.total 11:55:59 if total is not None: 11:55:59 total -= 1 11:55:59 11:55:59 connect = self.connect 11:55:59 read = self.read 11:55:59 redirect = self.redirect 11:55:59 status_count = self.status 11:55:59 other = self.other 11:55:59 cause = "unknown" 11:55:59 status = None 11:55:59 redirect_location = None 11:55:59 11:55:59 if error and self._is_connection_error(error): 11:55:59 # Connect retry? 11:55:59 if connect is False: 11:55:59 raise reraise(type(error), error, _stacktrace) 11:55:59 elif connect is not None: 11:55:59 connect -= 1 11:55:59 11:55:59 elif error and self._is_read_error(error): 11:55:59 # Read retry? 11:55:59 if read is False or method is None or not self._is_method_retryable(method): 11:55:59 raise reraise(type(error), error, _stacktrace) 11:55:59 elif read is not None: 11:55:59 read -= 1 11:55:59 11:55:59 elif error: 11:55:59 # Other retry? 11:55:59 if other is not None: 11:55:59 other -= 1 11:55:59 11:55:59 elif response and response.get_redirect_location(): 11:55:59 # Redirect retry? 
11:55:59 if redirect is not None: 11:55:59 redirect -= 1 11:55:59 cause = "too many redirects" 11:55:59 response_redirect_location = response.get_redirect_location() 11:55:59 if response_redirect_location: 11:55:59 redirect_location = response_redirect_location 11:55:59 status = response.status 11:55:59 11:55:59 else: 11:55:59 # Incrementing because of a server error like a 500 in 11:55:59 # status_forcelist and the given method is in the allowed_methods 11:55:59 cause = ResponseError.GENERIC_ERROR 11:55:59 if response and response.status: 11:55:59 if status_count is not None: 11:55:59 status_count -= 1 11:55:59 cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status) 11:55:59 status = response.status 11:55:59 11:55:59 history = self.history + ( 11:55:59 RequestHistory(method, url, error, status, redirect_location), 11:55:59 ) 11:55:59 11:55:59 new_retry = self.new( 11:55:59 total=total, 11:55:59 connect=connect, 11:55:59 read=read, 11:55:59 redirect=redirect, 11:55:59 status=status_count, 11:55:59 other=other, 11:55:59 history=history, 11:55:59 ) 11:55:59 11:55:59 if new_retry.is_exhausted(): 11:55:59 reason = error or ResponseError(cause) 11:55:59 > raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type] 11:55:59 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 11:55:59 E urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=8183): Max retries exceeded with url: /rests/operations/tapi-topology:get-topology-details (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 11:55:59 11:55:59 ../.tox/tests_tapi/lib/python3.11/site-packages/urllib3/util/retry.py:519: MaxRetryError 11:55:59 11:55:59 During handling of the above exception, another exception occurred: 11:55:59 11:55:59 self = 11:55:59 11:55:59 def test_46_check_tapi_topos(self): 11:55:59 > self.test_01_get_tapi_topology_T100G() 11:55:59 11:55:59 transportpce_tests/tapi/test01_abstracted_topology.py:661: 11:55:59 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 11:55:59 transportpce_tests/tapi/test01_abstracted_topology.py:182: in test_01_get_tapi_topology_T100G 11:55:59 response = test_utils.transportpce_api_rpc_request( 11:55:59 transportpce_tests/common/test_utils.py:729: in transportpce_api_rpc_request 11:55:59 response = post_request(url, data) 11:55:59 ^^^^^^^^^^^^^^^^^^^^^^^ 11:55:59 transportpce_tests/common/test_utils.py:142: in post_request 11:55:59 return requests.request( 11:55:59 ../.tox/tests_tapi/lib/python3.11/site-packages/requests/api.py:59: in request 11:55:59 return session.request(method=method, url=url, **kwargs) 11:55:59 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 11:55:59 ../.tox/tests_tapi/lib/python3.11/site-packages/requests/sessions.py:589: in request 11:55:59 resp = self.send(prep, **send_kwargs) 11:55:59 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 11:55:59 ../.tox/tests_tapi/lib/python3.11/site-packages/requests/sessions.py:703: in send 11:55:59 r = adapter.send(request, **kwargs) 11:55:59 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 11:55:59 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 11:55:59 11:55:59 self = 11:55:59 request = , stream = False 11:55:59 timeout = Timeout(connect=30, read=30, total=None), verify = True, cert = None 11:55:59 proxies = OrderedDict() 11:55:59 11:55:59 def send( 11:55:59 self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 11:55:59 ): 11:55:59 """Sends PreparedRequest object. 
Returns Response object. 11:55:59 11:55:59 :param request: The :class:`PreparedRequest ` being sent. 11:55:59 :param stream: (optional) Whether to stream the request content. 11:55:59 :param timeout: (optional) How long to wait for the server to send 11:55:59 data before giving up, as a float, or a :ref:`(connect timeout, 11:55:59 read timeout) ` tuple. 11:55:59 :type timeout: float or tuple or urllib3 Timeout object 11:55:59 :param verify: (optional) Either a boolean, in which case it controls whether 11:55:59 we verify the server's TLS certificate, or a string, in which case it 11:55:59 must be a path to a CA bundle to use 11:55:59 :param cert: (optional) Any user-provided SSL certificate to be trusted. 11:55:59 :param proxies: (optional) The proxies dictionary to apply to the request. 11:55:59 :rtype: requests.Response 11:55:59 """ 11:55:59 11:55:59 try: 11:55:59 conn = self.get_connection_with_tls_context( 11:55:59 request, verify, proxies=proxies, cert=cert 11:55:59 ) 11:55:59 except LocationValueError as e: 11:55:59 raise InvalidURL(e, request=request) 11:55:59 11:55:59 self.cert_verify(conn, request.url, verify, cert) 11:55:59 url = self.request_url(request, proxies) 11:55:59 self.add_headers( 11:55:59 request, 11:55:59 stream=stream, 11:55:59 timeout=timeout, 11:55:59 verify=verify, 11:55:59 cert=cert, 11:55:59 proxies=proxies, 11:55:59 ) 11:55:59 11:55:59 chunked = not (request.body is None or "Content-Length" in request.headers) 11:55:59 11:55:59 if isinstance(timeout, tuple): 11:55:59 try: 11:55:59 connect, read = timeout 11:55:59 timeout = TimeoutSauce(connect=connect, read=read) 11:55:59 except ValueError: 11:55:59 raise ValueError( 11:55:59 f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 11:55:59 f"or a single float to set both timeouts to the same value." 11:55:59 ) 11:55:59 elif isinstance(timeout, TimeoutSauce): 11:55:59 pass 11:55:59 else: 11:55:59 timeout = TimeoutSauce(connect=timeout, read=timeout) 11:55:59 11:55:59 try: 11:55:59 resp = conn.urlopen( 11:55:59 method=request.method, 11:55:59 url=url, 11:55:59 body=request.body, 11:55:59 headers=request.headers, 11:55:59 redirect=False, 11:55:59 assert_same_host=False, 11:55:59 preload_content=False, 11:55:59 decode_content=False, 11:55:59 retries=self.max_retries, 11:55:59 timeout=timeout, 11:55:59 chunked=chunked, 11:55:59 ) 11:55:59 11:55:59 except (ProtocolError, OSError) as err: 11:55:59 raise ConnectionError(err, request=request) 11:55:59 11:55:59 except MaxRetryError as e: 11:55:59 if isinstance(e.reason, ConnectTimeoutError): 11:55:59 # TODO: Remove this in 3.0.0: see #2811 11:55:59 if not isinstance(e.reason, NewConnectionError): 11:55:59 raise ConnectTimeout(e, request=request) 11:55:59 11:55:59 if isinstance(e.reason, ResponseError): 11:55:59 raise RetryError(e, request=request) 11:55:59 11:55:59 if isinstance(e.reason, _ProxyError): 11:55:59 raise ProxyError(e, request=request) 11:55:59 11:55:59 if isinstance(e.reason, _SSLError): 11:55:59 # This branch is for urllib3 v1.22 and later. 
11:55:59 raise SSLError(e, request=request) 11:55:59 11:55:59 > raise ConnectionError(e, request=request) 11:55:59 E requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=8183): Max retries exceeded with url: /rests/operations/tapi-topology:get-topology-details (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 11:55:59 11:55:59 ../.tox/tests_tapi/lib/python3.11/site-packages/requests/adapters.py:700: ConnectionError 11:55:59 ----------------------------- Captured stdout call ----------------------------- 11:55:59 execution of test_46_check_tapi_topos 11:55:59 ________________ TransportTapitesting.test_47_disconnect_xpdra _________________ 11:55:59 11:55:59 self = 11:55:59 11:55:59 def _new_conn(self) -> socket.socket: 11:55:59 """Establish a socket connection and set nodelay settings on it. 11:55:59 11:55:59 :return: New socket connection. 11:55:59 """ 11:55:59 try: 11:55:59 > sock = connection.create_connection( 11:55:59 (self._dns_host, self.port), 11:55:59 self.timeout, 11:55:59 source_address=self.source_address, 11:55:59 socket_options=self.socket_options, 11:55:59 ) 11:55:59 11:55:59 ../.tox/tests_tapi/lib/python3.11/site-packages/urllib3/connection.py:198: 11:55:59 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 11:55:59 ../.tox/tests_tapi/lib/python3.11/site-packages/urllib3/util/connection.py:85: in create_connection 11:55:59 raise err 11:55:59 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 11:55:59 11:55:59 address = ('localhost', 8183), timeout = 30, source_address = None 11:55:59 socket_options = [(6, 1, 1)] 11:55:59 11:55:59 def create_connection( 11:55:59 address: tuple[str, int], 11:55:59 timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 11:55:59 source_address: tuple[str, int] | None = None, 11:55:59 socket_options: _TYPE_SOCKET_OPTIONS | None = None, 11:55:59 ) -> socket.socket: 11:55:59 """Connect to *address* and return the socket object. 11:55:59 11:55:59 Convenience function. Connect to *address* (a 2-tuple ``(host, 11:55:59 port)``) and return the socket object. Passing the optional 11:55:59 *timeout* parameter will set the timeout on the socket instance 11:55:59 before attempting to connect. If no *timeout* is supplied, the 11:55:59 global default timeout setting returned by :func:`socket.getdefaulttimeout` 11:55:59 is used. If *source_address* is set it must be a tuple of (host, port) 11:55:59 for the socket to bind as a source address before making the connection. 11:55:59 An host of '' or port 0 tells the OS to use the default. 11:55:59 """ 11:55:59 11:55:59 host, port = address 11:55:59 if host.startswith("["): 11:55:59 host = host.strip("[]") 11:55:59 err = None 11:55:59 11:55:59 # Using the value from allowed_gai_family() in the context of getaddrinfo lets 11:55:59 # us select whether to work with IPv4 DNS records, IPv6 records, or both. 11:55:59 # The original create_connection function always returns all records. 11:55:59 family = allowed_gai_family() 11:55:59 11:55:59 try: 11:55:59 host.encode("idna") 11:55:59 except UnicodeError: 11:55:59 raise LocationParseError(f"'{host}', label empty or too long") from None 11:55:59 11:55:59 for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM): 11:55:59 af, socktype, proto, canonname, sa = res 11:55:59 sock = None 11:55:59 try: 11:55:59 sock = socket.socket(af, socktype, proto) 11:55:59 11:55:59 # If provided, set socket level options before connecting. 
11:55:59 _set_socket_options(sock, socket_options) 11:55:59 11:55:59 if timeout is not _DEFAULT_TIMEOUT: 11:55:59 sock.settimeout(timeout) 11:55:59 if source_address: 11:55:59 sock.bind(source_address) 11:55:59 > sock.connect(sa) 11:55:59 E ConnectionRefusedError: [Errno 111] Connection refused 11:55:59 11:55:59 ../.tox/tests_tapi/lib/python3.11/site-packages/urllib3/util/connection.py:73: ConnectionRefusedError 11:55:59 11:55:59 The above exception was the direct cause of the following exception: 11:55:59 11:55:59 self = 11:55:59 method = 'DELETE' 11:55:59 url = '/rests/data/network-topology:network-topology/topology=topology-netconf/node=XPDR-A1' 11:55:59 body = None 11:55:59 headers = {'User-Agent': 'python-requests/2.32.4', 'Accept-Encoding': 'gzip, deflate', 'Accept': 'application/json', 'Connection': 'keep-alive', 'Content-Type': 'application/json', 'Content-Length': '0', 'Authorization': 'Basic YWRtaW46YWRtaW4='} 11:55:59 retries = Retry(total=0, connect=None, read=False, redirect=None, status=None) 11:55:59 redirect = False, assert_same_host = False 11:55:59 timeout = Timeout(connect=30, read=30, total=None), pool_timeout = None 11:55:59 release_conn = False, chunked = False, body_pos = None, preload_content = False 11:55:59 decode_content = False, response_kw = {} 11:55:59 parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/rests/data/network-topology:network-topology/topology=topology-netconf/node=XPDR-A1', query=None, fragment=None) 11:55:59 destination_scheme = None, conn = None, release_this_conn = True 11:55:59 http_tunnel_required = False, err = None, clean_exit = False 11:55:59 11:55:59 def urlopen( # type: ignore[override] 11:55:59 self, 11:55:59 method: str, 11:55:59 url: str, 11:55:59 body: _TYPE_BODY | None = None, 11:55:59 headers: typing.Mapping[str, str] | None = None, 11:55:59 retries: Retry | bool | int | None = None, 11:55:59 redirect: bool = True, 11:55:59 assert_same_host: bool = True, 11:55:59 timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 11:55:59 pool_timeout: int | None = None, 11:55:59 release_conn: bool | None = None, 11:55:59 chunked: bool = False, 11:55:59 body_pos: _TYPE_BODY_POSITION | None = None, 11:55:59 preload_content: bool = True, 11:55:59 decode_content: bool = True, 11:55:59 **response_kw: typing.Any, 11:55:59 ) -> BaseHTTPResponse: 11:55:59 """ 11:55:59 Get a connection from the pool and perform an HTTP request. This is the 11:55:59 lowest level call for making a request, so you'll need to specify all 11:55:59 the raw details. 11:55:59 11:55:59 .. note:: 11:55:59 11:55:59 More commonly, it's appropriate to use a convenience method 11:55:59 such as :meth:`request`. 11:55:59 11:55:59 .. note:: 11:55:59 11:55:59 `release_conn` will only behave as expected if 11:55:59 `preload_content=False` because we want to make 11:55:59 `preload_content=False` the default behaviour someday soon without 11:55:59 breaking backwards compatibility. 11:55:59 11:55:59 :param method: 11:55:59 HTTP request method (such as GET, POST, PUT, etc.) 11:55:59 11:55:59 :param url: 11:55:59 The URL to perform the request on. 11:55:59 11:55:59 :param body: 11:55:59 Data to send in the request body, either :class:`str`, :class:`bytes`, 11:55:59 an iterable of :class:`str`/:class:`bytes`, or a file-like object. 11:55:59 11:55:59 :param headers: 11:55:59 Dictionary of custom headers to send, such as User-Agent, 11:55:59 If-None-Match, etc. If None, pool headers are used. If provided, 11:55:59 these headers completely replace any pool-specific headers. 
11:55:59 11:55:59 :param retries: 11:55:59 Configure the number of retries to allow before raising a 11:55:59 :class:`~urllib3.exceptions.MaxRetryError` exception. 11:55:59 11:55:59 If ``None`` (default) will retry 3 times, see ``Retry.DEFAULT``. Pass a 11:55:59 :class:`~urllib3.util.retry.Retry` object for fine-grained control 11:55:59 over different types of retries. 11:55:59 Pass an integer number to retry connection errors that many times, 11:55:59 but no other types of errors. Pass zero to never retry. 11:55:59 11:55:59 If ``False``, then retries are disabled and any exception is raised 11:55:59 immediately. Also, instead of raising a MaxRetryError on redirects, 11:55:59 the redirect response will be returned. 11:55:59 11:55:59 :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. 11:55:59 11:55:59 :param redirect: 11:55:59 If True, automatically handle redirects (status codes 301, 302, 11:55:59 303, 307, 308). Each redirect counts as a retry. Disabling retries 11:55:59 will disable redirect, too. 11:55:59 11:55:59 :param assert_same_host: 11:55:59 If ``True``, will make sure that the host of the pool requests is 11:55:59 consistent else will raise HostChangedError. When ``False``, you can 11:55:59 use the pool on an HTTP proxy and request foreign hosts. 11:55:59 11:55:59 :param timeout: 11:55:59 If specified, overrides the default timeout for this one 11:55:59 request. It may be a float (in seconds) or an instance of 11:55:59 :class:`urllib3.util.Timeout`. 11:55:59 11:55:59 :param pool_timeout: 11:55:59 If set and the pool is set to block=True, then this method will 11:55:59 block for ``pool_timeout`` seconds and raise EmptyPoolError if no 11:55:59 connection is available within the time period. 11:55:59 11:55:59 :param bool preload_content: 11:55:59 If True, the response's body will be preloaded into memory. 11:55:59 11:55:59 :param bool decode_content: 11:55:59 If True, will attempt to decode the body based on the 11:55:59 'content-encoding' header. 11:55:59 11:55:59 :param release_conn: 11:55:59 If False, then the urlopen call will not release the connection 11:55:59 back into the pool once a response is received (but will release if 11:55:59 you read the entire contents of the response such as when 11:55:59 `preload_content=True`). This is useful if you're not preloading 11:55:59 the response's content immediately. You will need to call 11:55:59 ``r.release_conn()`` on the response ``r`` to return the connection 11:55:59 back into the pool. If None, it takes the value of ``preload_content`` 11:55:59 which defaults to ``True``. 11:55:59 11:55:59 :param bool chunked: 11:55:59 If True, urllib3 will send the body using chunked transfer 11:55:59 encoding. Otherwise, urllib3 will send the body using the standard 11:55:59 content-length form. Defaults to False. 11:55:59 11:55:59 :param int body_pos: 11:55:59 Position to seek to in file-like body in the event of a retry or 11:55:59 redirect. Typically this won't need to be set because urllib3 will 11:55:59 auto-populate the value when needed. 
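The retries parameter documented above is exactly what this run exercises: the adapter passes Retry(total=0, ...), so a single connection failure exhausts the budget and surfaces as MaxRetryError. A small fail-fast sketch using urllib3 alone, reusing the host and port from this log:

import urllib3
from urllib3.util.retry import Retry

# total=0 disables retrying entirely: the first connection error is wrapped
# in MaxRetryError rather than being attempted again.
pool = urllib3.HTTPConnectionPool("localhost", 8183, retries=Retry(total=0))
try:
    pool.request("GET", "/rests/data/network-topology:network-topology")
except urllib3.exceptions.MaxRetryError as exc:
    print("gave up immediately:", exc.reason)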
11:55:59 """ 11:55:59 parsed_url = parse_url(url) 11:55:59 destination_scheme = parsed_url.scheme 11:55:59 11:55:59 if headers is None: 11:55:59 headers = self.headers 11:55:59 11:55:59 if not isinstance(retries, Retry): 11:55:59 retries = Retry.from_int(retries, redirect=redirect, default=self.retries) 11:55:59 11:55:59 if release_conn is None: 11:55:59 release_conn = preload_content 11:55:59 11:55:59 # Check host 11:55:59 if assert_same_host and not self.is_same_host(url): 11:55:59 raise HostChangedError(self, url, retries) 11:55:59 11:55:59 # Ensure that the URL we're connecting to is properly encoded 11:55:59 if url.startswith("/"): 11:55:59 url = to_str(_encode_target(url)) 11:55:59 else: 11:55:59 url = to_str(parsed_url.url) 11:55:59 11:55:59 conn = None 11:55:59 11:55:59 # Track whether `conn` needs to be released before 11:55:59 # returning/raising/recursing. Update this variable if necessary, and 11:55:59 # leave `release_conn` constant throughout the function. That way, if 11:55:59 # the function recurses, the original value of `release_conn` will be 11:55:59 # passed down into the recursive call, and its value will be respected. 11:55:59 # 11:55:59 # See issue #651 [1] for details. 11:55:59 # 11:55:59 # [1] 11:55:59 release_this_conn = release_conn 11:55:59 11:55:59 http_tunnel_required = connection_requires_http_tunnel( 11:55:59 self.proxy, self.proxy_config, destination_scheme 11:55:59 ) 11:55:59 11:55:59 # Merge the proxy headers. Only done when not using HTTP CONNECT. We 11:55:59 # have to copy the headers dict so we can safely change it without those 11:55:59 # changes being reflected in anyone else's copy. 11:55:59 if not http_tunnel_required: 11:55:59 headers = headers.copy() # type: ignore[attr-defined] 11:55:59 headers.update(self.proxy_headers) # type: ignore[union-attr] 11:55:59 11:55:59 # Must keep the exception bound to a separate variable or else Python 3 11:55:59 # complains about UnboundLocalError. 11:55:59 err = None 11:55:59 11:55:59 # Keep track of whether we cleanly exited the except block. This 11:55:59 # ensures we do proper cleanup in finally. 11:55:59 clean_exit = False 11:55:59 11:55:59 # Rewind body position, if needed. Record current position 11:55:59 # for future rewinds in the event of a redirect/retry. 11:55:59 body_pos = set_file_position(body, body_pos) 11:55:59 11:55:59 try: 11:55:59 # Request a connection from the queue. 11:55:59 timeout_obj = self._get_timeout(timeout) 11:55:59 conn = self._get_conn(timeout=pool_timeout) 11:55:59 11:55:59 conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] 11:55:59 11:55:59 # Is this a closed/new connection that requires CONNECT tunnelling? 11:55:59 if self.proxy is not None and http_tunnel_required and conn.is_closed: 11:55:59 try: 11:55:59 self._prepare_proxy(conn) 11:55:59 except (BaseSSLError, OSError, SocketTimeout) as e: 11:55:59 self._raise_timeout( 11:55:59 err=e, url=self.proxy.url, timeout_value=conn.timeout 11:55:59 ) 11:55:59 raise 11:55:59 11:55:59 # If we're going to release the connection in ``finally:``, then 11:55:59 # the response doesn't need to know about the connection. Otherwise 11:55:59 # it will also try to release it and we'll have a double-release 11:55:59 # mess. 
11:55:59 response_conn = conn if not release_conn else None 11:55:59 11:55:59 # Make the request on the HTTPConnection object 11:55:59 > response = self._make_request( 11:55:59 conn, 11:55:59 method, 11:55:59 url, 11:55:59 timeout=timeout_obj, 11:55:59 body=body, 11:55:59 headers=headers, 11:55:59 chunked=chunked, 11:55:59 retries=retries, 11:55:59 response_conn=response_conn, 11:55:59 preload_content=preload_content, 11:55:59 decode_content=decode_content, 11:55:59 **response_kw, 11:55:59 ) 11:55:59 11:55:59 ../.tox/tests_tapi/lib/python3.11/site-packages/urllib3/connectionpool.py:787: 11:55:59 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 11:55:59 ../.tox/tests_tapi/lib/python3.11/site-packages/urllib3/connectionpool.py:493: in _make_request 11:55:59 conn.request( 11:55:59 ../.tox/tests_tapi/lib/python3.11/site-packages/urllib3/connection.py:494: in request 11:55:59 self.endheaders() 11:55:59 /opt/pyenv/versions/3.11.10/lib/python3.11/http/client.py:1298: in endheaders 11:55:59 self._send_output(message_body, encode_chunked=encode_chunked) 11:55:59 /opt/pyenv/versions/3.11.10/lib/python3.11/http/client.py:1058: in _send_output 11:55:59 self.send(msg) 11:55:59 /opt/pyenv/versions/3.11.10/lib/python3.11/http/client.py:996: in send 11:55:59 self.connect() 11:55:59 ../.tox/tests_tapi/lib/python3.11/site-packages/urllib3/connection.py:325: in connect 11:55:59 self.sock = self._new_conn() 11:55:59 ^^^^^^^^^^^^^^^^ 11:55:59 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 11:55:59 11:55:59 self = 11:55:59 11:55:59 def _new_conn(self) -> socket.socket: 11:55:59 """Establish a socket connection and set nodelay settings on it. 11:55:59 11:55:59 :return: New socket connection. 11:55:59 """ 11:55:59 try: 11:55:59 sock = connection.create_connection( 11:55:59 (self._dns_host, self.port), 11:55:59 self.timeout, 11:55:59 source_address=self.source_address, 11:55:59 socket_options=self.socket_options, 11:55:59 ) 11:55:59 except socket.gaierror as e: 11:55:59 raise NameResolutionError(self.host, self, e) from e 11:55:59 except SocketTimeout as e: 11:55:59 raise ConnectTimeoutError( 11:55:59 self, 11:55:59 f"Connection to {self.host} timed out. (connect timeout={self.timeout})", 11:55:59 ) from e 11:55:59 11:55:59 except OSError as e: 11:55:59 > raise NewConnectionError( 11:55:59 self, f"Failed to establish a new connection: {e}" 11:55:59 ) from e 11:55:59 E urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused 11:55:59 11:55:59 ../.tox/tests_tapi/lib/python3.11/site-packages/urllib3/connection.py:213: NewConnectionError 11:55:59 11:55:59 The above exception was the direct cause of the following exception: 11:55:59 11:55:59 self = 11:55:59 request = , stream = False 11:55:59 timeout = Timeout(connect=30, read=30, total=None), verify = True, cert = None 11:55:59 proxies = OrderedDict() 11:55:59 11:55:59 def send( 11:55:59 self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 11:55:59 ): 11:55:59 """Sends PreparedRequest object. Returns Response object. 11:55:59 11:55:59 :param request: The :class:`PreparedRequest ` being sent. 11:55:59 :param stream: (optional) Whether to stream the request content. 11:55:59 :param timeout: (optional) How long to wait for the server to send 11:55:59 data before giving up, as a float, or a :ref:`(connect timeout, 11:55:59 read timeout) ` tuple. 
11:55:59 :type timeout: float or tuple or urllib3 Timeout object 11:55:59 :param verify: (optional) Either a boolean, in which case it controls whether 11:55:59 we verify the server's TLS certificate, or a string, in which case it 11:55:59 must be a path to a CA bundle to use 11:55:59 :param cert: (optional) Any user-provided SSL certificate to be trusted. 11:55:59 :param proxies: (optional) The proxies dictionary to apply to the request. 11:55:59 :rtype: requests.Response 11:55:59 """ 11:55:59 11:55:59 try: 11:55:59 conn = self.get_connection_with_tls_context( 11:55:59 request, verify, proxies=proxies, cert=cert 11:55:59 ) 11:55:59 except LocationValueError as e: 11:55:59 raise InvalidURL(e, request=request) 11:55:59 11:55:59 self.cert_verify(conn, request.url, verify, cert) 11:55:59 url = self.request_url(request, proxies) 11:55:59 self.add_headers( 11:55:59 request, 11:55:59 stream=stream, 11:55:59 timeout=timeout, 11:55:59 verify=verify, 11:55:59 cert=cert, 11:55:59 proxies=proxies, 11:55:59 ) 11:55:59 11:55:59 chunked = not (request.body is None or "Content-Length" in request.headers) 11:55:59 11:55:59 if isinstance(timeout, tuple): 11:55:59 try: 11:55:59 connect, read = timeout 11:55:59 timeout = TimeoutSauce(connect=connect, read=read) 11:55:59 except ValueError: 11:55:59 raise ValueError( 11:55:59 f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 11:55:59 f"or a single float to set both timeouts to the same value." 11:55:59 ) 11:55:59 elif isinstance(timeout, TimeoutSauce): 11:55:59 pass 11:55:59 else: 11:55:59 timeout = TimeoutSauce(connect=timeout, read=timeout) 11:55:59 11:55:59 try: 11:55:59 > resp = conn.urlopen( 11:55:59 method=request.method, 11:55:59 url=url, 11:55:59 body=request.body, 11:55:59 headers=request.headers, 11:55:59 redirect=False, 11:55:59 assert_same_host=False, 11:55:59 preload_content=False, 11:55:59 decode_content=False, 11:55:59 retries=self.max_retries, 11:55:59 timeout=timeout, 11:55:59 chunked=chunked, 11:55:59 ) 11:55:59 11:55:59 ../.tox/tests_tapi/lib/python3.11/site-packages/requests/adapters.py:667: 11:55:59 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 11:55:59 ../.tox/tests_tapi/lib/python3.11/site-packages/urllib3/connectionpool.py:841: in urlopen 11:55:59 retries = retries.increment( 11:55:59 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 11:55:59 11:55:59 self = Retry(total=0, connect=None, read=False, redirect=None, status=None) 11:55:59 method = 'DELETE' 11:55:59 url = '/rests/data/network-topology:network-topology/topology=topology-netconf/node=XPDR-A1' 11:55:59 response = None 11:55:59 error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused') 11:55:59 _pool = 11:55:59 _stacktrace = 11:55:59 11:55:59 def increment( 11:55:59 self, 11:55:59 method: str | None = None, 11:55:59 url: str | None = None, 11:55:59 response: BaseHTTPResponse | None = None, 11:55:59 error: Exception | None = None, 11:55:59 _pool: ConnectionPool | None = None, 11:55:59 _stacktrace: TracebackType | None = None, 11:55:59 ) -> Self: 11:55:59 """Return a new Retry object with incremented retry counters. 11:55:59 11:55:59 :param response: A response object, or None, if the server did not 11:55:59 return a response. 11:55:59 :type response: :class:`~urllib3.response.BaseHTTPResponse` 11:55:59 :param Exception error: An error encountered during the request, or 11:55:59 None if the response was received successfully. 
11:55:59 11:55:59 :return: A new ``Retry`` object. 11:55:59 """ 11:55:59 if self.total is False and error: 11:55:59 # Disabled, indicate to re-raise the error. 11:55:59 raise reraise(type(error), error, _stacktrace) 11:55:59 11:55:59 total = self.total 11:55:59 if total is not None: 11:55:59 total -= 1 11:55:59 11:55:59 connect = self.connect 11:55:59 read = self.read 11:55:59 redirect = self.redirect 11:55:59 status_count = self.status 11:55:59 other = self.other 11:55:59 cause = "unknown" 11:55:59 status = None 11:55:59 redirect_location = None 11:55:59 11:55:59 if error and self._is_connection_error(error): 11:55:59 # Connect retry? 11:55:59 if connect is False: 11:55:59 raise reraise(type(error), error, _stacktrace) 11:55:59 elif connect is not None: 11:55:59 connect -= 1 11:55:59 11:55:59 elif error and self._is_read_error(error): 11:55:59 # Read retry? 11:55:59 if read is False or method is None or not self._is_method_retryable(method): 11:55:59 raise reraise(type(error), error, _stacktrace) 11:55:59 elif read is not None: 11:55:59 read -= 1 11:55:59 11:55:59 elif error: 11:55:59 # Other retry? 11:55:59 if other is not None: 11:55:59 other -= 1 11:55:59 11:55:59 elif response and response.get_redirect_location(): 11:55:59 # Redirect retry? 11:55:59 if redirect is not None: 11:55:59 redirect -= 1 11:55:59 cause = "too many redirects" 11:55:59 response_redirect_location = response.get_redirect_location() 11:55:59 if response_redirect_location: 11:55:59 redirect_location = response_redirect_location 11:55:59 status = response.status 11:55:59 11:55:59 else: 11:55:59 # Incrementing because of a server error like a 500 in 11:55:59 # status_forcelist and the given method is in the allowed_methods 11:55:59 cause = ResponseError.GENERIC_ERROR 11:55:59 if response and response.status: 11:55:59 if status_count is not None: 11:55:59 status_count -= 1 11:55:59 cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status) 11:55:59 status = response.status 11:55:59 11:55:59 history = self.history + ( 11:55:59 RequestHistory(method, url, error, status, redirect_location), 11:55:59 ) 11:55:59 11:55:59 new_retry = self.new( 11:55:59 total=total, 11:55:59 connect=connect, 11:55:59 read=read, 11:55:59 redirect=redirect, 11:55:59 status=status_count, 11:55:59 other=other, 11:55:59 history=history, 11:55:59 ) 11:55:59 11:55:59 if new_retry.is_exhausted(): 11:55:59 reason = error or ResponseError(cause) 11:55:59 > raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type] 11:55:59 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 11:55:59 E urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=8183): Max retries exceeded with url: /rests/data/network-topology:network-topology/topology=topology-netconf/node=XPDR-A1 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 11:55:59 11:55:59 ../.tox/tests_tapi/lib/python3.11/site-packages/urllib3/util/retry.py:519: MaxRetryError 11:55:59 11:55:59 During handling of the above exception, another exception occurred: 11:55:59 11:55:59 self = 11:55:59 11:55:59 def test_47_disconnect_xpdra(self): 11:55:59 > response = test_utils.unmount_device("XPDR-A1") 11:55:59 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 11:55:59 11:55:59 transportpce_tests/tapi/test01_abstracted_topology.py:665: 11:55:59 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 11:55:59 transportpce_tests/common/test_utils.py:379: in unmount_device 11:55:59 response = 
delete_request(url[RESTCONF_VERSION].format('{}', node)) 11:55:59 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 11:55:59 transportpce_tests/common/test_utils.py:133: in delete_request 11:55:59 return requests.request( 11:55:59 ../.tox/tests_tapi/lib/python3.11/site-packages/requests/api.py:59: in request 11:55:59 return session.request(method=method, url=url, **kwargs) 11:55:59 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 11:55:59 ../.tox/tests_tapi/lib/python3.11/site-packages/requests/sessions.py:589: in request 11:55:59 resp = self.send(prep, **send_kwargs) 11:55:59 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 11:55:59 ../.tox/tests_tapi/lib/python3.11/site-packages/requests/sessions.py:703: in send 11:55:59 r = adapter.send(request, **kwargs) 11:55:59 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 11:55:59 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 11:55:59 11:55:59 self = 11:55:59 request = , stream = False 11:55:59 timeout = Timeout(connect=30, read=30, total=None), verify = True, cert = None 11:55:59 proxies = OrderedDict() 11:55:59 11:55:59 def send( 11:55:59 self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 11:55:59 ): 11:55:59 """Sends PreparedRequest object. Returns Response object. 11:55:59 11:55:59 :param request: The :class:`PreparedRequest ` being sent. 11:55:59 :param stream: (optional) Whether to stream the request content. 11:55:59 :param timeout: (optional) How long to wait for the server to send 11:55:59 data before giving up, as a float, or a :ref:`(connect timeout, 11:55:59 read timeout) ` tuple. 11:55:59 :type timeout: float or tuple or urllib3 Timeout object 11:55:59 :param verify: (optional) Either a boolean, in which case it controls whether 11:55:59 we verify the server's TLS certificate, or a string, in which case it 11:55:59 must be a path to a CA bundle to use 11:55:59 :param cert: (optional) Any user-provided SSL certificate to be trusted. 11:55:59 :param proxies: (optional) The proxies dictionary to apply to the request. 11:55:59 :rtype: requests.Response 11:55:59 """ 11:55:59 11:55:59 try: 11:55:59 conn = self.get_connection_with_tls_context( 11:55:59 request, verify, proxies=proxies, cert=cert 11:55:59 ) 11:55:59 except LocationValueError as e: 11:55:59 raise InvalidURL(e, request=request) 11:55:59 11:55:59 self.cert_verify(conn, request.url, verify, cert) 11:55:59 url = self.request_url(request, proxies) 11:55:59 self.add_headers( 11:55:59 request, 11:55:59 stream=stream, 11:55:59 timeout=timeout, 11:55:59 verify=verify, 11:55:59 cert=cert, 11:55:59 proxies=proxies, 11:55:59 ) 11:55:59 11:55:59 chunked = not (request.body is None or "Content-Length" in request.headers) 11:55:59 11:55:59 if isinstance(timeout, tuple): 11:55:59 try: 11:55:59 connect, read = timeout 11:55:59 timeout = TimeoutSauce(connect=connect, read=read) 11:55:59 except ValueError: 11:55:59 raise ValueError( 11:55:59 f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 11:55:59 f"or a single float to set both timeouts to the same value." 
11:55:59 ) 11:55:59 elif isinstance(timeout, TimeoutSauce): 11:55:59 pass 11:55:59 else: 11:55:59 timeout = TimeoutSauce(connect=timeout, read=timeout) 11:55:59 11:55:59 try: 11:55:59 resp = conn.urlopen( 11:55:59 method=request.method, 11:55:59 url=url, 11:55:59 body=request.body, 11:55:59 headers=request.headers, 11:55:59 redirect=False, 11:55:59 assert_same_host=False, 11:55:59 preload_content=False, 11:55:59 decode_content=False, 11:55:59 retries=self.max_retries, 11:55:59 timeout=timeout, 11:55:59 chunked=chunked, 11:55:59 ) 11:55:59 11:55:59 except (ProtocolError, OSError) as err: 11:55:59 raise ConnectionError(err, request=request) 11:55:59 11:55:59 except MaxRetryError as e: 11:55:59 if isinstance(e.reason, ConnectTimeoutError): 11:55:59 # TODO: Remove this in 3.0.0: see #2811 11:55:59 if not isinstance(e.reason, NewConnectionError): 11:55:59 raise ConnectTimeout(e, request=request) 11:55:59 11:55:59 if isinstance(e.reason, ResponseError): 11:55:59 raise RetryError(e, request=request) 11:55:59 11:55:59 if isinstance(e.reason, _ProxyError): 11:55:59 raise ProxyError(e, request=request) 11:55:59 11:55:59 if isinstance(e.reason, _SSLError): 11:55:59 # This branch is for urllib3 v1.22 and later. 11:55:59 raise SSLError(e, request=request) 11:55:59 11:55:59 > raise ConnectionError(e, request=request) 11:55:59 E requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=8183): Max retries exceeded with url: /rests/data/network-topology:network-topology/topology=topology-netconf/node=XPDR-A1 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 11:55:59 11:55:59 ../.tox/tests_tapi/lib/python3.11/site-packages/requests/adapters.py:700: ConnectionError 11:55:59 ----------------------------- Captured stdout call ----------------------------- 11:55:59 execution of test_47_disconnect_xpdra 11:55:59 ________________ TransportTapitesting.test_48_disconnect_xpdrc _________________ 11:55:59 11:55:59 self = 11:55:59 11:55:59 def _new_conn(self) -> socket.socket: 11:55:59 """Establish a socket connection and set nodelay settings on it. 11:55:59 11:55:59 :return: New socket connection. 11:55:59 """ 11:55:59 try: 11:55:59 > sock = connection.create_connection( 11:55:59 (self._dns_host, self.port), 11:55:59 self.timeout, 11:55:59 source_address=self.source_address, 11:55:59 socket_options=self.socket_options, 11:55:59 ) 11:55:59 11:55:59 ../.tox/tests_tapi/lib/python3.11/site-packages/urllib3/connection.py:198: 11:55:59 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 11:55:59 ../.tox/tests_tapi/lib/python3.11/site-packages/urllib3/util/connection.py:85: in create_connection 11:55:59 raise err 11:55:59 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 11:55:59 11:55:59 address = ('localhost', 8183), timeout = 30, source_address = None 11:55:59 socket_options = [(6, 1, 1)] 11:55:59 11:55:59 def create_connection( 11:55:59 address: tuple[str, int], 11:55:59 timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 11:55:59 source_address: tuple[str, int] | None = None, 11:55:59 socket_options: _TYPE_SOCKET_OPTIONS | None = None, 11:55:59 ) -> socket.socket: 11:55:59 """Connect to *address* and return the socket object. 11:55:59 11:55:59 Convenience function. Connect to *address* (a 2-tuple ``(host, 11:55:59 port)``) and return the socket object. Passing the optional 11:55:59 *timeout* parameter will set the timeout on the socket instance 11:55:59 before attempting to connect. 
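The docstring above describes the plain socket handshake that fails in every traceback here: the operating system refuses the TCP connection because no process is bound to the port. A stripped-down reproduction using only the standard library (the port is taken from this log and is assumed to be closed):

import socket

try:
    # socket.create_connection runs the same getaddrinfo/connect loop as
    # urllib3's helper; connecting to a closed port raises ECONNREFUSED.
    sock = socket.create_connection(("localhost", 8183), timeout=30)
    sock.close()
except ConnectionRefusedError as exc:
    print("refused:", exc)  # [Errno 111] Connection refused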
If no *timeout* is supplied, the 11:55:59 global default timeout setting returned by :func:`socket.getdefaulttimeout` 11:55:59 is used. If *source_address* is set it must be a tuple of (host, port) 11:55:59 for the socket to bind as a source address before making the connection. 11:55:59 An host of '' or port 0 tells the OS to use the default. 11:55:59 """ 11:55:59 11:55:59 host, port = address 11:55:59 if host.startswith("["): 11:55:59 host = host.strip("[]") 11:55:59 err = None 11:55:59 11:55:59 # Using the value from allowed_gai_family() in the context of getaddrinfo lets 11:55:59 # us select whether to work with IPv4 DNS records, IPv6 records, or both. 11:55:59 # The original create_connection function always returns all records. 11:55:59 family = allowed_gai_family() 11:55:59 11:55:59 try: 11:55:59 host.encode("idna") 11:55:59 except UnicodeError: 11:55:59 raise LocationParseError(f"'{host}', label empty or too long") from None 11:55:59 11:55:59 for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM): 11:55:59 af, socktype, proto, canonname, sa = res 11:55:59 sock = None 11:55:59 try: 11:55:59 sock = socket.socket(af, socktype, proto) 11:55:59 11:55:59 # If provided, set socket level options before connecting. 11:55:59 _set_socket_options(sock, socket_options) 11:55:59 11:55:59 if timeout is not _DEFAULT_TIMEOUT: 11:55:59 sock.settimeout(timeout) 11:55:59 if source_address: 11:55:59 sock.bind(source_address) 11:55:59 > sock.connect(sa) 11:55:59 E ConnectionRefusedError: [Errno 111] Connection refused 11:55:59 11:55:59 ../.tox/tests_tapi/lib/python3.11/site-packages/urllib3/util/connection.py:73: ConnectionRefusedError 11:55:59 11:55:59 The above exception was the direct cause of the following exception: 11:55:59 11:55:59 self = 11:55:59 method = 'DELETE' 11:55:59 url = '/rests/data/network-topology:network-topology/topology=topology-netconf/node=XPDR-C1' 11:55:59 body = None 11:55:59 headers = {'User-Agent': 'python-requests/2.32.4', 'Accept-Encoding': 'gzip, deflate', 'Accept': 'application/json', 'Connection': 'keep-alive', 'Content-Type': 'application/json', 'Content-Length': '0', 'Authorization': 'Basic YWRtaW46YWRtaW4='} 11:55:59 retries = Retry(total=0, connect=None, read=False, redirect=None, status=None) 11:55:59 redirect = False, assert_same_host = False 11:55:59 timeout = Timeout(connect=30, read=30, total=None), pool_timeout = None 11:55:59 release_conn = False, chunked = False, body_pos = None, preload_content = False 11:55:59 decode_content = False, response_kw = {} 11:55:59 parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/rests/data/network-topology:network-topology/topology=topology-netconf/node=XPDR-C1', query=None, fragment=None) 11:55:59 destination_scheme = None, conn = None, release_this_conn = True 11:55:59 http_tunnel_required = False, err = None, clean_exit = False 11:55:59 11:55:59 def urlopen( # type: ignore[override] 11:55:59 self, 11:55:59 method: str, 11:55:59 url: str, 11:55:59 body: _TYPE_BODY | None = None, 11:55:59 headers: typing.Mapping[str, str] | None = None, 11:55:59 retries: Retry | bool | int | None = None, 11:55:59 redirect: bool = True, 11:55:59 assert_same_host: bool = True, 11:55:59 timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 11:55:59 pool_timeout: int | None = None, 11:55:59 release_conn: bool | None = None, 11:55:59 chunked: bool = False, 11:55:59 body_pos: _TYPE_BODY_POSITION | None = None, 11:55:59 preload_content: bool = True, 11:55:59 decode_content: bool = True, 11:55:59 **response_kw: typing.Any, 
11:55:59 ) -> BaseHTTPResponse: 11:55:59 """ 11:55:59 Get a connection from the pool and perform an HTTP request. This is the 11:55:59 lowest level call for making a request, so you'll need to specify all 11:55:59 the raw details. 11:55:59 11:55:59 .. note:: 11:55:59 11:55:59 More commonly, it's appropriate to use a convenience method 11:55:59 such as :meth:`request`. 11:55:59 11:55:59 .. note:: 11:55:59 11:55:59 `release_conn` will only behave as expected if 11:55:59 `preload_content=False` because we want to make 11:55:59 `preload_content=False` the default behaviour someday soon without 11:55:59 breaking backwards compatibility. 11:55:59 11:55:59 :param method: 11:55:59 HTTP request method (such as GET, POST, PUT, etc.) 11:55:59 11:55:59 :param url: 11:55:59 The URL to perform the request on. 11:55:59 11:55:59 :param body: 11:55:59 Data to send in the request body, either :class:`str`, :class:`bytes`, 11:55:59 an iterable of :class:`str`/:class:`bytes`, or a file-like object. 11:55:59 11:55:59 :param headers: 11:55:59 Dictionary of custom headers to send, such as User-Agent, 11:55:59 If-None-Match, etc. If None, pool headers are used. If provided, 11:55:59 these headers completely replace any pool-specific headers. 11:55:59 11:55:59 :param retries: 11:55:59 Configure the number of retries to allow before raising a 11:55:59 :class:`~urllib3.exceptions.MaxRetryError` exception. 11:55:59 11:55:59 If ``None`` (default) will retry 3 times, see ``Retry.DEFAULT``. Pass a 11:55:59 :class:`~urllib3.util.retry.Retry` object for fine-grained control 11:55:59 over different types of retries. 11:55:59 Pass an integer number to retry connection errors that many times, 11:55:59 but no other types of errors. Pass zero to never retry. 11:55:59 11:55:59 If ``False``, then retries are disabled and any exception is raised 11:55:59 immediately. Also, instead of raising a MaxRetryError on redirects, 11:55:59 the redirect response will be returned. 11:55:59 11:55:59 :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. 11:55:59 11:55:59 :param redirect: 11:55:59 If True, automatically handle redirects (status codes 301, 302, 11:55:59 303, 307, 308). Each redirect counts as a retry. Disabling retries 11:55:59 will disable redirect, too. 11:55:59 11:55:59 :param assert_same_host: 11:55:59 If ``True``, will make sure that the host of the pool requests is 11:55:59 consistent else will raise HostChangedError. When ``False``, you can 11:55:59 use the pool on an HTTP proxy and request foreign hosts. 11:55:59 11:55:59 :param timeout: 11:55:59 If specified, overrides the default timeout for this one 11:55:59 request. It may be a float (in seconds) or an instance of 11:55:59 :class:`urllib3.util.Timeout`. 11:55:59 11:55:59 :param pool_timeout: 11:55:59 If set and the pool is set to block=True, then this method will 11:55:59 block for ``pool_timeout`` seconds and raise EmptyPoolError if no 11:55:59 connection is available within the time period. 11:55:59 11:55:59 :param bool preload_content: 11:55:59 If True, the response's body will be preloaded into memory. 11:55:59 11:55:59 :param bool decode_content: 11:55:59 If True, will attempt to decode the body based on the 11:55:59 'content-encoding' header. 11:55:59 11:55:59 :param release_conn: 11:55:59 If False, then the urlopen call will not release the connection 11:55:59 back into the pool once a response is received (but will release if 11:55:59 you read the entire contents of the response such as when 11:55:59 `preload_content=True`). 
This is useful if you're not preloading 11:55:59 the response's content immediately. You will need to call 11:55:59 ``r.release_conn()`` on the response ``r`` to return the connection 11:55:59 back into the pool. If None, it takes the value of ``preload_content`` 11:55:59 which defaults to ``True``. 11:55:59 11:55:59 :param bool chunked: 11:55:59 If True, urllib3 will send the body using chunked transfer 11:55:59 encoding. Otherwise, urllib3 will send the body using the standard 11:55:59 content-length form. Defaults to False. 11:55:59 11:55:59 :param int body_pos: 11:55:59 Position to seek to in file-like body in the event of a retry or 11:55:59 redirect. Typically this won't need to be set because urllib3 will 11:55:59 auto-populate the value when needed. 11:55:59 """ 11:55:59 parsed_url = parse_url(url) 11:55:59 destination_scheme = parsed_url.scheme 11:55:59 11:55:59 if headers is None: 11:55:59 headers = self.headers 11:55:59 11:55:59 if not isinstance(retries, Retry): 11:55:59 retries = Retry.from_int(retries, redirect=redirect, default=self.retries) 11:55:59 11:55:59 if release_conn is None: 11:55:59 release_conn = preload_content 11:55:59 11:55:59 # Check host 11:55:59 if assert_same_host and not self.is_same_host(url): 11:55:59 raise HostChangedError(self, url, retries) 11:55:59 11:55:59 # Ensure that the URL we're connecting to is properly encoded 11:55:59 if url.startswith("/"): 11:55:59 url = to_str(_encode_target(url)) 11:55:59 else: 11:55:59 url = to_str(parsed_url.url) 11:55:59 11:55:59 conn = None 11:55:59 11:55:59 # Track whether `conn` needs to be released before 11:55:59 # returning/raising/recursing. Update this variable if necessary, and 11:55:59 # leave `release_conn` constant throughout the function. That way, if 11:55:59 # the function recurses, the original value of `release_conn` will be 11:55:59 # passed down into the recursive call, and its value will be respected. 11:55:59 # 11:55:59 # See issue #651 [1] for details. 11:55:59 # 11:55:59 # [1] 11:55:59 release_this_conn = release_conn 11:55:59 11:55:59 http_tunnel_required = connection_requires_http_tunnel( 11:55:59 self.proxy, self.proxy_config, destination_scheme 11:55:59 ) 11:55:59 11:55:59 # Merge the proxy headers. Only done when not using HTTP CONNECT. We 11:55:59 # have to copy the headers dict so we can safely change it without those 11:55:59 # changes being reflected in anyone else's copy. 11:55:59 if not http_tunnel_required: 11:55:59 headers = headers.copy() # type: ignore[attr-defined] 11:55:59 headers.update(self.proxy_headers) # type: ignore[union-attr] 11:55:59 11:55:59 # Must keep the exception bound to a separate variable or else Python 3 11:55:59 # complains about UnboundLocalError. 11:55:59 err = None 11:55:59 11:55:59 # Keep track of whether we cleanly exited the except block. This 11:55:59 # ensures we do proper cleanup in finally. 11:55:59 clean_exit = False 11:55:59 11:55:59 # Rewind body position, if needed. Record current position 11:55:59 # for future rewinds in the event of a redirect/retry. 11:55:59 body_pos = set_file_position(body, body_pos) 11:55:59 11:55:59 try: 11:55:59 # Request a connection from the queue. 11:55:59 timeout_obj = self._get_timeout(timeout) 11:55:59 conn = self._get_conn(timeout=pool_timeout) 11:55:59 11:55:59 conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] 11:55:59 11:55:59 # Is this a closed/new connection that requires CONNECT tunnelling? 
11:55:59 if self.proxy is not None and http_tunnel_required and conn.is_closed: 11:55:59 try: 11:55:59 self._prepare_proxy(conn) 11:55:59 except (BaseSSLError, OSError, SocketTimeout) as e: 11:55:59 self._raise_timeout( 11:55:59 err=e, url=self.proxy.url, timeout_value=conn.timeout 11:55:59 ) 11:55:59 raise 11:55:59 11:55:59 # If we're going to release the connection in ``finally:``, then 11:55:59 # the response doesn't need to know about the connection. Otherwise 11:55:59 # it will also try to release it and we'll have a double-release 11:55:59 # mess. 11:55:59 response_conn = conn if not release_conn else None 11:55:59 11:55:59 # Make the request on the HTTPConnection object 11:55:59 > response = self._make_request( 11:55:59 conn, 11:55:59 method, 11:55:59 url, 11:55:59 timeout=timeout_obj, 11:55:59 body=body, 11:55:59 headers=headers, 11:55:59 chunked=chunked, 11:55:59 retries=retries, 11:55:59 response_conn=response_conn, 11:55:59 preload_content=preload_content, 11:55:59 decode_content=decode_content, 11:55:59 **response_kw, 11:55:59 ) 11:55:59 11:55:59 ../.tox/tests_tapi/lib/python3.11/site-packages/urllib3/connectionpool.py:787: 11:55:59 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 11:55:59 ../.tox/tests_tapi/lib/python3.11/site-packages/urllib3/connectionpool.py:493: in _make_request 11:55:59 conn.request( 11:55:59 ../.tox/tests_tapi/lib/python3.11/site-packages/urllib3/connection.py:494: in request 11:55:59 self.endheaders() 11:55:59 /opt/pyenv/versions/3.11.10/lib/python3.11/http/client.py:1298: in endheaders 11:55:59 self._send_output(message_body, encode_chunked=encode_chunked) 11:55:59 /opt/pyenv/versions/3.11.10/lib/python3.11/http/client.py:1058: in _send_output 11:55:59 self.send(msg) 11:55:59 /opt/pyenv/versions/3.11.10/lib/python3.11/http/client.py:996: in send 11:55:59 self.connect() 11:55:59 ../.tox/tests_tapi/lib/python3.11/site-packages/urllib3/connection.py:325: in connect 11:55:59 self.sock = self._new_conn() 11:55:59 ^^^^^^^^^^^^^^^^ 11:55:59 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 11:55:59 11:55:59 self = 11:55:59 11:55:59 def _new_conn(self) -> socket.socket: 11:55:59 """Establish a socket connection and set nodelay settings on it. 11:55:59 11:55:59 :return: New socket connection. 11:55:59 """ 11:55:59 try: 11:55:59 sock = connection.create_connection( 11:55:59 (self._dns_host, self.port), 11:55:59 self.timeout, 11:55:59 source_address=self.source_address, 11:55:59 socket_options=self.socket_options, 11:55:59 ) 11:55:59 except socket.gaierror as e: 11:55:59 raise NameResolutionError(self.host, self, e) from e 11:55:59 except SocketTimeout as e: 11:55:59 raise ConnectTimeoutError( 11:55:59 self, 11:55:59 f"Connection to {self.host} timed out. 
(connect timeout={self.timeout})", 11:55:59 ) from e 11:55:59 11:55:59 except OSError as e: 11:55:59 > raise NewConnectionError( 11:55:59 self, f"Failed to establish a new connection: {e}" 11:55:59 ) from e 11:55:59 E urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused 11:55:59 11:55:59 ../.tox/tests_tapi/lib/python3.11/site-packages/urllib3/connection.py:213: NewConnectionError 11:55:59 11:55:59 The above exception was the direct cause of the following exception: 11:55:59 11:55:59 self = 11:55:59 request = , stream = False 11:55:59 timeout = Timeout(connect=30, read=30, total=None), verify = True, cert = None 11:55:59 proxies = OrderedDict() 11:55:59 11:55:59 def send( 11:55:59 self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 11:55:59 ): 11:55:59 """Sends PreparedRequest object. Returns Response object. 11:55:59 11:55:59 :param request: The :class:`PreparedRequest ` being sent. 11:55:59 :param stream: (optional) Whether to stream the request content. 11:55:59 :param timeout: (optional) How long to wait for the server to send 11:55:59 data before giving up, as a float, or a :ref:`(connect timeout, 11:55:59 read timeout) ` tuple. 11:55:59 :type timeout: float or tuple or urllib3 Timeout object 11:55:59 :param verify: (optional) Either a boolean, in which case it controls whether 11:55:59 we verify the server's TLS certificate, or a string, in which case it 11:55:59 must be a path to a CA bundle to use 11:55:59 :param cert: (optional) Any user-provided SSL certificate to be trusted. 11:55:59 :param proxies: (optional) The proxies dictionary to apply to the request. 11:55:59 :rtype: requests.Response 11:55:59 """ 11:55:59 11:55:59 try: 11:55:59 conn = self.get_connection_with_tls_context( 11:55:59 request, verify, proxies=proxies, cert=cert 11:55:59 ) 11:55:59 except LocationValueError as e: 11:55:59 raise InvalidURL(e, request=request) 11:55:59 11:55:59 self.cert_verify(conn, request.url, verify, cert) 11:55:59 url = self.request_url(request, proxies) 11:55:59 self.add_headers( 11:55:59 request, 11:55:59 stream=stream, 11:55:59 timeout=timeout, 11:55:59 verify=verify, 11:55:59 cert=cert, 11:55:59 proxies=proxies, 11:55:59 ) 11:55:59 11:55:59 chunked = not (request.body is None or "Content-Length" in request.headers) 11:55:59 11:55:59 if isinstance(timeout, tuple): 11:55:59 try: 11:55:59 connect, read = timeout 11:55:59 timeout = TimeoutSauce(connect=connect, read=read) 11:55:59 except ValueError: 11:55:59 raise ValueError( 11:55:59 f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 11:55:59 f"or a single float to set both timeouts to the same value." 
11:55:59 ) 11:55:59 elif isinstance(timeout, TimeoutSauce): 11:55:59 pass 11:55:59 else: 11:55:59 timeout = TimeoutSauce(connect=timeout, read=timeout) 11:55:59 11:55:59 try: 11:55:59 > resp = conn.urlopen( 11:55:59 method=request.method, 11:55:59 url=url, 11:55:59 body=request.body, 11:55:59 headers=request.headers, 11:55:59 redirect=False, 11:55:59 assert_same_host=False, 11:55:59 preload_content=False, 11:55:59 decode_content=False, 11:55:59 retries=self.max_retries, 11:55:59 timeout=timeout, 11:55:59 chunked=chunked, 11:55:59 ) 11:55:59 11:55:59 ../.tox/tests_tapi/lib/python3.11/site-packages/requests/adapters.py:667: 11:55:59 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 11:55:59 ../.tox/tests_tapi/lib/python3.11/site-packages/urllib3/connectionpool.py:841: in urlopen 11:55:59 retries = retries.increment( 11:55:59 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 11:55:59 11:55:59 self = Retry(total=0, connect=None, read=False, redirect=None, status=None) 11:55:59 method = 'DELETE' 11:55:59 url = '/rests/data/network-topology:network-topology/topology=topology-netconf/node=XPDR-C1' 11:55:59 response = None 11:55:59 error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused') 11:55:59 _pool = 11:55:59 _stacktrace = 11:55:59 11:55:59 def increment( 11:55:59 self, 11:55:59 method: str | None = None, 11:55:59 url: str | None = None, 11:55:59 response: BaseHTTPResponse | None = None, 11:55:59 error: Exception | None = None, 11:55:59 _pool: ConnectionPool | None = None, 11:55:59 _stacktrace: TracebackType | None = None, 11:55:59 ) -> Self: 11:55:59 """Return a new Retry object with incremented retry counters. 11:55:59 11:55:59 :param response: A response object, or None, if the server did not 11:55:59 return a response. 11:55:59 :type response: :class:`~urllib3.response.BaseHTTPResponse` 11:55:59 :param Exception error: An error encountered during the request, or 11:55:59 None if the response was received successfully. 11:55:59 11:55:59 :return: A new ``Retry`` object. 11:55:59 """ 11:55:59 if self.total is False and error: 11:55:59 # Disabled, indicate to re-raise the error. 11:55:59 raise reraise(type(error), error, _stacktrace) 11:55:59 11:55:59 total = self.total 11:55:59 if total is not None: 11:55:59 total -= 1 11:55:59 11:55:59 connect = self.connect 11:55:59 read = self.read 11:55:59 redirect = self.redirect 11:55:59 status_count = self.status 11:55:59 other = self.other 11:55:59 cause = "unknown" 11:55:59 status = None 11:55:59 redirect_location = None 11:55:59 11:55:59 if error and self._is_connection_error(error): 11:55:59 # Connect retry? 11:55:59 if connect is False: 11:55:59 raise reraise(type(error), error, _stacktrace) 11:55:59 elif connect is not None: 11:55:59 connect -= 1 11:55:59 11:55:59 elif error and self._is_read_error(error): 11:55:59 # Read retry? 11:55:59 if read is False or method is None or not self._is_method_retryable(method): 11:55:59 raise reraise(type(error), error, _stacktrace) 11:55:59 elif read is not None: 11:55:59 read -= 1 11:55:59 11:55:59 elif error: 11:55:59 # Other retry? 11:55:59 if other is not None: 11:55:59 other -= 1 11:55:59 11:55:59 elif response and response.get_redirect_location(): 11:55:59 # Redirect retry? 
11:55:59 if redirect is not None: 11:55:59 redirect -= 1 11:55:59 cause = "too many redirects" 11:55:59 response_redirect_location = response.get_redirect_location() 11:55:59 if response_redirect_location: 11:55:59 redirect_location = response_redirect_location 11:55:59 status = response.status 11:55:59 11:55:59 else: 11:55:59 # Incrementing because of a server error like a 500 in 11:55:59 # status_forcelist and the given method is in the allowed_methods 11:55:59 cause = ResponseError.GENERIC_ERROR 11:55:59 if response and response.status: 11:55:59 if status_count is not None: 11:55:59 status_count -= 1 11:55:59 cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status) 11:55:59 status = response.status 11:55:59 11:55:59 history = self.history + ( 11:55:59 RequestHistory(method, url, error, status, redirect_location), 11:55:59 ) 11:55:59 11:55:59 new_retry = self.new( 11:55:59 total=total, 11:55:59 connect=connect, 11:55:59 read=read, 11:55:59 redirect=redirect, 11:55:59 status=status_count, 11:55:59 other=other, 11:55:59 history=history, 11:55:59 ) 11:55:59 11:55:59 if new_retry.is_exhausted(): 11:55:59 reason = error or ResponseError(cause) 11:55:59 > raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type] 11:55:59 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 11:55:59 E urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=8183): Max retries exceeded with url: /rests/data/network-topology:network-topology/topology=topology-netconf/node=XPDR-C1 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 11:55:59 11:55:59 ../.tox/tests_tapi/lib/python3.11/site-packages/urllib3/util/retry.py:519: MaxRetryError 11:55:59 11:55:59 During handling of the above exception, another exception occurred: 11:55:59 11:55:59 self = 11:55:59 11:55:59 def test_48_disconnect_xpdrc(self): 11:55:59 > response = test_utils.unmount_device("XPDR-C1") 11:55:59 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 11:55:59 11:55:59 transportpce_tests/tapi/test01_abstracted_topology.py:670: 11:55:59 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 11:55:59 transportpce_tests/common/test_utils.py:379: in unmount_device 11:55:59 response = delete_request(url[RESTCONF_VERSION].format('{}', node)) 11:55:59 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 11:55:59 transportpce_tests/common/test_utils.py:133: in delete_request 11:55:59 return requests.request( 11:55:59 ../.tox/tests_tapi/lib/python3.11/site-packages/requests/api.py:59: in request 11:55:59 return session.request(method=method, url=url, **kwargs) 11:55:59 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 11:55:59 ../.tox/tests_tapi/lib/python3.11/site-packages/requests/sessions.py:589: in request 11:55:59 resp = self.send(prep, **send_kwargs) 11:55:59 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 11:55:59 ../.tox/tests_tapi/lib/python3.11/site-packages/requests/sessions.py:703: in send 11:55:59 r = adapter.send(request, **kwargs) 11:55:59 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 11:55:59 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 11:55:59 11:55:59 self = 11:55:59 request = , stream = False 11:55:59 timeout = Timeout(connect=30, read=30, total=None), verify = True, cert = None 11:55:59 proxies = OrderedDict() 11:55:59 11:55:59 def send( 11:55:59 self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 11:55:59 ): 11:55:59 """Sends PreparedRequest object. Returns Response object. 
11:55:59 11:55:59 :param request: The :class:`PreparedRequest ` being sent. 11:55:59 :param stream: (optional) Whether to stream the request content. 11:55:59 :param timeout: (optional) How long to wait for the server to send 11:55:59 data before giving up, as a float, or a :ref:`(connect timeout, 11:55:59 read timeout) ` tuple. 11:55:59 :type timeout: float or tuple or urllib3 Timeout object 11:55:59 :param verify: (optional) Either a boolean, in which case it controls whether 11:55:59 we verify the server's TLS certificate, or a string, in which case it 11:55:59 must be a path to a CA bundle to use 11:55:59 :param cert: (optional) Any user-provided SSL certificate to be trusted. 11:55:59 :param proxies: (optional) The proxies dictionary to apply to the request. 11:55:59 :rtype: requests.Response 11:55:59 """ 11:55:59 11:55:59 try: 11:55:59 conn = self.get_connection_with_tls_context( 11:55:59 request, verify, proxies=proxies, cert=cert 11:55:59 ) 11:55:59 except LocationValueError as e: 11:55:59 raise InvalidURL(e, request=request) 11:55:59 11:55:59 self.cert_verify(conn, request.url, verify, cert) 11:55:59 url = self.request_url(request, proxies) 11:55:59 self.add_headers( 11:55:59 request, 11:55:59 stream=stream, 11:55:59 timeout=timeout, 11:55:59 verify=verify, 11:55:59 cert=cert, 11:55:59 proxies=proxies, 11:55:59 ) 11:55:59 11:55:59 chunked = not (request.body is None or "Content-Length" in request.headers) 11:55:59 11:55:59 if isinstance(timeout, tuple): 11:55:59 try: 11:55:59 connect, read = timeout 11:55:59 timeout = TimeoutSauce(connect=connect, read=read) 11:55:59 except ValueError: 11:55:59 raise ValueError( 11:55:59 f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 11:55:59 f"or a single float to set both timeouts to the same value." 11:55:59 ) 11:55:59 elif isinstance(timeout, TimeoutSauce): 11:55:59 pass 11:55:59 else: 11:55:59 timeout = TimeoutSauce(connect=timeout, read=timeout) 11:55:59 11:55:59 try: 11:55:59 resp = conn.urlopen( 11:55:59 method=request.method, 11:55:59 url=url, 11:55:59 body=request.body, 11:55:59 headers=request.headers, 11:55:59 redirect=False, 11:55:59 assert_same_host=False, 11:55:59 preload_content=False, 11:55:59 decode_content=False, 11:55:59 retries=self.max_retries, 11:55:59 timeout=timeout, 11:55:59 chunked=chunked, 11:55:59 ) 11:55:59 11:55:59 except (ProtocolError, OSError) as err: 11:55:59 raise ConnectionError(err, request=request) 11:55:59 11:55:59 except MaxRetryError as e: 11:55:59 if isinstance(e.reason, ConnectTimeoutError): 11:55:59 # TODO: Remove this in 3.0.0: see #2811 11:55:59 if not isinstance(e.reason, NewConnectionError): 11:55:59 raise ConnectTimeout(e, request=request) 11:55:59 11:55:59 if isinstance(e.reason, ResponseError): 11:55:59 raise RetryError(e, request=request) 11:55:59 11:55:59 if isinstance(e.reason, _ProxyError): 11:55:59 raise ProxyError(e, request=request) 11:55:59 11:55:59 if isinstance(e.reason, _SSLError): 11:55:59 # This branch is for urllib3 v1.22 and later. 
11:55:59 raise SSLError(e, request=request) 11:55:59 11:55:59 > raise ConnectionError(e, request=request) 11:55:59 E requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=8183): Max retries exceeded with url: /rests/data/network-topology:network-topology/topology=topology-netconf/node=XPDR-C1 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 11:55:59 11:55:59 ../.tox/tests_tapi/lib/python3.11/site-packages/requests/adapters.py:700: ConnectionError 11:55:59 ----------------------------- Captured stdout call ----------------------------- 11:55:59 execution of test_48_disconnect_xpdrc 11:55:59 _______________ TransportTapitesting.test_49_disconnect_spdr_sa1 _______________ 11:55:59 11:55:59 self = 11:55:59 11:55:59 def _new_conn(self) -> socket.socket: 11:55:59 """Establish a socket connection and set nodelay settings on it. 11:55:59 11:55:59 :return: New socket connection. 11:55:59 """ 11:55:59 try: 11:55:59 > sock = connection.create_connection( 11:55:59 (self._dns_host, self.port), 11:55:59 self.timeout, 11:55:59 source_address=self.source_address, 11:55:59 socket_options=self.socket_options, 11:55:59 ) 11:55:59 11:55:59 ../.tox/tests_tapi/lib/python3.11/site-packages/urllib3/connection.py:198: 11:55:59 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 11:55:59 ../.tox/tests_tapi/lib/python3.11/site-packages/urllib3/util/connection.py:85: in create_connection 11:55:59 raise err 11:55:59 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 11:55:59 11:55:59 address = ('localhost', 8183), timeout = 30, source_address = None 11:55:59 socket_options = [(6, 1, 1)] 11:55:59 11:55:59 def create_connection( 11:55:59 address: tuple[str, int], 11:55:59 timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 11:55:59 source_address: tuple[str, int] | None = None, 11:55:59 socket_options: _TYPE_SOCKET_OPTIONS | None = None, 11:55:59 ) -> socket.socket: 11:55:59 """Connect to *address* and return the socket object. 11:55:59 11:55:59 Convenience function. Connect to *address* (a 2-tuple ``(host, 11:55:59 port)``) and return the socket object. Passing the optional 11:55:59 *timeout* parameter will set the timeout on the socket instance 11:55:59 before attempting to connect. If no *timeout* is supplied, the 11:55:59 global default timeout setting returned by :func:`socket.getdefaulttimeout` 11:55:59 is used. If *source_address* is set it must be a tuple of (host, port) 11:55:59 for the socket to bind as a source address before making the connection. 11:55:59 An host of '' or port 0 tells the OS to use the default. 11:55:59 """ 11:55:59 11:55:59 host, port = address 11:55:59 if host.startswith("["): 11:55:59 host = host.strip("[]") 11:55:59 err = None 11:55:59 11:55:59 # Using the value from allowed_gai_family() in the context of getaddrinfo lets 11:55:59 # us select whether to work with IPv4 DNS records, IPv6 records, or both. 11:55:59 # The original create_connection function always returns all records. 
11:55:59 family = allowed_gai_family() 11:55:59 11:55:59 try: 11:55:59 host.encode("idna") 11:55:59 except UnicodeError: 11:55:59 raise LocationParseError(f"'{host}', label empty or too long") from None 11:55:59 11:55:59 for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM): 11:55:59 af, socktype, proto, canonname, sa = res 11:55:59 sock = None 11:55:59 try: 11:55:59 sock = socket.socket(af, socktype, proto) 11:55:59 11:55:59 # If provided, set socket level options before connecting. 11:55:59 _set_socket_options(sock, socket_options) 11:55:59 11:55:59 if timeout is not _DEFAULT_TIMEOUT: 11:55:59 sock.settimeout(timeout) 11:55:59 if source_address: 11:55:59 sock.bind(source_address) 11:55:59 > sock.connect(sa) 11:55:59 E ConnectionRefusedError: [Errno 111] Connection refused 11:55:59 11:55:59 ../.tox/tests_tapi/lib/python3.11/site-packages/urllib3/util/connection.py:73: ConnectionRefusedError 11:55:59 11:55:59 The above exception was the direct cause of the following exception: 11:55:59 11:55:59 self = 11:55:59 method = 'DELETE' 11:55:59 url = '/rests/data/network-topology:network-topology/topology=topology-netconf/node=SPDR-SA1' 11:55:59 body = None 11:55:59 headers = {'User-Agent': 'python-requests/2.32.4', 'Accept-Encoding': 'gzip, deflate', 'Accept': 'application/json', 'Connection': 'keep-alive', 'Content-Type': 'application/json', 'Content-Length': '0', 'Authorization': 'Basic YWRtaW46YWRtaW4='} 11:55:59 retries = Retry(total=0, connect=None, read=False, redirect=None, status=None) 11:55:59 redirect = False, assert_same_host = False 11:55:59 timeout = Timeout(connect=30, read=30, total=None), pool_timeout = None 11:55:59 release_conn = False, chunked = False, body_pos = None, preload_content = False 11:55:59 decode_content = False, response_kw = {} 11:55:59 parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/rests/data/network-topology:network-topology/topology=topology-netconf/node=SPDR-SA1', query=None, fragment=None) 11:55:59 destination_scheme = None, conn = None, release_this_conn = True 11:55:59 http_tunnel_required = False, err = None, clean_exit = False 11:55:59 11:55:59 def urlopen( # type: ignore[override] 11:55:59 self, 11:55:59 method: str, 11:55:59 url: str, 11:55:59 body: _TYPE_BODY | None = None, 11:55:59 headers: typing.Mapping[str, str] | None = None, 11:55:59 retries: Retry | bool | int | None = None, 11:55:59 redirect: bool = True, 11:55:59 assert_same_host: bool = True, 11:55:59 timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 11:55:59 pool_timeout: int | None = None, 11:55:59 release_conn: bool | None = None, 11:55:59 chunked: bool = False, 11:55:59 body_pos: _TYPE_BODY_POSITION | None = None, 11:55:59 preload_content: bool = True, 11:55:59 decode_content: bool = True, 11:55:59 **response_kw: typing.Any, 11:55:59 ) -> BaseHTTPResponse: 11:55:59 """ 11:55:59 Get a connection from the pool and perform an HTTP request. This is the 11:55:59 lowest level call for making a request, so you'll need to specify all 11:55:59 the raw details. 11:55:59 11:55:59 .. note:: 11:55:59 11:55:59 More commonly, it's appropriate to use a convenience method 11:55:59 such as :meth:`request`. 11:55:59 11:55:59 .. note:: 11:55:59 11:55:59 `release_conn` will only behave as expected if 11:55:59 `preload_content=False` because we want to make 11:55:59 `preload_content=False` the default behaviour someday soon without 11:55:59 breaking backwards compatibility. 11:55:59 11:55:59 :param method: 11:55:59 HTTP request method (such as GET, POST, PUT, etc.) 
11:55:59 11:55:59 :param url: 11:55:59 The URL to perform the request on. 11:55:59 11:55:59 :param body: 11:55:59 Data to send in the request body, either :class:`str`, :class:`bytes`, 11:55:59 an iterable of :class:`str`/:class:`bytes`, or a file-like object. 11:55:59 11:55:59 :param headers: 11:55:59 Dictionary of custom headers to send, such as User-Agent, 11:55:59 If-None-Match, etc. If None, pool headers are used. If provided, 11:55:59 these headers completely replace any pool-specific headers. 11:55:59 11:55:59 :param retries: 11:55:59 Configure the number of retries to allow before raising a 11:55:59 :class:`~urllib3.exceptions.MaxRetryError` exception. 11:55:59 11:55:59 If ``None`` (default) will retry 3 times, see ``Retry.DEFAULT``. Pass a 11:55:59 :class:`~urllib3.util.retry.Retry` object for fine-grained control 11:55:59 over different types of retries. 11:55:59 Pass an integer number to retry connection errors that many times, 11:55:59 but no other types of errors. Pass zero to never retry. 11:55:59 11:55:59 If ``False``, then retries are disabled and any exception is raised 11:55:59 immediately. Also, instead of raising a MaxRetryError on redirects, 11:55:59 the redirect response will be returned. 11:55:59 11:55:59 :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. 11:55:59 11:55:59 :param redirect: 11:55:59 If True, automatically handle redirects (status codes 301, 302, 11:55:59 303, 307, 308). Each redirect counts as a retry. Disabling retries 11:55:59 will disable redirect, too. 11:55:59 11:55:59 :param assert_same_host: 11:55:59 If ``True``, will make sure that the host of the pool requests is 11:55:59 consistent else will raise HostChangedError. When ``False``, you can 11:55:59 use the pool on an HTTP proxy and request foreign hosts. 11:55:59 11:55:59 :param timeout: 11:55:59 If specified, overrides the default timeout for this one 11:55:59 request. It may be a float (in seconds) or an instance of 11:55:59 :class:`urllib3.util.Timeout`. 11:55:59 11:55:59 :param pool_timeout: 11:55:59 If set and the pool is set to block=True, then this method will 11:55:59 block for ``pool_timeout`` seconds and raise EmptyPoolError if no 11:55:59 connection is available within the time period. 11:55:59 11:55:59 :param bool preload_content: 11:55:59 If True, the response's body will be preloaded into memory. 11:55:59 11:55:59 :param bool decode_content: 11:55:59 If True, will attempt to decode the body based on the 11:55:59 'content-encoding' header. 11:55:59 11:55:59 :param release_conn: 11:55:59 If False, then the urlopen call will not release the connection 11:55:59 back into the pool once a response is received (but will release if 11:55:59 you read the entire contents of the response such as when 11:55:59 `preload_content=True`). This is useful if you're not preloading 11:55:59 the response's content immediately. You will need to call 11:55:59 ``r.release_conn()`` on the response ``r`` to return the connection 11:55:59 back into the pool. If None, it takes the value of ``preload_content`` 11:55:59 which defaults to ``True``. 11:55:59 11:55:59 :param bool chunked: 11:55:59 If True, urllib3 will send the body using chunked transfer 11:55:59 encoding. Otherwise, urllib3 will send the body using the standard 11:55:59 content-length form. Defaults to False. 11:55:59 11:55:59 :param int body_pos: 11:55:59 Position to seek to in file-like body in the event of a retry or 11:55:59 redirect. 
Typically this won't need to be set because urllib3 will 11:55:59 auto-populate the value when needed. 11:55:59 """ 11:55:59 parsed_url = parse_url(url) 11:55:59 destination_scheme = parsed_url.scheme 11:55:59 11:55:59 if headers is None: 11:55:59 headers = self.headers 11:55:59 11:55:59 if not isinstance(retries, Retry): 11:55:59 retries = Retry.from_int(retries, redirect=redirect, default=self.retries) 11:55:59 11:55:59 if release_conn is None: 11:55:59 release_conn = preload_content 11:55:59 11:55:59 # Check host 11:55:59 if assert_same_host and not self.is_same_host(url): 11:55:59 raise HostChangedError(self, url, retries) 11:55:59 11:55:59 # Ensure that the URL we're connecting to is properly encoded 11:55:59 if url.startswith("/"): 11:55:59 url = to_str(_encode_target(url)) 11:55:59 else: 11:55:59 url = to_str(parsed_url.url) 11:55:59 11:55:59 conn = None 11:55:59 11:55:59 # Track whether `conn` needs to be released before 11:55:59 # returning/raising/recursing. Update this variable if necessary, and 11:55:59 # leave `release_conn` constant throughout the function. That way, if 11:55:59 # the function recurses, the original value of `release_conn` will be 11:55:59 # passed down into the recursive call, and its value will be respected. 11:55:59 # 11:55:59 # See issue #651 [1] for details. 11:55:59 # 11:55:59 # [1] 11:55:59 release_this_conn = release_conn 11:55:59 11:55:59 http_tunnel_required = connection_requires_http_tunnel( 11:55:59 self.proxy, self.proxy_config, destination_scheme 11:55:59 ) 11:55:59 11:55:59 # Merge the proxy headers. Only done when not using HTTP CONNECT. We 11:55:59 # have to copy the headers dict so we can safely change it without those 11:55:59 # changes being reflected in anyone else's copy. 11:55:59 if not http_tunnel_required: 11:55:59 headers = headers.copy() # type: ignore[attr-defined] 11:55:59 headers.update(self.proxy_headers) # type: ignore[union-attr] 11:55:59 11:55:59 # Must keep the exception bound to a separate variable or else Python 3 11:55:59 # complains about UnboundLocalError. 11:55:59 err = None 11:55:59 11:55:59 # Keep track of whether we cleanly exited the except block. This 11:55:59 # ensures we do proper cleanup in finally. 11:55:59 clean_exit = False 11:55:59 11:55:59 # Rewind body position, if needed. Record current position 11:55:59 # for future rewinds in the event of a redirect/retry. 11:55:59 body_pos = set_file_position(body, body_pos) 11:55:59 11:55:59 try: 11:55:59 # Request a connection from the queue. 11:55:59 timeout_obj = self._get_timeout(timeout) 11:55:59 conn = self._get_conn(timeout=pool_timeout) 11:55:59 11:55:59 conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] 11:55:59 11:55:59 # Is this a closed/new connection that requires CONNECT tunnelling? 11:55:59 if self.proxy is not None and http_tunnel_required and conn.is_closed: 11:55:59 try: 11:55:59 self._prepare_proxy(conn) 11:55:59 except (BaseSSLError, OSError, SocketTimeout) as e: 11:55:59 self._raise_timeout( 11:55:59 err=e, url=self.proxy.url, timeout_value=conn.timeout 11:55:59 ) 11:55:59 raise 11:55:59 11:55:59 # If we're going to release the connection in ``finally:``, then 11:55:59 # the response doesn't need to know about the connection. Otherwise 11:55:59 # it will also try to release it and we'll have a double-release 11:55:59 # mess. 
11:55:59 response_conn = conn if not release_conn else None 11:55:59 11:55:59 # Make the request on the HTTPConnection object 11:55:59 > response = self._make_request( 11:55:59 conn, 11:55:59 method, 11:55:59 url, 11:55:59 timeout=timeout_obj, 11:55:59 body=body, 11:55:59 headers=headers, 11:55:59 chunked=chunked, 11:55:59 retries=retries, 11:55:59 response_conn=response_conn, 11:55:59 preload_content=preload_content, 11:55:59 decode_content=decode_content, 11:55:59 **response_kw, 11:55:59 ) 11:55:59 11:55:59 ../.tox/tests_tapi/lib/python3.11/site-packages/urllib3/connectionpool.py:787: 11:55:59 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 11:55:59 ../.tox/tests_tapi/lib/python3.11/site-packages/urllib3/connectionpool.py:493: in _make_request 11:55:59 conn.request( 11:55:59 ../.tox/tests_tapi/lib/python3.11/site-packages/urllib3/connection.py:494: in request 11:55:59 self.endheaders() 11:55:59 /opt/pyenv/versions/3.11.10/lib/python3.11/http/client.py:1298: in endheaders 11:55:59 self._send_output(message_body, encode_chunked=encode_chunked) 11:55:59 /opt/pyenv/versions/3.11.10/lib/python3.11/http/client.py:1058: in _send_output 11:55:59 self.send(msg) 11:55:59 /opt/pyenv/versions/3.11.10/lib/python3.11/http/client.py:996: in send 11:55:59 self.connect() 11:55:59 ../.tox/tests_tapi/lib/python3.11/site-packages/urllib3/connection.py:325: in connect 11:55:59 self.sock = self._new_conn() 11:55:59 ^^^^^^^^^^^^^^^^ 11:55:59 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 11:55:59 11:55:59 self = 11:55:59 11:55:59 def _new_conn(self) -> socket.socket: 11:55:59 """Establish a socket connection and set nodelay settings on it. 11:55:59 11:55:59 :return: New socket connection. 11:55:59 """ 11:55:59 try: 11:55:59 sock = connection.create_connection( 11:55:59 (self._dns_host, self.port), 11:55:59 self.timeout, 11:55:59 source_address=self.source_address, 11:55:59 socket_options=self.socket_options, 11:55:59 ) 11:55:59 except socket.gaierror as e: 11:55:59 raise NameResolutionError(self.host, self, e) from e 11:55:59 except SocketTimeout as e: 11:55:59 raise ConnectTimeoutError( 11:55:59 self, 11:55:59 f"Connection to {self.host} timed out. (connect timeout={self.timeout})", 11:55:59 ) from e 11:55:59 11:55:59 except OSError as e: 11:55:59 > raise NewConnectionError( 11:55:59 self, f"Failed to establish a new connection: {e}" 11:55:59 ) from e 11:55:59 E urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused 11:55:59 11:55:59 ../.tox/tests_tapi/lib/python3.11/site-packages/urllib3/connection.py:213: NewConnectionError 11:55:59 11:55:59 The above exception was the direct cause of the following exception: 11:55:59 11:55:59 self = 11:55:59 request = , stream = False 11:55:59 timeout = Timeout(connect=30, read=30, total=None), verify = True, cert = None 11:55:59 proxies = OrderedDict() 11:55:59 11:55:59 def send( 11:55:59 self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 11:55:59 ): 11:55:59 """Sends PreparedRequest object. Returns Response object. 11:55:59 11:55:59 :param request: The :class:`PreparedRequest ` being sent. 11:55:59 :param stream: (optional) Whether to stream the request content. 11:55:59 :param timeout: (optional) How long to wait for the server to send 11:55:59 data before giving up, as a float, or a :ref:`(connect timeout, 11:55:59 read timeout) ` tuple. 
11:55:59 :type timeout: float or tuple or urllib3 Timeout object 11:55:59 :param verify: (optional) Either a boolean, in which case it controls whether 11:55:59 we verify the server's TLS certificate, or a string, in which case it 11:55:59 must be a path to a CA bundle to use 11:55:59 :param cert: (optional) Any user-provided SSL certificate to be trusted. 11:55:59 :param proxies: (optional) The proxies dictionary to apply to the request. 11:55:59 :rtype: requests.Response 11:55:59 """ 11:55:59 11:55:59 try: 11:55:59 conn = self.get_connection_with_tls_context( 11:55:59 request, verify, proxies=proxies, cert=cert 11:55:59 ) 11:55:59 except LocationValueError as e: 11:55:59 raise InvalidURL(e, request=request) 11:55:59 11:55:59 self.cert_verify(conn, request.url, verify, cert) 11:55:59 url = self.request_url(request, proxies) 11:55:59 self.add_headers( 11:55:59 request, 11:55:59 stream=stream, 11:55:59 timeout=timeout, 11:55:59 verify=verify, 11:55:59 cert=cert, 11:55:59 proxies=proxies, 11:55:59 ) 11:55:59 11:55:59 chunked = not (request.body is None or "Content-Length" in request.headers) 11:55:59 11:55:59 if isinstance(timeout, tuple): 11:55:59 try: 11:55:59 connect, read = timeout 11:55:59 timeout = TimeoutSauce(connect=connect, read=read) 11:55:59 except ValueError: 11:55:59 raise ValueError( 11:55:59 f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 11:55:59 f"or a single float to set both timeouts to the same value." 11:55:59 ) 11:55:59 elif isinstance(timeout, TimeoutSauce): 11:55:59 pass 11:55:59 else: 11:55:59 timeout = TimeoutSauce(connect=timeout, read=timeout) 11:55:59 11:55:59 try: 11:55:59 > resp = conn.urlopen( 11:55:59 method=request.method, 11:55:59 url=url, 11:55:59 body=request.body, 11:55:59 headers=request.headers, 11:55:59 redirect=False, 11:55:59 assert_same_host=False, 11:55:59 preload_content=False, 11:55:59 decode_content=False, 11:55:59 retries=self.max_retries, 11:55:59 timeout=timeout, 11:55:59 chunked=chunked, 11:55:59 ) 11:55:59 11:55:59 ../.tox/tests_tapi/lib/python3.11/site-packages/requests/adapters.py:667: 11:55:59 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 11:55:59 ../.tox/tests_tapi/lib/python3.11/site-packages/urllib3/connectionpool.py:841: in urlopen 11:55:59 retries = retries.increment( 11:55:59 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 11:55:59 11:55:59 self = Retry(total=0, connect=None, read=False, redirect=None, status=None) 11:55:59 method = 'DELETE' 11:55:59 url = '/rests/data/network-topology:network-topology/topology=topology-netconf/node=SPDR-SA1' 11:55:59 response = None 11:55:59 error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused') 11:55:59 _pool = 11:55:59 _stacktrace = 11:55:59 11:55:59 def increment( 11:55:59 self, 11:55:59 method: str | None = None, 11:55:59 url: str | None = None, 11:55:59 response: BaseHTTPResponse | None = None, 11:55:59 error: Exception | None = None, 11:55:59 _pool: ConnectionPool | None = None, 11:55:59 _stacktrace: TracebackType | None = None, 11:55:59 ) -> Self: 11:55:59 """Return a new Retry object with incremented retry counters. 11:55:59 11:55:59 :param response: A response object, or None, if the server did not 11:55:59 return a response. 11:55:59 :type response: :class:`~urllib3.response.BaseHTTPResponse` 11:55:59 :param Exception error: An error encountered during the request, or 11:55:59 None if the response was received successfully. 
11:55:59 11:55:59 :return: A new ``Retry`` object. 11:55:59 """ 11:55:59 if self.total is False and error: 11:55:59 # Disabled, indicate to re-raise the error. 11:55:59 raise reraise(type(error), error, _stacktrace) 11:55:59 11:55:59 total = self.total 11:55:59 if total is not None: 11:55:59 total -= 1 11:55:59 11:55:59 connect = self.connect 11:55:59 read = self.read 11:55:59 redirect = self.redirect 11:55:59 status_count = self.status 11:55:59 other = self.other 11:55:59 cause = "unknown" 11:55:59 status = None 11:55:59 redirect_location = None 11:55:59 11:55:59 if error and self._is_connection_error(error): 11:55:59 # Connect retry? 11:55:59 if connect is False: 11:55:59 raise reraise(type(error), error, _stacktrace) 11:55:59 elif connect is not None: 11:55:59 connect -= 1 11:55:59 11:55:59 elif error and self._is_read_error(error): 11:55:59 # Read retry? 11:55:59 if read is False or method is None or not self._is_method_retryable(method): 11:55:59 raise reraise(type(error), error, _stacktrace) 11:55:59 elif read is not None: 11:55:59 read -= 1 11:55:59 11:55:59 elif error: 11:55:59 # Other retry? 11:55:59 if other is not None: 11:55:59 other -= 1 11:55:59 11:55:59 elif response and response.get_redirect_location(): 11:55:59 # Redirect retry? 11:55:59 if redirect is not None: 11:55:59 redirect -= 1 11:55:59 cause = "too many redirects" 11:55:59 response_redirect_location = response.get_redirect_location() 11:55:59 if response_redirect_location: 11:55:59 redirect_location = response_redirect_location 11:55:59 status = response.status 11:55:59 11:55:59 else: 11:55:59 # Incrementing because of a server error like a 500 in 11:55:59 # status_forcelist and the given method is in the allowed_methods 11:55:59 cause = ResponseError.GENERIC_ERROR 11:55:59 if response and response.status: 11:55:59 if status_count is not None: 11:55:59 status_count -= 1 11:55:59 cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status) 11:55:59 status = response.status 11:55:59 11:55:59 history = self.history + ( 11:55:59 RequestHistory(method, url, error, status, redirect_location), 11:55:59 ) 11:55:59 11:55:59 new_retry = self.new( 11:55:59 total=total, 11:55:59 connect=connect, 11:55:59 read=read, 11:55:59 redirect=redirect, 11:55:59 status=status_count, 11:55:59 other=other, 11:55:59 history=history, 11:55:59 ) 11:55:59 11:55:59 if new_retry.is_exhausted(): 11:55:59 reason = error or ResponseError(cause) 11:55:59 > raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type] 11:55:59 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 11:55:59 E urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=8183): Max retries exceeded with url: /rests/data/network-topology:network-topology/topology=topology-netconf/node=SPDR-SA1 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 11:55:59 11:55:59 ../.tox/tests_tapi/lib/python3.11/site-packages/urllib3/util/retry.py:519: MaxRetryError 11:55:59 11:55:59 During handling of the above exception, another exception occurred: 11:55:59 11:55:59 self = 11:55:59 11:55:59 def test_49_disconnect_spdr_sa1(self): 11:55:59 > response = test_utils.unmount_device("SPDR-SA1") 11:55:59 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 11:55:59 11:55:59 transportpce_tests/tapi/test01_abstracted_topology.py:675: 11:55:59 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 11:55:59 transportpce_tests/common/test_utils.py:379: in unmount_device 11:55:59 response = 
delete_request(url[RESTCONF_VERSION].format('{}', node)) 11:55:59 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 11:55:59 transportpce_tests/common/test_utils.py:133: in delete_request 11:55:59 return requests.request( 11:55:59 ../.tox/tests_tapi/lib/python3.11/site-packages/requests/api.py:59: in request 11:55:59 return session.request(method=method, url=url, **kwargs) 11:55:59 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 11:55:59 ../.tox/tests_tapi/lib/python3.11/site-packages/requests/sessions.py:589: in request 11:55:59 resp = self.send(prep, **send_kwargs) 11:55:59 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 11:55:59 ../.tox/tests_tapi/lib/python3.11/site-packages/requests/sessions.py:703: in send 11:55:59 r = adapter.send(request, **kwargs) 11:55:59 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 11:55:59 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 11:55:59 11:55:59 self = 11:55:59 request = , stream = False 11:55:59 timeout = Timeout(connect=30, read=30, total=None), verify = True, cert = None 11:55:59 proxies = OrderedDict() 11:55:59 11:55:59 def send( 11:55:59 self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 11:55:59 ): 11:55:59 """Sends PreparedRequest object. Returns Response object. 11:55:59 11:55:59 :param request: The :class:`PreparedRequest ` being sent. 11:55:59 :param stream: (optional) Whether to stream the request content. 11:55:59 :param timeout: (optional) How long to wait for the server to send 11:55:59 data before giving up, as a float, or a :ref:`(connect timeout, 11:55:59 read timeout) ` tuple. 11:55:59 :type timeout: float or tuple or urllib3 Timeout object 11:55:59 :param verify: (optional) Either a boolean, in which case it controls whether 11:55:59 we verify the server's TLS certificate, or a string, in which case it 11:55:59 must be a path to a CA bundle to use 11:55:59 :param cert: (optional) Any user-provided SSL certificate to be trusted. 11:55:59 :param proxies: (optional) The proxies dictionary to apply to the request. 11:55:59 :rtype: requests.Response 11:55:59 """ 11:55:59 11:55:59 try: 11:55:59 conn = self.get_connection_with_tls_context( 11:55:59 request, verify, proxies=proxies, cert=cert 11:55:59 ) 11:55:59 except LocationValueError as e: 11:55:59 raise InvalidURL(e, request=request) 11:55:59 11:55:59 self.cert_verify(conn, request.url, verify, cert) 11:55:59 url = self.request_url(request, proxies) 11:55:59 self.add_headers( 11:55:59 request, 11:55:59 stream=stream, 11:55:59 timeout=timeout, 11:55:59 verify=verify, 11:55:59 cert=cert, 11:55:59 proxies=proxies, 11:55:59 ) 11:55:59 11:55:59 chunked = not (request.body is None or "Content-Length" in request.headers) 11:55:59 11:55:59 if isinstance(timeout, tuple): 11:55:59 try: 11:55:59 connect, read = timeout 11:55:59 timeout = TimeoutSauce(connect=connect, read=read) 11:55:59 except ValueError: 11:55:59 raise ValueError( 11:55:59 f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 11:55:59 f"or a single float to set both timeouts to the same value." 
11:55:59 ) 11:55:59 elif isinstance(timeout, TimeoutSauce): 11:55:59 pass 11:55:59 else: 11:55:59 timeout = TimeoutSauce(connect=timeout, read=timeout) 11:55:59 11:55:59 try: 11:55:59 resp = conn.urlopen( 11:55:59 method=request.method, 11:55:59 url=url, 11:55:59 body=request.body, 11:55:59 headers=request.headers, 11:55:59 redirect=False, 11:55:59 assert_same_host=False, 11:55:59 preload_content=False, 11:55:59 decode_content=False, 11:55:59 retries=self.max_retries, 11:55:59 timeout=timeout, 11:55:59 chunked=chunked, 11:55:59 ) 11:55:59 11:55:59 except (ProtocolError, OSError) as err: 11:55:59 raise ConnectionError(err, request=request) 11:55:59 11:55:59 except MaxRetryError as e: 11:55:59 if isinstance(e.reason, ConnectTimeoutError): 11:55:59 # TODO: Remove this in 3.0.0: see #2811 11:55:59 if not isinstance(e.reason, NewConnectionError): 11:55:59 raise ConnectTimeout(e, request=request) 11:55:59 11:55:59 if isinstance(e.reason, ResponseError): 11:55:59 raise RetryError(e, request=request) 11:55:59 11:55:59 if isinstance(e.reason, _ProxyError): 11:55:59 raise ProxyError(e, request=request) 11:55:59 11:55:59 if isinstance(e.reason, _SSLError): 11:55:59 # This branch is for urllib3 v1.22 and later. 11:55:59 raise SSLError(e, request=request) 11:55:59 11:55:59 > raise ConnectionError(e, request=request) 11:55:59 E requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=8183): Max retries exceeded with url: /rests/data/network-topology:network-topology/topology=topology-netconf/node=SPDR-SA1 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 11:55:59 11:55:59 ../.tox/tests_tapi/lib/python3.11/site-packages/requests/adapters.py:700: ConnectionError 11:55:59 ----------------------------- Captured stdout call ----------------------------- 11:55:59 execution of test_49_disconnect_spdr_sa1 11:55:59 _______________ TransportTapitesting.test_50_disconnect_spdr_sc1 _______________ 11:55:59 11:55:59 self = 11:55:59 11:55:59 def _new_conn(self) -> socket.socket: 11:55:59 """Establish a socket connection and set nodelay settings on it. 11:55:59 11:55:59 :return: New socket connection. 11:55:59 """ 11:55:59 try: 11:55:59 > sock = connection.create_connection( 11:55:59 (self._dns_host, self.port), 11:55:59 self.timeout, 11:55:59 source_address=self.source_address, 11:55:59 socket_options=self.socket_options, 11:55:59 ) 11:55:59 11:55:59 ../.tox/tests_tapi/lib/python3.11/site-packages/urllib3/connection.py:198: 11:55:59 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 11:55:59 ../.tox/tests_tapi/lib/python3.11/site-packages/urllib3/util/connection.py:85: in create_connection 11:55:59 raise err 11:55:59 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 11:55:59 11:55:59 address = ('localhost', 8183), timeout = 30, source_address = None 11:55:59 socket_options = [(6, 1, 1)] 11:55:59 11:55:59 def create_connection( 11:55:59 address: tuple[str, int], 11:55:59 timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 11:55:59 source_address: tuple[str, int] | None = None, 11:55:59 socket_options: _TYPE_SOCKET_OPTIONS | None = None, 11:55:59 ) -> socket.socket: 11:55:59 """Connect to *address* and return the socket object. 11:55:59 11:55:59 Convenience function. Connect to *address* (a 2-tuple ``(host, 11:55:59 port)``) and return the socket object. Passing the optional 11:55:59 *timeout* parameter will set the timeout on the socket instance 11:55:59 before attempting to connect. 
If no *timeout* is supplied, the 11:55:59 global default timeout setting returned by :func:`socket.getdefaulttimeout` 11:55:59 is used. If *source_address* is set it must be a tuple of (host, port) 11:55:59 for the socket to bind as a source address before making the connection. 11:55:59 An host of '' or port 0 tells the OS to use the default. 11:55:59 """ 11:55:59 11:55:59 host, port = address 11:55:59 if host.startswith("["): 11:55:59 host = host.strip("[]") 11:55:59 err = None 11:55:59 11:55:59 # Using the value from allowed_gai_family() in the context of getaddrinfo lets 11:55:59 # us select whether to work with IPv4 DNS records, IPv6 records, or both. 11:55:59 # The original create_connection function always returns all records. 11:55:59 family = allowed_gai_family() 11:55:59 11:55:59 try: 11:55:59 host.encode("idna") 11:55:59 except UnicodeError: 11:55:59 raise LocationParseError(f"'{host}', label empty or too long") from None 11:55:59 11:55:59 for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM): 11:55:59 af, socktype, proto, canonname, sa = res 11:55:59 sock = None 11:55:59 try: 11:55:59 sock = socket.socket(af, socktype, proto) 11:55:59 11:55:59 # If provided, set socket level options before connecting. 11:55:59 _set_socket_options(sock, socket_options) 11:55:59 11:55:59 if timeout is not _DEFAULT_TIMEOUT: 11:55:59 sock.settimeout(timeout) 11:55:59 if source_address: 11:55:59 sock.bind(source_address) 11:55:59 > sock.connect(sa) 11:55:59 E ConnectionRefusedError: [Errno 111] Connection refused 11:55:59 11:55:59 ../.tox/tests_tapi/lib/python3.11/site-packages/urllib3/util/connection.py:73: ConnectionRefusedError 11:55:59 11:55:59 The above exception was the direct cause of the following exception: 11:55:59 11:55:59 self = 11:55:59 method = 'DELETE' 11:55:59 url = '/rests/data/network-topology:network-topology/topology=topology-netconf/node=SPDR-SC1' 11:55:59 body = None 11:55:59 headers = {'User-Agent': 'python-requests/2.32.4', 'Accept-Encoding': 'gzip, deflate', 'Accept': 'application/json', 'Connection': 'keep-alive', 'Content-Type': 'application/json', 'Content-Length': '0', 'Authorization': 'Basic YWRtaW46YWRtaW4='} 11:55:59 retries = Retry(total=0, connect=None, read=False, redirect=None, status=None) 11:55:59 redirect = False, assert_same_host = False 11:55:59 timeout = Timeout(connect=30, read=30, total=None), pool_timeout = None 11:55:59 release_conn = False, chunked = False, body_pos = None, preload_content = False 11:55:59 decode_content = False, response_kw = {} 11:55:59 parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/rests/data/network-topology:network-topology/topology=topology-netconf/node=SPDR-SC1', query=None, fragment=None) 11:55:59 destination_scheme = None, conn = None, release_this_conn = True 11:55:59 http_tunnel_required = False, err = None, clean_exit = False 11:55:59 11:55:59 def urlopen( # type: ignore[override] 11:55:59 self, 11:55:59 method: str, 11:55:59 url: str, 11:55:59 body: _TYPE_BODY | None = None, 11:55:59 headers: typing.Mapping[str, str] | None = None, 11:55:59 retries: Retry | bool | int | None = None, 11:55:59 redirect: bool = True, 11:55:59 assert_same_host: bool = True, 11:55:59 timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 11:55:59 pool_timeout: int | None = None, 11:55:59 release_conn: bool | None = None, 11:55:59 chunked: bool = False, 11:55:59 body_pos: _TYPE_BODY_POSITION | None = None, 11:55:59 preload_content: bool = True, 11:55:59 decode_content: bool = True, 11:55:59 **response_kw: typing.Any, 
11:55:59 ) -> BaseHTTPResponse: 11:55:59 """ 11:55:59 Get a connection from the pool and perform an HTTP request. This is the 11:55:59 lowest level call for making a request, so you'll need to specify all 11:55:59 the raw details. 11:55:59 11:55:59 .. note:: 11:55:59 11:55:59 More commonly, it's appropriate to use a convenience method 11:55:59 such as :meth:`request`. 11:55:59 11:55:59 .. note:: 11:55:59 11:55:59 `release_conn` will only behave as expected if 11:55:59 `preload_content=False` because we want to make 11:55:59 `preload_content=False` the default behaviour someday soon without 11:55:59 breaking backwards compatibility. 11:55:59 11:55:59 :param method: 11:55:59 HTTP request method (such as GET, POST, PUT, etc.) 11:55:59 11:55:59 :param url: 11:55:59 The URL to perform the request on. 11:55:59 11:55:59 :param body: 11:55:59 Data to send in the request body, either :class:`str`, :class:`bytes`, 11:55:59 an iterable of :class:`str`/:class:`bytes`, or a file-like object. 11:55:59 11:55:59 :param headers: 11:55:59 Dictionary of custom headers to send, such as User-Agent, 11:55:59 If-None-Match, etc. If None, pool headers are used. If provided, 11:55:59 these headers completely replace any pool-specific headers. 11:55:59 11:55:59 :param retries: 11:55:59 Configure the number of retries to allow before raising a 11:55:59 :class:`~urllib3.exceptions.MaxRetryError` exception. 11:55:59 11:55:59 If ``None`` (default) will retry 3 times, see ``Retry.DEFAULT``. Pass a 11:55:59 :class:`~urllib3.util.retry.Retry` object for fine-grained control 11:55:59 over different types of retries. 11:55:59 Pass an integer number to retry connection errors that many times, 11:55:59 but no other types of errors. Pass zero to never retry. 11:55:59 11:55:59 If ``False``, then retries are disabled and any exception is raised 11:55:59 immediately. Also, instead of raising a MaxRetryError on redirects, 11:55:59 the redirect response will be returned. 11:55:59 11:55:59 :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. 11:55:59 11:55:59 :param redirect: 11:55:59 If True, automatically handle redirects (status codes 301, 302, 11:55:59 303, 307, 308). Each redirect counts as a retry. Disabling retries 11:55:59 will disable redirect, too. 11:55:59 11:55:59 :param assert_same_host: 11:55:59 If ``True``, will make sure that the host of the pool requests is 11:55:59 consistent else will raise HostChangedError. When ``False``, you can 11:55:59 use the pool on an HTTP proxy and request foreign hosts. 11:55:59 11:55:59 :param timeout: 11:55:59 If specified, overrides the default timeout for this one 11:55:59 request. It may be a float (in seconds) or an instance of 11:55:59 :class:`urllib3.util.Timeout`. 11:55:59 11:55:59 :param pool_timeout: 11:55:59 If set and the pool is set to block=True, then this method will 11:55:59 block for ``pool_timeout`` seconds and raise EmptyPoolError if no 11:55:59 connection is available within the time period. 11:55:59 11:55:59 :param bool preload_content: 11:55:59 If True, the response's body will be preloaded into memory. 11:55:59 11:55:59 :param bool decode_content: 11:55:59 If True, will attempt to decode the body based on the 11:55:59 'content-encoding' header. 11:55:59 11:55:59 :param release_conn: 11:55:59 If False, then the urlopen call will not release the connection 11:55:59 back into the pool once a response is received (but will release if 11:55:59 you read the entire contents of the response such as when 11:55:59 `preload_content=True`). 
This is useful if you're not preloading 11:55:59 the response's content immediately. You will need to call 11:55:59 ``r.release_conn()`` on the response ``r`` to return the connection 11:55:59 back into the pool. If None, it takes the value of ``preload_content`` 11:55:59 which defaults to ``True``. 11:55:59 11:55:59 :param bool chunked: 11:55:59 If True, urllib3 will send the body using chunked transfer 11:55:59 encoding. Otherwise, urllib3 will send the body using the standard 11:55:59 content-length form. Defaults to False. 11:55:59 11:55:59 :param int body_pos: 11:55:59 Position to seek to in file-like body in the event of a retry or 11:55:59 redirect. Typically this won't need to be set because urllib3 will 11:55:59 auto-populate the value when needed. 11:55:59 """ 11:55:59 parsed_url = parse_url(url) 11:55:59 destination_scheme = parsed_url.scheme 11:55:59 11:55:59 if headers is None: 11:55:59 headers = self.headers 11:55:59 11:55:59 if not isinstance(retries, Retry): 11:55:59 retries = Retry.from_int(retries, redirect=redirect, default=self.retries) 11:55:59 11:55:59 if release_conn is None: 11:55:59 release_conn = preload_content 11:55:59 11:55:59 # Check host 11:55:59 if assert_same_host and not self.is_same_host(url): 11:55:59 raise HostChangedError(self, url, retries) 11:55:59 11:55:59 # Ensure that the URL we're connecting to is properly encoded 11:55:59 if url.startswith("/"): 11:55:59 url = to_str(_encode_target(url)) 11:55:59 else: 11:55:59 url = to_str(parsed_url.url) 11:55:59 11:55:59 conn = None 11:55:59 11:55:59 # Track whether `conn` needs to be released before 11:55:59 # returning/raising/recursing. Update this variable if necessary, and 11:55:59 # leave `release_conn` constant throughout the function. That way, if 11:55:59 # the function recurses, the original value of `release_conn` will be 11:55:59 # passed down into the recursive call, and its value will be respected. 11:55:59 # 11:55:59 # See issue #651 [1] for details. 11:55:59 # 11:55:59 # [1] 11:55:59 release_this_conn = release_conn 11:55:59 11:55:59 http_tunnel_required = connection_requires_http_tunnel( 11:55:59 self.proxy, self.proxy_config, destination_scheme 11:55:59 ) 11:55:59 11:55:59 # Merge the proxy headers. Only done when not using HTTP CONNECT. We 11:55:59 # have to copy the headers dict so we can safely change it without those 11:55:59 # changes being reflected in anyone else's copy. 11:55:59 if not http_tunnel_required: 11:55:59 headers = headers.copy() # type: ignore[attr-defined] 11:55:59 headers.update(self.proxy_headers) # type: ignore[union-attr] 11:55:59 11:55:59 # Must keep the exception bound to a separate variable or else Python 3 11:55:59 # complains about UnboundLocalError. 11:55:59 err = None 11:55:59 11:55:59 # Keep track of whether we cleanly exited the except block. This 11:55:59 # ensures we do proper cleanup in finally. 11:55:59 clean_exit = False 11:55:59 11:55:59 # Rewind body position, if needed. Record current position 11:55:59 # for future rewinds in the event of a redirect/retry. 11:55:59 body_pos = set_file_position(body, body_pos) 11:55:59 11:55:59 try: 11:55:59 # Request a connection from the queue. 11:55:59 timeout_obj = self._get_timeout(timeout) 11:55:59 conn = self._get_conn(timeout=pool_timeout) 11:55:59 11:55:59 conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] 11:55:59 11:55:59 # Is this a closed/new connection that requires CONNECT tunnelling? 
11:55:59 if self.proxy is not None and http_tunnel_required and conn.is_closed: 11:55:59 try: 11:55:59 self._prepare_proxy(conn) 11:55:59 except (BaseSSLError, OSError, SocketTimeout) as e: 11:55:59 self._raise_timeout( 11:55:59 err=e, url=self.proxy.url, timeout_value=conn.timeout 11:55:59 ) 11:55:59 raise 11:55:59 11:55:59 # If we're going to release the connection in ``finally:``, then 11:55:59 # the response doesn't need to know about the connection. Otherwise 11:55:59 # it will also try to release it and we'll have a double-release 11:55:59 # mess. 11:55:59 response_conn = conn if not release_conn else None 11:55:59 11:55:59 # Make the request on the HTTPConnection object 11:55:59 > response = self._make_request( 11:55:59 conn, 11:55:59 method, 11:55:59 url, 11:55:59 timeout=timeout_obj, 11:55:59 body=body, 11:55:59 headers=headers, 11:55:59 chunked=chunked, 11:55:59 retries=retries, 11:55:59 response_conn=response_conn, 11:55:59 preload_content=preload_content, 11:55:59 decode_content=decode_content, 11:55:59 **response_kw, 11:55:59 ) 11:55:59 11:55:59 ../.tox/tests_tapi/lib/python3.11/site-packages/urllib3/connectionpool.py:787: 11:55:59 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 11:55:59 ../.tox/tests_tapi/lib/python3.11/site-packages/urllib3/connectionpool.py:493: in _make_request 11:55:59 conn.request( 11:55:59 ../.tox/tests_tapi/lib/python3.11/site-packages/urllib3/connection.py:494: in request 11:55:59 self.endheaders() 11:55:59 /opt/pyenv/versions/3.11.10/lib/python3.11/http/client.py:1298: in endheaders 11:55:59 self._send_output(message_body, encode_chunked=encode_chunked) 11:55:59 /opt/pyenv/versions/3.11.10/lib/python3.11/http/client.py:1058: in _send_output 11:55:59 self.send(msg) 11:55:59 /opt/pyenv/versions/3.11.10/lib/python3.11/http/client.py:996: in send 11:55:59 self.connect() 11:55:59 ../.tox/tests_tapi/lib/python3.11/site-packages/urllib3/connection.py:325: in connect 11:55:59 self.sock = self._new_conn() 11:55:59 ^^^^^^^^^^^^^^^^ 11:55:59 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 11:55:59 11:55:59 self = 11:55:59 11:55:59 def _new_conn(self) -> socket.socket: 11:55:59 """Establish a socket connection and set nodelay settings on it. 11:55:59 11:55:59 :return: New socket connection. 11:55:59 """ 11:55:59 try: 11:55:59 sock = connection.create_connection( 11:55:59 (self._dns_host, self.port), 11:55:59 self.timeout, 11:55:59 source_address=self.source_address, 11:55:59 socket_options=self.socket_options, 11:55:59 ) 11:55:59 except socket.gaierror as e: 11:55:59 raise NameResolutionError(self.host, self, e) from e 11:55:59 except SocketTimeout as e: 11:55:59 raise ConnectTimeoutError( 11:55:59 self, 11:55:59 f"Connection to {self.host} timed out. 
(connect timeout={self.timeout})", 11:55:59 ) from e 11:55:59 11:55:59 except OSError as e: 11:55:59 > raise NewConnectionError( 11:55:59 self, f"Failed to establish a new connection: {e}" 11:55:59 ) from e 11:55:59 E urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused 11:55:59 11:55:59 ../.tox/tests_tapi/lib/python3.11/site-packages/urllib3/connection.py:213: NewConnectionError 11:55:59 11:55:59 The above exception was the direct cause of the following exception: 11:55:59 11:55:59 self = 11:55:59 request = , stream = False 11:55:59 timeout = Timeout(connect=30, read=30, total=None), verify = True, cert = None 11:55:59 proxies = OrderedDict() 11:55:59 11:55:59 def send( 11:55:59 self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 11:55:59 ): 11:55:59 """Sends PreparedRequest object. Returns Response object. 11:55:59 11:55:59 :param request: The :class:`PreparedRequest ` being sent. 11:55:59 :param stream: (optional) Whether to stream the request content. 11:55:59 :param timeout: (optional) How long to wait for the server to send 11:55:59 data before giving up, as a float, or a :ref:`(connect timeout, 11:55:59 read timeout) ` tuple. 11:55:59 :type timeout: float or tuple or urllib3 Timeout object 11:55:59 :param verify: (optional) Either a boolean, in which case it controls whether 11:55:59 we verify the server's TLS certificate, or a string, in which case it 11:55:59 must be a path to a CA bundle to use 11:55:59 :param cert: (optional) Any user-provided SSL certificate to be trusted. 11:55:59 :param proxies: (optional) The proxies dictionary to apply to the request. 11:55:59 :rtype: requests.Response 11:55:59 """ 11:55:59 11:55:59 try: 11:55:59 conn = self.get_connection_with_tls_context( 11:55:59 request, verify, proxies=proxies, cert=cert 11:55:59 ) 11:55:59 except LocationValueError as e: 11:55:59 raise InvalidURL(e, request=request) 11:55:59 11:55:59 self.cert_verify(conn, request.url, verify, cert) 11:55:59 url = self.request_url(request, proxies) 11:55:59 self.add_headers( 11:55:59 request, 11:55:59 stream=stream, 11:55:59 timeout=timeout, 11:55:59 verify=verify, 11:55:59 cert=cert, 11:55:59 proxies=proxies, 11:55:59 ) 11:55:59 11:55:59 chunked = not (request.body is None or "Content-Length" in request.headers) 11:55:59 11:55:59 if isinstance(timeout, tuple): 11:55:59 try: 11:55:59 connect, read = timeout 11:55:59 timeout = TimeoutSauce(connect=connect, read=read) 11:55:59 except ValueError: 11:55:59 raise ValueError( 11:55:59 f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 11:55:59 f"or a single float to set both timeouts to the same value." 
11:55:59 ) 11:55:59 elif isinstance(timeout, TimeoutSauce): 11:55:59 pass 11:55:59 else: 11:55:59 timeout = TimeoutSauce(connect=timeout, read=timeout) 11:55:59 11:55:59 try: 11:55:59 > resp = conn.urlopen( 11:55:59 method=request.method, 11:55:59 url=url, 11:55:59 body=request.body, 11:55:59 headers=request.headers, 11:55:59 redirect=False, 11:55:59 assert_same_host=False, 11:55:59 preload_content=False, 11:55:59 decode_content=False, 11:55:59 retries=self.max_retries, 11:55:59 timeout=timeout, 11:55:59 chunked=chunked, 11:55:59 ) 11:55:59 11:55:59 ../.tox/tests_tapi/lib/python3.11/site-packages/requests/adapters.py:667: 11:55:59 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 11:55:59 ../.tox/tests_tapi/lib/python3.11/site-packages/urllib3/connectionpool.py:841: in urlopen 11:55:59 retries = retries.increment( 11:55:59 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 11:55:59 11:55:59 self = Retry(total=0, connect=None, read=False, redirect=None, status=None) 11:55:59 method = 'DELETE' 11:55:59 url = '/rests/data/network-topology:network-topology/topology=topology-netconf/node=SPDR-SC1' 11:55:59 response = None 11:55:59 error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused') 11:55:59 _pool = 11:55:59 _stacktrace = 11:55:59 11:55:59 def increment( 11:55:59 self, 11:55:59 method: str | None = None, 11:55:59 url: str | None = None, 11:55:59 response: BaseHTTPResponse | None = None, 11:55:59 error: Exception | None = None, 11:55:59 _pool: ConnectionPool | None = None, 11:55:59 _stacktrace: TracebackType | None = None, 11:55:59 ) -> Self: 11:55:59 """Return a new Retry object with incremented retry counters. 11:55:59 11:55:59 :param response: A response object, or None, if the server did not 11:55:59 return a response. 11:55:59 :type response: :class:`~urllib3.response.BaseHTTPResponse` 11:55:59 :param Exception error: An error encountered during the request, or 11:55:59 None if the response was received successfully. 11:55:59 11:55:59 :return: A new ``Retry`` object. 11:55:59 """ 11:55:59 if self.total is False and error: 11:55:59 # Disabled, indicate to re-raise the error. 11:55:59 raise reraise(type(error), error, _stacktrace) 11:55:59 11:55:59 total = self.total 11:55:59 if total is not None: 11:55:59 total -= 1 11:55:59 11:55:59 connect = self.connect 11:55:59 read = self.read 11:55:59 redirect = self.redirect 11:55:59 status_count = self.status 11:55:59 other = self.other 11:55:59 cause = "unknown" 11:55:59 status = None 11:55:59 redirect_location = None 11:55:59 11:55:59 if error and self._is_connection_error(error): 11:55:59 # Connect retry? 11:55:59 if connect is False: 11:55:59 raise reraise(type(error), error, _stacktrace) 11:55:59 elif connect is not None: 11:55:59 connect -= 1 11:55:59 11:55:59 elif error and self._is_read_error(error): 11:55:59 # Read retry? 11:55:59 if read is False or method is None or not self._is_method_retryable(method): 11:55:59 raise reraise(type(error), error, _stacktrace) 11:55:59 elif read is not None: 11:55:59 read -= 1 11:55:59 11:55:59 elif error: 11:55:59 # Other retry? 11:55:59 if other is not None: 11:55:59 other -= 1 11:55:59 11:55:59 elif response and response.get_redirect_location(): 11:55:59 # Redirect retry? 
11:55:59 if redirect is not None: 11:55:59 redirect -= 1 11:55:59 cause = "too many redirects" 11:55:59 response_redirect_location = response.get_redirect_location() 11:55:59 if response_redirect_location: 11:55:59 redirect_location = response_redirect_location 11:55:59 status = response.status 11:55:59 11:55:59 else: 11:55:59 # Incrementing because of a server error like a 500 in 11:55:59 # status_forcelist and the given method is in the allowed_methods 11:55:59 cause = ResponseError.GENERIC_ERROR 11:55:59 if response and response.status: 11:55:59 if status_count is not None: 11:55:59 status_count -= 1 11:55:59 cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status) 11:55:59 status = response.status 11:55:59 11:55:59 history = self.history + ( 11:55:59 RequestHistory(method, url, error, status, redirect_location), 11:55:59 ) 11:55:59 11:55:59 new_retry = self.new( 11:55:59 total=total, 11:55:59 connect=connect, 11:55:59 read=read, 11:55:59 redirect=redirect, 11:55:59 status=status_count, 11:55:59 other=other, 11:55:59 history=history, 11:55:59 ) 11:55:59 11:55:59 if new_retry.is_exhausted(): 11:55:59 reason = error or ResponseError(cause) 11:55:59 > raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type] 11:55:59 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 11:55:59 E urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=8183): Max retries exceeded with url: /rests/data/network-topology:network-topology/topology=topology-netconf/node=SPDR-SC1 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 11:55:59 11:55:59 ../.tox/tests_tapi/lib/python3.11/site-packages/urllib3/util/retry.py:519: MaxRetryError 11:55:59 11:55:59 During handling of the above exception, another exception occurred: 11:55:59 11:55:59 self = 11:55:59 11:55:59 def test_50_disconnect_spdr_sc1(self): 11:55:59 > response = test_utils.unmount_device("SPDR-SC1") 11:55:59 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 11:55:59 11:55:59 transportpce_tests/tapi/test01_abstracted_topology.py:680: 11:55:59 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 11:55:59 transportpce_tests/common/test_utils.py:379: in unmount_device 11:55:59 response = delete_request(url[RESTCONF_VERSION].format('{}', node)) 11:55:59 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 11:55:59 transportpce_tests/common/test_utils.py:133: in delete_request 11:55:59 return requests.request( 11:55:59 ../.tox/tests_tapi/lib/python3.11/site-packages/requests/api.py:59: in request 11:55:59 return session.request(method=method, url=url, **kwargs) 11:55:59 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 11:55:59 ../.tox/tests_tapi/lib/python3.11/site-packages/requests/sessions.py:589: in request 11:55:59 resp = self.send(prep, **send_kwargs) 11:55:59 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 11:55:59 ../.tox/tests_tapi/lib/python3.11/site-packages/requests/sessions.py:703: in send 11:55:59 r = adapter.send(request, **kwargs) 11:55:59 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 11:55:59 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 11:55:59 11:55:59 self = 11:55:59 request = , stream = False 11:55:59 timeout = Timeout(connect=30, read=30, total=None), verify = True, cert = None 11:55:59 proxies = OrderedDict() 11:55:59 11:55:59 def send( 11:55:59 self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 11:55:59 ): 11:55:59 """Sends PreparedRequest object. Returns Response object. 
11:55:59 11:55:59 :param request: The :class:`PreparedRequest ` being sent. 11:55:59 :param stream: (optional) Whether to stream the request content. 11:55:59 :param timeout: (optional) How long to wait for the server to send 11:55:59 data before giving up, as a float, or a :ref:`(connect timeout, 11:55:59 read timeout) ` tuple. 11:55:59 :type timeout: float or tuple or urllib3 Timeout object 11:55:59 :param verify: (optional) Either a boolean, in which case it controls whether 11:55:59 we verify the server's TLS certificate, or a string, in which case it 11:55:59 must be a path to a CA bundle to use 11:55:59 :param cert: (optional) Any user-provided SSL certificate to be trusted. 11:55:59 :param proxies: (optional) The proxies dictionary to apply to the request. 11:55:59 :rtype: requests.Response 11:55:59 """ 11:55:59 11:55:59 try: 11:55:59 conn = self.get_connection_with_tls_context( 11:55:59 request, verify, proxies=proxies, cert=cert 11:55:59 ) 11:55:59 except LocationValueError as e: 11:55:59 raise InvalidURL(e, request=request) 11:55:59 11:55:59 self.cert_verify(conn, request.url, verify, cert) 11:55:59 url = self.request_url(request, proxies) 11:55:59 self.add_headers( 11:55:59 request, 11:55:59 stream=stream, 11:55:59 timeout=timeout, 11:55:59 verify=verify, 11:55:59 cert=cert, 11:55:59 proxies=proxies, 11:55:59 ) 11:55:59 11:55:59 chunked = not (request.body is None or "Content-Length" in request.headers) 11:55:59 11:55:59 if isinstance(timeout, tuple): 11:55:59 try: 11:55:59 connect, read = timeout 11:55:59 timeout = TimeoutSauce(connect=connect, read=read) 11:55:59 except ValueError: 11:55:59 raise ValueError( 11:55:59 f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 11:55:59 f"or a single float to set both timeouts to the same value." 11:55:59 ) 11:55:59 elif isinstance(timeout, TimeoutSauce): 11:55:59 pass 11:55:59 else: 11:55:59 timeout = TimeoutSauce(connect=timeout, read=timeout) 11:55:59 11:55:59 try: 11:55:59 resp = conn.urlopen( 11:55:59 method=request.method, 11:55:59 url=url, 11:55:59 body=request.body, 11:55:59 headers=request.headers, 11:55:59 redirect=False, 11:55:59 assert_same_host=False, 11:55:59 preload_content=False, 11:55:59 decode_content=False, 11:55:59 retries=self.max_retries, 11:55:59 timeout=timeout, 11:55:59 chunked=chunked, 11:55:59 ) 11:55:59 11:55:59 except (ProtocolError, OSError) as err: 11:55:59 raise ConnectionError(err, request=request) 11:55:59 11:55:59 except MaxRetryError as e: 11:55:59 if isinstance(e.reason, ConnectTimeoutError): 11:55:59 # TODO: Remove this in 3.0.0: see #2811 11:55:59 if not isinstance(e.reason, NewConnectionError): 11:55:59 raise ConnectTimeout(e, request=request) 11:55:59 11:55:59 if isinstance(e.reason, ResponseError): 11:55:59 raise RetryError(e, request=request) 11:55:59 11:55:59 if isinstance(e.reason, _ProxyError): 11:55:59 raise ProxyError(e, request=request) 11:55:59 11:55:59 if isinstance(e.reason, _SSLError): 11:55:59 # This branch is for urllib3 v1.22 and later. 
11:55:59 raise SSLError(e, request=request) 11:55:59 11:55:59 > raise ConnectionError(e, request=request) 11:55:59 E requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=8183): Max retries exceeded with url: /rests/data/network-topology:network-topology/topology=topology-netconf/node=SPDR-SC1 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 11:55:59 11:55:59 ../.tox/tests_tapi/lib/python3.11/site-packages/requests/adapters.py:700: ConnectionError 11:55:59 ----------------------------- Captured stdout call ----------------------------- 11:55:59 execution of test_50_disconnect_spdr_sc1 11:55:59 =========================== short test summary info ============================ 11:55:59 FAILED transportpce_tests/tapi/test01_abstracted_topology.py::TransportTapitesting::test_01_get_tapi_topology_T100G 11:55:59 FAILED transportpce_tests/tapi/test01_abstracted_topology.py::TransportTapitesting::test_02_get_tapi_topology_T0 11:55:59 FAILED transportpce_tests/tapi/test01_abstracted_topology.py::TransportTapitesting::test_03_connect_rdmb 11:55:59 FAILED transportpce_tests/tapi/test01_abstracted_topology.py::TransportTapitesting::test_04_check_tapi_topos 11:55:59 FAILED transportpce_tests/tapi/test01_abstracted_topology.py::TransportTapitesting::test_05_disconnect_roadmb 11:55:59 FAILED transportpce_tests/tapi/test01_abstracted_topology.py::TransportTapitesting::test_06_connect_xpdra 11:55:59 FAILED transportpce_tests/tapi/test01_abstracted_topology.py::TransportTapitesting::test_07_check_tapi_topos 11:55:59 FAILED transportpce_tests/tapi/test01_abstracted_topology.py::TransportTapitesting::test_08_connect_rdma 11:55:59 FAILED transportpce_tests/tapi/test01_abstracted_topology.py::TransportTapitesting::test_09_connect_rdmc 11:55:59 FAILED transportpce_tests/tapi/test01_abstracted_topology.py::TransportTapitesting::test_10_check_tapi_topos 11:55:59 FAILED transportpce_tests/tapi/test01_abstracted_topology.py::TransportTapitesting::test_11_connect_xpdra_n1_to_roadma_pp1 11:55:59 FAILED transportpce_tests/tapi/test01_abstracted_topology.py::TransportTapitesting::test_12_connect_roadma_pp1_to_xpdra_n1 11:55:59 FAILED transportpce_tests/tapi/test01_abstracted_topology.py::TransportTapitesting::test_13_check_tapi_topology_T100G 11:55:59 FAILED transportpce_tests/tapi/test01_abstracted_topology.py::TransportTapitesting::test_14_check_tapi_topology_T0 11:55:59 FAILED transportpce_tests/tapi/test01_abstracted_topology.py::TransportTapitesting::test_15_connect_xpdrc 11:55:59 FAILED transportpce_tests/tapi/test01_abstracted_topology.py::TransportTapitesting::test_16_connect_xpdrc_n1_to_roadmc_pp1 11:55:59 FAILED transportpce_tests/tapi/test01_abstracted_topology.py::TransportTapitesting::test_17_connect_roadmc_pp1_to_xpdrc_n1 11:55:59 FAILED transportpce_tests/tapi/test01_abstracted_topology.py::TransportTapitesting::test_18_check_tapi_topology_T100G 11:55:59 FAILED transportpce_tests/tapi/test01_abstracted_topology.py::TransportTapitesting::test_19_check_tapi_topology_T0 11:55:59 FAILED transportpce_tests/tapi/test01_abstracted_topology.py::TransportTapitesting::test_20_connect_spdr_sa1 11:55:59 FAILED transportpce_tests/tapi/test01_abstracted_topology.py::TransportTapitesting::test_21_connect_spdr_sc1 11:55:59 FAILED transportpce_tests/tapi/test01_abstracted_topology.py::TransportTapitesting::test_22_check_tapi_topology_T100G 11:55:59 FAILED 
transportpce_tests/tapi/test01_abstracted_topology.py::TransportTapitesting::test_23_check_tapi_topology_T0 11:55:59 FAILED transportpce_tests/tapi/test01_abstracted_topology.py::TransportTapitesting::test_24_connect_sprda_n1_to_roadma_pp2 11:55:59 FAILED transportpce_tests/tapi/test01_abstracted_topology.py::TransportTapitesting::test_25_connect_roadma_pp2_to_spdra_n1 11:55:59 FAILED transportpce_tests/tapi/test01_abstracted_topology.py::TransportTapitesting::test_26_connect_sprdc_n1_to_roadmc_pp2 11:55:59 FAILED transportpce_tests/tapi/test01_abstracted_topology.py::TransportTapitesting::test_27_connect_roadmc_pp2_to_spdrc_n1 11:55:59 FAILED transportpce_tests/tapi/test01_abstracted_topology.py::TransportTapitesting::test_28_check_tapi_topology_T100G 11:55:59 FAILED transportpce_tests/tapi/test01_abstracted_topology.py::TransportTapitesting::test_29_check_tapi_topology_T0 11:55:59 FAILED transportpce_tests/tapi/test01_abstracted_topology.py::TransportTapitesting::test_30_add_oms_attributes 11:55:59 FAILED transportpce_tests/tapi/test01_abstracted_topology.py::TransportTapitesting::test_31_create_OCH_OTU4_service 11:55:59 FAILED transportpce_tests/tapi/test01_abstracted_topology.py::TransportTapitesting::test_32_check_tapi_topology_T0 11:55:59 FAILED transportpce_tests/tapi/test01_abstracted_topology.py::TransportTapitesting::test_33_create_ODU4_service 11:55:59 FAILED transportpce_tests/tapi/test01_abstracted_topology.py::TransportTapitesting::test_34_check_tapi_topology_T0 11:55:59 FAILED transportpce_tests/tapi/test01_abstracted_topology.py::TransportTapitesting::test_35_connect_sprda_2_n2_to_roadma_pp3 11:55:59 FAILED transportpce_tests/tapi/test01_abstracted_topology.py::TransportTapitesting::test_36_connect_roadma_pp3_to_spdra_2_n2 11:55:59 FAILED transportpce_tests/tapi/test01_abstracted_topology.py::TransportTapitesting::test_37_check_tapi_topology_T0 11:55:59 FAILED transportpce_tests/tapi/test01_abstracted_topology.py::TransportTapitesting::test_38_delete_ODU4_service 11:55:59 FAILED transportpce_tests/tapi/test01_abstracted_topology.py::TransportTapitesting::test_39_delete_OCH_OTU4_service 11:55:59 FAILED transportpce_tests/tapi/test01_abstracted_topology.py::TransportTapitesting::test_40_check_tapi_topology_T0 11:55:59 FAILED transportpce_tests/tapi/test01_abstracted_topology.py::TransportTapitesting::test_41_disconnect_xponders_from_roadm 11:55:59 FAILED transportpce_tests/tapi/test01_abstracted_topology.py::TransportTapitesting::test_42_check_tapi_topology_T0 11:55:59 FAILED transportpce_tests/tapi/test01_abstracted_topology.py::TransportTapitesting::test_43_get_tapi_topology_T100G 11:55:59 FAILED transportpce_tests/tapi/test01_abstracted_topology.py::TransportTapitesting::test_44_disconnect_roadma 11:55:59 FAILED transportpce_tests/tapi/test01_abstracted_topology.py::TransportTapitesting::test_45_disconnect_roadmc 11:55:59 FAILED transportpce_tests/tapi/test01_abstracted_topology.py::TransportTapitesting::test_46_check_tapi_topos 11:55:59 FAILED transportpce_tests/tapi/test01_abstracted_topology.py::TransportTapitesting::test_47_disconnect_xpdra 11:55:59 FAILED transportpce_tests/tapi/test01_abstracted_topology.py::TransportTapitesting::test_48_disconnect_xpdrc 11:55:59 FAILED transportpce_tests/tapi/test01_abstracted_topology.py::TransportTapitesting::test_49_disconnect_spdr_sa1 11:55:59 FAILED transportpce_tests/tapi/test01_abstracted_topology.py::TransportTapitesting::test_50_disconnect_spdr_sc1 11:55:59 50 failed, 1 passed in 142.07s (0:02:22) 11:55:59 tests_tapi: exit 
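All 50 tapi failures above share one root cause: each teardown test issues a RESTCONF DELETE against localhost:8183 to unmount its NETCONF node, and the TCP connection is refused because nothing is listening on that port any more. Below is a minimal sketch of that unmount pattern plus a fail-fast reachability probe; it reuses only details visible in the tracebacks (DELETE on /rests/data/network-topology:network-topology/topology=topology-netconf/node=<node>, 30 s connect/read timeouts, Basic auth), while the function names, the base-URL constant and the probe itself are illustrative assumptions, not the project's test_utils implementation.

# Sketch only -- mirrors the DELETE seen in the tracebacks; helper names, the
# probe and the admin/admin credentials (implied by the Basic auth header) are
# assumptions, not transportpce test_utils code.
import requests

RESTCONF_BASE = "http://localhost:8183/rests/data"
AUTH = ("admin", "admin")
TIMEOUT = (30, 30)  # (connect, read), matching Timeout(connect=30, read=30) in the log

def restconf_reachable() -> bool:
    """Return False instead of letting every test fail with ConnectionError."""
    try:
        requests.get(RESTCONF_BASE, auth=AUTH, timeout=5)
        return True
    except requests.exceptions.ConnectionError:
        return False

def unmount_node(node: str) -> requests.Response:
    """Issue the same DELETE the failing disconnect/teardown tests attempt."""
    url = (f"{RESTCONF_BASE}/network-topology:network-topology/"
           f"topology=topology-netconf/node={node}")
    return requests.request("DELETE", url, auth=AUTH, timeout=TIMEOUT,
                            headers={"Content-Type": "application/json",
                                     "Accept": "application/json"})

if __name__ == "__main__":
    if restconf_reachable():
        print(unmount_node("SPDR-SA1").status_code)
    else:
        print("nothing listening on localhost:8183; controller is down")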
1 (142.43 seconds) /w/workspace/transportpce-tox-verify-transportpce-master/tests> ./launch_tests.sh tapi pid=23260 11:55:59 tests_tapi: FAIL ✖ in 2 minutes 29.07 seconds 11:55:59 tests71: install_deps> python -I -m pip install 'setuptools>=7.0' -r /w/workspace/transportpce-tox-verify-transportpce-master/tests/requirements.txt -r /w/workspace/transportpce-tox-verify-transportpce-master/tests/test-requirements.txt 11:56:05 tests71: freeze> python -m pip freeze --all 11:56:06 tests71: bcrypt==4.3.0,certifi==2025.6.15,cffi==1.17.1,charset-normalizer==3.4.2,cryptography==45.0.5,dict2xml==1.7.6,idna==3.10,iniconfig==2.1.0,lxml==5.4.0,netconf-client==3.2.0,packaging==25.0,paramiko==3.5.1,pip==25.1.1,pluggy==1.6.0,psutil==7.0.0,pycparser==2.22,Pygments==2.19.2,PyNaCl==1.5.0,pytest==8.4.1,requests==2.32.4,setuptools==80.3.1,urllib3==2.5.0 11:56:06 tests71: commands[0] /w/workspace/transportpce-tox-verify-transportpce-master/tests> ./launch_tests.sh 7.1 11:56:06 using environment variables from ./karaf71.env 11:56:06 pytest -q transportpce_tests/7.1/test01_portmapping.py 11:56:37 ............ [100%] 11:56:50 12 passed in 43.83s 11:56:50 pytest -q transportpce_tests/7.1/test02_otn_renderer.py 11:57:15 .............................................................. [100%] 11:59:25 62 passed in 155.35s (0:02:35) 11:59:25 pytest -q transportpce_tests/7.1/test03_renderer_or_modes.py 11:59:57 ................................................ [100%] 12:01:41 48 passed in 134.95s (0:02:14) 12:01:41 pytest -q transportpce_tests/7.1/test04_renderer_regen_mode.py 12:02:07 ...................... [100%] 12:02:54 22 passed in 73.28s (0:01:13) 12:02:54 tests71: OK ✔ in 6 minutes 55.4 seconds 12:02:54 tests221: install_deps> python -I -m pip install 'setuptools>=7.0' -r /w/workspace/transportpce-tox-verify-transportpce-master/tests/requirements.txt -r /w/workspace/transportpce-tox-verify-transportpce-master/tests/test-requirements.txt 12:03:01 tests221: freeze> python -m pip freeze --all 12:03:01 tests221: bcrypt==4.3.0,certifi==2025.6.15,cffi==1.17.1,charset-normalizer==3.4.2,cryptography==45.0.5,dict2xml==1.7.6,idna==3.10,iniconfig==2.1.0,lxml==5.4.0,netconf-client==3.2.0,packaging==25.0,paramiko==3.5.1,pip==25.1.1,pluggy==1.6.0,psutil==7.0.0,pycparser==2.22,Pygments==2.19.2,PyNaCl==1.5.0,pytest==8.4.1,requests==2.32.4,setuptools==80.3.1,urllib3==2.5.0 12:03:01 tests221: commands[0] /w/workspace/transportpce-tox-verify-transportpce-master/tests> ./launch_tests.sh 2.2.1 12:03:01 using environment variables from ./karaf221.env 12:03:01 pytest -q transportpce_tests/2.2.1/test01_portmapping.py 12:03:34 FFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFF [100%] 12:04:17 =================================== FAILURES =================================== 12:04:17 _________ TransportPCEPortMappingTesting.test_01_rdm_device_connection _________ 12:04:17 12:04:17 self = 12:04:17 12:04:17 def _new_conn(self) -> socket.socket: 12:04:17 """Establish a socket connection and set nodelay settings on it. 12:04:17 12:04:17 :return: New socket connection. 
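Every test in transportpce_tests/2.2.1/test01_portmapping.py fails here for the same reason: as the tracebacks below show, nothing is answering on the controller's RESTCONF port, so each request to localhost:8183 dies with "Connection refused". A pre-flight probe along the following lines would surface that before the first test runs. This is a minimal sketch only: port 8183, the /rests/data path and the admin/admin credentials implied by the Basic auth header are taken from the captured requests, while the helper name and timings are illustrative assumptions, not part of the TransportPCE test utilities.

# Hypothetical pre-flight probe: wait for the controller's RESTCONF
# listener before mounting any simulator, instead of letting each test
# fail with "Connection refused". Port 8183, the /rests/data path and
# the admin/admin credentials mirror the requests captured below; the
# function name and timings are illustrative assumptions.
import time
import requests

def wait_for_restconf(host="localhost", port=8183, timeout=300):
    url = f"http://{host}:{port}/rests/data/network-topology:network-topology"
    deadline = time.time() + timeout
    while time.time() < deadline:
        try:
            # Any HTTP response (even 401 or 404) proves the listener is up;
            # only a refused or timed-out connection keeps us polling.
            requests.get(url, auth=("admin", "admin"), timeout=5)
            return True
        except requests.exceptions.ConnectionError:
            time.sleep(5)
    return False

if not wait_for_restconf():
    raise RuntimeError("RESTCONF on localhost:8183 never became reachable")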
12:04:17 """ 12:04:17 try: 12:04:17 > sock = connection.create_connection( 12:04:17 (self._dns_host, self.port), 12:04:17 self.timeout, 12:04:17 source_address=self.source_address, 12:04:17 socket_options=self.socket_options, 12:04:17 ) 12:04:17 12:04:17 ../.tox/tests221/lib/python3.11/site-packages/urllib3/connection.py:198: 12:04:17 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 12:04:17 ../.tox/tests221/lib/python3.11/site-packages/urllib3/util/connection.py:85: in create_connection 12:04:17 raise err 12:04:17 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 12:04:17 12:04:17 address = ('localhost', 8183), timeout = 30, source_address = None 12:04:17 socket_options = [(6, 1, 1)] 12:04:17 12:04:17 def create_connection( 12:04:17 address: tuple[str, int], 12:04:17 timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 12:04:17 source_address: tuple[str, int] | None = None, 12:04:17 socket_options: _TYPE_SOCKET_OPTIONS | None = None, 12:04:17 ) -> socket.socket: 12:04:17 """Connect to *address* and return the socket object. 12:04:17 12:04:17 Convenience function. Connect to *address* (a 2-tuple ``(host, 12:04:17 port)``) and return the socket object. Passing the optional 12:04:17 *timeout* parameter will set the timeout on the socket instance 12:04:17 before attempting to connect. If no *timeout* is supplied, the 12:04:17 global default timeout setting returned by :func:`socket.getdefaulttimeout` 12:04:17 is used. If *source_address* is set it must be a tuple of (host, port) 12:04:17 for the socket to bind as a source address before making the connection. 12:04:17 An host of '' or port 0 tells the OS to use the default. 12:04:17 """ 12:04:17 12:04:17 host, port = address 12:04:17 if host.startswith("["): 12:04:17 host = host.strip("[]") 12:04:17 err = None 12:04:17 12:04:17 # Using the value from allowed_gai_family() in the context of getaddrinfo lets 12:04:17 # us select whether to work with IPv4 DNS records, IPv6 records, or both. 12:04:17 # The original create_connection function always returns all records. 12:04:17 family = allowed_gai_family() 12:04:17 12:04:17 try: 12:04:17 host.encode("idna") 12:04:17 except UnicodeError: 12:04:17 raise LocationParseError(f"'{host}', label empty or too long") from None 12:04:17 12:04:17 for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM): 12:04:17 af, socktype, proto, canonname, sa = res 12:04:17 sock = None 12:04:17 try: 12:04:17 sock = socket.socket(af, socktype, proto) 12:04:17 12:04:17 # If provided, set socket level options before connecting. 
12:04:17 _set_socket_options(sock, socket_options) 12:04:17 12:04:17 if timeout is not _DEFAULT_TIMEOUT: 12:04:17 sock.settimeout(timeout) 12:04:17 if source_address: 12:04:17 sock.bind(source_address) 12:04:17 > sock.connect(sa) 12:04:17 E ConnectionRefusedError: [Errno 111] Connection refused 12:04:17 12:04:17 ../.tox/tests221/lib/python3.11/site-packages/urllib3/util/connection.py:73: ConnectionRefusedError 12:04:17 12:04:17 The above exception was the direct cause of the following exception: 12:04:17 12:04:17 self = 12:04:17 method = 'PUT' 12:04:17 url = '/rests/data/network-topology:network-topology/topology=topology-netconf/node=ROADM-A1' 12:04:17 body = '{"node": [{"node-id": "ROADM-A1", "netconf-node-topology:netconf-node": {"netconf-node-topology:host": "127.0.0.1", "...ff-millis": 1800000, "netconf-node-topology:backoff-multiplier": 1.5, "netconf-node-topology:keepalive-delay": 120}}]}' 12:04:17 headers = {'User-Agent': 'python-requests/2.32.4', 'Accept-Encoding': 'gzip, deflate', 'Accept': 'application/json', 'Connection': 'keep-alive', 'Content-Type': 'application/json', 'Content-Length': '710', 'Authorization': 'Basic YWRtaW46YWRtaW4='} 12:04:17 retries = Retry(total=0, connect=None, read=False, redirect=None, status=None) 12:04:17 redirect = False, assert_same_host = False 12:04:17 timeout = Timeout(connect=30, read=30, total=None), pool_timeout = None 12:04:17 release_conn = False, chunked = False, body_pos = None, preload_content = False 12:04:17 decode_content = False, response_kw = {} 12:04:17 parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/rests/data/network-topology:network-topology/topology=topology-netconf/node=ROADM-A1', query=None, fragment=None) 12:04:17 destination_scheme = None, conn = None, release_this_conn = True 12:04:17 http_tunnel_required = False, err = None, clean_exit = False 12:04:17 12:04:17 def urlopen( # type: ignore[override] 12:04:17 self, 12:04:17 method: str, 12:04:17 url: str, 12:04:17 body: _TYPE_BODY | None = None, 12:04:17 headers: typing.Mapping[str, str] | None = None, 12:04:17 retries: Retry | bool | int | None = None, 12:04:17 redirect: bool = True, 12:04:17 assert_same_host: bool = True, 12:04:17 timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 12:04:17 pool_timeout: int | None = None, 12:04:17 release_conn: bool | None = None, 12:04:17 chunked: bool = False, 12:04:17 body_pos: _TYPE_BODY_POSITION | None = None, 12:04:17 preload_content: bool = True, 12:04:17 decode_content: bool = True, 12:04:17 **response_kw: typing.Any, 12:04:17 ) -> BaseHTTPResponse: 12:04:17 """ 12:04:17 Get a connection from the pool and perform an HTTP request. This is the 12:04:17 lowest level call for making a request, so you'll need to specify all 12:04:17 the raw details. 12:04:17 12:04:17 .. note:: 12:04:17 12:04:17 More commonly, it's appropriate to use a convenience method 12:04:17 such as :meth:`request`. 12:04:17 12:04:17 .. note:: 12:04:17 12:04:17 `release_conn` will only behave as expected if 12:04:17 `preload_content=False` because we want to make 12:04:17 `preload_content=False` the default behaviour someday soon without 12:04:17 breaking backwards compatibility. 12:04:17 12:04:17 :param method: 12:04:17 HTTP request method (such as GET, POST, PUT, etc.) 12:04:17 12:04:17 :param url: 12:04:17 The URL to perform the request on. 12:04:17 12:04:17 :param body: 12:04:17 Data to send in the request body, either :class:`str`, :class:`bytes`, 12:04:17 an iterable of :class:`str`/:class:`bytes`, or a file-like object. 
12:04:17 12:04:17 :param headers: 12:04:17 Dictionary of custom headers to send, such as User-Agent, 12:04:17 If-None-Match, etc. If None, pool headers are used. If provided, 12:04:17 these headers completely replace any pool-specific headers. 12:04:17 12:04:17 :param retries: 12:04:17 Configure the number of retries to allow before raising a 12:04:17 :class:`~urllib3.exceptions.MaxRetryError` exception. 12:04:17 12:04:17 If ``None`` (default) will retry 3 times, see ``Retry.DEFAULT``. Pass a 12:04:17 :class:`~urllib3.util.retry.Retry` object for fine-grained control 12:04:17 over different types of retries. 12:04:17 Pass an integer number to retry connection errors that many times, 12:04:17 but no other types of errors. Pass zero to never retry. 12:04:17 12:04:17 If ``False``, then retries are disabled and any exception is raised 12:04:17 immediately. Also, instead of raising a MaxRetryError on redirects, 12:04:17 the redirect response will be returned. 12:04:17 12:04:17 :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. 12:04:17 12:04:17 :param redirect: 12:04:17 If True, automatically handle redirects (status codes 301, 302, 12:04:17 303, 307, 308). Each redirect counts as a retry. Disabling retries 12:04:17 will disable redirect, too. 12:04:17 12:04:17 :param assert_same_host: 12:04:17 If ``True``, will make sure that the host of the pool requests is 12:04:17 consistent else will raise HostChangedError. When ``False``, you can 12:04:17 use the pool on an HTTP proxy and request foreign hosts. 12:04:17 12:04:17 :param timeout: 12:04:17 If specified, overrides the default timeout for this one 12:04:17 request. It may be a float (in seconds) or an instance of 12:04:17 :class:`urllib3.util.Timeout`. 12:04:17 12:04:17 :param pool_timeout: 12:04:17 If set and the pool is set to block=True, then this method will 12:04:17 block for ``pool_timeout`` seconds and raise EmptyPoolError if no 12:04:17 connection is available within the time period. 12:04:17 12:04:17 :param bool preload_content: 12:04:17 If True, the response's body will be preloaded into memory. 12:04:17 12:04:17 :param bool decode_content: 12:04:17 If True, will attempt to decode the body based on the 12:04:17 'content-encoding' header. 12:04:17 12:04:17 :param release_conn: 12:04:17 If False, then the urlopen call will not release the connection 12:04:17 back into the pool once a response is received (but will release if 12:04:17 you read the entire contents of the response such as when 12:04:17 `preload_content=True`). This is useful if you're not preloading 12:04:17 the response's content immediately. You will need to call 12:04:17 ``r.release_conn()`` on the response ``r`` to return the connection 12:04:17 back into the pool. If None, it takes the value of ``preload_content`` 12:04:17 which defaults to ``True``. 12:04:17 12:04:17 :param bool chunked: 12:04:17 If True, urllib3 will send the body using chunked transfer 12:04:17 encoding. Otherwise, urllib3 will send the body using the standard 12:04:17 content-length form. Defaults to False. 12:04:17 12:04:17 :param int body_pos: 12:04:17 Position to seek to in file-like body in the event of a retry or 12:04:17 redirect. Typically this won't need to be set because urllib3 will 12:04:17 auto-populate the value when needed. 
12:04:17 """ 12:04:17 parsed_url = parse_url(url) 12:04:17 destination_scheme = parsed_url.scheme 12:04:17 12:04:17 if headers is None: 12:04:17 headers = self.headers 12:04:17 12:04:17 if not isinstance(retries, Retry): 12:04:17 retries = Retry.from_int(retries, redirect=redirect, default=self.retries) 12:04:17 12:04:17 if release_conn is None: 12:04:17 release_conn = preload_content 12:04:17 12:04:17 # Check host 12:04:17 if assert_same_host and not self.is_same_host(url): 12:04:17 raise HostChangedError(self, url, retries) 12:04:17 12:04:17 # Ensure that the URL we're connecting to is properly encoded 12:04:17 if url.startswith("/"): 12:04:17 url = to_str(_encode_target(url)) 12:04:17 else: 12:04:17 url = to_str(parsed_url.url) 12:04:17 12:04:17 conn = None 12:04:17 12:04:17 # Track whether `conn` needs to be released before 12:04:17 # returning/raising/recursing. Update this variable if necessary, and 12:04:17 # leave `release_conn` constant throughout the function. That way, if 12:04:17 # the function recurses, the original value of `release_conn` will be 12:04:17 # passed down into the recursive call, and its value will be respected. 12:04:17 # 12:04:17 # See issue #651 [1] for details. 12:04:17 # 12:04:17 # [1] 12:04:17 release_this_conn = release_conn 12:04:17 12:04:17 http_tunnel_required = connection_requires_http_tunnel( 12:04:17 self.proxy, self.proxy_config, destination_scheme 12:04:17 ) 12:04:17 12:04:17 # Merge the proxy headers. Only done when not using HTTP CONNECT. We 12:04:17 # have to copy the headers dict so we can safely change it without those 12:04:17 # changes being reflected in anyone else's copy. 12:04:17 if not http_tunnel_required: 12:04:17 headers = headers.copy() # type: ignore[attr-defined] 12:04:17 headers.update(self.proxy_headers) # type: ignore[union-attr] 12:04:17 12:04:17 # Must keep the exception bound to a separate variable or else Python 3 12:04:17 # complains about UnboundLocalError. 12:04:17 err = None 12:04:17 12:04:17 # Keep track of whether we cleanly exited the except block. This 12:04:17 # ensures we do proper cleanup in finally. 12:04:17 clean_exit = False 12:04:17 12:04:17 # Rewind body position, if needed. Record current position 12:04:17 # for future rewinds in the event of a redirect/retry. 12:04:17 body_pos = set_file_position(body, body_pos) 12:04:17 12:04:17 try: 12:04:17 # Request a connection from the queue. 12:04:17 timeout_obj = self._get_timeout(timeout) 12:04:17 conn = self._get_conn(timeout=pool_timeout) 12:04:17 12:04:17 conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] 12:04:17 12:04:17 # Is this a closed/new connection that requires CONNECT tunnelling? 12:04:17 if self.proxy is not None and http_tunnel_required and conn.is_closed: 12:04:17 try: 12:04:17 self._prepare_proxy(conn) 12:04:17 except (BaseSSLError, OSError, SocketTimeout) as e: 12:04:17 self._raise_timeout( 12:04:17 err=e, url=self.proxy.url, timeout_value=conn.timeout 12:04:17 ) 12:04:17 raise 12:04:17 12:04:17 # If we're going to release the connection in ``finally:``, then 12:04:17 # the response doesn't need to know about the connection. Otherwise 12:04:17 # it will also try to release it and we'll have a double-release 12:04:17 # mess. 
12:04:17 response_conn = conn if not release_conn else None 12:04:17 12:04:17 # Make the request on the HTTPConnection object 12:04:17 > response = self._make_request( 12:04:17 conn, 12:04:17 method, 12:04:17 url, 12:04:17 timeout=timeout_obj, 12:04:17 body=body, 12:04:17 headers=headers, 12:04:17 chunked=chunked, 12:04:17 retries=retries, 12:04:17 response_conn=response_conn, 12:04:17 preload_content=preload_content, 12:04:17 decode_content=decode_content, 12:04:17 **response_kw, 12:04:17 ) 12:04:17 12:04:17 ../.tox/tests221/lib/python3.11/site-packages/urllib3/connectionpool.py:787: 12:04:17 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 12:04:17 ../.tox/tests221/lib/python3.11/site-packages/urllib3/connectionpool.py:493: in _make_request 12:04:17 conn.request( 12:04:17 ../.tox/tests221/lib/python3.11/site-packages/urllib3/connection.py:494: in request 12:04:17 self.endheaders() 12:04:17 /opt/pyenv/versions/3.11.10/lib/python3.11/http/client.py:1298: in endheaders 12:04:17 self._send_output(message_body, encode_chunked=encode_chunked) 12:04:17 /opt/pyenv/versions/3.11.10/lib/python3.11/http/client.py:1058: in _send_output 12:04:17 self.send(msg) 12:04:17 /opt/pyenv/versions/3.11.10/lib/python3.11/http/client.py:996: in send 12:04:17 self.connect() 12:04:17 ../.tox/tests221/lib/python3.11/site-packages/urllib3/connection.py:325: in connect 12:04:17 self.sock = self._new_conn() 12:04:17 ^^^^^^^^^^^^^^^^ 12:04:17 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 12:04:17 12:04:17 self = 12:04:17 12:04:17 def _new_conn(self) -> socket.socket: 12:04:17 """Establish a socket connection and set nodelay settings on it. 12:04:17 12:04:17 :return: New socket connection. 12:04:17 """ 12:04:17 try: 12:04:17 sock = connection.create_connection( 12:04:17 (self._dns_host, self.port), 12:04:17 self.timeout, 12:04:17 source_address=self.source_address, 12:04:17 socket_options=self.socket_options, 12:04:17 ) 12:04:17 except socket.gaierror as e: 12:04:17 raise NameResolutionError(self.host, self, e) from e 12:04:17 except SocketTimeout as e: 12:04:17 raise ConnectTimeoutError( 12:04:17 self, 12:04:17 f"Connection to {self.host} timed out. (connect timeout={self.timeout})", 12:04:17 ) from e 12:04:17 12:04:17 except OSError as e: 12:04:17 > raise NewConnectionError( 12:04:17 self, f"Failed to establish a new connection: {e}" 12:04:17 ) from e 12:04:17 E urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused 12:04:17 12:04:17 ../.tox/tests221/lib/python3.11/site-packages/urllib3/connection.py:213: NewConnectionError 12:04:17 12:04:17 The above exception was the direct cause of the following exception: 12:04:17 12:04:17 self = 12:04:17 request = , stream = False 12:04:17 timeout = Timeout(connect=30, read=30, total=None), verify = True, cert = None 12:04:17 proxies = OrderedDict() 12:04:17 12:04:17 def send( 12:04:17 self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 12:04:17 ): 12:04:17 """Sends PreparedRequest object. Returns Response object. 12:04:17 12:04:17 :param request: The :class:`PreparedRequest ` being sent. 12:04:17 :param stream: (optional) Whether to stream the request content. 12:04:17 :param timeout: (optional) How long to wait for the server to send 12:04:17 data before giving up, as a float, or a :ref:`(connect timeout, 12:04:17 read timeout) ` tuple. 
12:04:17 :type timeout: float or tuple or urllib3 Timeout object 12:04:17 :param verify: (optional) Either a boolean, in which case it controls whether 12:04:17 we verify the server's TLS certificate, or a string, in which case it 12:04:17 must be a path to a CA bundle to use 12:04:17 :param cert: (optional) Any user-provided SSL certificate to be trusted. 12:04:17 :param proxies: (optional) The proxies dictionary to apply to the request. 12:04:17 :rtype: requests.Response 12:04:17 """ 12:04:17 12:04:17 try: 12:04:17 conn = self.get_connection_with_tls_context( 12:04:17 request, verify, proxies=proxies, cert=cert 12:04:17 ) 12:04:17 except LocationValueError as e: 12:04:17 raise InvalidURL(e, request=request) 12:04:17 12:04:17 self.cert_verify(conn, request.url, verify, cert) 12:04:17 url = self.request_url(request, proxies) 12:04:17 self.add_headers( 12:04:17 request, 12:04:17 stream=stream, 12:04:17 timeout=timeout, 12:04:17 verify=verify, 12:04:17 cert=cert, 12:04:17 proxies=proxies, 12:04:17 ) 12:04:17 12:04:17 chunked = not (request.body is None or "Content-Length" in request.headers) 12:04:17 12:04:17 if isinstance(timeout, tuple): 12:04:17 try: 12:04:17 connect, read = timeout 12:04:17 timeout = TimeoutSauce(connect=connect, read=read) 12:04:17 except ValueError: 12:04:17 raise ValueError( 12:04:17 f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 12:04:17 f"or a single float to set both timeouts to the same value." 12:04:17 ) 12:04:17 elif isinstance(timeout, TimeoutSauce): 12:04:17 pass 12:04:17 else: 12:04:17 timeout = TimeoutSauce(connect=timeout, read=timeout) 12:04:17 12:04:17 try: 12:04:17 > resp = conn.urlopen( 12:04:17 method=request.method, 12:04:17 url=url, 12:04:17 body=request.body, 12:04:17 headers=request.headers, 12:04:17 redirect=False, 12:04:17 assert_same_host=False, 12:04:17 preload_content=False, 12:04:17 decode_content=False, 12:04:17 retries=self.max_retries, 12:04:17 timeout=timeout, 12:04:17 chunked=chunked, 12:04:17 ) 12:04:17 12:04:17 ../.tox/tests221/lib/python3.11/site-packages/requests/adapters.py:667: 12:04:17 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 12:04:17 ../.tox/tests221/lib/python3.11/site-packages/urllib3/connectionpool.py:841: in urlopen 12:04:17 retries = retries.increment( 12:04:17 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 12:04:17 12:04:17 self = Retry(total=0, connect=None, read=False, redirect=None, status=None) 12:04:17 method = 'PUT' 12:04:17 url = '/rests/data/network-topology:network-topology/topology=topology-netconf/node=ROADM-A1' 12:04:17 response = None 12:04:17 error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused') 12:04:17 _pool = 12:04:17 _stacktrace = 12:04:17 12:04:17 def increment( 12:04:17 self, 12:04:17 method: str | None = None, 12:04:17 url: str | None = None, 12:04:17 response: BaseHTTPResponse | None = None, 12:04:17 error: Exception | None = None, 12:04:17 _pool: ConnectionPool | None = None, 12:04:17 _stacktrace: TracebackType | None = None, 12:04:17 ) -> Self: 12:04:17 """Return a new Retry object with incremented retry counters. 12:04:17 12:04:17 :param response: A response object, or None, if the server did not 12:04:17 return a response. 12:04:17 :type response: :class:`~urllib3.response.BaseHTTPResponse` 12:04:17 :param Exception error: An error encountered during the request, or 12:04:17 None if the response was received successfully. 
12:04:17 12:04:17 :return: A new ``Retry`` object. 12:04:17 """ 12:04:17 if self.total is False and error: 12:04:17 # Disabled, indicate to re-raise the error. 12:04:17 raise reraise(type(error), error, _stacktrace) 12:04:17 12:04:17 total = self.total 12:04:17 if total is not None: 12:04:17 total -= 1 12:04:17 12:04:17 connect = self.connect 12:04:17 read = self.read 12:04:17 redirect = self.redirect 12:04:17 status_count = self.status 12:04:17 other = self.other 12:04:17 cause = "unknown" 12:04:17 status = None 12:04:17 redirect_location = None 12:04:17 12:04:17 if error and self._is_connection_error(error): 12:04:17 # Connect retry? 12:04:17 if connect is False: 12:04:17 raise reraise(type(error), error, _stacktrace) 12:04:17 elif connect is not None: 12:04:17 connect -= 1 12:04:17 12:04:17 elif error and self._is_read_error(error): 12:04:17 # Read retry? 12:04:17 if read is False or method is None or not self._is_method_retryable(method): 12:04:17 raise reraise(type(error), error, _stacktrace) 12:04:17 elif read is not None: 12:04:17 read -= 1 12:04:17 12:04:17 elif error: 12:04:17 # Other retry? 12:04:17 if other is not None: 12:04:17 other -= 1 12:04:17 12:04:17 elif response and response.get_redirect_location(): 12:04:17 # Redirect retry? 12:04:17 if redirect is not None: 12:04:17 redirect -= 1 12:04:17 cause = "too many redirects" 12:04:17 response_redirect_location = response.get_redirect_location() 12:04:17 if response_redirect_location: 12:04:17 redirect_location = response_redirect_location 12:04:17 status = response.status 12:04:17 12:04:17 else: 12:04:17 # Incrementing because of a server error like a 500 in 12:04:17 # status_forcelist and the given method is in the allowed_methods 12:04:17 cause = ResponseError.GENERIC_ERROR 12:04:17 if response and response.status: 12:04:17 if status_count is not None: 12:04:17 status_count -= 1 12:04:17 cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status) 12:04:17 status = response.status 12:04:17 12:04:17 history = self.history + ( 12:04:17 RequestHistory(method, url, error, status, redirect_location), 12:04:17 ) 12:04:17 12:04:17 new_retry = self.new( 12:04:17 total=total, 12:04:17 connect=connect, 12:04:17 read=read, 12:04:17 redirect=redirect, 12:04:17 status=status_count, 12:04:17 other=other, 12:04:17 history=history, 12:04:17 ) 12:04:17 12:04:17 if new_retry.is_exhausted(): 12:04:17 reason = error or ResponseError(cause) 12:04:17 > raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type] 12:04:17 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 12:04:17 E urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=8183): Max retries exceeded with url: /rests/data/network-topology:network-topology/topology=topology-netconf/node=ROADM-A1 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 12:04:17 12:04:17 ../.tox/tests221/lib/python3.11/site-packages/urllib3/util/retry.py:519: MaxRetryError 12:04:17 12:04:17 During handling of the above exception, another exception occurred: 12:04:17 12:04:17 self = 12:04:17 12:04:17 def test_01_rdm_device_connection(self): 12:04:17 > response = test_utils.mount_device("ROADM-A1", ('roadma', self.NODE_VERSION)) 12:04:17 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 12:04:17 12:04:17 transportpce_tests/2.2.1/test01_portmapping.py:50: 12:04:17 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 12:04:17 
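The next frame is test_utils.mount_device delegating to put_request, i.e. the PUT whose method, URL and (truncated) JSON body were captured above. A standalone sketch of that mount call follows, assuming the same localhost:8183 endpoint and admin/admin Basic auth; only node-id, host and keepalive-delay are taken from the captured body, the remaining NETCONF parameters are flagged as assumptions, and none of this is a copy of the project's helper.

# Hypothetical standalone version of the mount request shown in the
# traceback: PUT the NETCONF node descriptor for ROADM-A1 under the
# topology-netconf topology. Values flagged as assumptions are not in
# the (truncated) captured body.
import requests

URL = ("http://localhost:8183/rests/data/network-topology:network-topology/"
       "topology=topology-netconf/node=ROADM-A1")
body = {
    "node": [{
        "node-id": "ROADM-A1",
        "netconf-node-topology:netconf-node": {
            "netconf-node-topology:host": "127.0.0.1",      # from the captured body
            "netconf-node-topology:port": 17841,            # assumption: simulator SSH port
            "netconf-node-topology:username": "admin",      # assumption
            "netconf-node-topology:password": "admin",      # assumption
            "netconf-node-topology:keepalive-delay": 120,   # from the captured body
        },
    }]
}
response = requests.put(URL, json=body, auth=("admin", "admin"),
                        headers={"Content-Type": "application/json"},
                        timeout=(30, 30))
print(response.status_code)  # RESTCONF PUT: 201 on first creation, 204 on replace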
transportpce_tests/common/test_utils.py:362: in mount_device 12:04:17 response = put_request(url[RESTCONF_VERSION].format('{}', node), body) 12:04:17 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 12:04:17 transportpce_tests/common/test_utils.py:124: in put_request 12:04:17 return requests.request( 12:04:17 ../.tox/tests221/lib/python3.11/site-packages/requests/api.py:59: in request 12:04:17 return session.request(method=method, url=url, **kwargs) 12:04:17 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 12:04:17 ../.tox/tests221/lib/python3.11/site-packages/requests/sessions.py:589: in request 12:04:17 resp = self.send(prep, **send_kwargs) 12:04:17 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 12:04:17 ../.tox/tests221/lib/python3.11/site-packages/requests/sessions.py:703: in send 12:04:17 r = adapter.send(request, **kwargs) 12:04:17 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 12:04:17 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 12:04:17 12:04:17 self = 12:04:17 request = , stream = False 12:04:17 timeout = Timeout(connect=30, read=30, total=None), verify = True, cert = None 12:04:17 proxies = OrderedDict() 12:04:17 12:04:17 def send( 12:04:17 self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 12:04:17 ): 12:04:17 """Sends PreparedRequest object. Returns Response object. 12:04:17 12:04:17 :param request: The :class:`PreparedRequest ` being sent. 12:04:17 :param stream: (optional) Whether to stream the request content. 12:04:17 :param timeout: (optional) How long to wait for the server to send 12:04:17 data before giving up, as a float, or a :ref:`(connect timeout, 12:04:17 read timeout) ` tuple. 12:04:17 :type timeout: float or tuple or urllib3 Timeout object 12:04:17 :param verify: (optional) Either a boolean, in which case it controls whether 12:04:17 we verify the server's TLS certificate, or a string, in which case it 12:04:17 must be a path to a CA bundle to use 12:04:17 :param cert: (optional) Any user-provided SSL certificate to be trusted. 12:04:17 :param proxies: (optional) The proxies dictionary to apply to the request. 12:04:17 :rtype: requests.Response 12:04:17 """ 12:04:17 12:04:17 try: 12:04:17 conn = self.get_connection_with_tls_context( 12:04:17 request, verify, proxies=proxies, cert=cert 12:04:17 ) 12:04:17 except LocationValueError as e: 12:04:17 raise InvalidURL(e, request=request) 12:04:17 12:04:17 self.cert_verify(conn, request.url, verify, cert) 12:04:17 url = self.request_url(request, proxies) 12:04:17 self.add_headers( 12:04:17 request, 12:04:17 stream=stream, 12:04:17 timeout=timeout, 12:04:17 verify=verify, 12:04:17 cert=cert, 12:04:17 proxies=proxies, 12:04:17 ) 12:04:17 12:04:17 chunked = not (request.body is None or "Content-Length" in request.headers) 12:04:17 12:04:17 if isinstance(timeout, tuple): 12:04:17 try: 12:04:17 connect, read = timeout 12:04:17 timeout = TimeoutSauce(connect=connect, read=read) 12:04:17 except ValueError: 12:04:17 raise ValueError( 12:04:17 f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 12:04:17 f"or a single float to set both timeouts to the same value." 
12:04:17 ) 12:04:17 elif isinstance(timeout, TimeoutSauce): 12:04:17 pass 12:04:17 else: 12:04:17 timeout = TimeoutSauce(connect=timeout, read=timeout) 12:04:17 12:04:17 try: 12:04:17 resp = conn.urlopen( 12:04:17 method=request.method, 12:04:17 url=url, 12:04:17 body=request.body, 12:04:17 headers=request.headers, 12:04:17 redirect=False, 12:04:17 assert_same_host=False, 12:04:17 preload_content=False, 12:04:17 decode_content=False, 12:04:17 retries=self.max_retries, 12:04:17 timeout=timeout, 12:04:17 chunked=chunked, 12:04:17 ) 12:04:17 12:04:17 except (ProtocolError, OSError) as err: 12:04:17 raise ConnectionError(err, request=request) 12:04:17 12:04:17 except MaxRetryError as e: 12:04:17 if isinstance(e.reason, ConnectTimeoutError): 12:04:17 # TODO: Remove this in 3.0.0: see #2811 12:04:17 if not isinstance(e.reason, NewConnectionError): 12:04:17 raise ConnectTimeout(e, request=request) 12:04:17 12:04:17 if isinstance(e.reason, ResponseError): 12:04:17 raise RetryError(e, request=request) 12:04:17 12:04:17 if isinstance(e.reason, _ProxyError): 12:04:17 raise ProxyError(e, request=request) 12:04:17 12:04:17 if isinstance(e.reason, _SSLError): 12:04:17 # This branch is for urllib3 v1.22 and later. 12:04:17 raise SSLError(e, request=request) 12:04:17 12:04:17 > raise ConnectionError(e, request=request) 12:04:17 E requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=8183): Max retries exceeded with url: /rests/data/network-topology:network-topology/topology=topology-netconf/node=ROADM-A1 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 12:04:17 12:04:17 ../.tox/tests221/lib/python3.11/site-packages/requests/adapters.py:700: ConnectionError 12:04:17 ---------------------------- Captured stdout setup ----------------------------- 12:04:17 starting OpenDaylight... 12:04:17 starting KARAF TransportPCE build... 12:04:17 Searching for patterns in karaf.log... Pattern found! OpenDaylight started ! 12:04:17 starting simulator xpdra in OpenROADM device version 2.2.1... 12:04:17 Searching for patterns in xpdra-221.log... Pattern found! simulator for xpdra started 12:04:17 starting simulator roadma in OpenROADM device version 2.2.1... 12:04:17 Searching for patterns in roadma-221.log... Pattern found! simulator for roadma started 12:04:17 starting simulator spdra in OpenROADM device version 2.2.1... 12:04:17 Searching for patterns in spdra-221.log... Pattern found! simulator for spdra started 12:04:17 ----------------------------- Captured stdout call ----------------------------- 12:04:17 execution of test_01_rdm_device_connection 12:04:17 _________ TransportPCEPortMappingTesting.test_02_rdm_device_connected __________ 12:04:17 12:04:17 self = 12:04:17 12:04:17 def _new_conn(self) -> socket.socket: 12:04:17 """Establish a socket connection and set nodelay settings on it. 12:04:17 12:04:17 :return: New socket connection. 
12:04:17 """ 12:04:17 try: 12:04:17 > sock = connection.create_connection( 12:04:17 (self._dns_host, self.port), 12:04:17 self.timeout, 12:04:17 source_address=self.source_address, 12:04:17 socket_options=self.socket_options, 12:04:17 ) 12:04:17 12:04:17 ../.tox/tests221/lib/python3.11/site-packages/urllib3/connection.py:198: 12:04:17 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 12:04:17 ../.tox/tests221/lib/python3.11/site-packages/urllib3/util/connection.py:85: in create_connection 12:04:17 raise err 12:04:17 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 12:04:17 12:04:17 address = ('localhost', 8183), timeout = 30, source_address = None 12:04:17 socket_options = [(6, 1, 1)] 12:04:17 12:04:17 def create_connection( 12:04:17 address: tuple[str, int], 12:04:17 timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 12:04:17 source_address: tuple[str, int] | None = None, 12:04:17 socket_options: _TYPE_SOCKET_OPTIONS | None = None, 12:04:17 ) -> socket.socket: 12:04:17 """Connect to *address* and return the socket object. 12:04:17 12:04:17 Convenience function. Connect to *address* (a 2-tuple ``(host, 12:04:17 port)``) and return the socket object. Passing the optional 12:04:17 *timeout* parameter will set the timeout on the socket instance 12:04:17 before attempting to connect. If no *timeout* is supplied, the 12:04:17 global default timeout setting returned by :func:`socket.getdefaulttimeout` 12:04:17 is used. If *source_address* is set it must be a tuple of (host, port) 12:04:17 for the socket to bind as a source address before making the connection. 12:04:17 An host of '' or port 0 tells the OS to use the default. 12:04:17 """ 12:04:17 12:04:17 host, port = address 12:04:17 if host.startswith("["): 12:04:17 host = host.strip("[]") 12:04:17 err = None 12:04:17 12:04:17 # Using the value from allowed_gai_family() in the context of getaddrinfo lets 12:04:17 # us select whether to work with IPv4 DNS records, IPv6 records, or both. 12:04:17 # The original create_connection function always returns all records. 12:04:17 family = allowed_gai_family() 12:04:17 12:04:17 try: 12:04:17 host.encode("idna") 12:04:17 except UnicodeError: 12:04:17 raise LocationParseError(f"'{host}', label empty or too long") from None 12:04:17 12:04:17 for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM): 12:04:17 af, socktype, proto, canonname, sa = res 12:04:17 sock = None 12:04:17 try: 12:04:17 sock = socket.socket(af, socktype, proto) 12:04:17 12:04:17 # If provided, set socket level options before connecting. 
12:04:17 _set_socket_options(sock, socket_options) 12:04:17 12:04:17 if timeout is not _DEFAULT_TIMEOUT: 12:04:17 sock.settimeout(timeout) 12:04:17 if source_address: 12:04:17 sock.bind(source_address) 12:04:17 > sock.connect(sa) 12:04:17 E ConnectionRefusedError: [Errno 111] Connection refused 12:04:17 12:04:17 ../.tox/tests221/lib/python3.11/site-packages/urllib3/util/connection.py:73: ConnectionRefusedError 12:04:17 12:04:17 The above exception was the direct cause of the following exception: 12:04:17 12:04:17 self = 12:04:17 method = 'GET' 12:04:17 url = '/rests/data/network-topology:network-topology/topology=topology-netconf/node=ROADM-A1?content=nonconfig' 12:04:17 body = None 12:04:17 headers = {'User-Agent': 'python-requests/2.32.4', 'Accept-Encoding': 'gzip, deflate', 'Accept': 'application/json', 'Connection': 'keep-alive', 'Content-Type': 'application/json', 'Authorization': 'Basic YWRtaW46YWRtaW4='} 12:04:17 retries = Retry(total=0, connect=None, read=False, redirect=None, status=None) 12:04:17 redirect = False, assert_same_host = False 12:04:17 timeout = Timeout(connect=30, read=30, total=None), pool_timeout = None 12:04:17 release_conn = False, chunked = False, body_pos = None, preload_content = False 12:04:17 decode_content = False, response_kw = {} 12:04:17 parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/rests/data/network-topology:network-topology/topology=topology-netconf/node=ROADM-A1', query='content=nonconfig', fragment=None) 12:04:17 destination_scheme = None, conn = None, release_this_conn = True 12:04:17 http_tunnel_required = False, err = None, clean_exit = False 12:04:17 12:04:17 def urlopen( # type: ignore[override] 12:04:17 self, 12:04:17 method: str, 12:04:17 url: str, 12:04:17 body: _TYPE_BODY | None = None, 12:04:17 headers: typing.Mapping[str, str] | None = None, 12:04:17 retries: Retry | bool | int | None = None, 12:04:17 redirect: bool = True, 12:04:17 assert_same_host: bool = True, 12:04:17 timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 12:04:17 pool_timeout: int | None = None, 12:04:17 release_conn: bool | None = None, 12:04:17 chunked: bool = False, 12:04:17 body_pos: _TYPE_BODY_POSITION | None = None, 12:04:17 preload_content: bool = True, 12:04:17 decode_content: bool = True, 12:04:17 **response_kw: typing.Any, 12:04:17 ) -> BaseHTTPResponse: 12:04:17 """ 12:04:17 Get a connection from the pool and perform an HTTP request. This is the 12:04:17 lowest level call for making a request, so you'll need to specify all 12:04:17 the raw details. 12:04:17 12:04:17 .. note:: 12:04:17 12:04:17 More commonly, it's appropriate to use a convenience method 12:04:17 such as :meth:`request`. 12:04:17 12:04:17 .. note:: 12:04:17 12:04:17 `release_conn` will only behave as expected if 12:04:17 `preload_content=False` because we want to make 12:04:17 `preload_content=False` the default behaviour someday soon without 12:04:17 breaking backwards compatibility. 12:04:17 12:04:17 :param method: 12:04:17 HTTP request method (such as GET, POST, PUT, etc.) 12:04:17 12:04:17 :param url: 12:04:17 The URL to perform the request on. 12:04:17 12:04:17 :param body: 12:04:17 Data to send in the request body, either :class:`str`, :class:`bytes`, 12:04:17 an iterable of :class:`str`/:class:`bytes`, or a file-like object. 12:04:17 12:04:17 :param headers: 12:04:17 Dictionary of custom headers to send, such as User-Agent, 12:04:17 If-None-Match, etc. If None, pool headers are used. If provided, 12:04:17 these headers completely replace any pool-specific headers. 
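test_02 then checks the mount from the read side: the GET with ?content=nonconfig captured above (issued, as the later frames show, by test_utils.check_device_connection) reads the node's operational datastore rather than its configuration. A sketch of such a poll is given below; it assumes the session state is exposed as netconf-node-topology:connection-status per OpenDaylight's netconf-node-topology model, and the helper name, loop and JSON key handling are illustrative, not the project's code.

# Hypothetical operational-state poll mirroring the captured GET with
# ?content=nonconfig. The connection-status leaf name follows
# OpenDaylight's netconf-node-topology model; everything else here is
# an illustrative assumption.
import time
import requests

def device_connected(node_id, host="localhost", port=8183, attempts=10):
    url = (f"http://{host}:{port}/rests/data/network-topology:network-topology/"
           f"topology=topology-netconf/node={node_id}?content=nonconfig")
    for _ in range(attempts):
        resp = requests.get(url, auth=("admin", "admin"), timeout=(30, 30))
        if resp.ok:
            node = resp.json().get("network-topology:node", [{}])[0]
            if node.get("netconf-node-topology:connection-status") == "connected":
                return True
        time.sleep(3)
    return False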
12:04:17 12:04:17 :param retries: 12:04:17 Configure the number of retries to allow before raising a 12:04:17 :class:`~urllib3.exceptions.MaxRetryError` exception. 12:04:17 12:04:17 If ``None`` (default) will retry 3 times, see ``Retry.DEFAULT``. Pass a 12:04:17 :class:`~urllib3.util.retry.Retry` object for fine-grained control 12:04:17 over different types of retries. 12:04:17 Pass an integer number to retry connection errors that many times, 12:04:17 but no other types of errors. Pass zero to never retry. 12:04:17 12:04:17 If ``False``, then retries are disabled and any exception is raised 12:04:17 immediately. Also, instead of raising a MaxRetryError on redirects, 12:04:17 the redirect response will be returned. 12:04:17 12:04:17 :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. 12:04:17 12:04:17 :param redirect: 12:04:17 If True, automatically handle redirects (status codes 301, 302, 12:04:17 303, 307, 308). Each redirect counts as a retry. Disabling retries 12:04:17 will disable redirect, too. 12:04:17 12:04:17 :param assert_same_host: 12:04:17 If ``True``, will make sure that the host of the pool requests is 12:04:17 consistent else will raise HostChangedError. When ``False``, you can 12:04:17 use the pool on an HTTP proxy and request foreign hosts. 12:04:17 12:04:17 :param timeout: 12:04:17 If specified, overrides the default timeout for this one 12:04:17 request. It may be a float (in seconds) or an instance of 12:04:17 :class:`urllib3.util.Timeout`. 12:04:17 12:04:17 :param pool_timeout: 12:04:17 If set and the pool is set to block=True, then this method will 12:04:17 block for ``pool_timeout`` seconds and raise EmptyPoolError if no 12:04:17 connection is available within the time period. 12:04:17 12:04:17 :param bool preload_content: 12:04:17 If True, the response's body will be preloaded into memory. 12:04:17 12:04:17 :param bool decode_content: 12:04:17 If True, will attempt to decode the body based on the 12:04:17 'content-encoding' header. 12:04:17 12:04:17 :param release_conn: 12:04:17 If False, then the urlopen call will not release the connection 12:04:17 back into the pool once a response is received (but will release if 12:04:17 you read the entire contents of the response such as when 12:04:17 `preload_content=True`). This is useful if you're not preloading 12:04:17 the response's content immediately. You will need to call 12:04:17 ``r.release_conn()`` on the response ``r`` to return the connection 12:04:17 back into the pool. If None, it takes the value of ``preload_content`` 12:04:17 which defaults to ``True``. 12:04:17 12:04:17 :param bool chunked: 12:04:17 If True, urllib3 will send the body using chunked transfer 12:04:17 encoding. Otherwise, urllib3 will send the body using the standard 12:04:17 content-length form. Defaults to False. 12:04:17 12:04:17 :param int body_pos: 12:04:17 Position to seek to in file-like body in the event of a retry or 12:04:17 redirect. Typically this won't need to be set because urllib3 will 12:04:17 auto-populate the value when needed. 
12:04:17 """ 12:04:17 parsed_url = parse_url(url) 12:04:17 destination_scheme = parsed_url.scheme 12:04:17 12:04:17 if headers is None: 12:04:17 headers = self.headers 12:04:17 12:04:17 if not isinstance(retries, Retry): 12:04:17 retries = Retry.from_int(retries, redirect=redirect, default=self.retries) 12:04:17 12:04:17 if release_conn is None: 12:04:17 release_conn = preload_content 12:04:17 12:04:17 # Check host 12:04:17 if assert_same_host and not self.is_same_host(url): 12:04:17 raise HostChangedError(self, url, retries) 12:04:17 12:04:17 # Ensure that the URL we're connecting to is properly encoded 12:04:17 if url.startswith("/"): 12:04:17 url = to_str(_encode_target(url)) 12:04:17 else: 12:04:17 url = to_str(parsed_url.url) 12:04:17 12:04:17 conn = None 12:04:17 12:04:17 # Track whether `conn` needs to be released before 12:04:17 # returning/raising/recursing. Update this variable if necessary, and 12:04:17 # leave `release_conn` constant throughout the function. That way, if 12:04:17 # the function recurses, the original value of `release_conn` will be 12:04:17 # passed down into the recursive call, and its value will be respected. 12:04:17 # 12:04:17 # See issue #651 [1] for details. 12:04:17 # 12:04:17 # [1] 12:04:17 release_this_conn = release_conn 12:04:17 12:04:17 http_tunnel_required = connection_requires_http_tunnel( 12:04:17 self.proxy, self.proxy_config, destination_scheme 12:04:17 ) 12:04:17 12:04:17 # Merge the proxy headers. Only done when not using HTTP CONNECT. We 12:04:17 # have to copy the headers dict so we can safely change it without those 12:04:17 # changes being reflected in anyone else's copy. 12:04:17 if not http_tunnel_required: 12:04:17 headers = headers.copy() # type: ignore[attr-defined] 12:04:17 headers.update(self.proxy_headers) # type: ignore[union-attr] 12:04:17 12:04:17 # Must keep the exception bound to a separate variable or else Python 3 12:04:17 # complains about UnboundLocalError. 12:04:17 err = None 12:04:17 12:04:17 # Keep track of whether we cleanly exited the except block. This 12:04:17 # ensures we do proper cleanup in finally. 12:04:17 clean_exit = False 12:04:17 12:04:17 # Rewind body position, if needed. Record current position 12:04:17 # for future rewinds in the event of a redirect/retry. 12:04:17 body_pos = set_file_position(body, body_pos) 12:04:17 12:04:17 try: 12:04:17 # Request a connection from the queue. 12:04:17 timeout_obj = self._get_timeout(timeout) 12:04:17 conn = self._get_conn(timeout=pool_timeout) 12:04:17 12:04:17 conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] 12:04:17 12:04:17 # Is this a closed/new connection that requires CONNECT tunnelling? 12:04:17 if self.proxy is not None and http_tunnel_required and conn.is_closed: 12:04:17 try: 12:04:17 self._prepare_proxy(conn) 12:04:17 except (BaseSSLError, OSError, SocketTimeout) as e: 12:04:17 self._raise_timeout( 12:04:17 err=e, url=self.proxy.url, timeout_value=conn.timeout 12:04:17 ) 12:04:17 raise 12:04:17 12:04:17 # If we're going to release the connection in ``finally:``, then 12:04:17 # the response doesn't need to know about the connection. Otherwise 12:04:17 # it will also try to release it and we'll have a double-release 12:04:17 # mess. 
12:04:17 response_conn = conn if not release_conn else None 12:04:17 12:04:17 # Make the request on the HTTPConnection object 12:04:17 > response = self._make_request( 12:04:17 conn, 12:04:17 method, 12:04:17 url, 12:04:17 timeout=timeout_obj, 12:04:17 body=body, 12:04:17 headers=headers, 12:04:17 chunked=chunked, 12:04:17 retries=retries, 12:04:17 response_conn=response_conn, 12:04:17 preload_content=preload_content, 12:04:17 decode_content=decode_content, 12:04:17 **response_kw, 12:04:17 ) 12:04:17 12:04:17 ../.tox/tests221/lib/python3.11/site-packages/urllib3/connectionpool.py:787: 12:04:17 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 12:04:17 ../.tox/tests221/lib/python3.11/site-packages/urllib3/connectionpool.py:493: in _make_request 12:04:17 conn.request( 12:04:17 ../.tox/tests221/lib/python3.11/site-packages/urllib3/connection.py:494: in request 12:04:17 self.endheaders() 12:04:17 /opt/pyenv/versions/3.11.10/lib/python3.11/http/client.py:1298: in endheaders 12:04:17 self._send_output(message_body, encode_chunked=encode_chunked) 12:04:17 /opt/pyenv/versions/3.11.10/lib/python3.11/http/client.py:1058: in _send_output 12:04:17 self.send(msg) 12:04:17 /opt/pyenv/versions/3.11.10/lib/python3.11/http/client.py:996: in send 12:04:17 self.connect() 12:04:17 ../.tox/tests221/lib/python3.11/site-packages/urllib3/connection.py:325: in connect 12:04:17 self.sock = self._new_conn() 12:04:17 ^^^^^^^^^^^^^^^^ 12:04:17 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 12:04:17 12:04:17 self = 12:04:17 12:04:17 def _new_conn(self) -> socket.socket: 12:04:17 """Establish a socket connection and set nodelay settings on it. 12:04:17 12:04:17 :return: New socket connection. 12:04:17 """ 12:04:17 try: 12:04:17 sock = connection.create_connection( 12:04:17 (self._dns_host, self.port), 12:04:17 self.timeout, 12:04:17 source_address=self.source_address, 12:04:17 socket_options=self.socket_options, 12:04:17 ) 12:04:17 except socket.gaierror as e: 12:04:17 raise NameResolutionError(self.host, self, e) from e 12:04:17 except SocketTimeout as e: 12:04:17 raise ConnectTimeoutError( 12:04:17 self, 12:04:17 f"Connection to {self.host} timed out. (connect timeout={self.timeout})", 12:04:17 ) from e 12:04:17 12:04:17 except OSError as e: 12:04:17 > raise NewConnectionError( 12:04:17 self, f"Failed to establish a new connection: {e}" 12:04:17 ) from e 12:04:17 E urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused 12:04:17 12:04:17 ../.tox/tests221/lib/python3.11/site-packages/urllib3/connection.py:213: NewConnectionError 12:04:17 12:04:17 The above exception was the direct cause of the following exception: 12:04:17 12:04:17 self = 12:04:17 request = , stream = False 12:04:17 timeout = Timeout(connect=30, read=30, total=None), verify = True, cert = None 12:04:17 proxies = OrderedDict() 12:04:17 12:04:17 def send( 12:04:17 self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 12:04:17 ): 12:04:17 """Sends PreparedRequest object. Returns Response object. 12:04:17 12:04:17 :param request: The :class:`PreparedRequest ` being sent. 12:04:17 :param stream: (optional) Whether to stream the request content. 12:04:17 :param timeout: (optional) How long to wait for the server to send 12:04:17 data before giving up, as a float, or a :ref:`(connect timeout, 12:04:17 read timeout) ` tuple. 
12:04:17 :type timeout: float or tuple or urllib3 Timeout object 12:04:17 :param verify: (optional) Either a boolean, in which case it controls whether 12:04:17 we verify the server's TLS certificate, or a string, in which case it 12:04:17 must be a path to a CA bundle to use 12:04:17 :param cert: (optional) Any user-provided SSL certificate to be trusted. 12:04:17 :param proxies: (optional) The proxies dictionary to apply to the request. 12:04:17 :rtype: requests.Response 12:04:17 """ 12:04:17 12:04:17 try: 12:04:17 conn = self.get_connection_with_tls_context( 12:04:17 request, verify, proxies=proxies, cert=cert 12:04:17 ) 12:04:17 except LocationValueError as e: 12:04:17 raise InvalidURL(e, request=request) 12:04:17 12:04:17 self.cert_verify(conn, request.url, verify, cert) 12:04:17 url = self.request_url(request, proxies) 12:04:17 self.add_headers( 12:04:17 request, 12:04:17 stream=stream, 12:04:17 timeout=timeout, 12:04:17 verify=verify, 12:04:17 cert=cert, 12:04:17 proxies=proxies, 12:04:17 ) 12:04:17 12:04:17 chunked = not (request.body is None or "Content-Length" in request.headers) 12:04:17 12:04:17 if isinstance(timeout, tuple): 12:04:17 try: 12:04:17 connect, read = timeout 12:04:17 timeout = TimeoutSauce(connect=connect, read=read) 12:04:17 except ValueError: 12:04:17 raise ValueError( 12:04:17 f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 12:04:17 f"or a single float to set both timeouts to the same value." 12:04:17 ) 12:04:17 elif isinstance(timeout, TimeoutSauce): 12:04:17 pass 12:04:17 else: 12:04:17 timeout = TimeoutSauce(connect=timeout, read=timeout) 12:04:17 12:04:17 try: 12:04:17 > resp = conn.urlopen( 12:04:17 method=request.method, 12:04:17 url=url, 12:04:17 body=request.body, 12:04:17 headers=request.headers, 12:04:17 redirect=False, 12:04:17 assert_same_host=False, 12:04:17 preload_content=False, 12:04:17 decode_content=False, 12:04:17 retries=self.max_retries, 12:04:17 timeout=timeout, 12:04:17 chunked=chunked, 12:04:17 ) 12:04:17 12:04:17 ../.tox/tests221/lib/python3.11/site-packages/requests/adapters.py:667: 12:04:17 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 12:04:17 ../.tox/tests221/lib/python3.11/site-packages/urllib3/connectionpool.py:841: in urlopen 12:04:17 retries = retries.increment( 12:04:17 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 12:04:17 12:04:17 self = Retry(total=0, connect=None, read=False, redirect=None, status=None) 12:04:17 method = 'GET' 12:04:17 url = '/rests/data/network-topology:network-topology/topology=topology-netconf/node=ROADM-A1?content=nonconfig' 12:04:17 response = None 12:04:17 error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused') 12:04:17 _pool = 12:04:17 _stacktrace = 12:04:17 12:04:17 def increment( 12:04:17 self, 12:04:17 method: str | None = None, 12:04:17 url: str | None = None, 12:04:17 response: BaseHTTPResponse | None = None, 12:04:17 error: Exception | None = None, 12:04:17 _pool: ConnectionPool | None = None, 12:04:17 _stacktrace: TracebackType | None = None, 12:04:17 ) -> Self: 12:04:17 """Return a new Retry object with incremented retry counters. 12:04:17 12:04:17 :param response: A response object, or None, if the server did not 12:04:17 return a response. 12:04:17 :type response: :class:`~urllib3.response.BaseHTTPResponse` 12:04:17 :param Exception error: An error encountered during the request, or 12:04:17 None if the response was received successfully. 
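The Retry(total=0, connect=None, read=False, redirect=None, status=None) object walked through in this frame is what requests builds for its default HTTPAdapter max_retries of 0, so the very first refused connection exhausts the budget, increment() raises MaxRetryError, and the adapter re-wraps it as requests.exceptions.ConnectionError. If the harness wanted to ride out a slow controller start at the HTTP layer instead, a real retry policy could be mounted on a Session; the following is a minimal sketch using the public requests/urllib3 APIs, with arbitrary example values rather than TransportPCE settings.

# Hypothetical: replace requests' default Retry(total=0) with a retry
# budget and exponential backoff. The numbers are arbitrary examples,
# not TransportPCE settings.
import requests
from requests.adapters import HTTPAdapter
from urllib3.util.retry import Retry

retry = Retry(total=5, connect=5, backoff_factor=1.0,
              status_forcelist=(502, 503), allowed_methods=("GET", "PUT"))
session = requests.Session()
session.mount("http://", HTTPAdapter(max_retries=retry))
# session.get(...) / session.put(...) now retry refused connections with
# exponential backoff before surfacing ConnectionError.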
12:04:17 12:04:17 :return: A new ``Retry`` object. 12:04:17 """ 12:04:17 if self.total is False and error: 12:04:17 # Disabled, indicate to re-raise the error. 12:04:17 raise reraise(type(error), error, _stacktrace) 12:04:17 12:04:17 total = self.total 12:04:17 if total is not None: 12:04:17 total -= 1 12:04:17 12:04:17 connect = self.connect 12:04:17 read = self.read 12:04:17 redirect = self.redirect 12:04:17 status_count = self.status 12:04:17 other = self.other 12:04:17 cause = "unknown" 12:04:17 status = None 12:04:17 redirect_location = None 12:04:17 12:04:17 if error and self._is_connection_error(error): 12:04:17 # Connect retry? 12:04:17 if connect is False: 12:04:17 raise reraise(type(error), error, _stacktrace) 12:04:17 elif connect is not None: 12:04:17 connect -= 1 12:04:17 12:04:17 elif error and self._is_read_error(error): 12:04:17 # Read retry? 12:04:17 if read is False or method is None or not self._is_method_retryable(method): 12:04:17 raise reraise(type(error), error, _stacktrace) 12:04:17 elif read is not None: 12:04:17 read -= 1 12:04:17 12:04:17 elif error: 12:04:17 # Other retry? 12:04:17 if other is not None: 12:04:17 other -= 1 12:04:17 12:04:17 elif response and response.get_redirect_location(): 12:04:17 # Redirect retry? 12:04:17 if redirect is not None: 12:04:17 redirect -= 1 12:04:17 cause = "too many redirects" 12:04:17 response_redirect_location = response.get_redirect_location() 12:04:17 if response_redirect_location: 12:04:17 redirect_location = response_redirect_location 12:04:17 status = response.status 12:04:17 12:04:17 else: 12:04:17 # Incrementing because of a server error like a 500 in 12:04:17 # status_forcelist and the given method is in the allowed_methods 12:04:17 cause = ResponseError.GENERIC_ERROR 12:04:17 if response and response.status: 12:04:17 if status_count is not None: 12:04:17 status_count -= 1 12:04:17 cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status) 12:04:17 status = response.status 12:04:17 12:04:17 history = self.history + ( 12:04:17 RequestHistory(method, url, error, status, redirect_location), 12:04:17 ) 12:04:17 12:04:17 new_retry = self.new( 12:04:17 total=total, 12:04:17 connect=connect, 12:04:17 read=read, 12:04:17 redirect=redirect, 12:04:17 status=status_count, 12:04:17 other=other, 12:04:17 history=history, 12:04:17 ) 12:04:17 12:04:17 if new_retry.is_exhausted(): 12:04:17 reason = error or ResponseError(cause) 12:04:17 > raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type] 12:04:17 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 12:04:17 E urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=8183): Max retries exceeded with url: /rests/data/network-topology:network-topology/topology=topology-netconf/node=ROADM-A1?content=nonconfig (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 12:04:17 12:04:17 ../.tox/tests221/lib/python3.11/site-packages/urllib3/util/retry.py:519: MaxRetryError 12:04:17 12:04:17 During handling of the above exception, another exception occurred: 12:04:17 12:04:17 self = 12:04:17 12:04:17 def test_02_rdm_device_connected(self): 12:04:17 > response = test_utils.check_device_connection("ROADM-A1") 12:04:17 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 12:04:17 12:04:17 transportpce_tests/2.2.1/test01_portmapping.py:54: 12:04:17 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 12:04:17 transportpce_tests/common/test_utils.py:390: in 
check_device_connection 12:04:17 response = get_request(url[RESTCONF_VERSION].format('{}', node)) 12:04:17 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 12:04:17 transportpce_tests/common/test_utils.py:116: in get_request 12:04:17 return requests.request( 12:04:17 ../.tox/tests221/lib/python3.11/site-packages/requests/api.py:59: in request 12:04:17 return session.request(method=method, url=url, **kwargs) 12:04:17 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 12:04:17 ../.tox/tests221/lib/python3.11/site-packages/requests/sessions.py:589: in request 12:04:17 resp = self.send(prep, **send_kwargs) 12:04:17 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 12:04:17 ../.tox/tests221/lib/python3.11/site-packages/requests/sessions.py:703: in send 12:04:17 r = adapter.send(request, **kwargs) 12:04:17 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 12:04:17 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 12:04:17 12:04:17 self = 12:04:17 request = , stream = False 12:04:17 timeout = Timeout(connect=30, read=30, total=None), verify = True, cert = None 12:04:17 proxies = OrderedDict() 12:04:17 12:04:17 def send( 12:04:17 self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 12:04:17 ): 12:04:17 """Sends PreparedRequest object. Returns Response object. 12:04:17 12:04:17 :param request: The :class:`PreparedRequest ` being sent. 12:04:17 :param stream: (optional) Whether to stream the request content. 12:04:17 :param timeout: (optional) How long to wait for the server to send 12:04:17 data before giving up, as a float, or a :ref:`(connect timeout, 12:04:17 read timeout) ` tuple. 12:04:17 :type timeout: float or tuple or urllib3 Timeout object 12:04:17 :param verify: (optional) Either a boolean, in which case it controls whether 12:04:17 we verify the server's TLS certificate, or a string, in which case it 12:04:17 must be a path to a CA bundle to use 12:04:17 :param cert: (optional) Any user-provided SSL certificate to be trusted. 12:04:17 :param proxies: (optional) The proxies dictionary to apply to the request. 12:04:17 :rtype: requests.Response 12:04:17 """ 12:04:17 12:04:17 try: 12:04:17 conn = self.get_connection_with_tls_context( 12:04:17 request, verify, proxies=proxies, cert=cert 12:04:17 ) 12:04:17 except LocationValueError as e: 12:04:17 raise InvalidURL(e, request=request) 12:04:17 12:04:17 self.cert_verify(conn, request.url, verify, cert) 12:04:17 url = self.request_url(request, proxies) 12:04:17 self.add_headers( 12:04:17 request, 12:04:17 stream=stream, 12:04:17 timeout=timeout, 12:04:17 verify=verify, 12:04:17 cert=cert, 12:04:17 proxies=proxies, 12:04:17 ) 12:04:17 12:04:17 chunked = not (request.body is None or "Content-Length" in request.headers) 12:04:17 12:04:17 if isinstance(timeout, tuple): 12:04:17 try: 12:04:17 connect, read = timeout 12:04:17 timeout = TimeoutSauce(connect=connect, read=read) 12:04:17 except ValueError: 12:04:17 raise ValueError( 12:04:17 f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 12:04:17 f"or a single float to set both timeouts to the same value." 
12:04:17 ) 12:04:17 elif isinstance(timeout, TimeoutSauce): 12:04:17 pass 12:04:17 else: 12:04:17 timeout = TimeoutSauce(connect=timeout, read=timeout) 12:04:17 12:04:17 try: 12:04:17 resp = conn.urlopen( 12:04:17 method=request.method, 12:04:17 url=url, 12:04:17 body=request.body, 12:04:17 headers=request.headers, 12:04:17 redirect=False, 12:04:17 assert_same_host=False, 12:04:17 preload_content=False, 12:04:17 decode_content=False, 12:04:17 retries=self.max_retries, 12:04:17 timeout=timeout, 12:04:17 chunked=chunked, 12:04:17 ) 12:04:17 12:04:17 except (ProtocolError, OSError) as err: 12:04:17 raise ConnectionError(err, request=request) 12:04:17 12:04:17 except MaxRetryError as e: 12:04:17 if isinstance(e.reason, ConnectTimeoutError): 12:04:17 # TODO: Remove this in 3.0.0: see #2811 12:04:17 if not isinstance(e.reason, NewConnectionError): 12:04:17 raise ConnectTimeout(e, request=request) 12:04:17 12:04:17 if isinstance(e.reason, ResponseError): 12:04:17 raise RetryError(e, request=request) 12:04:17 12:04:17 if isinstance(e.reason, _ProxyError): 12:04:17 raise ProxyError(e, request=request) 12:04:17 12:04:17 if isinstance(e.reason, _SSLError): 12:04:17 # This branch is for urllib3 v1.22 and later. 12:04:17 raise SSLError(e, request=request) 12:04:17 12:04:17 > raise ConnectionError(e, request=request) 12:04:17 E requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=8183): Max retries exceeded with url: /rests/data/network-topology:network-topology/topology=topology-netconf/node=ROADM-A1?content=nonconfig (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 12:04:17 12:04:17 ../.tox/tests221/lib/python3.11/site-packages/requests/adapters.py:700: ConnectionError 12:04:17 ----------------------------- Captured stdout call ----------------------------- 12:04:17 execution of test_02_rdm_device_connected 12:04:17 _________ TransportPCEPortMappingTesting.test_03_rdm_portmapping_info __________ 12:04:17 12:04:17 self = 12:04:17 12:04:17 def _new_conn(self) -> socket.socket: 12:04:17 """Establish a socket connection and set nodelay settings on it. 12:04:17 12:04:17 :return: New socket connection. 12:04:17 """ 12:04:17 try: 12:04:17 > sock = connection.create_connection( 12:04:17 (self._dns_host, self.port), 12:04:17 self.timeout, 12:04:17 source_address=self.source_address, 12:04:17 socket_options=self.socket_options, 12:04:17 ) 12:04:17 12:04:17 ../.tox/tests221/lib/python3.11/site-packages/urllib3/connection.py:198: 12:04:17 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 12:04:17 ../.tox/tests221/lib/python3.11/site-packages/urllib3/util/connection.py:85: in create_connection 12:04:17 raise err 12:04:17 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 12:04:17 12:04:17 address = ('localhost', 8183), timeout = 30, source_address = None 12:04:17 socket_options = [(6, 1, 1)] 12:04:17 12:04:17 def create_connection( 12:04:17 address: tuple[str, int], 12:04:17 timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 12:04:17 source_address: tuple[str, int] | None = None, 12:04:17 socket_options: _TYPE_SOCKET_OPTIONS | None = None, 12:04:17 ) -> socket.socket: 12:04:17 """Connect to *address* and return the socket object. 12:04:17 12:04:17 Convenience function. Connect to *address* (a 2-tuple ``(host, 12:04:17 port)``) and return the socket object. 
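Every failure in this run has the same root cause: nothing is accepting TCP connections on localhost:8183, so each RESTCONF GET dies with [Errno 111] Connection refused before any retry logic is relevant. A minimal pre-flight probe along the following lines (a hypothetical helper, not part of the existing test_utils module) would surface that once, up front, instead of once per test:

import socket

def restconf_reachable(host: str = "localhost", port: int = 8183, timeout: float = 5.0) -> bool:
    """Return True if something is listening on the RESTCONF port."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        # ConnectionRefusedError ([Errno 111]) lands here when no listener is bound to the port.
        return False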
Passing the optional 12:04:17 *timeout* parameter will set the timeout on the socket instance 12:04:17 before attempting to connect. If no *timeout* is supplied, the 12:04:17 global default timeout setting returned by :func:`socket.getdefaulttimeout` 12:04:17 is used. If *source_address* is set it must be a tuple of (host, port) 12:04:17 for the socket to bind as a source address before making the connection. 12:04:17 An host of '' or port 0 tells the OS to use the default. 12:04:17 """ 12:04:17 12:04:17 host, port = address 12:04:17 if host.startswith("["): 12:04:17 host = host.strip("[]") 12:04:17 err = None 12:04:17 12:04:17 # Using the value from allowed_gai_family() in the context of getaddrinfo lets 12:04:17 # us select whether to work with IPv4 DNS records, IPv6 records, or both. 12:04:17 # The original create_connection function always returns all records. 12:04:17 family = allowed_gai_family() 12:04:17 12:04:17 try: 12:04:17 host.encode("idna") 12:04:17 except UnicodeError: 12:04:17 raise LocationParseError(f"'{host}', label empty or too long") from None 12:04:17 12:04:17 for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM): 12:04:17 af, socktype, proto, canonname, sa = res 12:04:17 sock = None 12:04:17 try: 12:04:17 sock = socket.socket(af, socktype, proto) 12:04:17 12:04:17 # If provided, set socket level options before connecting. 12:04:17 _set_socket_options(sock, socket_options) 12:04:17 12:04:17 if timeout is not _DEFAULT_TIMEOUT: 12:04:17 sock.settimeout(timeout) 12:04:17 if source_address: 12:04:17 sock.bind(source_address) 12:04:17 > sock.connect(sa) 12:04:17 E ConnectionRefusedError: [Errno 111] Connection refused 12:04:17 12:04:17 ../.tox/tests221/lib/python3.11/site-packages/urllib3/util/connection.py:73: ConnectionRefusedError 12:04:17 12:04:17 The above exception was the direct cause of the following exception: 12:04:17 12:04:17 self = 12:04:17 method = 'GET' 12:04:17 url = '/rests/data/transportpce-portmapping:network/nodes=ROADM-A1/node-info' 12:04:17 body = None 12:04:17 headers = {'User-Agent': 'python-requests/2.32.4', 'Accept-Encoding': 'gzip, deflate', 'Accept': 'application/json', 'Connection': 'keep-alive', 'Content-Type': 'application/json', 'Authorization': 'Basic YWRtaW46YWRtaW4='} 12:04:17 retries = Retry(total=0, connect=None, read=False, redirect=None, status=None) 12:04:17 redirect = False, assert_same_host = False 12:04:17 timeout = Timeout(connect=30, read=30, total=None), pool_timeout = None 12:04:17 release_conn = False, chunked = False, body_pos = None, preload_content = False 12:04:17 decode_content = False, response_kw = {} 12:04:17 parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/rests/data/transportpce-portmapping:network/nodes=ROADM-A1/node-info', query=None, fragment=None) 12:04:17 destination_scheme = None, conn = None, release_this_conn = True 12:04:17 http_tunnel_required = False, err = None, clean_exit = False 12:04:17 12:04:17 def urlopen( # type: ignore[override] 12:04:17 self, 12:04:17 method: str, 12:04:17 url: str, 12:04:17 body: _TYPE_BODY | None = None, 12:04:17 headers: typing.Mapping[str, str] | None = None, 12:04:17 retries: Retry | bool | int | None = None, 12:04:17 redirect: bool = True, 12:04:17 assert_same_host: bool = True, 12:04:17 timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 12:04:17 pool_timeout: int | None = None, 12:04:17 release_conn: bool | None = None, 12:04:17 chunked: bool = False, 12:04:17 body_pos: _TYPE_BODY_POSITION | None = None, 12:04:17 preload_content: bool = True, 
12:04:17 decode_content: bool = True, 12:04:17 **response_kw: typing.Any, 12:04:17 ) -> BaseHTTPResponse: 12:04:17 """ 12:04:17 Get a connection from the pool and perform an HTTP request. This is the 12:04:17 lowest level call for making a request, so you'll need to specify all 12:04:17 the raw details. 12:04:17 12:04:17 .. note:: 12:04:17 12:04:17 More commonly, it's appropriate to use a convenience method 12:04:17 such as :meth:`request`. 12:04:17 12:04:17 .. note:: 12:04:17 12:04:17 `release_conn` will only behave as expected if 12:04:17 `preload_content=False` because we want to make 12:04:17 `preload_content=False` the default behaviour someday soon without 12:04:17 breaking backwards compatibility. 12:04:17 12:04:17 :param method: 12:04:17 HTTP request method (such as GET, POST, PUT, etc.) 12:04:17 12:04:17 :param url: 12:04:17 The URL to perform the request on. 12:04:17 12:04:17 :param body: 12:04:17 Data to send in the request body, either :class:`str`, :class:`bytes`, 12:04:17 an iterable of :class:`str`/:class:`bytes`, or a file-like object. 12:04:17 12:04:17 :param headers: 12:04:17 Dictionary of custom headers to send, such as User-Agent, 12:04:17 If-None-Match, etc. If None, pool headers are used. If provided, 12:04:17 these headers completely replace any pool-specific headers. 12:04:17 12:04:17 :param retries: 12:04:17 Configure the number of retries to allow before raising a 12:04:17 :class:`~urllib3.exceptions.MaxRetryError` exception. 12:04:17 12:04:17 If ``None`` (default) will retry 3 times, see ``Retry.DEFAULT``. Pass a 12:04:17 :class:`~urllib3.util.retry.Retry` object for fine-grained control 12:04:17 over different types of retries. 12:04:17 Pass an integer number to retry connection errors that many times, 12:04:17 but no other types of errors. Pass zero to never retry. 12:04:17 12:04:17 If ``False``, then retries are disabled and any exception is raised 12:04:17 immediately. Also, instead of raising a MaxRetryError on redirects, 12:04:17 the redirect response will be returned. 12:04:17 12:04:17 :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. 12:04:17 12:04:17 :param redirect: 12:04:17 If True, automatically handle redirects (status codes 301, 302, 12:04:17 303, 307, 308). Each redirect counts as a retry. Disabling retries 12:04:17 will disable redirect, too. 12:04:17 12:04:17 :param assert_same_host: 12:04:17 If ``True``, will make sure that the host of the pool requests is 12:04:17 consistent else will raise HostChangedError. When ``False``, you can 12:04:17 use the pool on an HTTP proxy and request foreign hosts. 12:04:17 12:04:17 :param timeout: 12:04:17 If specified, overrides the default timeout for this one 12:04:17 request. It may be a float (in seconds) or an instance of 12:04:17 :class:`urllib3.util.Timeout`. 12:04:17 12:04:17 :param pool_timeout: 12:04:17 If set and the pool is set to block=True, then this method will 12:04:17 block for ``pool_timeout`` seconds and raise EmptyPoolError if no 12:04:17 connection is available within the time period. 12:04:17 12:04:17 :param bool preload_content: 12:04:17 If True, the response's body will be preloaded into memory. 12:04:17 12:04:17 :param bool decode_content: 12:04:17 If True, will attempt to decode the body based on the 12:04:17 'content-encoding' header. 
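The retries value captured in the locals above, Retry(total=0, connect=None, read=False, redirect=None, status=None), is exhausted by a single connection error: increment() decrements total to -1 and raises MaxRetryError instead of scheduling another attempt. A standalone sketch of that path (urllib3 v2 API as quoted in this traceback; the None passed as the connection argument is only a stand-in):

from urllib3.exceptions import MaxRetryError, NewConnectionError
from urllib3.util.retry import Retry

retries = Retry(total=0, connect=None, read=False, redirect=None, status=None)
# Stand-in error object; in the real run urllib3 builds it from the refused socket connection.
error = NewConnectionError(None, "Failed to establish a new connection: [Errno 111] Connection refused")
try:
    retries.increment(
        method="GET",
        url="/rests/data/transportpce-portmapping:network/nodes=ROADM-A1/node-info",
        error=error,
    )
except MaxRetryError as exc:
    print(exc.reason)  # the NewConnectionError above, as seen in the log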
12:04:17 12:04:17 :param release_conn: 12:04:17 If False, then the urlopen call will not release the connection 12:04:17 back into the pool once a response is received (but will release if 12:04:17 you read the entire contents of the response such as when 12:04:17 `preload_content=True`). This is useful if you're not preloading 12:04:17 the response's content immediately. You will need to call 12:04:17 ``r.release_conn()`` on the response ``r`` to return the connection 12:04:17 back into the pool. If None, it takes the value of ``preload_content`` 12:04:17 which defaults to ``True``. 12:04:17 12:04:17 :param bool chunked: 12:04:17 If True, urllib3 will send the body using chunked transfer 12:04:17 encoding. Otherwise, urllib3 will send the body using the standard 12:04:17 content-length form. Defaults to False. 12:04:17 12:04:17 :param int body_pos: 12:04:17 Position to seek to in file-like body in the event of a retry or 12:04:17 redirect. Typically this won't need to be set because urllib3 will 12:04:17 auto-populate the value when needed. 12:04:17 """ 12:04:17 parsed_url = parse_url(url) 12:04:17 destination_scheme = parsed_url.scheme 12:04:17 12:04:17 if headers is None: 12:04:17 headers = self.headers 12:04:17 12:04:17 if not isinstance(retries, Retry): 12:04:17 retries = Retry.from_int(retries, redirect=redirect, default=self.retries) 12:04:17 12:04:17 if release_conn is None: 12:04:17 release_conn = preload_content 12:04:17 12:04:17 # Check host 12:04:17 if assert_same_host and not self.is_same_host(url): 12:04:17 raise HostChangedError(self, url, retries) 12:04:17 12:04:17 # Ensure that the URL we're connecting to is properly encoded 12:04:17 if url.startswith("/"): 12:04:17 url = to_str(_encode_target(url)) 12:04:17 else: 12:04:17 url = to_str(parsed_url.url) 12:04:17 12:04:17 conn = None 12:04:17 12:04:17 # Track whether `conn` needs to be released before 12:04:17 # returning/raising/recursing. Update this variable if necessary, and 12:04:17 # leave `release_conn` constant throughout the function. That way, if 12:04:17 # the function recurses, the original value of `release_conn` will be 12:04:17 # passed down into the recursive call, and its value will be respected. 12:04:17 # 12:04:17 # See issue #651 [1] for details. 12:04:17 # 12:04:17 # [1] 12:04:17 release_this_conn = release_conn 12:04:17 12:04:17 http_tunnel_required = connection_requires_http_tunnel( 12:04:17 self.proxy, self.proxy_config, destination_scheme 12:04:17 ) 12:04:17 12:04:17 # Merge the proxy headers. Only done when not using HTTP CONNECT. We 12:04:17 # have to copy the headers dict so we can safely change it without those 12:04:17 # changes being reflected in anyone else's copy. 12:04:17 if not http_tunnel_required: 12:04:17 headers = headers.copy() # type: ignore[attr-defined] 12:04:17 headers.update(self.proxy_headers) # type: ignore[union-attr] 12:04:17 12:04:17 # Must keep the exception bound to a separate variable or else Python 3 12:04:17 # complains about UnboundLocalError. 12:04:17 err = None 12:04:17 12:04:17 # Keep track of whether we cleanly exited the except block. This 12:04:17 # ensures we do proper cleanup in finally. 12:04:17 clean_exit = False 12:04:17 12:04:17 # Rewind body position, if needed. Record current position 12:04:17 # for future rewinds in the event of a redirect/retry. 12:04:17 body_pos = set_file_position(body, body_pos) 12:04:17 12:04:17 try: 12:04:17 # Request a connection from the queue. 
12:04:17 timeout_obj = self._get_timeout(timeout) 12:04:17 conn = self._get_conn(timeout=pool_timeout) 12:04:17 12:04:17 conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] 12:04:17 12:04:17 # Is this a closed/new connection that requires CONNECT tunnelling? 12:04:17 if self.proxy is not None and http_tunnel_required and conn.is_closed: 12:04:17 try: 12:04:17 self._prepare_proxy(conn) 12:04:17 except (BaseSSLError, OSError, SocketTimeout) as e: 12:04:17 self._raise_timeout( 12:04:17 err=e, url=self.proxy.url, timeout_value=conn.timeout 12:04:17 ) 12:04:17 raise 12:04:17 12:04:17 # If we're going to release the connection in ``finally:``, then 12:04:17 # the response doesn't need to know about the connection. Otherwise 12:04:17 # it will also try to release it and we'll have a double-release 12:04:17 # mess. 12:04:17 response_conn = conn if not release_conn else None 12:04:17 12:04:17 # Make the request on the HTTPConnection object 12:04:17 > response = self._make_request( 12:04:17 conn, 12:04:17 method, 12:04:17 url, 12:04:17 timeout=timeout_obj, 12:04:17 body=body, 12:04:17 headers=headers, 12:04:17 chunked=chunked, 12:04:17 retries=retries, 12:04:17 response_conn=response_conn, 12:04:17 preload_content=preload_content, 12:04:17 decode_content=decode_content, 12:04:17 **response_kw, 12:04:17 ) 12:04:17 12:04:17 ../.tox/tests221/lib/python3.11/site-packages/urllib3/connectionpool.py:787: 12:04:17 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 12:04:17 ../.tox/tests221/lib/python3.11/site-packages/urllib3/connectionpool.py:493: in _make_request 12:04:17 conn.request( 12:04:17 ../.tox/tests221/lib/python3.11/site-packages/urllib3/connection.py:494: in request 12:04:17 self.endheaders() 12:04:17 /opt/pyenv/versions/3.11.10/lib/python3.11/http/client.py:1298: in endheaders 12:04:17 self._send_output(message_body, encode_chunked=encode_chunked) 12:04:17 /opt/pyenv/versions/3.11.10/lib/python3.11/http/client.py:1058: in _send_output 12:04:17 self.send(msg) 12:04:17 /opt/pyenv/versions/3.11.10/lib/python3.11/http/client.py:996: in send 12:04:17 self.connect() 12:04:17 ../.tox/tests221/lib/python3.11/site-packages/urllib3/connection.py:325: in connect 12:04:17 self.sock = self._new_conn() 12:04:17 ^^^^^^^^^^^^^^^^ 12:04:17 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 12:04:17 12:04:17 self = 12:04:17 12:04:17 def _new_conn(self) -> socket.socket: 12:04:17 """Establish a socket connection and set nodelay settings on it. 12:04:17 12:04:17 :return: New socket connection. 12:04:17 """ 12:04:17 try: 12:04:17 sock = connection.create_connection( 12:04:17 (self._dns_host, self.port), 12:04:17 self.timeout, 12:04:17 source_address=self.source_address, 12:04:17 socket_options=self.socket_options, 12:04:17 ) 12:04:17 except socket.gaierror as e: 12:04:17 raise NameResolutionError(self.host, self, e) from e 12:04:17 except SocketTimeout as e: 12:04:17 raise ConnectTimeoutError( 12:04:17 self, 12:04:17 f"Connection to {self.host} timed out. 
(connect timeout={self.timeout})", 12:04:17 ) from e 12:04:17 12:04:17 except OSError as e: 12:04:17 > raise NewConnectionError( 12:04:17 self, f"Failed to establish a new connection: {e}" 12:04:17 ) from e 12:04:17 E urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused 12:04:17 12:04:17 ../.tox/tests221/lib/python3.11/site-packages/urllib3/connection.py:213: NewConnectionError 12:04:17 12:04:17 The above exception was the direct cause of the following exception: 12:04:17 12:04:17 self = 12:04:17 request = , stream = False 12:04:17 timeout = Timeout(connect=30, read=30, total=None), verify = True, cert = None 12:04:17 proxies = OrderedDict() 12:04:17 12:04:17 def send( 12:04:17 self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 12:04:17 ): 12:04:17 """Sends PreparedRequest object. Returns Response object. 12:04:17 12:04:17 :param request: The :class:`PreparedRequest ` being sent. 12:04:17 :param stream: (optional) Whether to stream the request content. 12:04:17 :param timeout: (optional) How long to wait for the server to send 12:04:17 data before giving up, as a float, or a :ref:`(connect timeout, 12:04:17 read timeout) ` tuple. 12:04:17 :type timeout: float or tuple or urllib3 Timeout object 12:04:17 :param verify: (optional) Either a boolean, in which case it controls whether 12:04:17 we verify the server's TLS certificate, or a string, in which case it 12:04:17 must be a path to a CA bundle to use 12:04:17 :param cert: (optional) Any user-provided SSL certificate to be trusted. 12:04:17 :param proxies: (optional) The proxies dictionary to apply to the request. 12:04:17 :rtype: requests.Response 12:04:17 """ 12:04:17 12:04:17 try: 12:04:17 conn = self.get_connection_with_tls_context( 12:04:17 request, verify, proxies=proxies, cert=cert 12:04:17 ) 12:04:17 except LocationValueError as e: 12:04:17 raise InvalidURL(e, request=request) 12:04:17 12:04:17 self.cert_verify(conn, request.url, verify, cert) 12:04:17 url = self.request_url(request, proxies) 12:04:17 self.add_headers( 12:04:17 request, 12:04:17 stream=stream, 12:04:17 timeout=timeout, 12:04:17 verify=verify, 12:04:17 cert=cert, 12:04:17 proxies=proxies, 12:04:17 ) 12:04:17 12:04:17 chunked = not (request.body is None or "Content-Length" in request.headers) 12:04:17 12:04:17 if isinstance(timeout, tuple): 12:04:17 try: 12:04:17 connect, read = timeout 12:04:17 timeout = TimeoutSauce(connect=connect, read=read) 12:04:17 except ValueError: 12:04:17 raise ValueError( 12:04:17 f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 12:04:17 f"or a single float to set both timeouts to the same value." 
12:04:17 ) 12:04:17 elif isinstance(timeout, TimeoutSauce): 12:04:17 pass 12:04:17 else: 12:04:17 timeout = TimeoutSauce(connect=timeout, read=timeout) 12:04:17 12:04:17 try: 12:04:17 > resp = conn.urlopen( 12:04:17 method=request.method, 12:04:17 url=url, 12:04:17 body=request.body, 12:04:17 headers=request.headers, 12:04:17 redirect=False, 12:04:17 assert_same_host=False, 12:04:17 preload_content=False, 12:04:17 decode_content=False, 12:04:17 retries=self.max_retries, 12:04:17 timeout=timeout, 12:04:17 chunked=chunked, 12:04:17 ) 12:04:17 12:04:17 ../.tox/tests221/lib/python3.11/site-packages/requests/adapters.py:667: 12:04:17 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 12:04:17 ../.tox/tests221/lib/python3.11/site-packages/urllib3/connectionpool.py:841: in urlopen 12:04:17 retries = retries.increment( 12:04:17 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 12:04:17 12:04:17 self = Retry(total=0, connect=None, read=False, redirect=None, status=None) 12:04:17 method = 'GET' 12:04:17 url = '/rests/data/transportpce-portmapping:network/nodes=ROADM-A1/node-info' 12:04:17 response = None 12:04:17 error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused') 12:04:17 _pool = 12:04:17 _stacktrace = 12:04:17 12:04:17 def increment( 12:04:17 self, 12:04:17 method: str | None = None, 12:04:17 url: str | None = None, 12:04:17 response: BaseHTTPResponse | None = None, 12:04:17 error: Exception | None = None, 12:04:17 _pool: ConnectionPool | None = None, 12:04:17 _stacktrace: TracebackType | None = None, 12:04:17 ) -> Self: 12:04:17 """Return a new Retry object with incremented retry counters. 12:04:17 12:04:17 :param response: A response object, or None, if the server did not 12:04:17 return a response. 12:04:17 :type response: :class:`~urllib3.response.BaseHTTPResponse` 12:04:17 :param Exception error: An error encountered during the request, or 12:04:17 None if the response was received successfully. 12:04:17 12:04:17 :return: A new ``Retry`` object. 12:04:17 """ 12:04:17 if self.total is False and error: 12:04:17 # Disabled, indicate to re-raise the error. 12:04:17 raise reraise(type(error), error, _stacktrace) 12:04:17 12:04:17 total = self.total 12:04:17 if total is not None: 12:04:17 total -= 1 12:04:17 12:04:17 connect = self.connect 12:04:17 read = self.read 12:04:17 redirect = self.redirect 12:04:17 status_count = self.status 12:04:17 other = self.other 12:04:17 cause = "unknown" 12:04:17 status = None 12:04:17 redirect_location = None 12:04:17 12:04:17 if error and self._is_connection_error(error): 12:04:17 # Connect retry? 12:04:17 if connect is False: 12:04:17 raise reraise(type(error), error, _stacktrace) 12:04:17 elif connect is not None: 12:04:17 connect -= 1 12:04:17 12:04:17 elif error and self._is_read_error(error): 12:04:17 # Read retry? 12:04:17 if read is False or method is None or not self._is_method_retryable(method): 12:04:17 raise reraise(type(error), error, _stacktrace) 12:04:17 elif read is not None: 12:04:17 read -= 1 12:04:17 12:04:17 elif error: 12:04:17 # Other retry? 12:04:17 if other is not None: 12:04:17 other -= 1 12:04:17 12:04:17 elif response and response.get_redirect_location(): 12:04:17 # Redirect retry? 
12:04:17 if redirect is not None: 12:04:17 redirect -= 1 12:04:17 cause = "too many redirects" 12:04:17 response_redirect_location = response.get_redirect_location() 12:04:17 if response_redirect_location: 12:04:17 redirect_location = response_redirect_location 12:04:17 status = response.status 12:04:17 12:04:17 else: 12:04:17 # Incrementing because of a server error like a 500 in 12:04:17 # status_forcelist and the given method is in the allowed_methods 12:04:17 cause = ResponseError.GENERIC_ERROR 12:04:17 if response and response.status: 12:04:17 if status_count is not None: 12:04:17 status_count -= 1 12:04:17 cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status) 12:04:17 status = response.status 12:04:17 12:04:17 history = self.history + ( 12:04:17 RequestHistory(method, url, error, status, redirect_location), 12:04:17 ) 12:04:17 12:04:17 new_retry = self.new( 12:04:17 total=total, 12:04:17 connect=connect, 12:04:17 read=read, 12:04:17 redirect=redirect, 12:04:17 status=status_count, 12:04:17 other=other, 12:04:17 history=history, 12:04:17 ) 12:04:17 12:04:17 if new_retry.is_exhausted(): 12:04:17 reason = error or ResponseError(cause) 12:04:17 > raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type] 12:04:17 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 12:04:17 E urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=8183): Max retries exceeded with url: /rests/data/transportpce-portmapping:network/nodes=ROADM-A1/node-info (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 12:04:17 12:04:17 ../.tox/tests221/lib/python3.11/site-packages/urllib3/util/retry.py:519: MaxRetryError 12:04:17 12:04:17 During handling of the above exception, another exception occurred: 12:04:17 12:04:17 self = 12:04:17 12:04:17 def test_03_rdm_portmapping_info(self): 12:04:17 > response = test_utils.get_portmapping_node_attr("ROADM-A1", "node-info", None) 12:04:17 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 12:04:17 12:04:17 transportpce_tests/2.2.1/test01_portmapping.py:59: 12:04:17 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 12:04:17 transportpce_tests/common/test_utils.py:492: in get_portmapping_node_attr 12:04:17 response = get_request(target_url) 12:04:17 ^^^^^^^^^^^^^^^^^^^^^^^ 12:04:17 transportpce_tests/common/test_utils.py:116: in get_request 12:04:17 return requests.request( 12:04:17 ../.tox/tests221/lib/python3.11/site-packages/requests/api.py:59: in request 12:04:17 return session.request(method=method, url=url, **kwargs) 12:04:17 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 12:04:17 ../.tox/tests221/lib/python3.11/site-packages/requests/sessions.py:589: in request 12:04:17 resp = self.send(prep, **send_kwargs) 12:04:17 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 12:04:17 ../.tox/tests221/lib/python3.11/site-packages/requests/sessions.py:703: in send 12:04:17 r = adapter.send(request, **kwargs) 12:04:17 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 12:04:17 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 12:04:17 12:04:17 self = 12:04:17 request = , stream = False 12:04:17 timeout = Timeout(connect=30, read=30, total=None), verify = True, cert = None 12:04:17 proxies = OrderedDict() 12:04:17 12:04:17 def send( 12:04:17 self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 12:04:17 ): 12:04:17 """Sends PreparedRequest object. Returns Response object. 
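Each portmapping test reaches this point through the same chain: check_device_connection / get_portmapping_node_attr call test_utils.get_request, which calls requests.request and ends in HTTPAdapter.send; with the controller down, every call fails immediately because the adapter's Retry(total=0, ...) allows no reconnection attempts. A hypothetical variant of the request helper (not the existing test_utils.get_request) could mount a backoff-enabled Retry so the suite waits for the controller instead of failing instantly:

import requests
from requests.adapters import HTTPAdapter
from urllib3.util.retry import Retry

session = requests.Session()
session.mount("http://", HTTPAdapter(max_retries=Retry(total=10, backoff_factor=1)))
response = session.get(
    "http://localhost:8183/rests/data/transportpce-portmapping:network/nodes=ROADM-A1/node-info",
    auth=("admin", "admin"),  # matches the Basic YWRtaW46YWRtaW4= header in the captured request
    headers={"Content-Type": "application/json", "Accept": "application/json"},
    timeout=(30, 30),
)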
12:04:17 12:04:17 :param request: The :class:`PreparedRequest ` being sent. 12:04:17 :param stream: (optional) Whether to stream the request content. 12:04:17 :param timeout: (optional) How long to wait for the server to send 12:04:17 data before giving up, as a float, or a :ref:`(connect timeout, 12:04:17 read timeout) ` tuple. 12:04:17 :type timeout: float or tuple or urllib3 Timeout object 12:04:17 :param verify: (optional) Either a boolean, in which case it controls whether 12:04:17 we verify the server's TLS certificate, or a string, in which case it 12:04:17 must be a path to a CA bundle to use 12:04:17 :param cert: (optional) Any user-provided SSL certificate to be trusted. 12:04:17 :param proxies: (optional) The proxies dictionary to apply to the request. 12:04:17 :rtype: requests.Response 12:04:17 """ 12:04:17 12:04:17 try: 12:04:17 conn = self.get_connection_with_tls_context( 12:04:17 request, verify, proxies=proxies, cert=cert 12:04:17 ) 12:04:17 except LocationValueError as e: 12:04:17 raise InvalidURL(e, request=request) 12:04:17 12:04:17 self.cert_verify(conn, request.url, verify, cert) 12:04:17 url = self.request_url(request, proxies) 12:04:17 self.add_headers( 12:04:17 request, 12:04:17 stream=stream, 12:04:17 timeout=timeout, 12:04:17 verify=verify, 12:04:17 cert=cert, 12:04:17 proxies=proxies, 12:04:17 ) 12:04:17 12:04:17 chunked = not (request.body is None or "Content-Length" in request.headers) 12:04:17 12:04:17 if isinstance(timeout, tuple): 12:04:17 try: 12:04:17 connect, read = timeout 12:04:17 timeout = TimeoutSauce(connect=connect, read=read) 12:04:17 except ValueError: 12:04:17 raise ValueError( 12:04:17 f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 12:04:17 f"or a single float to set both timeouts to the same value." 12:04:17 ) 12:04:17 elif isinstance(timeout, TimeoutSauce): 12:04:17 pass 12:04:17 else: 12:04:17 timeout = TimeoutSauce(connect=timeout, read=timeout) 12:04:17 12:04:17 try: 12:04:17 resp = conn.urlopen( 12:04:17 method=request.method, 12:04:17 url=url, 12:04:17 body=request.body, 12:04:17 headers=request.headers, 12:04:17 redirect=False, 12:04:17 assert_same_host=False, 12:04:17 preload_content=False, 12:04:17 decode_content=False, 12:04:17 retries=self.max_retries, 12:04:17 timeout=timeout, 12:04:17 chunked=chunked, 12:04:17 ) 12:04:17 12:04:17 except (ProtocolError, OSError) as err: 12:04:17 raise ConnectionError(err, request=request) 12:04:17 12:04:17 except MaxRetryError as e: 12:04:17 if isinstance(e.reason, ConnectTimeoutError): 12:04:17 # TODO: Remove this in 3.0.0: see #2811 12:04:17 if not isinstance(e.reason, NewConnectionError): 12:04:17 raise ConnectTimeout(e, request=request) 12:04:17 12:04:17 if isinstance(e.reason, ResponseError): 12:04:17 raise RetryError(e, request=request) 12:04:17 12:04:17 if isinstance(e.reason, _ProxyError): 12:04:17 raise ProxyError(e, request=request) 12:04:17 12:04:17 if isinstance(e.reason, _SSLError): 12:04:17 # This branch is for urllib3 v1.22 and later. 
12:04:17 raise SSLError(e, request=request) 12:04:17 12:04:17 > raise ConnectionError(e, request=request) 12:04:17 E requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=8183): Max retries exceeded with url: /rests/data/transportpce-portmapping:network/nodes=ROADM-A1/node-info (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 12:04:17 12:04:17 ../.tox/tests221/lib/python3.11/site-packages/requests/adapters.py:700: ConnectionError 12:04:17 ----------------------------- Captured stdout call ----------------------------- 12:04:17 execution of test_03_rdm_portmapping_info 12:04:17 _____ TransportPCEPortMappingTesting.test_04_rdm_portmapping_DEG1_TTP_TXRX _____ 12:04:17 12:04:17 self = 12:04:17 12:04:17 def _new_conn(self) -> socket.socket: 12:04:17 """Establish a socket connection and set nodelay settings on it. 12:04:17 12:04:17 :return: New socket connection. 12:04:17 """ 12:04:17 try: 12:04:17 > sock = connection.create_connection( 12:04:17 (self._dns_host, self.port), 12:04:17 self.timeout, 12:04:17 source_address=self.source_address, 12:04:17 socket_options=self.socket_options, 12:04:17 ) 12:04:17 12:04:17 ../.tox/tests221/lib/python3.11/site-packages/urllib3/connection.py:198: 12:04:17 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 12:04:17 ../.tox/tests221/lib/python3.11/site-packages/urllib3/util/connection.py:85: in create_connection 12:04:17 raise err 12:04:17 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 12:04:17 12:04:17 address = ('localhost', 8183), timeout = 30, source_address = None 12:04:17 socket_options = [(6, 1, 1)] 12:04:17 12:04:17 def create_connection( 12:04:17 address: tuple[str, int], 12:04:17 timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 12:04:17 source_address: tuple[str, int] | None = None, 12:04:17 socket_options: _TYPE_SOCKET_OPTIONS | None = None, 12:04:17 ) -> socket.socket: 12:04:17 """Connect to *address* and return the socket object. 12:04:17 12:04:17 Convenience function. Connect to *address* (a 2-tuple ``(host, 12:04:17 port)``) and return the socket object. Passing the optional 12:04:17 *timeout* parameter will set the timeout on the socket instance 12:04:17 before attempting to connect. If no *timeout* is supplied, the 12:04:17 global default timeout setting returned by :func:`socket.getdefaulttimeout` 12:04:17 is used. If *source_address* is set it must be a tuple of (host, port) 12:04:17 for the socket to bind as a source address before making the connection. 12:04:17 An host of '' or port 0 tells the OS to use the default. 12:04:17 """ 12:04:17 12:04:17 host, port = address 12:04:17 if host.startswith("["): 12:04:17 host = host.strip("[]") 12:04:17 err = None 12:04:17 12:04:17 # Using the value from allowed_gai_family() in the context of getaddrinfo lets 12:04:17 # us select whether to work with IPv4 DNS records, IPv6 records, or both. 12:04:17 # The original create_connection function always returns all records. 12:04:17 family = allowed_gai_family() 12:04:17 12:04:17 try: 12:04:17 host.encode("idna") 12:04:17 except UnicodeError: 12:04:17 raise LocationParseError(f"'{host}', label empty or too long") from None 12:04:17 12:04:17 for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM): 12:04:17 af, socktype, proto, canonname, sa = res 12:04:17 sock = None 12:04:17 try: 12:04:17 sock = socket.socket(af, socktype, proto) 12:04:17 12:04:17 # If provided, set socket level options before connecting. 
12:04:17 _set_socket_options(sock, socket_options) 12:04:17 12:04:17 if timeout is not _DEFAULT_TIMEOUT: 12:04:17 sock.settimeout(timeout) 12:04:17 if source_address: 12:04:17 sock.bind(source_address) 12:04:17 > sock.connect(sa) 12:04:17 E ConnectionRefusedError: [Errno 111] Connection refused 12:04:17 12:04:17 ../.tox/tests221/lib/python3.11/site-packages/urllib3/util/connection.py:73: ConnectionRefusedError 12:04:17 12:04:17 The above exception was the direct cause of the following exception: 12:04:17 12:04:17 self = 12:04:17 method = 'GET' 12:04:17 url = '/rests/data/transportpce-portmapping:network/nodes=ROADM-A1/mapping=DEG1-TTP-TXRX' 12:04:17 body = None 12:04:17 headers = {'User-Agent': 'python-requests/2.32.4', 'Accept-Encoding': 'gzip, deflate', 'Accept': 'application/json', 'Connection': 'keep-alive', 'Content-Type': 'application/json', 'Authorization': 'Basic YWRtaW46YWRtaW4='} 12:04:17 retries = Retry(total=0, connect=None, read=False, redirect=None, status=None) 12:04:17 redirect = False, assert_same_host = False 12:04:17 timeout = Timeout(connect=30, read=30, total=None), pool_timeout = None 12:04:17 release_conn = False, chunked = False, body_pos = None, preload_content = False 12:04:17 decode_content = False, response_kw = {} 12:04:17 parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/rests/data/transportpce-portmapping:network/nodes=ROADM-A1/mapping=DEG1-TTP-TXRX', query=None, fragment=None) 12:04:17 destination_scheme = None, conn = None, release_this_conn = True 12:04:17 http_tunnel_required = False, err = None, clean_exit = False 12:04:17 12:04:17 def urlopen( # type: ignore[override] 12:04:17 self, 12:04:17 method: str, 12:04:17 url: str, 12:04:17 body: _TYPE_BODY | None = None, 12:04:17 headers: typing.Mapping[str, str] | None = None, 12:04:17 retries: Retry | bool | int | None = None, 12:04:17 redirect: bool = True, 12:04:17 assert_same_host: bool = True, 12:04:17 timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 12:04:17 pool_timeout: int | None = None, 12:04:17 release_conn: bool | None = None, 12:04:17 chunked: bool = False, 12:04:17 body_pos: _TYPE_BODY_POSITION | None = None, 12:04:17 preload_content: bool = True, 12:04:17 decode_content: bool = True, 12:04:17 **response_kw: typing.Any, 12:04:17 ) -> BaseHTTPResponse: 12:04:17 """ 12:04:17 Get a connection from the pool and perform an HTTP request. This is the 12:04:17 lowest level call for making a request, so you'll need to specify all 12:04:17 the raw details. 12:04:17 12:04:17 .. note:: 12:04:17 12:04:17 More commonly, it's appropriate to use a convenience method 12:04:17 such as :meth:`request`. 12:04:17 12:04:17 .. note:: 12:04:17 12:04:17 `release_conn` will only behave as expected if 12:04:17 `preload_content=False` because we want to make 12:04:17 `preload_content=False` the default behaviour someday soon without 12:04:17 breaking backwards compatibility. 12:04:17 12:04:17 :param method: 12:04:17 HTTP request method (such as GET, POST, PUT, etc.) 12:04:17 12:04:17 :param url: 12:04:17 The URL to perform the request on. 12:04:17 12:04:17 :param body: 12:04:17 Data to send in the request body, either :class:`str`, :class:`bytes`, 12:04:17 an iterable of :class:`str`/:class:`bytes`, or a file-like object. 12:04:17 12:04:17 :param headers: 12:04:17 Dictionary of custom headers to send, such as User-Agent, 12:04:17 If-None-Match, etc. If None, pool headers are used. If provided, 12:04:17 these headers completely replace any pool-specific headers. 
12:04:17 12:04:17 :param retries: 12:04:17 Configure the number of retries to allow before raising a 12:04:17 :class:`~urllib3.exceptions.MaxRetryError` exception. 12:04:17 12:04:17 If ``None`` (default) will retry 3 times, see ``Retry.DEFAULT``. Pass a 12:04:17 :class:`~urllib3.util.retry.Retry` object for fine-grained control 12:04:17 over different types of retries. 12:04:17 Pass an integer number to retry connection errors that many times, 12:04:17 but no other types of errors. Pass zero to never retry. 12:04:17 12:04:17 If ``False``, then retries are disabled and any exception is raised 12:04:17 immediately. Also, instead of raising a MaxRetryError on redirects, 12:04:17 the redirect response will be returned. 12:04:17 12:04:17 :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. 12:04:17 12:04:17 :param redirect: 12:04:17 If True, automatically handle redirects (status codes 301, 302, 12:04:17 303, 307, 308). Each redirect counts as a retry. Disabling retries 12:04:17 will disable redirect, too. 12:04:17 12:04:17 :param assert_same_host: 12:04:17 If ``True``, will make sure that the host of the pool requests is 12:04:17 consistent else will raise HostChangedError. When ``False``, you can 12:04:17 use the pool on an HTTP proxy and request foreign hosts. 12:04:17 12:04:17 :param timeout: 12:04:17 If specified, overrides the default timeout for this one 12:04:17 request. It may be a float (in seconds) or an instance of 12:04:17 :class:`urllib3.util.Timeout`. 12:04:17 12:04:17 :param pool_timeout: 12:04:17 If set and the pool is set to block=True, then this method will 12:04:17 block for ``pool_timeout`` seconds and raise EmptyPoolError if no 12:04:17 connection is available within the time period. 12:04:17 12:04:17 :param bool preload_content: 12:04:17 If True, the response's body will be preloaded into memory. 12:04:17 12:04:17 :param bool decode_content: 12:04:17 If True, will attempt to decode the body based on the 12:04:17 'content-encoding' header. 12:04:17 12:04:17 :param release_conn: 12:04:17 If False, then the urlopen call will not release the connection 12:04:17 back into the pool once a response is received (but will release if 12:04:17 you read the entire contents of the response such as when 12:04:17 `preload_content=True`). This is useful if you're not preloading 12:04:17 the response's content immediately. You will need to call 12:04:17 ``r.release_conn()`` on the response ``r`` to return the connection 12:04:17 back into the pool. If None, it takes the value of ``preload_content`` 12:04:17 which defaults to ``True``. 12:04:17 12:04:17 :param bool chunked: 12:04:17 If True, urllib3 will send the body using chunked transfer 12:04:17 encoding. Otherwise, urllib3 will send the body using the standard 12:04:17 content-length form. Defaults to False. 12:04:17 12:04:17 :param int body_pos: 12:04:17 Position to seek to in file-like body in the event of a retry or 12:04:17 redirect. Typically this won't need to be set because urllib3 will 12:04:17 auto-populate the value when needed. 
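For reference, the low-level call the adapter issues can be reproduced directly against the pool with the argument values captured in the locals above; against a controller that is not listening it raises the same MaxRetryError wrapping a NewConnectionError (sketch only, values taken from this traceback):

import urllib3
from urllib3.util.retry import Retry

pool = urllib3.HTTPConnectionPool("localhost", 8183)
response = pool.urlopen(
    method="GET",
    url="/rests/data/transportpce-portmapping:network/nodes=ROADM-A1/mapping=DEG1-TTP-TXRX",
    redirect=False,
    assert_same_host=False,
    preload_content=False,
    decode_content=False,
    retries=Retry(total=0, connect=None, read=False, redirect=None, status=None),
    timeout=urllib3.Timeout(connect=30, read=30),
)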
12:04:17 """ 12:04:17 parsed_url = parse_url(url) 12:04:17 destination_scheme = parsed_url.scheme 12:04:17 12:04:17 if headers is None: 12:04:17 headers = self.headers 12:04:17 12:04:17 if not isinstance(retries, Retry): 12:04:17 retries = Retry.from_int(retries, redirect=redirect, default=self.retries) 12:04:17 12:04:17 if release_conn is None: 12:04:17 release_conn = preload_content 12:04:17 12:04:17 # Check host 12:04:17 if assert_same_host and not self.is_same_host(url): 12:04:17 raise HostChangedError(self, url, retries) 12:04:17 12:04:17 # Ensure that the URL we're connecting to is properly encoded 12:04:17 if url.startswith("/"): 12:04:17 url = to_str(_encode_target(url)) 12:04:17 else: 12:04:17 url = to_str(parsed_url.url) 12:04:17 12:04:17 conn = None 12:04:17 12:04:17 # Track whether `conn` needs to be released before 12:04:17 # returning/raising/recursing. Update this variable if necessary, and 12:04:17 # leave `release_conn` constant throughout the function. That way, if 12:04:17 # the function recurses, the original value of `release_conn` will be 12:04:17 # passed down into the recursive call, and its value will be respected. 12:04:17 # 12:04:17 # See issue #651 [1] for details. 12:04:17 # 12:04:17 # [1] 12:04:17 release_this_conn = release_conn 12:04:17 12:04:17 http_tunnel_required = connection_requires_http_tunnel( 12:04:17 self.proxy, self.proxy_config, destination_scheme 12:04:17 ) 12:04:17 12:04:17 # Merge the proxy headers. Only done when not using HTTP CONNECT. We 12:04:17 # have to copy the headers dict so we can safely change it without those 12:04:17 # changes being reflected in anyone else's copy. 12:04:17 if not http_tunnel_required: 12:04:17 headers = headers.copy() # type: ignore[attr-defined] 12:04:17 headers.update(self.proxy_headers) # type: ignore[union-attr] 12:04:17 12:04:17 # Must keep the exception bound to a separate variable or else Python 3 12:04:17 # complains about UnboundLocalError. 12:04:17 err = None 12:04:17 12:04:17 # Keep track of whether we cleanly exited the except block. This 12:04:17 # ensures we do proper cleanup in finally. 12:04:17 clean_exit = False 12:04:17 12:04:17 # Rewind body position, if needed. Record current position 12:04:17 # for future rewinds in the event of a redirect/retry. 12:04:17 body_pos = set_file_position(body, body_pos) 12:04:17 12:04:17 try: 12:04:17 # Request a connection from the queue. 12:04:17 timeout_obj = self._get_timeout(timeout) 12:04:17 conn = self._get_conn(timeout=pool_timeout) 12:04:17 12:04:17 conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] 12:04:17 12:04:17 # Is this a closed/new connection that requires CONNECT tunnelling? 12:04:17 if self.proxy is not None and http_tunnel_required and conn.is_closed: 12:04:17 try: 12:04:17 self._prepare_proxy(conn) 12:04:17 except (BaseSSLError, OSError, SocketTimeout) as e: 12:04:17 self._raise_timeout( 12:04:17 err=e, url=self.proxy.url, timeout_value=conn.timeout 12:04:17 ) 12:04:17 raise 12:04:17 12:04:17 # If we're going to release the connection in ``finally:``, then 12:04:17 # the response doesn't need to know about the connection. Otherwise 12:04:17 # it will also try to release it and we'll have a double-release 12:04:17 # mess. 
12:04:17 response_conn = conn if not release_conn else None 12:04:17 12:04:17 # Make the request on the HTTPConnection object 12:04:17 > response = self._make_request( 12:04:17 conn, 12:04:17 method, 12:04:17 url, 12:04:17 timeout=timeout_obj, 12:04:17 body=body, 12:04:17 headers=headers, 12:04:17 chunked=chunked, 12:04:17 retries=retries, 12:04:17 response_conn=response_conn, 12:04:17 preload_content=preload_content, 12:04:17 decode_content=decode_content, 12:04:17 **response_kw, 12:04:17 ) 12:04:17 12:04:17 ../.tox/tests221/lib/python3.11/site-packages/urllib3/connectionpool.py:787: 12:04:17 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 12:04:17 ../.tox/tests221/lib/python3.11/site-packages/urllib3/connectionpool.py:493: in _make_request 12:04:17 conn.request( 12:04:17 ../.tox/tests221/lib/python3.11/site-packages/urllib3/connection.py:494: in request 12:04:17 self.endheaders() 12:04:17 /opt/pyenv/versions/3.11.10/lib/python3.11/http/client.py:1298: in endheaders 12:04:17 self._send_output(message_body, encode_chunked=encode_chunked) 12:04:17 /opt/pyenv/versions/3.11.10/lib/python3.11/http/client.py:1058: in _send_output 12:04:17 self.send(msg) 12:04:17 /opt/pyenv/versions/3.11.10/lib/python3.11/http/client.py:996: in send 12:04:17 self.connect() 12:04:17 ../.tox/tests221/lib/python3.11/site-packages/urllib3/connection.py:325: in connect 12:04:17 self.sock = self._new_conn() 12:04:17 ^^^^^^^^^^^^^^^^ 12:04:17 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 12:04:17 12:04:17 self = 12:04:17 12:04:17 def _new_conn(self) -> socket.socket: 12:04:17 """Establish a socket connection and set nodelay settings on it. 12:04:17 12:04:17 :return: New socket connection. 12:04:17 """ 12:04:17 try: 12:04:17 sock = connection.create_connection( 12:04:17 (self._dns_host, self.port), 12:04:17 self.timeout, 12:04:17 source_address=self.source_address, 12:04:17 socket_options=self.socket_options, 12:04:17 ) 12:04:17 except socket.gaierror as e: 12:04:17 raise NameResolutionError(self.host, self, e) from e 12:04:17 except SocketTimeout as e: 12:04:17 raise ConnectTimeoutError( 12:04:17 self, 12:04:17 f"Connection to {self.host} timed out. (connect timeout={self.timeout})", 12:04:17 ) from e 12:04:17 12:04:17 except OSError as e: 12:04:17 > raise NewConnectionError( 12:04:17 self, f"Failed to establish a new connection: {e}" 12:04:17 ) from e 12:04:17 E urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused 12:04:17 12:04:17 ../.tox/tests221/lib/python3.11/site-packages/urllib3/connection.py:213: NewConnectionError 12:04:17 12:04:17 The above exception was the direct cause of the following exception: 12:04:17 12:04:17 self = 12:04:17 request = , stream = False 12:04:17 timeout = Timeout(connect=30, read=30, total=None), verify = True, cert = None 12:04:17 proxies = OrderedDict() 12:04:17 12:04:17 def send( 12:04:17 self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 12:04:17 ): 12:04:17 """Sends PreparedRequest object. Returns Response object. 12:04:17 12:04:17 :param request: The :class:`PreparedRequest ` being sent. 12:04:17 :param stream: (optional) Whether to stream the request content. 12:04:17 :param timeout: (optional) How long to wait for the server to send 12:04:17 data before giving up, as a float, or a :ref:`(connect timeout, 12:04:17 read timeout) ` tuple. 
12:04:17 :type timeout: float or tuple or urllib3 Timeout object 12:04:17 :param verify: (optional) Either a boolean, in which case it controls whether 12:04:17 we verify the server's TLS certificate, or a string, in which case it 12:04:17 must be a path to a CA bundle to use 12:04:17 :param cert: (optional) Any user-provided SSL certificate to be trusted. 12:04:17 :param proxies: (optional) The proxies dictionary to apply to the request. 12:04:17 :rtype: requests.Response 12:04:17 """ 12:04:17 12:04:17 try: 12:04:17 conn = self.get_connection_with_tls_context( 12:04:17 request, verify, proxies=proxies, cert=cert 12:04:17 ) 12:04:17 except LocationValueError as e: 12:04:17 raise InvalidURL(e, request=request) 12:04:17 12:04:17 self.cert_verify(conn, request.url, verify, cert) 12:04:17 url = self.request_url(request, proxies) 12:04:17 self.add_headers( 12:04:17 request, 12:04:17 stream=stream, 12:04:17 timeout=timeout, 12:04:17 verify=verify, 12:04:17 cert=cert, 12:04:17 proxies=proxies, 12:04:17 ) 12:04:17 12:04:17 chunked = not (request.body is None or "Content-Length" in request.headers) 12:04:17 12:04:17 if isinstance(timeout, tuple): 12:04:17 try: 12:04:17 connect, read = timeout 12:04:17 timeout = TimeoutSauce(connect=connect, read=read) 12:04:17 except ValueError: 12:04:17 raise ValueError( 12:04:17 f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 12:04:17 f"or a single float to set both timeouts to the same value." 12:04:17 ) 12:04:17 elif isinstance(timeout, TimeoutSauce): 12:04:17 pass 12:04:17 else: 12:04:17 timeout = TimeoutSauce(connect=timeout, read=timeout) 12:04:17 12:04:17 try: 12:04:17 > resp = conn.urlopen( 12:04:17 method=request.method, 12:04:17 url=url, 12:04:17 body=request.body, 12:04:17 headers=request.headers, 12:04:17 redirect=False, 12:04:17 assert_same_host=False, 12:04:17 preload_content=False, 12:04:17 decode_content=False, 12:04:17 retries=self.max_retries, 12:04:17 timeout=timeout, 12:04:17 chunked=chunked, 12:04:17 ) 12:04:17 12:04:17 ../.tox/tests221/lib/python3.11/site-packages/requests/adapters.py:667: 12:04:17 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 12:04:17 ../.tox/tests221/lib/python3.11/site-packages/urllib3/connectionpool.py:841: in urlopen 12:04:17 retries = retries.increment( 12:04:17 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 12:04:17 12:04:17 self = Retry(total=0, connect=None, read=False, redirect=None, status=None) 12:04:17 method = 'GET' 12:04:17 url = '/rests/data/transportpce-portmapping:network/nodes=ROADM-A1/mapping=DEG1-TTP-TXRX' 12:04:17 response = None 12:04:17 error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused') 12:04:17 _pool = 12:04:17 _stacktrace = 12:04:17 12:04:17 def increment( 12:04:17 self, 12:04:17 method: str | None = None, 12:04:17 url: str | None = None, 12:04:17 response: BaseHTTPResponse | None = None, 12:04:17 error: Exception | None = None, 12:04:17 _pool: ConnectionPool | None = None, 12:04:17 _stacktrace: TracebackType | None = None, 12:04:17 ) -> Self: 12:04:17 """Return a new Retry object with incremented retry counters. 12:04:17 12:04:17 :param response: A response object, or None, if the server did not 12:04:17 return a response. 12:04:17 :type response: :class:`~urllib3.response.BaseHTTPResponse` 12:04:17 :param Exception error: An error encountered during the request, or 12:04:17 None if the response was received successfully. 
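The full wrapping chain visible in these tracebacks is ConnectionRefusedError, then urllib3 NewConnectionError, then urllib3 MaxRetryError, then requests.exceptions.ConnectionError, which is the exception each test method finally sees. A hypothetical guard around the request helper (illustrative only, not the current test_utils code) could turn that into a clearer failure message:

import requests

def get_with_diagnosis(url: str) -> requests.Response:
    try:
        return requests.get(url, auth=("admin", "admin"), timeout=(30, 30))
    except requests.exceptions.ConnectionError as exc:
        # Reached for every test in this run: the controller is not listening on port 8183.
        raise AssertionError(
            f"RESTCONF endpoint unreachable: {exc}. Is the controller listening on port 8183?"
        ) from exc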
12:04:17 12:04:17 :return: A new ``Retry`` object. 12:04:17 """ 12:04:17 if self.total is False and error: 12:04:17 # Disabled, indicate to re-raise the error. 12:04:17 raise reraise(type(error), error, _stacktrace) 12:04:17 12:04:17 total = self.total 12:04:17 if total is not None: 12:04:17 total -= 1 12:04:17 12:04:17 connect = self.connect 12:04:17 read = self.read 12:04:17 redirect = self.redirect 12:04:17 status_count = self.status 12:04:17 other = self.other 12:04:17 cause = "unknown" 12:04:17 status = None 12:04:17 redirect_location = None 12:04:17 12:04:17 if error and self._is_connection_error(error): 12:04:17 # Connect retry? 12:04:17 if connect is False: 12:04:17 raise reraise(type(error), error, _stacktrace) 12:04:17 elif connect is not None: 12:04:17 connect -= 1 12:04:17 12:04:17 elif error and self._is_read_error(error): 12:04:17 # Read retry? 12:04:17 if read is False or method is None or not self._is_method_retryable(method): 12:04:17 raise reraise(type(error), error, _stacktrace) 12:04:17 elif read is not None: 12:04:17 read -= 1 12:04:17 12:04:17 elif error: 12:04:17 # Other retry? 12:04:17 if other is not None: 12:04:17 other -= 1 12:04:17 12:04:17 elif response and response.get_redirect_location(): 12:04:17 # Redirect retry? 12:04:17 if redirect is not None: 12:04:17 redirect -= 1 12:04:17 cause = "too many redirects" 12:04:17 response_redirect_location = response.get_redirect_location() 12:04:17 if response_redirect_location: 12:04:17 redirect_location = response_redirect_location 12:04:17 status = response.status 12:04:17 12:04:17 else: 12:04:17 # Incrementing because of a server error like a 500 in 12:04:17 # status_forcelist and the given method is in the allowed_methods 12:04:17 cause = ResponseError.GENERIC_ERROR 12:04:17 if response and response.status: 12:04:17 if status_count is not None: 12:04:17 status_count -= 1 12:04:17 cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status) 12:04:17 status = response.status 12:04:17 12:04:17 history = self.history + ( 12:04:17 RequestHistory(method, url, error, status, redirect_location), 12:04:17 ) 12:04:17 12:04:17 new_retry = self.new( 12:04:17 total=total, 12:04:17 connect=connect, 12:04:17 read=read, 12:04:17 redirect=redirect, 12:04:17 status=status_count, 12:04:17 other=other, 12:04:17 history=history, 12:04:17 ) 12:04:17 12:04:17 if new_retry.is_exhausted(): 12:04:17 reason = error or ResponseError(cause) 12:04:17 > raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type] 12:04:17 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 12:04:17 E urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=8183): Max retries exceeded with url: /rests/data/transportpce-portmapping:network/nodes=ROADM-A1/mapping=DEG1-TTP-TXRX (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 12:04:17 12:04:17 ../.tox/tests221/lib/python3.11/site-packages/urllib3/util/retry.py:519: MaxRetryError 12:04:17 12:04:17 During handling of the above exception, another exception occurred: 12:04:17 12:04:17 self = 12:04:17 12:04:17 def test_04_rdm_portmapping_DEG1_TTP_TXRX(self): 12:04:17 > response = test_utils.get_portmapping_node_attr("ROADM-A1", "mapping", "DEG1-TTP-TXRX") 12:04:17 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 12:04:17 12:04:17 transportpce_tests/2.2.1/test01_portmapping.py:71: 12:04:17 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 12:04:17 
transportpce_tests/common/test_utils.py:492: in get_portmapping_node_attr 12:04:17 response = get_request(target_url) 12:04:17 ^^^^^^^^^^^^^^^^^^^^^^^ 12:04:17 transportpce_tests/common/test_utils.py:116: in get_request 12:04:17 return requests.request( 12:04:17 ../.tox/tests221/lib/python3.11/site-packages/requests/api.py:59: in request 12:04:17 return session.request(method=method, url=url, **kwargs) 12:04:17 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 12:04:17 ../.tox/tests221/lib/python3.11/site-packages/requests/sessions.py:589: in request 12:04:17 resp = self.send(prep, **send_kwargs) 12:04:17 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 12:04:17 ../.tox/tests221/lib/python3.11/site-packages/requests/sessions.py:703: in send 12:04:17 r = adapter.send(request, **kwargs) 12:04:17 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 12:04:17 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 12:04:17 12:04:17 self = 12:04:17 request = , stream = False 12:04:17 timeout = Timeout(connect=30, read=30, total=None), verify = True, cert = None 12:04:17 proxies = OrderedDict() 12:04:17 12:04:17 def send( 12:04:17 self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 12:04:17 ): 12:04:17 """Sends PreparedRequest object. Returns Response object. 12:04:17 12:04:17 :param request: The :class:`PreparedRequest ` being sent. 12:04:17 :param stream: (optional) Whether to stream the request content. 12:04:17 :param timeout: (optional) How long to wait for the server to send 12:04:17 data before giving up, as a float, or a :ref:`(connect timeout, 12:04:17 read timeout) ` tuple. 12:04:17 :type timeout: float or tuple or urllib3 Timeout object 12:04:17 :param verify: (optional) Either a boolean, in which case it controls whether 12:04:17 we verify the server's TLS certificate, or a string, in which case it 12:04:17 must be a path to a CA bundle to use 12:04:17 :param cert: (optional) Any user-provided SSL certificate to be trusted. 12:04:17 :param proxies: (optional) The proxies dictionary to apply to the request. 12:04:17 :rtype: requests.Response 12:04:17 """ 12:04:17 12:04:17 try: 12:04:17 conn = self.get_connection_with_tls_context( 12:04:17 request, verify, proxies=proxies, cert=cert 12:04:17 ) 12:04:17 except LocationValueError as e: 12:04:17 raise InvalidURL(e, request=request) 12:04:17 12:04:17 self.cert_verify(conn, request.url, verify, cert) 12:04:17 url = self.request_url(request, proxies) 12:04:17 self.add_headers( 12:04:17 request, 12:04:17 stream=stream, 12:04:17 timeout=timeout, 12:04:17 verify=verify, 12:04:17 cert=cert, 12:04:17 proxies=proxies, 12:04:17 ) 12:04:17 12:04:17 chunked = not (request.body is None or "Content-Length" in request.headers) 12:04:17 12:04:17 if isinstance(timeout, tuple): 12:04:17 try: 12:04:17 connect, read = timeout 12:04:17 timeout = TimeoutSauce(connect=connect, read=read) 12:04:17 except ValueError: 12:04:17 raise ValueError( 12:04:17 f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 12:04:17 f"or a single float to set both timeouts to the same value." 
12:04:17 ) 12:04:17 elif isinstance(timeout, TimeoutSauce): 12:04:17 pass 12:04:17 else: 12:04:17 timeout = TimeoutSauce(connect=timeout, read=timeout) 12:04:17 12:04:17 try: 12:04:17 resp = conn.urlopen( 12:04:17 method=request.method, 12:04:17 url=url, 12:04:17 body=request.body, 12:04:17 headers=request.headers, 12:04:17 redirect=False, 12:04:17 assert_same_host=False, 12:04:17 preload_content=False, 12:04:17 decode_content=False, 12:04:17 retries=self.max_retries, 12:04:17 timeout=timeout, 12:04:17 chunked=chunked, 12:04:17 ) 12:04:17 12:04:17 except (ProtocolError, OSError) as err: 12:04:17 raise ConnectionError(err, request=request) 12:04:17 12:04:17 except MaxRetryError as e: 12:04:17 if isinstance(e.reason, ConnectTimeoutError): 12:04:17 # TODO: Remove this in 3.0.0: see #2811 12:04:17 if not isinstance(e.reason, NewConnectionError): 12:04:17 raise ConnectTimeout(e, request=request) 12:04:17 12:04:17 if isinstance(e.reason, ResponseError): 12:04:17 raise RetryError(e, request=request) 12:04:17 12:04:17 if isinstance(e.reason, _ProxyError): 12:04:17 raise ProxyError(e, request=request) 12:04:17 12:04:17 if isinstance(e.reason, _SSLError): 12:04:17 # This branch is for urllib3 v1.22 and later. 12:04:17 raise SSLError(e, request=request) 12:04:17 12:04:17 > raise ConnectionError(e, request=request) 12:04:17 E requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=8183): Max retries exceeded with url: /rests/data/transportpce-portmapping:network/nodes=ROADM-A1/mapping=DEG1-TTP-TXRX (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 12:04:17 12:04:17 ../.tox/tests221/lib/python3.11/site-packages/requests/adapters.py:700: ConnectionError 12:04:17 ----------------------------- Captured stdout call ----------------------------- 12:04:17 execution of test_04_rdm_portmapping_DEG1_TTP_TXRX 12:04:17 _ TransportPCEPortMappingTesting.test_05_rdm_portmapping_DEG2_TTP_TXRX_with_ots_oms _ 12:04:17 12:04:17 self = 12:04:17 12:04:17 def _new_conn(self) -> socket.socket: 12:04:17 """Establish a socket connection and set nodelay settings on it. 12:04:17 12:04:17 :return: New socket connection. 12:04:17 """ 12:04:17 try: 12:04:17 > sock = connection.create_connection( 12:04:17 (self._dns_host, self.port), 12:04:17 self.timeout, 12:04:17 source_address=self.source_address, 12:04:17 socket_options=self.socket_options, 12:04:17 ) 12:04:17 12:04:17 ../.tox/tests221/lib/python3.11/site-packages/urllib3/connection.py:198: 12:04:17 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 12:04:17 ../.tox/tests221/lib/python3.11/site-packages/urllib3/util/connection.py:85: in create_connection 12:04:17 raise err 12:04:17 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 12:04:17 12:04:17 address = ('localhost', 8183), timeout = 30, source_address = None 12:04:17 socket_options = [(6, 1, 1)] 12:04:17 12:04:17 def create_connection( 12:04:17 address: tuple[str, int], 12:04:17 timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 12:04:17 source_address: tuple[str, int] | None = None, 12:04:17 socket_options: _TYPE_SOCKET_OPTIONS | None = None, 12:04:17 ) -> socket.socket: 12:04:17 """Connect to *address* and return the socket object. 12:04:17 12:04:17 Convenience function. Connect to *address* (a 2-tuple ``(host, 12:04:17 port)``) and return the socket object. Passing the optional 12:04:17 *timeout* parameter will set the timeout on the socket instance 12:04:17 before attempting to connect. 
If no *timeout* is supplied, the 12:04:17 global default timeout setting returned by :func:`socket.getdefaulttimeout` 12:04:17 is used. If *source_address* is set it must be a tuple of (host, port) 12:04:17 for the socket to bind as a source address before making the connection. 12:04:17 An host of '' or port 0 tells the OS to use the default. 12:04:17 """ 12:04:17 12:04:17 host, port = address 12:04:17 if host.startswith("["): 12:04:17 host = host.strip("[]") 12:04:17 err = None 12:04:17 12:04:17 # Using the value from allowed_gai_family() in the context of getaddrinfo lets 12:04:17 # us select whether to work with IPv4 DNS records, IPv6 records, or both. 12:04:17 # The original create_connection function always returns all records. 12:04:17 family = allowed_gai_family() 12:04:17 12:04:17 try: 12:04:17 host.encode("idna") 12:04:17 except UnicodeError: 12:04:17 raise LocationParseError(f"'{host}', label empty or too long") from None 12:04:17 12:04:17 for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM): 12:04:17 af, socktype, proto, canonname, sa = res 12:04:17 sock = None 12:04:17 try: 12:04:17 sock = socket.socket(af, socktype, proto) 12:04:17 12:04:17 # If provided, set socket level options before connecting. 12:04:17 _set_socket_options(sock, socket_options) 12:04:17 12:04:17 if timeout is not _DEFAULT_TIMEOUT: 12:04:17 sock.settimeout(timeout) 12:04:17 if source_address: 12:04:17 sock.bind(source_address) 12:04:17 > sock.connect(sa) 12:04:17 E ConnectionRefusedError: [Errno 111] Connection refused 12:04:17 12:04:17 ../.tox/tests221/lib/python3.11/site-packages/urllib3/util/connection.py:73: ConnectionRefusedError 12:04:17 12:04:17 The above exception was the direct cause of the following exception: 12:04:17 12:04:17 self = 12:04:17 method = 'GET' 12:04:17 url = '/rests/data/transportpce-portmapping:network/nodes=ROADM-A1/mapping=DEG2-TTP-TXRX' 12:04:17 body = None 12:04:17 headers = {'User-Agent': 'python-requests/2.32.4', 'Accept-Encoding': 'gzip, deflate', 'Accept': 'application/json', 'Connection': 'keep-alive', 'Content-Type': 'application/json', 'Authorization': 'Basic YWRtaW46YWRtaW4='} 12:04:17 retries = Retry(total=0, connect=None, read=False, redirect=None, status=None) 12:04:17 redirect = False, assert_same_host = False 12:04:17 timeout = Timeout(connect=30, read=30, total=None), pool_timeout = None 12:04:17 release_conn = False, chunked = False, body_pos = None, preload_content = False 12:04:17 decode_content = False, response_kw = {} 12:04:17 parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/rests/data/transportpce-portmapping:network/nodes=ROADM-A1/mapping=DEG2-TTP-TXRX', query=None, fragment=None) 12:04:17 destination_scheme = None, conn = None, release_this_conn = True 12:04:17 http_tunnel_required = False, err = None, clean_exit = False 12:04:17 12:04:17 def urlopen( # type: ignore[override] 12:04:17 self, 12:04:17 method: str, 12:04:17 url: str, 12:04:17 body: _TYPE_BODY | None = None, 12:04:17 headers: typing.Mapping[str, str] | None = None, 12:04:17 retries: Retry | bool | int | None = None, 12:04:17 redirect: bool = True, 12:04:17 assert_same_host: bool = True, 12:04:17 timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 12:04:17 pool_timeout: int | None = None, 12:04:17 release_conn: bool | None = None, 12:04:17 chunked: bool = False, 12:04:17 body_pos: _TYPE_BODY_POSITION | None = None, 12:04:17 preload_content: bool = True, 12:04:17 decode_content: bool = True, 12:04:17 **response_kw: typing.Any, 12:04:17 ) -> BaseHTTPResponse: 
12:04:17 """ 12:04:17 Get a connection from the pool and perform an HTTP request. This is the 12:04:17 lowest level call for making a request, so you'll need to specify all 12:04:17 the raw details. 12:04:17 12:04:17 .. note:: 12:04:17 12:04:17 More commonly, it's appropriate to use a convenience method 12:04:17 such as :meth:`request`. 12:04:17 12:04:17 .. note:: 12:04:17 12:04:17 `release_conn` will only behave as expected if 12:04:17 `preload_content=False` because we want to make 12:04:17 `preload_content=False` the default behaviour someday soon without 12:04:17 breaking backwards compatibility. 12:04:17 12:04:17 :param method: 12:04:17 HTTP request method (such as GET, POST, PUT, etc.) 12:04:17 12:04:17 :param url: 12:04:17 The URL to perform the request on. 12:04:17 12:04:17 :param body: 12:04:17 Data to send in the request body, either :class:`str`, :class:`bytes`, 12:04:17 an iterable of :class:`str`/:class:`bytes`, or a file-like object. 12:04:17 12:04:17 :param headers: 12:04:17 Dictionary of custom headers to send, such as User-Agent, 12:04:17 If-None-Match, etc. If None, pool headers are used. If provided, 12:04:17 these headers completely replace any pool-specific headers. 12:04:17 12:04:17 :param retries: 12:04:17 Configure the number of retries to allow before raising a 12:04:17 :class:`~urllib3.exceptions.MaxRetryError` exception. 12:04:17 12:04:17 If ``None`` (default) will retry 3 times, see ``Retry.DEFAULT``. Pass a 12:04:17 :class:`~urllib3.util.retry.Retry` object for fine-grained control 12:04:17 over different types of retries. 12:04:17 Pass an integer number to retry connection errors that many times, 12:04:17 but no other types of errors. Pass zero to never retry. 12:04:17 12:04:17 If ``False``, then retries are disabled and any exception is raised 12:04:17 immediately. Also, instead of raising a MaxRetryError on redirects, 12:04:17 the redirect response will be returned. 12:04:17 12:04:17 :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. 12:04:17 12:04:17 :param redirect: 12:04:17 If True, automatically handle redirects (status codes 301, 302, 12:04:17 303, 307, 308). Each redirect counts as a retry. Disabling retries 12:04:17 will disable redirect, too. 12:04:17 12:04:17 :param assert_same_host: 12:04:17 If ``True``, will make sure that the host of the pool requests is 12:04:17 consistent else will raise HostChangedError. When ``False``, you can 12:04:17 use the pool on an HTTP proxy and request foreign hosts. 12:04:17 12:04:17 :param timeout: 12:04:17 If specified, overrides the default timeout for this one 12:04:17 request. It may be a float (in seconds) or an instance of 12:04:17 :class:`urllib3.util.Timeout`. 12:04:17 12:04:17 :param pool_timeout: 12:04:17 If set and the pool is set to block=True, then this method will 12:04:17 block for ``pool_timeout`` seconds and raise EmptyPoolError if no 12:04:17 connection is available within the time period. 12:04:17 12:04:17 :param bool preload_content: 12:04:17 If True, the response's body will be preloaded into memory. 12:04:17 12:04:17 :param bool decode_content: 12:04:17 If True, will attempt to decode the body based on the 12:04:17 'content-encoding' header. 12:04:17 12:04:17 :param release_conn: 12:04:17 If False, then the urlopen call will not release the connection 12:04:17 back into the pool once a response is received (but will release if 12:04:17 you read the entire contents of the response such as when 12:04:17 `preload_content=True`). 
This is useful if you're not preloading 12:04:17 the response's content immediately. You will need to call 12:04:17 ``r.release_conn()`` on the response ``r`` to return the connection 12:04:17 back into the pool. If None, it takes the value of ``preload_content`` 12:04:17 which defaults to ``True``. 12:04:17 12:04:17 :param bool chunked: 12:04:17 If True, urllib3 will send the body using chunked transfer 12:04:17 encoding. Otherwise, urllib3 will send the body using the standard 12:04:17 content-length form. Defaults to False. 12:04:17 12:04:17 :param int body_pos: 12:04:17 Position to seek to in file-like body in the event of a retry or 12:04:17 redirect. Typically this won't need to be set because urllib3 will 12:04:17 auto-populate the value when needed. 12:04:17 """ 12:04:17 parsed_url = parse_url(url) 12:04:17 destination_scheme = parsed_url.scheme 12:04:17 12:04:17 if headers is None: 12:04:17 headers = self.headers 12:04:17 12:04:17 if not isinstance(retries, Retry): 12:04:17 retries = Retry.from_int(retries, redirect=redirect, default=self.retries) 12:04:17 12:04:17 if release_conn is None: 12:04:17 release_conn = preload_content 12:04:17 12:04:17 # Check host 12:04:17 if assert_same_host and not self.is_same_host(url): 12:04:17 raise HostChangedError(self, url, retries) 12:04:17 12:04:17 # Ensure that the URL we're connecting to is properly encoded 12:04:17 if url.startswith("/"): 12:04:17 url = to_str(_encode_target(url)) 12:04:17 else: 12:04:17 url = to_str(parsed_url.url) 12:04:17 12:04:17 conn = None 12:04:17 12:04:17 # Track whether `conn` needs to be released before 12:04:17 # returning/raising/recursing. Update this variable if necessary, and 12:04:17 # leave `release_conn` constant throughout the function. That way, if 12:04:17 # the function recurses, the original value of `release_conn` will be 12:04:17 # passed down into the recursive call, and its value will be respected. 12:04:17 # 12:04:17 # See issue #651 [1] for details. 12:04:17 # 12:04:17 # [1] 12:04:17 release_this_conn = release_conn 12:04:17 12:04:17 http_tunnel_required = connection_requires_http_tunnel( 12:04:17 self.proxy, self.proxy_config, destination_scheme 12:04:17 ) 12:04:17 12:04:17 # Merge the proxy headers. Only done when not using HTTP CONNECT. We 12:04:17 # have to copy the headers dict so we can safely change it without those 12:04:17 # changes being reflected in anyone else's copy. 12:04:17 if not http_tunnel_required: 12:04:17 headers = headers.copy() # type: ignore[attr-defined] 12:04:17 headers.update(self.proxy_headers) # type: ignore[union-attr] 12:04:17 12:04:17 # Must keep the exception bound to a separate variable or else Python 3 12:04:17 # complains about UnboundLocalError. 12:04:17 err = None 12:04:17 12:04:17 # Keep track of whether we cleanly exited the except block. This 12:04:17 # ensures we do proper cleanup in finally. 12:04:17 clean_exit = False 12:04:17 12:04:17 # Rewind body position, if needed. Record current position 12:04:17 # for future rewinds in the event of a redirect/retry. 12:04:17 body_pos = set_file_position(body, body_pos) 12:04:17 12:04:17 try: 12:04:17 # Request a connection from the queue. 12:04:17 timeout_obj = self._get_timeout(timeout) 12:04:17 conn = self._get_conn(timeout=pool_timeout) 12:04:17 12:04:17 conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] 12:04:17 12:04:17 # Is this a closed/new connection that requires CONNECT tunnelling? 
12:04:17 if self.proxy is not None and http_tunnel_required and conn.is_closed: 12:04:17 try: 12:04:17 self._prepare_proxy(conn) 12:04:17 except (BaseSSLError, OSError, SocketTimeout) as e: 12:04:17 self._raise_timeout( 12:04:17 err=e, url=self.proxy.url, timeout_value=conn.timeout 12:04:17 ) 12:04:17 raise 12:04:17 12:04:17 # If we're going to release the connection in ``finally:``, then 12:04:17 # the response doesn't need to know about the connection. Otherwise 12:04:17 # it will also try to release it and we'll have a double-release 12:04:17 # mess. 12:04:17 response_conn = conn if not release_conn else None 12:04:17 12:04:17 # Make the request on the HTTPConnection object 12:04:17 > response = self._make_request( 12:04:17 conn, 12:04:17 method, 12:04:17 url, 12:04:17 timeout=timeout_obj, 12:04:17 body=body, 12:04:17 headers=headers, 12:04:17 chunked=chunked, 12:04:17 retries=retries, 12:04:17 response_conn=response_conn, 12:04:17 preload_content=preload_content, 12:04:17 decode_content=decode_content, 12:04:17 **response_kw, 12:04:17 ) 12:04:17 12:04:17 ../.tox/tests221/lib/python3.11/site-packages/urllib3/connectionpool.py:787: 12:04:17 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 12:04:17 ../.tox/tests221/lib/python3.11/site-packages/urllib3/connectionpool.py:493: in _make_request 12:04:17 conn.request( 12:04:17 ../.tox/tests221/lib/python3.11/site-packages/urllib3/connection.py:494: in request 12:04:17 self.endheaders() 12:04:17 /opt/pyenv/versions/3.11.10/lib/python3.11/http/client.py:1298: in endheaders 12:04:17 self._send_output(message_body, encode_chunked=encode_chunked) 12:04:17 /opt/pyenv/versions/3.11.10/lib/python3.11/http/client.py:1058: in _send_output 12:04:17 self.send(msg) 12:04:17 /opt/pyenv/versions/3.11.10/lib/python3.11/http/client.py:996: in send 12:04:17 self.connect() 12:04:17 ../.tox/tests221/lib/python3.11/site-packages/urllib3/connection.py:325: in connect 12:04:17 self.sock = self._new_conn() 12:04:17 ^^^^^^^^^^^^^^^^ 12:04:17 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 12:04:17 12:04:17 self = 12:04:17 12:04:17 def _new_conn(self) -> socket.socket: 12:04:17 """Establish a socket connection and set nodelay settings on it. 12:04:17 12:04:17 :return: New socket connection. 12:04:17 """ 12:04:17 try: 12:04:17 sock = connection.create_connection( 12:04:17 (self._dns_host, self.port), 12:04:17 self.timeout, 12:04:17 source_address=self.source_address, 12:04:17 socket_options=self.socket_options, 12:04:17 ) 12:04:17 except socket.gaierror as e: 12:04:17 raise NameResolutionError(self.host, self, e) from e 12:04:17 except SocketTimeout as e: 12:04:17 raise ConnectTimeoutError( 12:04:17 self, 12:04:17 f"Connection to {self.host} timed out. 
(connect timeout={self.timeout})", 12:04:17 ) from e 12:04:17 12:04:17 except OSError as e: 12:04:17 > raise NewConnectionError( 12:04:17 self, f"Failed to establish a new connection: {e}" 12:04:17 ) from e 12:04:17 E urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused 12:04:17 12:04:17 ../.tox/tests221/lib/python3.11/site-packages/urllib3/connection.py:213: NewConnectionError 12:04:17 12:04:17 The above exception was the direct cause of the following exception: 12:04:17 12:04:17 self = 12:04:17 request = , stream = False 12:04:17 timeout = Timeout(connect=30, read=30, total=None), verify = True, cert = None 12:04:17 proxies = OrderedDict() 12:04:17 12:04:17 def send( 12:04:17 self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 12:04:17 ): 12:04:17 """Sends PreparedRequest object. Returns Response object. 12:04:17 12:04:17 :param request: The :class:`PreparedRequest ` being sent. 12:04:17 :param stream: (optional) Whether to stream the request content. 12:04:17 :param timeout: (optional) How long to wait for the server to send 12:04:17 data before giving up, as a float, or a :ref:`(connect timeout, 12:04:17 read timeout) ` tuple. 12:04:17 :type timeout: float or tuple or urllib3 Timeout object 12:04:17 :param verify: (optional) Either a boolean, in which case it controls whether 12:04:17 we verify the server's TLS certificate, or a string, in which case it 12:04:17 must be a path to a CA bundle to use 12:04:17 :param cert: (optional) Any user-provided SSL certificate to be trusted. 12:04:17 :param proxies: (optional) The proxies dictionary to apply to the request. 12:04:17 :rtype: requests.Response 12:04:17 """ 12:04:17 12:04:17 try: 12:04:17 conn = self.get_connection_with_tls_context( 12:04:17 request, verify, proxies=proxies, cert=cert 12:04:17 ) 12:04:17 except LocationValueError as e: 12:04:17 raise InvalidURL(e, request=request) 12:04:17 12:04:17 self.cert_verify(conn, request.url, verify, cert) 12:04:17 url = self.request_url(request, proxies) 12:04:17 self.add_headers( 12:04:17 request, 12:04:17 stream=stream, 12:04:17 timeout=timeout, 12:04:17 verify=verify, 12:04:17 cert=cert, 12:04:17 proxies=proxies, 12:04:17 ) 12:04:17 12:04:17 chunked = not (request.body is None or "Content-Length" in request.headers) 12:04:17 12:04:17 if isinstance(timeout, tuple): 12:04:17 try: 12:04:17 connect, read = timeout 12:04:17 timeout = TimeoutSauce(connect=connect, read=read) 12:04:17 except ValueError: 12:04:17 raise ValueError( 12:04:17 f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 12:04:17 f"or a single float to set both timeouts to the same value." 
12:04:17 ) 12:04:17 elif isinstance(timeout, TimeoutSauce): 12:04:17 pass 12:04:17 else: 12:04:17 timeout = TimeoutSauce(connect=timeout, read=timeout) 12:04:17 12:04:17 try: 12:04:17 > resp = conn.urlopen( 12:04:17 method=request.method, 12:04:17 url=url, 12:04:17 body=request.body, 12:04:17 headers=request.headers, 12:04:17 redirect=False, 12:04:17 assert_same_host=False, 12:04:17 preload_content=False, 12:04:17 decode_content=False, 12:04:17 retries=self.max_retries, 12:04:17 timeout=timeout, 12:04:17 chunked=chunked, 12:04:17 ) 12:04:17 12:04:17 ../.tox/tests221/lib/python3.11/site-packages/requests/adapters.py:667: 12:04:17 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 12:04:17 ../.tox/tests221/lib/python3.11/site-packages/urllib3/connectionpool.py:841: in urlopen 12:04:17 retries = retries.increment( 12:04:17 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 12:04:17 12:04:17 self = Retry(total=0, connect=None, read=False, redirect=None, status=None) 12:04:17 method = 'GET' 12:04:17 url = '/rests/data/transportpce-portmapping:network/nodes=ROADM-A1/mapping=DEG2-TTP-TXRX' 12:04:17 response = None 12:04:17 error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused') 12:04:17 _pool = 12:04:17 _stacktrace = 12:04:17 12:04:17 def increment( 12:04:17 self, 12:04:17 method: str | None = None, 12:04:17 url: str | None = None, 12:04:17 response: BaseHTTPResponse | None = None, 12:04:17 error: Exception | None = None, 12:04:17 _pool: ConnectionPool | None = None, 12:04:17 _stacktrace: TracebackType | None = None, 12:04:17 ) -> Self: 12:04:17 """Return a new Retry object with incremented retry counters. 12:04:17 12:04:17 :param response: A response object, or None, if the server did not 12:04:17 return a response. 12:04:17 :type response: :class:`~urllib3.response.BaseHTTPResponse` 12:04:17 :param Exception error: An error encountered during the request, or 12:04:17 None if the response was received successfully. 12:04:17 12:04:17 :return: A new ``Retry`` object. 12:04:17 """ 12:04:17 if self.total is False and error: 12:04:17 # Disabled, indicate to re-raise the error. 12:04:17 raise reraise(type(error), error, _stacktrace) 12:04:17 12:04:17 total = self.total 12:04:17 if total is not None: 12:04:17 total -= 1 12:04:17 12:04:17 connect = self.connect 12:04:17 read = self.read 12:04:17 redirect = self.redirect 12:04:17 status_count = self.status 12:04:17 other = self.other 12:04:17 cause = "unknown" 12:04:17 status = None 12:04:17 redirect_location = None 12:04:17 12:04:17 if error and self._is_connection_error(error): 12:04:17 # Connect retry? 12:04:17 if connect is False: 12:04:17 raise reraise(type(error), error, _stacktrace) 12:04:17 elif connect is not None: 12:04:17 connect -= 1 12:04:17 12:04:17 elif error and self._is_read_error(error): 12:04:17 # Read retry? 12:04:17 if read is False or method is None or not self._is_method_retryable(method): 12:04:17 raise reraise(type(error), error, _stacktrace) 12:04:17 elif read is not None: 12:04:17 read -= 1 12:04:17 12:04:17 elif error: 12:04:17 # Other retry? 12:04:17 if other is not None: 12:04:17 other -= 1 12:04:17 12:04:17 elif response and response.get_redirect_location(): 12:04:17 # Redirect retry? 
12:04:17 if redirect is not None: 12:04:17 redirect -= 1 12:04:17 cause = "too many redirects" 12:04:17 response_redirect_location = response.get_redirect_location() 12:04:17 if response_redirect_location: 12:04:17 redirect_location = response_redirect_location 12:04:17 status = response.status 12:04:17 12:04:17 else: 12:04:17 # Incrementing because of a server error like a 500 in 12:04:17 # status_forcelist and the given method is in the allowed_methods 12:04:17 cause = ResponseError.GENERIC_ERROR 12:04:17 if response and response.status: 12:04:17 if status_count is not None: 12:04:17 status_count -= 1 12:04:17 cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status) 12:04:17 status = response.status 12:04:17 12:04:17 history = self.history + ( 12:04:17 RequestHistory(method, url, error, status, redirect_location), 12:04:17 ) 12:04:17 12:04:17 new_retry = self.new( 12:04:17 total=total, 12:04:17 connect=connect, 12:04:17 read=read, 12:04:17 redirect=redirect, 12:04:17 status=status_count, 12:04:17 other=other, 12:04:17 history=history, 12:04:17 ) 12:04:17 12:04:17 if new_retry.is_exhausted(): 12:04:17 reason = error or ResponseError(cause) 12:04:17 > raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type] 12:04:17 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 12:04:17 E urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=8183): Max retries exceeded with url: /rests/data/transportpce-portmapping:network/nodes=ROADM-A1/mapping=DEG2-TTP-TXRX (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 12:04:17 12:04:17 ../.tox/tests221/lib/python3.11/site-packages/urllib3/util/retry.py:519: MaxRetryError 12:04:17 12:04:17 During handling of the above exception, another exception occurred: 12:04:17 12:04:17 self = 12:04:17 12:04:17 def test_05_rdm_portmapping_DEG2_TTP_TXRX_with_ots_oms(self): 12:04:17 > response = test_utils.get_portmapping_node_attr("ROADM-A1", "mapping", "DEG2-TTP-TXRX") 12:04:17 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 12:04:17 12:04:17 transportpce_tests/2.2.1/test01_portmapping.py:80: 12:04:17 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 12:04:17 transportpce_tests/common/test_utils.py:492: in get_portmapping_node_attr 12:04:17 response = get_request(target_url) 12:04:17 ^^^^^^^^^^^^^^^^^^^^^^^ 12:04:17 transportpce_tests/common/test_utils.py:116: in get_request 12:04:17 return requests.request( 12:04:17 ../.tox/tests221/lib/python3.11/site-packages/requests/api.py:59: in request 12:04:17 return session.request(method=method, url=url, **kwargs) 12:04:17 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 12:04:17 ../.tox/tests221/lib/python3.11/site-packages/requests/sessions.py:589: in request 12:04:17 resp = self.send(prep, **send_kwargs) 12:04:17 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 12:04:17 ../.tox/tests221/lib/python3.11/site-packages/requests/sessions.py:703: in send 12:04:17 r = adapter.send(request, **kwargs) 12:04:17 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 12:04:17 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 12:04:17 12:04:17 self = 12:04:17 request = , stream = False 12:04:17 timeout = Timeout(connect=30, read=30, total=None), verify = True, cert = None 12:04:17 proxies = OrderedDict() 12:04:17 12:04:17 def send( 12:04:17 self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 12:04:17 ): 12:04:17 """Sends PreparedRequest object. 
Returns Response object. 12:04:17 12:04:17 :param request: The :class:`PreparedRequest ` being sent. 12:04:17 :param stream: (optional) Whether to stream the request content. 12:04:17 :param timeout: (optional) How long to wait for the server to send 12:04:17 data before giving up, as a float, or a :ref:`(connect timeout, 12:04:17 read timeout) ` tuple. 12:04:17 :type timeout: float or tuple or urllib3 Timeout object 12:04:17 :param verify: (optional) Either a boolean, in which case it controls whether 12:04:17 we verify the server's TLS certificate, or a string, in which case it 12:04:17 must be a path to a CA bundle to use 12:04:17 :param cert: (optional) Any user-provided SSL certificate to be trusted. 12:04:17 :param proxies: (optional) The proxies dictionary to apply to the request. 12:04:17 :rtype: requests.Response 12:04:17 """ 12:04:17 12:04:17 try: 12:04:17 conn = self.get_connection_with_tls_context( 12:04:17 request, verify, proxies=proxies, cert=cert 12:04:17 ) 12:04:17 except LocationValueError as e: 12:04:17 raise InvalidURL(e, request=request) 12:04:17 12:04:17 self.cert_verify(conn, request.url, verify, cert) 12:04:17 url = self.request_url(request, proxies) 12:04:17 self.add_headers( 12:04:17 request, 12:04:17 stream=stream, 12:04:17 timeout=timeout, 12:04:17 verify=verify, 12:04:17 cert=cert, 12:04:17 proxies=proxies, 12:04:17 ) 12:04:17 12:04:17 chunked = not (request.body is None or "Content-Length" in request.headers) 12:04:17 12:04:17 if isinstance(timeout, tuple): 12:04:17 try: 12:04:17 connect, read = timeout 12:04:17 timeout = TimeoutSauce(connect=connect, read=read) 12:04:17 except ValueError: 12:04:17 raise ValueError( 12:04:17 f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 12:04:17 f"or a single float to set both timeouts to the same value." 12:04:17 ) 12:04:17 elif isinstance(timeout, TimeoutSauce): 12:04:17 pass 12:04:17 else: 12:04:17 timeout = TimeoutSauce(connect=timeout, read=timeout) 12:04:17 12:04:17 try: 12:04:17 resp = conn.urlopen( 12:04:17 method=request.method, 12:04:17 url=url, 12:04:17 body=request.body, 12:04:17 headers=request.headers, 12:04:17 redirect=False, 12:04:17 assert_same_host=False, 12:04:17 preload_content=False, 12:04:17 decode_content=False, 12:04:17 retries=self.max_retries, 12:04:17 timeout=timeout, 12:04:17 chunked=chunked, 12:04:17 ) 12:04:17 12:04:17 except (ProtocolError, OSError) as err: 12:04:17 raise ConnectionError(err, request=request) 12:04:17 12:04:17 except MaxRetryError as e: 12:04:17 if isinstance(e.reason, ConnectTimeoutError): 12:04:17 # TODO: Remove this in 3.0.0: see #2811 12:04:17 if not isinstance(e.reason, NewConnectionError): 12:04:17 raise ConnectTimeout(e, request=request) 12:04:17 12:04:17 if isinstance(e.reason, ResponseError): 12:04:17 raise RetryError(e, request=request) 12:04:17 12:04:17 if isinstance(e.reason, _ProxyError): 12:04:17 raise ProxyError(e, request=request) 12:04:17 12:04:17 if isinstance(e.reason, _SSLError): 12:04:17 # This branch is for urllib3 v1.22 and later. 
12:04:17 raise SSLError(e, request=request) 12:04:17 12:04:17 > raise ConnectionError(e, request=request) 12:04:17 E requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=8183): Max retries exceeded with url: /rests/data/transportpce-portmapping:network/nodes=ROADM-A1/mapping=DEG2-TTP-TXRX (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 12:04:17 12:04:17 ../.tox/tests221/lib/python3.11/site-packages/requests/adapters.py:700: ConnectionError 12:04:17 ----------------------------- Captured stdout call ----------------------------- 12:04:17 execution of test_05_rdm_portmapping_DEG2_TTP_TXRX_with_ots_oms 12:04:17 _____ TransportPCEPortMappingTesting.test_06_rdm_portmapping_SRG1_PP3_TXRX _____ 12:04:17 12:04:17 self = 12:04:17 12:04:17 def _new_conn(self) -> socket.socket: 12:04:17 """Establish a socket connection and set nodelay settings on it. 12:04:17 12:04:17 :return: New socket connection. 12:04:17 """ 12:04:17 try: 12:04:17 > sock = connection.create_connection( 12:04:17 (self._dns_host, self.port), 12:04:17 self.timeout, 12:04:17 source_address=self.source_address, 12:04:17 socket_options=self.socket_options, 12:04:17 ) 12:04:17 12:04:17 ../.tox/tests221/lib/python3.11/site-packages/urllib3/connection.py:198: 12:04:17 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 12:04:17 ../.tox/tests221/lib/python3.11/site-packages/urllib3/util/connection.py:85: in create_connection 12:04:17 raise err 12:04:17 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 12:04:17 12:04:17 address = ('localhost', 8183), timeout = 30, source_address = None 12:04:17 socket_options = [(6, 1, 1)] 12:04:17 12:04:17 def create_connection( 12:04:17 address: tuple[str, int], 12:04:17 timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 12:04:17 source_address: tuple[str, int] | None = None, 12:04:17 socket_options: _TYPE_SOCKET_OPTIONS | None = None, 12:04:17 ) -> socket.socket: 12:04:17 """Connect to *address* and return the socket object. 12:04:17 12:04:17 Convenience function. Connect to *address* (a 2-tuple ``(host, 12:04:17 port)``) and return the socket object. Passing the optional 12:04:17 *timeout* parameter will set the timeout on the socket instance 12:04:17 before attempting to connect. If no *timeout* is supplied, the 12:04:17 global default timeout setting returned by :func:`socket.getdefaulttimeout` 12:04:17 is used. If *source_address* is set it must be a tuple of (host, port) 12:04:17 for the socket to bind as a source address before making the connection. 12:04:17 An host of '' or port 0 tells the OS to use the default. 12:04:17 """ 12:04:17 12:04:17 host, port = address 12:04:17 if host.startswith("["): 12:04:17 host = host.strip("[]") 12:04:17 err = None 12:04:17 12:04:17 # Using the value from allowed_gai_family() in the context of getaddrinfo lets 12:04:17 # us select whether to work with IPv4 DNS records, IPv6 records, or both. 12:04:17 # The original create_connection function always returns all records. 
12:04:17 family = allowed_gai_family() 12:04:17 12:04:17 try: 12:04:17 host.encode("idna") 12:04:17 except UnicodeError: 12:04:17 raise LocationParseError(f"'{host}', label empty or too long") from None 12:04:17 12:04:17 for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM): 12:04:17 af, socktype, proto, canonname, sa = res 12:04:17 sock = None 12:04:17 try: 12:04:17 sock = socket.socket(af, socktype, proto) 12:04:17 12:04:17 # If provided, set socket level options before connecting. 12:04:17 _set_socket_options(sock, socket_options) 12:04:17 12:04:17 if timeout is not _DEFAULT_TIMEOUT: 12:04:17 sock.settimeout(timeout) 12:04:17 if source_address: 12:04:17 sock.bind(source_address) 12:04:17 > sock.connect(sa) 12:04:17 E ConnectionRefusedError: [Errno 111] Connection refused 12:04:17 12:04:17 ../.tox/tests221/lib/python3.11/site-packages/urllib3/util/connection.py:73: ConnectionRefusedError 12:04:17 12:04:17 The above exception was the direct cause of the following exception: 12:04:17 12:04:17 self = 12:04:17 method = 'GET' 12:04:17 url = '/rests/data/transportpce-portmapping:network/nodes=ROADM-A1/mapping=SRG1-PP3-TXRX' 12:04:17 body = None 12:04:17 headers = {'User-Agent': 'python-requests/2.32.4', 'Accept-Encoding': 'gzip, deflate', 'Accept': 'application/json', 'Connection': 'keep-alive', 'Content-Type': 'application/json', 'Authorization': 'Basic YWRtaW46YWRtaW4='} 12:04:17 retries = Retry(total=0, connect=None, read=False, redirect=None, status=None) 12:04:17 redirect = False, assert_same_host = False 12:04:17 timeout = Timeout(connect=30, read=30, total=None), pool_timeout = None 12:04:17 release_conn = False, chunked = False, body_pos = None, preload_content = False 12:04:17 decode_content = False, response_kw = {} 12:04:17 parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/rests/data/transportpce-portmapping:network/nodes=ROADM-A1/mapping=SRG1-PP3-TXRX', query=None, fragment=None) 12:04:17 destination_scheme = None, conn = None, release_this_conn = True 12:04:17 http_tunnel_required = False, err = None, clean_exit = False 12:04:17 12:04:17 def urlopen( # type: ignore[override] 12:04:17 self, 12:04:17 method: str, 12:04:17 url: str, 12:04:17 body: _TYPE_BODY | None = None, 12:04:17 headers: typing.Mapping[str, str] | None = None, 12:04:17 retries: Retry | bool | int | None = None, 12:04:17 redirect: bool = True, 12:04:17 assert_same_host: bool = True, 12:04:17 timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 12:04:17 pool_timeout: int | None = None, 12:04:17 release_conn: bool | None = None, 12:04:17 chunked: bool = False, 12:04:17 body_pos: _TYPE_BODY_POSITION | None = None, 12:04:17 preload_content: bool = True, 12:04:17 decode_content: bool = True, 12:04:17 **response_kw: typing.Any, 12:04:17 ) -> BaseHTTPResponse: 12:04:17 """ 12:04:17 Get a connection from the pool and perform an HTTP request. This is the 12:04:17 lowest level call for making a request, so you'll need to specify all 12:04:17 the raw details. 12:04:17 12:04:17 .. note:: 12:04:17 12:04:17 More commonly, it's appropriate to use a convenience method 12:04:17 such as :meth:`request`. 12:04:17 12:04:17 .. note:: 12:04:17 12:04:17 `release_conn` will only behave as expected if 12:04:17 `preload_content=False` because we want to make 12:04:17 `preload_content=False` the default behaviour someday soon without 12:04:17 breaking backwards compatibility. 12:04:17 12:04:17 :param method: 12:04:17 HTTP request method (such as GET, POST, PUT, etc.) 
12:04:17 12:04:17 :param url: 12:04:17 The URL to perform the request on. 12:04:17 12:04:17 :param body: 12:04:17 Data to send in the request body, either :class:`str`, :class:`bytes`, 12:04:17 an iterable of :class:`str`/:class:`bytes`, or a file-like object. 12:04:17 12:04:17 :param headers: 12:04:17 Dictionary of custom headers to send, such as User-Agent, 12:04:17 If-None-Match, etc. If None, pool headers are used. If provided, 12:04:17 these headers completely replace any pool-specific headers. 12:04:17 12:04:17 :param retries: 12:04:17 Configure the number of retries to allow before raising a 12:04:17 :class:`~urllib3.exceptions.MaxRetryError` exception. 12:04:17 12:04:17 If ``None`` (default) will retry 3 times, see ``Retry.DEFAULT``. Pass a 12:04:17 :class:`~urllib3.util.retry.Retry` object for fine-grained control 12:04:17 over different types of retries. 12:04:17 Pass an integer number to retry connection errors that many times, 12:04:17 but no other types of errors. Pass zero to never retry. 12:04:17 12:04:17 If ``False``, then retries are disabled and any exception is raised 12:04:17 immediately. Also, instead of raising a MaxRetryError on redirects, 12:04:17 the redirect response will be returned. 12:04:17 12:04:17 :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. 12:04:17 12:04:17 :param redirect: 12:04:17 If True, automatically handle redirects (status codes 301, 302, 12:04:17 303, 307, 308). Each redirect counts as a retry. Disabling retries 12:04:17 will disable redirect, too. 12:04:17 12:04:17 :param assert_same_host: 12:04:17 If ``True``, will make sure that the host of the pool requests is 12:04:17 consistent else will raise HostChangedError. When ``False``, you can 12:04:17 use the pool on an HTTP proxy and request foreign hosts. 12:04:17 12:04:17 :param timeout: 12:04:17 If specified, overrides the default timeout for this one 12:04:17 request. It may be a float (in seconds) or an instance of 12:04:17 :class:`urllib3.util.Timeout`. 12:04:17 12:04:17 :param pool_timeout: 12:04:17 If set and the pool is set to block=True, then this method will 12:04:17 block for ``pool_timeout`` seconds and raise EmptyPoolError if no 12:04:17 connection is available within the time period. 12:04:17 12:04:17 :param bool preload_content: 12:04:17 If True, the response's body will be preloaded into memory. 12:04:17 12:04:17 :param bool decode_content: 12:04:17 If True, will attempt to decode the body based on the 12:04:17 'content-encoding' header. 12:04:17 12:04:17 :param release_conn: 12:04:17 If False, then the urlopen call will not release the connection 12:04:17 back into the pool once a response is received (but will release if 12:04:17 you read the entire contents of the response such as when 12:04:17 `preload_content=True`). This is useful if you're not preloading 12:04:17 the response's content immediately. You will need to call 12:04:17 ``r.release_conn()`` on the response ``r`` to return the connection 12:04:17 back into the pool. If None, it takes the value of ``preload_content`` 12:04:17 which defaults to ``True``. 12:04:17 12:04:17 :param bool chunked: 12:04:17 If True, urllib3 will send the body using chunked transfer 12:04:17 encoding. Otherwise, urllib3 will send the body using the standard 12:04:17 content-length form. Defaults to False. 12:04:17 12:04:17 :param int body_pos: 12:04:17 Position to seek to in file-like body in the event of a retry or 12:04:17 redirect. 
Typically this won't need to be set because urllib3 will 12:04:17 auto-populate the value when needed. 12:04:17 """ 12:04:17 parsed_url = parse_url(url) 12:04:17 destination_scheme = parsed_url.scheme 12:04:17 12:04:17 if headers is None: 12:04:17 headers = self.headers 12:04:17 12:04:17 if not isinstance(retries, Retry): 12:04:17 retries = Retry.from_int(retries, redirect=redirect, default=self.retries) 12:04:17 12:04:17 if release_conn is None: 12:04:17 release_conn = preload_content 12:04:17 12:04:17 # Check host 12:04:17 if assert_same_host and not self.is_same_host(url): 12:04:17 raise HostChangedError(self, url, retries) 12:04:17 12:04:17 # Ensure that the URL we're connecting to is properly encoded 12:04:17 if url.startswith("/"): 12:04:17 url = to_str(_encode_target(url)) 12:04:17 else: 12:04:17 url = to_str(parsed_url.url) 12:04:17 12:04:17 conn = None 12:04:17 12:04:17 # Track whether `conn` needs to be released before 12:04:17 # returning/raising/recursing. Update this variable if necessary, and 12:04:17 # leave `release_conn` constant throughout the function. That way, if 12:04:17 # the function recurses, the original value of `release_conn` will be 12:04:17 # passed down into the recursive call, and its value will be respected. 12:04:17 # 12:04:17 # See issue #651 [1] for details. 12:04:17 # 12:04:17 # [1] 12:04:17 release_this_conn = release_conn 12:04:17 12:04:17 http_tunnel_required = connection_requires_http_tunnel( 12:04:17 self.proxy, self.proxy_config, destination_scheme 12:04:17 ) 12:04:17 12:04:17 # Merge the proxy headers. Only done when not using HTTP CONNECT. We 12:04:17 # have to copy the headers dict so we can safely change it without those 12:04:17 # changes being reflected in anyone else's copy. 12:04:17 if not http_tunnel_required: 12:04:17 headers = headers.copy() # type: ignore[attr-defined] 12:04:17 headers.update(self.proxy_headers) # type: ignore[union-attr] 12:04:17 12:04:17 # Must keep the exception bound to a separate variable or else Python 3 12:04:17 # complains about UnboundLocalError. 12:04:17 err = None 12:04:17 12:04:17 # Keep track of whether we cleanly exited the except block. This 12:04:17 # ensures we do proper cleanup in finally. 12:04:17 clean_exit = False 12:04:17 12:04:17 # Rewind body position, if needed. Record current position 12:04:17 # for future rewinds in the event of a redirect/retry. 12:04:17 body_pos = set_file_position(body, body_pos) 12:04:17 12:04:17 try: 12:04:17 # Request a connection from the queue. 12:04:17 timeout_obj = self._get_timeout(timeout) 12:04:17 conn = self._get_conn(timeout=pool_timeout) 12:04:17 12:04:17 conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] 12:04:17 12:04:17 # Is this a closed/new connection that requires CONNECT tunnelling? 12:04:17 if self.proxy is not None and http_tunnel_required and conn.is_closed: 12:04:17 try: 12:04:17 self._prepare_proxy(conn) 12:04:17 except (BaseSSLError, OSError, SocketTimeout) as e: 12:04:17 self._raise_timeout( 12:04:17 err=e, url=self.proxy.url, timeout_value=conn.timeout 12:04:17 ) 12:04:17 raise 12:04:17 12:04:17 # If we're going to release the connection in ``finally:``, then 12:04:17 # the response doesn't need to know about the connection. Otherwise 12:04:17 # it will also try to release it and we'll have a double-release 12:04:17 # mess. 
12:04:17 response_conn = conn if not release_conn else None 12:04:17 12:04:17 # Make the request on the HTTPConnection object 12:04:17 > response = self._make_request( 12:04:17 conn, 12:04:17 method, 12:04:17 url, 12:04:17 timeout=timeout_obj, 12:04:17 body=body, 12:04:17 headers=headers, 12:04:17 chunked=chunked, 12:04:17 retries=retries, 12:04:17 response_conn=response_conn, 12:04:17 preload_content=preload_content, 12:04:17 decode_content=decode_content, 12:04:17 **response_kw, 12:04:17 ) 12:04:17 12:04:17 ../.tox/tests221/lib/python3.11/site-packages/urllib3/connectionpool.py:787: 12:04:17 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 12:04:17 ../.tox/tests221/lib/python3.11/site-packages/urllib3/connectionpool.py:493: in _make_request 12:04:17 conn.request( 12:04:17 ../.tox/tests221/lib/python3.11/site-packages/urllib3/connection.py:494: in request 12:04:17 self.endheaders() 12:04:17 /opt/pyenv/versions/3.11.10/lib/python3.11/http/client.py:1298: in endheaders 12:04:17 self._send_output(message_body, encode_chunked=encode_chunked) 12:04:17 /opt/pyenv/versions/3.11.10/lib/python3.11/http/client.py:1058: in _send_output 12:04:17 self.send(msg) 12:04:17 /opt/pyenv/versions/3.11.10/lib/python3.11/http/client.py:996: in send 12:04:17 self.connect() 12:04:17 ../.tox/tests221/lib/python3.11/site-packages/urllib3/connection.py:325: in connect 12:04:17 self.sock = self._new_conn() 12:04:17 ^^^^^^^^^^^^^^^^ 12:04:17 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 12:04:17 12:04:17 self = 12:04:17 12:04:17 def _new_conn(self) -> socket.socket: 12:04:17 """Establish a socket connection and set nodelay settings on it. 12:04:17 12:04:17 :return: New socket connection. 12:04:17 """ 12:04:17 try: 12:04:17 sock = connection.create_connection( 12:04:17 (self._dns_host, self.port), 12:04:17 self.timeout, 12:04:17 source_address=self.source_address, 12:04:17 socket_options=self.socket_options, 12:04:17 ) 12:04:17 except socket.gaierror as e: 12:04:17 raise NameResolutionError(self.host, self, e) from e 12:04:17 except SocketTimeout as e: 12:04:17 raise ConnectTimeoutError( 12:04:17 self, 12:04:17 f"Connection to {self.host} timed out. (connect timeout={self.timeout})", 12:04:17 ) from e 12:04:17 12:04:17 except OSError as e: 12:04:17 > raise NewConnectionError( 12:04:17 self, f"Failed to establish a new connection: {e}" 12:04:17 ) from e 12:04:17 E urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused 12:04:17 12:04:17 ../.tox/tests221/lib/python3.11/site-packages/urllib3/connection.py:213: NewConnectionError 12:04:17 12:04:17 The above exception was the direct cause of the following exception: 12:04:17 12:04:17 self = 12:04:17 request = , stream = False 12:04:17 timeout = Timeout(connect=30, read=30, total=None), verify = True, cert = None 12:04:17 proxies = OrderedDict() 12:04:17 12:04:17 def send( 12:04:17 self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 12:04:17 ): 12:04:17 """Sends PreparedRequest object. Returns Response object. 12:04:17 12:04:17 :param request: The :class:`PreparedRequest ` being sent. 12:04:17 :param stream: (optional) Whether to stream the request content. 12:04:17 :param timeout: (optional) How long to wait for the server to send 12:04:17 data before giving up, as a float, or a :ref:`(connect timeout, 12:04:17 read timeout) ` tuple. 
12:04:17 :type timeout: float or tuple or urllib3 Timeout object 12:04:17 :param verify: (optional) Either a boolean, in which case it controls whether 12:04:17 we verify the server's TLS certificate, or a string, in which case it 12:04:17 must be a path to a CA bundle to use 12:04:17 :param cert: (optional) Any user-provided SSL certificate to be trusted. 12:04:17 :param proxies: (optional) The proxies dictionary to apply to the request. 12:04:17 :rtype: requests.Response 12:04:17 """ 12:04:17 12:04:17 try: 12:04:17 conn = self.get_connection_with_tls_context( 12:04:17 request, verify, proxies=proxies, cert=cert 12:04:17 ) 12:04:17 except LocationValueError as e: 12:04:17 raise InvalidURL(e, request=request) 12:04:17 12:04:17 self.cert_verify(conn, request.url, verify, cert) 12:04:17 url = self.request_url(request, proxies) 12:04:17 self.add_headers( 12:04:17 request, 12:04:17 stream=stream, 12:04:17 timeout=timeout, 12:04:17 verify=verify, 12:04:17 cert=cert, 12:04:17 proxies=proxies, 12:04:17 ) 12:04:17 12:04:17 chunked = not (request.body is None or "Content-Length" in request.headers) 12:04:17 12:04:17 if isinstance(timeout, tuple): 12:04:17 try: 12:04:17 connect, read = timeout 12:04:17 timeout = TimeoutSauce(connect=connect, read=read) 12:04:17 except ValueError: 12:04:17 raise ValueError( 12:04:17 f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 12:04:17 f"or a single float to set both timeouts to the same value." 12:04:17 ) 12:04:17 elif isinstance(timeout, TimeoutSauce): 12:04:17 pass 12:04:17 else: 12:04:17 timeout = TimeoutSauce(connect=timeout, read=timeout) 12:04:17 12:04:17 try: 12:04:17 > resp = conn.urlopen( 12:04:17 method=request.method, 12:04:17 url=url, 12:04:17 body=request.body, 12:04:17 headers=request.headers, 12:04:17 redirect=False, 12:04:17 assert_same_host=False, 12:04:17 preload_content=False, 12:04:17 decode_content=False, 12:04:17 retries=self.max_retries, 12:04:17 timeout=timeout, 12:04:17 chunked=chunked, 12:04:17 ) 12:04:17 12:04:17 ../.tox/tests221/lib/python3.11/site-packages/requests/adapters.py:667: 12:04:17 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 12:04:17 ../.tox/tests221/lib/python3.11/site-packages/urllib3/connectionpool.py:841: in urlopen 12:04:17 retries = retries.increment( 12:04:17 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 12:04:17 12:04:17 self = Retry(total=0, connect=None, read=False, redirect=None, status=None) 12:04:17 method = 'GET' 12:04:17 url = '/rests/data/transportpce-portmapping:network/nodes=ROADM-A1/mapping=SRG1-PP3-TXRX' 12:04:17 response = None 12:04:17 error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused') 12:04:17 _pool = 12:04:17 _stacktrace = 12:04:17 12:04:17 def increment( 12:04:17 self, 12:04:17 method: str | None = None, 12:04:17 url: str | None = None, 12:04:17 response: BaseHTTPResponse | None = None, 12:04:17 error: Exception | None = None, 12:04:17 _pool: ConnectionPool | None = None, 12:04:17 _stacktrace: TracebackType | None = None, 12:04:17 ) -> Self: 12:04:17 """Return a new Retry object with incremented retry counters. 12:04:17 12:04:17 :param response: A response object, or None, if the server did not 12:04:17 return a response. 12:04:17 :type response: :class:`~urllib3.response.BaseHTTPResponse` 12:04:17 :param Exception error: An error encountered during the request, or 12:04:17 None if the response was received successfully. 
12:04:17 12:04:17 :return: A new ``Retry`` object. 12:04:17 """ 12:04:17 if self.total is False and error: 12:04:17 # Disabled, indicate to re-raise the error. 12:04:17 raise reraise(type(error), error, _stacktrace) 12:04:17 12:04:17 total = self.total 12:04:17 if total is not None: 12:04:17 total -= 1 12:04:17 12:04:17 connect = self.connect 12:04:17 read = self.read 12:04:17 redirect = self.redirect 12:04:17 status_count = self.status 12:04:17 other = self.other 12:04:17 cause = "unknown" 12:04:17 status = None 12:04:17 redirect_location = None 12:04:17 12:04:17 if error and self._is_connection_error(error): 12:04:17 # Connect retry? 12:04:17 if connect is False: 12:04:17 raise reraise(type(error), error, _stacktrace) 12:04:17 elif connect is not None: 12:04:17 connect -= 1 12:04:17 12:04:17 elif error and self._is_read_error(error): 12:04:17 # Read retry? 12:04:17 if read is False or method is None or not self._is_method_retryable(method): 12:04:17 raise reraise(type(error), error, _stacktrace) 12:04:17 elif read is not None: 12:04:17 read -= 1 12:04:17 12:04:17 elif error: 12:04:17 # Other retry? 12:04:17 if other is not None: 12:04:17 other -= 1 12:04:17 12:04:17 elif response and response.get_redirect_location(): 12:04:17 # Redirect retry? 12:04:17 if redirect is not None: 12:04:17 redirect -= 1 12:04:17 cause = "too many redirects" 12:04:17 response_redirect_location = response.get_redirect_location() 12:04:17 if response_redirect_location: 12:04:17 redirect_location = response_redirect_location 12:04:17 status = response.status 12:04:17 12:04:17 else: 12:04:17 # Incrementing because of a server error like a 500 in 12:04:17 # status_forcelist and the given method is in the allowed_methods 12:04:17 cause = ResponseError.GENERIC_ERROR 12:04:17 if response and response.status: 12:04:17 if status_count is not None: 12:04:17 status_count -= 1 12:04:17 cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status) 12:04:17 status = response.status 12:04:17 12:04:17 history = self.history + ( 12:04:17 RequestHistory(method, url, error, status, redirect_location), 12:04:17 ) 12:04:17 12:04:17 new_retry = self.new( 12:04:17 total=total, 12:04:17 connect=connect, 12:04:17 read=read, 12:04:17 redirect=redirect, 12:04:17 status=status_count, 12:04:17 other=other, 12:04:17 history=history, 12:04:17 ) 12:04:17 12:04:17 if new_retry.is_exhausted(): 12:04:17 reason = error or ResponseError(cause) 12:04:17 > raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type] 12:04:17 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 12:04:17 E urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=8183): Max retries exceeded with url: /rests/data/transportpce-portmapping:network/nodes=ROADM-A1/mapping=SRG1-PP3-TXRX (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 12:04:17 12:04:17 ../.tox/tests221/lib/python3.11/site-packages/urllib3/util/retry.py:519: MaxRetryError 12:04:17 12:04:17 During handling of the above exception, another exception occurred: 12:04:17 12:04:17 self = 12:04:17 12:04:17 def test_06_rdm_portmapping_SRG1_PP3_TXRX(self): 12:04:17 > response = test_utils.get_portmapping_node_attr("ROADM-A1", "mapping", "SRG1-PP3-TXRX") 12:04:17 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 12:04:17 12:04:17 transportpce_tests/2.2.1/test01_portmapping.py:91: 12:04:17 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 12:04:17 
transportpce_tests/common/test_utils.py:492: in get_portmapping_node_attr 12:04:17 response = get_request(target_url) 12:04:17 ^^^^^^^^^^^^^^^^^^^^^^^ 12:04:17 transportpce_tests/common/test_utils.py:116: in get_request 12:04:17 return requests.request( 12:04:17 ../.tox/tests221/lib/python3.11/site-packages/requests/api.py:59: in request 12:04:17 return session.request(method=method, url=url, **kwargs) 12:04:17 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 12:04:17 ../.tox/tests221/lib/python3.11/site-packages/requests/sessions.py:589: in request 12:04:17 resp = self.send(prep, **send_kwargs) 12:04:17 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 12:04:17 ../.tox/tests221/lib/python3.11/site-packages/requests/sessions.py:703: in send 12:04:17 r = adapter.send(request, **kwargs) 12:04:17 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 12:04:17 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 12:04:17 12:04:17 self = 12:04:17 request = , stream = False 12:04:17 timeout = Timeout(connect=30, read=30, total=None), verify = True, cert = None 12:04:17 proxies = OrderedDict() 12:04:17 12:04:17 def send( 12:04:17 self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 12:04:17 ): 12:04:17 """Sends PreparedRequest object. Returns Response object. 12:04:17 12:04:17 :param request: The :class:`PreparedRequest ` being sent. 12:04:17 :param stream: (optional) Whether to stream the request content. 12:04:17 :param timeout: (optional) How long to wait for the server to send 12:04:17 data before giving up, as a float, or a :ref:`(connect timeout, 12:04:17 read timeout) ` tuple. 12:04:17 :type timeout: float or tuple or urllib3 Timeout object 12:04:17 :param verify: (optional) Either a boolean, in which case it controls whether 12:04:17 we verify the server's TLS certificate, or a string, in which case it 12:04:17 must be a path to a CA bundle to use 12:04:17 :param cert: (optional) Any user-provided SSL certificate to be trusted. 12:04:17 :param proxies: (optional) The proxies dictionary to apply to the request. 12:04:17 :rtype: requests.Response 12:04:17 """ 12:04:17 12:04:17 try: 12:04:17 conn = self.get_connection_with_tls_context( 12:04:17 request, verify, proxies=proxies, cert=cert 12:04:17 ) 12:04:17 except LocationValueError as e: 12:04:17 raise InvalidURL(e, request=request) 12:04:17 12:04:17 self.cert_verify(conn, request.url, verify, cert) 12:04:17 url = self.request_url(request, proxies) 12:04:17 self.add_headers( 12:04:17 request, 12:04:17 stream=stream, 12:04:17 timeout=timeout, 12:04:17 verify=verify, 12:04:17 cert=cert, 12:04:17 proxies=proxies, 12:04:17 ) 12:04:17 12:04:17 chunked = not (request.body is None or "Content-Length" in request.headers) 12:04:17 12:04:17 if isinstance(timeout, tuple): 12:04:17 try: 12:04:17 connect, read = timeout 12:04:17 timeout = TimeoutSauce(connect=connect, read=read) 12:04:17 except ValueError: 12:04:17 raise ValueError( 12:04:17 f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 12:04:17 f"or a single float to set both timeouts to the same value." 
12:04:17 ) 12:04:17 elif isinstance(timeout, TimeoutSauce): 12:04:17 pass 12:04:17 else: 12:04:17 timeout = TimeoutSauce(connect=timeout, read=timeout) 12:04:17 12:04:17 try: 12:04:17 resp = conn.urlopen( 12:04:17 method=request.method, 12:04:17 url=url, 12:04:17 body=request.body, 12:04:17 headers=request.headers, 12:04:17 redirect=False, 12:04:17 assert_same_host=False, 12:04:17 preload_content=False, 12:04:17 decode_content=False, 12:04:17 retries=self.max_retries, 12:04:17 timeout=timeout, 12:04:17 chunked=chunked, 12:04:17 ) 12:04:17 12:04:17 except (ProtocolError, OSError) as err: 12:04:17 raise ConnectionError(err, request=request) 12:04:17 12:04:17 except MaxRetryError as e: 12:04:17 if isinstance(e.reason, ConnectTimeoutError): 12:04:17 # TODO: Remove this in 3.0.0: see #2811 12:04:17 if not isinstance(e.reason, NewConnectionError): 12:04:17 raise ConnectTimeout(e, request=request) 12:04:17 12:04:17 if isinstance(e.reason, ResponseError): 12:04:17 raise RetryError(e, request=request) 12:04:17 12:04:17 if isinstance(e.reason, _ProxyError): 12:04:17 raise ProxyError(e, request=request) 12:04:17 12:04:17 if isinstance(e.reason, _SSLError): 12:04:17 # This branch is for urllib3 v1.22 and later. 12:04:17 raise SSLError(e, request=request) 12:04:17 12:04:17 > raise ConnectionError(e, request=request) 12:04:17 E requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=8183): Max retries exceeded with url: /rests/data/transportpce-portmapping:network/nodes=ROADM-A1/mapping=SRG1-PP3-TXRX (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 12:04:17 12:04:17 ../.tox/tests221/lib/python3.11/site-packages/requests/adapters.py:700: ConnectionError 12:04:17 ----------------------------- Captured stdout call ----------------------------- 12:04:17 execution of test_06_rdm_portmapping_SRG1_PP3_TXRX 12:04:17 _____ TransportPCEPortMappingTesting.test_07_rdm_portmapping_SRG3_PP1_TXRX _____ 12:04:17 12:04:17 self = 12:04:17 12:04:17 def _new_conn(self) -> socket.socket: 12:04:17 """Establish a socket connection and set nodelay settings on it. 12:04:17 12:04:17 :return: New socket connection. 12:04:17 """ 12:04:17 try: 12:04:17 > sock = connection.create_connection( 12:04:17 (self._dns_host, self.port), 12:04:17 self.timeout, 12:04:17 source_address=self.source_address, 12:04:17 socket_options=self.socket_options, 12:04:17 ) 12:04:17 12:04:17 ../.tox/tests221/lib/python3.11/site-packages/urllib3/connection.py:198: 12:04:17 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 12:04:17 ../.tox/tests221/lib/python3.11/site-packages/urllib3/util/connection.py:85: in create_connection 12:04:17 raise err 12:04:17 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 12:04:17 12:04:17 address = ('localhost', 8183), timeout = 30, source_address = None 12:04:17 socket_options = [(6, 1, 1)] 12:04:17 12:04:17 def create_connection( 12:04:17 address: tuple[str, int], 12:04:17 timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 12:04:17 source_address: tuple[str, int] | None = None, 12:04:17 socket_options: _TYPE_SOCKET_OPTIONS | None = None, 12:04:17 ) -> socket.socket: 12:04:17 """Connect to *address* and return the socket object. 12:04:17 12:04:17 Convenience function. Connect to *address* (a 2-tuple ``(host, 12:04:17 port)``) and return the socket object. Passing the optional 12:04:17 *timeout* parameter will set the timeout on the socket instance 12:04:17 before attempting to connect. 
If no *timeout* is supplied, the 12:04:17 global default timeout setting returned by :func:`socket.getdefaulttimeout` 12:04:17 is used. If *source_address* is set it must be a tuple of (host, port) 12:04:17 for the socket to bind as a source address before making the connection. 12:04:17 An host of '' or port 0 tells the OS to use the default. 12:04:17 """ 12:04:17 12:04:17 host, port = address 12:04:17 if host.startswith("["): 12:04:17 host = host.strip("[]") 12:04:17 err = None 12:04:17 12:04:17 # Using the value from allowed_gai_family() in the context of getaddrinfo lets 12:04:17 # us select whether to work with IPv4 DNS records, IPv6 records, or both. 12:04:17 # The original create_connection function always returns all records. 12:04:17 family = allowed_gai_family() 12:04:17 12:04:17 try: 12:04:17 host.encode("idna") 12:04:17 except UnicodeError: 12:04:17 raise LocationParseError(f"'{host}', label empty or too long") from None 12:04:17 12:04:17 for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM): 12:04:17 af, socktype, proto, canonname, sa = res 12:04:17 sock = None 12:04:17 try: 12:04:17 sock = socket.socket(af, socktype, proto) 12:04:17 12:04:17 # If provided, set socket level options before connecting. 12:04:17 _set_socket_options(sock, socket_options) 12:04:17 12:04:17 if timeout is not _DEFAULT_TIMEOUT: 12:04:17 sock.settimeout(timeout) 12:04:17 if source_address: 12:04:17 sock.bind(source_address) 12:04:17 > sock.connect(sa) 12:04:17 E ConnectionRefusedError: [Errno 111] Connection refused 12:04:17 12:04:17 ../.tox/tests221/lib/python3.11/site-packages/urllib3/util/connection.py:73: ConnectionRefusedError 12:04:17 12:04:17 The above exception was the direct cause of the following exception: 12:04:17 12:04:17 self = 12:04:17 method = 'GET' 12:04:17 url = '/rests/data/transportpce-portmapping:network/nodes=ROADM-A1/mapping=SRG3-PP1-TXRX' 12:04:17 body = None 12:04:17 headers = {'User-Agent': 'python-requests/2.32.4', 'Accept-Encoding': 'gzip, deflate', 'Accept': 'application/json', 'Connection': 'keep-alive', 'Content-Type': 'application/json', 'Authorization': 'Basic YWRtaW46YWRtaW4='} 12:04:17 retries = Retry(total=0, connect=None, read=False, redirect=None, status=None) 12:04:17 redirect = False, assert_same_host = False 12:04:17 timeout = Timeout(connect=30, read=30, total=None), pool_timeout = None 12:04:17 release_conn = False, chunked = False, body_pos = None, preload_content = False 12:04:17 decode_content = False, response_kw = {} 12:04:17 parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/rests/data/transportpce-portmapping:network/nodes=ROADM-A1/mapping=SRG3-PP1-TXRX', query=None, fragment=None) 12:04:17 destination_scheme = None, conn = None, release_this_conn = True 12:04:17 http_tunnel_required = False, err = None, clean_exit = False 12:04:17 12:04:17 def urlopen( # type: ignore[override] 12:04:17 self, 12:04:17 method: str, 12:04:17 url: str, 12:04:17 body: _TYPE_BODY | None = None, 12:04:17 headers: typing.Mapping[str, str] | None = None, 12:04:17 retries: Retry | bool | int | None = None, 12:04:17 redirect: bool = True, 12:04:17 assert_same_host: bool = True, 12:04:17 timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 12:04:17 pool_timeout: int | None = None, 12:04:17 release_conn: bool | None = None, 12:04:17 chunked: bool = False, 12:04:17 body_pos: _TYPE_BODY_POSITION | None = None, 12:04:17 preload_content: bool = True, 12:04:17 decode_content: bool = True, 12:04:17 **response_kw: typing.Any, 12:04:17 ) -> BaseHTTPResponse: 
12:04:17 """ 12:04:17 Get a connection from the pool and perform an HTTP request. This is the 12:04:17 lowest level call for making a request, so you'll need to specify all 12:04:17 the raw details. 12:04:17 12:04:17 .. note:: 12:04:17 12:04:17 More commonly, it's appropriate to use a convenience method 12:04:17 such as :meth:`request`. 12:04:17 12:04:17 .. note:: 12:04:17 12:04:17 `release_conn` will only behave as expected if 12:04:17 `preload_content=False` because we want to make 12:04:17 `preload_content=False` the default behaviour someday soon without 12:04:17 breaking backwards compatibility. 12:04:17 12:04:17 :param method: 12:04:17 HTTP request method (such as GET, POST, PUT, etc.) 12:04:17 12:04:17 :param url: 12:04:17 The URL to perform the request on. 12:04:17 12:04:17 :param body: 12:04:17 Data to send in the request body, either :class:`str`, :class:`bytes`, 12:04:17 an iterable of :class:`str`/:class:`bytes`, or a file-like object. 12:04:17 12:04:17 :param headers: 12:04:17 Dictionary of custom headers to send, such as User-Agent, 12:04:17 If-None-Match, etc. If None, pool headers are used. If provided, 12:04:17 these headers completely replace any pool-specific headers. 12:04:17 12:04:17 :param retries: 12:04:17 Configure the number of retries to allow before raising a 12:04:17 :class:`~urllib3.exceptions.MaxRetryError` exception. 12:04:17 12:04:17 If ``None`` (default) will retry 3 times, see ``Retry.DEFAULT``. Pass a 12:04:17 :class:`~urllib3.util.retry.Retry` object for fine-grained control 12:04:17 over different types of retries. 12:04:17 Pass an integer number to retry connection errors that many times, 12:04:17 but no other types of errors. Pass zero to never retry. 12:04:17 12:04:17 If ``False``, then retries are disabled and any exception is raised 12:04:17 immediately. Also, instead of raising a MaxRetryError on redirects, 12:04:17 the redirect response will be returned. 12:04:17 12:04:17 :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. 12:04:17 12:04:17 :param redirect: 12:04:17 If True, automatically handle redirects (status codes 301, 302, 12:04:17 303, 307, 308). Each redirect counts as a retry. Disabling retries 12:04:17 will disable redirect, too. 12:04:17 12:04:17 :param assert_same_host: 12:04:17 If ``True``, will make sure that the host of the pool requests is 12:04:17 consistent else will raise HostChangedError. When ``False``, you can 12:04:17 use the pool on an HTTP proxy and request foreign hosts. 12:04:17 12:04:17 :param timeout: 12:04:17 If specified, overrides the default timeout for this one 12:04:17 request. It may be a float (in seconds) or an instance of 12:04:17 :class:`urllib3.util.Timeout`. 12:04:17 12:04:17 :param pool_timeout: 12:04:17 If set and the pool is set to block=True, then this method will 12:04:17 block for ``pool_timeout`` seconds and raise EmptyPoolError if no 12:04:17 connection is available within the time period. 12:04:17 12:04:17 :param bool preload_content: 12:04:17 If True, the response's body will be preloaded into memory. 12:04:17 12:04:17 :param bool decode_content: 12:04:17 If True, will attempt to decode the body based on the 12:04:17 'content-encoding' header. 12:04:17 12:04:17 :param release_conn: 12:04:17 If False, then the urlopen call will not release the connection 12:04:17 back into the pool once a response is received (but will release if 12:04:17 you read the entire contents of the response such as when 12:04:17 `preload_content=True`). 
This is useful if you're not preloading 12:04:17 the response's content immediately. You will need to call 12:04:17 ``r.release_conn()`` on the response ``r`` to return the connection 12:04:17 back into the pool. If None, it takes the value of ``preload_content`` 12:04:17 which defaults to ``True``. 12:04:17 12:04:17 :param bool chunked: 12:04:17 If True, urllib3 will send the body using chunked transfer 12:04:17 encoding. Otherwise, urllib3 will send the body using the standard 12:04:17 content-length form. Defaults to False. 12:04:17 12:04:17 :param int body_pos: 12:04:17 Position to seek to in file-like body in the event of a retry or 12:04:17 redirect. Typically this won't need to be set because urllib3 will 12:04:17 auto-populate the value when needed. 12:04:17 """ 12:04:17 parsed_url = parse_url(url) 12:04:17 destination_scheme = parsed_url.scheme 12:04:17 12:04:17 if headers is None: 12:04:17 headers = self.headers 12:04:17 12:04:17 if not isinstance(retries, Retry): 12:04:17 retries = Retry.from_int(retries, redirect=redirect, default=self.retries) 12:04:17 12:04:17 if release_conn is None: 12:04:17 release_conn = preload_content 12:04:17 12:04:17 # Check host 12:04:17 if assert_same_host and not self.is_same_host(url): 12:04:17 raise HostChangedError(self, url, retries) 12:04:17 12:04:17 # Ensure that the URL we're connecting to is properly encoded 12:04:17 if url.startswith("/"): 12:04:17 url = to_str(_encode_target(url)) 12:04:17 else: 12:04:17 url = to_str(parsed_url.url) 12:04:17 12:04:17 conn = None 12:04:17 12:04:17 # Track whether `conn` needs to be released before 12:04:17 # returning/raising/recursing. Update this variable if necessary, and 12:04:17 # leave `release_conn` constant throughout the function. That way, if 12:04:17 # the function recurses, the original value of `release_conn` will be 12:04:17 # passed down into the recursive call, and its value will be respected. 12:04:17 # 12:04:17 # See issue #651 [1] for details. 12:04:17 # 12:04:17 # [1] 12:04:17 release_this_conn = release_conn 12:04:17 12:04:17 http_tunnel_required = connection_requires_http_tunnel( 12:04:17 self.proxy, self.proxy_config, destination_scheme 12:04:17 ) 12:04:17 12:04:17 # Merge the proxy headers. Only done when not using HTTP CONNECT. We 12:04:17 # have to copy the headers dict so we can safely change it without those 12:04:17 # changes being reflected in anyone else's copy. 12:04:17 if not http_tunnel_required: 12:04:17 headers = headers.copy() # type: ignore[attr-defined] 12:04:17 headers.update(self.proxy_headers) # type: ignore[union-attr] 12:04:17 12:04:17 # Must keep the exception bound to a separate variable or else Python 3 12:04:17 # complains about UnboundLocalError. 12:04:17 err = None 12:04:17 12:04:17 # Keep track of whether we cleanly exited the except block. This 12:04:17 # ensures we do proper cleanup in finally. 12:04:17 clean_exit = False 12:04:17 12:04:17 # Rewind body position, if needed. Record current position 12:04:17 # for future rewinds in the event of a redirect/retry. 12:04:17 body_pos = set_file_position(body, body_pos) 12:04:17 12:04:17 try: 12:04:17 # Request a connection from the queue. 12:04:17 timeout_obj = self._get_timeout(timeout) 12:04:17 conn = self._get_conn(timeout=pool_timeout) 12:04:17 12:04:17 conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] 12:04:17 12:04:17 # Is this a closed/new connection that requires CONNECT tunnelling? 
12:04:17 if self.proxy is not None and http_tunnel_required and conn.is_closed: 12:04:17 try: 12:04:17 self._prepare_proxy(conn) 12:04:17 except (BaseSSLError, OSError, SocketTimeout) as e: 12:04:17 self._raise_timeout( 12:04:17 err=e, url=self.proxy.url, timeout_value=conn.timeout 12:04:17 ) 12:04:17 raise 12:04:17 12:04:17 # If we're going to release the connection in ``finally:``, then 12:04:17 # the response doesn't need to know about the connection. Otherwise 12:04:17 # it will also try to release it and we'll have a double-release 12:04:17 # mess. 12:04:17 response_conn = conn if not release_conn else None 12:04:17 12:04:17 # Make the request on the HTTPConnection object 12:04:17 > response = self._make_request( 12:04:17 conn, 12:04:17 method, 12:04:17 url, 12:04:17 timeout=timeout_obj, 12:04:17 body=body, 12:04:17 headers=headers, 12:04:17 chunked=chunked, 12:04:17 retries=retries, 12:04:17 response_conn=response_conn, 12:04:17 preload_content=preload_content, 12:04:17 decode_content=decode_content, 12:04:17 **response_kw, 12:04:17 ) 12:04:17 12:04:17 ../.tox/tests221/lib/python3.11/site-packages/urllib3/connectionpool.py:787: 12:04:17 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 12:04:17 ../.tox/tests221/lib/python3.11/site-packages/urllib3/connectionpool.py:493: in _make_request 12:04:17 conn.request( 12:04:17 ../.tox/tests221/lib/python3.11/site-packages/urllib3/connection.py:494: in request 12:04:17 self.endheaders() 12:04:17 /opt/pyenv/versions/3.11.10/lib/python3.11/http/client.py:1298: in endheaders 12:04:17 self._send_output(message_body, encode_chunked=encode_chunked) 12:04:17 /opt/pyenv/versions/3.11.10/lib/python3.11/http/client.py:1058: in _send_output 12:04:17 self.send(msg) 12:04:17 /opt/pyenv/versions/3.11.10/lib/python3.11/http/client.py:996: in send 12:04:17 self.connect() 12:04:17 ../.tox/tests221/lib/python3.11/site-packages/urllib3/connection.py:325: in connect 12:04:17 self.sock = self._new_conn() 12:04:17 ^^^^^^^^^^^^^^^^ 12:04:17 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 12:04:17 12:04:17 self = 12:04:17 12:04:17 def _new_conn(self) -> socket.socket: 12:04:17 """Establish a socket connection and set nodelay settings on it. 12:04:17 12:04:17 :return: New socket connection. 12:04:17 """ 12:04:17 try: 12:04:17 sock = connection.create_connection( 12:04:17 (self._dns_host, self.port), 12:04:17 self.timeout, 12:04:17 source_address=self.source_address, 12:04:17 socket_options=self.socket_options, 12:04:17 ) 12:04:17 except socket.gaierror as e: 12:04:17 raise NameResolutionError(self.host, self, e) from e 12:04:17 except SocketTimeout as e: 12:04:17 raise ConnectTimeoutError( 12:04:17 self, 12:04:17 f"Connection to {self.host} timed out. 
(connect timeout={self.timeout})", 12:04:17 ) from e 12:04:17 12:04:17 except OSError as e: 12:04:17 > raise NewConnectionError( 12:04:17 self, f"Failed to establish a new connection: {e}" 12:04:17 ) from e 12:04:17 E urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused 12:04:17 12:04:17 ../.tox/tests221/lib/python3.11/site-packages/urllib3/connection.py:213: NewConnectionError 12:04:17 12:04:17 The above exception was the direct cause of the following exception: 12:04:17 12:04:17 self = 12:04:17 request = , stream = False 12:04:17 timeout = Timeout(connect=30, read=30, total=None), verify = True, cert = None 12:04:17 proxies = OrderedDict() 12:04:17 12:04:17 def send( 12:04:17 self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 12:04:17 ): 12:04:17 """Sends PreparedRequest object. Returns Response object. 12:04:17 12:04:17 :param request: The :class:`PreparedRequest ` being sent. 12:04:17 :param stream: (optional) Whether to stream the request content. 12:04:17 :param timeout: (optional) How long to wait for the server to send 12:04:17 data before giving up, as a float, or a :ref:`(connect timeout, 12:04:17 read timeout) ` tuple. 12:04:17 :type timeout: float or tuple or urllib3 Timeout object 12:04:17 :param verify: (optional) Either a boolean, in which case it controls whether 12:04:17 we verify the server's TLS certificate, or a string, in which case it 12:04:17 must be a path to a CA bundle to use 12:04:17 :param cert: (optional) Any user-provided SSL certificate to be trusted. 12:04:17 :param proxies: (optional) The proxies dictionary to apply to the request. 12:04:17 :rtype: requests.Response 12:04:17 """ 12:04:17 12:04:17 try: 12:04:17 conn = self.get_connection_with_tls_context( 12:04:17 request, verify, proxies=proxies, cert=cert 12:04:17 ) 12:04:17 except LocationValueError as e: 12:04:17 raise InvalidURL(e, request=request) 12:04:17 12:04:17 self.cert_verify(conn, request.url, verify, cert) 12:04:17 url = self.request_url(request, proxies) 12:04:17 self.add_headers( 12:04:17 request, 12:04:17 stream=stream, 12:04:17 timeout=timeout, 12:04:17 verify=verify, 12:04:17 cert=cert, 12:04:17 proxies=proxies, 12:04:17 ) 12:04:17 12:04:17 chunked = not (request.body is None or "Content-Length" in request.headers) 12:04:17 12:04:17 if isinstance(timeout, tuple): 12:04:17 try: 12:04:17 connect, read = timeout 12:04:17 timeout = TimeoutSauce(connect=connect, read=read) 12:04:17 except ValueError: 12:04:17 raise ValueError( 12:04:17 f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 12:04:17 f"or a single float to set both timeouts to the same value." 
12:04:17 ) 12:04:17 elif isinstance(timeout, TimeoutSauce): 12:04:17 pass 12:04:17 else: 12:04:17 timeout = TimeoutSauce(connect=timeout, read=timeout) 12:04:17 12:04:17 try: 12:04:17 > resp = conn.urlopen( 12:04:17 method=request.method, 12:04:17 url=url, 12:04:17 body=request.body, 12:04:17 headers=request.headers, 12:04:17 redirect=False, 12:04:17 assert_same_host=False, 12:04:17 preload_content=False, 12:04:17 decode_content=False, 12:04:17 retries=self.max_retries, 12:04:17 timeout=timeout, 12:04:17 chunked=chunked, 12:04:17 ) 12:04:17 12:04:17 ../.tox/tests221/lib/python3.11/site-packages/requests/adapters.py:667: 12:04:17 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 12:04:17 ../.tox/tests221/lib/python3.11/site-packages/urllib3/connectionpool.py:841: in urlopen 12:04:17 retries = retries.increment( 12:04:17 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 12:04:17 12:04:17 self = Retry(total=0, connect=None, read=False, redirect=None, status=None) 12:04:17 method = 'GET' 12:04:17 url = '/rests/data/transportpce-portmapping:network/nodes=ROADM-A1/mapping=SRG3-PP1-TXRX' 12:04:17 response = None 12:04:17 error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused') 12:04:17 _pool = 12:04:17 _stacktrace = 12:04:17 12:04:17 def increment( 12:04:17 self, 12:04:17 method: str | None = None, 12:04:17 url: str | None = None, 12:04:17 response: BaseHTTPResponse | None = None, 12:04:17 error: Exception | None = None, 12:04:17 _pool: ConnectionPool | None = None, 12:04:17 _stacktrace: TracebackType | None = None, 12:04:17 ) -> Self: 12:04:17 """Return a new Retry object with incremented retry counters. 12:04:17 12:04:17 :param response: A response object, or None, if the server did not 12:04:17 return a response. 12:04:17 :type response: :class:`~urllib3.response.BaseHTTPResponse` 12:04:17 :param Exception error: An error encountered during the request, or 12:04:17 None if the response was received successfully. 12:04:17 12:04:17 :return: A new ``Retry`` object. 12:04:17 """ 12:04:17 if self.total is False and error: 12:04:17 # Disabled, indicate to re-raise the error. 12:04:17 raise reraise(type(error), error, _stacktrace) 12:04:17 12:04:17 total = self.total 12:04:17 if total is not None: 12:04:17 total -= 1 12:04:17 12:04:17 connect = self.connect 12:04:17 read = self.read 12:04:17 redirect = self.redirect 12:04:17 status_count = self.status 12:04:17 other = self.other 12:04:17 cause = "unknown" 12:04:17 status = None 12:04:17 redirect_location = None 12:04:17 12:04:17 if error and self._is_connection_error(error): 12:04:17 # Connect retry? 12:04:17 if connect is False: 12:04:17 raise reraise(type(error), error, _stacktrace) 12:04:17 elif connect is not None: 12:04:17 connect -= 1 12:04:17 12:04:17 elif error and self._is_read_error(error): 12:04:17 # Read retry? 12:04:17 if read is False or method is None or not self._is_method_retryable(method): 12:04:17 raise reraise(type(error), error, _stacktrace) 12:04:17 elif read is not None: 12:04:17 read -= 1 12:04:17 12:04:17 elif error: 12:04:17 # Other retry? 12:04:17 if other is not None: 12:04:17 other -= 1 12:04:17 12:04:17 elif response and response.get_redirect_location(): 12:04:17 # Redirect retry? 
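The recurring banner "The above exception was the direct cause of the following exception" is ordinary PEP 3134 chaining: each layer re-raises with "raise ... from err", so the ConnectionRefusedError becomes the __cause__ of NewConnectionError, which in turn becomes the reason carried by MaxRetryError. A stand-in sketch of that mechanism (exception names here are illustrative only):

class LowLevelError(Exception):
    """Stand-in for ConnectionRefusedError."""

class WrapperError(Exception):
    """Stand-in for NewConnectionError / MaxRetryError."""

try:
    try:
        raise LowLevelError("[Errno 111] Connection refused")
    except LowLevelError as err:
        # "raise ... from err" sets __cause__, which pytest reports as the direct cause.
        raise WrapperError("Failed to establish a new connection") from err
except WrapperError as exc:
    print(repr(exc.__cause__))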
12:04:17 if redirect is not None: 12:04:17 redirect -= 1 12:04:17 cause = "too many redirects" 12:04:17 response_redirect_location = response.get_redirect_location() 12:04:17 if response_redirect_location: 12:04:17 redirect_location = response_redirect_location 12:04:17 status = response.status 12:04:17 12:04:17 else: 12:04:17 # Incrementing because of a server error like a 500 in 12:04:17 # status_forcelist and the given method is in the allowed_methods 12:04:17 cause = ResponseError.GENERIC_ERROR 12:04:17 if response and response.status: 12:04:17 if status_count is not None: 12:04:17 status_count -= 1 12:04:17 cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status) 12:04:17 status = response.status 12:04:17 12:04:17 history = self.history + ( 12:04:17 RequestHistory(method, url, error, status, redirect_location), 12:04:17 ) 12:04:17 12:04:17 new_retry = self.new( 12:04:17 total=total, 12:04:17 connect=connect, 12:04:17 read=read, 12:04:17 redirect=redirect, 12:04:17 status=status_count, 12:04:17 other=other, 12:04:17 history=history, 12:04:17 ) 12:04:17 12:04:17 if new_retry.is_exhausted(): 12:04:17 reason = error or ResponseError(cause) 12:04:17 > raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type] 12:04:17 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 12:04:17 E urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=8183): Max retries exceeded with url: /rests/data/transportpce-portmapping:network/nodes=ROADM-A1/mapping=SRG3-PP1-TXRX (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 12:04:17 12:04:17 ../.tox/tests221/lib/python3.11/site-packages/urllib3/util/retry.py:519: MaxRetryError 12:04:17 12:04:17 During handling of the above exception, another exception occurred: 12:04:17 12:04:17 self = 12:04:17 12:04:17 def test_07_rdm_portmapping_SRG3_PP1_TXRX(self): 12:04:17 > response = test_utils.get_portmapping_node_attr("ROADM-A1", "mapping", "SRG3-PP1-TXRX") 12:04:17 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 12:04:17 12:04:17 transportpce_tests/2.2.1/test01_portmapping.py:100: 12:04:17 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 12:04:17 transportpce_tests/common/test_utils.py:492: in get_portmapping_node_attr 12:04:17 response = get_request(target_url) 12:04:17 ^^^^^^^^^^^^^^^^^^^^^^^ 12:04:17 transportpce_tests/common/test_utils.py:116: in get_request 12:04:17 return requests.request( 12:04:17 ../.tox/tests221/lib/python3.11/site-packages/requests/api.py:59: in request 12:04:17 return session.request(method=method, url=url, **kwargs) 12:04:17 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 12:04:17 ../.tox/tests221/lib/python3.11/site-packages/requests/sessions.py:589: in request 12:04:17 resp = self.send(prep, **send_kwargs) 12:04:17 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 12:04:17 ../.tox/tests221/lib/python3.11/site-packages/requests/sessions.py:703: in send 12:04:17 r = adapter.send(request, **kwargs) 12:04:17 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 12:04:17 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 12:04:17 12:04:17 self = 12:04:17 request = , stream = False 12:04:17 timeout = Timeout(connect=30, read=30, total=None), verify = True, cert = None 12:04:17 proxies = OrderedDict() 12:04:17 12:04:17 def send( 12:04:17 self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 12:04:17 ): 12:04:17 """Sends PreparedRequest object. 
Returns Response object. 12:04:17 12:04:17 :param request: The :class:`PreparedRequest ` being sent. 12:04:17 :param stream: (optional) Whether to stream the request content. 12:04:17 :param timeout: (optional) How long to wait for the server to send 12:04:17 data before giving up, as a float, or a :ref:`(connect timeout, 12:04:17 read timeout) ` tuple. 12:04:17 :type timeout: float or tuple or urllib3 Timeout object 12:04:17 :param verify: (optional) Either a boolean, in which case it controls whether 12:04:17 we verify the server's TLS certificate, or a string, in which case it 12:04:17 must be a path to a CA bundle to use 12:04:17 :param cert: (optional) Any user-provided SSL certificate to be trusted. 12:04:17 :param proxies: (optional) The proxies dictionary to apply to the request. 12:04:17 :rtype: requests.Response 12:04:17 """ 12:04:17 12:04:17 try: 12:04:17 conn = self.get_connection_with_tls_context( 12:04:17 request, verify, proxies=proxies, cert=cert 12:04:17 ) 12:04:17 except LocationValueError as e: 12:04:17 raise InvalidURL(e, request=request) 12:04:17 12:04:17 self.cert_verify(conn, request.url, verify, cert) 12:04:17 url = self.request_url(request, proxies) 12:04:17 self.add_headers( 12:04:17 request, 12:04:17 stream=stream, 12:04:17 timeout=timeout, 12:04:17 verify=verify, 12:04:17 cert=cert, 12:04:17 proxies=proxies, 12:04:17 ) 12:04:17 12:04:17 chunked = not (request.body is None or "Content-Length" in request.headers) 12:04:17 12:04:17 if isinstance(timeout, tuple): 12:04:17 try: 12:04:17 connect, read = timeout 12:04:17 timeout = TimeoutSauce(connect=connect, read=read) 12:04:17 except ValueError: 12:04:17 raise ValueError( 12:04:17 f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 12:04:17 f"or a single float to set both timeouts to the same value." 12:04:17 ) 12:04:17 elif isinstance(timeout, TimeoutSauce): 12:04:17 pass 12:04:17 else: 12:04:17 timeout = TimeoutSauce(connect=timeout, read=timeout) 12:04:17 12:04:17 try: 12:04:17 resp = conn.urlopen( 12:04:17 method=request.method, 12:04:17 url=url, 12:04:17 body=request.body, 12:04:17 headers=request.headers, 12:04:17 redirect=False, 12:04:17 assert_same_host=False, 12:04:17 preload_content=False, 12:04:17 decode_content=False, 12:04:17 retries=self.max_retries, 12:04:17 timeout=timeout, 12:04:17 chunked=chunked, 12:04:17 ) 12:04:17 12:04:17 except (ProtocolError, OSError) as err: 12:04:17 raise ConnectionError(err, request=request) 12:04:17 12:04:17 except MaxRetryError as e: 12:04:17 if isinstance(e.reason, ConnectTimeoutError): 12:04:17 # TODO: Remove this in 3.0.0: see #2811 12:04:17 if not isinstance(e.reason, NewConnectionError): 12:04:17 raise ConnectTimeout(e, request=request) 12:04:17 12:04:17 if isinstance(e.reason, ResponseError): 12:04:17 raise RetryError(e, request=request) 12:04:17 12:04:17 if isinstance(e.reason, _ProxyError): 12:04:17 raise ProxyError(e, request=request) 12:04:17 12:04:17 if isinstance(e.reason, _SSLError): 12:04:17 # This branch is for urllib3 v1.22 and later. 
12:04:17 raise SSLError(e, request=request) 12:04:17 12:04:17 > raise ConnectionError(e, request=request) 12:04:17 E requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=8183): Max retries exceeded with url: /rests/data/transportpce-portmapping:network/nodes=ROADM-A1/mapping=SRG3-PP1-TXRX (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 12:04:17 12:04:17 ../.tox/tests221/lib/python3.11/site-packages/requests/adapters.py:700: ConnectionError 12:04:17 ----------------------------- Captured stdout call ----------------------------- 12:04:17 execution of test_07_rdm_portmapping_SRG3_PP1_TXRX 12:04:17 ________ TransportPCEPortMappingTesting.test_08_xpdr_device_connection _________ 12:04:17 12:04:17 self = 12:04:17 12:04:17 def _new_conn(self) -> socket.socket: 12:04:17 """Establish a socket connection and set nodelay settings on it. 12:04:17 12:04:17 :return: New socket connection. 12:04:17 """ 12:04:17 try: 12:04:17 > sock = connection.create_connection( 12:04:17 (self._dns_host, self.port), 12:04:17 self.timeout, 12:04:17 source_address=self.source_address, 12:04:17 socket_options=self.socket_options, 12:04:17 ) 12:04:17 12:04:17 ../.tox/tests221/lib/python3.11/site-packages/urllib3/connection.py:198: 12:04:17 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 12:04:17 ../.tox/tests221/lib/python3.11/site-packages/urllib3/util/connection.py:85: in create_connection 12:04:17 raise err 12:04:17 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 12:04:17 12:04:17 address = ('localhost', 8183), timeout = 30, source_address = None 12:04:17 socket_options = [(6, 1, 1)] 12:04:17 12:04:17 def create_connection( 12:04:17 address: tuple[str, int], 12:04:17 timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 12:04:17 source_address: tuple[str, int] | None = None, 12:04:17 socket_options: _TYPE_SOCKET_OPTIONS | None = None, 12:04:17 ) -> socket.socket: 12:04:17 """Connect to *address* and return the socket object. 12:04:17 12:04:17 Convenience function. Connect to *address* (a 2-tuple ``(host, 12:04:17 port)``) and return the socket object. Passing the optional 12:04:17 *timeout* parameter will set the timeout on the socket instance 12:04:17 before attempting to connect. If no *timeout* is supplied, the 12:04:17 global default timeout setting returned by :func:`socket.getdefaulttimeout` 12:04:17 is used. If *source_address* is set it must be a tuple of (host, port) 12:04:17 for the socket to bind as a source address before making the connection. 12:04:17 An host of '' or port 0 tells the OS to use the default. 12:04:17 """ 12:04:17 12:04:17 host, port = address 12:04:17 if host.startswith("["): 12:04:17 host = host.strip("[]") 12:04:17 err = None 12:04:17 12:04:17 # Using the value from allowed_gai_family() in the context of getaddrinfo lets 12:04:17 # us select whether to work with IPv4 DNS records, IPv6 records, or both. 12:04:17 # The original create_connection function always returns all records. 
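The requests.exceptions.ConnectionError that finally fails each test is HTTPAdapter.send() re-wrapping the urllib3 MaxRetryError shown just above it. A short sketch of what a caller observes while port 8183 stays closed:

import requests

try:
    requests.get("http://localhost:8183/rests/data/transportpce-portmapping:network",
                 auth=("admin", "admin"), timeout=(30, 30))
except requests.exceptions.ConnectionError as exc:
    inner = exc.args[0]                        # the wrapped urllib3 MaxRetryError
    print(type(inner).__name__, inner.reason)  # -> MaxRetryError NewConnectionError(...)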
12:04:17 family = allowed_gai_family() 12:04:17 12:04:17 try: 12:04:17 host.encode("idna") 12:04:17 except UnicodeError: 12:04:17 raise LocationParseError(f"'{host}', label empty or too long") from None 12:04:17 12:04:17 for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM): 12:04:17 af, socktype, proto, canonname, sa = res 12:04:17 sock = None 12:04:17 try: 12:04:17 sock = socket.socket(af, socktype, proto) 12:04:17 12:04:17 # If provided, set socket level options before connecting. 12:04:17 _set_socket_options(sock, socket_options) 12:04:17 12:04:17 if timeout is not _DEFAULT_TIMEOUT: 12:04:17 sock.settimeout(timeout) 12:04:17 if source_address: 12:04:17 sock.bind(source_address) 12:04:17 > sock.connect(sa) 12:04:17 E ConnectionRefusedError: [Errno 111] Connection refused 12:04:17 12:04:17 ../.tox/tests221/lib/python3.11/site-packages/urllib3/util/connection.py:73: ConnectionRefusedError 12:04:17 12:04:17 The above exception was the direct cause of the following exception: 12:04:17 12:04:17 self = 12:04:17 method = 'PUT' 12:04:17 url = '/rests/data/network-topology:network-topology/topology=topology-netconf/node=XPDR-A1' 12:04:17 body = '{"node": [{"node-id": "XPDR-A1", "netconf-node-topology:netconf-node": {"netconf-node-topology:host": "127.0.0.1", "n...ff-millis": 1800000, "netconf-node-topology:backoff-multiplier": 1.5, "netconf-node-topology:keepalive-delay": 120}}]}' 12:04:17 headers = {'User-Agent': 'python-requests/2.32.4', 'Accept-Encoding': 'gzip, deflate', 'Accept': 'application/json', 'Connection': 'keep-alive', 'Content-Type': 'application/json', 'Content-Length': '709', 'Authorization': 'Basic YWRtaW46YWRtaW4='} 12:04:17 retries = Retry(total=0, connect=None, read=False, redirect=None, status=None) 12:04:17 redirect = False, assert_same_host = False 12:04:17 timeout = Timeout(connect=30, read=30, total=None), pool_timeout = None 12:04:17 release_conn = False, chunked = False, body_pos = None, preload_content = False 12:04:17 decode_content = False, response_kw = {} 12:04:17 parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/rests/data/network-topology:network-topology/topology=topology-netconf/node=XPDR-A1', query=None, fragment=None) 12:04:17 destination_scheme = None, conn = None, release_this_conn = True 12:04:17 http_tunnel_required = False, err = None, clean_exit = False 12:04:17 12:04:17 def urlopen( # type: ignore[override] 12:04:17 self, 12:04:17 method: str, 12:04:17 url: str, 12:04:17 body: _TYPE_BODY | None = None, 12:04:17 headers: typing.Mapping[str, str] | None = None, 12:04:17 retries: Retry | bool | int | None = None, 12:04:17 redirect: bool = True, 12:04:17 assert_same_host: bool = True, 12:04:17 timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 12:04:17 pool_timeout: int | None = None, 12:04:17 release_conn: bool | None = None, 12:04:17 chunked: bool = False, 12:04:17 body_pos: _TYPE_BODY_POSITION | None = None, 12:04:17 preload_content: bool = True, 12:04:17 decode_content: bool = True, 12:04:17 **response_kw: typing.Any, 12:04:17 ) -> BaseHTTPResponse: 12:04:17 """ 12:04:17 Get a connection from the pool and perform an HTTP request. This is the 12:04:17 lowest level call for making a request, so you'll need to specify all 12:04:17 the raw details. 12:04:17 12:04:17 .. note:: 12:04:17 12:04:17 More commonly, it's appropriate to use a convenience method 12:04:17 such as :meth:`request`. 12:04:17 12:04:17 .. 
note:: 12:04:17 12:04:17 `release_conn` will only behave as expected if 12:04:17 `preload_content=False` because we want to make 12:04:17 `preload_content=False` the default behaviour someday soon without 12:04:17 breaking backwards compatibility. 12:04:17 12:04:17 :param method: 12:04:17 HTTP request method (such as GET, POST, PUT, etc.) 12:04:17 12:04:17 :param url: 12:04:17 The URL to perform the request on. 12:04:17 12:04:17 :param body: 12:04:17 Data to send in the request body, either :class:`str`, :class:`bytes`, 12:04:17 an iterable of :class:`str`/:class:`bytes`, or a file-like object. 12:04:17 12:04:17 :param headers: 12:04:17 Dictionary of custom headers to send, such as User-Agent, 12:04:17 If-None-Match, etc. If None, pool headers are used. If provided, 12:04:17 these headers completely replace any pool-specific headers. 12:04:17 12:04:17 :param retries: 12:04:17 Configure the number of retries to allow before raising a 12:04:17 :class:`~urllib3.exceptions.MaxRetryError` exception. 12:04:17 12:04:17 If ``None`` (default) will retry 3 times, see ``Retry.DEFAULT``. Pass a 12:04:17 :class:`~urllib3.util.retry.Retry` object for fine-grained control 12:04:17 over different types of retries. 12:04:17 Pass an integer number to retry connection errors that many times, 12:04:17 but no other types of errors. Pass zero to never retry. 12:04:17 12:04:17 If ``False``, then retries are disabled and any exception is raised 12:04:17 immediately. Also, instead of raising a MaxRetryError on redirects, 12:04:17 the redirect response will be returned. 12:04:17 12:04:17 :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. 12:04:17 12:04:17 :param redirect: 12:04:17 If True, automatically handle redirects (status codes 301, 302, 12:04:17 303, 307, 308). Each redirect counts as a retry. Disabling retries 12:04:17 will disable redirect, too. 12:04:17 12:04:17 :param assert_same_host: 12:04:17 If ``True``, will make sure that the host of the pool requests is 12:04:17 consistent else will raise HostChangedError. When ``False``, you can 12:04:17 use the pool on an HTTP proxy and request foreign hosts. 12:04:17 12:04:17 :param timeout: 12:04:17 If specified, overrides the default timeout for this one 12:04:17 request. It may be a float (in seconds) or an instance of 12:04:17 :class:`urllib3.util.Timeout`. 12:04:17 12:04:17 :param pool_timeout: 12:04:17 If set and the pool is set to block=True, then this method will 12:04:17 block for ``pool_timeout`` seconds and raise EmptyPoolError if no 12:04:17 connection is available within the time period. 12:04:17 12:04:17 :param bool preload_content: 12:04:17 If True, the response's body will be preloaded into memory. 12:04:17 12:04:17 :param bool decode_content: 12:04:17 If True, will attempt to decode the body based on the 12:04:17 'content-encoding' header. 12:04:17 12:04:17 :param release_conn: 12:04:17 If False, then the urlopen call will not release the connection 12:04:17 back into the pool once a response is received (but will release if 12:04:17 you read the entire contents of the response such as when 12:04:17 `preload_content=True`). This is useful if you're not preloading 12:04:17 the response's content immediately. You will need to call 12:04:17 ``r.release_conn()`` on the response ``r`` to return the connection 12:04:17 back into the pool. If None, it takes the value of ``preload_content`` 12:04:17 which defaults to ``True``. 
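For test_08 the failing call is test_utils.mount_device("XPDR-A1", ...), which PUTs a netconf-node entry into topology-netconf so the controller opens a NETCONF session to the simulator. A sketch of that request using only the leaves fully visible in the captured body above; the real payload built by test_utils carries additional netconf-node-topology leaves that are truncated in this log:

import requests

URL = ("http://localhost:8183/rests/data/network-topology:network-topology"
       "/topology=topology-netconf/node=XPDR-A1")

# Partial body: only fields that are fully visible in the captured request body.
body = {"node": [{
    "node-id": "XPDR-A1",
    "netconf-node-topology:netconf-node": {
        "netconf-node-topology:host": "127.0.0.1",
        "netconf-node-topology:backoff-multiplier": 1.5,
        "netconf-node-topology:keepalive-delay": 120,
        # port, credentials and the remaining leaves are elided ("...") in the log
    },
}]}

# While the controller is down this raises requests.exceptions.ConnectionError
# instead of returning a success status.
response = requests.put(URL, json=body, auth=("admin", "admin"),
                        headers={"Content-Type": "application/json"}, timeout=(30, 30))
print(response.status_code)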
12:04:17 12:04:17 :param bool chunked: 12:04:17 If True, urllib3 will send the body using chunked transfer 12:04:17 encoding. Otherwise, urllib3 will send the body using the standard 12:04:17 content-length form. Defaults to False. 12:04:17 12:04:17 :param int body_pos: 12:04:17 Position to seek to in file-like body in the event of a retry or 12:04:17 redirect. Typically this won't need to be set because urllib3 will 12:04:17 auto-populate the value when needed. 12:04:17 """ 12:04:17 parsed_url = parse_url(url) 12:04:17 destination_scheme = parsed_url.scheme 12:04:17 12:04:17 if headers is None: 12:04:17 headers = self.headers 12:04:17 12:04:17 if not isinstance(retries, Retry): 12:04:17 retries = Retry.from_int(retries, redirect=redirect, default=self.retries) 12:04:17 12:04:17 if release_conn is None: 12:04:17 release_conn = preload_content 12:04:17 12:04:17 # Check host 12:04:17 if assert_same_host and not self.is_same_host(url): 12:04:17 raise HostChangedError(self, url, retries) 12:04:17 12:04:17 # Ensure that the URL we're connecting to is properly encoded 12:04:17 if url.startswith("/"): 12:04:17 url = to_str(_encode_target(url)) 12:04:17 else: 12:04:17 url = to_str(parsed_url.url) 12:04:17 12:04:17 conn = None 12:04:17 12:04:17 # Track whether `conn` needs to be released before 12:04:17 # returning/raising/recursing. Update this variable if necessary, and 12:04:17 # leave `release_conn` constant throughout the function. That way, if 12:04:17 # the function recurses, the original value of `release_conn` will be 12:04:17 # passed down into the recursive call, and its value will be respected. 12:04:17 # 12:04:17 # See issue #651 [1] for details. 12:04:17 # 12:04:17 # [1] 12:04:17 release_this_conn = release_conn 12:04:17 12:04:17 http_tunnel_required = connection_requires_http_tunnel( 12:04:17 self.proxy, self.proxy_config, destination_scheme 12:04:17 ) 12:04:17 12:04:17 # Merge the proxy headers. Only done when not using HTTP CONNECT. We 12:04:17 # have to copy the headers dict so we can safely change it without those 12:04:17 # changes being reflected in anyone else's copy. 12:04:17 if not http_tunnel_required: 12:04:17 headers = headers.copy() # type: ignore[attr-defined] 12:04:17 headers.update(self.proxy_headers) # type: ignore[union-attr] 12:04:17 12:04:17 # Must keep the exception bound to a separate variable or else Python 3 12:04:17 # complains about UnboundLocalError. 12:04:17 err = None 12:04:17 12:04:17 # Keep track of whether we cleanly exited the except block. This 12:04:17 # ensures we do proper cleanup in finally. 12:04:17 clean_exit = False 12:04:17 12:04:17 # Rewind body position, if needed. Record current position 12:04:17 # for future rewinds in the event of a redirect/retry. 12:04:17 body_pos = set_file_position(body, body_pos) 12:04:17 12:04:17 try: 12:04:17 # Request a connection from the queue. 12:04:17 timeout_obj = self._get_timeout(timeout) 12:04:17 conn = self._get_conn(timeout=pool_timeout) 12:04:17 12:04:17 conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] 12:04:17 12:04:17 # Is this a closed/new connection that requires CONNECT tunnelling? 
12:04:17 if self.proxy is not None and http_tunnel_required and conn.is_closed: 12:04:17 try: 12:04:17 self._prepare_proxy(conn) 12:04:17 except (BaseSSLError, OSError, SocketTimeout) as e: 12:04:17 self._raise_timeout( 12:04:17 err=e, url=self.proxy.url, timeout_value=conn.timeout 12:04:17 ) 12:04:17 raise 12:04:17 12:04:17 # If we're going to release the connection in ``finally:``, then 12:04:17 # the response doesn't need to know about the connection. Otherwise 12:04:17 # it will also try to release it and we'll have a double-release 12:04:17 # mess. 12:04:17 response_conn = conn if not release_conn else None 12:04:17 12:04:17 # Make the request on the HTTPConnection object 12:04:17 > response = self._make_request( 12:04:17 conn, 12:04:17 method, 12:04:17 url, 12:04:17 timeout=timeout_obj, 12:04:17 body=body, 12:04:17 headers=headers, 12:04:17 chunked=chunked, 12:04:17 retries=retries, 12:04:17 response_conn=response_conn, 12:04:17 preload_content=preload_content, 12:04:17 decode_content=decode_content, 12:04:17 **response_kw, 12:04:17 ) 12:04:17 12:04:17 ../.tox/tests221/lib/python3.11/site-packages/urllib3/connectionpool.py:787: 12:04:17 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 12:04:17 ../.tox/tests221/lib/python3.11/site-packages/urllib3/connectionpool.py:493: in _make_request 12:04:17 conn.request( 12:04:17 ../.tox/tests221/lib/python3.11/site-packages/urllib3/connection.py:494: in request 12:04:17 self.endheaders() 12:04:17 /opt/pyenv/versions/3.11.10/lib/python3.11/http/client.py:1298: in endheaders 12:04:17 self._send_output(message_body, encode_chunked=encode_chunked) 12:04:17 /opt/pyenv/versions/3.11.10/lib/python3.11/http/client.py:1058: in _send_output 12:04:17 self.send(msg) 12:04:17 /opt/pyenv/versions/3.11.10/lib/python3.11/http/client.py:996: in send 12:04:17 self.connect() 12:04:17 ../.tox/tests221/lib/python3.11/site-packages/urllib3/connection.py:325: in connect 12:04:17 self.sock = self._new_conn() 12:04:17 ^^^^^^^^^^^^^^^^ 12:04:17 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 12:04:17 12:04:17 self = 12:04:17 12:04:17 def _new_conn(self) -> socket.socket: 12:04:17 """Establish a socket connection and set nodelay settings on it. 12:04:17 12:04:17 :return: New socket connection. 12:04:17 """ 12:04:17 try: 12:04:17 sock = connection.create_connection( 12:04:17 (self._dns_host, self.port), 12:04:17 self.timeout, 12:04:17 source_address=self.source_address, 12:04:17 socket_options=self.socket_options, 12:04:17 ) 12:04:17 except socket.gaierror as e: 12:04:17 raise NameResolutionError(self.host, self, e) from e 12:04:17 except SocketTimeout as e: 12:04:17 raise ConnectTimeoutError( 12:04:17 self, 12:04:17 f"Connection to {self.host} timed out. 
(connect timeout={self.timeout})", 12:04:17 ) from e 12:04:17 12:04:17 except OSError as e: 12:04:17 > raise NewConnectionError( 12:04:17 self, f"Failed to establish a new connection: {e}" 12:04:17 ) from e 12:04:17 E urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused 12:04:17 12:04:17 ../.tox/tests221/lib/python3.11/site-packages/urllib3/connection.py:213: NewConnectionError 12:04:17 12:04:17 The above exception was the direct cause of the following exception: 12:04:17 12:04:17 self = 12:04:17 request = , stream = False 12:04:17 timeout = Timeout(connect=30, read=30, total=None), verify = True, cert = None 12:04:17 proxies = OrderedDict() 12:04:17 12:04:17 def send( 12:04:17 self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 12:04:17 ): 12:04:17 """Sends PreparedRequest object. Returns Response object. 12:04:17 12:04:17 :param request: The :class:`PreparedRequest ` being sent. 12:04:17 :param stream: (optional) Whether to stream the request content. 12:04:17 :param timeout: (optional) How long to wait for the server to send 12:04:17 data before giving up, as a float, or a :ref:`(connect timeout, 12:04:17 read timeout) ` tuple. 12:04:17 :type timeout: float or tuple or urllib3 Timeout object 12:04:17 :param verify: (optional) Either a boolean, in which case it controls whether 12:04:17 we verify the server's TLS certificate, or a string, in which case it 12:04:17 must be a path to a CA bundle to use 12:04:17 :param cert: (optional) Any user-provided SSL certificate to be trusted. 12:04:17 :param proxies: (optional) The proxies dictionary to apply to the request. 12:04:17 :rtype: requests.Response 12:04:17 """ 12:04:17 12:04:17 try: 12:04:17 conn = self.get_connection_with_tls_context( 12:04:17 request, verify, proxies=proxies, cert=cert 12:04:17 ) 12:04:17 except LocationValueError as e: 12:04:17 raise InvalidURL(e, request=request) 12:04:17 12:04:17 self.cert_verify(conn, request.url, verify, cert) 12:04:17 url = self.request_url(request, proxies) 12:04:17 self.add_headers( 12:04:17 request, 12:04:17 stream=stream, 12:04:17 timeout=timeout, 12:04:17 verify=verify, 12:04:17 cert=cert, 12:04:17 proxies=proxies, 12:04:17 ) 12:04:17 12:04:17 chunked = not (request.body is None or "Content-Length" in request.headers) 12:04:17 12:04:17 if isinstance(timeout, tuple): 12:04:17 try: 12:04:17 connect, read = timeout 12:04:17 timeout = TimeoutSauce(connect=connect, read=read) 12:04:17 except ValueError: 12:04:17 raise ValueError( 12:04:17 f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 12:04:17 f"or a single float to set both timeouts to the same value." 
12:04:17 ) 12:04:17 elif isinstance(timeout, TimeoutSauce): 12:04:17 pass 12:04:17 else: 12:04:17 timeout = TimeoutSauce(connect=timeout, read=timeout) 12:04:17 12:04:17 try: 12:04:17 > resp = conn.urlopen( 12:04:17 method=request.method, 12:04:17 url=url, 12:04:17 body=request.body, 12:04:17 headers=request.headers, 12:04:17 redirect=False, 12:04:17 assert_same_host=False, 12:04:17 preload_content=False, 12:04:17 decode_content=False, 12:04:17 retries=self.max_retries, 12:04:17 timeout=timeout, 12:04:17 chunked=chunked, 12:04:17 ) 12:04:17 12:04:17 ../.tox/tests221/lib/python3.11/site-packages/requests/adapters.py:667: 12:04:17 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 12:04:17 ../.tox/tests221/lib/python3.11/site-packages/urllib3/connectionpool.py:841: in urlopen 12:04:17 retries = retries.increment( 12:04:17 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 12:04:17 12:04:17 self = Retry(total=0, connect=None, read=False, redirect=None, status=None) 12:04:17 method = 'PUT' 12:04:17 url = '/rests/data/network-topology:network-topology/topology=topology-netconf/node=XPDR-A1' 12:04:17 response = None 12:04:17 error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused') 12:04:17 _pool = 12:04:17 _stacktrace = 12:04:17 12:04:17 def increment( 12:04:17 self, 12:04:17 method: str | None = None, 12:04:17 url: str | None = None, 12:04:17 response: BaseHTTPResponse | None = None, 12:04:17 error: Exception | None = None, 12:04:17 _pool: ConnectionPool | None = None, 12:04:17 _stacktrace: TracebackType | None = None, 12:04:17 ) -> Self: 12:04:17 """Return a new Retry object with incremented retry counters. 12:04:17 12:04:17 :param response: A response object, or None, if the server did not 12:04:17 return a response. 12:04:17 :type response: :class:`~urllib3.response.BaseHTTPResponse` 12:04:17 :param Exception error: An error encountered during the request, or 12:04:17 None if the response was received successfully. 12:04:17 12:04:17 :return: A new ``Retry`` object. 12:04:17 """ 12:04:17 if self.total is False and error: 12:04:17 # Disabled, indicate to re-raise the error. 12:04:17 raise reraise(type(error), error, _stacktrace) 12:04:17 12:04:17 total = self.total 12:04:17 if total is not None: 12:04:17 total -= 1 12:04:17 12:04:17 connect = self.connect 12:04:17 read = self.read 12:04:17 redirect = self.redirect 12:04:17 status_count = self.status 12:04:17 other = self.other 12:04:17 cause = "unknown" 12:04:17 status = None 12:04:17 redirect_location = None 12:04:17 12:04:17 if error and self._is_connection_error(error): 12:04:17 # Connect retry? 12:04:17 if connect is False: 12:04:17 raise reraise(type(error), error, _stacktrace) 12:04:17 elif connect is not None: 12:04:17 connect -= 1 12:04:17 12:04:17 elif error and self._is_read_error(error): 12:04:17 # Read retry? 12:04:17 if read is False or method is None or not self._is_method_retryable(method): 12:04:17 raise reraise(type(error), error, _stacktrace) 12:04:17 elif read is not None: 12:04:17 read -= 1 12:04:17 12:04:17 elif error: 12:04:17 # Other retry? 12:04:17 if other is not None: 12:04:17 other -= 1 12:04:17 12:04:17 elif response and response.get_redirect_location(): 12:04:17 # Redirect retry? 
12:04:17 if redirect is not None: 12:04:17 redirect -= 1 12:04:17 cause = "too many redirects" 12:04:17 response_redirect_location = response.get_redirect_location() 12:04:17 if response_redirect_location: 12:04:17 redirect_location = response_redirect_location 12:04:17 status = response.status 12:04:17 12:04:17 else: 12:04:17 # Incrementing because of a server error like a 500 in 12:04:17 # status_forcelist and the given method is in the allowed_methods 12:04:17 cause = ResponseError.GENERIC_ERROR 12:04:17 if response and response.status: 12:04:17 if status_count is not None: 12:04:17 status_count -= 1 12:04:17 cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status) 12:04:17 status = response.status 12:04:17 12:04:17 history = self.history + ( 12:04:17 RequestHistory(method, url, error, status, redirect_location), 12:04:17 ) 12:04:17 12:04:17 new_retry = self.new( 12:04:17 total=total, 12:04:17 connect=connect, 12:04:17 read=read, 12:04:17 redirect=redirect, 12:04:17 status=status_count, 12:04:17 other=other, 12:04:17 history=history, 12:04:17 ) 12:04:17 12:04:17 if new_retry.is_exhausted(): 12:04:17 reason = error or ResponseError(cause) 12:04:17 > raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type] 12:04:17 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 12:04:17 E urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=8183): Max retries exceeded with url: /rests/data/network-topology:network-topology/topology=topology-netconf/node=XPDR-A1 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 12:04:17 12:04:17 ../.tox/tests221/lib/python3.11/site-packages/urllib3/util/retry.py:519: MaxRetryError 12:04:17 12:04:17 During handling of the above exception, another exception occurred: 12:04:17 12:04:17 self = 12:04:17 12:04:17 def test_08_xpdr_device_connection(self): 12:04:17 > response = test_utils.mount_device("XPDR-A1", ('xpdra', self.NODE_VERSION)) 12:04:17 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 12:04:17 12:04:17 transportpce_tests/2.2.1/test01_portmapping.py:109: 12:04:17 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 12:04:17 transportpce_tests/common/test_utils.py:362: in mount_device 12:04:17 response = put_request(url[RESTCONF_VERSION].format('{}', node), body) 12:04:17 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 12:04:17 transportpce_tests/common/test_utils.py:124: in put_request 12:04:17 return requests.request( 12:04:17 ../.tox/tests221/lib/python3.11/site-packages/requests/api.py:59: in request 12:04:17 return session.request(method=method, url=url, **kwargs) 12:04:17 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 12:04:17 ../.tox/tests221/lib/python3.11/site-packages/requests/sessions.py:589: in request 12:04:17 resp = self.send(prep, **send_kwargs) 12:04:17 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 12:04:17 ../.tox/tests221/lib/python3.11/site-packages/requests/sessions.py:703: in send 12:04:17 r = adapter.send(request, **kwargs) 12:04:17 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 12:04:17 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 12:04:17 12:04:17 self = 12:04:17 request = , stream = False 12:04:17 timeout = Timeout(connect=30, read=30, total=None), verify = True, cert = None 12:04:17 proxies = OrderedDict() 12:04:17 12:04:17 def send( 12:04:17 self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 12:04:17 ): 12:04:17 """Sends 
PreparedRequest object. Returns Response object. 12:04:17 12:04:17 :param request: The :class:`PreparedRequest ` being sent. 12:04:17 :param stream: (optional) Whether to stream the request content. 12:04:17 :param timeout: (optional) How long to wait for the server to send 12:04:17 data before giving up, as a float, or a :ref:`(connect timeout, 12:04:17 read timeout) ` tuple. 12:04:17 :type timeout: float or tuple or urllib3 Timeout object 12:04:17 :param verify: (optional) Either a boolean, in which case it controls whether 12:04:17 we verify the server's TLS certificate, or a string, in which case it 12:04:17 must be a path to a CA bundle to use 12:04:17 :param cert: (optional) Any user-provided SSL certificate to be trusted. 12:04:17 :param proxies: (optional) The proxies dictionary to apply to the request. 12:04:17 :rtype: requests.Response 12:04:17 """ 12:04:17 12:04:17 try: 12:04:17 conn = self.get_connection_with_tls_context( 12:04:17 request, verify, proxies=proxies, cert=cert 12:04:17 ) 12:04:17 except LocationValueError as e: 12:04:17 raise InvalidURL(e, request=request) 12:04:17 12:04:17 self.cert_verify(conn, request.url, verify, cert) 12:04:17 url = self.request_url(request, proxies) 12:04:17 self.add_headers( 12:04:17 request, 12:04:17 stream=stream, 12:04:17 timeout=timeout, 12:04:17 verify=verify, 12:04:17 cert=cert, 12:04:17 proxies=proxies, 12:04:17 ) 12:04:17 12:04:17 chunked = not (request.body is None or "Content-Length" in request.headers) 12:04:17 12:04:17 if isinstance(timeout, tuple): 12:04:17 try: 12:04:17 connect, read = timeout 12:04:17 timeout = TimeoutSauce(connect=connect, read=read) 12:04:17 except ValueError: 12:04:17 raise ValueError( 12:04:17 f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 12:04:17 f"or a single float to set both timeouts to the same value." 12:04:17 ) 12:04:17 elif isinstance(timeout, TimeoutSauce): 12:04:17 pass 12:04:17 else: 12:04:17 timeout = TimeoutSauce(connect=timeout, read=timeout) 12:04:17 12:04:17 try: 12:04:17 resp = conn.urlopen( 12:04:17 method=request.method, 12:04:17 url=url, 12:04:17 body=request.body, 12:04:17 headers=request.headers, 12:04:17 redirect=False, 12:04:17 assert_same_host=False, 12:04:17 preload_content=False, 12:04:17 decode_content=False, 12:04:17 retries=self.max_retries, 12:04:17 timeout=timeout, 12:04:17 chunked=chunked, 12:04:17 ) 12:04:17 12:04:17 except (ProtocolError, OSError) as err: 12:04:17 raise ConnectionError(err, request=request) 12:04:17 12:04:17 except MaxRetryError as e: 12:04:17 if isinstance(e.reason, ConnectTimeoutError): 12:04:17 # TODO: Remove this in 3.0.0: see #2811 12:04:17 if not isinstance(e.reason, NewConnectionError): 12:04:17 raise ConnectTimeout(e, request=request) 12:04:17 12:04:17 if isinstance(e.reason, ResponseError): 12:04:17 raise RetryError(e, request=request) 12:04:17 12:04:17 if isinstance(e.reason, _ProxyError): 12:04:17 raise ProxyError(e, request=request) 12:04:17 12:04:17 if isinstance(e.reason, _SSLError): 12:04:17 # This branch is for urllib3 v1.22 and later. 
12:04:17 raise SSLError(e, request=request) 12:04:17 12:04:17 > raise ConnectionError(e, request=request) 12:04:17 E requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=8183): Max retries exceeded with url: /rests/data/network-topology:network-topology/topology=topology-netconf/node=XPDR-A1 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 12:04:17 12:04:17 ../.tox/tests221/lib/python3.11/site-packages/requests/adapters.py:700: ConnectionError 12:04:17 ----------------------------- Captured stdout call ----------------------------- 12:04:17 execution of test_08_xpdr_device_connection 12:04:17 _________ TransportPCEPortMappingTesting.test_09_xpdr_device_connected _________ 12:04:17 12:04:17 self = 12:04:17 12:04:17 def _new_conn(self) -> socket.socket: 12:04:17 """Establish a socket connection and set nodelay settings on it. 12:04:17 12:04:17 :return: New socket connection. 12:04:17 """ 12:04:17 try: 12:04:17 > sock = connection.create_connection( 12:04:17 (self._dns_host, self.port), 12:04:17 self.timeout, 12:04:17 source_address=self.source_address, 12:04:17 socket_options=self.socket_options, 12:04:17 ) 12:04:17 12:04:17 ../.tox/tests221/lib/python3.11/site-packages/urllib3/connection.py:198: 12:04:17 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 12:04:17 ../.tox/tests221/lib/python3.11/site-packages/urllib3/util/connection.py:85: in create_connection 12:04:17 raise err 12:04:17 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 12:04:17 12:04:17 address = ('localhost', 8183), timeout = 30, source_address = None 12:04:17 socket_options = [(6, 1, 1)] 12:04:17 12:04:17 def create_connection( 12:04:17 address: tuple[str, int], 12:04:17 timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 12:04:17 source_address: tuple[str, int] | None = None, 12:04:17 socket_options: _TYPE_SOCKET_OPTIONS | None = None, 12:04:17 ) -> socket.socket: 12:04:17 """Connect to *address* and return the socket object. 12:04:17 12:04:17 Convenience function. Connect to *address* (a 2-tuple ``(host, 12:04:17 port)``) and return the socket object. Passing the optional 12:04:17 *timeout* parameter will set the timeout on the socket instance 12:04:17 before attempting to connect. If no *timeout* is supplied, the 12:04:17 global default timeout setting returned by :func:`socket.getdefaulttimeout` 12:04:17 is used. If *source_address* is set it must be a tuple of (host, port) 12:04:17 for the socket to bind as a source address before making the connection. 12:04:17 An host of '' or port 0 tells the OS to use the default. 12:04:17 """ 12:04:17 12:04:17 host, port = address 12:04:17 if host.startswith("["): 12:04:17 host = host.strip("[]") 12:04:17 err = None 12:04:17 12:04:17 # Using the value from allowed_gai_family() in the context of getaddrinfo lets 12:04:17 # us select whether to work with IPv4 DNS records, IPv6 records, or both. 12:04:17 # The original create_connection function always returns all records. 
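Every failure bottoms out in create_connection() receiving ECONNREFUSED from the kernel, i.e. nothing is listening on the RESTCONF port when the suite runs. A quick stand-alone check of the port:

import socket

with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock:
    sock.settimeout(5)
    # 0 means something is listening on the RESTCONF port;
    # 111 (ECONNREFUSED) matches the errors in this log.
    print(sock.connect_ex(("localhost", 8183)))

# For reference, the socket_options=[(6, 1, 1)] in the locals above are
# (socket.IPPROTO_TCP, socket.TCP_NODELAY, 1) on Linux: urllib3 disables
# Nagle's algorithm on every new connection.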
12:04:17 family = allowed_gai_family() 12:04:17 12:04:17 try: 12:04:17 host.encode("idna") 12:04:17 except UnicodeError: 12:04:17 raise LocationParseError(f"'{host}', label empty or too long") from None 12:04:17 12:04:17 for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM): 12:04:17 af, socktype, proto, canonname, sa = res 12:04:17 sock = None 12:04:17 try: 12:04:17 sock = socket.socket(af, socktype, proto) 12:04:17 12:04:17 # If provided, set socket level options before connecting. 12:04:17 _set_socket_options(sock, socket_options) 12:04:17 12:04:17 if timeout is not _DEFAULT_TIMEOUT: 12:04:17 sock.settimeout(timeout) 12:04:17 if source_address: 12:04:17 sock.bind(source_address) 12:04:17 > sock.connect(sa) 12:04:17 E ConnectionRefusedError: [Errno 111] Connection refused 12:04:17 12:04:17 ../.tox/tests221/lib/python3.11/site-packages/urllib3/util/connection.py:73: ConnectionRefusedError 12:04:17 12:04:17 The above exception was the direct cause of the following exception: 12:04:17 12:04:17 self = 12:04:17 method = 'GET' 12:04:17 url = '/rests/data/network-topology:network-topology/topology=topology-netconf/node=XPDR-A1?content=nonconfig' 12:04:17 body = None 12:04:17 headers = {'User-Agent': 'python-requests/2.32.4', 'Accept-Encoding': 'gzip, deflate', 'Accept': 'application/json', 'Connection': 'keep-alive', 'Content-Type': 'application/json', 'Authorization': 'Basic YWRtaW46YWRtaW4='} 12:04:17 retries = Retry(total=0, connect=None, read=False, redirect=None, status=None) 12:04:17 redirect = False, assert_same_host = False 12:04:17 timeout = Timeout(connect=30, read=30, total=None), pool_timeout = None 12:04:17 release_conn = False, chunked = False, body_pos = None, preload_content = False 12:04:17 decode_content = False, response_kw = {} 12:04:17 parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/rests/data/network-topology:network-topology/topology=topology-netconf/node=XPDR-A1', query='content=nonconfig', fragment=None) 12:04:17 destination_scheme = None, conn = None, release_this_conn = True 12:04:17 http_tunnel_required = False, err = None, clean_exit = False 12:04:17 12:04:17 def urlopen( # type: ignore[override] 12:04:17 self, 12:04:17 method: str, 12:04:17 url: str, 12:04:17 body: _TYPE_BODY | None = None, 12:04:17 headers: typing.Mapping[str, str] | None = None, 12:04:17 retries: Retry | bool | int | None = None, 12:04:17 redirect: bool = True, 12:04:17 assert_same_host: bool = True, 12:04:17 timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 12:04:17 pool_timeout: int | None = None, 12:04:17 release_conn: bool | None = None, 12:04:17 chunked: bool = False, 12:04:17 body_pos: _TYPE_BODY_POSITION | None = None, 12:04:17 preload_content: bool = True, 12:04:17 decode_content: bool = True, 12:04:17 **response_kw: typing.Any, 12:04:17 ) -> BaseHTTPResponse: 12:04:17 """ 12:04:17 Get a connection from the pool and perform an HTTP request. This is the 12:04:17 lowest level call for making a request, so you'll need to specify all 12:04:17 the raw details. 12:04:17 12:04:17 .. note:: 12:04:17 12:04:17 More commonly, it's appropriate to use a convenience method 12:04:17 such as :meth:`request`. 12:04:17 12:04:17 .. note:: 12:04:17 12:04:17 `release_conn` will only behave as expected if 12:04:17 `preload_content=False` because we want to make 12:04:17 `preload_content=False` the default behaviour someday soon without 12:04:17 breaking backwards compatibility. 12:04:17 12:04:17 :param method: 12:04:17 HTTP request method (such as GET, POST, PUT, etc.) 
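test_09_xpdr_device_connected reads the same topology-netconf node back with content=nonconfig, i.e. from the operational datastore, normally to verify that the NETCONF session reached the connected state. The exact JSON keys inspected by test_utils are not shown in this log, so this minimal sketch only prints the payload:

import requests

URL = ("http://localhost:8183/rests/data/network-topology:network-topology"
       "/topology=topology-netconf/node=XPDR-A1?content=nonconfig")

response = requests.get(URL, auth=("admin", "admin"),
                        headers={"Accept": "application/json"}, timeout=(30, 30))
# The non-config (operational) view is where the netconf connection status would appear.
print(response.status_code, response.json())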
12:04:17 12:04:17 :param url: 12:04:17 The URL to perform the request on. 12:04:17 12:04:17 :param body: 12:04:17 Data to send in the request body, either :class:`str`, :class:`bytes`, 12:04:17 an iterable of :class:`str`/:class:`bytes`, or a file-like object. 12:04:17 12:04:17 :param headers: 12:04:17 Dictionary of custom headers to send, such as User-Agent, 12:04:17 If-None-Match, etc. If None, pool headers are used. If provided, 12:04:17 these headers completely replace any pool-specific headers. 12:04:17 12:04:17 :param retries: 12:04:17 Configure the number of retries to allow before raising a 12:04:17 :class:`~urllib3.exceptions.MaxRetryError` exception. 12:04:17 12:04:17 If ``None`` (default) will retry 3 times, see ``Retry.DEFAULT``. Pass a 12:04:17 :class:`~urllib3.util.retry.Retry` object for fine-grained control 12:04:17 over different types of retries. 12:04:17 Pass an integer number to retry connection errors that many times, 12:04:17 but no other types of errors. Pass zero to never retry. 12:04:17 12:04:17 If ``False``, then retries are disabled and any exception is raised 12:04:17 immediately. Also, instead of raising a MaxRetryError on redirects, 12:04:17 the redirect response will be returned. 12:04:17 12:04:17 :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. 12:04:17 12:04:17 :param redirect: 12:04:17 If True, automatically handle redirects (status codes 301, 302, 12:04:17 303, 307, 308). Each redirect counts as a retry. Disabling retries 12:04:17 will disable redirect, too. 12:04:17 12:04:17 :param assert_same_host: 12:04:17 If ``True``, will make sure that the host of the pool requests is 12:04:17 consistent else will raise HostChangedError. When ``False``, you can 12:04:17 use the pool on an HTTP proxy and request foreign hosts. 12:04:17 12:04:17 :param timeout: 12:04:17 If specified, overrides the default timeout for this one 12:04:17 request. It may be a float (in seconds) or an instance of 12:04:17 :class:`urllib3.util.Timeout`. 12:04:17 12:04:17 :param pool_timeout: 12:04:17 If set and the pool is set to block=True, then this method will 12:04:17 block for ``pool_timeout`` seconds and raise EmptyPoolError if no 12:04:17 connection is available within the time period. 12:04:17 12:04:17 :param bool preload_content: 12:04:17 If True, the response's body will be preloaded into memory. 12:04:17 12:04:17 :param bool decode_content: 12:04:17 If True, will attempt to decode the body based on the 12:04:17 'content-encoding' header. 12:04:17 12:04:17 :param release_conn: 12:04:17 If False, then the urlopen call will not release the connection 12:04:17 back into the pool once a response is received (but will release if 12:04:17 you read the entire contents of the response such as when 12:04:17 `preload_content=True`). This is useful if you're not preloading 12:04:17 the response's content immediately. You will need to call 12:04:17 ``r.release_conn()`` on the response ``r`` to return the connection 12:04:17 back into the pool. If None, it takes the value of ``preload_content`` 12:04:17 which defaults to ``True``. 12:04:17 12:04:17 :param bool chunked: 12:04:17 If True, urllib3 will send the body using chunked transfer 12:04:17 encoding. Otherwise, urllib3 will send the body using the standard 12:04:17 content-length form. Defaults to False. 12:04:17 12:04:17 :param int body_pos: 12:04:17 Position to seek to in file-like body in the event of a retry or 12:04:17 redirect. 
Typically this won't need to be set because urllib3 will 12:04:17 auto-populate the value when needed. 12:04:17 """ 12:04:17 parsed_url = parse_url(url) 12:04:17 destination_scheme = parsed_url.scheme 12:04:17 12:04:17 if headers is None: 12:04:17 headers = self.headers 12:04:17 12:04:17 if not isinstance(retries, Retry): 12:04:17 retries = Retry.from_int(retries, redirect=redirect, default=self.retries) 12:04:17 12:04:17 if release_conn is None: 12:04:17 release_conn = preload_content 12:04:17 12:04:17 # Check host 12:04:17 if assert_same_host and not self.is_same_host(url): 12:04:17 raise HostChangedError(self, url, retries) 12:04:17 12:04:17 # Ensure that the URL we're connecting to is properly encoded 12:04:17 if url.startswith("/"): 12:04:17 url = to_str(_encode_target(url)) 12:04:17 else: 12:04:17 url = to_str(parsed_url.url) 12:04:17 12:04:17 conn = None 12:04:17 12:04:17 # Track whether `conn` needs to be released before 12:04:17 # returning/raising/recursing. Update this variable if necessary, and 12:04:17 # leave `release_conn` constant throughout the function. That way, if 12:04:17 # the function recurses, the original value of `release_conn` will be 12:04:17 # passed down into the recursive call, and its value will be respected. 12:04:17 # 12:04:17 # See issue #651 [1] for details. 12:04:17 # 12:04:17 # [1] 12:04:17 release_this_conn = release_conn 12:04:17 12:04:17 http_tunnel_required = connection_requires_http_tunnel( 12:04:17 self.proxy, self.proxy_config, destination_scheme 12:04:17 ) 12:04:17 12:04:17 # Merge the proxy headers. Only done when not using HTTP CONNECT. We 12:04:17 # have to copy the headers dict so we can safely change it without those 12:04:17 # changes being reflected in anyone else's copy. 12:04:17 if not http_tunnel_required: 12:04:17 headers = headers.copy() # type: ignore[attr-defined] 12:04:17 headers.update(self.proxy_headers) # type: ignore[union-attr] 12:04:17 12:04:17 # Must keep the exception bound to a separate variable or else Python 3 12:04:17 # complains about UnboundLocalError. 12:04:17 err = None 12:04:17 12:04:17 # Keep track of whether we cleanly exited the except block. This 12:04:17 # ensures we do proper cleanup in finally. 12:04:17 clean_exit = False 12:04:17 12:04:17 # Rewind body position, if needed. Record current position 12:04:17 # for future rewinds in the event of a redirect/retry. 12:04:17 body_pos = set_file_position(body, body_pos) 12:04:17 12:04:17 try: 12:04:17 # Request a connection from the queue. 12:04:17 timeout_obj = self._get_timeout(timeout) 12:04:17 conn = self._get_conn(timeout=pool_timeout) 12:04:17 12:04:17 conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] 12:04:17 12:04:17 # Is this a closed/new connection that requires CONNECT tunnelling? 12:04:17 if self.proxy is not None and http_tunnel_required and conn.is_closed: 12:04:17 try: 12:04:17 self._prepare_proxy(conn) 12:04:17 except (BaseSSLError, OSError, SocketTimeout) as e: 12:04:17 self._raise_timeout( 12:04:17 err=e, url=self.proxy.url, timeout_value=conn.timeout 12:04:17 ) 12:04:17 raise 12:04:17 12:04:17 # If we're going to release the connection in ``finally:``, then 12:04:17 # the response doesn't need to know about the connection. Otherwise 12:04:17 # it will also try to release it and we'll have a double-release 12:04:17 # mess. 
12:04:17 response_conn = conn if not release_conn else None 12:04:17 12:04:17 # Make the request on the HTTPConnection object 12:04:17 > response = self._make_request( 12:04:17 conn, 12:04:17 method, 12:04:17 url, 12:04:17 timeout=timeout_obj, 12:04:17 body=body, 12:04:17 headers=headers, 12:04:17 chunked=chunked, 12:04:17 retries=retries, 12:04:17 response_conn=response_conn, 12:04:17 preload_content=preload_content, 12:04:17 decode_content=decode_content, 12:04:17 **response_kw, 12:04:17 ) 12:04:17 12:04:17 ../.tox/tests221/lib/python3.11/site-packages/urllib3/connectionpool.py:787: 12:04:17 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 12:04:17 ../.tox/tests221/lib/python3.11/site-packages/urllib3/connectionpool.py:493: in _make_request 12:04:17 conn.request( 12:04:17 ../.tox/tests221/lib/python3.11/site-packages/urllib3/connection.py:494: in request 12:04:17 self.endheaders() 12:04:17 /opt/pyenv/versions/3.11.10/lib/python3.11/http/client.py:1298: in endheaders 12:04:17 self._send_output(message_body, encode_chunked=encode_chunked) 12:04:17 /opt/pyenv/versions/3.11.10/lib/python3.11/http/client.py:1058: in _send_output 12:04:17 self.send(msg) 12:04:17 /opt/pyenv/versions/3.11.10/lib/python3.11/http/client.py:996: in send 12:04:17 self.connect() 12:04:17 ../.tox/tests221/lib/python3.11/site-packages/urllib3/connection.py:325: in connect 12:04:17 self.sock = self._new_conn() 12:04:17 ^^^^^^^^^^^^^^^^ 12:04:17 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 12:04:17 12:04:17 self = 12:04:17 12:04:17 def _new_conn(self) -> socket.socket: 12:04:17 """Establish a socket connection and set nodelay settings on it. 12:04:17 12:04:17 :return: New socket connection. 12:04:17 """ 12:04:17 try: 12:04:17 sock = connection.create_connection( 12:04:17 (self._dns_host, self.port), 12:04:17 self.timeout, 12:04:17 source_address=self.source_address, 12:04:17 socket_options=self.socket_options, 12:04:17 ) 12:04:17 except socket.gaierror as e: 12:04:17 raise NameResolutionError(self.host, self, e) from e 12:04:17 except SocketTimeout as e: 12:04:17 raise ConnectTimeoutError( 12:04:17 self, 12:04:17 f"Connection to {self.host} timed out. (connect timeout={self.timeout})", 12:04:17 ) from e 12:04:17 12:04:17 except OSError as e: 12:04:17 > raise NewConnectionError( 12:04:17 self, f"Failed to establish a new connection: {e}" 12:04:17 ) from e 12:04:17 E urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused 12:04:17 12:04:17 ../.tox/tests221/lib/python3.11/site-packages/urllib3/connection.py:213: NewConnectionError 12:04:17 12:04:17 The above exception was the direct cause of the following exception: 12:04:17 12:04:17 self = 12:04:17 request = , stream = False 12:04:17 timeout = Timeout(connect=30, read=30, total=None), verify = True, cert = None 12:04:17 proxies = OrderedDict() 12:04:17 12:04:17 def send( 12:04:17 self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 12:04:17 ): 12:04:17 """Sends PreparedRequest object. Returns Response object. 12:04:17 12:04:17 :param request: The :class:`PreparedRequest ` being sent. 12:04:17 :param stream: (optional) Whether to stream the request content. 12:04:17 :param timeout: (optional) How long to wait for the server to send 12:04:17 data before giving up, as a float, or a :ref:`(connect timeout, 12:04:17 read timeout) ` tuple. 
12:04:17 :type timeout: float or tuple or urllib3 Timeout object 12:04:17 :param verify: (optional) Either a boolean, in which case it controls whether 12:04:17 we verify the server's TLS certificate, or a string, in which case it 12:04:17 must be a path to a CA bundle to use 12:04:17 :param cert: (optional) Any user-provided SSL certificate to be trusted. 12:04:17 :param proxies: (optional) The proxies dictionary to apply to the request. 12:04:17 :rtype: requests.Response 12:04:17 """ 12:04:17 12:04:17 try: 12:04:17 conn = self.get_connection_with_tls_context( 12:04:17 request, verify, proxies=proxies, cert=cert 12:04:17 ) 12:04:17 except LocationValueError as e: 12:04:17 raise InvalidURL(e, request=request) 12:04:17 12:04:17 self.cert_verify(conn, request.url, verify, cert) 12:04:17 url = self.request_url(request, proxies) 12:04:17 self.add_headers( 12:04:17 request, 12:04:17 stream=stream, 12:04:17 timeout=timeout, 12:04:17 verify=verify, 12:04:17 cert=cert, 12:04:17 proxies=proxies, 12:04:17 ) 12:04:17 12:04:17 chunked = not (request.body is None or "Content-Length" in request.headers) 12:04:17 12:04:17 if isinstance(timeout, tuple): 12:04:17 try: 12:04:17 connect, read = timeout 12:04:17 timeout = TimeoutSauce(connect=connect, read=read) 12:04:17 except ValueError: 12:04:17 raise ValueError( 12:04:17 f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 12:04:17 f"or a single float to set both timeouts to the same value." 12:04:17 ) 12:04:17 elif isinstance(timeout, TimeoutSauce): 12:04:17 pass 12:04:17 else: 12:04:17 timeout = TimeoutSauce(connect=timeout, read=timeout) 12:04:17 12:04:17 try: 12:04:17 > resp = conn.urlopen( 12:04:17 method=request.method, 12:04:17 url=url, 12:04:17 body=request.body, 12:04:17 headers=request.headers, 12:04:17 redirect=False, 12:04:17 assert_same_host=False, 12:04:17 preload_content=False, 12:04:17 decode_content=False, 12:04:17 retries=self.max_retries, 12:04:17 timeout=timeout, 12:04:17 chunked=chunked, 12:04:17 ) 12:04:17 12:04:17 ../.tox/tests221/lib/python3.11/site-packages/requests/adapters.py:667: 12:04:17 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 12:04:17 ../.tox/tests221/lib/python3.11/site-packages/urllib3/connectionpool.py:841: in urlopen 12:04:17 retries = retries.increment( 12:04:17 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 12:04:17 12:04:17 self = Retry(total=0, connect=None, read=False, redirect=None, status=None) 12:04:17 method = 'GET' 12:04:17 url = '/rests/data/network-topology:network-topology/topology=topology-netconf/node=XPDR-A1?content=nonconfig' 12:04:17 response = None 12:04:17 error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused') 12:04:17 _pool = 12:04:17 _stacktrace = 12:04:17 12:04:17 def increment( 12:04:17 self, 12:04:17 method: str | None = None, 12:04:17 url: str | None = None, 12:04:17 response: BaseHTTPResponse | None = None, 12:04:17 error: Exception | None = None, 12:04:17 _pool: ConnectionPool | None = None, 12:04:17 _stacktrace: TracebackType | None = None, 12:04:17 ) -> Self: 12:04:17 """Return a new Retry object with incremented retry counters. 12:04:17 12:04:17 :param response: A response object, or None, if the server did not 12:04:17 return a response. 12:04:17 :type response: :class:`~urllib3.response.BaseHTTPResponse` 12:04:17 :param Exception error: An error encountered during the request, or 12:04:17 None if the response was received successfully. 
12:04:17 12:04:17 :return: A new ``Retry`` object. 12:04:17 """ 12:04:17 if self.total is False and error: 12:04:17 # Disabled, indicate to re-raise the error. 12:04:17 raise reraise(type(error), error, _stacktrace) 12:04:17 12:04:17 total = self.total 12:04:17 if total is not None: 12:04:17 total -= 1 12:04:17 12:04:17 connect = self.connect 12:04:17 read = self.read 12:04:17 redirect = self.redirect 12:04:17 status_count = self.status 12:04:17 other = self.other 12:04:17 cause = "unknown" 12:04:17 status = None 12:04:17 redirect_location = None 12:04:17 12:04:17 if error and self._is_connection_error(error): 12:04:17 # Connect retry? 12:04:17 if connect is False: 12:04:17 raise reraise(type(error), error, _stacktrace) 12:04:17 elif connect is not None: 12:04:17 connect -= 1 12:04:17 12:04:17 elif error and self._is_read_error(error): 12:04:17 # Read retry? 12:04:17 if read is False or method is None or not self._is_method_retryable(method): 12:04:17 raise reraise(type(error), error, _stacktrace) 12:04:17 elif read is not None: 12:04:17 read -= 1 12:04:17 12:04:17 elif error: 12:04:17 # Other retry? 12:04:17 if other is not None: 12:04:17 other -= 1 12:04:17 12:04:17 elif response and response.get_redirect_location(): 12:04:17 # Redirect retry? 12:04:17 if redirect is not None: 12:04:17 redirect -= 1 12:04:17 cause = "too many redirects" 12:04:17 response_redirect_location = response.get_redirect_location() 12:04:17 if response_redirect_location: 12:04:17 redirect_location = response_redirect_location 12:04:17 status = response.status 12:04:17 12:04:17 else: 12:04:17 # Incrementing because of a server error like a 500 in 12:04:17 # status_forcelist and the given method is in the allowed_methods 12:04:17 cause = ResponseError.GENERIC_ERROR 12:04:17 if response and response.status: 12:04:17 if status_count is not None: 12:04:17 status_count -= 1 12:04:17 cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status) 12:04:17 status = response.status 12:04:17 12:04:17 history = self.history + ( 12:04:17 RequestHistory(method, url, error, status, redirect_location), 12:04:17 ) 12:04:17 12:04:17 new_retry = self.new( 12:04:17 total=total, 12:04:17 connect=connect, 12:04:17 read=read, 12:04:17 redirect=redirect, 12:04:17 status=status_count, 12:04:17 other=other, 12:04:17 history=history, 12:04:17 ) 12:04:17 12:04:17 if new_retry.is_exhausted(): 12:04:17 reason = error or ResponseError(cause) 12:04:17 > raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type] 12:04:17 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 12:04:17 E urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=8183): Max retries exceeded with url: /rests/data/network-topology:network-topology/topology=topology-netconf/node=XPDR-A1?content=nonconfig (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 12:04:17 12:04:17 ../.tox/tests221/lib/python3.11/site-packages/urllib3/util/retry.py:519: MaxRetryError 12:04:17 12:04:17 During handling of the above exception, another exception occurred: 12:04:17 12:04:17 self = 12:04:17 12:04:17 def test_09_xpdr_device_connected(self): 12:04:17 > response = test_utils.check_device_connection("XPDR-A1") 12:04:17 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 12:04:17 12:04:17 transportpce_tests/2.2.1/test01_portmapping.py:113: 12:04:17 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 12:04:17 transportpce_tests/common/test_utils.py:390: in 
check_device_connection 12:04:17 response = get_request(url[RESTCONF_VERSION].format('{}', node)) 12:04:17 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 12:04:17 transportpce_tests/common/test_utils.py:116: in get_request 12:04:17 return requests.request( 12:04:17 ../.tox/tests221/lib/python3.11/site-packages/requests/api.py:59: in request 12:04:17 return session.request(method=method, url=url, **kwargs) 12:04:17 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 12:04:17 ../.tox/tests221/lib/python3.11/site-packages/requests/sessions.py:589: in request 12:04:17 resp = self.send(prep, **send_kwargs) 12:04:17 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 12:04:17 ../.tox/tests221/lib/python3.11/site-packages/requests/sessions.py:703: in send 12:04:17 r = adapter.send(request, **kwargs) 12:04:17 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 12:04:17 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 12:04:17 12:04:17 self = 12:04:17 request = , stream = False 12:04:17 timeout = Timeout(connect=30, read=30, total=None), verify = True, cert = None 12:04:17 proxies = OrderedDict() 12:04:17 12:04:17 def send( 12:04:17 self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 12:04:17 ): 12:04:17 """Sends PreparedRequest object. Returns Response object. 12:04:17 12:04:17 :param request: The :class:`PreparedRequest ` being sent. 12:04:17 :param stream: (optional) Whether to stream the request content. 12:04:17 :param timeout: (optional) How long to wait for the server to send 12:04:17 data before giving up, as a float, or a :ref:`(connect timeout, 12:04:17 read timeout) ` tuple. 12:04:17 :type timeout: float or tuple or urllib3 Timeout object 12:04:17 :param verify: (optional) Either a boolean, in which case it controls whether 12:04:17 we verify the server's TLS certificate, or a string, in which case it 12:04:17 must be a path to a CA bundle to use 12:04:17 :param cert: (optional) Any user-provided SSL certificate to be trusted. 12:04:17 :param proxies: (optional) The proxies dictionary to apply to the request. 12:04:17 :rtype: requests.Response 12:04:17 """ 12:04:17 12:04:17 try: 12:04:17 conn = self.get_connection_with_tls_context( 12:04:17 request, verify, proxies=proxies, cert=cert 12:04:17 ) 12:04:17 except LocationValueError as e: 12:04:17 raise InvalidURL(e, request=request) 12:04:17 12:04:17 self.cert_verify(conn, request.url, verify, cert) 12:04:17 url = self.request_url(request, proxies) 12:04:17 self.add_headers( 12:04:17 request, 12:04:17 stream=stream, 12:04:17 timeout=timeout, 12:04:17 verify=verify, 12:04:17 cert=cert, 12:04:17 proxies=proxies, 12:04:17 ) 12:04:17 12:04:17 chunked = not (request.body is None or "Content-Length" in request.headers) 12:04:17 12:04:17 if isinstance(timeout, tuple): 12:04:17 try: 12:04:17 connect, read = timeout 12:04:17 timeout = TimeoutSauce(connect=connect, read=read) 12:04:17 except ValueError: 12:04:17 raise ValueError( 12:04:17 f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 12:04:17 f"or a single float to set both timeouts to the same value." 
12:04:17 ) 12:04:17 elif isinstance(timeout, TimeoutSauce): 12:04:17 pass 12:04:17 else: 12:04:17 timeout = TimeoutSauce(connect=timeout, read=timeout) 12:04:17 12:04:17 try: 12:04:17 resp = conn.urlopen( 12:04:17 method=request.method, 12:04:17 url=url, 12:04:17 body=request.body, 12:04:17 headers=request.headers, 12:04:17 redirect=False, 12:04:17 assert_same_host=False, 12:04:17 preload_content=False, 12:04:17 decode_content=False, 12:04:17 retries=self.max_retries, 12:04:17 timeout=timeout, 12:04:17 chunked=chunked, 12:04:17 ) 12:04:17 12:04:17 except (ProtocolError, OSError) as err: 12:04:17 raise ConnectionError(err, request=request) 12:04:17 12:04:17 except MaxRetryError as e: 12:04:17 if isinstance(e.reason, ConnectTimeoutError): 12:04:17 # TODO: Remove this in 3.0.0: see #2811 12:04:17 if not isinstance(e.reason, NewConnectionError): 12:04:17 raise ConnectTimeout(e, request=request) 12:04:17 12:04:17 if isinstance(e.reason, ResponseError): 12:04:17 raise RetryError(e, request=request) 12:04:17 12:04:17 if isinstance(e.reason, _ProxyError): 12:04:17 raise ProxyError(e, request=request) 12:04:17 12:04:17 if isinstance(e.reason, _SSLError): 12:04:17 # This branch is for urllib3 v1.22 and later. 12:04:17 raise SSLError(e, request=request) 12:04:17 12:04:17 > raise ConnectionError(e, request=request) 12:04:17 E requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=8183): Max retries exceeded with url: /rests/data/network-topology:network-topology/topology=topology-netconf/node=XPDR-A1?content=nonconfig (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 12:04:17 12:04:17 ../.tox/tests221/lib/python3.11/site-packages/requests/adapters.py:700: ConnectionError 12:04:17 ----------------------------- Captured stdout call ----------------------------- 12:04:17 execution of test_09_xpdr_device_connected 12:04:17 _________ TransportPCEPortMappingTesting.test_10_xpdr_portmapping_info _________ 12:04:17 12:04:17 self = 12:04:17 12:04:17 def _new_conn(self) -> socket.socket: 12:04:17 """Establish a socket connection and set nodelay settings on it. 12:04:17 12:04:17 :return: New socket connection. 12:04:17 """ 12:04:17 try: 12:04:17 > sock = connection.create_connection( 12:04:17 (self._dns_host, self.port), 12:04:17 self.timeout, 12:04:17 source_address=self.source_address, 12:04:17 socket_options=self.socket_options, 12:04:17 ) 12:04:17 12:04:17 ../.tox/tests221/lib/python3.11/site-packages/urllib3/connection.py:198: 12:04:17 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 12:04:17 ../.tox/tests221/lib/python3.11/site-packages/urllib3/util/connection.py:85: in create_connection 12:04:17 raise err 12:04:17 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 12:04:17 12:04:17 address = ('localhost', 8183), timeout = 30, source_address = None 12:04:17 socket_options = [(6, 1, 1)] 12:04:17 12:04:17 def create_connection( 12:04:17 address: tuple[str, int], 12:04:17 timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 12:04:17 source_address: tuple[str, int] | None = None, 12:04:17 socket_options: _TYPE_SOCKET_OPTIONS | None = None, 12:04:17 ) -> socket.socket: 12:04:17 """Connect to *address* and return the socket object. 12:04:17 12:04:17 Convenience function. Connect to *address* (a 2-tuple ``(host, 12:04:17 port)``) and return the socket object. 
Passing the optional 12:04:17 *timeout* parameter will set the timeout on the socket instance 12:04:17 before attempting to connect. If no *timeout* is supplied, the 12:04:17 global default timeout setting returned by :func:`socket.getdefaulttimeout` 12:04:17 is used. If *source_address* is set it must be a tuple of (host, port) 12:04:17 for the socket to bind as a source address before making the connection. 12:04:17 An host of '' or port 0 tells the OS to use the default. 12:04:17 """ 12:04:17 12:04:17 host, port = address 12:04:17 if host.startswith("["): 12:04:17 host = host.strip("[]") 12:04:17 err = None 12:04:17 12:04:17 # Using the value from allowed_gai_family() in the context of getaddrinfo lets 12:04:17 # us select whether to work with IPv4 DNS records, IPv6 records, or both. 12:04:17 # The original create_connection function always returns all records. 12:04:17 family = allowed_gai_family() 12:04:17 12:04:17 try: 12:04:17 host.encode("idna") 12:04:17 except UnicodeError: 12:04:17 raise LocationParseError(f"'{host}', label empty or too long") from None 12:04:17 12:04:17 for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM): 12:04:17 af, socktype, proto, canonname, sa = res 12:04:17 sock = None 12:04:17 try: 12:04:17 sock = socket.socket(af, socktype, proto) 12:04:17 12:04:17 # If provided, set socket level options before connecting. 12:04:17 _set_socket_options(sock, socket_options) 12:04:17 12:04:17 if timeout is not _DEFAULT_TIMEOUT: 12:04:17 sock.settimeout(timeout) 12:04:17 if source_address: 12:04:17 sock.bind(source_address) 12:04:17 > sock.connect(sa) 12:04:17 E ConnectionRefusedError: [Errno 111] Connection refused 12:04:17 12:04:17 ../.tox/tests221/lib/python3.11/site-packages/urllib3/util/connection.py:73: ConnectionRefusedError 12:04:17 12:04:17 The above exception was the direct cause of the following exception: 12:04:17 12:04:17 self = 12:04:17 method = 'GET' 12:04:17 url = '/rests/data/transportpce-portmapping:network/nodes=XPDR-A1/node-info' 12:04:17 body = None 12:04:17 headers = {'User-Agent': 'python-requests/2.32.4', 'Accept-Encoding': 'gzip, deflate', 'Accept': 'application/json', 'Connection': 'keep-alive', 'Content-Type': 'application/json', 'Authorization': 'Basic YWRtaW46YWRtaW4='} 12:04:17 retries = Retry(total=0, connect=None, read=False, redirect=None, status=None) 12:04:17 redirect = False, assert_same_host = False 12:04:17 timeout = Timeout(connect=30, read=30, total=None), pool_timeout = None 12:04:17 release_conn = False, chunked = False, body_pos = None, preload_content = False 12:04:17 decode_content = False, response_kw = {} 12:04:17 parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/rests/data/transportpce-portmapping:network/nodes=XPDR-A1/node-info', query=None, fragment=None) 12:04:17 destination_scheme = None, conn = None, release_this_conn = True 12:04:17 http_tunnel_required = False, err = None, clean_exit = False 12:04:17 12:04:17 def urlopen( # type: ignore[override] 12:04:17 self, 12:04:17 method: str, 12:04:17 url: str, 12:04:17 body: _TYPE_BODY | None = None, 12:04:17 headers: typing.Mapping[str, str] | None = None, 12:04:17 retries: Retry | bool | int | None = None, 12:04:17 redirect: bool = True, 12:04:17 assert_same_host: bool = True, 12:04:17 timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 12:04:17 pool_timeout: int | None = None, 12:04:17 release_conn: bool | None = None, 12:04:17 chunked: bool = False, 12:04:17 body_pos: _TYPE_BODY_POSITION | None = None, 12:04:17 preload_content: bool = True, 
12:04:17 decode_content: bool = True, 12:04:17 **response_kw: typing.Any, 12:04:17 ) -> BaseHTTPResponse: 12:04:17 """ 12:04:17 Get a connection from the pool and perform an HTTP request. This is the 12:04:17 lowest level call for making a request, so you'll need to specify all 12:04:17 the raw details. 12:04:17 12:04:17 .. note:: 12:04:17 12:04:17 More commonly, it's appropriate to use a convenience method 12:04:17 such as :meth:`request`. 12:04:17 12:04:17 .. note:: 12:04:17 12:04:17 `release_conn` will only behave as expected if 12:04:17 `preload_content=False` because we want to make 12:04:17 `preload_content=False` the default behaviour someday soon without 12:04:17 breaking backwards compatibility. 12:04:17 12:04:17 :param method: 12:04:17 HTTP request method (such as GET, POST, PUT, etc.) 12:04:17 12:04:17 :param url: 12:04:17 The URL to perform the request on. 12:04:17 12:04:17 :param body: 12:04:17 Data to send in the request body, either :class:`str`, :class:`bytes`, 12:04:17 an iterable of :class:`str`/:class:`bytes`, or a file-like object. 12:04:17 12:04:17 :param headers: 12:04:17 Dictionary of custom headers to send, such as User-Agent, 12:04:17 If-None-Match, etc. If None, pool headers are used. If provided, 12:04:17 these headers completely replace any pool-specific headers. 12:04:17 12:04:17 :param retries: 12:04:17 Configure the number of retries to allow before raising a 12:04:17 :class:`~urllib3.exceptions.MaxRetryError` exception. 12:04:17 12:04:17 If ``None`` (default) will retry 3 times, see ``Retry.DEFAULT``. Pass a 12:04:17 :class:`~urllib3.util.retry.Retry` object for fine-grained control 12:04:17 over different types of retries. 12:04:17 Pass an integer number to retry connection errors that many times, 12:04:17 but no other types of errors. Pass zero to never retry. 12:04:17 12:04:17 If ``False``, then retries are disabled and any exception is raised 12:04:17 immediately. Also, instead of raising a MaxRetryError on redirects, 12:04:17 the redirect response will be returned. 12:04:17 12:04:17 :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. 12:04:17 12:04:17 :param redirect: 12:04:17 If True, automatically handle redirects (status codes 301, 302, 12:04:17 303, 307, 308). Each redirect counts as a retry. Disabling retries 12:04:17 will disable redirect, too. 12:04:17 12:04:17 :param assert_same_host: 12:04:17 If ``True``, will make sure that the host of the pool requests is 12:04:17 consistent else will raise HostChangedError. When ``False``, you can 12:04:17 use the pool on an HTTP proxy and request foreign hosts. 12:04:17 12:04:17 :param timeout: 12:04:17 If specified, overrides the default timeout for this one 12:04:17 request. It may be a float (in seconds) or an instance of 12:04:17 :class:`urllib3.util.Timeout`. 12:04:17 12:04:17 :param pool_timeout: 12:04:17 If set and the pool is set to block=True, then this method will 12:04:17 block for ``pool_timeout`` seconds and raise EmptyPoolError if no 12:04:17 connection is available within the time period. 12:04:17 12:04:17 :param bool preload_content: 12:04:17 If True, the response's body will be preloaded into memory. 12:04:17 12:04:17 :param bool decode_content: 12:04:17 If True, will attempt to decode the body based on the 12:04:17 'content-encoding' header. 
12:04:17 12:04:17 :param release_conn: 12:04:17 If False, then the urlopen call will not release the connection 12:04:17 back into the pool once a response is received (but will release if 12:04:17 you read the entire contents of the response such as when 12:04:17 `preload_content=True`). This is useful if you're not preloading 12:04:17 the response's content immediately. You will need to call 12:04:17 ``r.release_conn()`` on the response ``r`` to return the connection 12:04:17 back into the pool. If None, it takes the value of ``preload_content`` 12:04:17 which defaults to ``True``. 12:04:17 12:04:17 :param bool chunked: 12:04:17 If True, urllib3 will send the body using chunked transfer 12:04:17 encoding. Otherwise, urllib3 will send the body using the standard 12:04:17 content-length form. Defaults to False. 12:04:17 12:04:17 :param int body_pos: 12:04:17 Position to seek to in file-like body in the event of a retry or 12:04:17 redirect. Typically this won't need to be set because urllib3 will 12:04:17 auto-populate the value when needed. 12:04:17 """ 12:04:17 parsed_url = parse_url(url) 12:04:17 destination_scheme = parsed_url.scheme 12:04:17 12:04:17 if headers is None: 12:04:17 headers = self.headers 12:04:17 12:04:17 if not isinstance(retries, Retry): 12:04:17 retries = Retry.from_int(retries, redirect=redirect, default=self.retries) 12:04:17 12:04:17 if release_conn is None: 12:04:17 release_conn = preload_content 12:04:17 12:04:17 # Check host 12:04:17 if assert_same_host and not self.is_same_host(url): 12:04:17 raise HostChangedError(self, url, retries) 12:04:17 12:04:17 # Ensure that the URL we're connecting to is properly encoded 12:04:17 if url.startswith("/"): 12:04:17 url = to_str(_encode_target(url)) 12:04:17 else: 12:04:17 url = to_str(parsed_url.url) 12:04:17 12:04:17 conn = None 12:04:17 12:04:17 # Track whether `conn` needs to be released before 12:04:17 # returning/raising/recursing. Update this variable if necessary, and 12:04:17 # leave `release_conn` constant throughout the function. That way, if 12:04:17 # the function recurses, the original value of `release_conn` will be 12:04:17 # passed down into the recursive call, and its value will be respected. 12:04:17 # 12:04:17 # See issue #651 [1] for details. 12:04:17 # 12:04:17 # [1] 12:04:17 release_this_conn = release_conn 12:04:17 12:04:17 http_tunnel_required = connection_requires_http_tunnel( 12:04:17 self.proxy, self.proxy_config, destination_scheme 12:04:17 ) 12:04:17 12:04:17 # Merge the proxy headers. Only done when not using HTTP CONNECT. We 12:04:17 # have to copy the headers dict so we can safely change it without those 12:04:17 # changes being reflected in anyone else's copy. 12:04:17 if not http_tunnel_required: 12:04:17 headers = headers.copy() # type: ignore[attr-defined] 12:04:17 headers.update(self.proxy_headers) # type: ignore[union-attr] 12:04:17 12:04:17 # Must keep the exception bound to a separate variable or else Python 3 12:04:17 # complains about UnboundLocalError. 12:04:17 err = None 12:04:17 12:04:17 # Keep track of whether we cleanly exited the except block. This 12:04:17 # ensures we do proper cleanup in finally. 12:04:17 clean_exit = False 12:04:17 12:04:17 # Rewind body position, if needed. Record current position 12:04:17 # for future rewinds in the event of a redirect/retry. 12:04:17 body_pos = set_file_position(body, body_pos) 12:04:17 12:04:17 try: 12:04:17 # Request a connection from the queue. 
12:04:17 timeout_obj = self._get_timeout(timeout) 12:04:17 conn = self._get_conn(timeout=pool_timeout) 12:04:17 12:04:17 conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] 12:04:17 12:04:17 # Is this a closed/new connection that requires CONNECT tunnelling? 12:04:17 if self.proxy is not None and http_tunnel_required and conn.is_closed: 12:04:17 try: 12:04:17 self._prepare_proxy(conn) 12:04:17 except (BaseSSLError, OSError, SocketTimeout) as e: 12:04:17 self._raise_timeout( 12:04:17 err=e, url=self.proxy.url, timeout_value=conn.timeout 12:04:17 ) 12:04:17 raise 12:04:17 12:04:17 # If we're going to release the connection in ``finally:``, then 12:04:17 # the response doesn't need to know about the connection. Otherwise 12:04:17 # it will also try to release it and we'll have a double-release 12:04:17 # mess. 12:04:17 response_conn = conn if not release_conn else None 12:04:17 12:04:17 # Make the request on the HTTPConnection object 12:04:17 > response = self._make_request( 12:04:17 conn, 12:04:17 method, 12:04:17 url, 12:04:17 timeout=timeout_obj, 12:04:17 body=body, 12:04:17 headers=headers, 12:04:17 chunked=chunked, 12:04:17 retries=retries, 12:04:17 response_conn=response_conn, 12:04:17 preload_content=preload_content, 12:04:17 decode_content=decode_content, 12:04:17 **response_kw, 12:04:17 ) 12:04:17 12:04:17 ../.tox/tests221/lib/python3.11/site-packages/urllib3/connectionpool.py:787: 12:04:17 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 12:04:17 ../.tox/tests221/lib/python3.11/site-packages/urllib3/connectionpool.py:493: in _make_request 12:04:17 conn.request( 12:04:17 ../.tox/tests221/lib/python3.11/site-packages/urllib3/connection.py:494: in request 12:04:17 self.endheaders() 12:04:17 /opt/pyenv/versions/3.11.10/lib/python3.11/http/client.py:1298: in endheaders 12:04:17 self._send_output(message_body, encode_chunked=encode_chunked) 12:04:17 /opt/pyenv/versions/3.11.10/lib/python3.11/http/client.py:1058: in _send_output 12:04:17 self.send(msg) 12:04:17 /opt/pyenv/versions/3.11.10/lib/python3.11/http/client.py:996: in send 12:04:17 self.connect() 12:04:17 ../.tox/tests221/lib/python3.11/site-packages/urllib3/connection.py:325: in connect 12:04:17 self.sock = self._new_conn() 12:04:17 ^^^^^^^^^^^^^^^^ 12:04:17 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 12:04:17 12:04:17 self = 12:04:17 12:04:17 def _new_conn(self) -> socket.socket: 12:04:17 """Establish a socket connection and set nodelay settings on it. 12:04:17 12:04:17 :return: New socket connection. 12:04:17 """ 12:04:17 try: 12:04:17 sock = connection.create_connection( 12:04:17 (self._dns_host, self.port), 12:04:17 self.timeout, 12:04:17 source_address=self.source_address, 12:04:17 socket_options=self.socket_options, 12:04:17 ) 12:04:17 except socket.gaierror as e: 12:04:17 raise NameResolutionError(self.host, self, e) from e 12:04:17 except SocketTimeout as e: 12:04:17 raise ConnectTimeoutError( 12:04:17 self, 12:04:17 f"Connection to {self.host} timed out. 
(connect timeout={self.timeout})", 12:04:17 ) from e 12:04:17 12:04:17 except OSError as e: 12:04:17 > raise NewConnectionError( 12:04:17 self, f"Failed to establish a new connection: {e}" 12:04:17 ) from e 12:04:17 E urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused 12:04:17 12:04:17 ../.tox/tests221/lib/python3.11/site-packages/urllib3/connection.py:213: NewConnectionError 12:04:17 12:04:17 The above exception was the direct cause of the following exception: 12:04:17 12:04:17 self = 12:04:17 request = , stream = False 12:04:17 timeout = Timeout(connect=30, read=30, total=None), verify = True, cert = None 12:04:17 proxies = OrderedDict() 12:04:17 12:04:17 def send( 12:04:17 self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 12:04:17 ): 12:04:17 """Sends PreparedRequest object. Returns Response object. 12:04:17 12:04:17 :param request: The :class:`PreparedRequest ` being sent. 12:04:17 :param stream: (optional) Whether to stream the request content. 12:04:17 :param timeout: (optional) How long to wait for the server to send 12:04:17 data before giving up, as a float, or a :ref:`(connect timeout, 12:04:17 read timeout) ` tuple. 12:04:17 :type timeout: float or tuple or urllib3 Timeout object 12:04:17 :param verify: (optional) Either a boolean, in which case it controls whether 12:04:17 we verify the server's TLS certificate, or a string, in which case it 12:04:17 must be a path to a CA bundle to use 12:04:17 :param cert: (optional) Any user-provided SSL certificate to be trusted. 12:04:17 :param proxies: (optional) The proxies dictionary to apply to the request. 12:04:17 :rtype: requests.Response 12:04:17 """ 12:04:17 12:04:17 try: 12:04:17 conn = self.get_connection_with_tls_context( 12:04:17 request, verify, proxies=proxies, cert=cert 12:04:17 ) 12:04:17 except LocationValueError as e: 12:04:17 raise InvalidURL(e, request=request) 12:04:17 12:04:17 self.cert_verify(conn, request.url, verify, cert) 12:04:17 url = self.request_url(request, proxies) 12:04:17 self.add_headers( 12:04:17 request, 12:04:17 stream=stream, 12:04:17 timeout=timeout, 12:04:17 verify=verify, 12:04:17 cert=cert, 12:04:17 proxies=proxies, 12:04:17 ) 12:04:17 12:04:17 chunked = not (request.body is None or "Content-Length" in request.headers) 12:04:17 12:04:17 if isinstance(timeout, tuple): 12:04:17 try: 12:04:17 connect, read = timeout 12:04:17 timeout = TimeoutSauce(connect=connect, read=read) 12:04:17 except ValueError: 12:04:17 raise ValueError( 12:04:17 f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 12:04:17 f"or a single float to set both timeouts to the same value." 
12:04:17 ) 12:04:17 elif isinstance(timeout, TimeoutSauce): 12:04:17 pass 12:04:17 else: 12:04:17 timeout = TimeoutSauce(connect=timeout, read=timeout) 12:04:17 12:04:17 try: 12:04:17 > resp = conn.urlopen( 12:04:17 method=request.method, 12:04:17 url=url, 12:04:17 body=request.body, 12:04:17 headers=request.headers, 12:04:17 redirect=False, 12:04:17 assert_same_host=False, 12:04:17 preload_content=False, 12:04:17 decode_content=False, 12:04:17 retries=self.max_retries, 12:04:17 timeout=timeout, 12:04:17 chunked=chunked, 12:04:17 ) 12:04:17 12:04:17 ../.tox/tests221/lib/python3.11/site-packages/requests/adapters.py:667: 12:04:17 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 12:04:17 ../.tox/tests221/lib/python3.11/site-packages/urllib3/connectionpool.py:841: in urlopen 12:04:17 retries = retries.increment( 12:04:17 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 12:04:17 12:04:17 self = Retry(total=0, connect=None, read=False, redirect=None, status=None) 12:04:17 method = 'GET' 12:04:17 url = '/rests/data/transportpce-portmapping:network/nodes=XPDR-A1/node-info' 12:04:17 response = None 12:04:17 error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused') 12:04:17 _pool = 12:04:17 _stacktrace = 12:04:17 12:04:17 def increment( 12:04:17 self, 12:04:17 method: str | None = None, 12:04:17 url: str | None = None, 12:04:17 response: BaseHTTPResponse | None = None, 12:04:17 error: Exception | None = None, 12:04:17 _pool: ConnectionPool | None = None, 12:04:17 _stacktrace: TracebackType | None = None, 12:04:17 ) -> Self: 12:04:17 """Return a new Retry object with incremented retry counters. 12:04:17 12:04:17 :param response: A response object, or None, if the server did not 12:04:17 return a response. 12:04:17 :type response: :class:`~urllib3.response.BaseHTTPResponse` 12:04:17 :param Exception error: An error encountered during the request, or 12:04:17 None if the response was received successfully. 12:04:17 12:04:17 :return: A new ``Retry`` object. 12:04:17 """ 12:04:17 if self.total is False and error: 12:04:17 # Disabled, indicate to re-raise the error. 12:04:17 raise reraise(type(error), error, _stacktrace) 12:04:17 12:04:17 total = self.total 12:04:17 if total is not None: 12:04:17 total -= 1 12:04:17 12:04:17 connect = self.connect 12:04:17 read = self.read 12:04:17 redirect = self.redirect 12:04:17 status_count = self.status 12:04:17 other = self.other 12:04:17 cause = "unknown" 12:04:17 status = None 12:04:17 redirect_location = None 12:04:17 12:04:17 if error and self._is_connection_error(error): 12:04:17 # Connect retry? 12:04:17 if connect is False: 12:04:17 raise reraise(type(error), error, _stacktrace) 12:04:17 elif connect is not None: 12:04:17 connect -= 1 12:04:17 12:04:17 elif error and self._is_read_error(error): 12:04:17 # Read retry? 12:04:17 if read is False or method is None or not self._is_method_retryable(method): 12:04:17 raise reraise(type(error), error, _stacktrace) 12:04:17 elif read is not None: 12:04:17 read -= 1 12:04:17 12:04:17 elif error: 12:04:17 # Other retry? 12:04:17 if other is not None: 12:04:17 other -= 1 12:04:17 12:04:17 elif response and response.get_redirect_location(): 12:04:17 # Redirect retry? 
12:04:17 if redirect is not None: 12:04:17 redirect -= 1 12:04:17 cause = "too many redirects" 12:04:17 response_redirect_location = response.get_redirect_location() 12:04:17 if response_redirect_location: 12:04:17 redirect_location = response_redirect_location 12:04:17 status = response.status 12:04:17 12:04:17 else: 12:04:17 # Incrementing because of a server error like a 500 in 12:04:17 # status_forcelist and the given method is in the allowed_methods 12:04:17 cause = ResponseError.GENERIC_ERROR 12:04:17 if response and response.status: 12:04:17 if status_count is not None: 12:04:17 status_count -= 1 12:04:17 cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status) 12:04:17 status = response.status 12:04:17 12:04:17 history = self.history + ( 12:04:17 RequestHistory(method, url, error, status, redirect_location), 12:04:17 ) 12:04:17 12:04:17 new_retry = self.new( 12:04:17 total=total, 12:04:17 connect=connect, 12:04:17 read=read, 12:04:17 redirect=redirect, 12:04:17 status=status_count, 12:04:17 other=other, 12:04:17 history=history, 12:04:17 ) 12:04:17 12:04:17 if new_retry.is_exhausted(): 12:04:17 reason = error or ResponseError(cause) 12:04:17 > raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type] 12:04:17 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 12:04:17 E urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=8183): Max retries exceeded with url: /rests/data/transportpce-portmapping:network/nodes=XPDR-A1/node-info (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 12:04:17 12:04:17 ../.tox/tests221/lib/python3.11/site-packages/urllib3/util/retry.py:519: MaxRetryError 12:04:17 12:04:17 During handling of the above exception, another exception occurred: 12:04:17 12:04:17 self = 12:04:17 12:04:17 def test_10_xpdr_portmapping_info(self): 12:04:17 > response = test_utils.get_portmapping_node_attr("XPDR-A1", "node-info", None) 12:04:17 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 12:04:17 12:04:17 transportpce_tests/2.2.1/test01_portmapping.py:118: 12:04:17 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 12:04:17 transportpce_tests/common/test_utils.py:492: in get_portmapping_node_attr 12:04:17 response = get_request(target_url) 12:04:17 ^^^^^^^^^^^^^^^^^^^^^^^ 12:04:17 transportpce_tests/common/test_utils.py:116: in get_request 12:04:17 return requests.request( 12:04:17 ../.tox/tests221/lib/python3.11/site-packages/requests/api.py:59: in request 12:04:17 return session.request(method=method, url=url, **kwargs) 12:04:17 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 12:04:17 ../.tox/tests221/lib/python3.11/site-packages/requests/sessions.py:589: in request 12:04:17 resp = self.send(prep, **send_kwargs) 12:04:17 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 12:04:17 ../.tox/tests221/lib/python3.11/site-packages/requests/sessions.py:703: in send 12:04:17 r = adapter.send(request, **kwargs) 12:04:17 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 12:04:17 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 12:04:17 12:04:17 self = 12:04:17 request = , stream = False 12:04:17 timeout = Timeout(connect=30, read=30, total=None), verify = True, cert = None 12:04:17 proxies = OrderedDict() 12:04:17 12:04:17 def send( 12:04:17 self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 12:04:17 ): 12:04:17 """Sends PreparedRequest object. Returns Response object. 
12:04:17 12:04:17 :param request: The :class:`PreparedRequest ` being sent. 12:04:17 :param stream: (optional) Whether to stream the request content. 12:04:17 :param timeout: (optional) How long to wait for the server to send 12:04:17 data before giving up, as a float, or a :ref:`(connect timeout, 12:04:17 read timeout) ` tuple. 12:04:17 :type timeout: float or tuple or urllib3 Timeout object 12:04:17 :param verify: (optional) Either a boolean, in which case it controls whether 12:04:17 we verify the server's TLS certificate, or a string, in which case it 12:04:17 must be a path to a CA bundle to use 12:04:17 :param cert: (optional) Any user-provided SSL certificate to be trusted. 12:04:17 :param proxies: (optional) The proxies dictionary to apply to the request. 12:04:17 :rtype: requests.Response 12:04:17 """ 12:04:17 12:04:17 try: 12:04:17 conn = self.get_connection_with_tls_context( 12:04:17 request, verify, proxies=proxies, cert=cert 12:04:17 ) 12:04:17 except LocationValueError as e: 12:04:17 raise InvalidURL(e, request=request) 12:04:17 12:04:17 self.cert_verify(conn, request.url, verify, cert) 12:04:17 url = self.request_url(request, proxies) 12:04:17 self.add_headers( 12:04:17 request, 12:04:17 stream=stream, 12:04:17 timeout=timeout, 12:04:17 verify=verify, 12:04:17 cert=cert, 12:04:17 proxies=proxies, 12:04:17 ) 12:04:17 12:04:17 chunked = not (request.body is None or "Content-Length" in request.headers) 12:04:17 12:04:17 if isinstance(timeout, tuple): 12:04:17 try: 12:04:17 connect, read = timeout 12:04:17 timeout = TimeoutSauce(connect=connect, read=read) 12:04:17 except ValueError: 12:04:17 raise ValueError( 12:04:17 f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 12:04:17 f"or a single float to set both timeouts to the same value." 12:04:17 ) 12:04:17 elif isinstance(timeout, TimeoutSauce): 12:04:17 pass 12:04:17 else: 12:04:17 timeout = TimeoutSauce(connect=timeout, read=timeout) 12:04:17 12:04:17 try: 12:04:17 resp = conn.urlopen( 12:04:17 method=request.method, 12:04:17 url=url, 12:04:17 body=request.body, 12:04:17 headers=request.headers, 12:04:17 redirect=False, 12:04:17 assert_same_host=False, 12:04:17 preload_content=False, 12:04:17 decode_content=False, 12:04:17 retries=self.max_retries, 12:04:17 timeout=timeout, 12:04:17 chunked=chunked, 12:04:17 ) 12:04:17 12:04:17 except (ProtocolError, OSError) as err: 12:04:17 raise ConnectionError(err, request=request) 12:04:17 12:04:17 except MaxRetryError as e: 12:04:17 if isinstance(e.reason, ConnectTimeoutError): 12:04:17 # TODO: Remove this in 3.0.0: see #2811 12:04:17 if not isinstance(e.reason, NewConnectionError): 12:04:17 raise ConnectTimeout(e, request=request) 12:04:17 12:04:17 if isinstance(e.reason, ResponseError): 12:04:17 raise RetryError(e, request=request) 12:04:17 12:04:17 if isinstance(e.reason, _ProxyError): 12:04:17 raise ProxyError(e, request=request) 12:04:17 12:04:17 if isinstance(e.reason, _SSLError): 12:04:17 # This branch is for urllib3 v1.22 and later. 
12:04:17 raise SSLError(e, request=request) 12:04:17 12:04:17 > raise ConnectionError(e, request=request) 12:04:17 E requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=8183): Max retries exceeded with url: /rests/data/transportpce-portmapping:network/nodes=XPDR-A1/node-info (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 12:04:17 12:04:17 ../.tox/tests221/lib/python3.11/site-packages/requests/adapters.py:700: ConnectionError 12:04:17 ----------------------------- Captured stdout call ----------------------------- 12:04:17 execution of test_10_xpdr_portmapping_info 12:04:17 _______ TransportPCEPortMappingTesting.test_11_xpdr_portmapping_NETWORK1 _______ 12:04:17 12:04:17 self = 12:04:17 12:04:17 def _new_conn(self) -> socket.socket: 12:04:17 """Establish a socket connection and set nodelay settings on it. 12:04:17 12:04:17 :return: New socket connection. 12:04:17 """ 12:04:17 try: 12:04:17 > sock = connection.create_connection( 12:04:17 (self._dns_host, self.port), 12:04:17 self.timeout, 12:04:17 source_address=self.source_address, 12:04:17 socket_options=self.socket_options, 12:04:17 ) 12:04:17 12:04:17 ../.tox/tests221/lib/python3.11/site-packages/urllib3/connection.py:198: 12:04:17 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 12:04:17 ../.tox/tests221/lib/python3.11/site-packages/urllib3/util/connection.py:85: in create_connection 12:04:17 raise err 12:04:17 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 12:04:17 12:04:17 address = ('localhost', 8183), timeout = 30, source_address = None 12:04:17 socket_options = [(6, 1, 1)] 12:04:17 12:04:17 def create_connection( 12:04:17 address: tuple[str, int], 12:04:17 timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 12:04:17 source_address: tuple[str, int] | None = None, 12:04:17 socket_options: _TYPE_SOCKET_OPTIONS | None = None, 12:04:17 ) -> socket.socket: 12:04:17 """Connect to *address* and return the socket object. 12:04:17 12:04:17 Convenience function. Connect to *address* (a 2-tuple ``(host, 12:04:17 port)``) and return the socket object. Passing the optional 12:04:17 *timeout* parameter will set the timeout on the socket instance 12:04:17 before attempting to connect. If no *timeout* is supplied, the 12:04:17 global default timeout setting returned by :func:`socket.getdefaulttimeout` 12:04:17 is used. If *source_address* is set it must be a tuple of (host, port) 12:04:17 for the socket to bind as a source address before making the connection. 12:04:17 An host of '' or port 0 tells the OS to use the default. 12:04:17 """ 12:04:17 12:04:17 host, port = address 12:04:17 if host.startswith("["): 12:04:17 host = host.strip("[]") 12:04:17 err = None 12:04:17 12:04:17 # Using the value from allowed_gai_family() in the context of getaddrinfo lets 12:04:17 # us select whether to work with IPv4 DNS records, IPv6 records, or both. 12:04:17 # The original create_connection function always returns all records. 12:04:17 family = allowed_gai_family() 12:04:17 12:04:17 try: 12:04:17 host.encode("idna") 12:04:17 except UnicodeError: 12:04:17 raise LocationParseError(f"'{host}', label empty or too long") from None 12:04:17 12:04:17 for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM): 12:04:17 af, socktype, proto, canonname, sa = res 12:04:17 sock = None 12:04:17 try: 12:04:17 sock = socket.socket(af, socktype, proto) 12:04:17 12:04:17 # If provided, set socket level options before connecting. 
12:04:17 _set_socket_options(sock, socket_options) 12:04:17 12:04:17 if timeout is not _DEFAULT_TIMEOUT: 12:04:17 sock.settimeout(timeout) 12:04:17 if source_address: 12:04:17 sock.bind(source_address) 12:04:17 > sock.connect(sa) 12:04:17 E ConnectionRefusedError: [Errno 111] Connection refused 12:04:17 12:04:17 ../.tox/tests221/lib/python3.11/site-packages/urllib3/util/connection.py:73: ConnectionRefusedError 12:04:17 12:04:17 The above exception was the direct cause of the following exception: 12:04:17 12:04:17 self = 12:04:17 method = 'GET' 12:04:17 url = '/rests/data/transportpce-portmapping:network/nodes=XPDR-A1/mapping=XPDR1-NETWORK1' 12:04:17 body = None 12:04:17 headers = {'User-Agent': 'python-requests/2.32.4', 'Accept-Encoding': 'gzip, deflate', 'Accept': 'application/json', 'Connection': 'keep-alive', 'Content-Type': 'application/json', 'Authorization': 'Basic YWRtaW46YWRtaW4='} 12:04:17 retries = Retry(total=0, connect=None, read=False, redirect=None, status=None) 12:04:17 redirect = False, assert_same_host = False 12:04:17 timeout = Timeout(connect=30, read=30, total=None), pool_timeout = None 12:04:17 release_conn = False, chunked = False, body_pos = None, preload_content = False 12:04:17 decode_content = False, response_kw = {} 12:04:17 parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/rests/data/transportpce-portmapping:network/nodes=XPDR-A1/mapping=XPDR1-NETWORK1', query=None, fragment=None) 12:04:17 destination_scheme = None, conn = None, release_this_conn = True 12:04:17 http_tunnel_required = False, err = None, clean_exit = False 12:04:17 12:04:17 def urlopen( # type: ignore[override] 12:04:17 self, 12:04:17 method: str, 12:04:17 url: str, 12:04:17 body: _TYPE_BODY | None = None, 12:04:17 headers: typing.Mapping[str, str] | None = None, 12:04:17 retries: Retry | bool | int | None = None, 12:04:17 redirect: bool = True, 12:04:17 assert_same_host: bool = True, 12:04:17 timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 12:04:17 pool_timeout: int | None = None, 12:04:17 release_conn: bool | None = None, 12:04:17 chunked: bool = False, 12:04:17 body_pos: _TYPE_BODY_POSITION | None = None, 12:04:17 preload_content: bool = True, 12:04:17 decode_content: bool = True, 12:04:17 **response_kw: typing.Any, 12:04:17 ) -> BaseHTTPResponse: 12:04:17 """ 12:04:17 Get a connection from the pool and perform an HTTP request. This is the 12:04:17 lowest level call for making a request, so you'll need to specify all 12:04:17 the raw details. 12:04:17 12:04:17 .. note:: 12:04:17 12:04:17 More commonly, it's appropriate to use a convenience method 12:04:17 such as :meth:`request`. 12:04:17 12:04:17 .. note:: 12:04:17 12:04:17 `release_conn` will only behave as expected if 12:04:17 `preload_content=False` because we want to make 12:04:17 `preload_content=False` the default behaviour someday soon without 12:04:17 breaking backwards compatibility. 12:04:17 12:04:17 :param method: 12:04:17 HTTP request method (such as GET, POST, PUT, etc.) 12:04:17 12:04:17 :param url: 12:04:17 The URL to perform the request on. 12:04:17 12:04:17 :param body: 12:04:17 Data to send in the request body, either :class:`str`, :class:`bytes`, 12:04:17 an iterable of :class:`str`/:class:`bytes`, or a file-like object. 12:04:17 12:04:17 :param headers: 12:04:17 Dictionary of custom headers to send, such as User-Agent, 12:04:17 If-None-Match, etc. If None, pool headers are used. If provided, 12:04:17 these headers completely replace any pool-specific headers. 
12:04:17 12:04:17 :param retries: 12:04:17 Configure the number of retries to allow before raising a 12:04:17 :class:`~urllib3.exceptions.MaxRetryError` exception. 12:04:17 12:04:17 If ``None`` (default) will retry 3 times, see ``Retry.DEFAULT``. Pass a 12:04:17 :class:`~urllib3.util.retry.Retry` object for fine-grained control 12:04:17 over different types of retries. 12:04:17 Pass an integer number to retry connection errors that many times, 12:04:17 but no other types of errors. Pass zero to never retry. 12:04:17 12:04:17 If ``False``, then retries are disabled and any exception is raised 12:04:17 immediately. Also, instead of raising a MaxRetryError on redirects, 12:04:17 the redirect response will be returned. 12:04:17 12:04:17 :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. 12:04:17 12:04:17 :param redirect: 12:04:17 If True, automatically handle redirects (status codes 301, 302, 12:04:17 303, 307, 308). Each redirect counts as a retry. Disabling retries 12:04:17 will disable redirect, too. 12:04:17 12:04:17 :param assert_same_host: 12:04:17 If ``True``, will make sure that the host of the pool requests is 12:04:17 consistent else will raise HostChangedError. When ``False``, you can 12:04:17 use the pool on an HTTP proxy and request foreign hosts. 12:04:17 12:04:17 :param timeout: 12:04:17 If specified, overrides the default timeout for this one 12:04:17 request. It may be a float (in seconds) or an instance of 12:04:17 :class:`urllib3.util.Timeout`. 12:04:17 12:04:17 :param pool_timeout: 12:04:17 If set and the pool is set to block=True, then this method will 12:04:17 block for ``pool_timeout`` seconds and raise EmptyPoolError if no 12:04:17 connection is available within the time period. 12:04:17 12:04:17 :param bool preload_content: 12:04:17 If True, the response's body will be preloaded into memory. 12:04:17 12:04:17 :param bool decode_content: 12:04:17 If True, will attempt to decode the body based on the 12:04:17 'content-encoding' header. 12:04:17 12:04:17 :param release_conn: 12:04:17 If False, then the urlopen call will not release the connection 12:04:17 back into the pool once a response is received (but will release if 12:04:17 you read the entire contents of the response such as when 12:04:17 `preload_content=True`). This is useful if you're not preloading 12:04:17 the response's content immediately. You will need to call 12:04:17 ``r.release_conn()`` on the response ``r`` to return the connection 12:04:17 back into the pool. If None, it takes the value of ``preload_content`` 12:04:17 which defaults to ``True``. 12:04:17 12:04:17 :param bool chunked: 12:04:17 If True, urllib3 will send the body using chunked transfer 12:04:17 encoding. Otherwise, urllib3 will send the body using the standard 12:04:17 content-length form. Defaults to False. 12:04:17 12:04:17 :param int body_pos: 12:04:17 Position to seek to in file-like body in the event of a retry or 12:04:17 redirect. Typically this won't need to be set because urllib3 will 12:04:17 auto-populate the value when needed. 
12:04:17 """ 12:04:17 parsed_url = parse_url(url) 12:04:17 destination_scheme = parsed_url.scheme 12:04:17 12:04:17 if headers is None: 12:04:17 headers = self.headers 12:04:17 12:04:17 if not isinstance(retries, Retry): 12:04:17 retries = Retry.from_int(retries, redirect=redirect, default=self.retries) 12:04:17 12:04:17 if release_conn is None: 12:04:17 release_conn = preload_content 12:04:17 12:04:17 # Check host 12:04:17 if assert_same_host and not self.is_same_host(url): 12:04:17 raise HostChangedError(self, url, retries) 12:04:17 12:04:17 # Ensure that the URL we're connecting to is properly encoded 12:04:17 if url.startswith("/"): 12:04:17 url = to_str(_encode_target(url)) 12:04:17 else: 12:04:17 url = to_str(parsed_url.url) 12:04:17 12:04:17 conn = None 12:04:17 12:04:17 # Track whether `conn` needs to be released before 12:04:17 # returning/raising/recursing. Update this variable if necessary, and 12:04:17 # leave `release_conn` constant throughout the function. That way, if 12:04:17 # the function recurses, the original value of `release_conn` will be 12:04:17 # passed down into the recursive call, and its value will be respected. 12:04:17 # 12:04:17 # See issue #651 [1] for details. 12:04:17 # 12:04:17 # [1] 12:04:17 release_this_conn = release_conn 12:04:17 12:04:17 http_tunnel_required = connection_requires_http_tunnel( 12:04:17 self.proxy, self.proxy_config, destination_scheme 12:04:17 ) 12:04:17 12:04:17 # Merge the proxy headers. Only done when not using HTTP CONNECT. We 12:04:17 # have to copy the headers dict so we can safely change it without those 12:04:17 # changes being reflected in anyone else's copy. 12:04:17 if not http_tunnel_required: 12:04:17 headers = headers.copy() # type: ignore[attr-defined] 12:04:17 headers.update(self.proxy_headers) # type: ignore[union-attr] 12:04:17 12:04:17 # Must keep the exception bound to a separate variable or else Python 3 12:04:17 # complains about UnboundLocalError. 12:04:17 err = None 12:04:17 12:04:17 # Keep track of whether we cleanly exited the except block. This 12:04:17 # ensures we do proper cleanup in finally. 12:04:17 clean_exit = False 12:04:17 12:04:17 # Rewind body position, if needed. Record current position 12:04:17 # for future rewinds in the event of a redirect/retry. 12:04:17 body_pos = set_file_position(body, body_pos) 12:04:17 12:04:17 try: 12:04:17 # Request a connection from the queue. 12:04:17 timeout_obj = self._get_timeout(timeout) 12:04:17 conn = self._get_conn(timeout=pool_timeout) 12:04:17 12:04:17 conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] 12:04:17 12:04:17 # Is this a closed/new connection that requires CONNECT tunnelling? 12:04:17 if self.proxy is not None and http_tunnel_required and conn.is_closed: 12:04:17 try: 12:04:17 self._prepare_proxy(conn) 12:04:17 except (BaseSSLError, OSError, SocketTimeout) as e: 12:04:17 self._raise_timeout( 12:04:17 err=e, url=self.proxy.url, timeout_value=conn.timeout 12:04:17 ) 12:04:17 raise 12:04:17 12:04:17 # If we're going to release the connection in ``finally:``, then 12:04:17 # the response doesn't need to know about the connection. Otherwise 12:04:17 # it will also try to release it and we'll have a double-release 12:04:17 # mess. 
12:04:17 response_conn = conn if not release_conn else None 12:04:17 12:04:17 # Make the request on the HTTPConnection object 12:04:17 > response = self._make_request( 12:04:17 conn, 12:04:17 method, 12:04:17 url, 12:04:17 timeout=timeout_obj, 12:04:17 body=body, 12:04:17 headers=headers, 12:04:17 chunked=chunked, 12:04:17 retries=retries, 12:04:17 response_conn=response_conn, 12:04:17 preload_content=preload_content, 12:04:17 decode_content=decode_content, 12:04:17 **response_kw, 12:04:17 ) 12:04:17 12:04:17 ../.tox/tests221/lib/python3.11/site-packages/urllib3/connectionpool.py:787: 12:04:17 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 12:04:17 ../.tox/tests221/lib/python3.11/site-packages/urllib3/connectionpool.py:493: in _make_request 12:04:17 conn.request( 12:04:17 ../.tox/tests221/lib/python3.11/site-packages/urllib3/connection.py:494: in request 12:04:17 self.endheaders() 12:04:17 /opt/pyenv/versions/3.11.10/lib/python3.11/http/client.py:1298: in endheaders 12:04:17 self._send_output(message_body, encode_chunked=encode_chunked) 12:04:17 /opt/pyenv/versions/3.11.10/lib/python3.11/http/client.py:1058: in _send_output 12:04:17 self.send(msg) 12:04:17 /opt/pyenv/versions/3.11.10/lib/python3.11/http/client.py:996: in send 12:04:17 self.connect() 12:04:17 ../.tox/tests221/lib/python3.11/site-packages/urllib3/connection.py:325: in connect 12:04:17 self.sock = self._new_conn() 12:04:17 ^^^^^^^^^^^^^^^^ 12:04:17 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 12:04:17 12:04:17 self = 12:04:17 12:04:17 def _new_conn(self) -> socket.socket: 12:04:17 """Establish a socket connection and set nodelay settings on it. 12:04:17 12:04:17 :return: New socket connection. 12:04:17 """ 12:04:17 try: 12:04:17 sock = connection.create_connection( 12:04:17 (self._dns_host, self.port), 12:04:17 self.timeout, 12:04:17 source_address=self.source_address, 12:04:17 socket_options=self.socket_options, 12:04:17 ) 12:04:17 except socket.gaierror as e: 12:04:17 raise NameResolutionError(self.host, self, e) from e 12:04:17 except SocketTimeout as e: 12:04:17 raise ConnectTimeoutError( 12:04:17 self, 12:04:17 f"Connection to {self.host} timed out. (connect timeout={self.timeout})", 12:04:17 ) from e 12:04:17 12:04:17 except OSError as e: 12:04:17 > raise NewConnectionError( 12:04:17 self, f"Failed to establish a new connection: {e}" 12:04:17 ) from e 12:04:17 E urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused 12:04:17 12:04:17 ../.tox/tests221/lib/python3.11/site-packages/urllib3/connection.py:213: NewConnectionError 12:04:17 12:04:17 The above exception was the direct cause of the following exception: 12:04:17 12:04:17 self = 12:04:17 request = , stream = False 12:04:17 timeout = Timeout(connect=30, read=30, total=None), verify = True, cert = None 12:04:17 proxies = OrderedDict() 12:04:17 12:04:17 def send( 12:04:17 self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 12:04:17 ): 12:04:17 """Sends PreparedRequest object. Returns Response object. 12:04:17 12:04:17 :param request: The :class:`PreparedRequest ` being sent. 12:04:17 :param stream: (optional) Whether to stream the request content. 12:04:17 :param timeout: (optional) How long to wait for the server to send 12:04:17 data before giving up, as a float, or a :ref:`(connect timeout, 12:04:17 read timeout) ` tuple. 
12:04:17 :type timeout: float or tuple or urllib3 Timeout object 12:04:17 :param verify: (optional) Either a boolean, in which case it controls whether 12:04:17 we verify the server's TLS certificate, or a string, in which case it 12:04:17 must be a path to a CA bundle to use 12:04:17 :param cert: (optional) Any user-provided SSL certificate to be trusted. 12:04:17 :param proxies: (optional) The proxies dictionary to apply to the request. 12:04:17 :rtype: requests.Response 12:04:17 """ 12:04:17 12:04:17 try: 12:04:17 conn = self.get_connection_with_tls_context( 12:04:17 request, verify, proxies=proxies, cert=cert 12:04:17 ) 12:04:17 except LocationValueError as e: 12:04:17 raise InvalidURL(e, request=request) 12:04:17 12:04:17 self.cert_verify(conn, request.url, verify, cert) 12:04:17 url = self.request_url(request, proxies) 12:04:17 self.add_headers( 12:04:17 request, 12:04:17 stream=stream, 12:04:17 timeout=timeout, 12:04:17 verify=verify, 12:04:17 cert=cert, 12:04:17 proxies=proxies, 12:04:17 ) 12:04:17 12:04:17 chunked = not (request.body is None or "Content-Length" in request.headers) 12:04:17 12:04:17 if isinstance(timeout, tuple): 12:04:17 try: 12:04:17 connect, read = timeout 12:04:17 timeout = TimeoutSauce(connect=connect, read=read) 12:04:17 except ValueError: 12:04:17 raise ValueError( 12:04:17 f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 12:04:17 f"or a single float to set both timeouts to the same value." 12:04:17 ) 12:04:17 elif isinstance(timeout, TimeoutSauce): 12:04:17 pass 12:04:17 else: 12:04:17 timeout = TimeoutSauce(connect=timeout, read=timeout) 12:04:17 12:04:17 try: 12:04:17 > resp = conn.urlopen( 12:04:17 method=request.method, 12:04:17 url=url, 12:04:17 body=request.body, 12:04:17 headers=request.headers, 12:04:17 redirect=False, 12:04:17 assert_same_host=False, 12:04:17 preload_content=False, 12:04:17 decode_content=False, 12:04:17 retries=self.max_retries, 12:04:17 timeout=timeout, 12:04:17 chunked=chunked, 12:04:17 ) 12:04:17 12:04:17 ../.tox/tests221/lib/python3.11/site-packages/requests/adapters.py:667: 12:04:17 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 12:04:17 ../.tox/tests221/lib/python3.11/site-packages/urllib3/connectionpool.py:841: in urlopen 12:04:17 retries = retries.increment( 12:04:17 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 12:04:17 12:04:17 self = Retry(total=0, connect=None, read=False, redirect=None, status=None) 12:04:17 method = 'GET' 12:04:17 url = '/rests/data/transportpce-portmapping:network/nodes=XPDR-A1/mapping=XPDR1-NETWORK1' 12:04:17 response = None 12:04:17 error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused') 12:04:17 _pool = 12:04:17 _stacktrace = 12:04:17 12:04:17 def increment( 12:04:17 self, 12:04:17 method: str | None = None, 12:04:17 url: str | None = None, 12:04:17 response: BaseHTTPResponse | None = None, 12:04:17 error: Exception | None = None, 12:04:17 _pool: ConnectionPool | None = None, 12:04:17 _stacktrace: TracebackType | None = None, 12:04:17 ) -> Self: 12:04:17 """Return a new Retry object with incremented retry counters. 12:04:17 12:04:17 :param response: A response object, or None, if the server did not 12:04:17 return a response. 12:04:17 :type response: :class:`~urllib3.response.BaseHTTPResponse` 12:04:17 :param Exception error: An error encountered during the request, or 12:04:17 None if the response was received successfully. 
12:04:17 12:04:17 :return: A new ``Retry`` object. 12:04:17 """ 12:04:17 if self.total is False and error: 12:04:17 # Disabled, indicate to re-raise the error. 12:04:17 raise reraise(type(error), error, _stacktrace) 12:04:17 12:04:17 total = self.total 12:04:17 if total is not None: 12:04:17 total -= 1 12:04:17 12:04:17 connect = self.connect 12:04:17 read = self.read 12:04:17 redirect = self.redirect 12:04:17 status_count = self.status 12:04:17 other = self.other 12:04:17 cause = "unknown" 12:04:17 status = None 12:04:17 redirect_location = None 12:04:17 12:04:17 if error and self._is_connection_error(error): 12:04:17 # Connect retry? 12:04:17 if connect is False: 12:04:17 raise reraise(type(error), error, _stacktrace) 12:04:17 elif connect is not None: 12:04:17 connect -= 1 12:04:17 12:04:17 elif error and self._is_read_error(error): 12:04:17 # Read retry? 12:04:17 if read is False or method is None or not self._is_method_retryable(method): 12:04:17 raise reraise(type(error), error, _stacktrace) 12:04:17 elif read is not None: 12:04:17 read -= 1 12:04:17 12:04:17 elif error: 12:04:17 # Other retry? 12:04:17 if other is not None: 12:04:17 other -= 1 12:04:17 12:04:17 elif response and response.get_redirect_location(): 12:04:17 # Redirect retry? 12:04:17 if redirect is not None: 12:04:17 redirect -= 1 12:04:17 cause = "too many redirects" 12:04:17 response_redirect_location = response.get_redirect_location() 12:04:17 if response_redirect_location: 12:04:17 redirect_location = response_redirect_location 12:04:17 status = response.status 12:04:17 12:04:17 else: 12:04:17 # Incrementing because of a server error like a 500 in 12:04:17 # status_forcelist and the given method is in the allowed_methods 12:04:17 cause = ResponseError.GENERIC_ERROR 12:04:17 if response and response.status: 12:04:17 if status_count is not None: 12:04:17 status_count -= 1 12:04:17 cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status) 12:04:17 status = response.status 12:04:17 12:04:17 history = self.history + ( 12:04:17 RequestHistory(method, url, error, status, redirect_location), 12:04:17 ) 12:04:17 12:04:17 new_retry = self.new( 12:04:17 total=total, 12:04:17 connect=connect, 12:04:17 read=read, 12:04:17 redirect=redirect, 12:04:17 status=status_count, 12:04:17 other=other, 12:04:17 history=history, 12:04:17 ) 12:04:17 12:04:17 if new_retry.is_exhausted(): 12:04:17 reason = error or ResponseError(cause) 12:04:17 > raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type] 12:04:17 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 12:04:17 E urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=8183): Max retries exceeded with url: /rests/data/transportpce-portmapping:network/nodes=XPDR-A1/mapping=XPDR1-NETWORK1 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 12:04:17 12:04:17 ../.tox/tests221/lib/python3.11/site-packages/urllib3/util/retry.py:519: MaxRetryError 12:04:17 12:04:17 During handling of the above exception, another exception occurred: 12:04:17 12:04:17 self = 12:04:17 12:04:17 def test_11_xpdr_portmapping_NETWORK1(self): 12:04:17 > response = test_utils.get_portmapping_node_attr("XPDR-A1", "mapping", "XPDR1-NETWORK1") 12:04:17 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 12:04:17 12:04:17 transportpce_tests/2.2.1/test01_portmapping.py:130: 12:04:17 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 12:04:17 
transportpce_tests/common/test_utils.py:492: in get_portmapping_node_attr 12:04:17 response = get_request(target_url) 12:04:17 ^^^^^^^^^^^^^^^^^^^^^^^ 12:04:17 transportpce_tests/common/test_utils.py:116: in get_request 12:04:17 return requests.request( 12:04:17 ../.tox/tests221/lib/python3.11/site-packages/requests/api.py:59: in request 12:04:17 return session.request(method=method, url=url, **kwargs) 12:04:17 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 12:04:17 ../.tox/tests221/lib/python3.11/site-packages/requests/sessions.py:589: in request 12:04:17 resp = self.send(prep, **send_kwargs) 12:04:17 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 12:04:17 ../.tox/tests221/lib/python3.11/site-packages/requests/sessions.py:703: in send 12:04:17 r = adapter.send(request, **kwargs) 12:04:17 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 12:04:17 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 12:04:17 12:04:17 self = 12:04:17 request = , stream = False 12:04:17 timeout = Timeout(connect=30, read=30, total=None), verify = True, cert = None 12:04:17 proxies = OrderedDict() 12:04:17 12:04:17 def send( 12:04:17 self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 12:04:17 ): 12:04:17 """Sends PreparedRequest object. Returns Response object. 12:04:17 12:04:17 :param request: The :class:`PreparedRequest ` being sent. 12:04:17 :param stream: (optional) Whether to stream the request content. 12:04:17 :param timeout: (optional) How long to wait for the server to send 12:04:17 data before giving up, as a float, or a :ref:`(connect timeout, 12:04:17 read timeout) ` tuple. 12:04:17 :type timeout: float or tuple or urllib3 Timeout object 12:04:17 :param verify: (optional) Either a boolean, in which case it controls whether 12:04:17 we verify the server's TLS certificate, or a string, in which case it 12:04:17 must be a path to a CA bundle to use 12:04:17 :param cert: (optional) Any user-provided SSL certificate to be trusted. 12:04:17 :param proxies: (optional) The proxies dictionary to apply to the request. 12:04:17 :rtype: requests.Response 12:04:17 """ 12:04:17 12:04:17 try: 12:04:17 conn = self.get_connection_with_tls_context( 12:04:17 request, verify, proxies=proxies, cert=cert 12:04:17 ) 12:04:17 except LocationValueError as e: 12:04:17 raise InvalidURL(e, request=request) 12:04:17 12:04:17 self.cert_verify(conn, request.url, verify, cert) 12:04:17 url = self.request_url(request, proxies) 12:04:17 self.add_headers( 12:04:17 request, 12:04:17 stream=stream, 12:04:17 timeout=timeout, 12:04:17 verify=verify, 12:04:17 cert=cert, 12:04:17 proxies=proxies, 12:04:17 ) 12:04:17 12:04:17 chunked = not (request.body is None or "Content-Length" in request.headers) 12:04:17 12:04:17 if isinstance(timeout, tuple): 12:04:17 try: 12:04:17 connect, read = timeout 12:04:17 timeout = TimeoutSauce(connect=connect, read=read) 12:04:17 except ValueError: 12:04:17 raise ValueError( 12:04:17 f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 12:04:17 f"or a single float to set both timeouts to the same value." 
12:04:17 ) 12:04:17 elif isinstance(timeout, TimeoutSauce): 12:04:17 pass 12:04:17 else: 12:04:17 timeout = TimeoutSauce(connect=timeout, read=timeout) 12:04:17 12:04:17 try: 12:04:17 resp = conn.urlopen( 12:04:17 method=request.method, 12:04:17 url=url, 12:04:17 body=request.body, 12:04:17 headers=request.headers, 12:04:17 redirect=False, 12:04:17 assert_same_host=False, 12:04:17 preload_content=False, 12:04:17 decode_content=False, 12:04:17 retries=self.max_retries, 12:04:17 timeout=timeout, 12:04:17 chunked=chunked, 12:04:17 ) 12:04:17 12:04:17 except (ProtocolError, OSError) as err: 12:04:17 raise ConnectionError(err, request=request) 12:04:17 12:04:17 except MaxRetryError as e: 12:04:17 if isinstance(e.reason, ConnectTimeoutError): 12:04:17 # TODO: Remove this in 3.0.0: see #2811 12:04:17 if not isinstance(e.reason, NewConnectionError): 12:04:17 raise ConnectTimeout(e, request=request) 12:04:17 12:04:17 if isinstance(e.reason, ResponseError): 12:04:17 raise RetryError(e, request=request) 12:04:17 12:04:17 if isinstance(e.reason, _ProxyError): 12:04:17 raise ProxyError(e, request=request) 12:04:17 12:04:17 if isinstance(e.reason, _SSLError): 12:04:17 # This branch is for urllib3 v1.22 and later. 12:04:17 raise SSLError(e, request=request) 12:04:17 12:04:17 > raise ConnectionError(e, request=request) 12:04:17 E requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=8183): Max retries exceeded with url: /rests/data/transportpce-portmapping:network/nodes=XPDR-A1/mapping=XPDR1-NETWORK1 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 12:04:17 12:04:17 ../.tox/tests221/lib/python3.11/site-packages/requests/adapters.py:700: ConnectionError 12:04:17 ----------------------------- Captured stdout call ----------------------------- 12:04:17 execution of test_11_xpdr_portmapping_NETWORK1 12:04:17 ____ TransportPCEPortMappingTesting.test_12_xpdr_portmapping_XPDR2_NETWORK1 ____ 12:04:17 12:04:17 self = 12:04:17 12:04:17 def _new_conn(self) -> socket.socket: 12:04:17 """Establish a socket connection and set nodelay settings on it. 12:04:17 12:04:17 :return: New socket connection. 12:04:17 """ 12:04:17 try: 12:04:17 > sock = connection.create_connection( 12:04:17 (self._dns_host, self.port), 12:04:17 self.timeout, 12:04:17 source_address=self.source_address, 12:04:17 socket_options=self.socket_options, 12:04:17 ) 12:04:17 12:04:17 ../.tox/tests221/lib/python3.11/site-packages/urllib3/connection.py:198: 12:04:17 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 12:04:17 ../.tox/tests221/lib/python3.11/site-packages/urllib3/util/connection.py:85: in create_connection 12:04:17 raise err 12:04:17 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 12:04:17 12:04:17 address = ('localhost', 8183), timeout = 30, source_address = None 12:04:17 socket_options = [(6, 1, 1)] 12:04:17 12:04:17 def create_connection( 12:04:17 address: tuple[str, int], 12:04:17 timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 12:04:17 source_address: tuple[str, int] | None = None, 12:04:17 socket_options: _TYPE_SOCKET_OPTIONS | None = None, 12:04:17 ) -> socket.socket: 12:04:17 """Connect to *address* and return the socket object. 12:04:17 12:04:17 Convenience function. Connect to *address* (a 2-tuple ``(host, 12:04:17 port)``) and return the socket object. Passing the optional 12:04:17 *timeout* parameter will set the timeout on the socket instance 12:04:17 before attempting to connect. 
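The adapter code captured above is what turns each of these failures into the reported test error: nothing is listening on localhost:8183, so urllib3 raises NewConnectionError, the default Retry(total=0) is exhausted immediately, and requests' HTTPAdapter re-raises the resulting MaxRetryError as requests.exceptions.ConnectionError inside get_portmapping_node_attr. Purely as an illustration (the helper below is hypothetical and not part of transportpce_tests.common.test_utils), the same portmapping GET could be polled until the RESTCONF endpoint answers instead of failing on the first refused connection:

import time
import requests

# Hypothetical names, for illustration only -- not part of test_utils.
RESTCONF_BASE = "http://localhost:8183/rests/data"
MAPPING_URL = (RESTCONF_BASE
               + "/transportpce-portmapping:network/nodes=XPDR-A1/mapping=XPDR1-NETWORK1")

def wait_for_restconf(url, attempts=30, delay=2.0):
    # Poll the endpoint until the controller answers instead of failing
    # immediately with ConnectionError, as in the traceback above.
    for _ in range(attempts):
        try:
            return requests.get(url, auth=("admin", "admin"),
                                headers={"Accept": "application/json"},
                                timeout=(30, 30))
        except requests.exceptions.ConnectionError:
            time.sleep(delay)
    raise RuntimeError("RESTCONF endpoint never became reachable: " + url)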
If no *timeout* is supplied, the 12:04:17 global default timeout setting returned by :func:`socket.getdefaulttimeout` 12:04:17 is used. If *source_address* is set it must be a tuple of (host, port) 12:04:17 for the socket to bind as a source address before making the connection. 12:04:17 An host of '' or port 0 tells the OS to use the default. 12:04:17 """ 12:04:17 12:04:17 host, port = address 12:04:17 if host.startswith("["): 12:04:17 host = host.strip("[]") 12:04:17 err = None 12:04:17 12:04:17 # Using the value from allowed_gai_family() in the context of getaddrinfo lets 12:04:17 # us select whether to work with IPv4 DNS records, IPv6 records, or both. 12:04:17 # The original create_connection function always returns all records. 12:04:17 family = allowed_gai_family() 12:04:17 12:04:17 try: 12:04:17 host.encode("idna") 12:04:17 except UnicodeError: 12:04:17 raise LocationParseError(f"'{host}', label empty or too long") from None 12:04:17 12:04:17 for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM): 12:04:17 af, socktype, proto, canonname, sa = res 12:04:17 sock = None 12:04:17 try: 12:04:17 sock = socket.socket(af, socktype, proto) 12:04:17 12:04:17 # If provided, set socket level options before connecting. 12:04:17 _set_socket_options(sock, socket_options) 12:04:17 12:04:17 if timeout is not _DEFAULT_TIMEOUT: 12:04:17 sock.settimeout(timeout) 12:04:17 if source_address: 12:04:17 sock.bind(source_address) 12:04:17 > sock.connect(sa) 12:04:17 E ConnectionRefusedError: [Errno 111] Connection refused 12:04:17 12:04:17 ../.tox/tests221/lib/python3.11/site-packages/urllib3/util/connection.py:73: ConnectionRefusedError 12:04:17 12:04:17 The above exception was the direct cause of the following exception: 12:04:17 12:04:17 self = 12:04:17 method = 'GET' 12:04:17 url = '/rests/data/transportpce-portmapping:network/nodes=XPDR-A1/mapping=XPDR1-NETWORK2' 12:04:17 body = None 12:04:17 headers = {'User-Agent': 'python-requests/2.32.4', 'Accept-Encoding': 'gzip, deflate', 'Accept': 'application/json', 'Connection': 'keep-alive', 'Content-Type': 'application/json', 'Authorization': 'Basic YWRtaW46YWRtaW4='} 12:04:17 retries = Retry(total=0, connect=None, read=False, redirect=None, status=None) 12:04:17 redirect = False, assert_same_host = False 12:04:17 timeout = Timeout(connect=30, read=30, total=None), pool_timeout = None 12:04:17 release_conn = False, chunked = False, body_pos = None, preload_content = False 12:04:17 decode_content = False, response_kw = {} 12:04:17 parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/rests/data/transportpce-portmapping:network/nodes=XPDR-A1/mapping=XPDR1-NETWORK2', query=None, fragment=None) 12:04:17 destination_scheme = None, conn = None, release_this_conn = True 12:04:17 http_tunnel_required = False, err = None, clean_exit = False 12:04:17 12:04:17 def urlopen( # type: ignore[override] 12:04:17 self, 12:04:17 method: str, 12:04:17 url: str, 12:04:17 body: _TYPE_BODY | None = None, 12:04:17 headers: typing.Mapping[str, str] | None = None, 12:04:17 retries: Retry | bool | int | None = None, 12:04:17 redirect: bool = True, 12:04:17 assert_same_host: bool = True, 12:04:17 timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 12:04:17 pool_timeout: int | None = None, 12:04:17 release_conn: bool | None = None, 12:04:17 chunked: bool = False, 12:04:17 body_pos: _TYPE_BODY_POSITION | None = None, 12:04:17 preload_content: bool = True, 12:04:17 decode_content: bool = True, 12:04:17 **response_kw: typing.Any, 12:04:17 ) -> BaseHTTPResponse: 
12:04:17 """ 12:04:17 Get a connection from the pool and perform an HTTP request. This is the 12:04:17 lowest level call for making a request, so you'll need to specify all 12:04:17 the raw details. 12:04:17 12:04:17 .. note:: 12:04:17 12:04:17 More commonly, it's appropriate to use a convenience method 12:04:17 such as :meth:`request`. 12:04:17 12:04:17 .. note:: 12:04:17 12:04:17 `release_conn` will only behave as expected if 12:04:17 `preload_content=False` because we want to make 12:04:17 `preload_content=False` the default behaviour someday soon without 12:04:17 breaking backwards compatibility. 12:04:17 12:04:17 :param method: 12:04:17 HTTP request method (such as GET, POST, PUT, etc.) 12:04:17 12:04:17 :param url: 12:04:17 The URL to perform the request on. 12:04:17 12:04:17 :param body: 12:04:17 Data to send in the request body, either :class:`str`, :class:`bytes`, 12:04:17 an iterable of :class:`str`/:class:`bytes`, or a file-like object. 12:04:17 12:04:17 :param headers: 12:04:17 Dictionary of custom headers to send, such as User-Agent, 12:04:17 If-None-Match, etc. If None, pool headers are used. If provided, 12:04:17 these headers completely replace any pool-specific headers. 12:04:17 12:04:17 :param retries: 12:04:17 Configure the number of retries to allow before raising a 12:04:17 :class:`~urllib3.exceptions.MaxRetryError` exception. 12:04:17 12:04:17 If ``None`` (default) will retry 3 times, see ``Retry.DEFAULT``. Pass a 12:04:17 :class:`~urllib3.util.retry.Retry` object for fine-grained control 12:04:17 over different types of retries. 12:04:17 Pass an integer number to retry connection errors that many times, 12:04:17 but no other types of errors. Pass zero to never retry. 12:04:17 12:04:17 If ``False``, then retries are disabled and any exception is raised 12:04:17 immediately. Also, instead of raising a MaxRetryError on redirects, 12:04:17 the redirect response will be returned. 12:04:17 12:04:17 :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. 12:04:17 12:04:17 :param redirect: 12:04:17 If True, automatically handle redirects (status codes 301, 302, 12:04:17 303, 307, 308). Each redirect counts as a retry. Disabling retries 12:04:17 will disable redirect, too. 12:04:17 12:04:17 :param assert_same_host: 12:04:17 If ``True``, will make sure that the host of the pool requests is 12:04:17 consistent else will raise HostChangedError. When ``False``, you can 12:04:17 use the pool on an HTTP proxy and request foreign hosts. 12:04:17 12:04:17 :param timeout: 12:04:17 If specified, overrides the default timeout for this one 12:04:17 request. It may be a float (in seconds) or an instance of 12:04:17 :class:`urllib3.util.Timeout`. 12:04:17 12:04:17 :param pool_timeout: 12:04:17 If set and the pool is set to block=True, then this method will 12:04:17 block for ``pool_timeout`` seconds and raise EmptyPoolError if no 12:04:17 connection is available within the time period. 12:04:17 12:04:17 :param bool preload_content: 12:04:17 If True, the response's body will be preloaded into memory. 12:04:17 12:04:17 :param bool decode_content: 12:04:17 If True, will attempt to decode the body based on the 12:04:17 'content-encoding' header. 12:04:17 12:04:17 :param release_conn: 12:04:17 If False, then the urlopen call will not release the connection 12:04:17 back into the pool once a response is received (but will release if 12:04:17 you read the entire contents of the response such as when 12:04:17 `preload_content=True`). 
This is useful if you're not preloading 12:04:17 the response's content immediately. You will need to call 12:04:17 ``r.release_conn()`` on the response ``r`` to return the connection 12:04:17 back into the pool. If None, it takes the value of ``preload_content`` 12:04:17 which defaults to ``True``. 12:04:17 12:04:17 :param bool chunked: 12:04:17 If True, urllib3 will send the body using chunked transfer 12:04:17 encoding. Otherwise, urllib3 will send the body using the standard 12:04:17 content-length form. Defaults to False. 12:04:17 12:04:17 :param int body_pos: 12:04:17 Position to seek to in file-like body in the event of a retry or 12:04:17 redirect. Typically this won't need to be set because urllib3 will 12:04:17 auto-populate the value when needed. 12:04:17 """ 12:04:17 parsed_url = parse_url(url) 12:04:17 destination_scheme = parsed_url.scheme 12:04:17 12:04:17 if headers is None: 12:04:17 headers = self.headers 12:04:17 12:04:17 if not isinstance(retries, Retry): 12:04:17 retries = Retry.from_int(retries, redirect=redirect, default=self.retries) 12:04:17 12:04:17 if release_conn is None: 12:04:17 release_conn = preload_content 12:04:17 12:04:17 # Check host 12:04:17 if assert_same_host and not self.is_same_host(url): 12:04:17 raise HostChangedError(self, url, retries) 12:04:17 12:04:17 # Ensure that the URL we're connecting to is properly encoded 12:04:17 if url.startswith("/"): 12:04:17 url = to_str(_encode_target(url)) 12:04:17 else: 12:04:17 url = to_str(parsed_url.url) 12:04:17 12:04:17 conn = None 12:04:17 12:04:17 # Track whether `conn` needs to be released before 12:04:17 # returning/raising/recursing. Update this variable if necessary, and 12:04:17 # leave `release_conn` constant throughout the function. That way, if 12:04:17 # the function recurses, the original value of `release_conn` will be 12:04:17 # passed down into the recursive call, and its value will be respected. 12:04:17 # 12:04:17 # See issue #651 [1] for details. 12:04:17 # 12:04:17 # [1] 12:04:17 release_this_conn = release_conn 12:04:17 12:04:17 http_tunnel_required = connection_requires_http_tunnel( 12:04:17 self.proxy, self.proxy_config, destination_scheme 12:04:17 ) 12:04:17 12:04:17 # Merge the proxy headers. Only done when not using HTTP CONNECT. We 12:04:17 # have to copy the headers dict so we can safely change it without those 12:04:17 # changes being reflected in anyone else's copy. 12:04:17 if not http_tunnel_required: 12:04:17 headers = headers.copy() # type: ignore[attr-defined] 12:04:17 headers.update(self.proxy_headers) # type: ignore[union-attr] 12:04:17 12:04:17 # Must keep the exception bound to a separate variable or else Python 3 12:04:17 # complains about UnboundLocalError. 12:04:17 err = None 12:04:17 12:04:17 # Keep track of whether we cleanly exited the except block. This 12:04:17 # ensures we do proper cleanup in finally. 12:04:17 clean_exit = False 12:04:17 12:04:17 # Rewind body position, if needed. Record current position 12:04:17 # for future rewinds in the event of a redirect/retry. 12:04:17 body_pos = set_file_position(body, body_pos) 12:04:17 12:04:17 try: 12:04:17 # Request a connection from the queue. 12:04:17 timeout_obj = self._get_timeout(timeout) 12:04:17 conn = self._get_conn(timeout=pool_timeout) 12:04:17 12:04:17 conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] 12:04:17 12:04:17 # Is this a closed/new connection that requires CONNECT tunnelling? 
12:04:17 if self.proxy is not None and http_tunnel_required and conn.is_closed: 12:04:17 try: 12:04:17 self._prepare_proxy(conn) 12:04:17 except (BaseSSLError, OSError, SocketTimeout) as e: 12:04:17 self._raise_timeout( 12:04:17 err=e, url=self.proxy.url, timeout_value=conn.timeout 12:04:17 ) 12:04:17 raise 12:04:17 12:04:17 # If we're going to release the connection in ``finally:``, then 12:04:17 # the response doesn't need to know about the connection. Otherwise 12:04:17 # it will also try to release it and we'll have a double-release 12:04:17 # mess. 12:04:17 response_conn = conn if not release_conn else None 12:04:17 12:04:17 # Make the request on the HTTPConnection object 12:04:17 > response = self._make_request( 12:04:17 conn, 12:04:17 method, 12:04:17 url, 12:04:17 timeout=timeout_obj, 12:04:17 body=body, 12:04:17 headers=headers, 12:04:17 chunked=chunked, 12:04:17 retries=retries, 12:04:17 response_conn=response_conn, 12:04:17 preload_content=preload_content, 12:04:17 decode_content=decode_content, 12:04:17 **response_kw, 12:04:17 ) 12:04:17 12:04:17 ../.tox/tests221/lib/python3.11/site-packages/urllib3/connectionpool.py:787: 12:04:17 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 12:04:17 ../.tox/tests221/lib/python3.11/site-packages/urllib3/connectionpool.py:493: in _make_request 12:04:17 conn.request( 12:04:17 ../.tox/tests221/lib/python3.11/site-packages/urllib3/connection.py:494: in request 12:04:17 self.endheaders() 12:04:17 /opt/pyenv/versions/3.11.10/lib/python3.11/http/client.py:1298: in endheaders 12:04:17 self._send_output(message_body, encode_chunked=encode_chunked) 12:04:17 /opt/pyenv/versions/3.11.10/lib/python3.11/http/client.py:1058: in _send_output 12:04:17 self.send(msg) 12:04:17 /opt/pyenv/versions/3.11.10/lib/python3.11/http/client.py:996: in send 12:04:17 self.connect() 12:04:17 ../.tox/tests221/lib/python3.11/site-packages/urllib3/connection.py:325: in connect 12:04:17 self.sock = self._new_conn() 12:04:17 ^^^^^^^^^^^^^^^^ 12:04:17 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 12:04:17 12:04:17 self = 12:04:17 12:04:17 def _new_conn(self) -> socket.socket: 12:04:17 """Establish a socket connection and set nodelay settings on it. 12:04:17 12:04:17 :return: New socket connection. 12:04:17 """ 12:04:17 try: 12:04:17 sock = connection.create_connection( 12:04:17 (self._dns_host, self.port), 12:04:17 self.timeout, 12:04:17 source_address=self.source_address, 12:04:17 socket_options=self.socket_options, 12:04:17 ) 12:04:17 except socket.gaierror as e: 12:04:17 raise NameResolutionError(self.host, self, e) from e 12:04:17 except SocketTimeout as e: 12:04:17 raise ConnectTimeoutError( 12:04:17 self, 12:04:17 f"Connection to {self.host} timed out. 
(connect timeout={self.timeout})", 12:04:17 ) from e 12:04:17 12:04:17 except OSError as e: 12:04:17 > raise NewConnectionError( 12:04:17 self, f"Failed to establish a new connection: {e}" 12:04:17 ) from e 12:04:17 E urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused 12:04:17 12:04:17 ../.tox/tests221/lib/python3.11/site-packages/urllib3/connection.py:213: NewConnectionError 12:04:17 12:04:17 The above exception was the direct cause of the following exception: 12:04:17 12:04:17 self = 12:04:17 request = , stream = False 12:04:17 timeout = Timeout(connect=30, read=30, total=None), verify = True, cert = None 12:04:17 proxies = OrderedDict() 12:04:17 12:04:17 def send( 12:04:17 self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 12:04:17 ): 12:04:17 """Sends PreparedRequest object. Returns Response object. 12:04:17 12:04:17 :param request: The :class:`PreparedRequest ` being sent. 12:04:17 :param stream: (optional) Whether to stream the request content. 12:04:17 :param timeout: (optional) How long to wait for the server to send 12:04:17 data before giving up, as a float, or a :ref:`(connect timeout, 12:04:17 read timeout) ` tuple. 12:04:17 :type timeout: float or tuple or urllib3 Timeout object 12:04:17 :param verify: (optional) Either a boolean, in which case it controls whether 12:04:17 we verify the server's TLS certificate, or a string, in which case it 12:04:17 must be a path to a CA bundle to use 12:04:17 :param cert: (optional) Any user-provided SSL certificate to be trusted. 12:04:17 :param proxies: (optional) The proxies dictionary to apply to the request. 12:04:17 :rtype: requests.Response 12:04:17 """ 12:04:17 12:04:17 try: 12:04:17 conn = self.get_connection_with_tls_context( 12:04:17 request, verify, proxies=proxies, cert=cert 12:04:17 ) 12:04:17 except LocationValueError as e: 12:04:17 raise InvalidURL(e, request=request) 12:04:17 12:04:17 self.cert_verify(conn, request.url, verify, cert) 12:04:17 url = self.request_url(request, proxies) 12:04:17 self.add_headers( 12:04:17 request, 12:04:17 stream=stream, 12:04:17 timeout=timeout, 12:04:17 verify=verify, 12:04:17 cert=cert, 12:04:17 proxies=proxies, 12:04:17 ) 12:04:17 12:04:17 chunked = not (request.body is None or "Content-Length" in request.headers) 12:04:17 12:04:17 if isinstance(timeout, tuple): 12:04:17 try: 12:04:17 connect, read = timeout 12:04:17 timeout = TimeoutSauce(connect=connect, read=read) 12:04:17 except ValueError: 12:04:17 raise ValueError( 12:04:17 f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 12:04:17 f"or a single float to set both timeouts to the same value." 
12:04:17 ) 12:04:17 elif isinstance(timeout, TimeoutSauce): 12:04:17 pass 12:04:17 else: 12:04:17 timeout = TimeoutSauce(connect=timeout, read=timeout) 12:04:17 12:04:17 try: 12:04:17 > resp = conn.urlopen( 12:04:17 method=request.method, 12:04:17 url=url, 12:04:17 body=request.body, 12:04:17 headers=request.headers, 12:04:17 redirect=False, 12:04:17 assert_same_host=False, 12:04:17 preload_content=False, 12:04:17 decode_content=False, 12:04:17 retries=self.max_retries, 12:04:17 timeout=timeout, 12:04:17 chunked=chunked, 12:04:17 ) 12:04:17 12:04:17 ../.tox/tests221/lib/python3.11/site-packages/requests/adapters.py:667: 12:04:17 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 12:04:17 ../.tox/tests221/lib/python3.11/site-packages/urllib3/connectionpool.py:841: in urlopen 12:04:17 retries = retries.increment( 12:04:17 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 12:04:17 12:04:17 self = Retry(total=0, connect=None, read=False, redirect=None, status=None) 12:04:17 method = 'GET' 12:04:17 url = '/rests/data/transportpce-portmapping:network/nodes=XPDR-A1/mapping=XPDR1-NETWORK2' 12:04:17 response = None 12:04:17 error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused') 12:04:17 _pool = 12:04:17 _stacktrace = 12:04:17 12:04:17 def increment( 12:04:17 self, 12:04:17 method: str | None = None, 12:04:17 url: str | None = None, 12:04:17 response: BaseHTTPResponse | None = None, 12:04:17 error: Exception | None = None, 12:04:17 _pool: ConnectionPool | None = None, 12:04:17 _stacktrace: TracebackType | None = None, 12:04:17 ) -> Self: 12:04:17 """Return a new Retry object with incremented retry counters. 12:04:17 12:04:17 :param response: A response object, or None, if the server did not 12:04:17 return a response. 12:04:17 :type response: :class:`~urllib3.response.BaseHTTPResponse` 12:04:17 :param Exception error: An error encountered during the request, or 12:04:17 None if the response was received successfully. 12:04:17 12:04:17 :return: A new ``Retry`` object. 12:04:17 """ 12:04:17 if self.total is False and error: 12:04:17 # Disabled, indicate to re-raise the error. 12:04:17 raise reraise(type(error), error, _stacktrace) 12:04:17 12:04:17 total = self.total 12:04:17 if total is not None: 12:04:17 total -= 1 12:04:17 12:04:17 connect = self.connect 12:04:17 read = self.read 12:04:17 redirect = self.redirect 12:04:17 status_count = self.status 12:04:17 other = self.other 12:04:17 cause = "unknown" 12:04:17 status = None 12:04:17 redirect_location = None 12:04:17 12:04:17 if error and self._is_connection_error(error): 12:04:17 # Connect retry? 12:04:17 if connect is False: 12:04:17 raise reraise(type(error), error, _stacktrace) 12:04:17 elif connect is not None: 12:04:17 connect -= 1 12:04:17 12:04:17 elif error and self._is_read_error(error): 12:04:17 # Read retry? 12:04:17 if read is False or method is None or not self._is_method_retryable(method): 12:04:17 raise reraise(type(error), error, _stacktrace) 12:04:17 elif read is not None: 12:04:17 read -= 1 12:04:17 12:04:17 elif error: 12:04:17 # Other retry? 12:04:17 if other is not None: 12:04:17 other -= 1 12:04:17 12:04:17 elif response and response.get_redirect_location(): 12:04:17 # Redirect retry? 
12:04:17 if redirect is not None: 12:04:17 redirect -= 1 12:04:17 cause = "too many redirects" 12:04:17 response_redirect_location = response.get_redirect_location() 12:04:17 if response_redirect_location: 12:04:17 redirect_location = response_redirect_location 12:04:17 status = response.status 12:04:17 12:04:17 else: 12:04:17 # Incrementing because of a server error like a 500 in 12:04:17 # status_forcelist and the given method is in the allowed_methods 12:04:17 cause = ResponseError.GENERIC_ERROR 12:04:17 if response and response.status: 12:04:17 if status_count is not None: 12:04:17 status_count -= 1 12:04:17 cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status) 12:04:17 status = response.status 12:04:17 12:04:17 history = self.history + ( 12:04:17 RequestHistory(method, url, error, status, redirect_location), 12:04:17 ) 12:04:17 12:04:17 new_retry = self.new( 12:04:17 total=total, 12:04:17 connect=connect, 12:04:17 read=read, 12:04:17 redirect=redirect, 12:04:17 status=status_count, 12:04:17 other=other, 12:04:17 history=history, 12:04:17 ) 12:04:17 12:04:17 if new_retry.is_exhausted(): 12:04:17 reason = error or ResponseError(cause) 12:04:17 > raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type] 12:04:17 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 12:04:17 E urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=8183): Max retries exceeded with url: /rests/data/transportpce-portmapping:network/nodes=XPDR-A1/mapping=XPDR1-NETWORK2 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 12:04:17 12:04:17 ../.tox/tests221/lib/python3.11/site-packages/urllib3/util/retry.py:519: MaxRetryError 12:04:17 12:04:17 During handling of the above exception, another exception occurred: 12:04:17 12:04:17 self = 12:04:17 12:04:17 def test_12_xpdr_portmapping_XPDR2_NETWORK1(self): 12:04:17 > response = test_utils.get_portmapping_node_attr("XPDR-A1", "mapping", "XPDR1-NETWORK2") 12:04:17 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 12:04:17 12:04:17 transportpce_tests/2.2.1/test01_portmapping.py:143: 12:04:17 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 12:04:17 transportpce_tests/common/test_utils.py:492: in get_portmapping_node_attr 12:04:17 response = get_request(target_url) 12:04:17 ^^^^^^^^^^^^^^^^^^^^^^^ 12:04:17 transportpce_tests/common/test_utils.py:116: in get_request 12:04:17 return requests.request( 12:04:17 ../.tox/tests221/lib/python3.11/site-packages/requests/api.py:59: in request 12:04:17 return session.request(method=method, url=url, **kwargs) 12:04:17 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 12:04:17 ../.tox/tests221/lib/python3.11/site-packages/requests/sessions.py:589: in request 12:04:17 resp = self.send(prep, **send_kwargs) 12:04:17 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 12:04:17 ../.tox/tests221/lib/python3.11/site-packages/requests/sessions.py:703: in send 12:04:17 r = adapter.send(request, **kwargs) 12:04:17 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 12:04:17 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 12:04:17 12:04:17 self = 12:04:17 request = , stream = False 12:04:17 timeout = Timeout(connect=30, read=30, total=None), verify = True, cert = None 12:04:17 proxies = OrderedDict() 12:04:17 12:04:17 def send( 12:04:17 self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 12:04:17 ): 12:04:17 """Sends PreparedRequest object. 
Returns Response object. 12:04:17 12:04:17 :param request: The :class:`PreparedRequest ` being sent. 12:04:17 :param stream: (optional) Whether to stream the request content. 12:04:17 :param timeout: (optional) How long to wait for the server to send 12:04:17 data before giving up, as a float, or a :ref:`(connect timeout, 12:04:17 read timeout) ` tuple. 12:04:17 :type timeout: float or tuple or urllib3 Timeout object 12:04:17 :param verify: (optional) Either a boolean, in which case it controls whether 12:04:17 we verify the server's TLS certificate, or a string, in which case it 12:04:17 must be a path to a CA bundle to use 12:04:17 :param cert: (optional) Any user-provided SSL certificate to be trusted. 12:04:17 :param proxies: (optional) The proxies dictionary to apply to the request. 12:04:17 :rtype: requests.Response 12:04:17 """ 12:04:17 12:04:17 try: 12:04:17 conn = self.get_connection_with_tls_context( 12:04:17 request, verify, proxies=proxies, cert=cert 12:04:17 ) 12:04:17 except LocationValueError as e: 12:04:17 raise InvalidURL(e, request=request) 12:04:17 12:04:17 self.cert_verify(conn, request.url, verify, cert) 12:04:17 url = self.request_url(request, proxies) 12:04:17 self.add_headers( 12:04:17 request, 12:04:17 stream=stream, 12:04:17 timeout=timeout, 12:04:17 verify=verify, 12:04:17 cert=cert, 12:04:17 proxies=proxies, 12:04:17 ) 12:04:17 12:04:17 chunked = not (request.body is None or "Content-Length" in request.headers) 12:04:17 12:04:17 if isinstance(timeout, tuple): 12:04:17 try: 12:04:17 connect, read = timeout 12:04:17 timeout = TimeoutSauce(connect=connect, read=read) 12:04:17 except ValueError: 12:04:17 raise ValueError( 12:04:17 f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 12:04:17 f"or a single float to set both timeouts to the same value." 12:04:17 ) 12:04:17 elif isinstance(timeout, TimeoutSauce): 12:04:17 pass 12:04:17 else: 12:04:17 timeout = TimeoutSauce(connect=timeout, read=timeout) 12:04:17 12:04:17 try: 12:04:17 resp = conn.urlopen( 12:04:17 method=request.method, 12:04:17 url=url, 12:04:17 body=request.body, 12:04:17 headers=request.headers, 12:04:17 redirect=False, 12:04:17 assert_same_host=False, 12:04:17 preload_content=False, 12:04:17 decode_content=False, 12:04:17 retries=self.max_retries, 12:04:17 timeout=timeout, 12:04:17 chunked=chunked, 12:04:17 ) 12:04:17 12:04:17 except (ProtocolError, OSError) as err: 12:04:17 raise ConnectionError(err, request=request) 12:04:17 12:04:17 except MaxRetryError as e: 12:04:17 if isinstance(e.reason, ConnectTimeoutError): 12:04:17 # TODO: Remove this in 3.0.0: see #2811 12:04:17 if not isinstance(e.reason, NewConnectionError): 12:04:17 raise ConnectTimeout(e, request=request) 12:04:17 12:04:17 if isinstance(e.reason, ResponseError): 12:04:17 raise RetryError(e, request=request) 12:04:17 12:04:17 if isinstance(e.reason, _ProxyError): 12:04:17 raise ProxyError(e, request=request) 12:04:17 12:04:17 if isinstance(e.reason, _SSLError): 12:04:17 # This branch is for urllib3 v1.22 and later. 
12:04:17 raise SSLError(e, request=request) 12:04:17 12:04:17 > raise ConnectionError(e, request=request) 12:04:17 E requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=8183): Max retries exceeded with url: /rests/data/transportpce-portmapping:network/nodes=XPDR-A1/mapping=XPDR1-NETWORK2 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 12:04:17 12:04:17 ../.tox/tests221/lib/python3.11/site-packages/requests/adapters.py:700: ConnectionError 12:04:17 ----------------------------- Captured stdout call ----------------------------- 12:04:17 execution of test_12_xpdr_portmapping_XPDR2_NETWORK1 12:04:17 ____ TransportPCEPortMappingTesting.test_13_xpdr_portmapping_XPDR1_CLIENT1 _____ 12:04:17 12:04:17 self = 12:04:17 12:04:17 def _new_conn(self) -> socket.socket: 12:04:17 """Establish a socket connection and set nodelay settings on it. 12:04:17 12:04:17 :return: New socket connection. 12:04:17 """ 12:04:17 try: 12:04:17 > sock = connection.create_connection( 12:04:17 (self._dns_host, self.port), 12:04:17 self.timeout, 12:04:17 source_address=self.source_address, 12:04:17 socket_options=self.socket_options, 12:04:17 ) 12:04:17 12:04:17 ../.tox/tests221/lib/python3.11/site-packages/urllib3/connection.py:198: 12:04:17 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 12:04:17 ../.tox/tests221/lib/python3.11/site-packages/urllib3/util/connection.py:85: in create_connection 12:04:17 raise err 12:04:17 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 12:04:17 12:04:17 address = ('localhost', 8183), timeout = 30, source_address = None 12:04:17 socket_options = [(6, 1, 1)] 12:04:17 12:04:17 def create_connection( 12:04:17 address: tuple[str, int], 12:04:17 timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 12:04:17 source_address: tuple[str, int] | None = None, 12:04:17 socket_options: _TYPE_SOCKET_OPTIONS | None = None, 12:04:17 ) -> socket.socket: 12:04:17 """Connect to *address* and return the socket object. 12:04:17 12:04:17 Convenience function. Connect to *address* (a 2-tuple ``(host, 12:04:17 port)``) and return the socket object. Passing the optional 12:04:17 *timeout* parameter will set the timeout on the socket instance 12:04:17 before attempting to connect. If no *timeout* is supplied, the 12:04:17 global default timeout setting returned by :func:`socket.getdefaulttimeout` 12:04:17 is used. If *source_address* is set it must be a tuple of (host, port) 12:04:17 for the socket to bind as a source address before making the connection. 12:04:17 An host of '' or port 0 tells the OS to use the default. 12:04:17 """ 12:04:17 12:04:17 host, port = address 12:04:17 if host.startswith("["): 12:04:17 host = host.strip("[]") 12:04:17 err = None 12:04:17 12:04:17 # Using the value from allowed_gai_family() in the context of getaddrinfo lets 12:04:17 # us select whether to work with IPv4 DNS records, IPv6 records, or both. 12:04:17 # The original create_connection function always returns all records. 
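At the bottom of every one of these chains, urllib3's create_connection (quoted above) is simply performing a TCP connect to ('localhost', 8183) and getting ECONNREFUSED because no controller process is listening there. The same condition can be reproduced with the standard library alone; a small, purely illustrative probe:

import errno
import socket

# Illustrative only: probe the RESTCONF port the tests expect to reach.
try:
    with socket.create_connection(("localhost", 8183), timeout=5):
        print("port 8183 is accepting connections")
except OSError as exc:
    if exc.errno == errno.ECONNREFUSED:
        print("connection refused -- nothing is listening on 8183")
    else:
        raise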
12:04:17 family = allowed_gai_family() 12:04:17 12:04:17 try: 12:04:17 host.encode("idna") 12:04:17 except UnicodeError: 12:04:17 raise LocationParseError(f"'{host}', label empty or too long") from None 12:04:17 12:04:17 for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM): 12:04:17 af, socktype, proto, canonname, sa = res 12:04:17 sock = None 12:04:17 try: 12:04:17 sock = socket.socket(af, socktype, proto) 12:04:17 12:04:17 # If provided, set socket level options before connecting. 12:04:17 _set_socket_options(sock, socket_options) 12:04:17 12:04:17 if timeout is not _DEFAULT_TIMEOUT: 12:04:17 sock.settimeout(timeout) 12:04:17 if source_address: 12:04:17 sock.bind(source_address) 12:04:17 > sock.connect(sa) 12:04:17 E ConnectionRefusedError: [Errno 111] Connection refused 12:04:17 12:04:17 ../.tox/tests221/lib/python3.11/site-packages/urllib3/util/connection.py:73: ConnectionRefusedError 12:04:17 12:04:17 The above exception was the direct cause of the following exception: 12:04:17 12:04:17 self = 12:04:17 method = 'GET' 12:04:17 url = '/rests/data/transportpce-portmapping:network/nodes=XPDR-A1/mapping=XPDR1-CLIENT1' 12:04:17 body = None 12:04:17 headers = {'User-Agent': 'python-requests/2.32.4', 'Accept-Encoding': 'gzip, deflate', 'Accept': 'application/json', 'Connection': 'keep-alive', 'Content-Type': 'application/json', 'Authorization': 'Basic YWRtaW46YWRtaW4='} 12:04:17 retries = Retry(total=0, connect=None, read=False, redirect=None, status=None) 12:04:17 redirect = False, assert_same_host = False 12:04:17 timeout = Timeout(connect=30, read=30, total=None), pool_timeout = None 12:04:17 release_conn = False, chunked = False, body_pos = None, preload_content = False 12:04:17 decode_content = False, response_kw = {} 12:04:17 parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/rests/data/transportpce-portmapping:network/nodes=XPDR-A1/mapping=XPDR1-CLIENT1', query=None, fragment=None) 12:04:17 destination_scheme = None, conn = None, release_this_conn = True 12:04:17 http_tunnel_required = False, err = None, clean_exit = False 12:04:17 12:04:17 def urlopen( # type: ignore[override] 12:04:17 self, 12:04:17 method: str, 12:04:17 url: str, 12:04:17 body: _TYPE_BODY | None = None, 12:04:17 headers: typing.Mapping[str, str] | None = None, 12:04:17 retries: Retry | bool | int | None = None, 12:04:17 redirect: bool = True, 12:04:17 assert_same_host: bool = True, 12:04:17 timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 12:04:17 pool_timeout: int | None = None, 12:04:17 release_conn: bool | None = None, 12:04:17 chunked: bool = False, 12:04:17 body_pos: _TYPE_BODY_POSITION | None = None, 12:04:17 preload_content: bool = True, 12:04:17 decode_content: bool = True, 12:04:17 **response_kw: typing.Any, 12:04:17 ) -> BaseHTTPResponse: 12:04:17 """ 12:04:17 Get a connection from the pool and perform an HTTP request. This is the 12:04:17 lowest level call for making a request, so you'll need to specify all 12:04:17 the raw details. 12:04:17 12:04:17 .. note:: 12:04:17 12:04:17 More commonly, it's appropriate to use a convenience method 12:04:17 such as :meth:`request`. 12:04:17 12:04:17 .. note:: 12:04:17 12:04:17 `release_conn` will only behave as expected if 12:04:17 `preload_content=False` because we want to make 12:04:17 `preload_content=False` the default behaviour someday soon without 12:04:17 breaking backwards compatibility. 12:04:17 12:04:17 :param method: 12:04:17 HTTP request method (such as GET, POST, PUT, etc.) 
12:04:17 12:04:17 :param url: 12:04:17 The URL to perform the request on. 12:04:17 12:04:17 :param body: 12:04:17 Data to send in the request body, either :class:`str`, :class:`bytes`, 12:04:17 an iterable of :class:`str`/:class:`bytes`, or a file-like object. 12:04:17 12:04:17 :param headers: 12:04:17 Dictionary of custom headers to send, such as User-Agent, 12:04:17 If-None-Match, etc. If None, pool headers are used. If provided, 12:04:17 these headers completely replace any pool-specific headers. 12:04:17 12:04:17 :param retries: 12:04:17 Configure the number of retries to allow before raising a 12:04:17 :class:`~urllib3.exceptions.MaxRetryError` exception. 12:04:17 12:04:17 If ``None`` (default) will retry 3 times, see ``Retry.DEFAULT``. Pass a 12:04:17 :class:`~urllib3.util.retry.Retry` object for fine-grained control 12:04:17 over different types of retries. 12:04:17 Pass an integer number to retry connection errors that many times, 12:04:17 but no other types of errors. Pass zero to never retry. 12:04:17 12:04:17 If ``False``, then retries are disabled and any exception is raised 12:04:17 immediately. Also, instead of raising a MaxRetryError on redirects, 12:04:17 the redirect response will be returned. 12:04:17 12:04:17 :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. 12:04:17 12:04:17 :param redirect: 12:04:17 If True, automatically handle redirects (status codes 301, 302, 12:04:17 303, 307, 308). Each redirect counts as a retry. Disabling retries 12:04:17 will disable redirect, too. 12:04:17 12:04:17 :param assert_same_host: 12:04:17 If ``True``, will make sure that the host of the pool requests is 12:04:17 consistent else will raise HostChangedError. When ``False``, you can 12:04:17 use the pool on an HTTP proxy and request foreign hosts. 12:04:17 12:04:17 :param timeout: 12:04:17 If specified, overrides the default timeout for this one 12:04:17 request. It may be a float (in seconds) or an instance of 12:04:17 :class:`urllib3.util.Timeout`. 12:04:17 12:04:17 :param pool_timeout: 12:04:17 If set and the pool is set to block=True, then this method will 12:04:17 block for ``pool_timeout`` seconds and raise EmptyPoolError if no 12:04:17 connection is available within the time period. 12:04:17 12:04:17 :param bool preload_content: 12:04:17 If True, the response's body will be preloaded into memory. 12:04:17 12:04:17 :param bool decode_content: 12:04:17 If True, will attempt to decode the body based on the 12:04:17 'content-encoding' header. 12:04:17 12:04:17 :param release_conn: 12:04:17 If False, then the urlopen call will not release the connection 12:04:17 back into the pool once a response is received (but will release if 12:04:17 you read the entire contents of the response such as when 12:04:17 `preload_content=True`). This is useful if you're not preloading 12:04:17 the response's content immediately. You will need to call 12:04:17 ``r.release_conn()`` on the response ``r`` to return the connection 12:04:17 back into the pool. If None, it takes the value of ``preload_content`` 12:04:17 which defaults to ``True``. 12:04:17 12:04:17 :param bool chunked: 12:04:17 If True, urllib3 will send the body using chunked transfer 12:04:17 encoding. Otherwise, urllib3 will send the body using the standard 12:04:17 content-length form. Defaults to False. 12:04:17 12:04:17 :param int body_pos: 12:04:17 Position to seek to in file-like body in the event of a retry or 12:04:17 redirect. 
Typically this won't need to be set because urllib3 will 12:04:17 auto-populate the value when needed. 12:04:17 """ 12:04:17 parsed_url = parse_url(url) 12:04:17 destination_scheme = parsed_url.scheme 12:04:17 12:04:17 if headers is None: 12:04:17 headers = self.headers 12:04:17 12:04:17 if not isinstance(retries, Retry): 12:04:17 retries = Retry.from_int(retries, redirect=redirect, default=self.retries) 12:04:17 12:04:17 if release_conn is None: 12:04:17 release_conn = preload_content 12:04:17 12:04:17 # Check host 12:04:17 if assert_same_host and not self.is_same_host(url): 12:04:17 raise HostChangedError(self, url, retries) 12:04:17 12:04:17 # Ensure that the URL we're connecting to is properly encoded 12:04:17 if url.startswith("/"): 12:04:17 url = to_str(_encode_target(url)) 12:04:17 else: 12:04:17 url = to_str(parsed_url.url) 12:04:17 12:04:17 conn = None 12:04:17 12:04:17 # Track whether `conn` needs to be released before 12:04:17 # returning/raising/recursing. Update this variable if necessary, and 12:04:17 # leave `release_conn` constant throughout the function. That way, if 12:04:17 # the function recurses, the original value of `release_conn` will be 12:04:17 # passed down into the recursive call, and its value will be respected. 12:04:17 # 12:04:17 # See issue #651 [1] for details. 12:04:17 # 12:04:17 # [1] 12:04:17 release_this_conn = release_conn 12:04:17 12:04:17 http_tunnel_required = connection_requires_http_tunnel( 12:04:17 self.proxy, self.proxy_config, destination_scheme 12:04:17 ) 12:04:17 12:04:17 # Merge the proxy headers. Only done when not using HTTP CONNECT. We 12:04:17 # have to copy the headers dict so we can safely change it without those 12:04:17 # changes being reflected in anyone else's copy. 12:04:17 if not http_tunnel_required: 12:04:17 headers = headers.copy() # type: ignore[attr-defined] 12:04:17 headers.update(self.proxy_headers) # type: ignore[union-attr] 12:04:17 12:04:17 # Must keep the exception bound to a separate variable or else Python 3 12:04:17 # complains about UnboundLocalError. 12:04:17 err = None 12:04:17 12:04:17 # Keep track of whether we cleanly exited the except block. This 12:04:17 # ensures we do proper cleanup in finally. 12:04:17 clean_exit = False 12:04:17 12:04:17 # Rewind body position, if needed. Record current position 12:04:17 # for future rewinds in the event of a redirect/retry. 12:04:17 body_pos = set_file_position(body, body_pos) 12:04:17 12:04:17 try: 12:04:17 # Request a connection from the queue. 12:04:17 timeout_obj = self._get_timeout(timeout) 12:04:17 conn = self._get_conn(timeout=pool_timeout) 12:04:17 12:04:17 conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] 12:04:17 12:04:17 # Is this a closed/new connection that requires CONNECT tunnelling? 12:04:17 if self.proxy is not None and http_tunnel_required and conn.is_closed: 12:04:17 try: 12:04:17 self._prepare_proxy(conn) 12:04:17 except (BaseSSLError, OSError, SocketTimeout) as e: 12:04:17 self._raise_timeout( 12:04:17 err=e, url=self.proxy.url, timeout_value=conn.timeout 12:04:17 ) 12:04:17 raise 12:04:17 12:04:17 # If we're going to release the connection in ``finally:``, then 12:04:17 # the response doesn't need to know about the connection. Otherwise 12:04:17 # it will also try to release it and we'll have a double-release 12:04:17 # mess. 
12:04:17 response_conn = conn if not release_conn else None 12:04:17 12:04:17 # Make the request on the HTTPConnection object 12:04:17 > response = self._make_request( 12:04:17 conn, 12:04:17 method, 12:04:17 url, 12:04:17 timeout=timeout_obj, 12:04:17 body=body, 12:04:17 headers=headers, 12:04:17 chunked=chunked, 12:04:17 retries=retries, 12:04:17 response_conn=response_conn, 12:04:17 preload_content=preload_content, 12:04:17 decode_content=decode_content, 12:04:17 **response_kw, 12:04:17 ) 12:04:17 12:04:17 ../.tox/tests221/lib/python3.11/site-packages/urllib3/connectionpool.py:787: 12:04:17 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 12:04:17 ../.tox/tests221/lib/python3.11/site-packages/urllib3/connectionpool.py:493: in _make_request 12:04:17 conn.request( 12:04:17 ../.tox/tests221/lib/python3.11/site-packages/urllib3/connection.py:494: in request 12:04:17 self.endheaders() 12:04:17 /opt/pyenv/versions/3.11.10/lib/python3.11/http/client.py:1298: in endheaders 12:04:17 self._send_output(message_body, encode_chunked=encode_chunked) 12:04:17 /opt/pyenv/versions/3.11.10/lib/python3.11/http/client.py:1058: in _send_output 12:04:17 self.send(msg) 12:04:17 /opt/pyenv/versions/3.11.10/lib/python3.11/http/client.py:996: in send 12:04:17 self.connect() 12:04:17 ../.tox/tests221/lib/python3.11/site-packages/urllib3/connection.py:325: in connect 12:04:17 self.sock = self._new_conn() 12:04:17 ^^^^^^^^^^^^^^^^ 12:04:17 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 12:04:17 12:04:17 self = 12:04:17 12:04:17 def _new_conn(self) -> socket.socket: 12:04:17 """Establish a socket connection and set nodelay settings on it. 12:04:17 12:04:17 :return: New socket connection. 12:04:17 """ 12:04:17 try: 12:04:17 sock = connection.create_connection( 12:04:17 (self._dns_host, self.port), 12:04:17 self.timeout, 12:04:17 source_address=self.source_address, 12:04:17 socket_options=self.socket_options, 12:04:17 ) 12:04:17 except socket.gaierror as e: 12:04:17 raise NameResolutionError(self.host, self, e) from e 12:04:17 except SocketTimeout as e: 12:04:17 raise ConnectTimeoutError( 12:04:17 self, 12:04:17 f"Connection to {self.host} timed out. (connect timeout={self.timeout})", 12:04:17 ) from e 12:04:17 12:04:17 except OSError as e: 12:04:17 > raise NewConnectionError( 12:04:17 self, f"Failed to establish a new connection: {e}" 12:04:17 ) from e 12:04:17 E urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused 12:04:17 12:04:17 ../.tox/tests221/lib/python3.11/site-packages/urllib3/connection.py:213: NewConnectionError 12:04:17 12:04:17 The above exception was the direct cause of the following exception: 12:04:17 12:04:17 self = 12:04:17 request = , stream = False 12:04:17 timeout = Timeout(connect=30, read=30, total=None), verify = True, cert = None 12:04:17 proxies = OrderedDict() 12:04:17 12:04:17 def send( 12:04:17 self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 12:04:17 ): 12:04:17 """Sends PreparedRequest object. Returns Response object. 12:04:17 12:04:17 :param request: The :class:`PreparedRequest ` being sent. 12:04:17 :param stream: (optional) Whether to stream the request content. 12:04:17 :param timeout: (optional) How long to wait for the server to send 12:04:17 data before giving up, as a float, or a :ref:`(connect timeout, 12:04:17 read timeout) ` tuple. 
12:04:17 :type timeout: float or tuple or urllib3 Timeout object 12:04:17 :param verify: (optional) Either a boolean, in which case it controls whether 12:04:17 we verify the server's TLS certificate, or a string, in which case it 12:04:17 must be a path to a CA bundle to use 12:04:17 :param cert: (optional) Any user-provided SSL certificate to be trusted. 12:04:17 :param proxies: (optional) The proxies dictionary to apply to the request. 12:04:17 :rtype: requests.Response 12:04:17 """ 12:04:17 12:04:17 try: 12:04:17 conn = self.get_connection_with_tls_context( 12:04:17 request, verify, proxies=proxies, cert=cert 12:04:17 ) 12:04:17 except LocationValueError as e: 12:04:17 raise InvalidURL(e, request=request) 12:04:17 12:04:17 self.cert_verify(conn, request.url, verify, cert) 12:04:17 url = self.request_url(request, proxies) 12:04:17 self.add_headers( 12:04:17 request, 12:04:17 stream=stream, 12:04:17 timeout=timeout, 12:04:17 verify=verify, 12:04:17 cert=cert, 12:04:17 proxies=proxies, 12:04:17 ) 12:04:17 12:04:17 chunked = not (request.body is None or "Content-Length" in request.headers) 12:04:17 12:04:17 if isinstance(timeout, tuple): 12:04:17 try: 12:04:17 connect, read = timeout 12:04:17 timeout = TimeoutSauce(connect=connect, read=read) 12:04:17 except ValueError: 12:04:17 raise ValueError( 12:04:17 f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 12:04:17 f"or a single float to set both timeouts to the same value." 12:04:17 ) 12:04:17 elif isinstance(timeout, TimeoutSauce): 12:04:17 pass 12:04:17 else: 12:04:17 timeout = TimeoutSauce(connect=timeout, read=timeout) 12:04:17 12:04:17 try: 12:04:17 > resp = conn.urlopen( 12:04:17 method=request.method, 12:04:17 url=url, 12:04:17 body=request.body, 12:04:17 headers=request.headers, 12:04:17 redirect=False, 12:04:17 assert_same_host=False, 12:04:17 preload_content=False, 12:04:17 decode_content=False, 12:04:17 retries=self.max_retries, 12:04:17 timeout=timeout, 12:04:17 chunked=chunked, 12:04:17 ) 12:04:17 12:04:17 ../.tox/tests221/lib/python3.11/site-packages/requests/adapters.py:667: 12:04:17 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 12:04:17 ../.tox/tests221/lib/python3.11/site-packages/urllib3/connectionpool.py:841: in urlopen 12:04:17 retries = retries.increment( 12:04:17 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 12:04:17 12:04:17 self = Retry(total=0, connect=None, read=False, redirect=None, status=None) 12:04:17 method = 'GET' 12:04:17 url = '/rests/data/transportpce-portmapping:network/nodes=XPDR-A1/mapping=XPDR1-CLIENT1' 12:04:17 response = None 12:04:17 error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused') 12:04:17 _pool = 12:04:17 _stacktrace = 12:04:17 12:04:17 def increment( 12:04:17 self, 12:04:17 method: str | None = None, 12:04:17 url: str | None = None, 12:04:17 response: BaseHTTPResponse | None = None, 12:04:17 error: Exception | None = None, 12:04:17 _pool: ConnectionPool | None = None, 12:04:17 _stacktrace: TracebackType | None = None, 12:04:17 ) -> Self: 12:04:17 """Return a new Retry object with incremented retry counters. 12:04:17 12:04:17 :param response: A response object, or None, if the server did not 12:04:17 return a response. 12:04:17 :type response: :class:`~urllib3.response.BaseHTTPResponse` 12:04:17 :param Exception error: An error encountered during the request, or 12:04:17 None if the response was received successfully. 
12:04:17 12:04:17 :return: A new ``Retry`` object. 12:04:17 """ 12:04:17 if self.total is False and error: 12:04:17 # Disabled, indicate to re-raise the error. 12:04:17 raise reraise(type(error), error, _stacktrace) 12:04:17 12:04:17 total = self.total 12:04:17 if total is not None: 12:04:17 total -= 1 12:04:17 12:04:17 connect = self.connect 12:04:17 read = self.read 12:04:17 redirect = self.redirect 12:04:17 status_count = self.status 12:04:17 other = self.other 12:04:17 cause = "unknown" 12:04:17 status = None 12:04:17 redirect_location = None 12:04:17 12:04:17 if error and self._is_connection_error(error): 12:04:17 # Connect retry? 12:04:17 if connect is False: 12:04:17 raise reraise(type(error), error, _stacktrace) 12:04:17 elif connect is not None: 12:04:17 connect -= 1 12:04:17 12:04:17 elif error and self._is_read_error(error): 12:04:17 # Read retry? 12:04:17 if read is False or method is None or not self._is_method_retryable(method): 12:04:17 raise reraise(type(error), error, _stacktrace) 12:04:17 elif read is not None: 12:04:17 read -= 1 12:04:17 12:04:17 elif error: 12:04:17 # Other retry? 12:04:17 if other is not None: 12:04:17 other -= 1 12:04:17 12:04:17 elif response and response.get_redirect_location(): 12:04:17 # Redirect retry? 12:04:17 if redirect is not None: 12:04:17 redirect -= 1 12:04:17 cause = "too many redirects" 12:04:17 response_redirect_location = response.get_redirect_location() 12:04:17 if response_redirect_location: 12:04:17 redirect_location = response_redirect_location 12:04:17 status = response.status 12:04:17 12:04:17 else: 12:04:17 # Incrementing because of a server error like a 500 in 12:04:17 # status_forcelist and the given method is in the allowed_methods 12:04:17 cause = ResponseError.GENERIC_ERROR 12:04:17 if response and response.status: 12:04:17 if status_count is not None: 12:04:17 status_count -= 1 12:04:17 cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status) 12:04:17 status = response.status 12:04:17 12:04:17 history = self.history + ( 12:04:17 RequestHistory(method, url, error, status, redirect_location), 12:04:17 ) 12:04:17 12:04:17 new_retry = self.new( 12:04:17 total=total, 12:04:17 connect=connect, 12:04:17 read=read, 12:04:17 redirect=redirect, 12:04:17 status=status_count, 12:04:17 other=other, 12:04:17 history=history, 12:04:17 ) 12:04:17 12:04:17 if new_retry.is_exhausted(): 12:04:17 reason = error or ResponseError(cause) 12:04:17 > raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type] 12:04:17 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 12:04:17 E urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=8183): Max retries exceeded with url: /rests/data/transportpce-portmapping:network/nodes=XPDR-A1/mapping=XPDR1-CLIENT1 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 12:04:17 12:04:17 ../.tox/tests221/lib/python3.11/site-packages/urllib3/util/retry.py:519: MaxRetryError 12:04:17 12:04:17 During handling of the above exception, another exception occurred: 12:04:17 12:04:17 self = 12:04:17 12:04:17 def test_13_xpdr_portmapping_XPDR1_CLIENT1(self): 12:04:17 > response = test_utils.get_portmapping_node_attr("XPDR-A1", "mapping", "XPDR1-CLIENT1") 12:04:17 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 12:04:17 12:04:17 transportpce_tests/2.2.1/test01_portmapping.py:156: 12:04:17 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 12:04:17 
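Not from the build: a sketch of the increment() path shown in the frames above, assuming the same urllib3 v2 Retry class. requests passes Retry(total=0, read=False), so the first connection error exhausts the budget and MaxRetryError is raised from the original NewConnectionError.

from urllib3.util.retry import Retry
from urllib3.exceptions import MaxRetryError, NewConnectionError

# Same counters as shown in the locals above (requests' default max_retries).
retries = Retry(total=0, connect=None, read=False, redirect=None, status=None)

# First argument is a placeholder for the HTTPConnection object whose repr the log strips.
err = NewConnectionError(None, "Failed to establish a new connection: [Errno 111] Connection refused")

try:
    retries.increment(
        method="GET",
        url="/rests/data/transportpce-portmapping:network/nodes=XPDR-A1/mapping=XPDR1-CLIENT1",
        error=err,
    )
except MaxRetryError as exc:
    print(exc.reason is err)  # True: total went 0 -> -1, so is_exhausted() triggered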
transportpce_tests/common/test_utils.py:492: in get_portmapping_node_attr 12:04:17 response = get_request(target_url) 12:04:17 ^^^^^^^^^^^^^^^^^^^^^^^ 12:04:17 transportpce_tests/common/test_utils.py:116: in get_request 12:04:17 return requests.request( 12:04:17 ../.tox/tests221/lib/python3.11/site-packages/requests/api.py:59: in request 12:04:17 return session.request(method=method, url=url, **kwargs) 12:04:17 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 12:04:17 ../.tox/tests221/lib/python3.11/site-packages/requests/sessions.py:589: in request 12:04:17 resp = self.send(prep, **send_kwargs) 12:04:17 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 12:04:17 ../.tox/tests221/lib/python3.11/site-packages/requests/sessions.py:703: in send 12:04:17 r = adapter.send(request, **kwargs) 12:04:17 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 12:04:17 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 12:04:17 12:04:17 self = 12:04:17 request = , stream = False 12:04:17 timeout = Timeout(connect=30, read=30, total=None), verify = True, cert = None 12:04:17 proxies = OrderedDict() 12:04:17 12:04:17 def send( 12:04:17 self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 12:04:17 ): 12:04:17 """Sends PreparedRequest object. Returns Response object. 12:04:17 12:04:17 :param request: The :class:`PreparedRequest ` being sent. 12:04:17 :param stream: (optional) Whether to stream the request content. 12:04:17 :param timeout: (optional) How long to wait for the server to send 12:04:17 data before giving up, as a float, or a :ref:`(connect timeout, 12:04:17 read timeout) ` tuple. 12:04:17 :type timeout: float or tuple or urllib3 Timeout object 12:04:17 :param verify: (optional) Either a boolean, in which case it controls whether 12:04:17 we verify the server's TLS certificate, or a string, in which case it 12:04:17 must be a path to a CA bundle to use 12:04:17 :param cert: (optional) Any user-provided SSL certificate to be trusted. 12:04:17 :param proxies: (optional) The proxies dictionary to apply to the request. 12:04:17 :rtype: requests.Response 12:04:17 """ 12:04:17 12:04:17 try: 12:04:17 conn = self.get_connection_with_tls_context( 12:04:17 request, verify, proxies=proxies, cert=cert 12:04:17 ) 12:04:17 except LocationValueError as e: 12:04:17 raise InvalidURL(e, request=request) 12:04:17 12:04:17 self.cert_verify(conn, request.url, verify, cert) 12:04:17 url = self.request_url(request, proxies) 12:04:17 self.add_headers( 12:04:17 request, 12:04:17 stream=stream, 12:04:17 timeout=timeout, 12:04:17 verify=verify, 12:04:17 cert=cert, 12:04:17 proxies=proxies, 12:04:17 ) 12:04:17 12:04:17 chunked = not (request.body is None or "Content-Length" in request.headers) 12:04:17 12:04:17 if isinstance(timeout, tuple): 12:04:17 try: 12:04:17 connect, read = timeout 12:04:17 timeout = TimeoutSauce(connect=connect, read=read) 12:04:17 except ValueError: 12:04:17 raise ValueError( 12:04:17 f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 12:04:17 f"or a single float to set both timeouts to the same value." 
12:04:17 ) 12:04:17 elif isinstance(timeout, TimeoutSauce): 12:04:17 pass 12:04:17 else: 12:04:17 timeout = TimeoutSauce(connect=timeout, read=timeout) 12:04:17 12:04:17 try: 12:04:17 resp = conn.urlopen( 12:04:17 method=request.method, 12:04:17 url=url, 12:04:17 body=request.body, 12:04:17 headers=request.headers, 12:04:17 redirect=False, 12:04:17 assert_same_host=False, 12:04:17 preload_content=False, 12:04:17 decode_content=False, 12:04:17 retries=self.max_retries, 12:04:17 timeout=timeout, 12:04:17 chunked=chunked, 12:04:17 ) 12:04:17 12:04:17 except (ProtocolError, OSError) as err: 12:04:17 raise ConnectionError(err, request=request) 12:04:17 12:04:17 except MaxRetryError as e: 12:04:17 if isinstance(e.reason, ConnectTimeoutError): 12:04:17 # TODO: Remove this in 3.0.0: see #2811 12:04:17 if not isinstance(e.reason, NewConnectionError): 12:04:17 raise ConnectTimeout(e, request=request) 12:04:17 12:04:17 if isinstance(e.reason, ResponseError): 12:04:17 raise RetryError(e, request=request) 12:04:17 12:04:17 if isinstance(e.reason, _ProxyError): 12:04:17 raise ProxyError(e, request=request) 12:04:17 12:04:17 if isinstance(e.reason, _SSLError): 12:04:17 # This branch is for urllib3 v1.22 and later. 12:04:17 raise SSLError(e, request=request) 12:04:17 12:04:17 > raise ConnectionError(e, request=request) 12:04:17 E requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=8183): Max retries exceeded with url: /rests/data/transportpce-portmapping:network/nodes=XPDR-A1/mapping=XPDR1-CLIENT1 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 12:04:17 12:04:17 ../.tox/tests221/lib/python3.11/site-packages/requests/adapters.py:700: ConnectionError 12:04:17 ----------------------------- Captured stdout call ----------------------------- 12:04:17 execution of test_13_xpdr_portmapping_XPDR1_CLIENT1 12:04:17 ____ TransportPCEPortMappingTesting.test_14_xpdr_portmapping_XPDR1_CLIENT2 _____ 12:04:17 12:04:17 self = 12:04:17 12:04:17 def _new_conn(self) -> socket.socket: 12:04:17 """Establish a socket connection and set nodelay settings on it. 12:04:17 12:04:17 :return: New socket connection. 12:04:17 """ 12:04:17 try: 12:04:17 > sock = connection.create_connection( 12:04:17 (self._dns_host, self.port), 12:04:17 self.timeout, 12:04:17 source_address=self.source_address, 12:04:17 socket_options=self.socket_options, 12:04:17 ) 12:04:17 12:04:17 ../.tox/tests221/lib/python3.11/site-packages/urllib3/connection.py:198: 12:04:17 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 12:04:17 ../.tox/tests221/lib/python3.11/site-packages/urllib3/util/connection.py:85: in create_connection 12:04:17 raise err 12:04:17 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 12:04:17 12:04:17 address = ('localhost', 8183), timeout = 30, source_address = None 12:04:17 socket_options = [(6, 1, 1)] 12:04:17 12:04:17 def create_connection( 12:04:17 address: tuple[str, int], 12:04:17 timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 12:04:17 source_address: tuple[str, int] | None = None, 12:04:17 socket_options: _TYPE_SOCKET_OPTIONS | None = None, 12:04:17 ) -> socket.socket: 12:04:17 """Connect to *address* and return the socket object. 12:04:17 12:04:17 Convenience function. Connect to *address* (a 2-tuple ``(host, 12:04:17 port)``) and return the socket object. Passing the optional 12:04:17 *timeout* parameter will set the timeout on the socket instance 12:04:17 before attempting to connect. 
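Not part of the log: the ConnectionRefusedError above comes from the raw socket connect inside create_connection(), so a quick stdlib probe (hypothetical helper, not from the test suite) can confirm whether anything is listening on localhost:8183.

import socket

def port_open(host: str, port: int, timeout: float = 5.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        # socket.create_connection() runs the same getaddrinfo()+connect() loop
        # shown in the urllib3 traceback above.
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        # ConnectionRefusedError ([Errno 111]) and timeouts both end up here.
        return False

print(port_open("localhost", 8183))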
If no *timeout* is supplied, the 12:04:17 global default timeout setting returned by :func:`socket.getdefaulttimeout` 12:04:17 is used. If *source_address* is set it must be a tuple of (host, port) 12:04:17 for the socket to bind as a source address before making the connection. 12:04:17 An host of '' or port 0 tells the OS to use the default. 12:04:17 """ 12:04:17 12:04:17 host, port = address 12:04:17 if host.startswith("["): 12:04:17 host = host.strip("[]") 12:04:17 err = None 12:04:17 12:04:17 # Using the value from allowed_gai_family() in the context of getaddrinfo lets 12:04:17 # us select whether to work with IPv4 DNS records, IPv6 records, or both. 12:04:17 # The original create_connection function always returns all records. 12:04:17 family = allowed_gai_family() 12:04:17 12:04:17 try: 12:04:17 host.encode("idna") 12:04:17 except UnicodeError: 12:04:17 raise LocationParseError(f"'{host}', label empty or too long") from None 12:04:17 12:04:17 for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM): 12:04:17 af, socktype, proto, canonname, sa = res 12:04:17 sock = None 12:04:17 try: 12:04:17 sock = socket.socket(af, socktype, proto) 12:04:17 12:04:17 # If provided, set socket level options before connecting. 12:04:17 _set_socket_options(sock, socket_options) 12:04:17 12:04:17 if timeout is not _DEFAULT_TIMEOUT: 12:04:17 sock.settimeout(timeout) 12:04:17 if source_address: 12:04:17 sock.bind(source_address) 12:04:17 > sock.connect(sa) 12:04:17 E ConnectionRefusedError: [Errno 111] Connection refused 12:04:17 12:04:17 ../.tox/tests221/lib/python3.11/site-packages/urllib3/util/connection.py:73: ConnectionRefusedError 12:04:17 12:04:17 The above exception was the direct cause of the following exception: 12:04:17 12:04:17 self = 12:04:17 method = 'GET' 12:04:17 url = '/rests/data/transportpce-portmapping:network/nodes=XPDR-A1/mapping=XPDR1-CLIENT2' 12:04:17 body = None 12:04:17 headers = {'User-Agent': 'python-requests/2.32.4', 'Accept-Encoding': 'gzip, deflate', 'Accept': 'application/json', 'Connection': 'keep-alive', 'Content-Type': 'application/json', 'Authorization': 'Basic YWRtaW46YWRtaW4='} 12:04:17 retries = Retry(total=0, connect=None, read=False, redirect=None, status=None) 12:04:17 redirect = False, assert_same_host = False 12:04:17 timeout = Timeout(connect=30, read=30, total=None), pool_timeout = None 12:04:17 release_conn = False, chunked = False, body_pos = None, preload_content = False 12:04:17 decode_content = False, response_kw = {} 12:04:17 parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/rests/data/transportpce-portmapping:network/nodes=XPDR-A1/mapping=XPDR1-CLIENT2', query=None, fragment=None) 12:04:17 destination_scheme = None, conn = None, release_this_conn = True 12:04:17 http_tunnel_required = False, err = None, clean_exit = False 12:04:17 12:04:17 def urlopen( # type: ignore[override] 12:04:17 self, 12:04:17 method: str, 12:04:17 url: str, 12:04:17 body: _TYPE_BODY | None = None, 12:04:17 headers: typing.Mapping[str, str] | None = None, 12:04:17 retries: Retry | bool | int | None = None, 12:04:17 redirect: bool = True, 12:04:17 assert_same_host: bool = True, 12:04:17 timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 12:04:17 pool_timeout: int | None = None, 12:04:17 release_conn: bool | None = None, 12:04:17 chunked: bool = False, 12:04:17 body_pos: _TYPE_BODY_POSITION | None = None, 12:04:17 preload_content: bool = True, 12:04:17 decode_content: bool = True, 12:04:17 **response_kw: typing.Any, 12:04:17 ) -> BaseHTTPResponse: 12:04:17 
""" 12:04:17 Get a connection from the pool and perform an HTTP request. This is the 12:04:17 lowest level call for making a request, so you'll need to specify all 12:04:17 the raw details. 12:04:17 12:04:17 .. note:: 12:04:17 12:04:17 More commonly, it's appropriate to use a convenience method 12:04:17 such as :meth:`request`. 12:04:17 12:04:17 .. note:: 12:04:17 12:04:17 `release_conn` will only behave as expected if 12:04:17 `preload_content=False` because we want to make 12:04:17 `preload_content=False` the default behaviour someday soon without 12:04:17 breaking backwards compatibility. 12:04:17 12:04:17 :param method: 12:04:17 HTTP request method (such as GET, POST, PUT, etc.) 12:04:17 12:04:17 :param url: 12:04:17 The URL to perform the request on. 12:04:17 12:04:17 :param body: 12:04:17 Data to send in the request body, either :class:`str`, :class:`bytes`, 12:04:17 an iterable of :class:`str`/:class:`bytes`, or a file-like object. 12:04:17 12:04:17 :param headers: 12:04:17 Dictionary of custom headers to send, such as User-Agent, 12:04:17 If-None-Match, etc. If None, pool headers are used. If provided, 12:04:17 these headers completely replace any pool-specific headers. 12:04:17 12:04:17 :param retries: 12:04:17 Configure the number of retries to allow before raising a 12:04:17 :class:`~urllib3.exceptions.MaxRetryError` exception. 12:04:17 12:04:17 If ``None`` (default) will retry 3 times, see ``Retry.DEFAULT``. Pass a 12:04:17 :class:`~urllib3.util.retry.Retry` object for fine-grained control 12:04:17 over different types of retries. 12:04:17 Pass an integer number to retry connection errors that many times, 12:04:17 but no other types of errors. Pass zero to never retry. 12:04:17 12:04:17 If ``False``, then retries are disabled and any exception is raised 12:04:17 immediately. Also, instead of raising a MaxRetryError on redirects, 12:04:17 the redirect response will be returned. 12:04:17 12:04:17 :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. 12:04:17 12:04:17 :param redirect: 12:04:17 If True, automatically handle redirects (status codes 301, 302, 12:04:17 303, 307, 308). Each redirect counts as a retry. Disabling retries 12:04:17 will disable redirect, too. 12:04:17 12:04:17 :param assert_same_host: 12:04:17 If ``True``, will make sure that the host of the pool requests is 12:04:17 consistent else will raise HostChangedError. When ``False``, you can 12:04:17 use the pool on an HTTP proxy and request foreign hosts. 12:04:17 12:04:17 :param timeout: 12:04:17 If specified, overrides the default timeout for this one 12:04:17 request. It may be a float (in seconds) or an instance of 12:04:17 :class:`urllib3.util.Timeout`. 12:04:17 12:04:17 :param pool_timeout: 12:04:17 If set and the pool is set to block=True, then this method will 12:04:17 block for ``pool_timeout`` seconds and raise EmptyPoolError if no 12:04:17 connection is available within the time period. 12:04:17 12:04:17 :param bool preload_content: 12:04:17 If True, the response's body will be preloaded into memory. 12:04:17 12:04:17 :param bool decode_content: 12:04:17 If True, will attempt to decode the body based on the 12:04:17 'content-encoding' header. 12:04:17 12:04:17 :param release_conn: 12:04:17 If False, then the urlopen call will not release the connection 12:04:17 back into the pool once a response is received (but will release if 12:04:17 you read the entire contents of the response such as when 12:04:17 `preload_content=True`). 
This is useful if you're not preloading 12:04:17 the response's content immediately. You will need to call 12:04:17 ``r.release_conn()`` on the response ``r`` to return the connection 12:04:17 back into the pool. If None, it takes the value of ``preload_content`` 12:04:17 which defaults to ``True``. 12:04:17 12:04:17 :param bool chunked: 12:04:17 If True, urllib3 will send the body using chunked transfer 12:04:17 encoding. Otherwise, urllib3 will send the body using the standard 12:04:17 content-length form. Defaults to False. 12:04:17 12:04:17 :param int body_pos: 12:04:17 Position to seek to in file-like body in the event of a retry or 12:04:17 redirect. Typically this won't need to be set because urllib3 will 12:04:17 auto-populate the value when needed. 12:04:17 """ 12:04:17 parsed_url = parse_url(url) 12:04:17 destination_scheme = parsed_url.scheme 12:04:17 12:04:17 if headers is None: 12:04:17 headers = self.headers 12:04:17 12:04:17 if not isinstance(retries, Retry): 12:04:17 retries = Retry.from_int(retries, redirect=redirect, default=self.retries) 12:04:17 12:04:17 if release_conn is None: 12:04:17 release_conn = preload_content 12:04:17 12:04:17 # Check host 12:04:17 if assert_same_host and not self.is_same_host(url): 12:04:17 raise HostChangedError(self, url, retries) 12:04:17 12:04:17 # Ensure that the URL we're connecting to is properly encoded 12:04:17 if url.startswith("/"): 12:04:17 url = to_str(_encode_target(url)) 12:04:17 else: 12:04:17 url = to_str(parsed_url.url) 12:04:17 12:04:17 conn = None 12:04:17 12:04:17 # Track whether `conn` needs to be released before 12:04:17 # returning/raising/recursing. Update this variable if necessary, and 12:04:17 # leave `release_conn` constant throughout the function. That way, if 12:04:17 # the function recurses, the original value of `release_conn` will be 12:04:17 # passed down into the recursive call, and its value will be respected. 12:04:17 # 12:04:17 # See issue #651 [1] for details. 12:04:17 # 12:04:17 # [1] 12:04:17 release_this_conn = release_conn 12:04:17 12:04:17 http_tunnel_required = connection_requires_http_tunnel( 12:04:17 self.proxy, self.proxy_config, destination_scheme 12:04:17 ) 12:04:17 12:04:17 # Merge the proxy headers. Only done when not using HTTP CONNECT. We 12:04:17 # have to copy the headers dict so we can safely change it without those 12:04:17 # changes being reflected in anyone else's copy. 12:04:17 if not http_tunnel_required: 12:04:17 headers = headers.copy() # type: ignore[attr-defined] 12:04:17 headers.update(self.proxy_headers) # type: ignore[union-attr] 12:04:17 12:04:17 # Must keep the exception bound to a separate variable or else Python 3 12:04:17 # complains about UnboundLocalError. 12:04:17 err = None 12:04:17 12:04:17 # Keep track of whether we cleanly exited the except block. This 12:04:17 # ensures we do proper cleanup in finally. 12:04:17 clean_exit = False 12:04:17 12:04:17 # Rewind body position, if needed. Record current position 12:04:17 # for future rewinds in the event of a redirect/retry. 12:04:17 body_pos = set_file_position(body, body_pos) 12:04:17 12:04:17 try: 12:04:17 # Request a connection from the queue. 12:04:17 timeout_obj = self._get_timeout(timeout) 12:04:17 conn = self._get_conn(timeout=pool_timeout) 12:04:17 12:04:17 conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] 12:04:17 12:04:17 # Is this a closed/new connection that requires CONNECT tunnelling? 
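A sketch of the preload_content/release_conn interaction described in the docstring above, assuming a reachable controller and the urllib3 v2 API; it is not taken from the test code.

import urllib3

http = urllib3.PoolManager()

# preload_content=False streams the body; release_conn then defaults to False,
# so the connection is not handed back to the pool automatically.
r = http.request(
    "GET",
    "http://localhost:8183/rests/data/transportpce-portmapping:network/nodes=XPDR-A1",
    preload_content=False,
)
data = r.read()     # consume the body
r.release_conn()    # return the connection to the pool, as the docstring advises
print(len(data))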
12:04:17 if self.proxy is not None and http_tunnel_required and conn.is_closed: 12:04:17 try: 12:04:17 self._prepare_proxy(conn) 12:04:17 except (BaseSSLError, OSError, SocketTimeout) as e: 12:04:17 self._raise_timeout( 12:04:17 err=e, url=self.proxy.url, timeout_value=conn.timeout 12:04:17 ) 12:04:17 raise 12:04:17 12:04:17 # If we're going to release the connection in ``finally:``, then 12:04:17 # the response doesn't need to know about the connection. Otherwise 12:04:17 # it will also try to release it and we'll have a double-release 12:04:17 # mess. 12:04:17 response_conn = conn if not release_conn else None 12:04:17 12:04:17 # Make the request on the HTTPConnection object 12:04:17 > response = self._make_request( 12:04:17 conn, 12:04:17 method, 12:04:17 url, 12:04:17 timeout=timeout_obj, 12:04:17 body=body, 12:04:17 headers=headers, 12:04:17 chunked=chunked, 12:04:17 retries=retries, 12:04:17 response_conn=response_conn, 12:04:17 preload_content=preload_content, 12:04:17 decode_content=decode_content, 12:04:17 **response_kw, 12:04:17 ) 12:04:17 12:04:17 ../.tox/tests221/lib/python3.11/site-packages/urllib3/connectionpool.py:787: 12:04:17 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 12:04:17 ../.tox/tests221/lib/python3.11/site-packages/urllib3/connectionpool.py:493: in _make_request 12:04:17 conn.request( 12:04:17 ../.tox/tests221/lib/python3.11/site-packages/urllib3/connection.py:494: in request 12:04:17 self.endheaders() 12:04:17 /opt/pyenv/versions/3.11.10/lib/python3.11/http/client.py:1298: in endheaders 12:04:17 self._send_output(message_body, encode_chunked=encode_chunked) 12:04:17 /opt/pyenv/versions/3.11.10/lib/python3.11/http/client.py:1058: in _send_output 12:04:17 self.send(msg) 12:04:17 /opt/pyenv/versions/3.11.10/lib/python3.11/http/client.py:996: in send 12:04:17 self.connect() 12:04:17 ../.tox/tests221/lib/python3.11/site-packages/urllib3/connection.py:325: in connect 12:04:17 self.sock = self._new_conn() 12:04:17 ^^^^^^^^^^^^^^^^ 12:04:17 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 12:04:17 12:04:17 self = 12:04:17 12:04:17 def _new_conn(self) -> socket.socket: 12:04:17 """Establish a socket connection and set nodelay settings on it. 12:04:17 12:04:17 :return: New socket connection. 12:04:17 """ 12:04:17 try: 12:04:17 sock = connection.create_connection( 12:04:17 (self._dns_host, self.port), 12:04:17 self.timeout, 12:04:17 source_address=self.source_address, 12:04:17 socket_options=self.socket_options, 12:04:17 ) 12:04:17 except socket.gaierror as e: 12:04:17 raise NameResolutionError(self.host, self, e) from e 12:04:17 except SocketTimeout as e: 12:04:17 raise ConnectTimeoutError( 12:04:17 self, 12:04:17 f"Connection to {self.host} timed out. 
(connect timeout={self.timeout})", 12:04:17 ) from e 12:04:17 12:04:17 except OSError as e: 12:04:17 > raise NewConnectionError( 12:04:17 self, f"Failed to establish a new connection: {e}" 12:04:17 ) from e 12:04:17 E urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused 12:04:17 12:04:17 ../.tox/tests221/lib/python3.11/site-packages/urllib3/connection.py:213: NewConnectionError 12:04:17 12:04:17 The above exception was the direct cause of the following exception: 12:04:17 12:04:17 self = 12:04:17 request = , stream = False 12:04:17 timeout = Timeout(connect=30, read=30, total=None), verify = True, cert = None 12:04:17 proxies = OrderedDict() 12:04:17 12:04:17 def send( 12:04:17 self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 12:04:17 ): 12:04:17 """Sends PreparedRequest object. Returns Response object. 12:04:17 12:04:17 :param request: The :class:`PreparedRequest ` being sent. 12:04:17 :param stream: (optional) Whether to stream the request content. 12:04:17 :param timeout: (optional) How long to wait for the server to send 12:04:17 data before giving up, as a float, or a :ref:`(connect timeout, 12:04:17 read timeout) ` tuple. 12:04:17 :type timeout: float or tuple or urllib3 Timeout object 12:04:17 :param verify: (optional) Either a boolean, in which case it controls whether 12:04:17 we verify the server's TLS certificate, or a string, in which case it 12:04:17 must be a path to a CA bundle to use 12:04:17 :param cert: (optional) Any user-provided SSL certificate to be trusted. 12:04:17 :param proxies: (optional) The proxies dictionary to apply to the request. 12:04:17 :rtype: requests.Response 12:04:17 """ 12:04:17 12:04:17 try: 12:04:17 conn = self.get_connection_with_tls_context( 12:04:17 request, verify, proxies=proxies, cert=cert 12:04:17 ) 12:04:17 except LocationValueError as e: 12:04:17 raise InvalidURL(e, request=request) 12:04:17 12:04:17 self.cert_verify(conn, request.url, verify, cert) 12:04:17 url = self.request_url(request, proxies) 12:04:17 self.add_headers( 12:04:17 request, 12:04:17 stream=stream, 12:04:17 timeout=timeout, 12:04:17 verify=verify, 12:04:17 cert=cert, 12:04:17 proxies=proxies, 12:04:17 ) 12:04:17 12:04:17 chunked = not (request.body is None or "Content-Length" in request.headers) 12:04:17 12:04:17 if isinstance(timeout, tuple): 12:04:17 try: 12:04:17 connect, read = timeout 12:04:17 timeout = TimeoutSauce(connect=connect, read=read) 12:04:17 except ValueError: 12:04:17 raise ValueError( 12:04:17 f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 12:04:17 f"or a single float to set both timeouts to the same value." 
12:04:17 ) 12:04:17 elif isinstance(timeout, TimeoutSauce): 12:04:17 pass 12:04:17 else: 12:04:17 timeout = TimeoutSauce(connect=timeout, read=timeout) 12:04:17 12:04:17 try: 12:04:17 > resp = conn.urlopen( 12:04:17 method=request.method, 12:04:17 url=url, 12:04:17 body=request.body, 12:04:17 headers=request.headers, 12:04:17 redirect=False, 12:04:17 assert_same_host=False, 12:04:17 preload_content=False, 12:04:17 decode_content=False, 12:04:17 retries=self.max_retries, 12:04:17 timeout=timeout, 12:04:17 chunked=chunked, 12:04:17 ) 12:04:17 12:04:17 ../.tox/tests221/lib/python3.11/site-packages/requests/adapters.py:667: 12:04:17 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 12:04:17 ../.tox/tests221/lib/python3.11/site-packages/urllib3/connectionpool.py:841: in urlopen 12:04:17 retries = retries.increment( 12:04:17 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 12:04:17 12:04:17 self = Retry(total=0, connect=None, read=False, redirect=None, status=None) 12:04:17 method = 'GET' 12:04:17 url = '/rests/data/transportpce-portmapping:network/nodes=XPDR-A1/mapping=XPDR1-CLIENT2' 12:04:17 response = None 12:04:17 error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused') 12:04:17 _pool = 12:04:17 _stacktrace = 12:04:17 12:04:17 def increment( 12:04:17 self, 12:04:17 method: str | None = None, 12:04:17 url: str | None = None, 12:04:17 response: BaseHTTPResponse | None = None, 12:04:17 error: Exception | None = None, 12:04:17 _pool: ConnectionPool | None = None, 12:04:17 _stacktrace: TracebackType | None = None, 12:04:17 ) -> Self: 12:04:17 """Return a new Retry object with incremented retry counters. 12:04:17 12:04:17 :param response: A response object, or None, if the server did not 12:04:17 return a response. 12:04:17 :type response: :class:`~urllib3.response.BaseHTTPResponse` 12:04:17 :param Exception error: An error encountered during the request, or 12:04:17 None if the response was received successfully. 12:04:17 12:04:17 :return: A new ``Retry`` object. 12:04:17 """ 12:04:17 if self.total is False and error: 12:04:17 # Disabled, indicate to re-raise the error. 12:04:17 raise reraise(type(error), error, _stacktrace) 12:04:17 12:04:17 total = self.total 12:04:17 if total is not None: 12:04:17 total -= 1 12:04:17 12:04:17 connect = self.connect 12:04:17 read = self.read 12:04:17 redirect = self.redirect 12:04:17 status_count = self.status 12:04:17 other = self.other 12:04:17 cause = "unknown" 12:04:17 status = None 12:04:17 redirect_location = None 12:04:17 12:04:17 if error and self._is_connection_error(error): 12:04:17 # Connect retry? 12:04:17 if connect is False: 12:04:17 raise reraise(type(error), error, _stacktrace) 12:04:17 elif connect is not None: 12:04:17 connect -= 1 12:04:17 12:04:17 elif error and self._is_read_error(error): 12:04:17 # Read retry? 12:04:17 if read is False or method is None or not self._is_method_retryable(method): 12:04:17 raise reraise(type(error), error, _stacktrace) 12:04:17 elif read is not None: 12:04:17 read -= 1 12:04:17 12:04:17 elif error: 12:04:17 # Other retry? 12:04:17 if other is not None: 12:04:17 other -= 1 12:04:17 12:04:17 elif response and response.get_redirect_location(): 12:04:17 # Redirect retry? 
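Not in the log: a sketch of why requests sets read=False on its Retry (visible in the locals above). With read=False the branch shown here re-raises a read error immediately instead of retrying, so a request that may already have reached the server is not replayed. Assumes the urllib3 v2 exception signatures.

from urllib3.util.retry import Retry
from urllib3.exceptions import ReadTimeoutError

retries = Retry(total=3, read=False)

try:
    # _is_read_error() matches ReadTimeoutError, and read=False short-circuits
    # straight into reraise() without decrementing any counter.
    retries.increment(method="GET", url="/", error=ReadTimeoutError(None, "/", "Read timed out."))
except ReadTimeoutError:
    print("read errors are not retried when read=False")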
12:04:17 if redirect is not None: 12:04:17 redirect -= 1 12:04:17 cause = "too many redirects" 12:04:17 response_redirect_location = response.get_redirect_location() 12:04:17 if response_redirect_location: 12:04:17 redirect_location = response_redirect_location 12:04:17 status = response.status 12:04:17 12:04:17 else: 12:04:17 # Incrementing because of a server error like a 500 in 12:04:17 # status_forcelist and the given method is in the allowed_methods 12:04:17 cause = ResponseError.GENERIC_ERROR 12:04:17 if response and response.status: 12:04:17 if status_count is not None: 12:04:17 status_count -= 1 12:04:17 cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status) 12:04:17 status = response.status 12:04:17 12:04:17 history = self.history + ( 12:04:17 RequestHistory(method, url, error, status, redirect_location), 12:04:17 ) 12:04:17 12:04:17 new_retry = self.new( 12:04:17 total=total, 12:04:17 connect=connect, 12:04:17 read=read, 12:04:17 redirect=redirect, 12:04:17 status=status_count, 12:04:17 other=other, 12:04:17 history=history, 12:04:17 ) 12:04:17 12:04:17 if new_retry.is_exhausted(): 12:04:17 reason = error or ResponseError(cause) 12:04:17 > raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type] 12:04:17 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 12:04:17 E urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=8183): Max retries exceeded with url: /rests/data/transportpce-portmapping:network/nodes=XPDR-A1/mapping=XPDR1-CLIENT2 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 12:04:17 12:04:17 ../.tox/tests221/lib/python3.11/site-packages/urllib3/util/retry.py:519: MaxRetryError 12:04:17 12:04:17 During handling of the above exception, another exception occurred: 12:04:17 12:04:17 self = 12:04:17 12:04:17 def test_14_xpdr_portmapping_XPDR1_CLIENT2(self): 12:04:17 > response = test_utils.get_portmapping_node_attr("XPDR-A1", "mapping", "XPDR1-CLIENT2") 12:04:17 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 12:04:17 12:04:17 transportpce_tests/2.2.1/test01_portmapping.py:169: 12:04:17 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 12:04:17 transportpce_tests/common/test_utils.py:492: in get_portmapping_node_attr 12:04:17 response = get_request(target_url) 12:04:17 ^^^^^^^^^^^^^^^^^^^^^^^ 12:04:17 transportpce_tests/common/test_utils.py:116: in get_request 12:04:17 return requests.request( 12:04:17 ../.tox/tests221/lib/python3.11/site-packages/requests/api.py:59: in request 12:04:17 return session.request(method=method, url=url, **kwargs) 12:04:17 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 12:04:17 ../.tox/tests221/lib/python3.11/site-packages/requests/sessions.py:589: in request 12:04:17 resp = self.send(prep, **send_kwargs) 12:04:17 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 12:04:17 ../.tox/tests221/lib/python3.11/site-packages/requests/sessions.py:703: in send 12:04:17 r = adapter.send(request, **kwargs) 12:04:17 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 12:04:17 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 12:04:17 12:04:17 self = 12:04:17 request = , stream = False 12:04:17 timeout = Timeout(connect=30, read=30, total=None), verify = True, cert = None 12:04:17 proxies = OrderedDict() 12:04:17 12:04:17 def send( 12:04:17 self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 12:04:17 ): 12:04:17 """Sends PreparedRequest object. 
Returns Response object. 12:04:17 12:04:17 :param request: The :class:`PreparedRequest ` being sent. 12:04:17 :param stream: (optional) Whether to stream the request content. 12:04:17 :param timeout: (optional) How long to wait for the server to send 12:04:17 data before giving up, as a float, or a :ref:`(connect timeout, 12:04:17 read timeout) ` tuple. 12:04:17 :type timeout: float or tuple or urllib3 Timeout object 12:04:17 :param verify: (optional) Either a boolean, in which case it controls whether 12:04:17 we verify the server's TLS certificate, or a string, in which case it 12:04:17 must be a path to a CA bundle to use 12:04:17 :param cert: (optional) Any user-provided SSL certificate to be trusted. 12:04:17 :param proxies: (optional) The proxies dictionary to apply to the request. 12:04:17 :rtype: requests.Response 12:04:17 """ 12:04:17 12:04:17 try: 12:04:17 conn = self.get_connection_with_tls_context( 12:04:17 request, verify, proxies=proxies, cert=cert 12:04:17 ) 12:04:17 except LocationValueError as e: 12:04:17 raise InvalidURL(e, request=request) 12:04:17 12:04:17 self.cert_verify(conn, request.url, verify, cert) 12:04:17 url = self.request_url(request, proxies) 12:04:17 self.add_headers( 12:04:17 request, 12:04:17 stream=stream, 12:04:17 timeout=timeout, 12:04:17 verify=verify, 12:04:17 cert=cert, 12:04:17 proxies=proxies, 12:04:17 ) 12:04:17 12:04:17 chunked = not (request.body is None or "Content-Length" in request.headers) 12:04:17 12:04:17 if isinstance(timeout, tuple): 12:04:17 try: 12:04:17 connect, read = timeout 12:04:17 timeout = TimeoutSauce(connect=connect, read=read) 12:04:17 except ValueError: 12:04:17 raise ValueError( 12:04:17 f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 12:04:17 f"or a single float to set both timeouts to the same value." 12:04:17 ) 12:04:17 elif isinstance(timeout, TimeoutSauce): 12:04:17 pass 12:04:17 else: 12:04:17 timeout = TimeoutSauce(connect=timeout, read=timeout) 12:04:17 12:04:17 try: 12:04:17 resp = conn.urlopen( 12:04:17 method=request.method, 12:04:17 url=url, 12:04:17 body=request.body, 12:04:17 headers=request.headers, 12:04:17 redirect=False, 12:04:17 assert_same_host=False, 12:04:17 preload_content=False, 12:04:17 decode_content=False, 12:04:17 retries=self.max_retries, 12:04:17 timeout=timeout, 12:04:17 chunked=chunked, 12:04:17 ) 12:04:17 12:04:17 except (ProtocolError, OSError) as err: 12:04:17 raise ConnectionError(err, request=request) 12:04:17 12:04:17 except MaxRetryError as e: 12:04:17 if isinstance(e.reason, ConnectTimeoutError): 12:04:17 # TODO: Remove this in 3.0.0: see #2811 12:04:17 if not isinstance(e.reason, NewConnectionError): 12:04:17 raise ConnectTimeout(e, request=request) 12:04:17 12:04:17 if isinstance(e.reason, ResponseError): 12:04:17 raise RetryError(e, request=request) 12:04:17 12:04:17 if isinstance(e.reason, _ProxyError): 12:04:17 raise ProxyError(e, request=request) 12:04:17 12:04:17 if isinstance(e.reason, _SSLError): 12:04:17 # This branch is for urllib3 v1.22 and later. 
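Not from the job: a minimal reproduction of the failing call as seen from the requests side, using the endpoint, Basic credentials (admin/admin) and the 30 s connect/read timeout visible in the captured headers and locals; a refused connection falls through the branches just shown and surfaces as requests.exceptions.ConnectionError.

import requests

URL = ("http://localhost:8183/rests/data/transportpce-portmapping:"
       "network/nodes=XPDR-A1/mapping=XPDR1-CLIENT2")

try:
    resp = requests.get(
        URL,
        auth=("admin", "admin"),  # 'Basic YWRtaW46YWRtaW4=' in the logged headers
        headers={"Content-Type": "application/json", "Accept": "application/json"},
        timeout=(30, 30),         # Timeout(connect=30, read=30) in the locals above
    )
    print(resp.status_code)
except requests.exceptions.ConnectionError as exc:
    # NewConnectionError -> MaxRetryError -> ConnectionError (last branch above)
    print(f"controller not reachable: {exc}")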
12:04:17 raise SSLError(e, request=request) 12:04:17 12:04:17 > raise ConnectionError(e, request=request) 12:04:17 E requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=8183): Max retries exceeded with url: /rests/data/transportpce-portmapping:network/nodes=XPDR-A1/mapping=XPDR1-CLIENT2 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 12:04:17 12:04:17 ../.tox/tests221/lib/python3.11/site-packages/requests/adapters.py:700: ConnectionError 12:04:17 ----------------------------- Captured stdout call ----------------------------- 12:04:17 execution of test_14_xpdr_portmapping_XPDR1_CLIENT2 12:04:17 ________ TransportPCEPortMappingTesting.test_15_spdr_device_connection _________ 12:04:17 12:04:17 self = 12:04:17 12:04:17 def _new_conn(self) -> socket.socket: 12:04:17 """Establish a socket connection and set nodelay settings on it. 12:04:17 12:04:17 :return: New socket connection. 12:04:17 """ 12:04:17 try: 12:04:17 > sock = connection.create_connection( 12:04:17 (self._dns_host, self.port), 12:04:17 self.timeout, 12:04:17 source_address=self.source_address, 12:04:17 socket_options=self.socket_options, 12:04:17 ) 12:04:17 12:04:17 ../.tox/tests221/lib/python3.11/site-packages/urllib3/connection.py:198: 12:04:17 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 12:04:17 ../.tox/tests221/lib/python3.11/site-packages/urllib3/util/connection.py:85: in create_connection 12:04:17 raise err 12:04:17 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 12:04:17 12:04:17 address = ('localhost', 8183), timeout = 30, source_address = None 12:04:17 socket_options = [(6, 1, 1)] 12:04:17 12:04:17 def create_connection( 12:04:17 address: tuple[str, int], 12:04:17 timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 12:04:17 source_address: tuple[str, int] | None = None, 12:04:17 socket_options: _TYPE_SOCKET_OPTIONS | None = None, 12:04:17 ) -> socket.socket: 12:04:17 """Connect to *address* and return the socket object. 12:04:17 12:04:17 Convenience function. Connect to *address* (a 2-tuple ``(host, 12:04:17 port)``) and return the socket object. Passing the optional 12:04:17 *timeout* parameter will set the timeout on the socket instance 12:04:17 before attempting to connect. If no *timeout* is supplied, the 12:04:17 global default timeout setting returned by :func:`socket.getdefaulttimeout` 12:04:17 is used. If *source_address* is set it must be a tuple of (host, port) 12:04:17 for the socket to bind as a source address before making the connection. 12:04:17 An host of '' or port 0 tells the OS to use the default. 12:04:17 """ 12:04:17 12:04:17 host, port = address 12:04:17 if host.startswith("["): 12:04:17 host = host.strip("[]") 12:04:17 err = None 12:04:17 12:04:17 # Using the value from allowed_gai_family() in the context of getaddrinfo lets 12:04:17 # us select whether to work with IPv4 DNS records, IPv6 records, or both. 12:04:17 # The original create_connection function always returns all records. 
12:04:17 family = allowed_gai_family() 12:04:17 12:04:17 try: 12:04:17 host.encode("idna") 12:04:17 except UnicodeError: 12:04:17 raise LocationParseError(f"'{host}', label empty or too long") from None 12:04:17 12:04:17 for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM): 12:04:17 af, socktype, proto, canonname, sa = res 12:04:17 sock = None 12:04:17 try: 12:04:17 sock = socket.socket(af, socktype, proto) 12:04:17 12:04:17 # If provided, set socket level options before connecting. 12:04:17 _set_socket_options(sock, socket_options) 12:04:17 12:04:17 if timeout is not _DEFAULT_TIMEOUT: 12:04:17 sock.settimeout(timeout) 12:04:17 if source_address: 12:04:17 sock.bind(source_address) 12:04:17 > sock.connect(sa) 12:04:17 E ConnectionRefusedError: [Errno 111] Connection refused 12:04:17 12:04:17 ../.tox/tests221/lib/python3.11/site-packages/urllib3/util/connection.py:73: ConnectionRefusedError 12:04:17 12:04:17 The above exception was the direct cause of the following exception: 12:04:17 12:04:17 self = 12:04:17 method = 'PUT' 12:04:17 url = '/rests/data/network-topology:network-topology/topology=topology-netconf/node=SPDR-SA1' 12:04:17 body = '{"node": [{"node-id": "SPDR-SA1", "netconf-node-topology:netconf-node": {"netconf-node-topology:host": "127.0.0.1", "...ff-millis": 1800000, "netconf-node-topology:backoff-multiplier": 1.5, "netconf-node-topology:keepalive-delay": 120}}]}' 12:04:17 headers = {'User-Agent': 'python-requests/2.32.4', 'Accept-Encoding': 'gzip, deflate', 'Accept': 'application/json', 'Connection': 'keep-alive', 'Content-Type': 'application/json', 'Content-Length': '710', 'Authorization': 'Basic YWRtaW46YWRtaW4='} 12:04:17 retries = Retry(total=0, connect=None, read=False, redirect=None, status=None) 12:04:17 redirect = False, assert_same_host = False 12:04:17 timeout = Timeout(connect=30, read=30, total=None), pool_timeout = None 12:04:17 release_conn = False, chunked = False, body_pos = None, preload_content = False 12:04:17 decode_content = False, response_kw = {} 12:04:17 parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/rests/data/network-topology:network-topology/topology=topology-netconf/node=SPDR-SA1', query=None, fragment=None) 12:04:17 destination_scheme = None, conn = None, release_this_conn = True 12:04:17 http_tunnel_required = False, err = None, clean_exit = False 12:04:17 12:04:17 def urlopen( # type: ignore[override] 12:04:17 self, 12:04:17 method: str, 12:04:17 url: str, 12:04:17 body: _TYPE_BODY | None = None, 12:04:17 headers: typing.Mapping[str, str] | None = None, 12:04:17 retries: Retry | bool | int | None = None, 12:04:17 redirect: bool = True, 12:04:17 assert_same_host: bool = True, 12:04:17 timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 12:04:17 pool_timeout: int | None = None, 12:04:17 release_conn: bool | None = None, 12:04:17 chunked: bool = False, 12:04:17 body_pos: _TYPE_BODY_POSITION | None = None, 12:04:17 preload_content: bool = True, 12:04:17 decode_content: bool = True, 12:04:17 **response_kw: typing.Any, 12:04:17 ) -> BaseHTTPResponse: 12:04:17 """ 12:04:17 Get a connection from the pool and perform an HTTP request. This is the 12:04:17 lowest level call for making a request, so you'll need to specify all 12:04:17 the raw details. 12:04:17 12:04:17 .. note:: 12:04:17 12:04:17 More commonly, it's appropriate to use a convenience method 12:04:17 such as :meth:`request`. 12:04:17 12:04:17 .. 
note:: 12:04:17 12:04:17 `release_conn` will only behave as expected if 12:04:17 `preload_content=False` because we want to make 12:04:17 `preload_content=False` the default behaviour someday soon without 12:04:17 breaking backwards compatibility. 12:04:17 12:04:17 :param method: 12:04:17 HTTP request method (such as GET, POST, PUT, etc.) 12:04:17 12:04:17 :param url: 12:04:17 The URL to perform the request on. 12:04:17 12:04:17 :param body: 12:04:17 Data to send in the request body, either :class:`str`, :class:`bytes`, 12:04:17 an iterable of :class:`str`/:class:`bytes`, or a file-like object. 12:04:17 12:04:17 :param headers: 12:04:17 Dictionary of custom headers to send, such as User-Agent, 12:04:17 If-None-Match, etc. If None, pool headers are used. If provided, 12:04:17 these headers completely replace any pool-specific headers. 12:04:17 12:04:17 :param retries: 12:04:17 Configure the number of retries to allow before raising a 12:04:17 :class:`~urllib3.exceptions.MaxRetryError` exception. 12:04:17 12:04:17 If ``None`` (default) will retry 3 times, see ``Retry.DEFAULT``. Pass a 12:04:17 :class:`~urllib3.util.retry.Retry` object for fine-grained control 12:04:17 over different types of retries. 12:04:17 Pass an integer number to retry connection errors that many times, 12:04:17 but no other types of errors. Pass zero to never retry. 12:04:17 12:04:17 If ``False``, then retries are disabled and any exception is raised 12:04:17 immediately. Also, instead of raising a MaxRetryError on redirects, 12:04:17 the redirect response will be returned. 12:04:17 12:04:17 :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. 12:04:17 12:04:17 :param redirect: 12:04:17 If True, automatically handle redirects (status codes 301, 302, 12:04:17 303, 307, 308). Each redirect counts as a retry. Disabling retries 12:04:17 will disable redirect, too. 12:04:17 12:04:17 :param assert_same_host: 12:04:17 If ``True``, will make sure that the host of the pool requests is 12:04:17 consistent else will raise HostChangedError. When ``False``, you can 12:04:17 use the pool on an HTTP proxy and request foreign hosts. 12:04:17 12:04:17 :param timeout: 12:04:17 If specified, overrides the default timeout for this one 12:04:17 request. It may be a float (in seconds) or an instance of 12:04:17 :class:`urllib3.util.Timeout`. 12:04:17 12:04:17 :param pool_timeout: 12:04:17 If set and the pool is set to block=True, then this method will 12:04:17 block for ``pool_timeout`` seconds and raise EmptyPoolError if no 12:04:17 connection is available within the time period. 12:04:17 12:04:17 :param bool preload_content: 12:04:17 If True, the response's body will be preloaded into memory. 12:04:17 12:04:17 :param bool decode_content: 12:04:17 If True, will attempt to decode the body based on the 12:04:17 'content-encoding' header. 12:04:17 12:04:17 :param release_conn: 12:04:17 If False, then the urlopen call will not release the connection 12:04:17 back into the pool once a response is received (but will release if 12:04:17 you read the entire contents of the response such as when 12:04:17 `preload_content=True`). This is useful if you're not preloading 12:04:17 the response's content immediately. You will need to call 12:04:17 ``r.release_conn()`` on the response ``r`` to return the connection 12:04:17 back into the pool. If None, it takes the value of ``preload_content`` 12:04:17 which defaults to ``True``. 
12:04:17 12:04:17 :param bool chunked: 12:04:17 If True, urllib3 will send the body using chunked transfer 12:04:17 encoding. Otherwise, urllib3 will send the body using the standard 12:04:17 content-length form. Defaults to False. 12:04:17 12:04:17 :param int body_pos: 12:04:17 Position to seek to in file-like body in the event of a retry or 12:04:17 redirect. Typically this won't need to be set because urllib3 will 12:04:17 auto-populate the value when needed. 12:04:17 """ 12:04:17 parsed_url = parse_url(url) 12:04:17 destination_scheme = parsed_url.scheme 12:04:17 12:04:17 if headers is None: 12:04:17 headers = self.headers 12:04:17 12:04:17 if not isinstance(retries, Retry): 12:04:17 retries = Retry.from_int(retries, redirect=redirect, default=self.retries) 12:04:17 12:04:17 if release_conn is None: 12:04:17 release_conn = preload_content 12:04:17 12:04:17 # Check host 12:04:17 if assert_same_host and not self.is_same_host(url): 12:04:17 raise HostChangedError(self, url, retries) 12:04:17 12:04:17 # Ensure that the URL we're connecting to is properly encoded 12:04:17 if url.startswith("/"): 12:04:17 url = to_str(_encode_target(url)) 12:04:17 else: 12:04:17 url = to_str(parsed_url.url) 12:04:17 12:04:17 conn = None 12:04:17 12:04:17 # Track whether `conn` needs to be released before 12:04:17 # returning/raising/recursing. Update this variable if necessary, and 12:04:17 # leave `release_conn` constant throughout the function. That way, if 12:04:17 # the function recurses, the original value of `release_conn` will be 12:04:17 # passed down into the recursive call, and its value will be respected. 12:04:17 # 12:04:17 # See issue #651 [1] for details. 12:04:17 # 12:04:17 # [1] 12:04:17 release_this_conn = release_conn 12:04:17 12:04:17 http_tunnel_required = connection_requires_http_tunnel( 12:04:17 self.proxy, self.proxy_config, destination_scheme 12:04:17 ) 12:04:17 12:04:17 # Merge the proxy headers. Only done when not using HTTP CONNECT. We 12:04:17 # have to copy the headers dict so we can safely change it without those 12:04:17 # changes being reflected in anyone else's copy. 12:04:17 if not http_tunnel_required: 12:04:17 headers = headers.copy() # type: ignore[attr-defined] 12:04:17 headers.update(self.proxy_headers) # type: ignore[union-attr] 12:04:17 12:04:17 # Must keep the exception bound to a separate variable or else Python 3 12:04:17 # complains about UnboundLocalError. 12:04:17 err = None 12:04:17 12:04:17 # Keep track of whether we cleanly exited the except block. This 12:04:17 # ensures we do proper cleanup in finally. 12:04:17 clean_exit = False 12:04:17 12:04:17 # Rewind body position, if needed. Record current position 12:04:17 # for future rewinds in the event of a redirect/retry. 12:04:17 body_pos = set_file_position(body, body_pos) 12:04:17 12:04:17 try: 12:04:17 # Request a connection from the queue. 12:04:17 timeout_obj = self._get_timeout(timeout) 12:04:17 conn = self._get_conn(timeout=pool_timeout) 12:04:17 12:04:17 conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] 12:04:17 12:04:17 # Is this a closed/new connection that requires CONNECT tunnelling? 
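Sketch only, not the suite's code: test_15 issues the PUT whose locals appear above to mount SPDR-SA1 under topology-netconf. The JSON body is truncated in the log, so only the fields visible there are echoed and the rest is left out; the real payload is presumably built by the common test utilities.

import requests

URL = ("http://localhost:8183/rests/data/network-topology:network-topology/"
       "topology=topology-netconf/node=SPDR-SA1")

payload = {
    "node": [{
        "node-id": "SPDR-SA1",
        "netconf-node-topology:netconf-node": {
            "netconf-node-topology:host": "127.0.0.1",
            # remaining netconf-node-topology:* leaves (backoff-millis,
            # backoff-multiplier, keepalive-delay, ...) are truncated in the
            # log and are deliberately not reconstructed here
        },
    }]
}

resp = requests.put(
    URL,
    json=payload,
    auth=("admin", "admin"),
    headers={"Content-Type": "application/json", "Accept": "application/json"},
    timeout=(30, 30),
)
print(resp.status_code)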
12:04:17 if self.proxy is not None and http_tunnel_required and conn.is_closed: 12:04:17 try: 12:04:17 self._prepare_proxy(conn) 12:04:17 except (BaseSSLError, OSError, SocketTimeout) as e: 12:04:17 self._raise_timeout( 12:04:17 err=e, url=self.proxy.url, timeout_value=conn.timeout 12:04:17 ) 12:04:17 raise 12:04:17 12:04:17 # If we're going to release the connection in ``finally:``, then 12:04:17 # the response doesn't need to know about the connection. Otherwise 12:04:17 # it will also try to release it and we'll have a double-release 12:04:17 # mess. 12:04:17 response_conn = conn if not release_conn else None 12:04:17 12:04:17 # Make the request on the HTTPConnection object 12:04:17 > response = self._make_request( 12:04:17 conn, 12:04:17 method, 12:04:17 url, 12:04:17 timeout=timeout_obj, 12:04:17 body=body, 12:04:17 headers=headers, 12:04:17 chunked=chunked, 12:04:17 retries=retries, 12:04:17 response_conn=response_conn, 12:04:17 preload_content=preload_content, 12:04:17 decode_content=decode_content, 12:04:17 **response_kw, 12:04:17 ) 12:04:17 12:04:17 ../.tox/tests221/lib/python3.11/site-packages/urllib3/connectionpool.py:787: 12:04:17 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 12:04:17 ../.tox/tests221/lib/python3.11/site-packages/urllib3/connectionpool.py:493: in _make_request 12:04:17 conn.request( 12:04:17 ../.tox/tests221/lib/python3.11/site-packages/urllib3/connection.py:494: in request 12:04:17 self.endheaders() 12:04:17 /opt/pyenv/versions/3.11.10/lib/python3.11/http/client.py:1298: in endheaders 12:04:17 self._send_output(message_body, encode_chunked=encode_chunked) 12:04:17 /opt/pyenv/versions/3.11.10/lib/python3.11/http/client.py:1058: in _send_output 12:04:17 self.send(msg) 12:04:17 /opt/pyenv/versions/3.11.10/lib/python3.11/http/client.py:996: in send 12:04:17 self.connect() 12:04:17 ../.tox/tests221/lib/python3.11/site-packages/urllib3/connection.py:325: in connect 12:04:17 self.sock = self._new_conn() 12:04:17 ^^^^^^^^^^^^^^^^ 12:04:17 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 12:04:17 12:04:17 self = 12:04:17 12:04:17 def _new_conn(self) -> socket.socket: 12:04:17 """Establish a socket connection and set nodelay settings on it. 12:04:17 12:04:17 :return: New socket connection. 12:04:17 """ 12:04:17 try: 12:04:17 sock = connection.create_connection( 12:04:17 (self._dns_host, self.port), 12:04:17 self.timeout, 12:04:17 source_address=self.source_address, 12:04:17 socket_options=self.socket_options, 12:04:17 ) 12:04:17 except socket.gaierror as e: 12:04:17 raise NameResolutionError(self.host, self, e) from e 12:04:17 except SocketTimeout as e: 12:04:17 raise ConnectTimeoutError( 12:04:17 self, 12:04:17 f"Connection to {self.host} timed out. 
(connect timeout={self.timeout})", 12:04:17 ) from e 12:04:17 12:04:17 except OSError as e: 12:04:17 > raise NewConnectionError( 12:04:17 self, f"Failed to establish a new connection: {e}" 12:04:17 ) from e 12:04:17 E urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused 12:04:17 12:04:17 ../.tox/tests221/lib/python3.11/site-packages/urllib3/connection.py:213: NewConnectionError 12:04:17 12:04:17 The above exception was the direct cause of the following exception: 12:04:17 12:04:17 self = 12:04:17 request = , stream = False 12:04:17 timeout = Timeout(connect=30, read=30, total=None), verify = True, cert = None 12:04:17 proxies = OrderedDict() 12:04:17 12:04:17 def send( 12:04:17 self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 12:04:17 ): 12:04:17 """Sends PreparedRequest object. Returns Response object. 12:04:17 12:04:17 :param request: The :class:`PreparedRequest ` being sent. 12:04:17 :param stream: (optional) Whether to stream the request content. 12:04:17 :param timeout: (optional) How long to wait for the server to send 12:04:17 data before giving up, as a float, or a :ref:`(connect timeout, 12:04:17 read timeout) ` tuple. 12:04:17 :type timeout: float or tuple or urllib3 Timeout object 12:04:17 :param verify: (optional) Either a boolean, in which case it controls whether 12:04:17 we verify the server's TLS certificate, or a string, in which case it 12:04:17 must be a path to a CA bundle to use 12:04:17 :param cert: (optional) Any user-provided SSL certificate to be trusted. 12:04:17 :param proxies: (optional) The proxies dictionary to apply to the request. 12:04:17 :rtype: requests.Response 12:04:17 """ 12:04:17 12:04:17 try: 12:04:17 conn = self.get_connection_with_tls_context( 12:04:17 request, verify, proxies=proxies, cert=cert 12:04:17 ) 12:04:17 except LocationValueError as e: 12:04:17 raise InvalidURL(e, request=request) 12:04:17 12:04:17 self.cert_verify(conn, request.url, verify, cert) 12:04:17 url = self.request_url(request, proxies) 12:04:17 self.add_headers( 12:04:17 request, 12:04:17 stream=stream, 12:04:17 timeout=timeout, 12:04:17 verify=verify, 12:04:17 cert=cert, 12:04:17 proxies=proxies, 12:04:17 ) 12:04:17 12:04:17 chunked = not (request.body is None or "Content-Length" in request.headers) 12:04:17 12:04:17 if isinstance(timeout, tuple): 12:04:17 try: 12:04:17 connect, read = timeout 12:04:17 timeout = TimeoutSauce(connect=connect, read=read) 12:04:17 except ValueError: 12:04:17 raise ValueError( 12:04:17 f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 12:04:17 f"or a single float to set both timeouts to the same value." 
12:04:17 ) 12:04:17 elif isinstance(timeout, TimeoutSauce): 12:04:17 pass 12:04:17 else: 12:04:17 timeout = TimeoutSauce(connect=timeout, read=timeout) 12:04:17 12:04:17 try: 12:04:17 > resp = conn.urlopen( 12:04:17 method=request.method, 12:04:17 url=url, 12:04:17 body=request.body, 12:04:17 headers=request.headers, 12:04:17 redirect=False, 12:04:17 assert_same_host=False, 12:04:17 preload_content=False, 12:04:17 decode_content=False, 12:04:17 retries=self.max_retries, 12:04:17 timeout=timeout, 12:04:17 chunked=chunked, 12:04:17 ) 12:04:17 12:04:17 ../.tox/tests221/lib/python3.11/site-packages/requests/adapters.py:667: 12:04:17 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 12:04:17 ../.tox/tests221/lib/python3.11/site-packages/urllib3/connectionpool.py:841: in urlopen 12:04:17 retries = retries.increment( 12:04:17 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 12:04:17 12:04:17 self = Retry(total=0, connect=None, read=False, redirect=None, status=None) 12:04:17 method = 'PUT' 12:04:17 url = '/rests/data/network-topology:network-topology/topology=topology-netconf/node=SPDR-SA1' 12:04:17 response = None 12:04:17 error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused') 12:04:17 _pool = 12:04:17 _stacktrace = 12:04:17 12:04:17 def increment( 12:04:17 self, 12:04:17 method: str | None = None, 12:04:17 url: str | None = None, 12:04:17 response: BaseHTTPResponse | None = None, 12:04:17 error: Exception | None = None, 12:04:17 _pool: ConnectionPool | None = None, 12:04:17 _stacktrace: TracebackType | None = None, 12:04:17 ) -> Self: 12:04:17 """Return a new Retry object with incremented retry counters. 12:04:17 12:04:17 :param response: A response object, or None, if the server did not 12:04:17 return a response. 12:04:17 :type response: :class:`~urllib3.response.BaseHTTPResponse` 12:04:17 :param Exception error: An error encountered during the request, or 12:04:17 None if the response was received successfully. 12:04:17 12:04:17 :return: A new ``Retry`` object. 12:04:17 """ 12:04:17 if self.total is False and error: 12:04:17 # Disabled, indicate to re-raise the error. 12:04:17 raise reraise(type(error), error, _stacktrace) 12:04:17 12:04:17 total = self.total 12:04:17 if total is not None: 12:04:17 total -= 1 12:04:17 12:04:17 connect = self.connect 12:04:17 read = self.read 12:04:17 redirect = self.redirect 12:04:17 status_count = self.status 12:04:17 other = self.other 12:04:17 cause = "unknown" 12:04:17 status = None 12:04:17 redirect_location = None 12:04:17 12:04:17 if error and self._is_connection_error(error): 12:04:17 # Connect retry? 12:04:17 if connect is False: 12:04:17 raise reraise(type(error), error, _stacktrace) 12:04:17 elif connect is not None: 12:04:17 connect -= 1 12:04:17 12:04:17 elif error and self._is_read_error(error): 12:04:17 # Read retry? 12:04:17 if read is False or method is None or not self._is_method_retryable(method): 12:04:17 raise reraise(type(error), error, _stacktrace) 12:04:17 elif read is not None: 12:04:17 read -= 1 12:04:17 12:04:17 elif error: 12:04:17 # Other retry? 12:04:17 if other is not None: 12:04:17 other -= 1 12:04:17 12:04:17 elif response and response.get_redirect_location(): 12:04:17 # Redirect retry? 
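# [Reviewer note - illustration, not part of the urllib3 source quoted here]
# With the Retry(total=0, ...) object shown above, the very first connection
# error exhausts the retry budget and increment() ends in MaxRetryError.  A
# small self-contained reproduction (the error and url values are
# illustrative only, they are not taken from this build):
from urllib3.exceptions import MaxRetryError, NewConnectionError
from urllib3.util.retry import Retry

try:
    Retry(total=0).increment(
        method="PUT",
        url="/example",
        error=NewConnectionError(None, "Connection refused"),
    )
except MaxRetryError as exc:
    print("retries exhausted:", exc.reason)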
12:04:17 if redirect is not None: 12:04:17 redirect -= 1 12:04:17 cause = "too many redirects" 12:04:17 response_redirect_location = response.get_redirect_location() 12:04:17 if response_redirect_location: 12:04:17 redirect_location = response_redirect_location 12:04:17 status = response.status 12:04:17 12:04:17 else: 12:04:17 # Incrementing because of a server error like a 500 in 12:04:17 # status_forcelist and the given method is in the allowed_methods 12:04:17 cause = ResponseError.GENERIC_ERROR 12:04:17 if response and response.status: 12:04:17 if status_count is not None: 12:04:17 status_count -= 1 12:04:17 cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status) 12:04:17 status = response.status 12:04:17 12:04:17 history = self.history + ( 12:04:17 RequestHistory(method, url, error, status, redirect_location), 12:04:17 ) 12:04:17 12:04:17 new_retry = self.new( 12:04:17 total=total, 12:04:17 connect=connect, 12:04:17 read=read, 12:04:17 redirect=redirect, 12:04:17 status=status_count, 12:04:17 other=other, 12:04:17 history=history, 12:04:17 ) 12:04:17 12:04:17 if new_retry.is_exhausted(): 12:04:17 reason = error or ResponseError(cause) 12:04:17 > raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type] 12:04:17 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 12:04:17 E urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=8183): Max retries exceeded with url: /rests/data/network-topology:network-topology/topology=topology-netconf/node=SPDR-SA1 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 12:04:17 12:04:17 ../.tox/tests221/lib/python3.11/site-packages/urllib3/util/retry.py:519: MaxRetryError 12:04:17 12:04:17 During handling of the above exception, another exception occurred: 12:04:17 12:04:17 self = 12:04:17 12:04:17 def test_15_spdr_device_connection(self): 12:04:17 > response = test_utils.mount_device("SPDR-SA1", ('spdra', self.NODE_VERSION)) 12:04:17 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 12:04:17 12:04:17 transportpce_tests/2.2.1/test01_portmapping.py:182: 12:04:17 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 12:04:17 transportpce_tests/common/test_utils.py:362: in mount_device 12:04:17 response = put_request(url[RESTCONF_VERSION].format('{}', node), body) 12:04:17 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 12:04:17 transportpce_tests/common/test_utils.py:124: in put_request 12:04:17 return requests.request( 12:04:17 ../.tox/tests221/lib/python3.11/site-packages/requests/api.py:59: in request 12:04:17 return session.request(method=method, url=url, **kwargs) 12:04:17 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 12:04:17 ../.tox/tests221/lib/python3.11/site-packages/requests/sessions.py:589: in request 12:04:17 resp = self.send(prep, **send_kwargs) 12:04:17 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 12:04:17 ../.tox/tests221/lib/python3.11/site-packages/requests/sessions.py:703: in send 12:04:17 r = adapter.send(request, **kwargs) 12:04:17 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 12:04:17 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 12:04:17 12:04:17 self = 12:04:17 request = , stream = False 12:04:17 timeout = Timeout(connect=30, read=30, total=None), verify = True, cert = None 12:04:17 proxies = OrderedDict() 12:04:17 12:04:17 def send( 12:04:17 self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 12:04:17 ): 12:04:17 """Sends 
PreparedRequest object. Returns Response object. 12:04:17 12:04:17 :param request: The :class:`PreparedRequest ` being sent. 12:04:17 :param stream: (optional) Whether to stream the request content. 12:04:17 :param timeout: (optional) How long to wait for the server to send 12:04:17 data before giving up, as a float, or a :ref:`(connect timeout, 12:04:17 read timeout) ` tuple. 12:04:17 :type timeout: float or tuple or urllib3 Timeout object 12:04:17 :param verify: (optional) Either a boolean, in which case it controls whether 12:04:17 we verify the server's TLS certificate, or a string, in which case it 12:04:17 must be a path to a CA bundle to use 12:04:17 :param cert: (optional) Any user-provided SSL certificate to be trusted. 12:04:17 :param proxies: (optional) The proxies dictionary to apply to the request. 12:04:17 :rtype: requests.Response 12:04:17 """ 12:04:17 12:04:17 try: 12:04:17 conn = self.get_connection_with_tls_context( 12:04:17 request, verify, proxies=proxies, cert=cert 12:04:17 ) 12:04:17 except LocationValueError as e: 12:04:17 raise InvalidURL(e, request=request) 12:04:17 12:04:17 self.cert_verify(conn, request.url, verify, cert) 12:04:17 url = self.request_url(request, proxies) 12:04:17 self.add_headers( 12:04:17 request, 12:04:17 stream=stream, 12:04:17 timeout=timeout, 12:04:17 verify=verify, 12:04:17 cert=cert, 12:04:17 proxies=proxies, 12:04:17 ) 12:04:17 12:04:17 chunked = not (request.body is None or "Content-Length" in request.headers) 12:04:17 12:04:17 if isinstance(timeout, tuple): 12:04:17 try: 12:04:17 connect, read = timeout 12:04:17 timeout = TimeoutSauce(connect=connect, read=read) 12:04:17 except ValueError: 12:04:17 raise ValueError( 12:04:17 f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 12:04:17 f"or a single float to set both timeouts to the same value." 12:04:17 ) 12:04:17 elif isinstance(timeout, TimeoutSauce): 12:04:17 pass 12:04:17 else: 12:04:17 timeout = TimeoutSauce(connect=timeout, read=timeout) 12:04:17 12:04:17 try: 12:04:17 resp = conn.urlopen( 12:04:17 method=request.method, 12:04:17 url=url, 12:04:17 body=request.body, 12:04:17 headers=request.headers, 12:04:17 redirect=False, 12:04:17 assert_same_host=False, 12:04:17 preload_content=False, 12:04:17 decode_content=False, 12:04:17 retries=self.max_retries, 12:04:17 timeout=timeout, 12:04:17 chunked=chunked, 12:04:17 ) 12:04:17 12:04:17 except (ProtocolError, OSError) as err: 12:04:17 raise ConnectionError(err, request=request) 12:04:17 12:04:17 except MaxRetryError as e: 12:04:17 if isinstance(e.reason, ConnectTimeoutError): 12:04:17 # TODO: Remove this in 3.0.0: see #2811 12:04:17 if not isinstance(e.reason, NewConnectionError): 12:04:17 raise ConnectTimeout(e, request=request) 12:04:17 12:04:17 if isinstance(e.reason, ResponseError): 12:04:17 raise RetryError(e, request=request) 12:04:17 12:04:17 if isinstance(e.reason, _ProxyError): 12:04:17 raise ProxyError(e, request=request) 12:04:17 12:04:17 if isinstance(e.reason, _SSLError): 12:04:17 # This branch is for urllib3 v1.22 and later. 
12:04:17 raise SSLError(e, request=request) 12:04:17 12:04:17 > raise ConnectionError(e, request=request) 12:04:17 E requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=8183): Max retries exceeded with url: /rests/data/network-topology:network-topology/topology=topology-netconf/node=SPDR-SA1 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 12:04:17 12:04:17 ../.tox/tests221/lib/python3.11/site-packages/requests/adapters.py:700: ConnectionError 12:04:17 ----------------------------- Captured stdout call ----------------------------- 12:04:17 execution of test_15_spdr_device_connection 12:04:17 _________ TransportPCEPortMappingTesting.test_16_spdr_device_connected _________ 12:04:17 12:04:17 self = 12:04:17 12:04:17 def _new_conn(self) -> socket.socket: 12:04:17 """Establish a socket connection and set nodelay settings on it. 12:04:17 12:04:17 :return: New socket connection. 12:04:17 """ 12:04:17 try: 12:04:17 > sock = connection.create_connection( 12:04:17 (self._dns_host, self.port), 12:04:17 self.timeout, 12:04:17 source_address=self.source_address, 12:04:17 socket_options=self.socket_options, 12:04:17 ) 12:04:17 12:04:17 ../.tox/tests221/lib/python3.11/site-packages/urllib3/connection.py:198: 12:04:17 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 12:04:17 ../.tox/tests221/lib/python3.11/site-packages/urllib3/util/connection.py:85: in create_connection 12:04:17 raise err 12:04:17 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 12:04:17 12:04:17 address = ('localhost', 8183), timeout = 30, source_address = None 12:04:17 socket_options = [(6, 1, 1)] 12:04:17 12:04:17 def create_connection( 12:04:17 address: tuple[str, int], 12:04:17 timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 12:04:17 source_address: tuple[str, int] | None = None, 12:04:17 socket_options: _TYPE_SOCKET_OPTIONS | None = None, 12:04:17 ) -> socket.socket: 12:04:17 """Connect to *address* and return the socket object. 12:04:17 12:04:17 Convenience function. Connect to *address* (a 2-tuple ``(host, 12:04:17 port)``) and return the socket object. Passing the optional 12:04:17 *timeout* parameter will set the timeout on the socket instance 12:04:17 before attempting to connect. If no *timeout* is supplied, the 12:04:17 global default timeout setting returned by :func:`socket.getdefaulttimeout` 12:04:17 is used. If *source_address* is set it must be a tuple of (host, port) 12:04:17 for the socket to bind as a source address before making the connection. 12:04:17 An host of '' or port 0 tells the OS to use the default. 12:04:17 """ 12:04:17 12:04:17 host, port = address 12:04:17 if host.startswith("["): 12:04:17 host = host.strip("[]") 12:04:17 err = None 12:04:17 12:04:17 # Using the value from allowed_gai_family() in the context of getaddrinfo lets 12:04:17 # us select whether to work with IPv4 DNS records, IPv6 records, or both. 12:04:17 # The original create_connection function always returns all records. 
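# [Reviewer note - illustration, not part of the urllib3 source quoted here]
# allowed_gai_family(), called on the next line, widens the getaddrinfo
# lookup to AF_UNSPEC when the interpreter has usable IPv6 support and pins
# it to AF_INET otherwise.  Sketch, assuming urllib3 2.x:
import socket
from urllib3.util.connection import HAS_IPV6, allowed_gai_family

family = allowed_gai_family()
assert family == (socket.AF_UNSPEC if HAS_IPV6 else socket.AF_INET)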
12:04:17 family = allowed_gai_family() 12:04:17 12:04:17 try: 12:04:17 host.encode("idna") 12:04:17 except UnicodeError: 12:04:17 raise LocationParseError(f"'{host}', label empty or too long") from None 12:04:17 12:04:17 for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM): 12:04:17 af, socktype, proto, canonname, sa = res 12:04:17 sock = None 12:04:17 try: 12:04:17 sock = socket.socket(af, socktype, proto) 12:04:17 12:04:17 # If provided, set socket level options before connecting. 12:04:17 _set_socket_options(sock, socket_options) 12:04:17 12:04:17 if timeout is not _DEFAULT_TIMEOUT: 12:04:17 sock.settimeout(timeout) 12:04:17 if source_address: 12:04:17 sock.bind(source_address) 12:04:17 > sock.connect(sa) 12:04:17 E ConnectionRefusedError: [Errno 111] Connection refused 12:04:17 12:04:17 ../.tox/tests221/lib/python3.11/site-packages/urllib3/util/connection.py:73: ConnectionRefusedError 12:04:17 12:04:17 The above exception was the direct cause of the following exception: 12:04:17 12:04:17 self = 12:04:17 method = 'GET' 12:04:17 url = '/rests/data/network-topology:network-topology/topology=topology-netconf/node=SPDR-SA1?content=nonconfig' 12:04:17 body = None 12:04:17 headers = {'User-Agent': 'python-requests/2.32.4', 'Accept-Encoding': 'gzip, deflate', 'Accept': 'application/json', 'Connection': 'keep-alive', 'Content-Type': 'application/json', 'Authorization': 'Basic YWRtaW46YWRtaW4='} 12:04:17 retries = Retry(total=0, connect=None, read=False, redirect=None, status=None) 12:04:17 redirect = False, assert_same_host = False 12:04:17 timeout = Timeout(connect=30, read=30, total=None), pool_timeout = None 12:04:17 release_conn = False, chunked = False, body_pos = None, preload_content = False 12:04:17 decode_content = False, response_kw = {} 12:04:17 parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/rests/data/network-topology:network-topology/topology=topology-netconf/node=SPDR-SA1', query='content=nonconfig', fragment=None) 12:04:17 destination_scheme = None, conn = None, release_this_conn = True 12:04:17 http_tunnel_required = False, err = None, clean_exit = False 12:04:17 12:04:17 def urlopen( # type: ignore[override] 12:04:17 self, 12:04:17 method: str, 12:04:17 url: str, 12:04:17 body: _TYPE_BODY | None = None, 12:04:17 headers: typing.Mapping[str, str] | None = None, 12:04:17 retries: Retry | bool | int | None = None, 12:04:17 redirect: bool = True, 12:04:17 assert_same_host: bool = True, 12:04:17 timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 12:04:17 pool_timeout: int | None = None, 12:04:17 release_conn: bool | None = None, 12:04:17 chunked: bool = False, 12:04:17 body_pos: _TYPE_BODY_POSITION | None = None, 12:04:17 preload_content: bool = True, 12:04:17 decode_content: bool = True, 12:04:17 **response_kw: typing.Any, 12:04:17 ) -> BaseHTTPResponse: 12:04:17 """ 12:04:17 Get a connection from the pool and perform an HTTP request. This is the 12:04:17 lowest level call for making a request, so you'll need to specify all 12:04:17 the raw details. 12:04:17 12:04:17 .. note:: 12:04:17 12:04:17 More commonly, it's appropriate to use a convenience method 12:04:17 such as :meth:`request`. 12:04:17 12:04:17 .. note:: 12:04:17 12:04:17 `release_conn` will only behave as expected if 12:04:17 `preload_content=False` because we want to make 12:04:17 `preload_content=False` the default behaviour someday soon without 12:04:17 breaking backwards compatibility. 12:04:17 12:04:17 :param method: 12:04:17 HTTP request method (such as GET, POST, PUT, etc.) 
12:04:17 12:04:17 :param url: 12:04:17 The URL to perform the request on. 12:04:17 12:04:17 :param body: 12:04:17 Data to send in the request body, either :class:`str`, :class:`bytes`, 12:04:17 an iterable of :class:`str`/:class:`bytes`, or a file-like object. 12:04:17 12:04:17 :param headers: 12:04:17 Dictionary of custom headers to send, such as User-Agent, 12:04:17 If-None-Match, etc. If None, pool headers are used. If provided, 12:04:17 these headers completely replace any pool-specific headers. 12:04:17 12:04:17 :param retries: 12:04:17 Configure the number of retries to allow before raising a 12:04:17 :class:`~urllib3.exceptions.MaxRetryError` exception. 12:04:17 12:04:17 If ``None`` (default) will retry 3 times, see ``Retry.DEFAULT``. Pass a 12:04:17 :class:`~urllib3.util.retry.Retry` object for fine-grained control 12:04:17 over different types of retries. 12:04:17 Pass an integer number to retry connection errors that many times, 12:04:17 but no other types of errors. Pass zero to never retry. 12:04:17 12:04:17 If ``False``, then retries are disabled and any exception is raised 12:04:17 immediately. Also, instead of raising a MaxRetryError on redirects, 12:04:17 the redirect response will be returned. 12:04:17 12:04:17 :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. 12:04:17 12:04:17 :param redirect: 12:04:17 If True, automatically handle redirects (status codes 301, 302, 12:04:17 303, 307, 308). Each redirect counts as a retry. Disabling retries 12:04:17 will disable redirect, too. 12:04:17 12:04:17 :param assert_same_host: 12:04:17 If ``True``, will make sure that the host of the pool requests is 12:04:17 consistent else will raise HostChangedError. When ``False``, you can 12:04:17 use the pool on an HTTP proxy and request foreign hosts. 12:04:17 12:04:17 :param timeout: 12:04:17 If specified, overrides the default timeout for this one 12:04:17 request. It may be a float (in seconds) or an instance of 12:04:17 :class:`urllib3.util.Timeout`. 12:04:17 12:04:17 :param pool_timeout: 12:04:17 If set and the pool is set to block=True, then this method will 12:04:17 block for ``pool_timeout`` seconds and raise EmptyPoolError if no 12:04:17 connection is available within the time period. 12:04:17 12:04:17 :param bool preload_content: 12:04:17 If True, the response's body will be preloaded into memory. 12:04:17 12:04:17 :param bool decode_content: 12:04:17 If True, will attempt to decode the body based on the 12:04:17 'content-encoding' header. 12:04:17 12:04:17 :param release_conn: 12:04:17 If False, then the urlopen call will not release the connection 12:04:17 back into the pool once a response is received (but will release if 12:04:17 you read the entire contents of the response such as when 12:04:17 `preload_content=True`). This is useful if you're not preloading 12:04:17 the response's content immediately. You will need to call 12:04:17 ``r.release_conn()`` on the response ``r`` to return the connection 12:04:17 back into the pool. If None, it takes the value of ``preload_content`` 12:04:17 which defaults to ``True``. 12:04:17 12:04:17 :param bool chunked: 12:04:17 If True, urllib3 will send the body using chunked transfer 12:04:17 encoding. Otherwise, urllib3 will send the body using the standard 12:04:17 content-length form. Defaults to False. 12:04:17 12:04:17 :param int body_pos: 12:04:17 Position to seek to in file-like body in the event of a retry or 12:04:17 redirect. 
Typically this won't need to be set because urllib3 will 12:04:17 auto-populate the value when needed. 12:04:17 """ 12:04:17 parsed_url = parse_url(url) 12:04:17 destination_scheme = parsed_url.scheme 12:04:17 12:04:17 if headers is None: 12:04:17 headers = self.headers 12:04:17 12:04:17 if not isinstance(retries, Retry): 12:04:17 retries = Retry.from_int(retries, redirect=redirect, default=self.retries) 12:04:17 12:04:17 if release_conn is None: 12:04:17 release_conn = preload_content 12:04:17 12:04:17 # Check host 12:04:17 if assert_same_host and not self.is_same_host(url): 12:04:17 raise HostChangedError(self, url, retries) 12:04:17 12:04:17 # Ensure that the URL we're connecting to is properly encoded 12:04:17 if url.startswith("/"): 12:04:17 url = to_str(_encode_target(url)) 12:04:17 else: 12:04:17 url = to_str(parsed_url.url) 12:04:17 12:04:17 conn = None 12:04:17 12:04:17 # Track whether `conn` needs to be released before 12:04:17 # returning/raising/recursing. Update this variable if necessary, and 12:04:17 # leave `release_conn` constant throughout the function. That way, if 12:04:17 # the function recurses, the original value of `release_conn` will be 12:04:17 # passed down into the recursive call, and its value will be respected. 12:04:17 # 12:04:17 # See issue #651 [1] for details. 12:04:17 # 12:04:17 # [1] 12:04:17 release_this_conn = release_conn 12:04:17 12:04:17 http_tunnel_required = connection_requires_http_tunnel( 12:04:17 self.proxy, self.proxy_config, destination_scheme 12:04:17 ) 12:04:17 12:04:17 # Merge the proxy headers. Only done when not using HTTP CONNECT. We 12:04:17 # have to copy the headers dict so we can safely change it without those 12:04:17 # changes being reflected in anyone else's copy. 12:04:17 if not http_tunnel_required: 12:04:17 headers = headers.copy() # type: ignore[attr-defined] 12:04:17 headers.update(self.proxy_headers) # type: ignore[union-attr] 12:04:17 12:04:17 # Must keep the exception bound to a separate variable or else Python 3 12:04:17 # complains about UnboundLocalError. 12:04:17 err = None 12:04:17 12:04:17 # Keep track of whether we cleanly exited the except block. This 12:04:17 # ensures we do proper cleanup in finally. 12:04:17 clean_exit = False 12:04:17 12:04:17 # Rewind body position, if needed. Record current position 12:04:17 # for future rewinds in the event of a redirect/retry. 12:04:17 body_pos = set_file_position(body, body_pos) 12:04:17 12:04:17 try: 12:04:17 # Request a connection from the queue. 12:04:17 timeout_obj = self._get_timeout(timeout) 12:04:17 conn = self._get_conn(timeout=pool_timeout) 12:04:17 12:04:17 conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] 12:04:17 12:04:17 # Is this a closed/new connection that requires CONNECT tunnelling? 12:04:17 if self.proxy is not None and http_tunnel_required and conn.is_closed: 12:04:17 try: 12:04:17 self._prepare_proxy(conn) 12:04:17 except (BaseSSLError, OSError, SocketTimeout) as e: 12:04:17 self._raise_timeout( 12:04:17 err=e, url=self.proxy.url, timeout_value=conn.timeout 12:04:17 ) 12:04:17 raise 12:04:17 12:04:17 # If we're going to release the connection in ``finally:``, then 12:04:17 # the response doesn't need to know about the connection. Otherwise 12:04:17 # it will also try to release it and we'll have a double-release 12:04:17 # mess. 
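# [Reviewer note - illustration, not part of the urllib3 source quoted here]
# The release_conn/preload_content contract described in the docstring above:
# with preload_content=False the caller owns the connection until the body is
# fully read or release_conn() is called.  Minimal sketch against a
# placeholder URL that has nothing to do with this build:
import urllib3

http = urllib3.PoolManager()
resp = http.request("GET", "http://example.com/", preload_content=False)
body = resp.read()    # draining the body lets urllib3 reuse the socket
resp.release_conn()   # explicit return of the connection to the pool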
12:04:17 response_conn = conn if not release_conn else None 12:04:17 12:04:17 # Make the request on the HTTPConnection object 12:04:17 > response = self._make_request( 12:04:17 conn, 12:04:17 method, 12:04:17 url, 12:04:17 timeout=timeout_obj, 12:04:17 body=body, 12:04:17 headers=headers, 12:04:17 chunked=chunked, 12:04:17 retries=retries, 12:04:17 response_conn=response_conn, 12:04:17 preload_content=preload_content, 12:04:17 decode_content=decode_content, 12:04:17 **response_kw, 12:04:17 ) 12:04:17 12:04:17 ../.tox/tests221/lib/python3.11/site-packages/urllib3/connectionpool.py:787: 12:04:17 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 12:04:17 ../.tox/tests221/lib/python3.11/site-packages/urllib3/connectionpool.py:493: in _make_request 12:04:17 conn.request( 12:04:17 ../.tox/tests221/lib/python3.11/site-packages/urllib3/connection.py:494: in request 12:04:17 self.endheaders() 12:04:17 /opt/pyenv/versions/3.11.10/lib/python3.11/http/client.py:1298: in endheaders 12:04:17 self._send_output(message_body, encode_chunked=encode_chunked) 12:04:17 /opt/pyenv/versions/3.11.10/lib/python3.11/http/client.py:1058: in _send_output 12:04:17 self.send(msg) 12:04:17 /opt/pyenv/versions/3.11.10/lib/python3.11/http/client.py:996: in send 12:04:17 self.connect() 12:04:17 ../.tox/tests221/lib/python3.11/site-packages/urllib3/connection.py:325: in connect 12:04:17 self.sock = self._new_conn() 12:04:17 ^^^^^^^^^^^^^^^^ 12:04:17 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 12:04:17 12:04:17 self = 12:04:17 12:04:17 def _new_conn(self) -> socket.socket: 12:04:17 """Establish a socket connection and set nodelay settings on it. 12:04:17 12:04:17 :return: New socket connection. 12:04:17 """ 12:04:17 try: 12:04:17 sock = connection.create_connection( 12:04:17 (self._dns_host, self.port), 12:04:17 self.timeout, 12:04:17 source_address=self.source_address, 12:04:17 socket_options=self.socket_options, 12:04:17 ) 12:04:17 except socket.gaierror as e: 12:04:17 raise NameResolutionError(self.host, self, e) from e 12:04:17 except SocketTimeout as e: 12:04:17 raise ConnectTimeoutError( 12:04:17 self, 12:04:17 f"Connection to {self.host} timed out. (connect timeout={self.timeout})", 12:04:17 ) from e 12:04:17 12:04:17 except OSError as e: 12:04:17 > raise NewConnectionError( 12:04:17 self, f"Failed to establish a new connection: {e}" 12:04:17 ) from e 12:04:17 E urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused 12:04:17 12:04:17 ../.tox/tests221/lib/python3.11/site-packages/urllib3/connection.py:213: NewConnectionError 12:04:17 12:04:17 The above exception was the direct cause of the following exception: 12:04:17 12:04:17 self = 12:04:17 request = , stream = False 12:04:17 timeout = Timeout(connect=30, read=30, total=None), verify = True, cert = None 12:04:17 proxies = OrderedDict() 12:04:17 12:04:17 def send( 12:04:17 self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 12:04:17 ): 12:04:17 """Sends PreparedRequest object. Returns Response object. 12:04:17 12:04:17 :param request: The :class:`PreparedRequest ` being sent. 12:04:17 :param stream: (optional) Whether to stream the request content. 12:04:17 :param timeout: (optional) How long to wait for the server to send 12:04:17 data before giving up, as a float, or a :ref:`(connect timeout, 12:04:17 read timeout) ` tuple. 
12:04:17 :type timeout: float or tuple or urllib3 Timeout object 12:04:17 :param verify: (optional) Either a boolean, in which case it controls whether 12:04:17 we verify the server's TLS certificate, or a string, in which case it 12:04:17 must be a path to a CA bundle to use 12:04:17 :param cert: (optional) Any user-provided SSL certificate to be trusted. 12:04:17 :param proxies: (optional) The proxies dictionary to apply to the request. 12:04:17 :rtype: requests.Response 12:04:17 """ 12:04:17 12:04:17 try: 12:04:17 conn = self.get_connection_with_tls_context( 12:04:17 request, verify, proxies=proxies, cert=cert 12:04:17 ) 12:04:17 except LocationValueError as e: 12:04:17 raise InvalidURL(e, request=request) 12:04:17 12:04:17 self.cert_verify(conn, request.url, verify, cert) 12:04:17 url = self.request_url(request, proxies) 12:04:17 self.add_headers( 12:04:17 request, 12:04:17 stream=stream, 12:04:17 timeout=timeout, 12:04:17 verify=verify, 12:04:17 cert=cert, 12:04:17 proxies=proxies, 12:04:17 ) 12:04:17 12:04:17 chunked = not (request.body is None or "Content-Length" in request.headers) 12:04:17 12:04:17 if isinstance(timeout, tuple): 12:04:17 try: 12:04:17 connect, read = timeout 12:04:17 timeout = TimeoutSauce(connect=connect, read=read) 12:04:17 except ValueError: 12:04:17 raise ValueError( 12:04:17 f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 12:04:17 f"or a single float to set both timeouts to the same value." 12:04:17 ) 12:04:17 elif isinstance(timeout, TimeoutSauce): 12:04:17 pass 12:04:17 else: 12:04:17 timeout = TimeoutSauce(connect=timeout, read=timeout) 12:04:17 12:04:17 try: 12:04:17 > resp = conn.urlopen( 12:04:17 method=request.method, 12:04:17 url=url, 12:04:17 body=request.body, 12:04:17 headers=request.headers, 12:04:17 redirect=False, 12:04:17 assert_same_host=False, 12:04:17 preload_content=False, 12:04:17 decode_content=False, 12:04:17 retries=self.max_retries, 12:04:17 timeout=timeout, 12:04:17 chunked=chunked, 12:04:17 ) 12:04:17 12:04:17 ../.tox/tests221/lib/python3.11/site-packages/requests/adapters.py:667: 12:04:17 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 12:04:17 ../.tox/tests221/lib/python3.11/site-packages/urllib3/connectionpool.py:841: in urlopen 12:04:17 retries = retries.increment( 12:04:17 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 12:04:17 12:04:17 self = Retry(total=0, connect=None, read=False, redirect=None, status=None) 12:04:17 method = 'GET' 12:04:17 url = '/rests/data/network-topology:network-topology/topology=topology-netconf/node=SPDR-SA1?content=nonconfig' 12:04:17 response = None 12:04:17 error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused') 12:04:17 _pool = 12:04:17 _stacktrace = 12:04:17 12:04:17 def increment( 12:04:17 self, 12:04:17 method: str | None = None, 12:04:17 url: str | None = None, 12:04:17 response: BaseHTTPResponse | None = None, 12:04:17 error: Exception | None = None, 12:04:17 _pool: ConnectionPool | None = None, 12:04:17 _stacktrace: TracebackType | None = None, 12:04:17 ) -> Self: 12:04:17 """Return a new Retry object with incremented retry counters. 12:04:17 12:04:17 :param response: A response object, or None, if the server did not 12:04:17 return a response. 12:04:17 :type response: :class:`~urllib3.response.BaseHTTPResponse` 12:04:17 :param Exception error: An error encountered during the request, or 12:04:17 None if the response was received successfully. 
12:04:17 12:04:17 :return: A new ``Retry`` object. 12:04:17 """ 12:04:17 if self.total is False and error: 12:04:17 # Disabled, indicate to re-raise the error. 12:04:17 raise reraise(type(error), error, _stacktrace) 12:04:17 12:04:17 total = self.total 12:04:17 if total is not None: 12:04:17 total -= 1 12:04:17 12:04:17 connect = self.connect 12:04:17 read = self.read 12:04:17 redirect = self.redirect 12:04:17 status_count = self.status 12:04:17 other = self.other 12:04:17 cause = "unknown" 12:04:17 status = None 12:04:17 redirect_location = None 12:04:17 12:04:17 if error and self._is_connection_error(error): 12:04:17 # Connect retry? 12:04:17 if connect is False: 12:04:17 raise reraise(type(error), error, _stacktrace) 12:04:17 elif connect is not None: 12:04:17 connect -= 1 12:04:17 12:04:17 elif error and self._is_read_error(error): 12:04:17 # Read retry? 12:04:17 if read is False or method is None or not self._is_method_retryable(method): 12:04:17 raise reraise(type(error), error, _stacktrace) 12:04:17 elif read is not None: 12:04:17 read -= 1 12:04:17 12:04:17 elif error: 12:04:17 # Other retry? 12:04:17 if other is not None: 12:04:17 other -= 1 12:04:17 12:04:17 elif response and response.get_redirect_location(): 12:04:17 # Redirect retry? 12:04:17 if redirect is not None: 12:04:17 redirect -= 1 12:04:17 cause = "too many redirects" 12:04:17 response_redirect_location = response.get_redirect_location() 12:04:17 if response_redirect_location: 12:04:17 redirect_location = response_redirect_location 12:04:17 status = response.status 12:04:17 12:04:17 else: 12:04:17 # Incrementing because of a server error like a 500 in 12:04:17 # status_forcelist and the given method is in the allowed_methods 12:04:17 cause = ResponseError.GENERIC_ERROR 12:04:17 if response and response.status: 12:04:17 if status_count is not None: 12:04:17 status_count -= 1 12:04:17 cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status) 12:04:17 status = response.status 12:04:17 12:04:17 history = self.history + ( 12:04:17 RequestHistory(method, url, error, status, redirect_location), 12:04:17 ) 12:04:17 12:04:17 new_retry = self.new( 12:04:17 total=total, 12:04:17 connect=connect, 12:04:17 read=read, 12:04:17 redirect=redirect, 12:04:17 status=status_count, 12:04:17 other=other, 12:04:17 history=history, 12:04:17 ) 12:04:17 12:04:17 if new_retry.is_exhausted(): 12:04:17 reason = error or ResponseError(cause) 12:04:17 > raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type] 12:04:17 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 12:04:17 E urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=8183): Max retries exceeded with url: /rests/data/network-topology:network-topology/topology=topology-netconf/node=SPDR-SA1?content=nonconfig (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 12:04:17 12:04:17 ../.tox/tests221/lib/python3.11/site-packages/urllib3/util/retry.py:519: MaxRetryError 12:04:17 12:04:17 During handling of the above exception, another exception occurred: 12:04:17 12:04:17 self = 12:04:17 12:04:17 def test_16_spdr_device_connected(self): 12:04:17 > response = test_utils.check_device_connection("SPDR-SA1") 12:04:17 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 12:04:17 12:04:17 transportpce_tests/2.2.1/test01_portmapping.py:186: 12:04:17 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 12:04:17 transportpce_tests/common/test_utils.py:390: in 
check_device_connection 12:04:17 response = get_request(url[RESTCONF_VERSION].format('{}', node)) 12:04:17 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 12:04:17 transportpce_tests/common/test_utils.py:116: in get_request 12:04:17 return requests.request( 12:04:17 ../.tox/tests221/lib/python3.11/site-packages/requests/api.py:59: in request 12:04:17 return session.request(method=method, url=url, **kwargs) 12:04:17 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 12:04:17 ../.tox/tests221/lib/python3.11/site-packages/requests/sessions.py:589: in request 12:04:17 resp = self.send(prep, **send_kwargs) 12:04:17 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 12:04:17 ../.tox/tests221/lib/python3.11/site-packages/requests/sessions.py:703: in send 12:04:17 r = adapter.send(request, **kwargs) 12:04:17 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 12:04:17 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 12:04:17 12:04:17 self = 12:04:17 request = , stream = False 12:04:17 timeout = Timeout(connect=30, read=30, total=None), verify = True, cert = None 12:04:17 proxies = OrderedDict() 12:04:17 12:04:17 def send( 12:04:17 self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 12:04:17 ): 12:04:17 """Sends PreparedRequest object. Returns Response object. 12:04:17 12:04:17 :param request: The :class:`PreparedRequest ` being sent. 12:04:17 :param stream: (optional) Whether to stream the request content. 12:04:17 :param timeout: (optional) How long to wait for the server to send 12:04:17 data before giving up, as a float, or a :ref:`(connect timeout, 12:04:17 read timeout) ` tuple. 12:04:17 :type timeout: float or tuple or urllib3 Timeout object 12:04:17 :param verify: (optional) Either a boolean, in which case it controls whether 12:04:17 we verify the server's TLS certificate, or a string, in which case it 12:04:17 must be a path to a CA bundle to use 12:04:17 :param cert: (optional) Any user-provided SSL certificate to be trusted. 12:04:17 :param proxies: (optional) The proxies dictionary to apply to the request. 12:04:17 :rtype: requests.Response 12:04:17 """ 12:04:17 12:04:17 try: 12:04:17 conn = self.get_connection_with_tls_context( 12:04:17 request, verify, proxies=proxies, cert=cert 12:04:17 ) 12:04:17 except LocationValueError as e: 12:04:17 raise InvalidURL(e, request=request) 12:04:17 12:04:17 self.cert_verify(conn, request.url, verify, cert) 12:04:17 url = self.request_url(request, proxies) 12:04:17 self.add_headers( 12:04:17 request, 12:04:17 stream=stream, 12:04:17 timeout=timeout, 12:04:17 verify=verify, 12:04:17 cert=cert, 12:04:17 proxies=proxies, 12:04:17 ) 12:04:17 12:04:17 chunked = not (request.body is None or "Content-Length" in request.headers) 12:04:17 12:04:17 if isinstance(timeout, tuple): 12:04:17 try: 12:04:17 connect, read = timeout 12:04:17 timeout = TimeoutSauce(connect=connect, read=read) 12:04:17 except ValueError: 12:04:17 raise ValueError( 12:04:17 f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 12:04:17 f"or a single float to set both timeouts to the same value." 
12:04:17 ) 12:04:17 elif isinstance(timeout, TimeoutSauce): 12:04:17 pass 12:04:17 else: 12:04:17 timeout = TimeoutSauce(connect=timeout, read=timeout) 12:04:17 12:04:17 try: 12:04:17 resp = conn.urlopen( 12:04:17 method=request.method, 12:04:17 url=url, 12:04:17 body=request.body, 12:04:17 headers=request.headers, 12:04:17 redirect=False, 12:04:17 assert_same_host=False, 12:04:17 preload_content=False, 12:04:17 decode_content=False, 12:04:17 retries=self.max_retries, 12:04:17 timeout=timeout, 12:04:17 chunked=chunked, 12:04:17 ) 12:04:17 12:04:17 except (ProtocolError, OSError) as err: 12:04:17 raise ConnectionError(err, request=request) 12:04:17 12:04:17 except MaxRetryError as e: 12:04:17 if isinstance(e.reason, ConnectTimeoutError): 12:04:17 # TODO: Remove this in 3.0.0: see #2811 12:04:17 if not isinstance(e.reason, NewConnectionError): 12:04:17 raise ConnectTimeout(e, request=request) 12:04:17 12:04:17 if isinstance(e.reason, ResponseError): 12:04:17 raise RetryError(e, request=request) 12:04:17 12:04:17 if isinstance(e.reason, _ProxyError): 12:04:17 raise ProxyError(e, request=request) 12:04:17 12:04:17 if isinstance(e.reason, _SSLError): 12:04:17 # This branch is for urllib3 v1.22 and later. 12:04:17 raise SSLError(e, request=request) 12:04:17 12:04:17 > raise ConnectionError(e, request=request) 12:04:17 E requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=8183): Max retries exceeded with url: /rests/data/network-topology:network-topology/topology=topology-netconf/node=SPDR-SA1?content=nonconfig (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 12:04:17 12:04:17 ../.tox/tests221/lib/python3.11/site-packages/requests/adapters.py:700: ConnectionError 12:04:17 ----------------------------- Captured stdout call ----------------------------- 12:04:17 execution of test_16_spdr_device_connected 12:04:17 _________ TransportPCEPortMappingTesting.test_17_spdr_portmapping_info _________ 12:04:17 12:04:17 self = 12:04:17 12:04:17 def _new_conn(self) -> socket.socket: 12:04:17 """Establish a socket connection and set nodelay settings on it. 12:04:17 12:04:17 :return: New socket connection. 12:04:17 """ 12:04:17 try: 12:04:17 > sock = connection.create_connection( 12:04:17 (self._dns_host, self.port), 12:04:17 self.timeout, 12:04:17 source_address=self.source_address, 12:04:17 socket_options=self.socket_options, 12:04:17 ) 12:04:17 12:04:17 ../.tox/tests221/lib/python3.11/site-packages/urllib3/connection.py:198: 12:04:17 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 12:04:17 ../.tox/tests221/lib/python3.11/site-packages/urllib3/util/connection.py:85: in create_connection 12:04:17 raise err 12:04:17 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 12:04:17 12:04:17 address = ('localhost', 8183), timeout = 30, source_address = None 12:04:17 socket_options = [(6, 1, 1)] 12:04:17 12:04:17 def create_connection( 12:04:17 address: tuple[str, int], 12:04:17 timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 12:04:17 source_address: tuple[str, int] | None = None, 12:04:17 socket_options: _TYPE_SOCKET_OPTIONS | None = None, 12:04:17 ) -> socket.socket: 12:04:17 """Connect to *address* and return the socket object. 12:04:17 12:04:17 Convenience function. Connect to *address* (a 2-tuple ``(host, 12:04:17 port)``) and return the socket object. 
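# [Reviewer note - illustration, not part of the urllib3 source quoted here]
# The connect() call at the end of this helper is what fails throughout this
# run: errno 111 (ECONNREFUSED) means nothing is listening on localhost:8183,
# i.e. the controller under test is down.  A quick, hypothetical
# reachability check:
import errno
import socket

with socket.socket() as probe:
    probe.settimeout(2)
    rc = probe.connect_ex(("localhost", 8183))
print("RESTCONF port open" if rc == 0 else f"connect failed: {errno.errorcode.get(rc, rc)}")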
Passing the optional 12:04:17 *timeout* parameter will set the timeout on the socket instance 12:04:17 before attempting to connect. If no *timeout* is supplied, the 12:04:17 global default timeout setting returned by :func:`socket.getdefaulttimeout` 12:04:17 is used. If *source_address* is set it must be a tuple of (host, port) 12:04:17 for the socket to bind as a source address before making the connection. 12:04:17 An host of '' or port 0 tells the OS to use the default. 12:04:17 """ 12:04:17 12:04:17 host, port = address 12:04:17 if host.startswith("["): 12:04:17 host = host.strip("[]") 12:04:17 err = None 12:04:17 12:04:17 # Using the value from allowed_gai_family() in the context of getaddrinfo lets 12:04:17 # us select whether to work with IPv4 DNS records, IPv6 records, or both. 12:04:17 # The original create_connection function always returns all records. 12:04:17 family = allowed_gai_family() 12:04:17 12:04:17 try: 12:04:17 host.encode("idna") 12:04:17 except UnicodeError: 12:04:17 raise LocationParseError(f"'{host}', label empty or too long") from None 12:04:17 12:04:17 for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM): 12:04:17 af, socktype, proto, canonname, sa = res 12:04:17 sock = None 12:04:17 try: 12:04:17 sock = socket.socket(af, socktype, proto) 12:04:17 12:04:17 # If provided, set socket level options before connecting. 12:04:17 _set_socket_options(sock, socket_options) 12:04:17 12:04:17 if timeout is not _DEFAULT_TIMEOUT: 12:04:17 sock.settimeout(timeout) 12:04:17 if source_address: 12:04:17 sock.bind(source_address) 12:04:17 > sock.connect(sa) 12:04:17 E ConnectionRefusedError: [Errno 111] Connection refused 12:04:17 12:04:17 ../.tox/tests221/lib/python3.11/site-packages/urllib3/util/connection.py:73: ConnectionRefusedError 12:04:17 12:04:17 The above exception was the direct cause of the following exception: 12:04:17 12:04:17 self = 12:04:17 method = 'GET' 12:04:17 url = '/rests/data/transportpce-portmapping:network/nodes=SPDR-SA1/node-info' 12:04:17 body = None 12:04:17 headers = {'User-Agent': 'python-requests/2.32.4', 'Accept-Encoding': 'gzip, deflate', 'Accept': 'application/json', 'Connection': 'keep-alive', 'Content-Type': 'application/json', 'Authorization': 'Basic YWRtaW46YWRtaW4='} 12:04:17 retries = Retry(total=0, connect=None, read=False, redirect=None, status=None) 12:04:17 redirect = False, assert_same_host = False 12:04:17 timeout = Timeout(connect=30, read=30, total=None), pool_timeout = None 12:04:17 release_conn = False, chunked = False, body_pos = None, preload_content = False 12:04:17 decode_content = False, response_kw = {} 12:04:17 parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/rests/data/transportpce-portmapping:network/nodes=SPDR-SA1/node-info', query=None, fragment=None) 12:04:17 destination_scheme = None, conn = None, release_this_conn = True 12:04:17 http_tunnel_required = False, err = None, clean_exit = False 12:04:17 12:04:17 def urlopen( # type: ignore[override] 12:04:17 self, 12:04:17 method: str, 12:04:17 url: str, 12:04:17 body: _TYPE_BODY | None = None, 12:04:17 headers: typing.Mapping[str, str] | None = None, 12:04:17 retries: Retry | bool | int | None = None, 12:04:17 redirect: bool = True, 12:04:17 assert_same_host: bool = True, 12:04:17 timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 12:04:17 pool_timeout: int | None = None, 12:04:17 release_conn: bool | None = None, 12:04:17 chunked: bool = False, 12:04:17 body_pos: _TYPE_BODY_POSITION | None = None, 12:04:17 preload_content: bool = True, 
12:04:17 decode_content: bool = True, 12:04:17 **response_kw: typing.Any, 12:04:17 ) -> BaseHTTPResponse: 12:04:17 """ 12:04:17 Get a connection from the pool and perform an HTTP request. This is the 12:04:17 lowest level call for making a request, so you'll need to specify all 12:04:17 the raw details. 12:04:17 12:04:17 .. note:: 12:04:17 12:04:17 More commonly, it's appropriate to use a convenience method 12:04:17 such as :meth:`request`. 12:04:17 12:04:17 .. note:: 12:04:17 12:04:17 `release_conn` will only behave as expected if 12:04:17 `preload_content=False` because we want to make 12:04:17 `preload_content=False` the default behaviour someday soon without 12:04:17 breaking backwards compatibility. 12:04:17 12:04:17 :param method: 12:04:17 HTTP request method (such as GET, POST, PUT, etc.) 12:04:17 12:04:17 :param url: 12:04:17 The URL to perform the request on. 12:04:17 12:04:17 :param body: 12:04:17 Data to send in the request body, either :class:`str`, :class:`bytes`, 12:04:17 an iterable of :class:`str`/:class:`bytes`, or a file-like object. 12:04:17 12:04:17 :param headers: 12:04:17 Dictionary of custom headers to send, such as User-Agent, 12:04:17 If-None-Match, etc. If None, pool headers are used. If provided, 12:04:17 these headers completely replace any pool-specific headers. 12:04:17 12:04:17 :param retries: 12:04:17 Configure the number of retries to allow before raising a 12:04:17 :class:`~urllib3.exceptions.MaxRetryError` exception. 12:04:17 12:04:17 If ``None`` (default) will retry 3 times, see ``Retry.DEFAULT``. Pass a 12:04:17 :class:`~urllib3.util.retry.Retry` object for fine-grained control 12:04:17 over different types of retries. 12:04:17 Pass an integer number to retry connection errors that many times, 12:04:17 but no other types of errors. Pass zero to never retry. 12:04:17 12:04:17 If ``False``, then retries are disabled and any exception is raised 12:04:17 immediately. Also, instead of raising a MaxRetryError on redirects, 12:04:17 the redirect response will be returned. 12:04:17 12:04:17 :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. 12:04:17 12:04:17 :param redirect: 12:04:17 If True, automatically handle redirects (status codes 301, 302, 12:04:17 303, 307, 308). Each redirect counts as a retry. Disabling retries 12:04:17 will disable redirect, too. 12:04:17 12:04:17 :param assert_same_host: 12:04:17 If ``True``, will make sure that the host of the pool requests is 12:04:17 consistent else will raise HostChangedError. When ``False``, you can 12:04:17 use the pool on an HTTP proxy and request foreign hosts. 12:04:17 12:04:17 :param timeout: 12:04:17 If specified, overrides the default timeout for this one 12:04:17 request. It may be a float (in seconds) or an instance of 12:04:17 :class:`urllib3.util.Timeout`. 12:04:17 12:04:17 :param pool_timeout: 12:04:17 If set and the pool is set to block=True, then this method will 12:04:17 block for ``pool_timeout`` seconds and raise EmptyPoolError if no 12:04:17 connection is available within the time period. 12:04:17 12:04:17 :param bool preload_content: 12:04:17 If True, the response's body will be preloaded into memory. 12:04:17 12:04:17 :param bool decode_content: 12:04:17 If True, will attempt to decode the body based on the 12:04:17 'content-encoding' header. 
12:04:17 12:04:17 :param release_conn: 12:04:17 If False, then the urlopen call will not release the connection 12:04:17 back into the pool once a response is received (but will release if 12:04:17 you read the entire contents of the response such as when 12:04:17 `preload_content=True`). This is useful if you're not preloading 12:04:17 the response's content immediately. You will need to call 12:04:17 ``r.release_conn()`` on the response ``r`` to return the connection 12:04:17 back into the pool. If None, it takes the value of ``preload_content`` 12:04:17 which defaults to ``True``. 12:04:17 12:04:17 :param bool chunked: 12:04:17 If True, urllib3 will send the body using chunked transfer 12:04:17 encoding. Otherwise, urllib3 will send the body using the standard 12:04:17 content-length form. Defaults to False. 12:04:17 12:04:17 :param int body_pos: 12:04:17 Position to seek to in file-like body in the event of a retry or 12:04:17 redirect. Typically this won't need to be set because urllib3 will 12:04:17 auto-populate the value when needed. 12:04:17 """ 12:04:17 parsed_url = parse_url(url) 12:04:17 destination_scheme = parsed_url.scheme 12:04:17 12:04:17 if headers is None: 12:04:17 headers = self.headers 12:04:17 12:04:17 if not isinstance(retries, Retry): 12:04:17 retries = Retry.from_int(retries, redirect=redirect, default=self.retries) 12:04:17 12:04:17 if release_conn is None: 12:04:17 release_conn = preload_content 12:04:17 12:04:17 # Check host 12:04:17 if assert_same_host and not self.is_same_host(url): 12:04:17 raise HostChangedError(self, url, retries) 12:04:17 12:04:17 # Ensure that the URL we're connecting to is properly encoded 12:04:17 if url.startswith("/"): 12:04:17 url = to_str(_encode_target(url)) 12:04:17 else: 12:04:17 url = to_str(parsed_url.url) 12:04:17 12:04:17 conn = None 12:04:17 12:04:17 # Track whether `conn` needs to be released before 12:04:17 # returning/raising/recursing. Update this variable if necessary, and 12:04:17 # leave `release_conn` constant throughout the function. That way, if 12:04:17 # the function recurses, the original value of `release_conn` will be 12:04:17 # passed down into the recursive call, and its value will be respected. 12:04:17 # 12:04:17 # See issue #651 [1] for details. 12:04:17 # 12:04:17 # [1] 12:04:17 release_this_conn = release_conn 12:04:17 12:04:17 http_tunnel_required = connection_requires_http_tunnel( 12:04:17 self.proxy, self.proxy_config, destination_scheme 12:04:17 ) 12:04:17 12:04:17 # Merge the proxy headers. Only done when not using HTTP CONNECT. We 12:04:17 # have to copy the headers dict so we can safely change it without those 12:04:17 # changes being reflected in anyone else's copy. 12:04:17 if not http_tunnel_required: 12:04:17 headers = headers.copy() # type: ignore[attr-defined] 12:04:17 headers.update(self.proxy_headers) # type: ignore[union-attr] 12:04:17 12:04:17 # Must keep the exception bound to a separate variable or else Python 3 12:04:17 # complains about UnboundLocalError. 12:04:17 err = None 12:04:17 12:04:17 # Keep track of whether we cleanly exited the except block. This 12:04:17 # ensures we do proper cleanup in finally. 12:04:17 clean_exit = False 12:04:17 12:04:17 # Rewind body position, if needed. Record current position 12:04:17 # for future rewinds in the event of a redirect/retry. 12:04:17 body_pos = set_file_position(body, body_pos) 12:04:17 12:04:17 try: 12:04:17 # Request a connection from the queue. 
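# [Reviewer note - illustration, not part of the urllib3 source quoted here]
# _get_timeout() on the next line resolves the timeout handed over by
# requests (requests' TimeoutSauce is urllib3's Timeout under an alias) into
# the Timeout(connect=30, read=30, total=None) object shown in the locals of
# these frames; its connect part is then copied onto the connection.
# Equivalent object, assuming urllib3 2.x:
from urllib3.util.timeout import Timeout

timeout_obj = Timeout(connect=30, read=30)
print(timeout_obj)                  # Timeout(connect=30, read=30, total=None)
print(timeout_obj.connect_timeout)  # 30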
12:04:17 timeout_obj = self._get_timeout(timeout) 12:04:17 conn = self._get_conn(timeout=pool_timeout) 12:04:17 12:04:17 conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] 12:04:17 12:04:17 # Is this a closed/new connection that requires CONNECT tunnelling? 12:04:17 if self.proxy is not None and http_tunnel_required and conn.is_closed: 12:04:17 try: 12:04:17 self._prepare_proxy(conn) 12:04:17 except (BaseSSLError, OSError, SocketTimeout) as e: 12:04:17 self._raise_timeout( 12:04:17 err=e, url=self.proxy.url, timeout_value=conn.timeout 12:04:17 ) 12:04:17 raise 12:04:17 12:04:17 # If we're going to release the connection in ``finally:``, then 12:04:17 # the response doesn't need to know about the connection. Otherwise 12:04:17 # it will also try to release it and we'll have a double-release 12:04:17 # mess. 12:04:17 response_conn = conn if not release_conn else None 12:04:17 12:04:17 # Make the request on the HTTPConnection object 12:04:17 > response = self._make_request( 12:04:17 conn, 12:04:17 method, 12:04:17 url, 12:04:17 timeout=timeout_obj, 12:04:17 body=body, 12:04:17 headers=headers, 12:04:17 chunked=chunked, 12:04:17 retries=retries, 12:04:17 response_conn=response_conn, 12:04:17 preload_content=preload_content, 12:04:17 decode_content=decode_content, 12:04:17 **response_kw, 12:04:17 ) 12:04:17 12:04:17 ../.tox/tests221/lib/python3.11/site-packages/urllib3/connectionpool.py:787: 12:04:17 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 12:04:17 ../.tox/tests221/lib/python3.11/site-packages/urllib3/connectionpool.py:493: in _make_request 12:04:17 conn.request( 12:04:17 ../.tox/tests221/lib/python3.11/site-packages/urllib3/connection.py:494: in request 12:04:17 self.endheaders() 12:04:17 /opt/pyenv/versions/3.11.10/lib/python3.11/http/client.py:1298: in endheaders 12:04:17 self._send_output(message_body, encode_chunked=encode_chunked) 12:04:17 /opt/pyenv/versions/3.11.10/lib/python3.11/http/client.py:1058: in _send_output 12:04:17 self.send(msg) 12:04:17 /opt/pyenv/versions/3.11.10/lib/python3.11/http/client.py:996: in send 12:04:17 self.connect() 12:04:17 ../.tox/tests221/lib/python3.11/site-packages/urllib3/connection.py:325: in connect 12:04:17 self.sock = self._new_conn() 12:04:17 ^^^^^^^^^^^^^^^^ 12:04:17 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 12:04:17 12:04:17 self = 12:04:17 12:04:17 def _new_conn(self) -> socket.socket: 12:04:17 """Establish a socket connection and set nodelay settings on it. 12:04:17 12:04:17 :return: New socket connection. 12:04:17 """ 12:04:17 try: 12:04:17 sock = connection.create_connection( 12:04:17 (self._dns_host, self.port), 12:04:17 self.timeout, 12:04:17 source_address=self.source_address, 12:04:17 socket_options=self.socket_options, 12:04:17 ) 12:04:17 except socket.gaierror as e: 12:04:17 raise NameResolutionError(self.host, self, e) from e 12:04:17 except SocketTimeout as e: 12:04:17 raise ConnectTimeoutError( 12:04:17 self, 12:04:17 f"Connection to {self.host} timed out. 
(connect timeout={self.timeout})", 12:04:17 ) from e 12:04:17 12:04:17 except OSError as e: 12:04:17 > raise NewConnectionError( 12:04:17 self, f"Failed to establish a new connection: {e}" 12:04:17 ) from e 12:04:17 E urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused 12:04:17 12:04:17 ../.tox/tests221/lib/python3.11/site-packages/urllib3/connection.py:213: NewConnectionError 12:04:17 12:04:17 The above exception was the direct cause of the following exception: 12:04:17 12:04:17 self = 12:04:17 request = , stream = False 12:04:17 timeout = Timeout(connect=30, read=30, total=None), verify = True, cert = None 12:04:17 proxies = OrderedDict() 12:04:17 12:04:17 def send( 12:04:17 self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 12:04:17 ): 12:04:17 """Sends PreparedRequest object. Returns Response object. 12:04:17 12:04:17 :param request: The :class:`PreparedRequest ` being sent. 12:04:17 :param stream: (optional) Whether to stream the request content. 12:04:17 :param timeout: (optional) How long to wait for the server to send 12:04:17 data before giving up, as a float, or a :ref:`(connect timeout, 12:04:17 read timeout) ` tuple. 12:04:17 :type timeout: float or tuple or urllib3 Timeout object 12:04:17 :param verify: (optional) Either a boolean, in which case it controls whether 12:04:17 we verify the server's TLS certificate, or a string, in which case it 12:04:17 must be a path to a CA bundle to use 12:04:17 :param cert: (optional) Any user-provided SSL certificate to be trusted. 12:04:17 :param proxies: (optional) The proxies dictionary to apply to the request. 12:04:17 :rtype: requests.Response 12:04:17 """ 12:04:17 12:04:17 try: 12:04:17 conn = self.get_connection_with_tls_context( 12:04:17 request, verify, proxies=proxies, cert=cert 12:04:17 ) 12:04:17 except LocationValueError as e: 12:04:17 raise InvalidURL(e, request=request) 12:04:17 12:04:17 self.cert_verify(conn, request.url, verify, cert) 12:04:17 url = self.request_url(request, proxies) 12:04:17 self.add_headers( 12:04:17 request, 12:04:17 stream=stream, 12:04:17 timeout=timeout, 12:04:17 verify=verify, 12:04:17 cert=cert, 12:04:17 proxies=proxies, 12:04:17 ) 12:04:17 12:04:17 chunked = not (request.body is None or "Content-Length" in request.headers) 12:04:17 12:04:17 if isinstance(timeout, tuple): 12:04:17 try: 12:04:17 connect, read = timeout 12:04:17 timeout = TimeoutSauce(connect=connect, read=read) 12:04:17 except ValueError: 12:04:17 raise ValueError( 12:04:17 f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 12:04:17 f"or a single float to set both timeouts to the same value." 
12:04:17 ) 12:04:17 elif isinstance(timeout, TimeoutSauce): 12:04:17 pass 12:04:17 else: 12:04:17 timeout = TimeoutSauce(connect=timeout, read=timeout) 12:04:17 12:04:17 try: 12:04:17 > resp = conn.urlopen( 12:04:17 method=request.method, 12:04:17 url=url, 12:04:17 body=request.body, 12:04:17 headers=request.headers, 12:04:17 redirect=False, 12:04:17 assert_same_host=False, 12:04:17 preload_content=False, 12:04:17 decode_content=False, 12:04:17 retries=self.max_retries, 12:04:17 timeout=timeout, 12:04:17 chunked=chunked, 12:04:17 ) 12:04:17 12:04:17 ../.tox/tests221/lib/python3.11/site-packages/requests/adapters.py:667: 12:04:17 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 12:04:17 ../.tox/tests221/lib/python3.11/site-packages/urllib3/connectionpool.py:841: in urlopen 12:04:17 retries = retries.increment( 12:04:17 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 12:04:17 12:04:17 self = Retry(total=0, connect=None, read=False, redirect=None, status=None) 12:04:17 method = 'GET' 12:04:17 url = '/rests/data/transportpce-portmapping:network/nodes=SPDR-SA1/node-info' 12:04:17 response = None 12:04:17 error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused') 12:04:17 _pool = 12:04:17 _stacktrace = 12:04:17 12:04:17 def increment( 12:04:17 self, 12:04:17 method: str | None = None, 12:04:17 url: str | None = None, 12:04:17 response: BaseHTTPResponse | None = None, 12:04:17 error: Exception | None = None, 12:04:17 _pool: ConnectionPool | None = None, 12:04:17 _stacktrace: TracebackType | None = None, 12:04:17 ) -> Self: 12:04:17 """Return a new Retry object with incremented retry counters. 12:04:17 12:04:17 :param response: A response object, or None, if the server did not 12:04:17 return a response. 12:04:17 :type response: :class:`~urllib3.response.BaseHTTPResponse` 12:04:17 :param Exception error: An error encountered during the request, or 12:04:17 None if the response was received successfully. 12:04:17 12:04:17 :return: A new ``Retry`` object. 12:04:17 """ 12:04:17 if self.total is False and error: 12:04:17 # Disabled, indicate to re-raise the error. 12:04:17 raise reraise(type(error), error, _stacktrace) 12:04:17 12:04:17 total = self.total 12:04:17 if total is not None: 12:04:17 total -= 1 12:04:17 12:04:17 connect = self.connect 12:04:17 read = self.read 12:04:17 redirect = self.redirect 12:04:17 status_count = self.status 12:04:17 other = self.other 12:04:17 cause = "unknown" 12:04:17 status = None 12:04:17 redirect_location = None 12:04:17 12:04:17 if error and self._is_connection_error(error): 12:04:17 # Connect retry? 12:04:17 if connect is False: 12:04:17 raise reraise(type(error), error, _stacktrace) 12:04:17 elif connect is not None: 12:04:17 connect -= 1 12:04:17 12:04:17 elif error and self._is_read_error(error): 12:04:17 # Read retry? 12:04:17 if read is False or method is None or not self._is_method_retryable(method): 12:04:17 raise reraise(type(error), error, _stacktrace) 12:04:17 elif read is not None: 12:04:17 read -= 1 12:04:17 12:04:17 elif error: 12:04:17 # Other retry? 12:04:17 if other is not None: 12:04:17 other -= 1 12:04:17 12:04:17 elif response and response.get_redirect_location(): 12:04:17 # Redirect retry? 
12:04:17 if redirect is not None: 12:04:17 redirect -= 1 12:04:17 cause = "too many redirects" 12:04:17 response_redirect_location = response.get_redirect_location() 12:04:17 if response_redirect_location: 12:04:17 redirect_location = response_redirect_location 12:04:17 status = response.status 12:04:17 12:04:17 else: 12:04:17 # Incrementing because of a server error like a 500 in 12:04:17 # status_forcelist and the given method is in the allowed_methods 12:04:17 cause = ResponseError.GENERIC_ERROR 12:04:17 if response and response.status: 12:04:17 if status_count is not None: 12:04:17 status_count -= 1 12:04:17 cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status) 12:04:17 status = response.status 12:04:17 12:04:17 history = self.history + ( 12:04:17 RequestHistory(method, url, error, status, redirect_location), 12:04:17 ) 12:04:17 12:04:17 new_retry = self.new( 12:04:17 total=total, 12:04:17 connect=connect, 12:04:17 read=read, 12:04:17 redirect=redirect, 12:04:17 status=status_count, 12:04:17 other=other, 12:04:17 history=history, 12:04:17 ) 12:04:17 12:04:17 if new_retry.is_exhausted(): 12:04:17 reason = error or ResponseError(cause) 12:04:17 > raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type] 12:04:17 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 12:04:17 E urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=8183): Max retries exceeded with url: /rests/data/transportpce-portmapping:network/nodes=SPDR-SA1/node-info (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 12:04:17 12:04:17 ../.tox/tests221/lib/python3.11/site-packages/urllib3/util/retry.py:519: MaxRetryError 12:04:17 12:04:17 During handling of the above exception, another exception occurred: 12:04:17 12:04:17 self = 12:04:17 12:04:17 def test_17_spdr_portmapping_info(self): 12:04:17 > response = test_utils.get_portmapping_node_attr("SPDR-SA1", "node-info", None) 12:04:17 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 12:04:17 12:04:17 transportpce_tests/2.2.1/test01_portmapping.py:191: 12:04:17 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 12:04:17 transportpce_tests/common/test_utils.py:492: in get_portmapping_node_attr 12:04:17 response = get_request(target_url) 12:04:17 ^^^^^^^^^^^^^^^^^^^^^^^ 12:04:17 transportpce_tests/common/test_utils.py:116: in get_request 12:04:17 return requests.request( 12:04:17 ../.tox/tests221/lib/python3.11/site-packages/requests/api.py:59: in request 12:04:17 return session.request(method=method, url=url, **kwargs) 12:04:17 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 12:04:17 ../.tox/tests221/lib/python3.11/site-packages/requests/sessions.py:589: in request 12:04:17 resp = self.send(prep, **send_kwargs) 12:04:17 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 12:04:17 ../.tox/tests221/lib/python3.11/site-packages/requests/sessions.py:703: in send 12:04:17 r = adapter.send(request, **kwargs) 12:04:17 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 12:04:17 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 12:04:17 12:04:17 self = 12:04:17 request = , stream = False 12:04:17 timeout = Timeout(connect=30, read=30, total=None), verify = True, cert = None 12:04:17 proxies = OrderedDict() 12:04:17 12:04:17 def send( 12:04:17 self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 12:04:17 ): 12:04:17 """Sends PreparedRequest object. Returns Response object. 
12:04:17 12:04:17 :param request: The :class:`PreparedRequest ` being sent. 12:04:17 :param stream: (optional) Whether to stream the request content. 12:04:17 :param timeout: (optional) How long to wait for the server to send 12:04:17 data before giving up, as a float, or a :ref:`(connect timeout, 12:04:17 read timeout) ` tuple. 12:04:17 :type timeout: float or tuple or urllib3 Timeout object 12:04:17 :param verify: (optional) Either a boolean, in which case it controls whether 12:04:17 we verify the server's TLS certificate, or a string, in which case it 12:04:17 must be a path to a CA bundle to use 12:04:17 :param cert: (optional) Any user-provided SSL certificate to be trusted. 12:04:17 :param proxies: (optional) The proxies dictionary to apply to the request. 12:04:17 :rtype: requests.Response 12:04:17 """ 12:04:17 12:04:17 try: 12:04:17 conn = self.get_connection_with_tls_context( 12:04:17 request, verify, proxies=proxies, cert=cert 12:04:17 ) 12:04:17 except LocationValueError as e: 12:04:17 raise InvalidURL(e, request=request) 12:04:17 12:04:17 self.cert_verify(conn, request.url, verify, cert) 12:04:17 url = self.request_url(request, proxies) 12:04:17 self.add_headers( 12:04:17 request, 12:04:17 stream=stream, 12:04:17 timeout=timeout, 12:04:17 verify=verify, 12:04:17 cert=cert, 12:04:17 proxies=proxies, 12:04:17 ) 12:04:17 12:04:17 chunked = not (request.body is None or "Content-Length" in request.headers) 12:04:17 12:04:17 if isinstance(timeout, tuple): 12:04:17 try: 12:04:17 connect, read = timeout 12:04:17 timeout = TimeoutSauce(connect=connect, read=read) 12:04:17 except ValueError: 12:04:17 raise ValueError( 12:04:17 f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 12:04:17 f"or a single float to set both timeouts to the same value." 12:04:17 ) 12:04:17 elif isinstance(timeout, TimeoutSauce): 12:04:17 pass 12:04:17 else: 12:04:17 timeout = TimeoutSauce(connect=timeout, read=timeout) 12:04:17 12:04:17 try: 12:04:17 resp = conn.urlopen( 12:04:17 method=request.method, 12:04:17 url=url, 12:04:17 body=request.body, 12:04:17 headers=request.headers, 12:04:17 redirect=False, 12:04:17 assert_same_host=False, 12:04:17 preload_content=False, 12:04:17 decode_content=False, 12:04:17 retries=self.max_retries, 12:04:17 timeout=timeout, 12:04:17 chunked=chunked, 12:04:17 ) 12:04:17 12:04:17 except (ProtocolError, OSError) as err: 12:04:17 raise ConnectionError(err, request=request) 12:04:17 12:04:17 except MaxRetryError as e: 12:04:17 if isinstance(e.reason, ConnectTimeoutError): 12:04:17 # TODO: Remove this in 3.0.0: see #2811 12:04:17 if not isinstance(e.reason, NewConnectionError): 12:04:17 raise ConnectTimeout(e, request=request) 12:04:17 12:04:17 if isinstance(e.reason, ResponseError): 12:04:17 raise RetryError(e, request=request) 12:04:17 12:04:17 if isinstance(e.reason, _ProxyError): 12:04:17 raise ProxyError(e, request=request) 12:04:17 12:04:17 if isinstance(e.reason, _SSLError): 12:04:17 # This branch is for urllib3 v1.22 and later. 
12:04:17 raise SSLError(e, request=request) 12:04:17 12:04:17 > raise ConnectionError(e, request=request) 12:04:17 E requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=8183): Max retries exceeded with url: /rests/data/transportpce-portmapping:network/nodes=SPDR-SA1/node-info (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 12:04:17 12:04:17 ../.tox/tests221/lib/python3.11/site-packages/requests/adapters.py:700: ConnectionError 12:04:17 ----------------------------- Captured stdout call ----------------------------- 12:04:17 execution of test_17_spdr_portmapping_info 12:04:17 _________ TransportPCEPortMappingTesting.test_18_spdr_switching_pool_1 _________ 12:04:17 12:04:17 self = 12:04:17 12:04:17 def _new_conn(self) -> socket.socket: 12:04:17 """Establish a socket connection and set nodelay settings on it. 12:04:17 12:04:17 :return: New socket connection. 12:04:17 """ 12:04:17 try: 12:04:17 > sock = connection.create_connection( 12:04:17 (self._dns_host, self.port), 12:04:17 self.timeout, 12:04:17 source_address=self.source_address, 12:04:17 socket_options=self.socket_options, 12:04:17 ) 12:04:17 12:04:17 ../.tox/tests221/lib/python3.11/site-packages/urllib3/connection.py:198: 12:04:17 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 12:04:17 ../.tox/tests221/lib/python3.11/site-packages/urllib3/util/connection.py:85: in create_connection 12:04:17 raise err 12:04:17 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 12:04:17 12:04:17 address = ('localhost', 8183), timeout = 30, source_address = None 12:04:17 socket_options = [(6, 1, 1)] 12:04:17 12:04:17 def create_connection( 12:04:17 address: tuple[str, int], 12:04:17 timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 12:04:17 source_address: tuple[str, int] | None = None, 12:04:17 socket_options: _TYPE_SOCKET_OPTIONS | None = None, 12:04:17 ) -> socket.socket: 12:04:17 """Connect to *address* and return the socket object. 12:04:17 12:04:17 Convenience function. Connect to *address* (a 2-tuple ``(host, 12:04:17 port)``) and return the socket object. Passing the optional 12:04:17 *timeout* parameter will set the timeout on the socket instance 12:04:17 before attempting to connect. If no *timeout* is supplied, the 12:04:17 global default timeout setting returned by :func:`socket.getdefaulttimeout` 12:04:17 is used. If *source_address* is set it must be a tuple of (host, port) 12:04:17 for the socket to bind as a source address before making the connection. 12:04:17 An host of '' or port 0 tells the OS to use the default. 12:04:17 """ 12:04:17 12:04:17 host, port = address 12:04:17 if host.startswith("["): 12:04:17 host = host.strip("[]") 12:04:17 err = None 12:04:17 12:04:17 # Using the value from allowed_gai_family() in the context of getaddrinfo lets 12:04:17 # us select whether to work with IPv4 DNS records, IPv6 records, or both. 12:04:17 # The original create_connection function always returns all records. 12:04:17 family = allowed_gai_family() 12:04:17 12:04:17 try: 12:04:17 host.encode("idna") 12:04:17 except UnicodeError: 12:04:17 raise LocationParseError(f"'{host}', label empty or too long") from None 12:04:17 12:04:17 for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM): 12:04:17 af, socktype, proto, canonname, sa = res 12:04:17 sock = None 12:04:17 try: 12:04:17 sock = socket.socket(af, socktype, proto) 12:04:17 12:04:17 # If provided, set socket level options before connecting. 
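Note: test_17_spdr_portmapping_info and the following port-mapping tests all fail for the same reason: nothing is listening on localhost:8183, so the TCP connect is refused, urllib3 wraps it in NewConnectionError, the Retry(total=0) policy is exhausted immediately into MaxRetryError, and requests finally surfaces it as ConnectionError. A minimal sketch of the same GET outside the test harness, assuming the endpoint URL and the admin/admin basic-auth credentials visible in the dumped request headers (not taken from the test sources):

import requests

# Hypothetical reproduction of the failing request; URL and credentials are
# inferred from the headers logged above, not from transportpce test code.
url = ("http://localhost:8183/rests/data/"
       "transportpce-portmapping:network/nodes=SPDR-SA1/node-info")
try:
    resp = requests.get(url, auth=("admin", "admin"),
                        headers={"Accept": "application/json"},
                        timeout=(30, 30))
    print(resp.status_code)
except requests.exceptions.ConnectionError as exc:
    # With no controller bound to port 8183 this branch is taken,
    # mirroring the ConnectionError raised in adapters.py above.
    print("connection refused:", exc)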
12:04:17 _set_socket_options(sock, socket_options) 12:04:17 12:04:17 if timeout is not _DEFAULT_TIMEOUT: 12:04:17 sock.settimeout(timeout) 12:04:17 if source_address: 12:04:17 sock.bind(source_address) 12:04:17 > sock.connect(sa) 12:04:17 E ConnectionRefusedError: [Errno 111] Connection refused 12:04:17 12:04:17 ../.tox/tests221/lib/python3.11/site-packages/urllib3/util/connection.py:73: ConnectionRefusedError 12:04:17 12:04:17 The above exception was the direct cause of the following exception: 12:04:17 12:04:17 self = 12:04:17 method = 'GET' 12:04:17 url = '/rests/data/transportpce-portmapping:network/nodes=SPDR-SA1/switching-pool-lcp=1' 12:04:17 body = None 12:04:17 headers = {'User-Agent': 'python-requests/2.32.4', 'Accept-Encoding': 'gzip, deflate', 'Accept': 'application/json', 'Connection': 'keep-alive', 'Content-Type': 'application/json', 'Authorization': 'Basic YWRtaW46YWRtaW4='} 12:04:17 retries = Retry(total=0, connect=None, read=False, redirect=None, status=None) 12:04:17 redirect = False, assert_same_host = False 12:04:17 timeout = Timeout(connect=30, read=30, total=None), pool_timeout = None 12:04:17 release_conn = False, chunked = False, body_pos = None, preload_content = False 12:04:17 decode_content = False, response_kw = {} 12:04:17 parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/rests/data/transportpce-portmapping:network/nodes=SPDR-SA1/switching-pool-lcp=1', query=None, fragment=None) 12:04:17 destination_scheme = None, conn = None, release_this_conn = True 12:04:17 http_tunnel_required = False, err = None, clean_exit = False 12:04:17 12:04:17 def urlopen( # type: ignore[override] 12:04:17 self, 12:04:17 method: str, 12:04:17 url: str, 12:04:17 body: _TYPE_BODY | None = None, 12:04:17 headers: typing.Mapping[str, str] | None = None, 12:04:17 retries: Retry | bool | int | None = None, 12:04:17 redirect: bool = True, 12:04:17 assert_same_host: bool = True, 12:04:17 timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 12:04:17 pool_timeout: int | None = None, 12:04:17 release_conn: bool | None = None, 12:04:17 chunked: bool = False, 12:04:17 body_pos: _TYPE_BODY_POSITION | None = None, 12:04:17 preload_content: bool = True, 12:04:17 decode_content: bool = True, 12:04:17 **response_kw: typing.Any, 12:04:17 ) -> BaseHTTPResponse: 12:04:17 """ 12:04:17 Get a connection from the pool and perform an HTTP request. This is the 12:04:17 lowest level call for making a request, so you'll need to specify all 12:04:17 the raw details. 12:04:17 12:04:17 .. note:: 12:04:17 12:04:17 More commonly, it's appropriate to use a convenience method 12:04:17 such as :meth:`request`. 12:04:17 12:04:17 .. note:: 12:04:17 12:04:17 `release_conn` will only behave as expected if 12:04:17 `preload_content=False` because we want to make 12:04:17 `preload_content=False` the default behaviour someday soon without 12:04:17 breaking backwards compatibility. 12:04:17 12:04:17 :param method: 12:04:17 HTTP request method (such as GET, POST, PUT, etc.) 12:04:17 12:04:17 :param url: 12:04:17 The URL to perform the request on. 12:04:17 12:04:17 :param body: 12:04:17 Data to send in the request body, either :class:`str`, :class:`bytes`, 12:04:17 an iterable of :class:`str`/:class:`bytes`, or a file-like object. 12:04:17 12:04:17 :param headers: 12:04:17 Dictionary of custom headers to send, such as User-Agent, 12:04:17 If-None-Match, etc. If None, pool headers are used. If provided, 12:04:17 these headers completely replace any pool-specific headers. 
12:04:17 12:04:17 :param retries: 12:04:17 Configure the number of retries to allow before raising a 12:04:17 :class:`~urllib3.exceptions.MaxRetryError` exception. 12:04:17 12:04:17 If ``None`` (default) will retry 3 times, see ``Retry.DEFAULT``. Pass a 12:04:17 :class:`~urllib3.util.retry.Retry` object for fine-grained control 12:04:17 over different types of retries. 12:04:17 Pass an integer number to retry connection errors that many times, 12:04:17 but no other types of errors. Pass zero to never retry. 12:04:17 12:04:17 If ``False``, then retries are disabled and any exception is raised 12:04:17 immediately. Also, instead of raising a MaxRetryError on redirects, 12:04:17 the redirect response will be returned. 12:04:17 12:04:17 :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. 12:04:17 12:04:17 :param redirect: 12:04:17 If True, automatically handle redirects (status codes 301, 302, 12:04:17 303, 307, 308). Each redirect counts as a retry. Disabling retries 12:04:17 will disable redirect, too. 12:04:17 12:04:17 :param assert_same_host: 12:04:17 If ``True``, will make sure that the host of the pool requests is 12:04:17 consistent else will raise HostChangedError. When ``False``, you can 12:04:17 use the pool on an HTTP proxy and request foreign hosts. 12:04:17 12:04:17 :param timeout: 12:04:17 If specified, overrides the default timeout for this one 12:04:17 request. It may be a float (in seconds) or an instance of 12:04:17 :class:`urllib3.util.Timeout`. 12:04:17 12:04:17 :param pool_timeout: 12:04:17 If set and the pool is set to block=True, then this method will 12:04:17 block for ``pool_timeout`` seconds and raise EmptyPoolError if no 12:04:17 connection is available within the time period. 12:04:17 12:04:17 :param bool preload_content: 12:04:17 If True, the response's body will be preloaded into memory. 12:04:17 12:04:17 :param bool decode_content: 12:04:17 If True, will attempt to decode the body based on the 12:04:17 'content-encoding' header. 12:04:17 12:04:17 :param release_conn: 12:04:17 If False, then the urlopen call will not release the connection 12:04:17 back into the pool once a response is received (but will release if 12:04:17 you read the entire contents of the response such as when 12:04:17 `preload_content=True`). This is useful if you're not preloading 12:04:17 the response's content immediately. You will need to call 12:04:17 ``r.release_conn()`` on the response ``r`` to return the connection 12:04:17 back into the pool. If None, it takes the value of ``preload_content`` 12:04:17 which defaults to ``True``. 12:04:17 12:04:17 :param bool chunked: 12:04:17 If True, urllib3 will send the body using chunked transfer 12:04:17 encoding. Otherwise, urllib3 will send the body using the standard 12:04:17 content-length form. Defaults to False. 12:04:17 12:04:17 :param int body_pos: 12:04:17 Position to seek to in file-like body in the event of a retry or 12:04:17 redirect. Typically this won't need to be set because urllib3 will 12:04:17 auto-populate the value when needed. 
12:04:17 """ 12:04:17 parsed_url = parse_url(url) 12:04:17 destination_scheme = parsed_url.scheme 12:04:17 12:04:17 if headers is None: 12:04:17 headers = self.headers 12:04:17 12:04:17 if not isinstance(retries, Retry): 12:04:17 retries = Retry.from_int(retries, redirect=redirect, default=self.retries) 12:04:17 12:04:17 if release_conn is None: 12:04:17 release_conn = preload_content 12:04:17 12:04:17 # Check host 12:04:17 if assert_same_host and not self.is_same_host(url): 12:04:17 raise HostChangedError(self, url, retries) 12:04:17 12:04:17 # Ensure that the URL we're connecting to is properly encoded 12:04:17 if url.startswith("/"): 12:04:17 url = to_str(_encode_target(url)) 12:04:17 else: 12:04:17 url = to_str(parsed_url.url) 12:04:17 12:04:17 conn = None 12:04:17 12:04:17 # Track whether `conn` needs to be released before 12:04:17 # returning/raising/recursing. Update this variable if necessary, and 12:04:17 # leave `release_conn` constant throughout the function. That way, if 12:04:17 # the function recurses, the original value of `release_conn` will be 12:04:17 # passed down into the recursive call, and its value will be respected. 12:04:17 # 12:04:17 # See issue #651 [1] for details. 12:04:17 # 12:04:17 # [1] 12:04:17 release_this_conn = release_conn 12:04:17 12:04:17 http_tunnel_required = connection_requires_http_tunnel( 12:04:17 self.proxy, self.proxy_config, destination_scheme 12:04:17 ) 12:04:17 12:04:17 # Merge the proxy headers. Only done when not using HTTP CONNECT. We 12:04:17 # have to copy the headers dict so we can safely change it without those 12:04:17 # changes being reflected in anyone else's copy. 12:04:17 if not http_tunnel_required: 12:04:17 headers = headers.copy() # type: ignore[attr-defined] 12:04:17 headers.update(self.proxy_headers) # type: ignore[union-attr] 12:04:17 12:04:17 # Must keep the exception bound to a separate variable or else Python 3 12:04:17 # complains about UnboundLocalError. 12:04:17 err = None 12:04:17 12:04:17 # Keep track of whether we cleanly exited the except block. This 12:04:17 # ensures we do proper cleanup in finally. 12:04:17 clean_exit = False 12:04:17 12:04:17 # Rewind body position, if needed. Record current position 12:04:17 # for future rewinds in the event of a redirect/retry. 12:04:17 body_pos = set_file_position(body, body_pos) 12:04:17 12:04:17 try: 12:04:17 # Request a connection from the queue. 12:04:17 timeout_obj = self._get_timeout(timeout) 12:04:17 conn = self._get_conn(timeout=pool_timeout) 12:04:17 12:04:17 conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] 12:04:17 12:04:17 # Is this a closed/new connection that requires CONNECT tunnelling? 12:04:17 if self.proxy is not None and http_tunnel_required and conn.is_closed: 12:04:17 try: 12:04:17 self._prepare_proxy(conn) 12:04:17 except (BaseSSLError, OSError, SocketTimeout) as e: 12:04:17 self._raise_timeout( 12:04:17 err=e, url=self.proxy.url, timeout_value=conn.timeout 12:04:17 ) 12:04:17 raise 12:04:17 12:04:17 # If we're going to release the connection in ``finally:``, then 12:04:17 # the response doesn't need to know about the connection. Otherwise 12:04:17 # it will also try to release it and we'll have a double-release 12:04:17 # mess. 
12:04:17 response_conn = conn if not release_conn else None 12:04:17 12:04:17 # Make the request on the HTTPConnection object 12:04:17 > response = self._make_request( 12:04:17 conn, 12:04:17 method, 12:04:17 url, 12:04:17 timeout=timeout_obj, 12:04:17 body=body, 12:04:17 headers=headers, 12:04:17 chunked=chunked, 12:04:17 retries=retries, 12:04:17 response_conn=response_conn, 12:04:17 preload_content=preload_content, 12:04:17 decode_content=decode_content, 12:04:17 **response_kw, 12:04:17 ) 12:04:17 12:04:17 ../.tox/tests221/lib/python3.11/site-packages/urllib3/connectionpool.py:787: 12:04:17 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 12:04:17 ../.tox/tests221/lib/python3.11/site-packages/urllib3/connectionpool.py:493: in _make_request 12:04:17 conn.request( 12:04:17 ../.tox/tests221/lib/python3.11/site-packages/urllib3/connection.py:494: in request 12:04:17 self.endheaders() 12:04:17 /opt/pyenv/versions/3.11.10/lib/python3.11/http/client.py:1298: in endheaders 12:04:17 self._send_output(message_body, encode_chunked=encode_chunked) 12:04:17 /opt/pyenv/versions/3.11.10/lib/python3.11/http/client.py:1058: in _send_output 12:04:17 self.send(msg) 12:04:17 /opt/pyenv/versions/3.11.10/lib/python3.11/http/client.py:996: in send 12:04:17 self.connect() 12:04:17 ../.tox/tests221/lib/python3.11/site-packages/urllib3/connection.py:325: in connect 12:04:17 self.sock = self._new_conn() 12:04:17 ^^^^^^^^^^^^^^^^ 12:04:17 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 12:04:17 12:04:17 self = 12:04:17 12:04:17 def _new_conn(self) -> socket.socket: 12:04:17 """Establish a socket connection and set nodelay settings on it. 12:04:17 12:04:17 :return: New socket connection. 12:04:17 """ 12:04:17 try: 12:04:17 sock = connection.create_connection( 12:04:17 (self._dns_host, self.port), 12:04:17 self.timeout, 12:04:17 source_address=self.source_address, 12:04:17 socket_options=self.socket_options, 12:04:17 ) 12:04:17 except socket.gaierror as e: 12:04:17 raise NameResolutionError(self.host, self, e) from e 12:04:17 except SocketTimeout as e: 12:04:17 raise ConnectTimeoutError( 12:04:17 self, 12:04:17 f"Connection to {self.host} timed out. (connect timeout={self.timeout})", 12:04:17 ) from e 12:04:17 12:04:17 except OSError as e: 12:04:17 > raise NewConnectionError( 12:04:17 self, f"Failed to establish a new connection: {e}" 12:04:17 ) from e 12:04:17 E urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused 12:04:17 12:04:17 ../.tox/tests221/lib/python3.11/site-packages/urllib3/connection.py:213: NewConnectionError 12:04:17 12:04:17 The above exception was the direct cause of the following exception: 12:04:17 12:04:17 self = 12:04:17 request = , stream = False 12:04:17 timeout = Timeout(connect=30, read=30, total=None), verify = True, cert = None 12:04:17 proxies = OrderedDict() 12:04:17 12:04:17 def send( 12:04:17 self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 12:04:17 ): 12:04:17 """Sends PreparedRequest object. Returns Response object. 12:04:17 12:04:17 :param request: The :class:`PreparedRequest ` being sent. 12:04:17 :param stream: (optional) Whether to stream the request content. 12:04:17 :param timeout: (optional) How long to wait for the server to send 12:04:17 data before giving up, as a float, or a :ref:`(connect timeout, 12:04:17 read timeout) ` tuple. 
12:04:17 :type timeout: float or tuple or urllib3 Timeout object 12:04:17 :param verify: (optional) Either a boolean, in which case it controls whether 12:04:17 we verify the server's TLS certificate, or a string, in which case it 12:04:17 must be a path to a CA bundle to use 12:04:17 :param cert: (optional) Any user-provided SSL certificate to be trusted. 12:04:17 :param proxies: (optional) The proxies dictionary to apply to the request. 12:04:17 :rtype: requests.Response 12:04:17 """ 12:04:17 12:04:17 try: 12:04:17 conn = self.get_connection_with_tls_context( 12:04:17 request, verify, proxies=proxies, cert=cert 12:04:17 ) 12:04:17 except LocationValueError as e: 12:04:17 raise InvalidURL(e, request=request) 12:04:17 12:04:17 self.cert_verify(conn, request.url, verify, cert) 12:04:17 url = self.request_url(request, proxies) 12:04:17 self.add_headers( 12:04:17 request, 12:04:17 stream=stream, 12:04:17 timeout=timeout, 12:04:17 verify=verify, 12:04:17 cert=cert, 12:04:17 proxies=proxies, 12:04:17 ) 12:04:17 12:04:17 chunked = not (request.body is None or "Content-Length" in request.headers) 12:04:17 12:04:17 if isinstance(timeout, tuple): 12:04:17 try: 12:04:17 connect, read = timeout 12:04:17 timeout = TimeoutSauce(connect=connect, read=read) 12:04:17 except ValueError: 12:04:17 raise ValueError( 12:04:17 f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 12:04:17 f"or a single float to set both timeouts to the same value." 12:04:17 ) 12:04:17 elif isinstance(timeout, TimeoutSauce): 12:04:17 pass 12:04:17 else: 12:04:17 timeout = TimeoutSauce(connect=timeout, read=timeout) 12:04:17 12:04:17 try: 12:04:17 > resp = conn.urlopen( 12:04:17 method=request.method, 12:04:17 url=url, 12:04:17 body=request.body, 12:04:17 headers=request.headers, 12:04:17 redirect=False, 12:04:17 assert_same_host=False, 12:04:17 preload_content=False, 12:04:17 decode_content=False, 12:04:17 retries=self.max_retries, 12:04:17 timeout=timeout, 12:04:17 chunked=chunked, 12:04:17 ) 12:04:17 12:04:17 ../.tox/tests221/lib/python3.11/site-packages/requests/adapters.py:667: 12:04:17 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 12:04:17 ../.tox/tests221/lib/python3.11/site-packages/urllib3/connectionpool.py:841: in urlopen 12:04:17 retries = retries.increment( 12:04:17 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 12:04:17 12:04:17 self = Retry(total=0, connect=None, read=False, redirect=None, status=None) 12:04:17 method = 'GET' 12:04:17 url = '/rests/data/transportpce-portmapping:network/nodes=SPDR-SA1/switching-pool-lcp=1' 12:04:17 response = None 12:04:17 error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused') 12:04:17 _pool = 12:04:17 _stacktrace = 12:04:17 12:04:17 def increment( 12:04:17 self, 12:04:17 method: str | None = None, 12:04:17 url: str | None = None, 12:04:17 response: BaseHTTPResponse | None = None, 12:04:17 error: Exception | None = None, 12:04:17 _pool: ConnectionPool | None = None, 12:04:17 _stacktrace: TracebackType | None = None, 12:04:17 ) -> Self: 12:04:17 """Return a new Retry object with incremented retry counters. 12:04:17 12:04:17 :param response: A response object, or None, if the server did not 12:04:17 return a response. 12:04:17 :type response: :class:`~urllib3.response.BaseHTTPResponse` 12:04:17 :param Exception error: An error encountered during the request, or 12:04:17 None if the response was received successfully. 
12:04:17 12:04:17 :return: A new ``Retry`` object. 12:04:17 """ 12:04:17 if self.total is False and error: 12:04:17 # Disabled, indicate to re-raise the error. 12:04:17 raise reraise(type(error), error, _stacktrace) 12:04:17 12:04:17 total = self.total 12:04:17 if total is not None: 12:04:17 total -= 1 12:04:17 12:04:17 connect = self.connect 12:04:17 read = self.read 12:04:17 redirect = self.redirect 12:04:17 status_count = self.status 12:04:17 other = self.other 12:04:17 cause = "unknown" 12:04:17 status = None 12:04:17 redirect_location = None 12:04:17 12:04:17 if error and self._is_connection_error(error): 12:04:17 # Connect retry? 12:04:17 if connect is False: 12:04:17 raise reraise(type(error), error, _stacktrace) 12:04:17 elif connect is not None: 12:04:17 connect -= 1 12:04:17 12:04:17 elif error and self._is_read_error(error): 12:04:17 # Read retry? 12:04:17 if read is False or method is None or not self._is_method_retryable(method): 12:04:17 raise reraise(type(error), error, _stacktrace) 12:04:17 elif read is not None: 12:04:17 read -= 1 12:04:17 12:04:17 elif error: 12:04:17 # Other retry? 12:04:17 if other is not None: 12:04:17 other -= 1 12:04:17 12:04:17 elif response and response.get_redirect_location(): 12:04:17 # Redirect retry? 12:04:17 if redirect is not None: 12:04:17 redirect -= 1 12:04:17 cause = "too many redirects" 12:04:17 response_redirect_location = response.get_redirect_location() 12:04:17 if response_redirect_location: 12:04:17 redirect_location = response_redirect_location 12:04:17 status = response.status 12:04:17 12:04:17 else: 12:04:17 # Incrementing because of a server error like a 500 in 12:04:17 # status_forcelist and the given method is in the allowed_methods 12:04:17 cause = ResponseError.GENERIC_ERROR 12:04:17 if response and response.status: 12:04:17 if status_count is not None: 12:04:17 status_count -= 1 12:04:17 cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status) 12:04:17 status = response.status 12:04:17 12:04:17 history = self.history + ( 12:04:17 RequestHistory(method, url, error, status, redirect_location), 12:04:17 ) 12:04:17 12:04:17 new_retry = self.new( 12:04:17 total=total, 12:04:17 connect=connect, 12:04:17 read=read, 12:04:17 redirect=redirect, 12:04:17 status=status_count, 12:04:17 other=other, 12:04:17 history=history, 12:04:17 ) 12:04:17 12:04:17 if new_retry.is_exhausted(): 12:04:17 reason = error or ResponseError(cause) 12:04:17 > raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type] 12:04:17 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 12:04:17 E urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=8183): Max retries exceeded with url: /rests/data/transportpce-portmapping:network/nodes=SPDR-SA1/switching-pool-lcp=1 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 12:04:17 12:04:17 ../.tox/tests221/lib/python3.11/site-packages/urllib3/util/retry.py:519: MaxRetryError 12:04:17 12:04:17 During handling of the above exception, another exception occurred: 12:04:17 12:04:17 self = 12:04:17 12:04:17 def test_18_spdr_switching_pool_1(self): 12:04:17 > response = test_utils.get_portmapping_node_attr("SPDR-SA1", "switching-pool-lcp", "1") 12:04:17 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 12:04:17 12:04:17 transportpce_tests/2.2.1/test01_portmapping.py:203: 12:04:17 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 12:04:17 
transportpce_tests/common/test_utils.py:492: in get_portmapping_node_attr 12:04:17 response = get_request(target_url) 12:04:17 ^^^^^^^^^^^^^^^^^^^^^^^ 12:04:17 transportpce_tests/common/test_utils.py:116: in get_request 12:04:17 return requests.request( 12:04:17 ../.tox/tests221/lib/python3.11/site-packages/requests/api.py:59: in request 12:04:17 return session.request(method=method, url=url, **kwargs) 12:04:17 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 12:04:17 ../.tox/tests221/lib/python3.11/site-packages/requests/sessions.py:589: in request 12:04:17 resp = self.send(prep, **send_kwargs) 12:04:17 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 12:04:17 ../.tox/tests221/lib/python3.11/site-packages/requests/sessions.py:703: in send 12:04:17 r = adapter.send(request, **kwargs) 12:04:17 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 12:04:17 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 12:04:17 12:04:17 self = 12:04:17 request = , stream = False 12:04:17 timeout = Timeout(connect=30, read=30, total=None), verify = True, cert = None 12:04:17 proxies = OrderedDict() 12:04:17 12:04:17 def send( 12:04:17 self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 12:04:17 ): 12:04:17 """Sends PreparedRequest object. Returns Response object. 12:04:17 12:04:17 :param request: The :class:`PreparedRequest ` being sent. 12:04:17 :param stream: (optional) Whether to stream the request content. 12:04:17 :param timeout: (optional) How long to wait for the server to send 12:04:17 data before giving up, as a float, or a :ref:`(connect timeout, 12:04:17 read timeout) ` tuple. 12:04:17 :type timeout: float or tuple or urllib3 Timeout object 12:04:17 :param verify: (optional) Either a boolean, in which case it controls whether 12:04:17 we verify the server's TLS certificate, or a string, in which case it 12:04:17 must be a path to a CA bundle to use 12:04:17 :param cert: (optional) Any user-provided SSL certificate to be trusted. 12:04:17 :param proxies: (optional) The proxies dictionary to apply to the request. 12:04:17 :rtype: requests.Response 12:04:17 """ 12:04:17 12:04:17 try: 12:04:17 conn = self.get_connection_with_tls_context( 12:04:17 request, verify, proxies=proxies, cert=cert 12:04:17 ) 12:04:17 except LocationValueError as e: 12:04:17 raise InvalidURL(e, request=request) 12:04:17 12:04:17 self.cert_verify(conn, request.url, verify, cert) 12:04:17 url = self.request_url(request, proxies) 12:04:17 self.add_headers( 12:04:17 request, 12:04:17 stream=stream, 12:04:17 timeout=timeout, 12:04:17 verify=verify, 12:04:17 cert=cert, 12:04:17 proxies=proxies, 12:04:17 ) 12:04:17 12:04:17 chunked = not (request.body is None or "Content-Length" in request.headers) 12:04:17 12:04:17 if isinstance(timeout, tuple): 12:04:17 try: 12:04:17 connect, read = timeout 12:04:17 timeout = TimeoutSauce(connect=connect, read=read) 12:04:17 except ValueError: 12:04:17 raise ValueError( 12:04:17 f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 12:04:17 f"or a single float to set both timeouts to the same value." 
12:04:17 ) 12:04:17 elif isinstance(timeout, TimeoutSauce): 12:04:17 pass 12:04:17 else: 12:04:17 timeout = TimeoutSauce(connect=timeout, read=timeout) 12:04:17 12:04:17 try: 12:04:17 resp = conn.urlopen( 12:04:17 method=request.method, 12:04:17 url=url, 12:04:17 body=request.body, 12:04:17 headers=request.headers, 12:04:17 redirect=False, 12:04:17 assert_same_host=False, 12:04:17 preload_content=False, 12:04:17 decode_content=False, 12:04:17 retries=self.max_retries, 12:04:17 timeout=timeout, 12:04:17 chunked=chunked, 12:04:17 ) 12:04:17 12:04:17 except (ProtocolError, OSError) as err: 12:04:17 raise ConnectionError(err, request=request) 12:04:17 12:04:17 except MaxRetryError as e: 12:04:17 if isinstance(e.reason, ConnectTimeoutError): 12:04:17 # TODO: Remove this in 3.0.0: see #2811 12:04:17 if not isinstance(e.reason, NewConnectionError): 12:04:17 raise ConnectTimeout(e, request=request) 12:04:17 12:04:17 if isinstance(e.reason, ResponseError): 12:04:17 raise RetryError(e, request=request) 12:04:17 12:04:17 if isinstance(e.reason, _ProxyError): 12:04:17 raise ProxyError(e, request=request) 12:04:17 12:04:17 if isinstance(e.reason, _SSLError): 12:04:17 # This branch is for urllib3 v1.22 and later. 12:04:17 raise SSLError(e, request=request) 12:04:17 12:04:17 > raise ConnectionError(e, request=request) 12:04:17 E requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=8183): Max retries exceeded with url: /rests/data/transportpce-portmapping:network/nodes=SPDR-SA1/switching-pool-lcp=1 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 12:04:17 12:04:17 ../.tox/tests221/lib/python3.11/site-packages/requests/adapters.py:700: ConnectionError 12:04:17 ----------------------------- Captured stdout call ----------------------------- 12:04:17 execution of test_18_spdr_switching_pool_1 12:04:17 _________ TransportPCEPortMappingTesting.test_19_spdr_switching_pool_2 _________ 12:04:17 12:04:17 self = 12:04:17 12:04:17 def _new_conn(self) -> socket.socket: 12:04:17 """Establish a socket connection and set nodelay settings on it. 12:04:17 12:04:17 :return: New socket connection. 12:04:17 """ 12:04:17 try: 12:04:17 > sock = connection.create_connection( 12:04:17 (self._dns_host, self.port), 12:04:17 self.timeout, 12:04:17 source_address=self.source_address, 12:04:17 socket_options=self.socket_options, 12:04:17 ) 12:04:17 12:04:17 ../.tox/tests221/lib/python3.11/site-packages/urllib3/connection.py:198: 12:04:17 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 12:04:17 ../.tox/tests221/lib/python3.11/site-packages/urllib3/util/connection.py:85: in create_connection 12:04:17 raise err 12:04:17 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 12:04:17 12:04:17 address = ('localhost', 8183), timeout = 30, source_address = None 12:04:17 socket_options = [(6, 1, 1)] 12:04:17 12:04:17 def create_connection( 12:04:17 address: tuple[str, int], 12:04:17 timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 12:04:17 source_address: tuple[str, int] | None = None, 12:04:17 socket_options: _TYPE_SOCKET_OPTIONS | None = None, 12:04:17 ) -> socket.socket: 12:04:17 """Connect to *address* and return the socket object. 12:04:17 12:04:17 Convenience function. Connect to *address* (a 2-tuple ``(host, 12:04:17 port)``) and return the socket object. Passing the optional 12:04:17 *timeout* parameter will set the timeout on the socket instance 12:04:17 before attempting to connect. 
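Note: the retry policy shown in these frames is Retry(total=0, connect=None, read=False, redirect=None, status=None), so Retry.increment() is exhausted on the very first connection error and raises MaxRetryError instead of retrying. As an illustrative sketch only (not part of test_utils or the job configuration), a requests Session could mount an HTTPAdapter with a small connect-retry budget and backoff so a transient "connection refused" is retried before giving up:

import requests
from requests.adapters import HTTPAdapter
from urllib3.util.retry import Retry

# Illustrative configuration, not the project's actual test setup:
# retry refused connections a few times with exponential backoff.
session = requests.Session()
retry = Retry(total=3, connect=3, backoff_factor=0.5, allowed_methods=["GET"])
session.mount("http://", HTTPAdapter(max_retries=retry))

resp = session.get(
    "http://localhost:8183/rests/data/transportpce-portmapping:network"
    "/nodes=SPDR-SA1/switching-pool-lcp=1",
    auth=("admin", "admin"),
    timeout=(30, 30),
)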
If no *timeout* is supplied, the 12:04:17 global default timeout setting returned by :func:`socket.getdefaulttimeout` 12:04:17 is used. If *source_address* is set it must be a tuple of (host, port) 12:04:17 for the socket to bind as a source address before making the connection. 12:04:17 An host of '' or port 0 tells the OS to use the default. 12:04:17 """ 12:04:17 12:04:17 host, port = address 12:04:17 if host.startswith("["): 12:04:17 host = host.strip("[]") 12:04:17 err = None 12:04:17 12:04:17 # Using the value from allowed_gai_family() in the context of getaddrinfo lets 12:04:17 # us select whether to work with IPv4 DNS records, IPv6 records, or both. 12:04:17 # The original create_connection function always returns all records. 12:04:17 family = allowed_gai_family() 12:04:17 12:04:17 try: 12:04:17 host.encode("idna") 12:04:17 except UnicodeError: 12:04:17 raise LocationParseError(f"'{host}', label empty or too long") from None 12:04:17 12:04:17 for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM): 12:04:17 af, socktype, proto, canonname, sa = res 12:04:17 sock = None 12:04:17 try: 12:04:17 sock = socket.socket(af, socktype, proto) 12:04:17 12:04:17 # If provided, set socket level options before connecting. 12:04:17 _set_socket_options(sock, socket_options) 12:04:17 12:04:17 if timeout is not _DEFAULT_TIMEOUT: 12:04:17 sock.settimeout(timeout) 12:04:17 if source_address: 12:04:17 sock.bind(source_address) 12:04:17 > sock.connect(sa) 12:04:17 E ConnectionRefusedError: [Errno 111] Connection refused 12:04:17 12:04:17 ../.tox/tests221/lib/python3.11/site-packages/urllib3/util/connection.py:73: ConnectionRefusedError 12:04:17 12:04:17 The above exception was the direct cause of the following exception: 12:04:17 12:04:17 self = 12:04:17 method = 'GET' 12:04:17 url = '/rests/data/transportpce-portmapping:network/nodes=SPDR-SA1/switching-pool-lcp=2' 12:04:17 body = None 12:04:17 headers = {'User-Agent': 'python-requests/2.32.4', 'Accept-Encoding': 'gzip, deflate', 'Accept': 'application/json', 'Connection': 'keep-alive', 'Content-Type': 'application/json', 'Authorization': 'Basic YWRtaW46YWRtaW4='} 12:04:17 retries = Retry(total=0, connect=None, read=False, redirect=None, status=None) 12:04:17 redirect = False, assert_same_host = False 12:04:17 timeout = Timeout(connect=30, read=30, total=None), pool_timeout = None 12:04:17 release_conn = False, chunked = False, body_pos = None, preload_content = False 12:04:17 decode_content = False, response_kw = {} 12:04:17 parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/rests/data/transportpce-portmapping:network/nodes=SPDR-SA1/switching-pool-lcp=2', query=None, fragment=None) 12:04:17 destination_scheme = None, conn = None, release_this_conn = True 12:04:17 http_tunnel_required = False, err = None, clean_exit = False 12:04:17 12:04:17 def urlopen( # type: ignore[override] 12:04:17 self, 12:04:17 method: str, 12:04:17 url: str, 12:04:17 body: _TYPE_BODY | None = None, 12:04:17 headers: typing.Mapping[str, str] | None = None, 12:04:17 retries: Retry | bool | int | None = None, 12:04:17 redirect: bool = True, 12:04:17 assert_same_host: bool = True, 12:04:17 timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 12:04:17 pool_timeout: int | None = None, 12:04:17 release_conn: bool | None = None, 12:04:17 chunked: bool = False, 12:04:17 body_pos: _TYPE_BODY_POSITION | None = None, 12:04:17 preload_content: bool = True, 12:04:17 decode_content: bool = True, 12:04:17 **response_kw: typing.Any, 12:04:17 ) -> BaseHTTPResponse: 12:04:17 
""" 12:04:17 Get a connection from the pool and perform an HTTP request. This is the 12:04:17 lowest level call for making a request, so you'll need to specify all 12:04:17 the raw details. 12:04:17 12:04:17 .. note:: 12:04:17 12:04:17 More commonly, it's appropriate to use a convenience method 12:04:17 such as :meth:`request`. 12:04:17 12:04:17 .. note:: 12:04:17 12:04:17 `release_conn` will only behave as expected if 12:04:17 `preload_content=False` because we want to make 12:04:17 `preload_content=False` the default behaviour someday soon without 12:04:17 breaking backwards compatibility. 12:04:17 12:04:17 :param method: 12:04:17 HTTP request method (such as GET, POST, PUT, etc.) 12:04:17 12:04:17 :param url: 12:04:17 The URL to perform the request on. 12:04:17 12:04:17 :param body: 12:04:17 Data to send in the request body, either :class:`str`, :class:`bytes`, 12:04:17 an iterable of :class:`str`/:class:`bytes`, or a file-like object. 12:04:17 12:04:17 :param headers: 12:04:17 Dictionary of custom headers to send, such as User-Agent, 12:04:17 If-None-Match, etc. If None, pool headers are used. If provided, 12:04:17 these headers completely replace any pool-specific headers. 12:04:17 12:04:17 :param retries: 12:04:17 Configure the number of retries to allow before raising a 12:04:17 :class:`~urllib3.exceptions.MaxRetryError` exception. 12:04:17 12:04:17 If ``None`` (default) will retry 3 times, see ``Retry.DEFAULT``. Pass a 12:04:17 :class:`~urllib3.util.retry.Retry` object for fine-grained control 12:04:17 over different types of retries. 12:04:17 Pass an integer number to retry connection errors that many times, 12:04:17 but no other types of errors. Pass zero to never retry. 12:04:17 12:04:17 If ``False``, then retries are disabled and any exception is raised 12:04:17 immediately. Also, instead of raising a MaxRetryError on redirects, 12:04:17 the redirect response will be returned. 12:04:17 12:04:17 :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. 12:04:17 12:04:17 :param redirect: 12:04:17 If True, automatically handle redirects (status codes 301, 302, 12:04:17 303, 307, 308). Each redirect counts as a retry. Disabling retries 12:04:17 will disable redirect, too. 12:04:17 12:04:17 :param assert_same_host: 12:04:17 If ``True``, will make sure that the host of the pool requests is 12:04:17 consistent else will raise HostChangedError. When ``False``, you can 12:04:17 use the pool on an HTTP proxy and request foreign hosts. 12:04:17 12:04:17 :param timeout: 12:04:17 If specified, overrides the default timeout for this one 12:04:17 request. It may be a float (in seconds) or an instance of 12:04:17 :class:`urllib3.util.Timeout`. 12:04:17 12:04:17 :param pool_timeout: 12:04:17 If set and the pool is set to block=True, then this method will 12:04:17 block for ``pool_timeout`` seconds and raise EmptyPoolError if no 12:04:17 connection is available within the time period. 12:04:17 12:04:17 :param bool preload_content: 12:04:17 If True, the response's body will be preloaded into memory. 12:04:17 12:04:17 :param bool decode_content: 12:04:17 If True, will attempt to decode the body based on the 12:04:17 'content-encoding' header. 12:04:17 12:04:17 :param release_conn: 12:04:17 If False, then the urlopen call will not release the connection 12:04:17 back into the pool once a response is received (but will release if 12:04:17 you read the entire contents of the response such as when 12:04:17 `preload_content=True`). 
This is useful if you're not preloading 12:04:17 the response's content immediately. You will need to call 12:04:17 ``r.release_conn()`` on the response ``r`` to return the connection 12:04:17 back into the pool. If None, it takes the value of ``preload_content`` 12:04:17 which defaults to ``True``. 12:04:17 12:04:17 :param bool chunked: 12:04:17 If True, urllib3 will send the body using chunked transfer 12:04:17 encoding. Otherwise, urllib3 will send the body using the standard 12:04:17 content-length form. Defaults to False. 12:04:17 12:04:17 :param int body_pos: 12:04:17 Position to seek to in file-like body in the event of a retry or 12:04:17 redirect. Typically this won't need to be set because urllib3 will 12:04:17 auto-populate the value when needed. 12:04:17 """ 12:04:17 parsed_url = parse_url(url) 12:04:17 destination_scheme = parsed_url.scheme 12:04:17 12:04:17 if headers is None: 12:04:17 headers = self.headers 12:04:17 12:04:17 if not isinstance(retries, Retry): 12:04:17 retries = Retry.from_int(retries, redirect=redirect, default=self.retries) 12:04:17 12:04:17 if release_conn is None: 12:04:17 release_conn = preload_content 12:04:17 12:04:17 # Check host 12:04:17 if assert_same_host and not self.is_same_host(url): 12:04:17 raise HostChangedError(self, url, retries) 12:04:17 12:04:17 # Ensure that the URL we're connecting to is properly encoded 12:04:17 if url.startswith("/"): 12:04:17 url = to_str(_encode_target(url)) 12:04:17 else: 12:04:17 url = to_str(parsed_url.url) 12:04:17 12:04:17 conn = None 12:04:17 12:04:17 # Track whether `conn` needs to be released before 12:04:17 # returning/raising/recursing. Update this variable if necessary, and 12:04:17 # leave `release_conn` constant throughout the function. That way, if 12:04:17 # the function recurses, the original value of `release_conn` will be 12:04:17 # passed down into the recursive call, and its value will be respected. 12:04:17 # 12:04:17 # See issue #651 [1] for details. 12:04:17 # 12:04:17 # [1] 12:04:17 release_this_conn = release_conn 12:04:17 12:04:17 http_tunnel_required = connection_requires_http_tunnel( 12:04:17 self.proxy, self.proxy_config, destination_scheme 12:04:17 ) 12:04:17 12:04:17 # Merge the proxy headers. Only done when not using HTTP CONNECT. We 12:04:17 # have to copy the headers dict so we can safely change it without those 12:04:17 # changes being reflected in anyone else's copy. 12:04:17 if not http_tunnel_required: 12:04:17 headers = headers.copy() # type: ignore[attr-defined] 12:04:17 headers.update(self.proxy_headers) # type: ignore[union-attr] 12:04:17 12:04:17 # Must keep the exception bound to a separate variable or else Python 3 12:04:17 # complains about UnboundLocalError. 12:04:17 err = None 12:04:17 12:04:17 # Keep track of whether we cleanly exited the except block. This 12:04:17 # ensures we do proper cleanup in finally. 12:04:17 clean_exit = False 12:04:17 12:04:17 # Rewind body position, if needed. Record current position 12:04:17 # for future rewinds in the event of a redirect/retry. 12:04:17 body_pos = set_file_position(body, body_pos) 12:04:17 12:04:17 try: 12:04:17 # Request a connection from the queue. 12:04:17 timeout_obj = self._get_timeout(timeout) 12:04:17 conn = self._get_conn(timeout=pool_timeout) 12:04:17 12:04:17 conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] 12:04:17 12:04:17 # Is this a closed/new connection that requires CONNECT tunnelling? 
12:04:17 if self.proxy is not None and http_tunnel_required and conn.is_closed: 12:04:17 try: 12:04:17 self._prepare_proxy(conn) 12:04:17 except (BaseSSLError, OSError, SocketTimeout) as e: 12:04:17 self._raise_timeout( 12:04:17 err=e, url=self.proxy.url, timeout_value=conn.timeout 12:04:17 ) 12:04:17 raise 12:04:17 12:04:17 # If we're going to release the connection in ``finally:``, then 12:04:17 # the response doesn't need to know about the connection. Otherwise 12:04:17 # it will also try to release it and we'll have a double-release 12:04:17 # mess. 12:04:17 response_conn = conn if not release_conn else None 12:04:17 12:04:17 # Make the request on the HTTPConnection object 12:04:17 > response = self._make_request( 12:04:17 conn, 12:04:17 method, 12:04:17 url, 12:04:17 timeout=timeout_obj, 12:04:17 body=body, 12:04:17 headers=headers, 12:04:17 chunked=chunked, 12:04:17 retries=retries, 12:04:17 response_conn=response_conn, 12:04:17 preload_content=preload_content, 12:04:17 decode_content=decode_content, 12:04:17 **response_kw, 12:04:17 ) 12:04:17 12:04:17 ../.tox/tests221/lib/python3.11/site-packages/urllib3/connectionpool.py:787: 12:04:17 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 12:04:17 ../.tox/tests221/lib/python3.11/site-packages/urllib3/connectionpool.py:493: in _make_request 12:04:17 conn.request( 12:04:17 ../.tox/tests221/lib/python3.11/site-packages/urllib3/connection.py:494: in request 12:04:17 self.endheaders() 12:04:17 /opt/pyenv/versions/3.11.10/lib/python3.11/http/client.py:1298: in endheaders 12:04:17 self._send_output(message_body, encode_chunked=encode_chunked) 12:04:17 /opt/pyenv/versions/3.11.10/lib/python3.11/http/client.py:1058: in _send_output 12:04:17 self.send(msg) 12:04:17 /opt/pyenv/versions/3.11.10/lib/python3.11/http/client.py:996: in send 12:04:17 self.connect() 12:04:17 ../.tox/tests221/lib/python3.11/site-packages/urllib3/connection.py:325: in connect 12:04:17 self.sock = self._new_conn() 12:04:17 ^^^^^^^^^^^^^^^^ 12:04:17 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 12:04:17 12:04:17 self = 12:04:17 12:04:17 def _new_conn(self) -> socket.socket: 12:04:17 """Establish a socket connection and set nodelay settings on it. 12:04:17 12:04:17 :return: New socket connection. 12:04:17 """ 12:04:17 try: 12:04:17 sock = connection.create_connection( 12:04:17 (self._dns_host, self.port), 12:04:17 self.timeout, 12:04:17 source_address=self.source_address, 12:04:17 socket_options=self.socket_options, 12:04:17 ) 12:04:17 except socket.gaierror as e: 12:04:17 raise NameResolutionError(self.host, self, e) from e 12:04:17 except SocketTimeout as e: 12:04:17 raise ConnectTimeoutError( 12:04:17 self, 12:04:17 f"Connection to {self.host} timed out. 
(connect timeout={self.timeout})", 12:04:17 ) from e 12:04:17 12:04:17 except OSError as e: 12:04:17 > raise NewConnectionError( 12:04:17 self, f"Failed to establish a new connection: {e}" 12:04:17 ) from e 12:04:17 E urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused 12:04:17 12:04:17 ../.tox/tests221/lib/python3.11/site-packages/urllib3/connection.py:213: NewConnectionError 12:04:17 12:04:17 The above exception was the direct cause of the following exception: 12:04:17 12:04:17 self = 12:04:17 request = , stream = False 12:04:17 timeout = Timeout(connect=30, read=30, total=None), verify = True, cert = None 12:04:17 proxies = OrderedDict() 12:04:17 12:04:17 def send( 12:04:17 self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 12:04:17 ): 12:04:17 """Sends PreparedRequest object. Returns Response object. 12:04:17 12:04:17 :param request: The :class:`PreparedRequest ` being sent. 12:04:17 :param stream: (optional) Whether to stream the request content. 12:04:17 :param timeout: (optional) How long to wait for the server to send 12:04:17 data before giving up, as a float, or a :ref:`(connect timeout, 12:04:17 read timeout) ` tuple. 12:04:17 :type timeout: float or tuple or urllib3 Timeout object 12:04:17 :param verify: (optional) Either a boolean, in which case it controls whether 12:04:17 we verify the server's TLS certificate, or a string, in which case it 12:04:17 must be a path to a CA bundle to use 12:04:17 :param cert: (optional) Any user-provided SSL certificate to be trusted. 12:04:17 :param proxies: (optional) The proxies dictionary to apply to the request. 12:04:17 :rtype: requests.Response 12:04:17 """ 12:04:17 12:04:17 try: 12:04:17 conn = self.get_connection_with_tls_context( 12:04:17 request, verify, proxies=proxies, cert=cert 12:04:17 ) 12:04:17 except LocationValueError as e: 12:04:17 raise InvalidURL(e, request=request) 12:04:17 12:04:17 self.cert_verify(conn, request.url, verify, cert) 12:04:17 url = self.request_url(request, proxies) 12:04:17 self.add_headers( 12:04:17 request, 12:04:17 stream=stream, 12:04:17 timeout=timeout, 12:04:17 verify=verify, 12:04:17 cert=cert, 12:04:17 proxies=proxies, 12:04:17 ) 12:04:17 12:04:17 chunked = not (request.body is None or "Content-Length" in request.headers) 12:04:17 12:04:17 if isinstance(timeout, tuple): 12:04:17 try: 12:04:17 connect, read = timeout 12:04:17 timeout = TimeoutSauce(connect=connect, read=read) 12:04:17 except ValueError: 12:04:17 raise ValueError( 12:04:17 f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 12:04:17 f"or a single float to set both timeouts to the same value." 
12:04:17 ) 12:04:17 elif isinstance(timeout, TimeoutSauce): 12:04:17 pass 12:04:17 else: 12:04:17 timeout = TimeoutSauce(connect=timeout, read=timeout) 12:04:17 12:04:17 try: 12:04:17 > resp = conn.urlopen( 12:04:17 method=request.method, 12:04:17 url=url, 12:04:17 body=request.body, 12:04:17 headers=request.headers, 12:04:17 redirect=False, 12:04:17 assert_same_host=False, 12:04:17 preload_content=False, 12:04:17 decode_content=False, 12:04:17 retries=self.max_retries, 12:04:17 timeout=timeout, 12:04:17 chunked=chunked, 12:04:17 ) 12:04:17 12:04:17 ../.tox/tests221/lib/python3.11/site-packages/requests/adapters.py:667: 12:04:17 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 12:04:17 ../.tox/tests221/lib/python3.11/site-packages/urllib3/connectionpool.py:841: in urlopen 12:04:17 retries = retries.increment( 12:04:17 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 12:04:17 12:04:17 self = Retry(total=0, connect=None, read=False, redirect=None, status=None) 12:04:17 method = 'GET' 12:04:17 url = '/rests/data/transportpce-portmapping:network/nodes=SPDR-SA1/switching-pool-lcp=2' 12:04:17 response = None 12:04:17 error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused') 12:04:17 _pool = 12:04:17 _stacktrace = 12:04:17 12:04:17 def increment( 12:04:17 self, 12:04:17 method: str | None = None, 12:04:17 url: str | None = None, 12:04:17 response: BaseHTTPResponse | None = None, 12:04:17 error: Exception | None = None, 12:04:17 _pool: ConnectionPool | None = None, 12:04:17 _stacktrace: TracebackType | None = None, 12:04:17 ) -> Self: 12:04:17 """Return a new Retry object with incremented retry counters. 12:04:17 12:04:17 :param response: A response object, or None, if the server did not 12:04:17 return a response. 12:04:17 :type response: :class:`~urllib3.response.BaseHTTPResponse` 12:04:17 :param Exception error: An error encountered during the request, or 12:04:17 None if the response was received successfully. 12:04:17 12:04:17 :return: A new ``Retry`` object. 12:04:17 """ 12:04:17 if self.total is False and error: 12:04:17 # Disabled, indicate to re-raise the error. 12:04:17 raise reraise(type(error), error, _stacktrace) 12:04:17 12:04:17 total = self.total 12:04:17 if total is not None: 12:04:17 total -= 1 12:04:17 12:04:17 connect = self.connect 12:04:17 read = self.read 12:04:17 redirect = self.redirect 12:04:17 status_count = self.status 12:04:17 other = self.other 12:04:17 cause = "unknown" 12:04:17 status = None 12:04:17 redirect_location = None 12:04:17 12:04:17 if error and self._is_connection_error(error): 12:04:17 # Connect retry? 12:04:17 if connect is False: 12:04:17 raise reraise(type(error), error, _stacktrace) 12:04:17 elif connect is not None: 12:04:17 connect -= 1 12:04:17 12:04:17 elif error and self._is_read_error(error): 12:04:17 # Read retry? 12:04:17 if read is False or method is None or not self._is_method_retryable(method): 12:04:17 raise reraise(type(error), error, _stacktrace) 12:04:17 elif read is not None: 12:04:17 read -= 1 12:04:17 12:04:17 elif error: 12:04:17 # Other retry? 12:04:17 if other is not None: 12:04:17 other -= 1 12:04:17 12:04:17 elif response and response.get_redirect_location(): 12:04:17 # Redirect retry? 
12:04:17 if redirect is not None: 12:04:17 redirect -= 1 12:04:17 cause = "too many redirects" 12:04:17 response_redirect_location = response.get_redirect_location() 12:04:17 if response_redirect_location: 12:04:17 redirect_location = response_redirect_location 12:04:17 status = response.status 12:04:17 12:04:17 else: 12:04:17 # Incrementing because of a server error like a 500 in 12:04:17 # status_forcelist and the given method is in the allowed_methods 12:04:17 cause = ResponseError.GENERIC_ERROR 12:04:17 if response and response.status: 12:04:17 if status_count is not None: 12:04:17 status_count -= 1 12:04:17 cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status) 12:04:17 status = response.status 12:04:17 12:04:17 history = self.history + ( 12:04:17 RequestHistory(method, url, error, status, redirect_location), 12:04:17 ) 12:04:17 12:04:17 new_retry = self.new( 12:04:17 total=total, 12:04:17 connect=connect, 12:04:17 read=read, 12:04:17 redirect=redirect, 12:04:17 status=status_count, 12:04:17 other=other, 12:04:17 history=history, 12:04:17 ) 12:04:17 12:04:17 if new_retry.is_exhausted(): 12:04:17 reason = error or ResponseError(cause) 12:04:17 > raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type] 12:04:17 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 12:04:17 E urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=8183): Max retries exceeded with url: /rests/data/transportpce-portmapping:network/nodes=SPDR-SA1/switching-pool-lcp=2 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 12:04:17 12:04:17 ../.tox/tests221/lib/python3.11/site-packages/urllib3/util/retry.py:519: MaxRetryError 12:04:17 12:04:17 During handling of the above exception, another exception occurred: 12:04:17 12:04:17 self = 12:04:17 12:04:17 def test_19_spdr_switching_pool_2(self): 12:04:17 > response = test_utils.get_portmapping_node_attr("SPDR-SA1", "switching-pool-lcp", "2") 12:04:17 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 12:04:17 12:04:17 transportpce_tests/2.2.1/test01_portmapping.py:222: 12:04:17 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 12:04:17 transportpce_tests/common/test_utils.py:492: in get_portmapping_node_attr 12:04:17 response = get_request(target_url) 12:04:17 ^^^^^^^^^^^^^^^^^^^^^^^ 12:04:17 transportpce_tests/common/test_utils.py:116: in get_request 12:04:17 return requests.request( 12:04:17 ../.tox/tests221/lib/python3.11/site-packages/requests/api.py:59: in request 12:04:17 return session.request(method=method, url=url, **kwargs) 12:04:17 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 12:04:17 ../.tox/tests221/lib/python3.11/site-packages/requests/sessions.py:589: in request 12:04:17 resp = self.send(prep, **send_kwargs) 12:04:17 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 12:04:17 ../.tox/tests221/lib/python3.11/site-packages/requests/sessions.py:703: in send 12:04:17 r = adapter.send(request, **kwargs) 12:04:17 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 12:04:17 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 12:04:17 12:04:17 self = 12:04:17 request = , stream = False 12:04:17 timeout = Timeout(connect=30, read=30, total=None), verify = True, cert = None 12:04:17 proxies = OrderedDict() 12:04:17 12:04:17 def send( 12:04:17 self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 12:04:17 ): 12:04:17 """Sends PreparedRequest object. Returns Response object. 
12:04:17 12:04:17 :param request: The :class:`PreparedRequest ` being sent. 12:04:17 :param stream: (optional) Whether to stream the request content. 12:04:17 :param timeout: (optional) How long to wait for the server to send 12:04:17 data before giving up, as a float, or a :ref:`(connect timeout, 12:04:17 read timeout) ` tuple. 12:04:17 :type timeout: float or tuple or urllib3 Timeout object 12:04:17 :param verify: (optional) Either a boolean, in which case it controls whether 12:04:17 we verify the server's TLS certificate, or a string, in which case it 12:04:17 must be a path to a CA bundle to use 12:04:17 :param cert: (optional) Any user-provided SSL certificate to be trusted. 12:04:17 :param proxies: (optional) The proxies dictionary to apply to the request. 12:04:17 :rtype: requests.Response 12:04:17 """ 12:04:17 12:04:17 try: 12:04:17 conn = self.get_connection_with_tls_context( 12:04:17 request, verify, proxies=proxies, cert=cert 12:04:17 ) 12:04:17 except LocationValueError as e: 12:04:17 raise InvalidURL(e, request=request) 12:04:17 12:04:17 self.cert_verify(conn, request.url, verify, cert) 12:04:17 url = self.request_url(request, proxies) 12:04:17 self.add_headers( 12:04:17 request, 12:04:17 stream=stream, 12:04:17 timeout=timeout, 12:04:17 verify=verify, 12:04:17 cert=cert, 12:04:17 proxies=proxies, 12:04:17 ) 12:04:17 12:04:17 chunked = not (request.body is None or "Content-Length" in request.headers) 12:04:17 12:04:17 if isinstance(timeout, tuple): 12:04:17 try: 12:04:17 connect, read = timeout 12:04:17 timeout = TimeoutSauce(connect=connect, read=read) 12:04:17 except ValueError: 12:04:17 raise ValueError( 12:04:17 f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 12:04:17 f"or a single float to set both timeouts to the same value." 12:04:17 ) 12:04:17 elif isinstance(timeout, TimeoutSauce): 12:04:17 pass 12:04:17 else: 12:04:17 timeout = TimeoutSauce(connect=timeout, read=timeout) 12:04:17 12:04:17 try: 12:04:17 resp = conn.urlopen( 12:04:17 method=request.method, 12:04:17 url=url, 12:04:17 body=request.body, 12:04:17 headers=request.headers, 12:04:17 redirect=False, 12:04:17 assert_same_host=False, 12:04:17 preload_content=False, 12:04:17 decode_content=False, 12:04:17 retries=self.max_retries, 12:04:17 timeout=timeout, 12:04:17 chunked=chunked, 12:04:17 ) 12:04:17 12:04:17 except (ProtocolError, OSError) as err: 12:04:17 raise ConnectionError(err, request=request) 12:04:17 12:04:17 except MaxRetryError as e: 12:04:17 if isinstance(e.reason, ConnectTimeoutError): 12:04:17 # TODO: Remove this in 3.0.0: see #2811 12:04:17 if not isinstance(e.reason, NewConnectionError): 12:04:17 raise ConnectTimeout(e, request=request) 12:04:17 12:04:17 if isinstance(e.reason, ResponseError): 12:04:17 raise RetryError(e, request=request) 12:04:17 12:04:17 if isinstance(e.reason, _ProxyError): 12:04:17 raise ProxyError(e, request=request) 12:04:17 12:04:17 if isinstance(e.reason, _SSLError): 12:04:17 # This branch is for urllib3 v1.22 and later. 
12:04:17 raise SSLError(e, request=request) 12:04:17 12:04:17 > raise ConnectionError(e, request=request) 12:04:17 E requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=8183): Max retries exceeded with url: /rests/data/transportpce-portmapping:network/nodes=SPDR-SA1/switching-pool-lcp=2 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 12:04:17 12:04:17 ../.tox/tests221/lib/python3.11/site-packages/requests/adapters.py:700: ConnectionError 12:04:17 ----------------------------- Captured stdout call ----------------------------- 12:04:17 execution of test_19_spdr_switching_pool_2 12:04:17 _________ TransportPCEPortMappingTesting.test_20_spdr_switching_pool_3 _________ 12:04:17 12:04:17 self = 12:04:17 12:04:17 def _new_conn(self) -> socket.socket: 12:04:17 """Establish a socket connection and set nodelay settings on it. 12:04:17 12:04:17 :return: New socket connection. 12:04:17 """ 12:04:17 try: 12:04:17 > sock = connection.create_connection( 12:04:17 (self._dns_host, self.port), 12:04:17 self.timeout, 12:04:17 source_address=self.source_address, 12:04:17 socket_options=self.socket_options, 12:04:17 ) 12:04:17 12:04:17 ../.tox/tests221/lib/python3.11/site-packages/urllib3/connection.py:198: 12:04:17 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 12:04:17 ../.tox/tests221/lib/python3.11/site-packages/urllib3/util/connection.py:85: in create_connection 12:04:17 raise err 12:04:17 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 12:04:17 12:04:17 address = ('localhost', 8183), timeout = 30, source_address = None 12:04:17 socket_options = [(6, 1, 1)] 12:04:17 12:04:17 def create_connection( 12:04:17 address: tuple[str, int], 12:04:17 timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 12:04:17 source_address: tuple[str, int] | None = None, 12:04:17 socket_options: _TYPE_SOCKET_OPTIONS | None = None, 12:04:17 ) -> socket.socket: 12:04:17 """Connect to *address* and return the socket object. 12:04:17 12:04:17 Convenience function. Connect to *address* (a 2-tuple ``(host, 12:04:17 port)``) and return the socket object. Passing the optional 12:04:17 *timeout* parameter will set the timeout on the socket instance 12:04:17 before attempting to connect. If no *timeout* is supplied, the 12:04:17 global default timeout setting returned by :func:`socket.getdefaulttimeout` 12:04:17 is used. If *source_address* is set it must be a tuple of (host, port) 12:04:17 for the socket to bind as a source address before making the connection. 12:04:17 An host of '' or port 0 tells the OS to use the default. 12:04:17 """ 12:04:17 12:04:17 host, port = address 12:04:17 if host.startswith("["): 12:04:17 host = host.strip("[]") 12:04:17 err = None 12:04:17 12:04:17 # Using the value from allowed_gai_family() in the context of getaddrinfo lets 12:04:17 # us select whether to work with IPv4 DNS records, IPv6 records, or both. 12:04:17 # The original create_connection function always returns all records. 
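For context on the exception chain that repeats in these failures: urllib3 raises NewConnectionError when the TCP connect to localhost:8183 is refused, Retry(total=0) turns the first failure into MaxRetryError, and requests' HTTPAdapter.send() re-raises that as requests.exceptions.ConnectionError, which is what every test reports. A minimal sketch, assuming only that nothing is listening on localhost:8183 (as in this run), that reproduces the same chain outside the test suite:

import requests

try:
    # Any request to the controller port that refused the tests above
    # reproduces the same wrapped-exception chain outside the suite.
    requests.get(
        "http://localhost:8183/rests/data/transportpce-portmapping:network",
        auth=("admin", "admin"), timeout=30)
except requests.exceptions.ConnectionError as err:
    inner = err.args[0]            # the urllib3.exceptions.MaxRetryError
    print(type(inner).__name__)    # MaxRetryError
    print(inner.reason)            # NewConnectionError(... Connection refused)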
12:04:17 family = allowed_gai_family() 12:04:17 12:04:17 try: 12:04:17 host.encode("idna") 12:04:17 except UnicodeError: 12:04:17 raise LocationParseError(f"'{host}', label empty or too long") from None 12:04:17 12:04:17 for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM): 12:04:17 af, socktype, proto, canonname, sa = res 12:04:17 sock = None 12:04:17 try: 12:04:17 sock = socket.socket(af, socktype, proto) 12:04:17 12:04:17 # If provided, set socket level options before connecting. 12:04:17 _set_socket_options(sock, socket_options) 12:04:17 12:04:17 if timeout is not _DEFAULT_TIMEOUT: 12:04:17 sock.settimeout(timeout) 12:04:17 if source_address: 12:04:17 sock.bind(source_address) 12:04:17 > sock.connect(sa) 12:04:17 E ConnectionRefusedError: [Errno 111] Connection refused 12:04:17 12:04:17 ../.tox/tests221/lib/python3.11/site-packages/urllib3/util/connection.py:73: ConnectionRefusedError 12:04:17 12:04:17 The above exception was the direct cause of the following exception: 12:04:17 12:04:17 self = 12:04:17 method = 'GET' 12:04:17 url = '/rests/data/transportpce-portmapping:network/nodes=SPDR-SA1/switching-pool-lcp=3' 12:04:17 body = None 12:04:17 headers = {'User-Agent': 'python-requests/2.32.4', 'Accept-Encoding': 'gzip, deflate', 'Accept': 'application/json', 'Connection': 'keep-alive', 'Content-Type': 'application/json', 'Authorization': 'Basic YWRtaW46YWRtaW4='} 12:04:17 retries = Retry(total=0, connect=None, read=False, redirect=None, status=None) 12:04:17 redirect = False, assert_same_host = False 12:04:17 timeout = Timeout(connect=30, read=30, total=None), pool_timeout = None 12:04:17 release_conn = False, chunked = False, body_pos = None, preload_content = False 12:04:17 decode_content = False, response_kw = {} 12:04:17 parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/rests/data/transportpce-portmapping:network/nodes=SPDR-SA1/switching-pool-lcp=3', query=None, fragment=None) 12:04:17 destination_scheme = None, conn = None, release_this_conn = True 12:04:17 http_tunnel_required = False, err = None, clean_exit = False 12:04:17 12:04:17 def urlopen( # type: ignore[override] 12:04:17 self, 12:04:17 method: str, 12:04:17 url: str, 12:04:17 body: _TYPE_BODY | None = None, 12:04:17 headers: typing.Mapping[str, str] | None = None, 12:04:17 retries: Retry | bool | int | None = None, 12:04:17 redirect: bool = True, 12:04:17 assert_same_host: bool = True, 12:04:17 timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 12:04:17 pool_timeout: int | None = None, 12:04:17 release_conn: bool | None = None, 12:04:17 chunked: bool = False, 12:04:17 body_pos: _TYPE_BODY_POSITION | None = None, 12:04:17 preload_content: bool = True, 12:04:17 decode_content: bool = True, 12:04:17 **response_kw: typing.Any, 12:04:17 ) -> BaseHTTPResponse: 12:04:17 """ 12:04:17 Get a connection from the pool and perform an HTTP request. This is the 12:04:17 lowest level call for making a request, so you'll need to specify all 12:04:17 the raw details. 12:04:17 12:04:17 .. note:: 12:04:17 12:04:17 More commonly, it's appropriate to use a convenience method 12:04:17 such as :meth:`request`. 12:04:17 12:04:17 .. note:: 12:04:17 12:04:17 `release_conn` will only behave as expected if 12:04:17 `preload_content=False` because we want to make 12:04:17 `preload_content=False` the default behaviour someday soon without 12:04:17 breaking backwards compatibility. 12:04:17 12:04:17 :param method: 12:04:17 HTTP request method (such as GET, POST, PUT, etc.) 
12:04:17 12:04:17 :param url: 12:04:17 The URL to perform the request on. 12:04:17 12:04:17 :param body: 12:04:17 Data to send in the request body, either :class:`str`, :class:`bytes`, 12:04:17 an iterable of :class:`str`/:class:`bytes`, or a file-like object. 12:04:17 12:04:17 :param headers: 12:04:17 Dictionary of custom headers to send, such as User-Agent, 12:04:17 If-None-Match, etc. If None, pool headers are used. If provided, 12:04:17 these headers completely replace any pool-specific headers. 12:04:17 12:04:17 :param retries: 12:04:17 Configure the number of retries to allow before raising a 12:04:17 :class:`~urllib3.exceptions.MaxRetryError` exception. 12:04:17 12:04:17 If ``None`` (default) will retry 3 times, see ``Retry.DEFAULT``. Pass a 12:04:17 :class:`~urllib3.util.retry.Retry` object for fine-grained control 12:04:17 over different types of retries. 12:04:17 Pass an integer number to retry connection errors that many times, 12:04:17 but no other types of errors. Pass zero to never retry. 12:04:17 12:04:17 If ``False``, then retries are disabled and any exception is raised 12:04:17 immediately. Also, instead of raising a MaxRetryError on redirects, 12:04:17 the redirect response will be returned. 12:04:17 12:04:17 :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. 12:04:17 12:04:17 :param redirect: 12:04:17 If True, automatically handle redirects (status codes 301, 302, 12:04:17 303, 307, 308). Each redirect counts as a retry. Disabling retries 12:04:17 will disable redirect, too. 12:04:17 12:04:17 :param assert_same_host: 12:04:17 If ``True``, will make sure that the host of the pool requests is 12:04:17 consistent else will raise HostChangedError. When ``False``, you can 12:04:17 use the pool on an HTTP proxy and request foreign hosts. 12:04:17 12:04:17 :param timeout: 12:04:17 If specified, overrides the default timeout for this one 12:04:17 request. It may be a float (in seconds) or an instance of 12:04:17 :class:`urllib3.util.Timeout`. 12:04:17 12:04:17 :param pool_timeout: 12:04:17 If set and the pool is set to block=True, then this method will 12:04:17 block for ``pool_timeout`` seconds and raise EmptyPoolError if no 12:04:17 connection is available within the time period. 12:04:17 12:04:17 :param bool preload_content: 12:04:17 If True, the response's body will be preloaded into memory. 12:04:17 12:04:17 :param bool decode_content: 12:04:17 If True, will attempt to decode the body based on the 12:04:17 'content-encoding' header. 12:04:17 12:04:17 :param release_conn: 12:04:17 If False, then the urlopen call will not release the connection 12:04:17 back into the pool once a response is received (but will release if 12:04:17 you read the entire contents of the response such as when 12:04:17 `preload_content=True`). This is useful if you're not preloading 12:04:17 the response's content immediately. You will need to call 12:04:17 ``r.release_conn()`` on the response ``r`` to return the connection 12:04:17 back into the pool. If None, it takes the value of ``preload_content`` 12:04:17 which defaults to ``True``. 12:04:17 12:04:17 :param bool chunked: 12:04:17 If True, urllib3 will send the body using chunked transfer 12:04:17 encoding. Otherwise, urllib3 will send the body using the standard 12:04:17 content-length form. Defaults to False. 12:04:17 12:04:17 :param int body_pos: 12:04:17 Position to seek to in file-like body in the event of a retry or 12:04:17 redirect. 
Typically this won't need to be set because urllib3 will 12:04:17 auto-populate the value when needed. 12:04:17 """ 12:04:17 parsed_url = parse_url(url) 12:04:17 destination_scheme = parsed_url.scheme 12:04:17 12:04:17 if headers is None: 12:04:17 headers = self.headers 12:04:17 12:04:17 if not isinstance(retries, Retry): 12:04:17 retries = Retry.from_int(retries, redirect=redirect, default=self.retries) 12:04:17 12:04:17 if release_conn is None: 12:04:17 release_conn = preload_content 12:04:17 12:04:17 # Check host 12:04:17 if assert_same_host and not self.is_same_host(url): 12:04:17 raise HostChangedError(self, url, retries) 12:04:17 12:04:17 # Ensure that the URL we're connecting to is properly encoded 12:04:17 if url.startswith("/"): 12:04:17 url = to_str(_encode_target(url)) 12:04:17 else: 12:04:17 url = to_str(parsed_url.url) 12:04:17 12:04:17 conn = None 12:04:17 12:04:17 # Track whether `conn` needs to be released before 12:04:17 # returning/raising/recursing. Update this variable if necessary, and 12:04:17 # leave `release_conn` constant throughout the function. That way, if 12:04:17 # the function recurses, the original value of `release_conn` will be 12:04:17 # passed down into the recursive call, and its value will be respected. 12:04:17 # 12:04:17 # See issue #651 [1] for details. 12:04:17 # 12:04:17 # [1] 12:04:17 release_this_conn = release_conn 12:04:17 12:04:17 http_tunnel_required = connection_requires_http_tunnel( 12:04:17 self.proxy, self.proxy_config, destination_scheme 12:04:17 ) 12:04:17 12:04:17 # Merge the proxy headers. Only done when not using HTTP CONNECT. We 12:04:17 # have to copy the headers dict so we can safely change it without those 12:04:17 # changes being reflected in anyone else's copy. 12:04:17 if not http_tunnel_required: 12:04:17 headers = headers.copy() # type: ignore[attr-defined] 12:04:17 headers.update(self.proxy_headers) # type: ignore[union-attr] 12:04:17 12:04:17 # Must keep the exception bound to a separate variable or else Python 3 12:04:17 # complains about UnboundLocalError. 12:04:17 err = None 12:04:17 12:04:17 # Keep track of whether we cleanly exited the except block. This 12:04:17 # ensures we do proper cleanup in finally. 12:04:17 clean_exit = False 12:04:17 12:04:17 # Rewind body position, if needed. Record current position 12:04:17 # for future rewinds in the event of a redirect/retry. 12:04:17 body_pos = set_file_position(body, body_pos) 12:04:17 12:04:17 try: 12:04:17 # Request a connection from the queue. 12:04:17 timeout_obj = self._get_timeout(timeout) 12:04:17 conn = self._get_conn(timeout=pool_timeout) 12:04:17 12:04:17 conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] 12:04:17 12:04:17 # Is this a closed/new connection that requires CONNECT tunnelling? 12:04:17 if self.proxy is not None and http_tunnel_required and conn.is_closed: 12:04:17 try: 12:04:17 self._prepare_proxy(conn) 12:04:17 except (BaseSSLError, OSError, SocketTimeout) as e: 12:04:17 self._raise_timeout( 12:04:17 err=e, url=self.proxy.url, timeout_value=conn.timeout 12:04:17 ) 12:04:17 raise 12:04:17 12:04:17 # If we're going to release the connection in ``finally:``, then 12:04:17 # the response doesn't need to know about the connection. Otherwise 12:04:17 # it will also try to release it and we'll have a double-release 12:04:17 # mess. 
12:04:17 response_conn = conn if not release_conn else None 12:04:17 12:04:17 # Make the request on the HTTPConnection object 12:04:17 > response = self._make_request( 12:04:17 conn, 12:04:17 method, 12:04:17 url, 12:04:17 timeout=timeout_obj, 12:04:17 body=body, 12:04:17 headers=headers, 12:04:17 chunked=chunked, 12:04:17 retries=retries, 12:04:17 response_conn=response_conn, 12:04:17 preload_content=preload_content, 12:04:17 decode_content=decode_content, 12:04:17 **response_kw, 12:04:17 ) 12:04:17 12:04:17 ../.tox/tests221/lib/python3.11/site-packages/urllib3/connectionpool.py:787: 12:04:17 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 12:04:17 ../.tox/tests221/lib/python3.11/site-packages/urllib3/connectionpool.py:493: in _make_request 12:04:17 conn.request( 12:04:17 ../.tox/tests221/lib/python3.11/site-packages/urllib3/connection.py:494: in request 12:04:17 self.endheaders() 12:04:17 /opt/pyenv/versions/3.11.10/lib/python3.11/http/client.py:1298: in endheaders 12:04:17 self._send_output(message_body, encode_chunked=encode_chunked) 12:04:17 /opt/pyenv/versions/3.11.10/lib/python3.11/http/client.py:1058: in _send_output 12:04:17 self.send(msg) 12:04:17 /opt/pyenv/versions/3.11.10/lib/python3.11/http/client.py:996: in send 12:04:17 self.connect() 12:04:17 ../.tox/tests221/lib/python3.11/site-packages/urllib3/connection.py:325: in connect 12:04:17 self.sock = self._new_conn() 12:04:17 ^^^^^^^^^^^^^^^^ 12:04:17 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 12:04:17 12:04:17 self = 12:04:17 12:04:17 def _new_conn(self) -> socket.socket: 12:04:17 """Establish a socket connection and set nodelay settings on it. 12:04:17 12:04:17 :return: New socket connection. 12:04:17 """ 12:04:17 try: 12:04:17 sock = connection.create_connection( 12:04:17 (self._dns_host, self.port), 12:04:17 self.timeout, 12:04:17 source_address=self.source_address, 12:04:17 socket_options=self.socket_options, 12:04:17 ) 12:04:17 except socket.gaierror as e: 12:04:17 raise NameResolutionError(self.host, self, e) from e 12:04:17 except SocketTimeout as e: 12:04:17 raise ConnectTimeoutError( 12:04:17 self, 12:04:17 f"Connection to {self.host} timed out. (connect timeout={self.timeout})", 12:04:17 ) from e 12:04:17 12:04:17 except OSError as e: 12:04:17 > raise NewConnectionError( 12:04:17 self, f"Failed to establish a new connection: {e}" 12:04:17 ) from e 12:04:17 E urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused 12:04:17 12:04:17 ../.tox/tests221/lib/python3.11/site-packages/urllib3/connection.py:213: NewConnectionError 12:04:17 12:04:17 The above exception was the direct cause of the following exception: 12:04:17 12:04:17 self = 12:04:17 request = , stream = False 12:04:17 timeout = Timeout(connect=30, read=30, total=None), verify = True, cert = None 12:04:17 proxies = OrderedDict() 12:04:17 12:04:17 def send( 12:04:17 self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 12:04:17 ): 12:04:17 """Sends PreparedRequest object. Returns Response object. 12:04:17 12:04:17 :param request: The :class:`PreparedRequest ` being sent. 12:04:17 :param stream: (optional) Whether to stream the request content. 12:04:17 :param timeout: (optional) How long to wait for the server to send 12:04:17 data before giving up, as a float, or a :ref:`(connect timeout, 12:04:17 read timeout) ` tuple. 
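The timeout value shown in these frames, Timeout(connect=30, read=30, total=None), is what requests builds when the caller passes a single number; the exact call inside transportpce_tests/common/test_utils.py is not shown here, so the snippet below is only an illustration of the two accepted forms. With the controller still down, both calls raise the same ConnectionError as above.

import requests

URL = ("http://localhost:8183/rests/data/"
       "transportpce-portmapping:network/nodes=SPDR-SA1")

# A single number sets both limits, which is what yields
# Timeout(connect=30, read=30, total=None) in the frames above.
requests.request("GET", URL, auth=("admin", "admin"), timeout=30)

# A (connect, read) tuple sets the two limits independently.
requests.request("GET", URL, auth=("admin", "admin"), timeout=(5, 30))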
12:04:17 :type timeout: float or tuple or urllib3 Timeout object 12:04:17 :param verify: (optional) Either a boolean, in which case it controls whether 12:04:17 we verify the server's TLS certificate, or a string, in which case it 12:04:17 must be a path to a CA bundle to use 12:04:17 :param cert: (optional) Any user-provided SSL certificate to be trusted. 12:04:17 :param proxies: (optional) The proxies dictionary to apply to the request. 12:04:17 :rtype: requests.Response 12:04:17 """ 12:04:17 12:04:17 try: 12:04:17 conn = self.get_connection_with_tls_context( 12:04:17 request, verify, proxies=proxies, cert=cert 12:04:17 ) 12:04:17 except LocationValueError as e: 12:04:17 raise InvalidURL(e, request=request) 12:04:17 12:04:17 self.cert_verify(conn, request.url, verify, cert) 12:04:17 url = self.request_url(request, proxies) 12:04:17 self.add_headers( 12:04:17 request, 12:04:17 stream=stream, 12:04:17 timeout=timeout, 12:04:17 verify=verify, 12:04:17 cert=cert, 12:04:17 proxies=proxies, 12:04:17 ) 12:04:17 12:04:17 chunked = not (request.body is None or "Content-Length" in request.headers) 12:04:17 12:04:17 if isinstance(timeout, tuple): 12:04:17 try: 12:04:17 connect, read = timeout 12:04:17 timeout = TimeoutSauce(connect=connect, read=read) 12:04:17 except ValueError: 12:04:17 raise ValueError( 12:04:17 f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 12:04:17 f"or a single float to set both timeouts to the same value." 12:04:17 ) 12:04:17 elif isinstance(timeout, TimeoutSauce): 12:04:17 pass 12:04:17 else: 12:04:17 timeout = TimeoutSauce(connect=timeout, read=timeout) 12:04:17 12:04:17 try: 12:04:17 > resp = conn.urlopen( 12:04:17 method=request.method, 12:04:17 url=url, 12:04:17 body=request.body, 12:04:17 headers=request.headers, 12:04:17 redirect=False, 12:04:17 assert_same_host=False, 12:04:17 preload_content=False, 12:04:17 decode_content=False, 12:04:17 retries=self.max_retries, 12:04:17 timeout=timeout, 12:04:17 chunked=chunked, 12:04:17 ) 12:04:17 12:04:17 ../.tox/tests221/lib/python3.11/site-packages/requests/adapters.py:667: 12:04:17 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 12:04:17 ../.tox/tests221/lib/python3.11/site-packages/urllib3/connectionpool.py:841: in urlopen 12:04:17 retries = retries.increment( 12:04:17 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 12:04:17 12:04:17 self = Retry(total=0, connect=None, read=False, redirect=None, status=None) 12:04:17 method = 'GET' 12:04:17 url = '/rests/data/transportpce-portmapping:network/nodes=SPDR-SA1/switching-pool-lcp=3' 12:04:17 response = None 12:04:17 error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused') 12:04:17 _pool = 12:04:17 _stacktrace = 12:04:17 12:04:17 def increment( 12:04:17 self, 12:04:17 method: str | None = None, 12:04:17 url: str | None = None, 12:04:17 response: BaseHTTPResponse | None = None, 12:04:17 error: Exception | None = None, 12:04:17 _pool: ConnectionPool | None = None, 12:04:17 _stacktrace: TracebackType | None = None, 12:04:17 ) -> Self: 12:04:17 """Return a new Retry object with incremented retry counters. 12:04:17 12:04:17 :param response: A response object, or None, if the server did not 12:04:17 return a response. 12:04:17 :type response: :class:`~urllib3.response.BaseHTTPResponse` 12:04:17 :param Exception error: An error encountered during the request, or 12:04:17 None if the response was received successfully. 
12:04:17 12:04:17 :return: A new ``Retry`` object. 12:04:17 """ 12:04:17 if self.total is False and error: 12:04:17 # Disabled, indicate to re-raise the error. 12:04:17 raise reraise(type(error), error, _stacktrace) 12:04:17 12:04:17 total = self.total 12:04:17 if total is not None: 12:04:17 total -= 1 12:04:17 12:04:17 connect = self.connect 12:04:17 read = self.read 12:04:17 redirect = self.redirect 12:04:17 status_count = self.status 12:04:17 other = self.other 12:04:17 cause = "unknown" 12:04:17 status = None 12:04:17 redirect_location = None 12:04:17 12:04:17 if error and self._is_connection_error(error): 12:04:17 # Connect retry? 12:04:17 if connect is False: 12:04:17 raise reraise(type(error), error, _stacktrace) 12:04:17 elif connect is not None: 12:04:17 connect -= 1 12:04:17 12:04:17 elif error and self._is_read_error(error): 12:04:17 # Read retry? 12:04:17 if read is False or method is None or not self._is_method_retryable(method): 12:04:17 raise reraise(type(error), error, _stacktrace) 12:04:17 elif read is not None: 12:04:17 read -= 1 12:04:17 12:04:17 elif error: 12:04:17 # Other retry? 12:04:17 if other is not None: 12:04:17 other -= 1 12:04:17 12:04:17 elif response and response.get_redirect_location(): 12:04:17 # Redirect retry? 12:04:17 if redirect is not None: 12:04:17 redirect -= 1 12:04:17 cause = "too many redirects" 12:04:17 response_redirect_location = response.get_redirect_location() 12:04:17 if response_redirect_location: 12:04:17 redirect_location = response_redirect_location 12:04:17 status = response.status 12:04:17 12:04:17 else: 12:04:17 # Incrementing because of a server error like a 500 in 12:04:17 # status_forcelist and the given method is in the allowed_methods 12:04:17 cause = ResponseError.GENERIC_ERROR 12:04:17 if response and response.status: 12:04:17 if status_count is not None: 12:04:17 status_count -= 1 12:04:17 cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status) 12:04:17 status = response.status 12:04:17 12:04:17 history = self.history + ( 12:04:17 RequestHistory(method, url, error, status, redirect_location), 12:04:17 ) 12:04:17 12:04:17 new_retry = self.new( 12:04:17 total=total, 12:04:17 connect=connect, 12:04:17 read=read, 12:04:17 redirect=redirect, 12:04:17 status=status_count, 12:04:17 other=other, 12:04:17 history=history, 12:04:17 ) 12:04:17 12:04:17 if new_retry.is_exhausted(): 12:04:17 reason = error or ResponseError(cause) 12:04:17 > raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type] 12:04:17 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 12:04:17 E urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=8183): Max retries exceeded with url: /rests/data/transportpce-portmapping:network/nodes=SPDR-SA1/switching-pool-lcp=3 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 12:04:17 12:04:17 ../.tox/tests221/lib/python3.11/site-packages/urllib3/util/retry.py:519: MaxRetryError 12:04:17 12:04:17 During handling of the above exception, another exception occurred: 12:04:17 12:04:17 self = 12:04:17 12:04:17 def test_20_spdr_switching_pool_3(self): 12:04:17 > response = test_utils.get_portmapping_node_attr("SPDR-SA1", "switching-pool-lcp", "3") 12:04:17 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 12:04:17 12:04:17 transportpce_tests/2.2.1/test01_portmapping.py:241: 12:04:17 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 12:04:17 
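The Retry(total=0, connect=None, read=False, redirect=None, status=None) object shown in these frames matches requests' default adapter setting (max_retries=0), so the very first refused connection exhausts the budget and increment() raises MaxRetryError. A short sketch, purely as an illustration and not something the suite does, of how a session could be given a more forgiving retry policy while a controller is still starting up:

import requests
from requests.adapters import HTTPAdapter
from urllib3.util.retry import Retry

session = requests.Session()
# Retry the TCP connect a few times with backoff instead of the
# default Retry(total=0, ...) visible in the traceback above.
retry = Retry(total=5, connect=5, backoff_factor=1,
              status_forcelist=(502, 503, 504))
session.mount("http://", HTTPAdapter(max_retries=retry))

response = session.get(
    "http://localhost:8183/rests/data/transportpce-portmapping:network/nodes=SPDR-SA1",
    auth=("admin", "admin"), timeout=30)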
transportpce_tests/common/test_utils.py:492: in get_portmapping_node_attr 12:04:17 response = get_request(target_url) 12:04:17 ^^^^^^^^^^^^^^^^^^^^^^^ 12:04:17 transportpce_tests/common/test_utils.py:116: in get_request 12:04:17 return requests.request( 12:04:17 ../.tox/tests221/lib/python3.11/site-packages/requests/api.py:59: in request 12:04:17 return session.request(method=method, url=url, **kwargs) 12:04:17 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 12:04:17 ../.tox/tests221/lib/python3.11/site-packages/requests/sessions.py:589: in request 12:04:17 resp = self.send(prep, **send_kwargs) 12:04:17 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 12:04:17 ../.tox/tests221/lib/python3.11/site-packages/requests/sessions.py:703: in send 12:04:17 r = adapter.send(request, **kwargs) 12:04:17 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 12:04:17 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 12:04:17 12:04:17 self = 12:04:17 request = , stream = False 12:04:17 timeout = Timeout(connect=30, read=30, total=None), verify = True, cert = None 12:04:17 proxies = OrderedDict() 12:04:17 12:04:17 def send( 12:04:17 self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 12:04:17 ): 12:04:17 """Sends PreparedRequest object. Returns Response object. 12:04:17 12:04:17 :param request: The :class:`PreparedRequest ` being sent. 12:04:17 :param stream: (optional) Whether to stream the request content. 12:04:17 :param timeout: (optional) How long to wait for the server to send 12:04:17 data before giving up, as a float, or a :ref:`(connect timeout, 12:04:17 read timeout) ` tuple. 12:04:17 :type timeout: float or tuple or urllib3 Timeout object 12:04:17 :param verify: (optional) Either a boolean, in which case it controls whether 12:04:17 we verify the server's TLS certificate, or a string, in which case it 12:04:17 must be a path to a CA bundle to use 12:04:17 :param cert: (optional) Any user-provided SSL certificate to be trusted. 12:04:17 :param proxies: (optional) The proxies dictionary to apply to the request. 12:04:17 :rtype: requests.Response 12:04:17 """ 12:04:17 12:04:17 try: 12:04:17 conn = self.get_connection_with_tls_context( 12:04:17 request, verify, proxies=proxies, cert=cert 12:04:17 ) 12:04:17 except LocationValueError as e: 12:04:17 raise InvalidURL(e, request=request) 12:04:17 12:04:17 self.cert_verify(conn, request.url, verify, cert) 12:04:17 url = self.request_url(request, proxies) 12:04:17 self.add_headers( 12:04:17 request, 12:04:17 stream=stream, 12:04:17 timeout=timeout, 12:04:17 verify=verify, 12:04:17 cert=cert, 12:04:17 proxies=proxies, 12:04:17 ) 12:04:17 12:04:17 chunked = not (request.body is None or "Content-Length" in request.headers) 12:04:17 12:04:17 if isinstance(timeout, tuple): 12:04:17 try: 12:04:17 connect, read = timeout 12:04:17 timeout = TimeoutSauce(connect=connect, read=read) 12:04:17 except ValueError: 12:04:17 raise ValueError( 12:04:17 f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 12:04:17 f"or a single float to set both timeouts to the same value." 
12:04:17 ) 12:04:17 elif isinstance(timeout, TimeoutSauce): 12:04:17 pass 12:04:17 else: 12:04:17 timeout = TimeoutSauce(connect=timeout, read=timeout) 12:04:17 12:04:17 try: 12:04:17 resp = conn.urlopen( 12:04:17 method=request.method, 12:04:17 url=url, 12:04:17 body=request.body, 12:04:17 headers=request.headers, 12:04:17 redirect=False, 12:04:17 assert_same_host=False, 12:04:17 preload_content=False, 12:04:17 decode_content=False, 12:04:17 retries=self.max_retries, 12:04:17 timeout=timeout, 12:04:17 chunked=chunked, 12:04:17 ) 12:04:17 12:04:17 except (ProtocolError, OSError) as err: 12:04:17 raise ConnectionError(err, request=request) 12:04:17 12:04:17 except MaxRetryError as e: 12:04:17 if isinstance(e.reason, ConnectTimeoutError): 12:04:17 # TODO: Remove this in 3.0.0: see #2811 12:04:17 if not isinstance(e.reason, NewConnectionError): 12:04:17 raise ConnectTimeout(e, request=request) 12:04:17 12:04:17 if isinstance(e.reason, ResponseError): 12:04:17 raise RetryError(e, request=request) 12:04:17 12:04:17 if isinstance(e.reason, _ProxyError): 12:04:17 raise ProxyError(e, request=request) 12:04:17 12:04:17 if isinstance(e.reason, _SSLError): 12:04:17 # This branch is for urllib3 v1.22 and later. 12:04:17 raise SSLError(e, request=request) 12:04:17 12:04:17 > raise ConnectionError(e, request=request) 12:04:17 E requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=8183): Max retries exceeded with url: /rests/data/transportpce-portmapping:network/nodes=SPDR-SA1/switching-pool-lcp=3 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 12:04:17 12:04:17 ../.tox/tests221/lib/python3.11/site-packages/requests/adapters.py:700: ConnectionError 12:04:17 ----------------------------- Captured stdout call ----------------------------- 12:04:17 execution of test_20_spdr_switching_pool_3 12:04:17 _______ TransportPCEPortMappingTesting.test_21_spdr_portmapping_mappings _______ 12:04:17 12:04:17 self = 12:04:17 12:04:17 def _new_conn(self) -> socket.socket: 12:04:17 """Establish a socket connection and set nodelay settings on it. 12:04:17 12:04:17 :return: New socket connection. 12:04:17 """ 12:04:17 try: 12:04:17 > sock = connection.create_connection( 12:04:17 (self._dns_host, self.port), 12:04:17 self.timeout, 12:04:17 source_address=self.source_address, 12:04:17 socket_options=self.socket_options, 12:04:17 ) 12:04:17 12:04:17 ../.tox/tests221/lib/python3.11/site-packages/urllib3/connection.py:198: 12:04:17 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 12:04:17 ../.tox/tests221/lib/python3.11/site-packages/urllib3/util/connection.py:85: in create_connection 12:04:17 raise err 12:04:17 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 12:04:17 12:04:17 address = ('localhost', 8183), timeout = 30, source_address = None 12:04:17 socket_options = [(6, 1, 1)] 12:04:17 12:04:17 def create_connection( 12:04:17 address: tuple[str, int], 12:04:17 timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 12:04:17 source_address: tuple[str, int] | None = None, 12:04:17 socket_options: _TYPE_SOCKET_OPTIONS | None = None, 12:04:17 ) -> socket.socket: 12:04:17 """Connect to *address* and return the socket object. 12:04:17 12:04:17 Convenience function. Connect to *address* (a 2-tuple ``(host, 12:04:17 port)``) and return the socket object. Passing the optional 12:04:17 *timeout* parameter will set the timeout on the socket instance 12:04:17 before attempting to connect. 
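Every one of these failures bottoms out in the same place: sock.connect(('localhost', 8183)) is refused before any HTTP is spoken. A minimal sketch, assuming only the host and port taken from the frames above, that checks the TCP layer directly:

import socket

try:
    # Same (host, port) the tests target; refused while the controller is down.
    with socket.create_connection(("localhost", 8183), timeout=5):
        print("RESTCONF port is reachable")
except ConnectionRefusedError as err:
    print(f"connect failed: {err}")   # [Errno 111] Connection refused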
If no *timeout* is supplied, the 12:04:17 global default timeout setting returned by :func:`socket.getdefaulttimeout` 12:04:17 is used. If *source_address* is set it must be a tuple of (host, port) 12:04:17 for the socket to bind as a source address before making the connection. 12:04:17 An host of '' or port 0 tells the OS to use the default. 12:04:17 """ 12:04:17 12:04:17 host, port = address 12:04:17 if host.startswith("["): 12:04:17 host = host.strip("[]") 12:04:17 err = None 12:04:17 12:04:17 # Using the value from allowed_gai_family() in the context of getaddrinfo lets 12:04:17 # us select whether to work with IPv4 DNS records, IPv6 records, or both. 12:04:17 # The original create_connection function always returns all records. 12:04:17 family = allowed_gai_family() 12:04:17 12:04:17 try: 12:04:17 host.encode("idna") 12:04:17 except UnicodeError: 12:04:17 raise LocationParseError(f"'{host}', label empty or too long") from None 12:04:17 12:04:17 for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM): 12:04:17 af, socktype, proto, canonname, sa = res 12:04:17 sock = None 12:04:17 try: 12:04:17 sock = socket.socket(af, socktype, proto) 12:04:17 12:04:17 # If provided, set socket level options before connecting. 12:04:17 _set_socket_options(sock, socket_options) 12:04:17 12:04:17 if timeout is not _DEFAULT_TIMEOUT: 12:04:17 sock.settimeout(timeout) 12:04:17 if source_address: 12:04:17 sock.bind(source_address) 12:04:17 > sock.connect(sa) 12:04:17 E ConnectionRefusedError: [Errno 111] Connection refused 12:04:17 12:04:17 ../.tox/tests221/lib/python3.11/site-packages/urllib3/util/connection.py:73: ConnectionRefusedError 12:04:17 12:04:17 The above exception was the direct cause of the following exception: 12:04:17 12:04:17 self = 12:04:17 method = 'GET' 12:04:17 url = '/rests/data/transportpce-portmapping:network/nodes=SPDR-SA1', body = None 12:04:17 headers = {'User-Agent': 'python-requests/2.32.4', 'Accept-Encoding': 'gzip, deflate', 'Accept': 'application/json', 'Connection': 'keep-alive', 'Content-Type': 'application/json', 'Authorization': 'Basic YWRtaW46YWRtaW4='} 12:04:17 retries = Retry(total=0, connect=None, read=False, redirect=None, status=None) 12:04:17 redirect = False, assert_same_host = False 12:04:17 timeout = Timeout(connect=30, read=30, total=None), pool_timeout = None 12:04:17 release_conn = False, chunked = False, body_pos = None, preload_content = False 12:04:17 decode_content = False, response_kw = {} 12:04:17 parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/rests/data/transportpce-portmapping:network/nodes=SPDR-SA1', query=None, fragment=None) 12:04:17 destination_scheme = None, conn = None, release_this_conn = True 12:04:17 http_tunnel_required = False, err = None, clean_exit = False 12:04:17 12:04:17 def urlopen( # type: ignore[override] 12:04:17 self, 12:04:17 method: str, 12:04:17 url: str, 12:04:17 body: _TYPE_BODY | None = None, 12:04:17 headers: typing.Mapping[str, str] | None = None, 12:04:17 retries: Retry | bool | int | None = None, 12:04:17 redirect: bool = True, 12:04:17 assert_same_host: bool = True, 12:04:17 timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 12:04:17 pool_timeout: int | None = None, 12:04:17 release_conn: bool | None = None, 12:04:17 chunked: bool = False, 12:04:17 body_pos: _TYPE_BODY_POSITION | None = None, 12:04:17 preload_content: bool = True, 12:04:17 decode_content: bool = True, 12:04:17 **response_kw: typing.Any, 12:04:17 ) -> BaseHTTPResponse: 12:04:17 """ 12:04:17 Get a connection from the pool and 
perform an HTTP request. This is the 12:04:17 lowest level call for making a request, so you'll need to specify all 12:04:17 the raw details. 12:04:17 12:04:17 .. note:: 12:04:17 12:04:17 More commonly, it's appropriate to use a convenience method 12:04:17 such as :meth:`request`. 12:04:17 12:04:17 .. note:: 12:04:17 12:04:17 `release_conn` will only behave as expected if 12:04:17 `preload_content=False` because we want to make 12:04:17 `preload_content=False` the default behaviour someday soon without 12:04:17 breaking backwards compatibility. 12:04:17 12:04:17 :param method: 12:04:17 HTTP request method (such as GET, POST, PUT, etc.) 12:04:17 12:04:17 :param url: 12:04:17 The URL to perform the request on. 12:04:17 12:04:17 :param body: 12:04:17 Data to send in the request body, either :class:`str`, :class:`bytes`, 12:04:17 an iterable of :class:`str`/:class:`bytes`, or a file-like object. 12:04:17 12:04:17 :param headers: 12:04:17 Dictionary of custom headers to send, such as User-Agent, 12:04:17 If-None-Match, etc. If None, pool headers are used. If provided, 12:04:17 these headers completely replace any pool-specific headers. 12:04:17 12:04:17 :param retries: 12:04:17 Configure the number of retries to allow before raising a 12:04:17 :class:`~urllib3.exceptions.MaxRetryError` exception. 12:04:17 12:04:17 If ``None`` (default) will retry 3 times, see ``Retry.DEFAULT``. Pass a 12:04:17 :class:`~urllib3.util.retry.Retry` object for fine-grained control 12:04:17 over different types of retries. 12:04:17 Pass an integer number to retry connection errors that many times, 12:04:17 but no other types of errors. Pass zero to never retry. 12:04:17 12:04:17 If ``False``, then retries are disabled and any exception is raised 12:04:17 immediately. Also, instead of raising a MaxRetryError on redirects, 12:04:17 the redirect response will be returned. 12:04:17 12:04:17 :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. 12:04:17 12:04:17 :param redirect: 12:04:17 If True, automatically handle redirects (status codes 301, 302, 12:04:17 303, 307, 308). Each redirect counts as a retry. Disabling retries 12:04:17 will disable redirect, too. 12:04:17 12:04:17 :param assert_same_host: 12:04:17 If ``True``, will make sure that the host of the pool requests is 12:04:17 consistent else will raise HostChangedError. When ``False``, you can 12:04:17 use the pool on an HTTP proxy and request foreign hosts. 12:04:17 12:04:17 :param timeout: 12:04:17 If specified, overrides the default timeout for this one 12:04:17 request. It may be a float (in seconds) or an instance of 12:04:17 :class:`urllib3.util.Timeout`. 12:04:17 12:04:17 :param pool_timeout: 12:04:17 If set and the pool is set to block=True, then this method will 12:04:17 block for ``pool_timeout`` seconds and raise EmptyPoolError if no 12:04:17 connection is available within the time period. 12:04:17 12:04:17 :param bool preload_content: 12:04:17 If True, the response's body will be preloaded into memory. 12:04:17 12:04:17 :param bool decode_content: 12:04:17 If True, will attempt to decode the body based on the 12:04:17 'content-encoding' header. 12:04:17 12:04:17 :param release_conn: 12:04:17 If False, then the urlopen call will not release the connection 12:04:17 back into the pool once a response is received (but will release if 12:04:17 you read the entire contents of the response such as when 12:04:17 `preload_content=True`). This is useful if you're not preloading 12:04:17 the response's content immediately. 
You will need to call 12:04:17 ``r.release_conn()`` on the response ``r`` to return the connection 12:04:17 back into the pool. If None, it takes the value of ``preload_content`` 12:04:17 which defaults to ``True``. 12:04:17 12:04:17 :param bool chunked: 12:04:17 If True, urllib3 will send the body using chunked transfer 12:04:17 encoding. Otherwise, urllib3 will send the body using the standard 12:04:17 content-length form. Defaults to False. 12:04:17 12:04:17 :param int body_pos: 12:04:17 Position to seek to in file-like body in the event of a retry or 12:04:17 redirect. Typically this won't need to be set because urllib3 will 12:04:17 auto-populate the value when needed. 12:04:17 """ 12:04:17 parsed_url = parse_url(url) 12:04:17 destination_scheme = parsed_url.scheme 12:04:17 12:04:17 if headers is None: 12:04:17 headers = self.headers 12:04:17 12:04:17 if not isinstance(retries, Retry): 12:04:17 retries = Retry.from_int(retries, redirect=redirect, default=self.retries) 12:04:17 12:04:17 if release_conn is None: 12:04:17 release_conn = preload_content 12:04:17 12:04:17 # Check host 12:04:17 if assert_same_host and not self.is_same_host(url): 12:04:17 raise HostChangedError(self, url, retries) 12:04:17 12:04:17 # Ensure that the URL we're connecting to is properly encoded 12:04:17 if url.startswith("/"): 12:04:17 url = to_str(_encode_target(url)) 12:04:17 else: 12:04:17 url = to_str(parsed_url.url) 12:04:17 12:04:17 conn = None 12:04:17 12:04:17 # Track whether `conn` needs to be released before 12:04:17 # returning/raising/recursing. Update this variable if necessary, and 12:04:17 # leave `release_conn` constant throughout the function. That way, if 12:04:17 # the function recurses, the original value of `release_conn` will be 12:04:17 # passed down into the recursive call, and its value will be respected. 12:04:17 # 12:04:17 # See issue #651 [1] for details. 12:04:17 # 12:04:17 # [1] 12:04:17 release_this_conn = release_conn 12:04:17 12:04:17 http_tunnel_required = connection_requires_http_tunnel( 12:04:17 self.proxy, self.proxy_config, destination_scheme 12:04:17 ) 12:04:17 12:04:17 # Merge the proxy headers. Only done when not using HTTP CONNECT. We 12:04:17 # have to copy the headers dict so we can safely change it without those 12:04:17 # changes being reflected in anyone else's copy. 12:04:17 if not http_tunnel_required: 12:04:17 headers = headers.copy() # type: ignore[attr-defined] 12:04:17 headers.update(self.proxy_headers) # type: ignore[union-attr] 12:04:17 12:04:17 # Must keep the exception bound to a separate variable or else Python 3 12:04:17 # complains about UnboundLocalError. 12:04:17 err = None 12:04:17 12:04:17 # Keep track of whether we cleanly exited the except block. This 12:04:17 # ensures we do proper cleanup in finally. 12:04:17 clean_exit = False 12:04:17 12:04:17 # Rewind body position, if needed. Record current position 12:04:17 # for future rewinds in the event of a redirect/retry. 12:04:17 body_pos = set_file_position(body, body_pos) 12:04:17 12:04:17 try: 12:04:17 # Request a connection from the queue. 12:04:17 timeout_obj = self._get_timeout(timeout) 12:04:17 conn = self._get_conn(timeout=pool_timeout) 12:04:17 12:04:17 conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] 12:04:17 12:04:17 # Is this a closed/new connection that requires CONNECT tunnelling? 
12:04:17 if self.proxy is not None and http_tunnel_required and conn.is_closed: 12:04:17 try: 12:04:17 self._prepare_proxy(conn) 12:04:17 except (BaseSSLError, OSError, SocketTimeout) as e: 12:04:17 self._raise_timeout( 12:04:17 err=e, url=self.proxy.url, timeout_value=conn.timeout 12:04:17 ) 12:04:17 raise 12:04:17 12:04:17 # If we're going to release the connection in ``finally:``, then 12:04:17 # the response doesn't need to know about the connection. Otherwise 12:04:17 # it will also try to release it and we'll have a double-release 12:04:17 # mess. 12:04:17 response_conn = conn if not release_conn else None 12:04:17 12:04:17 # Make the request on the HTTPConnection object 12:04:17 > response = self._make_request( 12:04:17 conn, 12:04:17 method, 12:04:17 url, 12:04:17 timeout=timeout_obj, 12:04:17 body=body, 12:04:17 headers=headers, 12:04:17 chunked=chunked, 12:04:17 retries=retries, 12:04:17 response_conn=response_conn, 12:04:17 preload_content=preload_content, 12:04:17 decode_content=decode_content, 12:04:17 **response_kw, 12:04:17 ) 12:04:17 12:04:17 ../.tox/tests221/lib/python3.11/site-packages/urllib3/connectionpool.py:787: 12:04:17 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 12:04:17 ../.tox/tests221/lib/python3.11/site-packages/urllib3/connectionpool.py:493: in _make_request 12:04:17 conn.request( 12:04:17 ../.tox/tests221/lib/python3.11/site-packages/urllib3/connection.py:494: in request 12:04:17 self.endheaders() 12:04:17 /opt/pyenv/versions/3.11.10/lib/python3.11/http/client.py:1298: in endheaders 12:04:17 self._send_output(message_body, encode_chunked=encode_chunked) 12:04:17 /opt/pyenv/versions/3.11.10/lib/python3.11/http/client.py:1058: in _send_output 12:04:17 self.send(msg) 12:04:17 /opt/pyenv/versions/3.11.10/lib/python3.11/http/client.py:996: in send 12:04:17 self.connect() 12:04:17 ../.tox/tests221/lib/python3.11/site-packages/urllib3/connection.py:325: in connect 12:04:17 self.sock = self._new_conn() 12:04:17 ^^^^^^^^^^^^^^^^ 12:04:17 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 12:04:17 12:04:17 self = 12:04:17 12:04:17 def _new_conn(self) -> socket.socket: 12:04:17 """Establish a socket connection and set nodelay settings on it. 12:04:17 12:04:17 :return: New socket connection. 12:04:17 """ 12:04:17 try: 12:04:17 sock = connection.create_connection( 12:04:17 (self._dns_host, self.port), 12:04:17 self.timeout, 12:04:17 source_address=self.source_address, 12:04:17 socket_options=self.socket_options, 12:04:17 ) 12:04:17 except socket.gaierror as e: 12:04:17 raise NameResolutionError(self.host, self, e) from e 12:04:17 except SocketTimeout as e: 12:04:17 raise ConnectTimeoutError( 12:04:17 self, 12:04:17 f"Connection to {self.host} timed out. 
(connect timeout={self.timeout})", 12:04:17 ) from e 12:04:17 12:04:17 except OSError as e: 12:04:17 > raise NewConnectionError( 12:04:17 self, f"Failed to establish a new connection: {e}" 12:04:17 ) from e 12:04:17 E urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused 12:04:17 12:04:17 ../.tox/tests221/lib/python3.11/site-packages/urllib3/connection.py:213: NewConnectionError 12:04:17 12:04:17 The above exception was the direct cause of the following exception: 12:04:17 12:04:17 self = 12:04:17 request = , stream = False 12:04:17 timeout = Timeout(connect=30, read=30, total=None), verify = True, cert = None 12:04:17 proxies = OrderedDict() 12:04:17 12:04:17 def send( 12:04:17 self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 12:04:17 ): 12:04:17 """Sends PreparedRequest object. Returns Response object. 12:04:17 12:04:17 :param request: The :class:`PreparedRequest ` being sent. 12:04:17 :param stream: (optional) Whether to stream the request content. 12:04:17 :param timeout: (optional) How long to wait for the server to send 12:04:17 data before giving up, as a float, or a :ref:`(connect timeout, 12:04:17 read timeout) ` tuple. 12:04:17 :type timeout: float or tuple or urllib3 Timeout object 12:04:17 :param verify: (optional) Either a boolean, in which case it controls whether 12:04:17 we verify the server's TLS certificate, or a string, in which case it 12:04:17 must be a path to a CA bundle to use 12:04:17 :param cert: (optional) Any user-provided SSL certificate to be trusted. 12:04:17 :param proxies: (optional) The proxies dictionary to apply to the request. 12:04:17 :rtype: requests.Response 12:04:17 """ 12:04:17 12:04:17 try: 12:04:17 conn = self.get_connection_with_tls_context( 12:04:17 request, verify, proxies=proxies, cert=cert 12:04:17 ) 12:04:17 except LocationValueError as e: 12:04:17 raise InvalidURL(e, request=request) 12:04:17 12:04:17 self.cert_verify(conn, request.url, verify, cert) 12:04:17 url = self.request_url(request, proxies) 12:04:17 self.add_headers( 12:04:17 request, 12:04:17 stream=stream, 12:04:17 timeout=timeout, 12:04:17 verify=verify, 12:04:17 cert=cert, 12:04:17 proxies=proxies, 12:04:17 ) 12:04:17 12:04:17 chunked = not (request.body is None or "Content-Length" in request.headers) 12:04:17 12:04:17 if isinstance(timeout, tuple): 12:04:17 try: 12:04:17 connect, read = timeout 12:04:17 timeout = TimeoutSauce(connect=connect, read=read) 12:04:17 except ValueError: 12:04:17 raise ValueError( 12:04:17 f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 12:04:17 f"or a single float to set both timeouts to the same value." 
12:04:17 ) 12:04:17 elif isinstance(timeout, TimeoutSauce): 12:04:17 pass 12:04:17 else: 12:04:17 timeout = TimeoutSauce(connect=timeout, read=timeout) 12:04:17 12:04:17 try: 12:04:17 > resp = conn.urlopen( 12:04:17 method=request.method, 12:04:17 url=url, 12:04:17 body=request.body, 12:04:17 headers=request.headers, 12:04:17 redirect=False, 12:04:17 assert_same_host=False, 12:04:17 preload_content=False, 12:04:17 decode_content=False, 12:04:17 retries=self.max_retries, 12:04:17 timeout=timeout, 12:04:17 chunked=chunked, 12:04:17 ) 12:04:17 12:04:17 ../.tox/tests221/lib/python3.11/site-packages/requests/adapters.py:667: 12:04:17 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 12:04:17 ../.tox/tests221/lib/python3.11/site-packages/urllib3/connectionpool.py:841: in urlopen 12:04:17 retries = retries.increment( 12:04:17 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 12:04:17 12:04:17 self = Retry(total=0, connect=None, read=False, redirect=None, status=None) 12:04:17 method = 'GET' 12:04:17 url = '/rests/data/transportpce-portmapping:network/nodes=SPDR-SA1' 12:04:17 response = None 12:04:17 error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused') 12:04:17 _pool = 12:04:17 _stacktrace = 12:04:17 12:04:17 def increment( 12:04:17 self, 12:04:17 method: str | None = None, 12:04:17 url: str | None = None, 12:04:17 response: BaseHTTPResponse | None = None, 12:04:17 error: Exception | None = None, 12:04:17 _pool: ConnectionPool | None = None, 12:04:17 _stacktrace: TracebackType | None = None, 12:04:17 ) -> Self: 12:04:17 """Return a new Retry object with incremented retry counters. 12:04:17 12:04:17 :param response: A response object, or None, if the server did not 12:04:17 return a response. 12:04:17 :type response: :class:`~urllib3.response.BaseHTTPResponse` 12:04:17 :param Exception error: An error encountered during the request, or 12:04:17 None if the response was received successfully. 12:04:17 12:04:17 :return: A new ``Retry`` object. 12:04:17 """ 12:04:17 if self.total is False and error: 12:04:17 # Disabled, indicate to re-raise the error. 12:04:17 raise reraise(type(error), error, _stacktrace) 12:04:17 12:04:17 total = self.total 12:04:17 if total is not None: 12:04:17 total -= 1 12:04:17 12:04:17 connect = self.connect 12:04:17 read = self.read 12:04:17 redirect = self.redirect 12:04:17 status_count = self.status 12:04:17 other = self.other 12:04:17 cause = "unknown" 12:04:17 status = None 12:04:17 redirect_location = None 12:04:17 12:04:17 if error and self._is_connection_error(error): 12:04:17 # Connect retry? 12:04:17 if connect is False: 12:04:17 raise reraise(type(error), error, _stacktrace) 12:04:17 elif connect is not None: 12:04:17 connect -= 1 12:04:17 12:04:17 elif error and self._is_read_error(error): 12:04:17 # Read retry? 12:04:17 if read is False or method is None or not self._is_method_retryable(method): 12:04:17 raise reraise(type(error), error, _stacktrace) 12:04:17 elif read is not None: 12:04:17 read -= 1 12:04:17 12:04:17 elif error: 12:04:17 # Other retry? 12:04:17 if other is not None: 12:04:17 other -= 1 12:04:17 12:04:17 elif response and response.get_redirect_location(): 12:04:17 # Redirect retry? 
12:04:17 if redirect is not None: 12:04:17 redirect -= 1 12:04:17 cause = "too many redirects" 12:04:17 response_redirect_location = response.get_redirect_location() 12:04:17 if response_redirect_location: 12:04:17 redirect_location = response_redirect_location 12:04:17 status = response.status 12:04:17 12:04:17 else: 12:04:17 # Incrementing because of a server error like a 500 in 12:04:17 # status_forcelist and the given method is in the allowed_methods 12:04:17 cause = ResponseError.GENERIC_ERROR 12:04:17 if response and response.status: 12:04:17 if status_count is not None: 12:04:17 status_count -= 1 12:04:17 cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status) 12:04:17 status = response.status 12:04:17 12:04:17 history = self.history + ( 12:04:17 RequestHistory(method, url, error, status, redirect_location), 12:04:17 ) 12:04:17 12:04:17 new_retry = self.new( 12:04:17 total=total, 12:04:17 connect=connect, 12:04:17 read=read, 12:04:17 redirect=redirect, 12:04:17 status=status_count, 12:04:17 other=other, 12:04:17 history=history, 12:04:17 ) 12:04:17 12:04:17 if new_retry.is_exhausted(): 12:04:17 reason = error or ResponseError(cause) 12:04:17 > raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type] 12:04:17 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 12:04:17 E urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=8183): Max retries exceeded with url: /rests/data/transportpce-portmapping:network/nodes=SPDR-SA1 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 12:04:17 12:04:17 ../.tox/tests221/lib/python3.11/site-packages/urllib3/util/retry.py:519: MaxRetryError 12:04:17 12:04:17 During handling of the above exception, another exception occurred: 12:04:17 12:04:17 self = 12:04:17 12:04:17 def test_21_spdr_portmapping_mappings(self): 12:04:17 > response = test_utils.get_portmapping_node_attr("SPDR-SA1", None, None) 12:04:17 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 12:04:17 12:04:17 transportpce_tests/2.2.1/test01_portmapping.py:255: 12:04:17 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 12:04:17 transportpce_tests/common/test_utils.py:492: in get_portmapping_node_attr 12:04:17 response = get_request(target_url) 12:04:17 ^^^^^^^^^^^^^^^^^^^^^^^ 12:04:17 transportpce_tests/common/test_utils.py:116: in get_request 12:04:17 return requests.request( 12:04:17 ../.tox/tests221/lib/python3.11/site-packages/requests/api.py:59: in request 12:04:17 return session.request(method=method, url=url, **kwargs) 12:04:17 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 12:04:17 ../.tox/tests221/lib/python3.11/site-packages/requests/sessions.py:589: in request 12:04:17 resp = self.send(prep, **send_kwargs) 12:04:17 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 12:04:17 ../.tox/tests221/lib/python3.11/site-packages/requests/sessions.py:703: in send 12:04:17 r = adapter.send(request, **kwargs) 12:04:17 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 12:04:17 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 12:04:17 12:04:17 self = 12:04:17 request = , stream = False 12:04:17 timeout = Timeout(connect=30, read=30, total=None), verify = True, cert = None 12:04:17 proxies = OrderedDict() 12:04:17 12:04:17 def send( 12:04:17 self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 12:04:17 ): 12:04:17 """Sends PreparedRequest object. Returns Response object. 
12:04:17 12:04:17 :param request: The :class:`PreparedRequest ` being sent. 12:04:17 :param stream: (optional) Whether to stream the request content. 12:04:17 :param timeout: (optional) How long to wait for the server to send 12:04:17 data before giving up, as a float, or a :ref:`(connect timeout, 12:04:17 read timeout) ` tuple. 12:04:17 :type timeout: float or tuple or urllib3 Timeout object 12:04:17 :param verify: (optional) Either a boolean, in which case it controls whether 12:04:17 we verify the server's TLS certificate, or a string, in which case it 12:04:17 must be a path to a CA bundle to use 12:04:17 :param cert: (optional) Any user-provided SSL certificate to be trusted. 12:04:17 :param proxies: (optional) The proxies dictionary to apply to the request. 12:04:17 :rtype: requests.Response 12:04:17 """ 12:04:17 12:04:17 try: 12:04:17 conn = self.get_connection_with_tls_context( 12:04:17 request, verify, proxies=proxies, cert=cert 12:04:17 ) 12:04:17 except LocationValueError as e: 12:04:17 raise InvalidURL(e, request=request) 12:04:17 12:04:17 self.cert_verify(conn, request.url, verify, cert) 12:04:17 url = self.request_url(request, proxies) 12:04:17 self.add_headers( 12:04:17 request, 12:04:17 stream=stream, 12:04:17 timeout=timeout, 12:04:17 verify=verify, 12:04:17 cert=cert, 12:04:17 proxies=proxies, 12:04:17 ) 12:04:17 12:04:17 chunked = not (request.body is None or "Content-Length" in request.headers) 12:04:17 12:04:17 if isinstance(timeout, tuple): 12:04:17 try: 12:04:17 connect, read = timeout 12:04:17 timeout = TimeoutSauce(connect=connect, read=read) 12:04:17 except ValueError: 12:04:17 raise ValueError( 12:04:17 f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 12:04:17 f"or a single float to set both timeouts to the same value." 12:04:17 ) 12:04:17 elif isinstance(timeout, TimeoutSauce): 12:04:17 pass 12:04:17 else: 12:04:17 timeout = TimeoutSauce(connect=timeout, read=timeout) 12:04:17 12:04:17 try: 12:04:17 resp = conn.urlopen( 12:04:17 method=request.method, 12:04:17 url=url, 12:04:17 body=request.body, 12:04:17 headers=request.headers, 12:04:17 redirect=False, 12:04:17 assert_same_host=False, 12:04:17 preload_content=False, 12:04:17 decode_content=False, 12:04:17 retries=self.max_retries, 12:04:17 timeout=timeout, 12:04:17 chunked=chunked, 12:04:17 ) 12:04:17 12:04:17 except (ProtocolError, OSError) as err: 12:04:17 raise ConnectionError(err, request=request) 12:04:17 12:04:17 except MaxRetryError as e: 12:04:17 if isinstance(e.reason, ConnectTimeoutError): 12:04:17 # TODO: Remove this in 3.0.0: see #2811 12:04:17 if not isinstance(e.reason, NewConnectionError): 12:04:17 raise ConnectTimeout(e, request=request) 12:04:17 12:04:17 if isinstance(e.reason, ResponseError): 12:04:17 raise RetryError(e, request=request) 12:04:17 12:04:17 if isinstance(e.reason, _ProxyError): 12:04:17 raise ProxyError(e, request=request) 12:04:17 12:04:17 if isinstance(e.reason, _SSLError): 12:04:17 # This branch is for urllib3 v1.22 and later. 
12:04:17 raise SSLError(e, request=request) 12:04:17 12:04:17 > raise ConnectionError(e, request=request) 12:04:17 E requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=8183): Max retries exceeded with url: /rests/data/transportpce-portmapping:network/nodes=SPDR-SA1 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 12:04:17 12:04:17 ../.tox/tests221/lib/python3.11/site-packages/requests/adapters.py:700: ConnectionError 12:04:17 ----------------------------- Captured stdout call ----------------------------- 12:04:17 execution of test_21_spdr_portmapping_mappings 12:04:17 ____ TransportPCEPortMappingTesting.test_22_spdr_portmapping_XPDR1_CLIENT1 _____ 12:04:17 12:04:17 self = 12:04:17 12:04:17 def _new_conn(self) -> socket.socket: 12:04:17 """Establish a socket connection and set nodelay settings on it. 12:04:17 12:04:17 :return: New socket connection. 12:04:17 """ 12:04:17 try: 12:04:17 > sock = connection.create_connection( 12:04:17 (self._dns_host, self.port), 12:04:17 self.timeout, 12:04:17 source_address=self.source_address, 12:04:17 socket_options=self.socket_options, 12:04:17 ) 12:04:17 12:04:17 ../.tox/tests221/lib/python3.11/site-packages/urllib3/connection.py:198: 12:04:17 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 12:04:17 ../.tox/tests221/lib/python3.11/site-packages/urllib3/util/connection.py:85: in create_connection 12:04:17 raise err 12:04:17 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 12:04:17 12:04:17 address = ('localhost', 8183), timeout = 30, source_address = None 12:04:17 socket_options = [(6, 1, 1)] 12:04:17 12:04:17 def create_connection( 12:04:17 address: tuple[str, int], 12:04:17 timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 12:04:17 source_address: tuple[str, int] | None = None, 12:04:17 socket_options: _TYPE_SOCKET_OPTIONS | None = None, 12:04:17 ) -> socket.socket: 12:04:17 """Connect to *address* and return the socket object. 12:04:17 12:04:17 Convenience function. Connect to *address* (a 2-tuple ``(host, 12:04:17 port)``) and return the socket object. Passing the optional 12:04:17 *timeout* parameter will set the timeout on the socket instance 12:04:17 before attempting to connect. If no *timeout* is supplied, the 12:04:17 global default timeout setting returned by :func:`socket.getdefaulttimeout` 12:04:17 is used. If *source_address* is set it must be a tuple of (host, port) 12:04:17 for the socket to bind as a source address before making the connection. 12:04:17 An host of '' or port 0 tells the OS to use the default. 12:04:17 """ 12:04:17 12:04:17 host, port = address 12:04:17 if host.startswith("["): 12:04:17 host = host.strip("[]") 12:04:17 err = None 12:04:17 12:04:17 # Using the value from allowed_gai_family() in the context of getaddrinfo lets 12:04:17 # us select whether to work with IPv4 DNS records, IPv6 records, or both. 12:04:17 # The original create_connection function always returns all records. 12:04:17 family = allowed_gai_family() 12:04:17 12:04:17 try: 12:04:17 host.encode("idna") 12:04:17 except UnicodeError: 12:04:17 raise LocationParseError(f"'{host}', label empty or too long") from None 12:04:17 12:04:17 for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM): 12:04:17 af, socktype, proto, canonname, sa = res 12:04:17 sock = None 12:04:17 try: 12:04:17 sock = socket.socket(af, socktype, proto) 12:04:17 12:04:17 # If provided, set socket level options before connecting. 
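For reference, the socket_options value [(6, 1, 1)] shown in these frames is urllib3's default (socket.IPPROTO_TCP, socket.TCP_NODELAY, 1) on Linux, i.e. Nagle's algorithm is disabled on every new connection. Roughly what _set_socket_options does with that list:

    import socket

    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    # Apply the option tuple from the traceback: disable Nagle's algorithm.
    for level, optname, value in [(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)]:
        sock.setsockopt(level, optname, value)
    sock.close()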
12:04:17 _set_socket_options(sock, socket_options) 12:04:17 12:04:17 if timeout is not _DEFAULT_TIMEOUT: 12:04:17 sock.settimeout(timeout) 12:04:17 if source_address: 12:04:17 sock.bind(source_address) 12:04:17 > sock.connect(sa) 12:04:17 E ConnectionRefusedError: [Errno 111] Connection refused 12:04:17 12:04:17 ../.tox/tests221/lib/python3.11/site-packages/urllib3/util/connection.py:73: ConnectionRefusedError 12:04:17 12:04:17 The above exception was the direct cause of the following exception: 12:04:17 12:04:17 self = 12:04:17 method = 'GET' 12:04:17 url = '/rests/data/transportpce-portmapping:network/nodes=SPDR-SA1/mapping=XPDR1-CLIENT1' 12:04:17 body = None 12:04:17 headers = {'User-Agent': 'python-requests/2.32.4', 'Accept-Encoding': 'gzip, deflate', 'Accept': 'application/json', 'Connection': 'keep-alive', 'Content-Type': 'application/json', 'Authorization': 'Basic YWRtaW46YWRtaW4='} 12:04:17 retries = Retry(total=0, connect=None, read=False, redirect=None, status=None) 12:04:17 redirect = False, assert_same_host = False 12:04:17 timeout = Timeout(connect=30, read=30, total=None), pool_timeout = None 12:04:17 release_conn = False, chunked = False, body_pos = None, preload_content = False 12:04:17 decode_content = False, response_kw = {} 12:04:17 parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/rests/data/transportpce-portmapping:network/nodes=SPDR-SA1/mapping=XPDR1-CLIENT1', query=None, fragment=None) 12:04:17 destination_scheme = None, conn = None, release_this_conn = True 12:04:17 http_tunnel_required = False, err = None, clean_exit = False 12:04:17 12:04:17 def urlopen( # type: ignore[override] 12:04:17 self, 12:04:17 method: str, 12:04:17 url: str, 12:04:17 body: _TYPE_BODY | None = None, 12:04:17 headers: typing.Mapping[str, str] | None = None, 12:04:17 retries: Retry | bool | int | None = None, 12:04:17 redirect: bool = True, 12:04:17 assert_same_host: bool = True, 12:04:17 timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 12:04:17 pool_timeout: int | None = None, 12:04:17 release_conn: bool | None = None, 12:04:17 chunked: bool = False, 12:04:17 body_pos: _TYPE_BODY_POSITION | None = None, 12:04:17 preload_content: bool = True, 12:04:17 decode_content: bool = True, 12:04:17 **response_kw: typing.Any, 12:04:17 ) -> BaseHTTPResponse: 12:04:17 """ 12:04:17 Get a connection from the pool and perform an HTTP request. This is the 12:04:17 lowest level call for making a request, so you'll need to specify all 12:04:17 the raw details. 12:04:17 12:04:17 .. note:: 12:04:17 12:04:17 More commonly, it's appropriate to use a convenience method 12:04:17 such as :meth:`request`. 12:04:17 12:04:17 .. note:: 12:04:17 12:04:17 `release_conn` will only behave as expected if 12:04:17 `preload_content=False` because we want to make 12:04:17 `preload_content=False` the default behaviour someday soon without 12:04:17 breaking backwards compatibility. 12:04:17 12:04:17 :param method: 12:04:17 HTTP request method (such as GET, POST, PUT, etc.) 12:04:17 12:04:17 :param url: 12:04:17 The URL to perform the request on. 12:04:17 12:04:17 :param body: 12:04:17 Data to send in the request body, either :class:`str`, :class:`bytes`, 12:04:17 an iterable of :class:`str`/:class:`bytes`, or a file-like object. 12:04:17 12:04:17 :param headers: 12:04:17 Dictionary of custom headers to send, such as User-Agent, 12:04:17 If-None-Match, etc. If None, pool headers are used. If provided, 12:04:17 these headers completely replace any pool-specific headers. 
12:04:17 12:04:17 :param retries: 12:04:17 Configure the number of retries to allow before raising a 12:04:17 :class:`~urllib3.exceptions.MaxRetryError` exception. 12:04:17 12:04:17 If ``None`` (default) will retry 3 times, see ``Retry.DEFAULT``. Pass a 12:04:17 :class:`~urllib3.util.retry.Retry` object for fine-grained control 12:04:17 over different types of retries. 12:04:17 Pass an integer number to retry connection errors that many times, 12:04:17 but no other types of errors. Pass zero to never retry. 12:04:17 12:04:17 If ``False``, then retries are disabled and any exception is raised 12:04:17 immediately. Also, instead of raising a MaxRetryError on redirects, 12:04:17 the redirect response will be returned. 12:04:17 12:04:17 :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. 12:04:17 12:04:17 :param redirect: 12:04:17 If True, automatically handle redirects (status codes 301, 302, 12:04:17 303, 307, 308). Each redirect counts as a retry. Disabling retries 12:04:17 will disable redirect, too. 12:04:17 12:04:17 :param assert_same_host: 12:04:17 If ``True``, will make sure that the host of the pool requests is 12:04:17 consistent else will raise HostChangedError. When ``False``, you can 12:04:17 use the pool on an HTTP proxy and request foreign hosts. 12:04:17 12:04:17 :param timeout: 12:04:17 If specified, overrides the default timeout for this one 12:04:17 request. It may be a float (in seconds) or an instance of 12:04:17 :class:`urllib3.util.Timeout`. 12:04:17 12:04:17 :param pool_timeout: 12:04:17 If set and the pool is set to block=True, then this method will 12:04:17 block for ``pool_timeout`` seconds and raise EmptyPoolError if no 12:04:17 connection is available within the time period. 12:04:17 12:04:17 :param bool preload_content: 12:04:17 If True, the response's body will be preloaded into memory. 12:04:17 12:04:17 :param bool decode_content: 12:04:17 If True, will attempt to decode the body based on the 12:04:17 'content-encoding' header. 12:04:17 12:04:17 :param release_conn: 12:04:17 If False, then the urlopen call will not release the connection 12:04:17 back into the pool once a response is received (but will release if 12:04:17 you read the entire contents of the response such as when 12:04:17 `preload_content=True`). This is useful if you're not preloading 12:04:17 the response's content immediately. You will need to call 12:04:17 ``r.release_conn()`` on the response ``r`` to return the connection 12:04:17 back into the pool. If None, it takes the value of ``preload_content`` 12:04:17 which defaults to ``True``. 12:04:17 12:04:17 :param bool chunked: 12:04:17 If True, urllib3 will send the body using chunked transfer 12:04:17 encoding. Otherwise, urllib3 will send the body using the standard 12:04:17 content-length form. Defaults to False. 12:04:17 12:04:17 :param int body_pos: 12:04:17 Position to seek to in file-like body in the event of a retry or 12:04:17 redirect. Typically this won't need to be set because urllib3 will 12:04:17 auto-populate the value when needed. 
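To see the retries parameter described above in isolation from requests, urlopen() can be called with an explicit Retry so that connection errors are retried before MaxRetryError is raised. A sketch only, assuming (as in this run) that nothing listens on localhost:8183:

    import urllib3
    from urllib3.util.retry import Retry

    pool = urllib3.HTTPConnectionPool("localhost", 8183)
    try:
        # Allow two extra connection attempts with a short back-off instead of
        # the immediate failure produced by Retry(total=0, ...).
        pool.urlopen("GET", "/rests/data/transportpce-portmapping:network",
                     retries=Retry(connect=2, backoff_factor=0.5))
    except urllib3.exceptions.MaxRetryError as exc:
        print(exc.reason)   # NewConnectionError(... Connection refused) while the port is closed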
12:04:17 """ 12:04:17 parsed_url = parse_url(url) 12:04:17 destination_scheme = parsed_url.scheme 12:04:17 12:04:17 if headers is None: 12:04:17 headers = self.headers 12:04:17 12:04:17 if not isinstance(retries, Retry): 12:04:17 retries = Retry.from_int(retries, redirect=redirect, default=self.retries) 12:04:17 12:04:17 if release_conn is None: 12:04:17 release_conn = preload_content 12:04:17 12:04:17 # Check host 12:04:17 if assert_same_host and not self.is_same_host(url): 12:04:17 raise HostChangedError(self, url, retries) 12:04:17 12:04:17 # Ensure that the URL we're connecting to is properly encoded 12:04:17 if url.startswith("/"): 12:04:17 url = to_str(_encode_target(url)) 12:04:17 else: 12:04:17 url = to_str(parsed_url.url) 12:04:17 12:04:17 conn = None 12:04:17 12:04:17 # Track whether `conn` needs to be released before 12:04:17 # returning/raising/recursing. Update this variable if necessary, and 12:04:17 # leave `release_conn` constant throughout the function. That way, if 12:04:17 # the function recurses, the original value of `release_conn` will be 12:04:17 # passed down into the recursive call, and its value will be respected. 12:04:17 # 12:04:17 # See issue #651 [1] for details. 12:04:17 # 12:04:17 # [1] 12:04:17 release_this_conn = release_conn 12:04:17 12:04:17 http_tunnel_required = connection_requires_http_tunnel( 12:04:17 self.proxy, self.proxy_config, destination_scheme 12:04:17 ) 12:04:17 12:04:17 # Merge the proxy headers. Only done when not using HTTP CONNECT. We 12:04:17 # have to copy the headers dict so we can safely change it without those 12:04:17 # changes being reflected in anyone else's copy. 12:04:17 if not http_tunnel_required: 12:04:17 headers = headers.copy() # type: ignore[attr-defined] 12:04:17 headers.update(self.proxy_headers) # type: ignore[union-attr] 12:04:17 12:04:17 # Must keep the exception bound to a separate variable or else Python 3 12:04:17 # complains about UnboundLocalError. 12:04:17 err = None 12:04:17 12:04:17 # Keep track of whether we cleanly exited the except block. This 12:04:17 # ensures we do proper cleanup in finally. 12:04:17 clean_exit = False 12:04:17 12:04:17 # Rewind body position, if needed. Record current position 12:04:17 # for future rewinds in the event of a redirect/retry. 12:04:17 body_pos = set_file_position(body, body_pos) 12:04:17 12:04:17 try: 12:04:17 # Request a connection from the queue. 12:04:17 timeout_obj = self._get_timeout(timeout) 12:04:17 conn = self._get_conn(timeout=pool_timeout) 12:04:17 12:04:17 conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] 12:04:17 12:04:17 # Is this a closed/new connection that requires CONNECT tunnelling? 12:04:17 if self.proxy is not None and http_tunnel_required and conn.is_closed: 12:04:17 try: 12:04:17 self._prepare_proxy(conn) 12:04:17 except (BaseSSLError, OSError, SocketTimeout) as e: 12:04:17 self._raise_timeout( 12:04:17 err=e, url=self.proxy.url, timeout_value=conn.timeout 12:04:17 ) 12:04:17 raise 12:04:17 12:04:17 # If we're going to release the connection in ``finally:``, then 12:04:17 # the response doesn't need to know about the connection. Otherwise 12:04:17 # it will also try to release it and we'll have a double-release 12:04:17 # mess. 
12:04:17 response_conn = conn if not release_conn else None 12:04:17 12:04:17 # Make the request on the HTTPConnection object 12:04:17 > response = self._make_request( 12:04:17 conn, 12:04:17 method, 12:04:17 url, 12:04:17 timeout=timeout_obj, 12:04:17 body=body, 12:04:17 headers=headers, 12:04:17 chunked=chunked, 12:04:17 retries=retries, 12:04:17 response_conn=response_conn, 12:04:17 preload_content=preload_content, 12:04:17 decode_content=decode_content, 12:04:17 **response_kw, 12:04:17 ) 12:04:17 12:04:17 ../.tox/tests221/lib/python3.11/site-packages/urllib3/connectionpool.py:787: 12:04:17 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 12:04:17 ../.tox/tests221/lib/python3.11/site-packages/urllib3/connectionpool.py:493: in _make_request 12:04:17 conn.request( 12:04:17 ../.tox/tests221/lib/python3.11/site-packages/urllib3/connection.py:494: in request 12:04:17 self.endheaders() 12:04:17 /opt/pyenv/versions/3.11.10/lib/python3.11/http/client.py:1298: in endheaders 12:04:17 self._send_output(message_body, encode_chunked=encode_chunked) 12:04:17 /opt/pyenv/versions/3.11.10/lib/python3.11/http/client.py:1058: in _send_output 12:04:17 self.send(msg) 12:04:17 /opt/pyenv/versions/3.11.10/lib/python3.11/http/client.py:996: in send 12:04:17 self.connect() 12:04:17 ../.tox/tests221/lib/python3.11/site-packages/urllib3/connection.py:325: in connect 12:04:17 self.sock = self._new_conn() 12:04:17 ^^^^^^^^^^^^^^^^ 12:04:17 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 12:04:17 12:04:17 self = 12:04:17 12:04:17 def _new_conn(self) -> socket.socket: 12:04:17 """Establish a socket connection and set nodelay settings on it. 12:04:17 12:04:17 :return: New socket connection. 12:04:17 """ 12:04:17 try: 12:04:17 sock = connection.create_connection( 12:04:17 (self._dns_host, self.port), 12:04:17 self.timeout, 12:04:17 source_address=self.source_address, 12:04:17 socket_options=self.socket_options, 12:04:17 ) 12:04:17 except socket.gaierror as e: 12:04:17 raise NameResolutionError(self.host, self, e) from e 12:04:17 except SocketTimeout as e: 12:04:17 raise ConnectTimeoutError( 12:04:17 self, 12:04:17 f"Connection to {self.host} timed out. (connect timeout={self.timeout})", 12:04:17 ) from e 12:04:17 12:04:17 except OSError as e: 12:04:17 > raise NewConnectionError( 12:04:17 self, f"Failed to establish a new connection: {e}" 12:04:17 ) from e 12:04:17 E urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused 12:04:17 12:04:17 ../.tox/tests221/lib/python3.11/site-packages/urllib3/connection.py:213: NewConnectionError 12:04:17 12:04:17 The above exception was the direct cause of the following exception: 12:04:17 12:04:17 self = 12:04:17 request = , stream = False 12:04:17 timeout = Timeout(connect=30, read=30, total=None), verify = True, cert = None 12:04:17 proxies = OrderedDict() 12:04:17 12:04:17 def send( 12:04:17 self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 12:04:17 ): 12:04:17 """Sends PreparedRequest object. Returns Response object. 12:04:17 12:04:17 :param request: The :class:`PreparedRequest ` being sent. 12:04:17 :param stream: (optional) Whether to stream the request content. 12:04:17 :param timeout: (optional) How long to wait for the server to send 12:04:17 data before giving up, as a float, or a :ref:`(connect timeout, 12:04:17 read timeout) ` tuple. 
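The http/client.py frames above show why the refused connection only surfaces at this point: the connection object defers the actual TCP connect until the request headers are flushed by endheaders(). The same behaviour can be reproduced with the standard library alone (sketch, assuming port 8183 is still closed):

    import http.client

    conn = http.client.HTTPConnection("localhost", 8183, timeout=5)
    try:
        # The socket is only opened here, when the request is sent.
        conn.request("GET", "/rests/data/transportpce-portmapping:network")
    except ConnectionRefusedError as exc:
        print(exc)   # [Errno 111] Connection refused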
12:04:17 :type timeout: float or tuple or urllib3 Timeout object 12:04:17 :param verify: (optional) Either a boolean, in which case it controls whether 12:04:17 we verify the server's TLS certificate, or a string, in which case it 12:04:17 must be a path to a CA bundle to use 12:04:17 :param cert: (optional) Any user-provided SSL certificate to be trusted. 12:04:17 :param proxies: (optional) The proxies dictionary to apply to the request. 12:04:17 :rtype: requests.Response 12:04:17 """ 12:04:17 12:04:17 try: 12:04:17 conn = self.get_connection_with_tls_context( 12:04:17 request, verify, proxies=proxies, cert=cert 12:04:17 ) 12:04:17 except LocationValueError as e: 12:04:17 raise InvalidURL(e, request=request) 12:04:17 12:04:17 self.cert_verify(conn, request.url, verify, cert) 12:04:17 url = self.request_url(request, proxies) 12:04:17 self.add_headers( 12:04:17 request, 12:04:17 stream=stream, 12:04:17 timeout=timeout, 12:04:17 verify=verify, 12:04:17 cert=cert, 12:04:17 proxies=proxies, 12:04:17 ) 12:04:17 12:04:17 chunked = not (request.body is None or "Content-Length" in request.headers) 12:04:17 12:04:17 if isinstance(timeout, tuple): 12:04:17 try: 12:04:17 connect, read = timeout 12:04:17 timeout = TimeoutSauce(connect=connect, read=read) 12:04:17 except ValueError: 12:04:17 raise ValueError( 12:04:17 f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 12:04:17 f"or a single float to set both timeouts to the same value." 12:04:17 ) 12:04:17 elif isinstance(timeout, TimeoutSauce): 12:04:17 pass 12:04:17 else: 12:04:17 timeout = TimeoutSauce(connect=timeout, read=timeout) 12:04:17 12:04:17 try: 12:04:17 > resp = conn.urlopen( 12:04:17 method=request.method, 12:04:17 url=url, 12:04:17 body=request.body, 12:04:17 headers=request.headers, 12:04:17 redirect=False, 12:04:17 assert_same_host=False, 12:04:17 preload_content=False, 12:04:17 decode_content=False, 12:04:17 retries=self.max_retries, 12:04:17 timeout=timeout, 12:04:17 chunked=chunked, 12:04:17 ) 12:04:17 12:04:17 ../.tox/tests221/lib/python3.11/site-packages/requests/adapters.py:667: 12:04:17 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 12:04:17 ../.tox/tests221/lib/python3.11/site-packages/urllib3/connectionpool.py:841: in urlopen 12:04:17 retries = retries.increment( 12:04:17 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 12:04:17 12:04:17 self = Retry(total=0, connect=None, read=False, redirect=None, status=None) 12:04:17 method = 'GET' 12:04:17 url = '/rests/data/transportpce-portmapping:network/nodes=SPDR-SA1/mapping=XPDR1-CLIENT1' 12:04:17 response = None 12:04:17 error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused') 12:04:17 _pool = 12:04:17 _stacktrace = 12:04:17 12:04:17 def increment( 12:04:17 self, 12:04:17 method: str | None = None, 12:04:17 url: str | None = None, 12:04:17 response: BaseHTTPResponse | None = None, 12:04:17 error: Exception | None = None, 12:04:17 _pool: ConnectionPool | None = None, 12:04:17 _stacktrace: TracebackType | None = None, 12:04:17 ) -> Self: 12:04:17 """Return a new Retry object with incremented retry counters. 12:04:17 12:04:17 :param response: A response object, or None, if the server did not 12:04:17 return a response. 12:04:17 :type response: :class:`~urllib3.response.BaseHTTPResponse` 12:04:17 :param Exception error: An error encountered during the request, or 12:04:17 None if the response was received successfully. 
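The timeout handling quoted above is why every frame in this log carries Timeout(connect=30, read=30, total=None): requests converts a (connect, read) tuple, or a single float, into a urllib3 Timeout object before calling urlopen(). A small illustration:

    from urllib3.util.timeout import Timeout

    # Equivalent of passing timeout=(30, 30) at the requests level.
    t = Timeout(connect=30, read=30)
    print(t)                                   # the Timeout(connect=30, read=30, total=None) seen above
    print(t.connect_timeout, t.read_timeout)   # 30 30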
12:04:17 12:04:17 :return: A new ``Retry`` object. 12:04:17 """ 12:04:17 if self.total is False and error: 12:04:17 # Disabled, indicate to re-raise the error. 12:04:17 raise reraise(type(error), error, _stacktrace) 12:04:17 12:04:17 total = self.total 12:04:17 if total is not None: 12:04:17 total -= 1 12:04:17 12:04:17 connect = self.connect 12:04:17 read = self.read 12:04:17 redirect = self.redirect 12:04:17 status_count = self.status 12:04:17 other = self.other 12:04:17 cause = "unknown" 12:04:17 status = None 12:04:17 redirect_location = None 12:04:17 12:04:17 if error and self._is_connection_error(error): 12:04:17 # Connect retry? 12:04:17 if connect is False: 12:04:17 raise reraise(type(error), error, _stacktrace) 12:04:17 elif connect is not None: 12:04:17 connect -= 1 12:04:17 12:04:17 elif error and self._is_read_error(error): 12:04:17 # Read retry? 12:04:17 if read is False or method is None or not self._is_method_retryable(method): 12:04:17 raise reraise(type(error), error, _stacktrace) 12:04:17 elif read is not None: 12:04:17 read -= 1 12:04:17 12:04:17 elif error: 12:04:17 # Other retry? 12:04:17 if other is not None: 12:04:17 other -= 1 12:04:17 12:04:17 elif response and response.get_redirect_location(): 12:04:17 # Redirect retry? 12:04:17 if redirect is not None: 12:04:17 redirect -= 1 12:04:17 cause = "too many redirects" 12:04:17 response_redirect_location = response.get_redirect_location() 12:04:17 if response_redirect_location: 12:04:17 redirect_location = response_redirect_location 12:04:17 status = response.status 12:04:17 12:04:17 else: 12:04:17 # Incrementing because of a server error like a 500 in 12:04:17 # status_forcelist and the given method is in the allowed_methods 12:04:17 cause = ResponseError.GENERIC_ERROR 12:04:17 if response and response.status: 12:04:17 if status_count is not None: 12:04:17 status_count -= 1 12:04:17 cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status) 12:04:17 status = response.status 12:04:17 12:04:17 history = self.history + ( 12:04:17 RequestHistory(method, url, error, status, redirect_location), 12:04:17 ) 12:04:17 12:04:17 new_retry = self.new( 12:04:17 total=total, 12:04:17 connect=connect, 12:04:17 read=read, 12:04:17 redirect=redirect, 12:04:17 status=status_count, 12:04:17 other=other, 12:04:17 history=history, 12:04:17 ) 12:04:17 12:04:17 if new_retry.is_exhausted(): 12:04:17 reason = error or ResponseError(cause) 12:04:17 > raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type] 12:04:17 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 12:04:17 E urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=8183): Max retries exceeded with url: /rests/data/transportpce-portmapping:network/nodes=SPDR-SA1/mapping=XPDR1-CLIENT1 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 12:04:17 12:04:17 ../.tox/tests221/lib/python3.11/site-packages/urllib3/util/retry.py:519: MaxRetryError 12:04:17 12:04:17 During handling of the above exception, another exception occurred: 12:04:17 12:04:17 self = 12:04:17 12:04:17 def test_22_spdr_portmapping_XPDR1_CLIENT1(self): 12:04:17 > response = test_utils.get_portmapping_node_attr("SPDR-SA1", "mapping", "XPDR1-CLIENT1") 12:04:17 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 12:04:17 12:04:17 transportpce_tests/2.2.1/test01_portmapping.py:260: 12:04:17 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 12:04:17 
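All of these failures funnel through test_utils.get_portmapping_node_attr(), which, judging from the URLs in the tracebacks, maps its three arguments onto a RESTCONF path under /rests/data/transportpce-portmapping:network. A purely illustrative sketch of that mapping; the base URL, credentials and header values below are taken from this log, not from the actual helper in transportpce_tests/common/test_utils.py:

    import requests

    RESTCONF_BASE = "http://localhost:8183/rests/data"   # host and port as in the failures above

    def get_portmapping_node_attr(node, attr, value):
        # nodes=SPDR-SA1                        when attr and value are None (test_21)
        # nodes=SPDR-SA1/mapping=XPDR1-CLIENT1  when attr="mapping", value="XPDR1-CLIENT1" (test_22)
        url = f"{RESTCONF_BASE}/transportpce-portmapping:network/nodes={node}"
        if attr is not None and value is not None:
            url += f"/{attr}={value}"
        return requests.get(url, auth=("admin", "admin"),
                            headers={"Content-Type": "application/json",
                                     "Accept": "application/json"},
                            timeout=(30, 30))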
transportpce_tests/common/test_utils.py:492: in get_portmapping_node_attr 12:04:17 response = get_request(target_url) 12:04:17 ^^^^^^^^^^^^^^^^^^^^^^^ 12:04:17 transportpce_tests/common/test_utils.py:116: in get_request 12:04:17 return requests.request( 12:04:17 ../.tox/tests221/lib/python3.11/site-packages/requests/api.py:59: in request 12:04:17 return session.request(method=method, url=url, **kwargs) 12:04:17 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 12:04:17 ../.tox/tests221/lib/python3.11/site-packages/requests/sessions.py:589: in request 12:04:17 resp = self.send(prep, **send_kwargs) 12:04:17 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 12:04:17 ../.tox/tests221/lib/python3.11/site-packages/requests/sessions.py:703: in send 12:04:17 r = adapter.send(request, **kwargs) 12:04:17 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 12:04:17 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 12:04:17 12:04:17 self = 12:04:17 request = , stream = False 12:04:17 timeout = Timeout(connect=30, read=30, total=None), verify = True, cert = None 12:04:17 proxies = OrderedDict() 12:04:17 12:04:17 def send( 12:04:17 self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 12:04:17 ): 12:04:17 """Sends PreparedRequest object. Returns Response object. 12:04:17 12:04:17 :param request: The :class:`PreparedRequest ` being sent. 12:04:17 :param stream: (optional) Whether to stream the request content. 12:04:17 :param timeout: (optional) How long to wait for the server to send 12:04:17 data before giving up, as a float, or a :ref:`(connect timeout, 12:04:17 read timeout) ` tuple. 12:04:17 :type timeout: float or tuple or urllib3 Timeout object 12:04:17 :param verify: (optional) Either a boolean, in which case it controls whether 12:04:17 we verify the server's TLS certificate, or a string, in which case it 12:04:17 must be a path to a CA bundle to use 12:04:17 :param cert: (optional) Any user-provided SSL certificate to be trusted. 12:04:17 :param proxies: (optional) The proxies dictionary to apply to the request. 12:04:17 :rtype: requests.Response 12:04:17 """ 12:04:17 12:04:17 try: 12:04:17 conn = self.get_connection_with_tls_context( 12:04:17 request, verify, proxies=proxies, cert=cert 12:04:17 ) 12:04:17 except LocationValueError as e: 12:04:17 raise InvalidURL(e, request=request) 12:04:17 12:04:17 self.cert_verify(conn, request.url, verify, cert) 12:04:17 url = self.request_url(request, proxies) 12:04:17 self.add_headers( 12:04:17 request, 12:04:17 stream=stream, 12:04:17 timeout=timeout, 12:04:17 verify=verify, 12:04:17 cert=cert, 12:04:17 proxies=proxies, 12:04:17 ) 12:04:17 12:04:17 chunked = not (request.body is None or "Content-Length" in request.headers) 12:04:17 12:04:17 if isinstance(timeout, tuple): 12:04:17 try: 12:04:17 connect, read = timeout 12:04:17 timeout = TimeoutSauce(connect=connect, read=read) 12:04:17 except ValueError: 12:04:17 raise ValueError( 12:04:17 f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 12:04:17 f"or a single float to set both timeouts to the same value." 
12:04:17 ) 12:04:17 elif isinstance(timeout, TimeoutSauce): 12:04:17 pass 12:04:17 else: 12:04:17 timeout = TimeoutSauce(connect=timeout, read=timeout) 12:04:17 12:04:17 try: 12:04:17 resp = conn.urlopen( 12:04:17 method=request.method, 12:04:17 url=url, 12:04:17 body=request.body, 12:04:17 headers=request.headers, 12:04:17 redirect=False, 12:04:17 assert_same_host=False, 12:04:17 preload_content=False, 12:04:17 decode_content=False, 12:04:17 retries=self.max_retries, 12:04:17 timeout=timeout, 12:04:17 chunked=chunked, 12:04:17 ) 12:04:17 12:04:17 except (ProtocolError, OSError) as err: 12:04:17 raise ConnectionError(err, request=request) 12:04:17 12:04:17 except MaxRetryError as e: 12:04:17 if isinstance(e.reason, ConnectTimeoutError): 12:04:17 # TODO: Remove this in 3.0.0: see #2811 12:04:17 if not isinstance(e.reason, NewConnectionError): 12:04:17 raise ConnectTimeout(e, request=request) 12:04:17 12:04:17 if isinstance(e.reason, ResponseError): 12:04:17 raise RetryError(e, request=request) 12:04:17 12:04:17 if isinstance(e.reason, _ProxyError): 12:04:17 raise ProxyError(e, request=request) 12:04:17 12:04:17 if isinstance(e.reason, _SSLError): 12:04:17 # This branch is for urllib3 v1.22 and later. 12:04:17 raise SSLError(e, request=request) 12:04:17 12:04:17 > raise ConnectionError(e, request=request) 12:04:17 E requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=8183): Max retries exceeded with url: /rests/data/transportpce-portmapping:network/nodes=SPDR-SA1/mapping=XPDR1-CLIENT1 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 12:04:17 12:04:17 ../.tox/tests221/lib/python3.11/site-packages/requests/adapters.py:700: ConnectionError 12:04:17 ----------------------------- Captured stdout call ----------------------------- 12:04:17 execution of test_22_spdr_portmapping_XPDR1_CLIENT1 12:04:17 ____ TransportPCEPortMappingTesting.test_23_spdr_portmapping_XPDR1_NETWORK1 ____ 12:04:17 12:04:17 self = 12:04:17 12:04:17 def _new_conn(self) -> socket.socket: 12:04:17 """Establish a socket connection and set nodelay settings on it. 12:04:17 12:04:17 :return: New socket connection. 12:04:17 """ 12:04:17 try: 12:04:17 > sock = connection.create_connection( 12:04:17 (self._dns_host, self.port), 12:04:17 self.timeout, 12:04:17 source_address=self.source_address, 12:04:17 socket_options=self.socket_options, 12:04:17 ) 12:04:17 12:04:17 ../.tox/tests221/lib/python3.11/site-packages/urllib3/connection.py:198: 12:04:17 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 12:04:17 ../.tox/tests221/lib/python3.11/site-packages/urllib3/util/connection.py:85: in create_connection 12:04:17 raise err 12:04:17 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 12:04:17 12:04:17 address = ('localhost', 8183), timeout = 30, source_address = None 12:04:17 socket_options = [(6, 1, 1)] 12:04:17 12:04:17 def create_connection( 12:04:17 address: tuple[str, int], 12:04:17 timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 12:04:17 source_address: tuple[str, int] | None = None, 12:04:17 socket_options: _TYPE_SOCKET_OPTIONS | None = None, 12:04:17 ) -> socket.socket: 12:04:17 """Connect to *address* and return the socket object. 12:04:17 12:04:17 Convenience function. Connect to *address* (a 2-tuple ``(host, 12:04:17 port)``) and return the socket object. Passing the optional 12:04:17 *timeout* parameter will set the timeout on the socket instance 12:04:17 before attempting to connect. 
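The except-MaxRetryError block above is where urllib3 failures get their requests-level names: a ConnectTimeoutError becomes ConnectTimeout, a ResponseError becomes RetryError, proxy and TLS reasons become ProxyError and SSLError, and everything else, including the NewConnectionError raised in this run, becomes requests.exceptions.ConnectionError. So the narrowest thing a caller such as test_utils could catch for an unreachable controller is (sketch, same closed port assumed):

    import requests

    URL = "http://localhost:8183/rests/data/transportpce-portmapping:network/nodes=SPDR-SA1"
    try:
        requests.get(URL, auth=("admin", "admin"), timeout=(30, 30))
    except requests.exceptions.ConnectTimeout:
        print("controller did not answer the TCP handshake in time")
    except requests.exceptions.ConnectionError as exc:
        # The branch taken in this run: connection refused on localhost:8183.
        print("controller unreachable:", exc)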
If no *timeout* is supplied, the 12:04:17 global default timeout setting returned by :func:`socket.getdefaulttimeout` 12:04:17 is used. If *source_address* is set it must be a tuple of (host, port) 12:04:17 for the socket to bind as a source address before making the connection. 12:04:17 An host of '' or port 0 tells the OS to use the default. 12:04:17 """ 12:04:17 12:04:17 host, port = address 12:04:17 if host.startswith("["): 12:04:17 host = host.strip("[]") 12:04:17 err = None 12:04:17 12:04:17 # Using the value from allowed_gai_family() in the context of getaddrinfo lets 12:04:17 # us select whether to work with IPv4 DNS records, IPv6 records, or both. 12:04:17 # The original create_connection function always returns all records. 12:04:17 family = allowed_gai_family() 12:04:17 12:04:17 try: 12:04:17 host.encode("idna") 12:04:17 except UnicodeError: 12:04:17 raise LocationParseError(f"'{host}', label empty or too long") from None 12:04:17 12:04:17 for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM): 12:04:17 af, socktype, proto, canonname, sa = res 12:04:17 sock = None 12:04:17 try: 12:04:17 sock = socket.socket(af, socktype, proto) 12:04:17 12:04:17 # If provided, set socket level options before connecting. 12:04:17 _set_socket_options(sock, socket_options) 12:04:17 12:04:17 if timeout is not _DEFAULT_TIMEOUT: 12:04:17 sock.settimeout(timeout) 12:04:17 if source_address: 12:04:17 sock.bind(source_address) 12:04:17 > sock.connect(sa) 12:04:17 E ConnectionRefusedError: [Errno 111] Connection refused 12:04:17 12:04:17 ../.tox/tests221/lib/python3.11/site-packages/urllib3/util/connection.py:73: ConnectionRefusedError 12:04:17 12:04:17 The above exception was the direct cause of the following exception: 12:04:17 12:04:17 self = 12:04:17 method = 'GET' 12:04:17 url = '/rests/data/transportpce-portmapping:network/nodes=SPDR-SA1/mapping=XPDR1-NETWORK1' 12:04:17 body = None 12:04:17 headers = {'User-Agent': 'python-requests/2.32.4', 'Accept-Encoding': 'gzip, deflate', 'Accept': 'application/json', 'Connection': 'keep-alive', 'Content-Type': 'application/json', 'Authorization': 'Basic YWRtaW46YWRtaW4='} 12:04:17 retries = Retry(total=0, connect=None, read=False, redirect=None, status=None) 12:04:17 redirect = False, assert_same_host = False 12:04:17 timeout = Timeout(connect=30, read=30, total=None), pool_timeout = None 12:04:17 release_conn = False, chunked = False, body_pos = None, preload_content = False 12:04:17 decode_content = False, response_kw = {} 12:04:17 parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/rests/data/transportpce-portmapping:network/nodes=SPDR-SA1/mapping=XPDR1-NETWORK1', query=None, fragment=None) 12:04:17 destination_scheme = None, conn = None, release_this_conn = True 12:04:17 http_tunnel_required = False, err = None, clean_exit = False 12:04:17 12:04:17 def urlopen( # type: ignore[override] 12:04:17 self, 12:04:17 method: str, 12:04:17 url: str, 12:04:17 body: _TYPE_BODY | None = None, 12:04:17 headers: typing.Mapping[str, str] | None = None, 12:04:17 retries: Retry | bool | int | None = None, 12:04:17 redirect: bool = True, 12:04:17 assert_same_host: bool = True, 12:04:17 timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 12:04:17 pool_timeout: int | None = None, 12:04:17 release_conn: bool | None = None, 12:04:17 chunked: bool = False, 12:04:17 body_pos: _TYPE_BODY_POSITION | None = None, 12:04:17 preload_content: bool = True, 12:04:17 decode_content: bool = True, 12:04:17 **response_kw: typing.Any, 12:04:17 ) -> BaseHTTPResponse: 
12:04:17 """ 12:04:17 Get a connection from the pool and perform an HTTP request. This is the 12:04:17 lowest level call for making a request, so you'll need to specify all 12:04:17 the raw details. 12:04:17 12:04:17 .. note:: 12:04:17 12:04:17 More commonly, it's appropriate to use a convenience method 12:04:17 such as :meth:`request`. 12:04:17 12:04:17 .. note:: 12:04:17 12:04:17 `release_conn` will only behave as expected if 12:04:17 `preload_content=False` because we want to make 12:04:17 `preload_content=False` the default behaviour someday soon without 12:04:17 breaking backwards compatibility. 12:04:17 12:04:17 :param method: 12:04:17 HTTP request method (such as GET, POST, PUT, etc.) 12:04:17 12:04:17 :param url: 12:04:17 The URL to perform the request on. 12:04:17 12:04:17 :param body: 12:04:17 Data to send in the request body, either :class:`str`, :class:`bytes`, 12:04:17 an iterable of :class:`str`/:class:`bytes`, or a file-like object. 12:04:17 12:04:17 :param headers: 12:04:17 Dictionary of custom headers to send, such as User-Agent, 12:04:17 If-None-Match, etc. If None, pool headers are used. If provided, 12:04:17 these headers completely replace any pool-specific headers. 12:04:17 12:04:17 :param retries: 12:04:17 Configure the number of retries to allow before raising a 12:04:17 :class:`~urllib3.exceptions.MaxRetryError` exception. 12:04:17 12:04:17 If ``None`` (default) will retry 3 times, see ``Retry.DEFAULT``. Pass a 12:04:17 :class:`~urllib3.util.retry.Retry` object for fine-grained control 12:04:17 over different types of retries. 12:04:17 Pass an integer number to retry connection errors that many times, 12:04:17 but no other types of errors. Pass zero to never retry. 12:04:17 12:04:17 If ``False``, then retries are disabled and any exception is raised 12:04:17 immediately. Also, instead of raising a MaxRetryError on redirects, 12:04:17 the redirect response will be returned. 12:04:17 12:04:17 :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. 12:04:17 12:04:17 :param redirect: 12:04:17 If True, automatically handle redirects (status codes 301, 302, 12:04:17 303, 307, 308). Each redirect counts as a retry. Disabling retries 12:04:17 will disable redirect, too. 12:04:17 12:04:17 :param assert_same_host: 12:04:17 If ``True``, will make sure that the host of the pool requests is 12:04:17 consistent else will raise HostChangedError. When ``False``, you can 12:04:17 use the pool on an HTTP proxy and request foreign hosts. 12:04:17 12:04:17 :param timeout: 12:04:17 If specified, overrides the default timeout for this one 12:04:17 request. It may be a float (in seconds) or an instance of 12:04:17 :class:`urllib3.util.Timeout`. 12:04:17 12:04:17 :param pool_timeout: 12:04:17 If set and the pool is set to block=True, then this method will 12:04:17 block for ``pool_timeout`` seconds and raise EmptyPoolError if no 12:04:17 connection is available within the time period. 12:04:17 12:04:17 :param bool preload_content: 12:04:17 If True, the response's body will be preloaded into memory. 12:04:17 12:04:17 :param bool decode_content: 12:04:17 If True, will attempt to decode the body based on the 12:04:17 'content-encoding' header. 12:04:17 12:04:17 :param release_conn: 12:04:17 If False, then the urlopen call will not release the connection 12:04:17 back into the pool once a response is received (but will release if 12:04:17 you read the entire contents of the response such as when 12:04:17 `preload_content=True`). 
This is useful if you're not preloading 12:04:17 the response's content immediately. You will need to call 12:04:17 ``r.release_conn()`` on the response ``r`` to return the connection 12:04:17 back into the pool. If None, it takes the value of ``preload_content`` 12:04:17 which defaults to ``True``. 12:04:17 12:04:17 :param bool chunked: 12:04:17 If True, urllib3 will send the body using chunked transfer 12:04:17 encoding. Otherwise, urllib3 will send the body using the standard 12:04:17 content-length form. Defaults to False. 12:04:17 12:04:17 :param int body_pos: 12:04:17 Position to seek to in file-like body in the event of a retry or 12:04:17 redirect. Typically this won't need to be set because urllib3 will 12:04:17 auto-populate the value when needed. 12:04:17 """ 12:04:17 parsed_url = parse_url(url) 12:04:17 destination_scheme = parsed_url.scheme 12:04:17 12:04:17 if headers is None: 12:04:17 headers = self.headers 12:04:17 12:04:17 if not isinstance(retries, Retry): 12:04:17 retries = Retry.from_int(retries, redirect=redirect, default=self.retries) 12:04:17 12:04:17 if release_conn is None: 12:04:17 release_conn = preload_content 12:04:17 12:04:17 # Check host 12:04:17 if assert_same_host and not self.is_same_host(url): 12:04:17 raise HostChangedError(self, url, retries) 12:04:17 12:04:17 # Ensure that the URL we're connecting to is properly encoded 12:04:17 if url.startswith("/"): 12:04:17 url = to_str(_encode_target(url)) 12:04:17 else: 12:04:17 url = to_str(parsed_url.url) 12:04:17 12:04:17 conn = None 12:04:17 12:04:17 # Track whether `conn` needs to be released before 12:04:17 # returning/raising/recursing. Update this variable if necessary, and 12:04:17 # leave `release_conn` constant throughout the function. That way, if 12:04:17 # the function recurses, the original value of `release_conn` will be 12:04:17 # passed down into the recursive call, and its value will be respected. 12:04:17 # 12:04:17 # See issue #651 [1] for details. 12:04:17 # 12:04:17 # [1] 12:04:17 release_this_conn = release_conn 12:04:17 12:04:17 http_tunnel_required = connection_requires_http_tunnel( 12:04:17 self.proxy, self.proxy_config, destination_scheme 12:04:17 ) 12:04:17 12:04:17 # Merge the proxy headers. Only done when not using HTTP CONNECT. We 12:04:17 # have to copy the headers dict so we can safely change it without those 12:04:17 # changes being reflected in anyone else's copy. 12:04:17 if not http_tunnel_required: 12:04:17 headers = headers.copy() # type: ignore[attr-defined] 12:04:17 headers.update(self.proxy_headers) # type: ignore[union-attr] 12:04:17 12:04:17 # Must keep the exception bound to a separate variable or else Python 3 12:04:17 # complains about UnboundLocalError. 12:04:17 err = None 12:04:17 12:04:17 # Keep track of whether we cleanly exited the except block. This 12:04:17 # ensures we do proper cleanup in finally. 12:04:17 clean_exit = False 12:04:17 12:04:17 # Rewind body position, if needed. Record current position 12:04:17 # for future rewinds in the event of a redirect/retry. 12:04:17 body_pos = set_file_position(body, body_pos) 12:04:17 12:04:17 try: 12:04:17 # Request a connection from the queue. 12:04:17 timeout_obj = self._get_timeout(timeout) 12:04:17 conn = self._get_conn(timeout=pool_timeout) 12:04:17 12:04:17 conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] 12:04:17 12:04:17 # Is this a closed/new connection that requires CONNECT tunnelling? 
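A side note on the retries = Retry.from_int(...) line above: urlopen() accepts a Retry instance, a bool, an int or None, and from_int() normalises them into Retry objects, matching the behaviours listed in the docstring (None falls back to Retry.DEFAULT, False disables retries and returns redirect responses, an int caps the total count). Sketch:

    from urllib3.util.retry import Retry

    print(Retry.from_int(None))    # urllib3's Retry.DEFAULT (total=3)
    print(Retry.from_int(False))   # Retry(total=False): exceptions re-raised immediately
    print(Retry.from_int(2))       # Retry(total=2)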
12:04:17 if self.proxy is not None and http_tunnel_required and conn.is_closed: 12:04:17 try: 12:04:17 self._prepare_proxy(conn) 12:04:17 except (BaseSSLError, OSError, SocketTimeout) as e: 12:04:17 self._raise_timeout( 12:04:17 err=e, url=self.proxy.url, timeout_value=conn.timeout 12:04:17 ) 12:04:17 raise 12:04:17 12:04:17 # If we're going to release the connection in ``finally:``, then 12:04:17 # the response doesn't need to know about the connection. Otherwise 12:04:17 # it will also try to release it and we'll have a double-release 12:04:17 # mess. 12:04:17 response_conn = conn if not release_conn else None 12:04:17 12:04:17 # Make the request on the HTTPConnection object 12:04:17 > response = self._make_request( 12:04:17 conn, 12:04:17 method, 12:04:17 url, 12:04:17 timeout=timeout_obj, 12:04:17 body=body, 12:04:17 headers=headers, 12:04:17 chunked=chunked, 12:04:17 retries=retries, 12:04:17 response_conn=response_conn, 12:04:17 preload_content=preload_content, 12:04:17 decode_content=decode_content, 12:04:17 **response_kw, 12:04:17 ) 12:04:17 12:04:17 ../.tox/tests221/lib/python3.11/site-packages/urllib3/connectionpool.py:787: 12:04:17 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 12:04:17 ../.tox/tests221/lib/python3.11/site-packages/urllib3/connectionpool.py:493: in _make_request 12:04:17 conn.request( 12:04:17 ../.tox/tests221/lib/python3.11/site-packages/urllib3/connection.py:494: in request 12:04:17 self.endheaders() 12:04:17 /opt/pyenv/versions/3.11.10/lib/python3.11/http/client.py:1298: in endheaders 12:04:17 self._send_output(message_body, encode_chunked=encode_chunked) 12:04:17 /opt/pyenv/versions/3.11.10/lib/python3.11/http/client.py:1058: in _send_output 12:04:17 self.send(msg) 12:04:17 /opt/pyenv/versions/3.11.10/lib/python3.11/http/client.py:996: in send 12:04:17 self.connect() 12:04:17 ../.tox/tests221/lib/python3.11/site-packages/urllib3/connection.py:325: in connect 12:04:17 self.sock = self._new_conn() 12:04:17 ^^^^^^^^^^^^^^^^ 12:04:17 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 12:04:17 12:04:17 self = 12:04:17 12:04:17 def _new_conn(self) -> socket.socket: 12:04:17 """Establish a socket connection and set nodelay settings on it. 12:04:17 12:04:17 :return: New socket connection. 12:04:17 """ 12:04:17 try: 12:04:17 sock = connection.create_connection( 12:04:17 (self._dns_host, self.port), 12:04:17 self.timeout, 12:04:17 source_address=self.source_address, 12:04:17 socket_options=self.socket_options, 12:04:17 ) 12:04:17 except socket.gaierror as e: 12:04:17 raise NameResolutionError(self.host, self, e) from e 12:04:17 except SocketTimeout as e: 12:04:17 raise ConnectTimeoutError( 12:04:17 self, 12:04:17 f"Connection to {self.host} timed out. 
(connect timeout={self.timeout})", 12:04:17 ) from e 12:04:17 12:04:17 except OSError as e: 12:04:17 > raise NewConnectionError( 12:04:17 self, f"Failed to establish a new connection: {e}" 12:04:17 ) from e 12:04:17 E urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused 12:04:17 12:04:17 ../.tox/tests221/lib/python3.11/site-packages/urllib3/connection.py:213: NewConnectionError 12:04:17 12:04:17 The above exception was the direct cause of the following exception: 12:04:17 12:04:17 self = 12:04:17 request = , stream = False 12:04:17 timeout = Timeout(connect=30, read=30, total=None), verify = True, cert = None 12:04:17 proxies = OrderedDict() 12:04:17 12:04:17 def send( 12:04:17 self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 12:04:17 ): 12:04:17 """Sends PreparedRequest object. Returns Response object. 12:04:17 12:04:17 :param request: The :class:`PreparedRequest ` being sent. 12:04:17 :param stream: (optional) Whether to stream the request content. 12:04:17 :param timeout: (optional) How long to wait for the server to send 12:04:17 data before giving up, as a float, or a :ref:`(connect timeout, 12:04:17 read timeout) ` tuple. 12:04:17 :type timeout: float or tuple or urllib3 Timeout object 12:04:17 :param verify: (optional) Either a boolean, in which case it controls whether 12:04:17 we verify the server's TLS certificate, or a string, in which case it 12:04:17 must be a path to a CA bundle to use 12:04:17 :param cert: (optional) Any user-provided SSL certificate to be trusted. 12:04:17 :param proxies: (optional) The proxies dictionary to apply to the request. 12:04:17 :rtype: requests.Response 12:04:17 """ 12:04:17 12:04:17 try: 12:04:17 conn = self.get_connection_with_tls_context( 12:04:17 request, verify, proxies=proxies, cert=cert 12:04:17 ) 12:04:17 except LocationValueError as e: 12:04:17 raise InvalidURL(e, request=request) 12:04:17 12:04:17 self.cert_verify(conn, request.url, verify, cert) 12:04:17 url = self.request_url(request, proxies) 12:04:17 self.add_headers( 12:04:17 request, 12:04:17 stream=stream, 12:04:17 timeout=timeout, 12:04:17 verify=verify, 12:04:17 cert=cert, 12:04:17 proxies=proxies, 12:04:17 ) 12:04:17 12:04:17 chunked = not (request.body is None or "Content-Length" in request.headers) 12:04:17 12:04:17 if isinstance(timeout, tuple): 12:04:17 try: 12:04:17 connect, read = timeout 12:04:17 timeout = TimeoutSauce(connect=connect, read=read) 12:04:17 except ValueError: 12:04:17 raise ValueError( 12:04:17 f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 12:04:17 f"or a single float to set both timeouts to the same value." 
12:04:17 ) 12:04:17 elif isinstance(timeout, TimeoutSauce): 12:04:17 pass 12:04:17 else: 12:04:17 timeout = TimeoutSauce(connect=timeout, read=timeout) 12:04:17 12:04:17 try: 12:04:17 > resp = conn.urlopen( 12:04:17 method=request.method, 12:04:17 url=url, 12:04:17 body=request.body, 12:04:17 headers=request.headers, 12:04:17 redirect=False, 12:04:17 assert_same_host=False, 12:04:17 preload_content=False, 12:04:17 decode_content=False, 12:04:17 retries=self.max_retries, 12:04:17 timeout=timeout, 12:04:17 chunked=chunked, 12:04:17 ) 12:04:17 12:04:17 ../.tox/tests221/lib/python3.11/site-packages/requests/adapters.py:667: 12:04:17 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 12:04:17 ../.tox/tests221/lib/python3.11/site-packages/urllib3/connectionpool.py:841: in urlopen 12:04:17 retries = retries.increment( 12:04:17 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 12:04:17 12:04:17 self = Retry(total=0, connect=None, read=False, redirect=None, status=None) 12:04:17 method = 'GET' 12:04:17 url = '/rests/data/transportpce-portmapping:network/nodes=SPDR-SA1/mapping=XPDR1-NETWORK1' 12:04:17 response = None 12:04:17 error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused') 12:04:17 _pool = 12:04:17 _stacktrace = 12:04:17 12:04:17 def increment( 12:04:17 self, 12:04:17 method: str | None = None, 12:04:17 url: str | None = None, 12:04:17 response: BaseHTTPResponse | None = None, 12:04:17 error: Exception | None = None, 12:04:17 _pool: ConnectionPool | None = None, 12:04:17 _stacktrace: TracebackType | None = None, 12:04:17 ) -> Self: 12:04:17 """Return a new Retry object with incremented retry counters. 12:04:17 12:04:17 :param response: A response object, or None, if the server did not 12:04:17 return a response. 12:04:17 :type response: :class:`~urllib3.response.BaseHTTPResponse` 12:04:17 :param Exception error: An error encountered during the request, or 12:04:17 None if the response was received successfully. 12:04:17 12:04:17 :return: A new ``Retry`` object. 12:04:17 """ 12:04:17 if self.total is False and error: 12:04:17 # Disabled, indicate to re-raise the error. 12:04:17 raise reraise(type(error), error, _stacktrace) 12:04:17 12:04:17 total = self.total 12:04:17 if total is not None: 12:04:17 total -= 1 12:04:17 12:04:17 connect = self.connect 12:04:17 read = self.read 12:04:17 redirect = self.redirect 12:04:17 status_count = self.status 12:04:17 other = self.other 12:04:17 cause = "unknown" 12:04:17 status = None 12:04:17 redirect_location = None 12:04:17 12:04:17 if error and self._is_connection_error(error): 12:04:17 # Connect retry? 12:04:17 if connect is False: 12:04:17 raise reraise(type(error), error, _stacktrace) 12:04:17 elif connect is not None: 12:04:17 connect -= 1 12:04:17 12:04:17 elif error and self._is_read_error(error): 12:04:17 # Read retry? 12:04:17 if read is False or method is None or not self._is_method_retryable(method): 12:04:17 raise reraise(type(error), error, _stacktrace) 12:04:17 elif read is not None: 12:04:17 read -= 1 12:04:17 12:04:17 elif error: 12:04:17 # Other retry? 12:04:17 if other is not None: 12:04:17 other -= 1 12:04:17 12:04:17 elif response and response.get_redirect_location(): 12:04:17 # Redirect retry? 
12:04:17 if redirect is not None: 12:04:17 redirect -= 1 12:04:17 cause = "too many redirects" 12:04:17 response_redirect_location = response.get_redirect_location() 12:04:17 if response_redirect_location: 12:04:17 redirect_location = response_redirect_location 12:04:17 status = response.status 12:04:17 12:04:17 else: 12:04:17 # Incrementing because of a server error like a 500 in 12:04:17 # status_forcelist and the given method is in the allowed_methods 12:04:17 cause = ResponseError.GENERIC_ERROR 12:04:17 if response and response.status: 12:04:17 if status_count is not None: 12:04:17 status_count -= 1 12:04:17 cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status) 12:04:17 status = response.status 12:04:17 12:04:17 history = self.history + ( 12:04:17 RequestHistory(method, url, error, status, redirect_location), 12:04:17 ) 12:04:17 12:04:17 new_retry = self.new( 12:04:17 total=total, 12:04:17 connect=connect, 12:04:17 read=read, 12:04:17 redirect=redirect, 12:04:17 status=status_count, 12:04:17 other=other, 12:04:17 history=history, 12:04:17 ) 12:04:17 12:04:17 if new_retry.is_exhausted(): 12:04:17 reason = error or ResponseError(cause) 12:04:17 > raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type] 12:04:17 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 12:04:17 E urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=8183): Max retries exceeded with url: /rests/data/transportpce-portmapping:network/nodes=SPDR-SA1/mapping=XPDR1-NETWORK1 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 12:04:17 12:04:17 ../.tox/tests221/lib/python3.11/site-packages/urllib3/util/retry.py:519: MaxRetryError 12:04:17 12:04:17 During handling of the above exception, another exception occurred: 12:04:17 12:04:17 self = 12:04:17 12:04:17 def test_23_spdr_portmapping_XPDR1_NETWORK1(self): 12:04:17 > response = test_utils.get_portmapping_node_attr("SPDR-SA1", "mapping", "XPDR1-NETWORK1") 12:04:17 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 12:04:17 12:04:17 transportpce_tests/2.2.1/test01_portmapping.py:279: 12:04:17 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 12:04:17 transportpce_tests/common/test_utils.py:492: in get_portmapping_node_attr 12:04:17 response = get_request(target_url) 12:04:17 ^^^^^^^^^^^^^^^^^^^^^^^ 12:04:17 transportpce_tests/common/test_utils.py:116: in get_request 12:04:17 return requests.request( 12:04:17 ../.tox/tests221/lib/python3.11/site-packages/requests/api.py:59: in request 12:04:17 return session.request(method=method, url=url, **kwargs) 12:04:17 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 12:04:17 ../.tox/tests221/lib/python3.11/site-packages/requests/sessions.py:589: in request 12:04:17 resp = self.send(prep, **send_kwargs) 12:04:17 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 12:04:17 ../.tox/tests221/lib/python3.11/site-packages/requests/sessions.py:703: in send 12:04:17 r = adapter.send(request, **kwargs) 12:04:17 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 12:04:17 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 12:04:17 12:04:17 self = 12:04:17 request = , stream = False 12:04:17 timeout = Timeout(connect=30, read=30, total=None), verify = True, cert = None 12:04:17 proxies = OrderedDict() 12:04:17 12:04:17 def send( 12:04:17 self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 12:04:17 ): 12:04:17 """Sends PreparedRequest object. 
Returns Response object. 12:04:17 12:04:17 :param request: The :class:`PreparedRequest ` being sent. 12:04:17 :param stream: (optional) Whether to stream the request content. 12:04:17 :param timeout: (optional) How long to wait for the server to send 12:04:17 data before giving up, as a float, or a :ref:`(connect timeout, 12:04:17 read timeout) ` tuple. 12:04:17 :type timeout: float or tuple or urllib3 Timeout object 12:04:17 :param verify: (optional) Either a boolean, in which case it controls whether 12:04:17 we verify the server's TLS certificate, or a string, in which case it 12:04:17 must be a path to a CA bundle to use 12:04:17 :param cert: (optional) Any user-provided SSL certificate to be trusted. 12:04:17 :param proxies: (optional) The proxies dictionary to apply to the request. 12:04:17 :rtype: requests.Response 12:04:17 """ 12:04:17 12:04:17 try: 12:04:17 conn = self.get_connection_with_tls_context( 12:04:17 request, verify, proxies=proxies, cert=cert 12:04:17 ) 12:04:17 except LocationValueError as e: 12:04:17 raise InvalidURL(e, request=request) 12:04:17 12:04:17 self.cert_verify(conn, request.url, verify, cert) 12:04:17 url = self.request_url(request, proxies) 12:04:17 self.add_headers( 12:04:17 request, 12:04:17 stream=stream, 12:04:17 timeout=timeout, 12:04:17 verify=verify, 12:04:17 cert=cert, 12:04:17 proxies=proxies, 12:04:17 ) 12:04:17 12:04:17 chunked = not (request.body is None or "Content-Length" in request.headers) 12:04:17 12:04:17 if isinstance(timeout, tuple): 12:04:17 try: 12:04:17 connect, read = timeout 12:04:17 timeout = TimeoutSauce(connect=connect, read=read) 12:04:17 except ValueError: 12:04:17 raise ValueError( 12:04:17 f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 12:04:17 f"or a single float to set both timeouts to the same value." 12:04:17 ) 12:04:17 elif isinstance(timeout, TimeoutSauce): 12:04:17 pass 12:04:17 else: 12:04:17 timeout = TimeoutSauce(connect=timeout, read=timeout) 12:04:17 12:04:17 try: 12:04:17 resp = conn.urlopen( 12:04:17 method=request.method, 12:04:17 url=url, 12:04:17 body=request.body, 12:04:17 headers=request.headers, 12:04:17 redirect=False, 12:04:17 assert_same_host=False, 12:04:17 preload_content=False, 12:04:17 decode_content=False, 12:04:17 retries=self.max_retries, 12:04:17 timeout=timeout, 12:04:17 chunked=chunked, 12:04:17 ) 12:04:17 12:04:17 except (ProtocolError, OSError) as err: 12:04:17 raise ConnectionError(err, request=request) 12:04:17 12:04:17 except MaxRetryError as e: 12:04:17 if isinstance(e.reason, ConnectTimeoutError): 12:04:17 # TODO: Remove this in 3.0.0: see #2811 12:04:17 if not isinstance(e.reason, NewConnectionError): 12:04:17 raise ConnectTimeout(e, request=request) 12:04:17 12:04:17 if isinstance(e.reason, ResponseError): 12:04:17 raise RetryError(e, request=request) 12:04:17 12:04:17 if isinstance(e.reason, _ProxyError): 12:04:17 raise ProxyError(e, request=request) 12:04:17 12:04:17 if isinstance(e.reason, _SSLError): 12:04:17 # This branch is for urllib3 v1.22 and later. 
12:04:17 raise SSLError(e, request=request) 12:04:17 12:04:17 > raise ConnectionError(e, request=request) 12:04:17 E requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=8183): Max retries exceeded with url: /rests/data/transportpce-portmapping:network/nodes=SPDR-SA1/mapping=XPDR1-NETWORK1 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 12:04:17 12:04:17 ../.tox/tests221/lib/python3.11/site-packages/requests/adapters.py:700: ConnectionError 12:04:17 ----------------------------- Captured stdout call ----------------------------- 12:04:17 execution of test_23_spdr_portmapping_XPDR1_NETWORK1 12:04:17 ____ TransportPCEPortMappingTesting.test_24_spdr_portmapping_XPDR2_CLIENT2 _____ 12:04:17 12:04:17 self = 12:04:17 12:04:17 def _new_conn(self) -> socket.socket: 12:04:17 """Establish a socket connection and set nodelay settings on it. 12:04:17 12:04:17 :return: New socket connection. 12:04:17 """ 12:04:17 try: 12:04:17 > sock = connection.create_connection( 12:04:17 (self._dns_host, self.port), 12:04:17 self.timeout, 12:04:17 source_address=self.source_address, 12:04:17 socket_options=self.socket_options, 12:04:17 ) 12:04:17 12:04:17 ../.tox/tests221/lib/python3.11/site-packages/urllib3/connection.py:198: 12:04:17 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 12:04:17 ../.tox/tests221/lib/python3.11/site-packages/urllib3/util/connection.py:85: in create_connection 12:04:17 raise err 12:04:17 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 12:04:17 12:04:17 address = ('localhost', 8183), timeout = 30, source_address = None 12:04:17 socket_options = [(6, 1, 1)] 12:04:17 12:04:17 def create_connection( 12:04:17 address: tuple[str, int], 12:04:17 timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 12:04:17 source_address: tuple[str, int] | None = None, 12:04:17 socket_options: _TYPE_SOCKET_OPTIONS | None = None, 12:04:17 ) -> socket.socket: 12:04:17 """Connect to *address* and return the socket object. 12:04:17 12:04:17 Convenience function. Connect to *address* (a 2-tuple ``(host, 12:04:17 port)``) and return the socket object. Passing the optional 12:04:17 *timeout* parameter will set the timeout on the socket instance 12:04:17 before attempting to connect. If no *timeout* is supplied, the 12:04:17 global default timeout setting returned by :func:`socket.getdefaulttimeout` 12:04:17 is used. If *source_address* is set it must be a tuple of (host, port) 12:04:17 for the socket to bind as a source address before making the connection. 12:04:17 An host of '' or port 0 tells the OS to use the default. 12:04:17 """ 12:04:17 12:04:17 host, port = address 12:04:17 if host.startswith("["): 12:04:17 host = host.strip("[]") 12:04:17 err = None 12:04:17 12:04:17 # Using the value from allowed_gai_family() in the context of getaddrinfo lets 12:04:17 # us select whether to work with IPv4 DNS records, IPv6 records, or both. 12:04:17 # The original create_connection function always returns all records. 
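For reference, the chain captured above (ConnectionRefusedError -> NewConnectionError -> MaxRetryError -> requests.exceptions.ConnectionError) is reproducible outside the suite. A minimal sketch, assuming only that nothing is listening on localhost:8183, as is the case for these test runs:

    import requests

    # URL copied from the failing test_23 request above.
    URL = ("http://localhost:8183/rests/data/transportpce-portmapping:network"
           "/nodes=SPDR-SA1/mapping=XPDR1-NETWORK1")

    try:
        # requests' stock adapter applies the Retry(total=0, ..., read=False)
        # policy visible in the locals above, so a refused connect fails fast.
        requests.get(URL, timeout=(30, 30))
    except requests.exceptions.ConnectionError as exc:
        print("controller unreachable:", exc)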
12:04:17 family = allowed_gai_family() 12:04:17 12:04:17 try: 12:04:17 host.encode("idna") 12:04:17 except UnicodeError: 12:04:17 raise LocationParseError(f"'{host}', label empty or too long") from None 12:04:17 12:04:17 for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM): 12:04:17 af, socktype, proto, canonname, sa = res 12:04:17 sock = None 12:04:17 try: 12:04:17 sock = socket.socket(af, socktype, proto) 12:04:17 12:04:17 # If provided, set socket level options before connecting. 12:04:17 _set_socket_options(sock, socket_options) 12:04:17 12:04:17 if timeout is not _DEFAULT_TIMEOUT: 12:04:17 sock.settimeout(timeout) 12:04:17 if source_address: 12:04:17 sock.bind(source_address) 12:04:17 > sock.connect(sa) 12:04:17 E ConnectionRefusedError: [Errno 111] Connection refused 12:04:17 12:04:17 ../.tox/tests221/lib/python3.11/site-packages/urllib3/util/connection.py:73: ConnectionRefusedError 12:04:17 12:04:17 The above exception was the direct cause of the following exception: 12:04:17 12:04:17 self = 12:04:17 method = 'GET' 12:04:17 url = '/rests/data/transportpce-portmapping:network/nodes=SPDR-SA1/mapping=XPDR2-CLIENT2' 12:04:17 body = None 12:04:17 headers = {'User-Agent': 'python-requests/2.32.4', 'Accept-Encoding': 'gzip, deflate', 'Accept': 'application/json', 'Connection': 'keep-alive', 'Content-Type': 'application/json', 'Authorization': 'Basic YWRtaW46YWRtaW4='} 12:04:17 retries = Retry(total=0, connect=None, read=False, redirect=None, status=None) 12:04:17 redirect = False, assert_same_host = False 12:04:17 timeout = Timeout(connect=30, read=30, total=None), pool_timeout = None 12:04:17 release_conn = False, chunked = False, body_pos = None, preload_content = False 12:04:17 decode_content = False, response_kw = {} 12:04:17 parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/rests/data/transportpce-portmapping:network/nodes=SPDR-SA1/mapping=XPDR2-CLIENT2', query=None, fragment=None) 12:04:17 destination_scheme = None, conn = None, release_this_conn = True 12:04:17 http_tunnel_required = False, err = None, clean_exit = False 12:04:17 12:04:17 def urlopen( # type: ignore[override] 12:04:17 self, 12:04:17 method: str, 12:04:17 url: str, 12:04:17 body: _TYPE_BODY | None = None, 12:04:17 headers: typing.Mapping[str, str] | None = None, 12:04:17 retries: Retry | bool | int | None = None, 12:04:17 redirect: bool = True, 12:04:17 assert_same_host: bool = True, 12:04:17 timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 12:04:17 pool_timeout: int | None = None, 12:04:17 release_conn: bool | None = None, 12:04:17 chunked: bool = False, 12:04:17 body_pos: _TYPE_BODY_POSITION | None = None, 12:04:17 preload_content: bool = True, 12:04:17 decode_content: bool = True, 12:04:17 **response_kw: typing.Any, 12:04:17 ) -> BaseHTTPResponse: 12:04:17 """ 12:04:17 Get a connection from the pool and perform an HTTP request. This is the 12:04:17 lowest level call for making a request, so you'll need to specify all 12:04:17 the raw details. 12:04:17 12:04:17 .. note:: 12:04:17 12:04:17 More commonly, it's appropriate to use a convenience method 12:04:17 such as :meth:`request`. 12:04:17 12:04:17 .. note:: 12:04:17 12:04:17 `release_conn` will only behave as expected if 12:04:17 `preload_content=False` because we want to make 12:04:17 `preload_content=False` the default behaviour someday soon without 12:04:17 breaking backwards compatibility. 12:04:17 12:04:17 :param method: 12:04:17 HTTP request method (such as GET, POST, PUT, etc.) 
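The ConnectionRefusedError above is raised at the plain-socket layer, inside the getaddrinfo/connect loop of create_connection(). Stripped of urllib3, the same probe looks roughly like this (host and port taken from the log):

    import socket

    host, port = "localhost", 8183
    # Resolve as urllib3 does, then try a TCP connect to each returned address.
    for af, socktype, proto, _canon, sa in socket.getaddrinfo(
            host, port, socket.AF_UNSPEC, socket.SOCK_STREAM):
        sock = socket.socket(af, socktype, proto)
        sock.settimeout(30)
        try:
            sock.connect(sa)      # here: ConnectionRefusedError [Errno 111]
            print("reachable via", sa)
            break
        except OSError as err:
            print("connect failed:", err)
        finally:
            sock.close()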
12:04:17 12:04:17 :param url: 12:04:17 The URL to perform the request on. 12:04:17 12:04:17 :param body: 12:04:17 Data to send in the request body, either :class:`str`, :class:`bytes`, 12:04:17 an iterable of :class:`str`/:class:`bytes`, or a file-like object. 12:04:17 12:04:17 :param headers: 12:04:17 Dictionary of custom headers to send, such as User-Agent, 12:04:17 If-None-Match, etc. If None, pool headers are used. If provided, 12:04:17 these headers completely replace any pool-specific headers. 12:04:17 12:04:17 :param retries: 12:04:17 Configure the number of retries to allow before raising a 12:04:17 :class:`~urllib3.exceptions.MaxRetryError` exception. 12:04:17 12:04:17 If ``None`` (default) will retry 3 times, see ``Retry.DEFAULT``. Pass a 12:04:17 :class:`~urllib3.util.retry.Retry` object for fine-grained control 12:04:17 over different types of retries. 12:04:17 Pass an integer number to retry connection errors that many times, 12:04:17 but no other types of errors. Pass zero to never retry. 12:04:17 12:04:17 If ``False``, then retries are disabled and any exception is raised 12:04:17 immediately. Also, instead of raising a MaxRetryError on redirects, 12:04:17 the redirect response will be returned. 12:04:17 12:04:17 :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. 12:04:17 12:04:17 :param redirect: 12:04:17 If True, automatically handle redirects (status codes 301, 302, 12:04:17 303, 307, 308). Each redirect counts as a retry. Disabling retries 12:04:17 will disable redirect, too. 12:04:17 12:04:17 :param assert_same_host: 12:04:17 If ``True``, will make sure that the host of the pool requests is 12:04:17 consistent else will raise HostChangedError. When ``False``, you can 12:04:17 use the pool on an HTTP proxy and request foreign hosts. 12:04:17 12:04:17 :param timeout: 12:04:17 If specified, overrides the default timeout for this one 12:04:17 request. It may be a float (in seconds) or an instance of 12:04:17 :class:`urllib3.util.Timeout`. 12:04:17 12:04:17 :param pool_timeout: 12:04:17 If set and the pool is set to block=True, then this method will 12:04:17 block for ``pool_timeout`` seconds and raise EmptyPoolError if no 12:04:17 connection is available within the time period. 12:04:17 12:04:17 :param bool preload_content: 12:04:17 If True, the response's body will be preloaded into memory. 12:04:17 12:04:17 :param bool decode_content: 12:04:17 If True, will attempt to decode the body based on the 12:04:17 'content-encoding' header. 12:04:17 12:04:17 :param release_conn: 12:04:17 If False, then the urlopen call will not release the connection 12:04:17 back into the pool once a response is received (but will release if 12:04:17 you read the entire contents of the response such as when 12:04:17 `preload_content=True`). This is useful if you're not preloading 12:04:17 the response's content immediately. You will need to call 12:04:17 ``r.release_conn()`` on the response ``r`` to return the connection 12:04:17 back into the pool. If None, it takes the value of ``preload_content`` 12:04:17 which defaults to ``True``. 12:04:17 12:04:17 :param bool chunked: 12:04:17 If True, urllib3 will send the body using chunked transfer 12:04:17 encoding. Otherwise, urllib3 will send the body using the standard 12:04:17 content-length form. Defaults to False. 12:04:17 12:04:17 :param int body_pos: 12:04:17 Position to seek to in file-like body in the event of a retry or 12:04:17 redirect. 
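The retries parameter documented above is what requests feeds from HTTPAdapter.max_retries. A sketch of how a caller could opt into real connect retries instead of the single attempt used here (the policy values below are illustrative, not what test_utils configures):

    import requests
    from requests.adapters import HTTPAdapter
    from urllib3.util.retry import Retry

    session = requests.Session()
    # Retry refused connects a few times with backoff before giving up.
    policy = Retry(total=3, connect=3, backoff_factor=0.5, allowed_methods=["GET"])
    session.mount("http://", HTTPAdapter(max_retries=policy))

    try:
        session.get("http://localhost:8183/rests/data/"
                    "transportpce-portmapping:network/nodes=SPDR-SA1",
                    timeout=(30, 30))
    except requests.exceptions.ConnectionError:
        pass  # still raised once the extra attempts are exhausted and 8183 stays down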
Typically this won't need to be set because urllib3 will 12:04:17 auto-populate the value when needed. 12:04:17 """ 12:04:17 parsed_url = parse_url(url) 12:04:17 destination_scheme = parsed_url.scheme 12:04:17 12:04:17 if headers is None: 12:04:17 headers = self.headers 12:04:17 12:04:17 if not isinstance(retries, Retry): 12:04:17 retries = Retry.from_int(retries, redirect=redirect, default=self.retries) 12:04:17 12:04:17 if release_conn is None: 12:04:17 release_conn = preload_content 12:04:17 12:04:17 # Check host 12:04:17 if assert_same_host and not self.is_same_host(url): 12:04:17 raise HostChangedError(self, url, retries) 12:04:17 12:04:17 # Ensure that the URL we're connecting to is properly encoded 12:04:17 if url.startswith("/"): 12:04:17 url = to_str(_encode_target(url)) 12:04:17 else: 12:04:17 url = to_str(parsed_url.url) 12:04:17 12:04:17 conn = None 12:04:17 12:04:17 # Track whether `conn` needs to be released before 12:04:17 # returning/raising/recursing. Update this variable if necessary, and 12:04:17 # leave `release_conn` constant throughout the function. That way, if 12:04:17 # the function recurses, the original value of `release_conn` will be 12:04:17 # passed down into the recursive call, and its value will be respected. 12:04:17 # 12:04:17 # See issue #651 [1] for details. 12:04:17 # 12:04:17 # [1] 12:04:17 release_this_conn = release_conn 12:04:17 12:04:17 http_tunnel_required = connection_requires_http_tunnel( 12:04:17 self.proxy, self.proxy_config, destination_scheme 12:04:17 ) 12:04:17 12:04:17 # Merge the proxy headers. Only done when not using HTTP CONNECT. We 12:04:17 # have to copy the headers dict so we can safely change it without those 12:04:17 # changes being reflected in anyone else's copy. 12:04:17 if not http_tunnel_required: 12:04:17 headers = headers.copy() # type: ignore[attr-defined] 12:04:17 headers.update(self.proxy_headers) # type: ignore[union-attr] 12:04:17 12:04:17 # Must keep the exception bound to a separate variable or else Python 3 12:04:17 # complains about UnboundLocalError. 12:04:17 err = None 12:04:17 12:04:17 # Keep track of whether we cleanly exited the except block. This 12:04:17 # ensures we do proper cleanup in finally. 12:04:17 clean_exit = False 12:04:17 12:04:17 # Rewind body position, if needed. Record current position 12:04:17 # for future rewinds in the event of a redirect/retry. 12:04:17 body_pos = set_file_position(body, body_pos) 12:04:17 12:04:17 try: 12:04:17 # Request a connection from the queue. 12:04:17 timeout_obj = self._get_timeout(timeout) 12:04:17 conn = self._get_conn(timeout=pool_timeout) 12:04:17 12:04:17 conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] 12:04:17 12:04:17 # Is this a closed/new connection that requires CONNECT tunnelling? 12:04:17 if self.proxy is not None and http_tunnel_required and conn.is_closed: 12:04:17 try: 12:04:17 self._prepare_proxy(conn) 12:04:17 except (BaseSSLError, OSError, SocketTimeout) as e: 12:04:17 self._raise_timeout( 12:04:17 err=e, url=self.proxy.url, timeout_value=conn.timeout 12:04:17 ) 12:04:17 raise 12:04:17 12:04:17 # If we're going to release the connection in ``finally:``, then 12:04:17 # the response doesn't need to know about the connection. Otherwise 12:04:17 # it will also try to release it and we'll have a double-release 12:04:17 # mess. 
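urlopen() above is the call requests ultimately delegates to; driving urllib3 directly with the same arguments (and the controller still down) ends in the same MaxRetryError:

    import urllib3
    from urllib3.util.retry import Retry

    pool = urllib3.HTTPConnectionPool(
        "localhost", 8183, timeout=urllib3.util.Timeout(connect=30, read=30))
    try:
        pool.urlopen(
            "GET",
            "/rests/data/transportpce-portmapping:network/nodes=SPDR-SA1/mapping=XPDR1-NETWORK1",
            retries=Retry(total=0, read=False),  # same policy as in the locals above
            redirect=False,
            preload_content=False,
        )
    except urllib3.exceptions.MaxRetryError as exc:
        print(exc.reason)  # NewConnectionError: ... Connection refused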
12:04:17 response_conn = conn if not release_conn else None 12:04:17 12:04:17 # Make the request on the HTTPConnection object 12:04:17 > response = self._make_request( 12:04:17 conn, 12:04:17 method, 12:04:17 url, 12:04:17 timeout=timeout_obj, 12:04:17 body=body, 12:04:17 headers=headers, 12:04:17 chunked=chunked, 12:04:17 retries=retries, 12:04:17 response_conn=response_conn, 12:04:17 preload_content=preload_content, 12:04:17 decode_content=decode_content, 12:04:17 **response_kw, 12:04:17 ) 12:04:17 12:04:17 ../.tox/tests221/lib/python3.11/site-packages/urllib3/connectionpool.py:787: 12:04:17 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 12:04:17 ../.tox/tests221/lib/python3.11/site-packages/urllib3/connectionpool.py:493: in _make_request 12:04:17 conn.request( 12:04:17 ../.tox/tests221/lib/python3.11/site-packages/urllib3/connection.py:494: in request 12:04:17 self.endheaders() 12:04:17 /opt/pyenv/versions/3.11.10/lib/python3.11/http/client.py:1298: in endheaders 12:04:17 self._send_output(message_body, encode_chunked=encode_chunked) 12:04:17 /opt/pyenv/versions/3.11.10/lib/python3.11/http/client.py:1058: in _send_output 12:04:17 self.send(msg) 12:04:17 /opt/pyenv/versions/3.11.10/lib/python3.11/http/client.py:996: in send 12:04:17 self.connect() 12:04:17 ../.tox/tests221/lib/python3.11/site-packages/urllib3/connection.py:325: in connect 12:04:17 self.sock = self._new_conn() 12:04:17 ^^^^^^^^^^^^^^^^ 12:04:17 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 12:04:17 12:04:17 self = 12:04:17 12:04:17 def _new_conn(self) -> socket.socket: 12:04:17 """Establish a socket connection and set nodelay settings on it. 12:04:17 12:04:17 :return: New socket connection. 12:04:17 """ 12:04:17 try: 12:04:17 sock = connection.create_connection( 12:04:17 (self._dns_host, self.port), 12:04:17 self.timeout, 12:04:17 source_address=self.source_address, 12:04:17 socket_options=self.socket_options, 12:04:17 ) 12:04:17 except socket.gaierror as e: 12:04:17 raise NameResolutionError(self.host, self, e) from e 12:04:17 except SocketTimeout as e: 12:04:17 raise ConnectTimeoutError( 12:04:17 self, 12:04:17 f"Connection to {self.host} timed out. (connect timeout={self.timeout})", 12:04:17 ) from e 12:04:17 12:04:17 except OSError as e: 12:04:17 > raise NewConnectionError( 12:04:17 self, f"Failed to establish a new connection: {e}" 12:04:17 ) from e 12:04:17 E urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused 12:04:17 12:04:17 ../.tox/tests221/lib/python3.11/site-packages/urllib3/connection.py:213: NewConnectionError 12:04:17 12:04:17 The above exception was the direct cause of the following exception: 12:04:17 12:04:17 self = 12:04:17 request = , stream = False 12:04:17 timeout = Timeout(connect=30, read=30, total=None), verify = True, cert = None 12:04:17 proxies = OrderedDict() 12:04:17 12:04:17 def send( 12:04:17 self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 12:04:17 ): 12:04:17 """Sends PreparedRequest object. Returns Response object. 12:04:17 12:04:17 :param request: The :class:`PreparedRequest ` being sent. 12:04:17 :param stream: (optional) Whether to stream the request content. 12:04:17 :param timeout: (optional) How long to wait for the server to send 12:04:17 data before giving up, as a float, or a :ref:`(connect timeout, 12:04:17 read timeout) ` tuple. 
12:04:17 :type timeout: float or tuple or urllib3 Timeout object 12:04:17 :param verify: (optional) Either a boolean, in which case it controls whether 12:04:17 we verify the server's TLS certificate, or a string, in which case it 12:04:17 must be a path to a CA bundle to use 12:04:17 :param cert: (optional) Any user-provided SSL certificate to be trusted. 12:04:17 :param proxies: (optional) The proxies dictionary to apply to the request. 12:04:17 :rtype: requests.Response 12:04:17 """ 12:04:17 12:04:17 try: 12:04:17 conn = self.get_connection_with_tls_context( 12:04:17 request, verify, proxies=proxies, cert=cert 12:04:17 ) 12:04:17 except LocationValueError as e: 12:04:17 raise InvalidURL(e, request=request) 12:04:17 12:04:17 self.cert_verify(conn, request.url, verify, cert) 12:04:17 url = self.request_url(request, proxies) 12:04:17 self.add_headers( 12:04:17 request, 12:04:17 stream=stream, 12:04:17 timeout=timeout, 12:04:17 verify=verify, 12:04:17 cert=cert, 12:04:17 proxies=proxies, 12:04:17 ) 12:04:17 12:04:17 chunked = not (request.body is None or "Content-Length" in request.headers) 12:04:17 12:04:17 if isinstance(timeout, tuple): 12:04:17 try: 12:04:17 connect, read = timeout 12:04:17 timeout = TimeoutSauce(connect=connect, read=read) 12:04:17 except ValueError: 12:04:17 raise ValueError( 12:04:17 f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 12:04:17 f"or a single float to set both timeouts to the same value." 12:04:17 ) 12:04:17 elif isinstance(timeout, TimeoutSauce): 12:04:17 pass 12:04:17 else: 12:04:17 timeout = TimeoutSauce(connect=timeout, read=timeout) 12:04:17 12:04:17 try: 12:04:17 > resp = conn.urlopen( 12:04:17 method=request.method, 12:04:17 url=url, 12:04:17 body=request.body, 12:04:17 headers=request.headers, 12:04:17 redirect=False, 12:04:17 assert_same_host=False, 12:04:17 preload_content=False, 12:04:17 decode_content=False, 12:04:17 retries=self.max_retries, 12:04:17 timeout=timeout, 12:04:17 chunked=chunked, 12:04:17 ) 12:04:17 12:04:17 ../.tox/tests221/lib/python3.11/site-packages/requests/adapters.py:667: 12:04:17 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 12:04:17 ../.tox/tests221/lib/python3.11/site-packages/urllib3/connectionpool.py:841: in urlopen 12:04:17 retries = retries.increment( 12:04:17 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 12:04:17 12:04:17 self = Retry(total=0, connect=None, read=False, redirect=None, status=None) 12:04:17 method = 'GET' 12:04:17 url = '/rests/data/transportpce-portmapping:network/nodes=SPDR-SA1/mapping=XPDR2-CLIENT2' 12:04:17 response = None 12:04:17 error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused') 12:04:17 _pool = 12:04:17 _stacktrace = 12:04:17 12:04:17 def increment( 12:04:17 self, 12:04:17 method: str | None = None, 12:04:17 url: str | None = None, 12:04:17 response: BaseHTTPResponse | None = None, 12:04:17 error: Exception | None = None, 12:04:17 _pool: ConnectionPool | None = None, 12:04:17 _stacktrace: TracebackType | None = None, 12:04:17 ) -> Self: 12:04:17 """Return a new Retry object with incremented retry counters. 12:04:17 12:04:17 :param response: A response object, or None, if the server did not 12:04:17 return a response. 12:04:17 :type response: :class:`~urllib3.response.BaseHTTPResponse` 12:04:17 :param Exception error: An error encountered during the request, or 12:04:17 None if the response was received successfully. 
12:04:17 12:04:17 :return: A new ``Retry`` object. 12:04:17 """ 12:04:17 if self.total is False and error: 12:04:17 # Disabled, indicate to re-raise the error. 12:04:17 raise reraise(type(error), error, _stacktrace) 12:04:17 12:04:17 total = self.total 12:04:17 if total is not None: 12:04:17 total -= 1 12:04:17 12:04:17 connect = self.connect 12:04:17 read = self.read 12:04:17 redirect = self.redirect 12:04:17 status_count = self.status 12:04:17 other = self.other 12:04:17 cause = "unknown" 12:04:17 status = None 12:04:17 redirect_location = None 12:04:17 12:04:17 if error and self._is_connection_error(error): 12:04:17 # Connect retry? 12:04:17 if connect is False: 12:04:17 raise reraise(type(error), error, _stacktrace) 12:04:17 elif connect is not None: 12:04:17 connect -= 1 12:04:17 12:04:17 elif error and self._is_read_error(error): 12:04:17 # Read retry? 12:04:17 if read is False or method is None or not self._is_method_retryable(method): 12:04:17 raise reraise(type(error), error, _stacktrace) 12:04:17 elif read is not None: 12:04:17 read -= 1 12:04:17 12:04:17 elif error: 12:04:17 # Other retry? 12:04:17 if other is not None: 12:04:17 other -= 1 12:04:17 12:04:17 elif response and response.get_redirect_location(): 12:04:17 # Redirect retry? 12:04:17 if redirect is not None: 12:04:17 redirect -= 1 12:04:17 cause = "too many redirects" 12:04:17 response_redirect_location = response.get_redirect_location() 12:04:17 if response_redirect_location: 12:04:17 redirect_location = response_redirect_location 12:04:17 status = response.status 12:04:17 12:04:17 else: 12:04:17 # Incrementing because of a server error like a 500 in 12:04:17 # status_forcelist and the given method is in the allowed_methods 12:04:17 cause = ResponseError.GENERIC_ERROR 12:04:17 if response and response.status: 12:04:17 if status_count is not None: 12:04:17 status_count -= 1 12:04:17 cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status) 12:04:17 status = response.status 12:04:17 12:04:17 history = self.history + ( 12:04:17 RequestHistory(method, url, error, status, redirect_location), 12:04:17 ) 12:04:17 12:04:17 new_retry = self.new( 12:04:17 total=total, 12:04:17 connect=connect, 12:04:17 read=read, 12:04:17 redirect=redirect, 12:04:17 status=status_count, 12:04:17 other=other, 12:04:17 history=history, 12:04:17 ) 12:04:17 12:04:17 if new_retry.is_exhausted(): 12:04:17 reason = error or ResponseError(cause) 12:04:17 > raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type] 12:04:17 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 12:04:17 E urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=8183): Max retries exceeded with url: /rests/data/transportpce-portmapping:network/nodes=SPDR-SA1/mapping=XPDR2-CLIENT2 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 12:04:17 12:04:17 ../.tox/tests221/lib/python3.11/site-packages/urllib3/util/retry.py:519: MaxRetryError 12:04:17 12:04:17 During handling of the above exception, another exception occurred: 12:04:17 12:04:17 self = 12:04:17 12:04:17 def test_24_spdr_portmapping_XPDR2_CLIENT2(self): 12:04:17 > response = test_utils.get_portmapping_node_attr("SPDR-SA1", "mapping", "XPDR2-CLIENT2") 12:04:17 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 12:04:17 12:04:17 transportpce_tests/2.2.1/test01_portmapping.py:292: 12:04:17 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 12:04:17 
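increment() above hands back a copy with the counters decremented and raises as soon as the new object is exhausted. The same bookkeeping in isolation, using the Retry and error values from the locals:

    from urllib3.util.retry import Retry
    from urllib3.exceptions import MaxRetryError, NewConnectionError

    retries = Retry(total=0, connect=None, read=False, redirect=None, status=None)
    error = NewConnectionError(
        None, "Failed to establish a new connection: [Errno 111] Connection refused")

    try:
        # total drops from 0 to -1, the new Retry is exhausted, and
        # MaxRetryError is raised from the original connection error.
        retries.increment(
            method="GET",
            url="/rests/data/transportpce-portmapping:network/nodes=SPDR-SA1/mapping=XPDR2-CLIENT2",
            error=error)
    except MaxRetryError as exc:
        print(exc.reason is error)  # True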
transportpce_tests/common/test_utils.py:492: in get_portmapping_node_attr 12:04:17 response = get_request(target_url) 12:04:17 ^^^^^^^^^^^^^^^^^^^^^^^ 12:04:17 transportpce_tests/common/test_utils.py:116: in get_request 12:04:17 return requests.request( 12:04:17 ../.tox/tests221/lib/python3.11/site-packages/requests/api.py:59: in request 12:04:17 return session.request(method=method, url=url, **kwargs) 12:04:17 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 12:04:17 ../.tox/tests221/lib/python3.11/site-packages/requests/sessions.py:589: in request 12:04:17 resp = self.send(prep, **send_kwargs) 12:04:17 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 12:04:17 ../.tox/tests221/lib/python3.11/site-packages/requests/sessions.py:703: in send 12:04:17 r = adapter.send(request, **kwargs) 12:04:17 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 12:04:17 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 12:04:17 12:04:17 self = 12:04:17 request = , stream = False 12:04:17 timeout = Timeout(connect=30, read=30, total=None), verify = True, cert = None 12:04:17 proxies = OrderedDict() 12:04:17 12:04:17 def send( 12:04:17 self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 12:04:17 ): 12:04:17 """Sends PreparedRequest object. Returns Response object. 12:04:17 12:04:17 :param request: The :class:`PreparedRequest ` being sent. 12:04:17 :param stream: (optional) Whether to stream the request content. 12:04:17 :param timeout: (optional) How long to wait for the server to send 12:04:17 data before giving up, as a float, or a :ref:`(connect timeout, 12:04:17 read timeout) ` tuple. 12:04:17 :type timeout: float or tuple or urllib3 Timeout object 12:04:17 :param verify: (optional) Either a boolean, in which case it controls whether 12:04:17 we verify the server's TLS certificate, or a string, in which case it 12:04:17 must be a path to a CA bundle to use 12:04:17 :param cert: (optional) Any user-provided SSL certificate to be trusted. 12:04:17 :param proxies: (optional) The proxies dictionary to apply to the request. 12:04:17 :rtype: requests.Response 12:04:17 """ 12:04:17 12:04:17 try: 12:04:17 conn = self.get_connection_with_tls_context( 12:04:17 request, verify, proxies=proxies, cert=cert 12:04:17 ) 12:04:17 except LocationValueError as e: 12:04:17 raise InvalidURL(e, request=request) 12:04:17 12:04:17 self.cert_verify(conn, request.url, verify, cert) 12:04:17 url = self.request_url(request, proxies) 12:04:17 self.add_headers( 12:04:17 request, 12:04:17 stream=stream, 12:04:17 timeout=timeout, 12:04:17 verify=verify, 12:04:17 cert=cert, 12:04:17 proxies=proxies, 12:04:17 ) 12:04:17 12:04:17 chunked = not (request.body is None or "Content-Length" in request.headers) 12:04:17 12:04:17 if isinstance(timeout, tuple): 12:04:17 try: 12:04:17 connect, read = timeout 12:04:17 timeout = TimeoutSauce(connect=connect, read=read) 12:04:17 except ValueError: 12:04:17 raise ValueError( 12:04:17 f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 12:04:17 f"or a single float to set both timeouts to the same value." 
12:04:17 ) 12:04:17 elif isinstance(timeout, TimeoutSauce): 12:04:17 pass 12:04:17 else: 12:04:17 timeout = TimeoutSauce(connect=timeout, read=timeout) 12:04:17 12:04:17 try: 12:04:17 resp = conn.urlopen( 12:04:17 method=request.method, 12:04:17 url=url, 12:04:17 body=request.body, 12:04:17 headers=request.headers, 12:04:17 redirect=False, 12:04:17 assert_same_host=False, 12:04:17 preload_content=False, 12:04:17 decode_content=False, 12:04:17 retries=self.max_retries, 12:04:17 timeout=timeout, 12:04:17 chunked=chunked, 12:04:17 ) 12:04:17 12:04:17 except (ProtocolError, OSError) as err: 12:04:17 raise ConnectionError(err, request=request) 12:04:17 12:04:17 except MaxRetryError as e: 12:04:17 if isinstance(e.reason, ConnectTimeoutError): 12:04:17 # TODO: Remove this in 3.0.0: see #2811 12:04:17 if not isinstance(e.reason, NewConnectionError): 12:04:17 raise ConnectTimeout(e, request=request) 12:04:17 12:04:17 if isinstance(e.reason, ResponseError): 12:04:17 raise RetryError(e, request=request) 12:04:17 12:04:17 if isinstance(e.reason, _ProxyError): 12:04:17 raise ProxyError(e, request=request) 12:04:17 12:04:17 if isinstance(e.reason, _SSLError): 12:04:17 # This branch is for urllib3 v1.22 and later. 12:04:17 raise SSLError(e, request=request) 12:04:17 12:04:17 > raise ConnectionError(e, request=request) 12:04:17 E requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=8183): Max retries exceeded with url: /rests/data/transportpce-portmapping:network/nodes=SPDR-SA1/mapping=XPDR2-CLIENT2 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 12:04:17 12:04:17 ../.tox/tests221/lib/python3.11/site-packages/requests/adapters.py:700: ConnectionError 12:04:17 ----------------------------- Captured stdout call ----------------------------- 12:04:17 execution of test_24_spdr_portmapping_XPDR2_CLIENT2 12:04:17 ____ TransportPCEPortMappingTesting.test_25_spdr_portmapping_XPDR2_NETWORK2 ____ 12:04:17 12:04:17 self = 12:04:17 12:04:17 def _new_conn(self) -> socket.socket: 12:04:17 """Establish a socket connection and set nodelay settings on it. 12:04:17 12:04:17 :return: New socket connection. 12:04:17 """ 12:04:17 try: 12:04:17 > sock = connection.create_connection( 12:04:17 (self._dns_host, self.port), 12:04:17 self.timeout, 12:04:17 source_address=self.source_address, 12:04:17 socket_options=self.socket_options, 12:04:17 ) 12:04:17 12:04:17 ../.tox/tests221/lib/python3.11/site-packages/urllib3/connection.py:198: 12:04:17 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 12:04:17 ../.tox/tests221/lib/python3.11/site-packages/urllib3/util/connection.py:85: in create_connection 12:04:17 raise err 12:04:17 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 12:04:17 12:04:17 address = ('localhost', 8183), timeout = 30, source_address = None 12:04:17 socket_options = [(6, 1, 1)] 12:04:17 12:04:17 def create_connection( 12:04:17 address: tuple[str, int], 12:04:17 timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 12:04:17 source_address: tuple[str, int] | None = None, 12:04:17 socket_options: _TYPE_SOCKET_OPTIONS | None = None, 12:04:17 ) -> socket.socket: 12:04:17 """Connect to *address* and return the socket object. 12:04:17 12:04:17 Convenience function. Connect to *address* (a 2-tuple ``(host, 12:04:17 port)``) and return the socket object. Passing the optional 12:04:17 *timeout* parameter will set the timeout on the socket instance 12:04:17 before attempting to connect. 
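The except-MaxRetryError block above fans the failure out into requests' public exception types. A caller that needs to tell them apart can catch them individually (a sketch, not code from test_utils; the URL is illustrative):

    import requests
    from requests.exceptions import (
        ConnectionError, ConnectTimeout, ProxyError, RetryError, SSLError)

    try:
        requests.get("http://localhost:8183/rests/data/transportpce-portmapping:network",
                     timeout=(30, 30))
    except ConnectTimeout:
        print("TCP connect timed out")
    except RetryError:
        print("retries exhausted on a retryable HTTP status")
    except ProxyError:
        print("proxy refused or failed the request")
    except SSLError:
        print("TLS handshake or certificate verification failed")
    except ConnectionError as exc:
        print("connection error:", exc)  # refused connections, as in this log, land here

Note that the subclasses (ConnectTimeout, ProxyError, SSLError) must be caught before the generic ConnectionError, as above.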
If no *timeout* is supplied, the 12:04:17 global default timeout setting returned by :func:`socket.getdefaulttimeout` 12:04:17 is used. If *source_address* is set it must be a tuple of (host, port) 12:04:17 for the socket to bind as a source address before making the connection. 12:04:17 An host of '' or port 0 tells the OS to use the default. 12:04:17 """ 12:04:17 12:04:17 host, port = address 12:04:17 if host.startswith("["): 12:04:17 host = host.strip("[]") 12:04:17 err = None 12:04:17 12:04:17 # Using the value from allowed_gai_family() in the context of getaddrinfo lets 12:04:17 # us select whether to work with IPv4 DNS records, IPv6 records, or both. 12:04:17 # The original create_connection function always returns all records. 12:04:17 family = allowed_gai_family() 12:04:17 12:04:17 try: 12:04:17 host.encode("idna") 12:04:17 except UnicodeError: 12:04:17 raise LocationParseError(f"'{host}', label empty or too long") from None 12:04:17 12:04:17 for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM): 12:04:17 af, socktype, proto, canonname, sa = res 12:04:17 sock = None 12:04:17 try: 12:04:17 sock = socket.socket(af, socktype, proto) 12:04:17 12:04:17 # If provided, set socket level options before connecting. 12:04:17 _set_socket_options(sock, socket_options) 12:04:17 12:04:17 if timeout is not _DEFAULT_TIMEOUT: 12:04:17 sock.settimeout(timeout) 12:04:17 if source_address: 12:04:17 sock.bind(source_address) 12:04:17 > sock.connect(sa) 12:04:17 E ConnectionRefusedError: [Errno 111] Connection refused 12:04:17 12:04:17 ../.tox/tests221/lib/python3.11/site-packages/urllib3/util/connection.py:73: ConnectionRefusedError 12:04:17 12:04:17 The above exception was the direct cause of the following exception: 12:04:17 12:04:17 self = 12:04:17 method = 'GET' 12:04:17 url = '/rests/data/transportpce-portmapping:network/nodes=SPDR-SA1/mapping=XPDR2-NETWORK2' 12:04:17 body = None 12:04:17 headers = {'User-Agent': 'python-requests/2.32.4', 'Accept-Encoding': 'gzip, deflate', 'Accept': 'application/json', 'Connection': 'keep-alive', 'Content-Type': 'application/json', 'Authorization': 'Basic YWRtaW46YWRtaW4='} 12:04:17 retries = Retry(total=0, connect=None, read=False, redirect=None, status=None) 12:04:17 redirect = False, assert_same_host = False 12:04:17 timeout = Timeout(connect=30, read=30, total=None), pool_timeout = None 12:04:17 release_conn = False, chunked = False, body_pos = None, preload_content = False 12:04:17 decode_content = False, response_kw = {} 12:04:17 parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/rests/data/transportpce-portmapping:network/nodes=SPDR-SA1/mapping=XPDR2-NETWORK2', query=None, fragment=None) 12:04:17 destination_scheme = None, conn = None, release_this_conn = True 12:04:17 http_tunnel_required = False, err = None, clean_exit = False 12:04:17 12:04:17 def urlopen( # type: ignore[override] 12:04:17 self, 12:04:17 method: str, 12:04:17 url: str, 12:04:17 body: _TYPE_BODY | None = None, 12:04:17 headers: typing.Mapping[str, str] | None = None, 12:04:17 retries: Retry | bool | int | None = None, 12:04:17 redirect: bool = True, 12:04:17 assert_same_host: bool = True, 12:04:17 timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 12:04:17 pool_timeout: int | None = None, 12:04:17 release_conn: bool | None = None, 12:04:17 chunked: bool = False, 12:04:17 body_pos: _TYPE_BODY_POSITION | None = None, 12:04:17 preload_content: bool = True, 12:04:17 decode_content: bool = True, 12:04:17 **response_kw: typing.Any, 12:04:17 ) -> BaseHTTPResponse: 
12:04:17 """ 12:04:17 Get a connection from the pool and perform an HTTP request. This is the 12:04:17 lowest level call for making a request, so you'll need to specify all 12:04:17 the raw details. 12:04:17 12:04:17 .. note:: 12:04:17 12:04:17 More commonly, it's appropriate to use a convenience method 12:04:17 such as :meth:`request`. 12:04:17 12:04:17 .. note:: 12:04:17 12:04:17 `release_conn` will only behave as expected if 12:04:17 `preload_content=False` because we want to make 12:04:17 `preload_content=False` the default behaviour someday soon without 12:04:17 breaking backwards compatibility. 12:04:17 12:04:17 :param method: 12:04:17 HTTP request method (such as GET, POST, PUT, etc.) 12:04:17 12:04:17 :param url: 12:04:17 The URL to perform the request on. 12:04:17 12:04:17 :param body: 12:04:17 Data to send in the request body, either :class:`str`, :class:`bytes`, 12:04:17 an iterable of :class:`str`/:class:`bytes`, or a file-like object. 12:04:17 12:04:17 :param headers: 12:04:17 Dictionary of custom headers to send, such as User-Agent, 12:04:17 If-None-Match, etc. If None, pool headers are used. If provided, 12:04:17 these headers completely replace any pool-specific headers. 12:04:17 12:04:17 :param retries: 12:04:17 Configure the number of retries to allow before raising a 12:04:17 :class:`~urllib3.exceptions.MaxRetryError` exception. 12:04:17 12:04:17 If ``None`` (default) will retry 3 times, see ``Retry.DEFAULT``. Pass a 12:04:17 :class:`~urllib3.util.retry.Retry` object for fine-grained control 12:04:17 over different types of retries. 12:04:17 Pass an integer number to retry connection errors that many times, 12:04:17 but no other types of errors. Pass zero to never retry. 12:04:17 12:04:17 If ``False``, then retries are disabled and any exception is raised 12:04:17 immediately. Also, instead of raising a MaxRetryError on redirects, 12:04:17 the redirect response will be returned. 12:04:17 12:04:17 :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. 12:04:17 12:04:17 :param redirect: 12:04:17 If True, automatically handle redirects (status codes 301, 302, 12:04:17 303, 307, 308). Each redirect counts as a retry. Disabling retries 12:04:17 will disable redirect, too. 12:04:17 12:04:17 :param assert_same_host: 12:04:17 If ``True``, will make sure that the host of the pool requests is 12:04:17 consistent else will raise HostChangedError. When ``False``, you can 12:04:17 use the pool on an HTTP proxy and request foreign hosts. 12:04:17 12:04:17 :param timeout: 12:04:17 If specified, overrides the default timeout for this one 12:04:17 request. It may be a float (in seconds) or an instance of 12:04:17 :class:`urllib3.util.Timeout`. 12:04:17 12:04:17 :param pool_timeout: 12:04:17 If set and the pool is set to block=True, then this method will 12:04:17 block for ``pool_timeout`` seconds and raise EmptyPoolError if no 12:04:17 connection is available within the time period. 12:04:17 12:04:17 :param bool preload_content: 12:04:17 If True, the response's body will be preloaded into memory. 12:04:17 12:04:17 :param bool decode_content: 12:04:17 If True, will attempt to decode the body based on the 12:04:17 'content-encoding' header. 12:04:17 12:04:17 :param release_conn: 12:04:17 If False, then the urlopen call will not release the connection 12:04:17 back into the pool once a response is received (but will release if 12:04:17 you read the entire contents of the response such as when 12:04:17 `preload_content=True`). 
This is useful if you're not preloading 12:04:17 the response's content immediately. You will need to call 12:04:17 ``r.release_conn()`` on the response ``r`` to return the connection 12:04:17 back into the pool. If None, it takes the value of ``preload_content`` 12:04:17 which defaults to ``True``. 12:04:17 12:04:17 :param bool chunked: 12:04:17 If True, urllib3 will send the body using chunked transfer 12:04:17 encoding. Otherwise, urllib3 will send the body using the standard 12:04:17 content-length form. Defaults to False. 12:04:17 12:04:17 :param int body_pos: 12:04:17 Position to seek to in file-like body in the event of a retry or 12:04:17 redirect. Typically this won't need to be set because urllib3 will 12:04:17 auto-populate the value when needed. 12:04:17 """ 12:04:17 parsed_url = parse_url(url) 12:04:17 destination_scheme = parsed_url.scheme 12:04:17 12:04:17 if headers is None: 12:04:17 headers = self.headers 12:04:17 12:04:17 if not isinstance(retries, Retry): 12:04:17 retries = Retry.from_int(retries, redirect=redirect, default=self.retries) 12:04:17 12:04:17 if release_conn is None: 12:04:17 release_conn = preload_content 12:04:17 12:04:17 # Check host 12:04:17 if assert_same_host and not self.is_same_host(url): 12:04:17 raise HostChangedError(self, url, retries) 12:04:17 12:04:17 # Ensure that the URL we're connecting to is properly encoded 12:04:17 if url.startswith("/"): 12:04:17 url = to_str(_encode_target(url)) 12:04:17 else: 12:04:17 url = to_str(parsed_url.url) 12:04:17 12:04:17 conn = None 12:04:17 12:04:17 # Track whether `conn` needs to be released before 12:04:17 # returning/raising/recursing. Update this variable if necessary, and 12:04:17 # leave `release_conn` constant throughout the function. That way, if 12:04:17 # the function recurses, the original value of `release_conn` will be 12:04:17 # passed down into the recursive call, and its value will be respected. 12:04:17 # 12:04:17 # See issue #651 [1] for details. 12:04:17 # 12:04:17 # [1] 12:04:17 release_this_conn = release_conn 12:04:17 12:04:17 http_tunnel_required = connection_requires_http_tunnel( 12:04:17 self.proxy, self.proxy_config, destination_scheme 12:04:17 ) 12:04:17 12:04:17 # Merge the proxy headers. Only done when not using HTTP CONNECT. We 12:04:17 # have to copy the headers dict so we can safely change it without those 12:04:17 # changes being reflected in anyone else's copy. 12:04:17 if not http_tunnel_required: 12:04:17 headers = headers.copy() # type: ignore[attr-defined] 12:04:17 headers.update(self.proxy_headers) # type: ignore[union-attr] 12:04:17 12:04:17 # Must keep the exception bound to a separate variable or else Python 3 12:04:17 # complains about UnboundLocalError. 12:04:17 err = None 12:04:17 12:04:17 # Keep track of whether we cleanly exited the except block. This 12:04:17 # ensures we do proper cleanup in finally. 12:04:17 clean_exit = False 12:04:17 12:04:17 # Rewind body position, if needed. Record current position 12:04:17 # for future rewinds in the event of a redirect/retry. 12:04:17 body_pos = set_file_position(body, body_pos) 12:04:17 12:04:17 try: 12:04:17 # Request a connection from the queue. 12:04:17 timeout_obj = self._get_timeout(timeout) 12:04:17 conn = self._get_conn(timeout=pool_timeout) 12:04:17 12:04:17 conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] 12:04:17 12:04:17 # Is this a closed/new connection that requires CONNECT tunnelling? 
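The Timeout(connect=30, read=30, total=None) object in the locals above is simply the urllib3 form of the timeout=(30, 30) tuple the tests pass to requests; both name the same two limits:

    from urllib3.util import Timeout

    # What requests builds from timeout=(30, 30) before handing it to urlopen().
    t = Timeout(connect=30, read=30)
    print(t.connect_timeout, t.read_timeout, t.total)  # 30 30 None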
12:04:17 if self.proxy is not None and http_tunnel_required and conn.is_closed: 12:04:17 try: 12:04:17 self._prepare_proxy(conn) 12:04:17 except (BaseSSLError, OSError, SocketTimeout) as e: 12:04:17 self._raise_timeout( 12:04:17 err=e, url=self.proxy.url, timeout_value=conn.timeout 12:04:17 ) 12:04:17 raise 12:04:17 12:04:17 # If we're going to release the connection in ``finally:``, then 12:04:17 # the response doesn't need to know about the connection. Otherwise 12:04:17 # it will also try to release it and we'll have a double-release 12:04:17 # mess. 12:04:17 response_conn = conn if not release_conn else None 12:04:17 12:04:17 # Make the request on the HTTPConnection object 12:04:17 > response = self._make_request( 12:04:17 conn, 12:04:17 method, 12:04:17 url, 12:04:17 timeout=timeout_obj, 12:04:17 body=body, 12:04:17 headers=headers, 12:04:17 chunked=chunked, 12:04:17 retries=retries, 12:04:17 response_conn=response_conn, 12:04:17 preload_content=preload_content, 12:04:17 decode_content=decode_content, 12:04:17 **response_kw, 12:04:17 ) 12:04:17 12:04:17 ../.tox/tests221/lib/python3.11/site-packages/urllib3/connectionpool.py:787: 12:04:17 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 12:04:17 ../.tox/tests221/lib/python3.11/site-packages/urllib3/connectionpool.py:493: in _make_request 12:04:17 conn.request( 12:04:17 ../.tox/tests221/lib/python3.11/site-packages/urllib3/connection.py:494: in request 12:04:17 self.endheaders() 12:04:17 /opt/pyenv/versions/3.11.10/lib/python3.11/http/client.py:1298: in endheaders 12:04:17 self._send_output(message_body, encode_chunked=encode_chunked) 12:04:17 /opt/pyenv/versions/3.11.10/lib/python3.11/http/client.py:1058: in _send_output 12:04:17 self.send(msg) 12:04:17 /opt/pyenv/versions/3.11.10/lib/python3.11/http/client.py:996: in send 12:04:17 self.connect() 12:04:17 ../.tox/tests221/lib/python3.11/site-packages/urllib3/connection.py:325: in connect 12:04:17 self.sock = self._new_conn() 12:04:17 ^^^^^^^^^^^^^^^^ 12:04:17 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 12:04:17 12:04:17 self = 12:04:17 12:04:17 def _new_conn(self) -> socket.socket: 12:04:17 """Establish a socket connection and set nodelay settings on it. 12:04:17 12:04:17 :return: New socket connection. 12:04:17 """ 12:04:17 try: 12:04:17 sock = connection.create_connection( 12:04:17 (self._dns_host, self.port), 12:04:17 self.timeout, 12:04:17 source_address=self.source_address, 12:04:17 socket_options=self.socket_options, 12:04:17 ) 12:04:17 except socket.gaierror as e: 12:04:17 raise NameResolutionError(self.host, self, e) from e 12:04:17 except SocketTimeout as e: 12:04:17 raise ConnectTimeoutError( 12:04:17 self, 12:04:17 f"Connection to {self.host} timed out. 
(connect timeout={self.timeout})", 12:04:17 ) from e 12:04:17 12:04:17 except OSError as e: 12:04:17 > raise NewConnectionError( 12:04:17 self, f"Failed to establish a new connection: {e}" 12:04:17 ) from e 12:04:17 E urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused 12:04:17 12:04:17 ../.tox/tests221/lib/python3.11/site-packages/urllib3/connection.py:213: NewConnectionError 12:04:17 12:04:17 The above exception was the direct cause of the following exception: 12:04:17 12:04:17 self = 12:04:17 request = , stream = False 12:04:17 timeout = Timeout(connect=30, read=30, total=None), verify = True, cert = None 12:04:17 proxies = OrderedDict() 12:04:17 12:04:17 def send( 12:04:17 self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 12:04:17 ): 12:04:17 """Sends PreparedRequest object. Returns Response object. 12:04:17 12:04:17 :param request: The :class:`PreparedRequest ` being sent. 12:04:17 :param stream: (optional) Whether to stream the request content. 12:04:17 :param timeout: (optional) How long to wait for the server to send 12:04:17 data before giving up, as a float, or a :ref:`(connect timeout, 12:04:17 read timeout) ` tuple. 12:04:17 :type timeout: float or tuple or urllib3 Timeout object 12:04:17 :param verify: (optional) Either a boolean, in which case it controls whether 12:04:17 we verify the server's TLS certificate, or a string, in which case it 12:04:17 must be a path to a CA bundle to use 12:04:17 :param cert: (optional) Any user-provided SSL certificate to be trusted. 12:04:17 :param proxies: (optional) The proxies dictionary to apply to the request. 12:04:17 :rtype: requests.Response 12:04:17 """ 12:04:17 12:04:17 try: 12:04:17 conn = self.get_connection_with_tls_context( 12:04:17 request, verify, proxies=proxies, cert=cert 12:04:17 ) 12:04:17 except LocationValueError as e: 12:04:17 raise InvalidURL(e, request=request) 12:04:17 12:04:17 self.cert_verify(conn, request.url, verify, cert) 12:04:17 url = self.request_url(request, proxies) 12:04:17 self.add_headers( 12:04:17 request, 12:04:17 stream=stream, 12:04:17 timeout=timeout, 12:04:17 verify=verify, 12:04:17 cert=cert, 12:04:17 proxies=proxies, 12:04:17 ) 12:04:17 12:04:17 chunked = not (request.body is None or "Content-Length" in request.headers) 12:04:17 12:04:17 if isinstance(timeout, tuple): 12:04:17 try: 12:04:17 connect, read = timeout 12:04:17 timeout = TimeoutSauce(connect=connect, read=read) 12:04:17 except ValueError: 12:04:17 raise ValueError( 12:04:17 f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 12:04:17 f"or a single float to set both timeouts to the same value." 
12:04:17 ) 12:04:17 elif isinstance(timeout, TimeoutSauce): 12:04:17 pass 12:04:17 else: 12:04:17 timeout = TimeoutSauce(connect=timeout, read=timeout) 12:04:17 12:04:17 try: 12:04:17 > resp = conn.urlopen( 12:04:17 method=request.method, 12:04:17 url=url, 12:04:17 body=request.body, 12:04:17 headers=request.headers, 12:04:17 redirect=False, 12:04:17 assert_same_host=False, 12:04:17 preload_content=False, 12:04:17 decode_content=False, 12:04:17 retries=self.max_retries, 12:04:17 timeout=timeout, 12:04:17 chunked=chunked, 12:04:17 ) 12:04:17 12:04:17 ../.tox/tests221/lib/python3.11/site-packages/requests/adapters.py:667: 12:04:17 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 12:04:17 ../.tox/tests221/lib/python3.11/site-packages/urllib3/connectionpool.py:841: in urlopen 12:04:17 retries = retries.increment( 12:04:17 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 12:04:17 12:04:17 self = Retry(total=0, connect=None, read=False, redirect=None, status=None) 12:04:17 method = 'GET' 12:04:17 url = '/rests/data/transportpce-portmapping:network/nodes=SPDR-SA1/mapping=XPDR2-NETWORK2' 12:04:17 response = None 12:04:17 error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused') 12:04:17 _pool = 12:04:17 _stacktrace = 12:04:17 12:04:17 def increment( 12:04:17 self, 12:04:17 method: str | None = None, 12:04:17 url: str | None = None, 12:04:17 response: BaseHTTPResponse | None = None, 12:04:17 error: Exception | None = None, 12:04:17 _pool: ConnectionPool | None = None, 12:04:17 _stacktrace: TracebackType | None = None, 12:04:17 ) -> Self: 12:04:17 """Return a new Retry object with incremented retry counters. 12:04:17 12:04:17 :param response: A response object, or None, if the server did not 12:04:17 return a response. 12:04:17 :type response: :class:`~urllib3.response.BaseHTTPResponse` 12:04:17 :param Exception error: An error encountered during the request, or 12:04:17 None if the response was received successfully. 12:04:17 12:04:17 :return: A new ``Retry`` object. 12:04:17 """ 12:04:17 if self.total is False and error: 12:04:17 # Disabled, indicate to re-raise the error. 12:04:17 raise reraise(type(error), error, _stacktrace) 12:04:17 12:04:17 total = self.total 12:04:17 if total is not None: 12:04:17 total -= 1 12:04:17 12:04:17 connect = self.connect 12:04:17 read = self.read 12:04:17 redirect = self.redirect 12:04:17 status_count = self.status 12:04:17 other = self.other 12:04:17 cause = "unknown" 12:04:17 status = None 12:04:17 redirect_location = None 12:04:17 12:04:17 if error and self._is_connection_error(error): 12:04:17 # Connect retry? 12:04:17 if connect is False: 12:04:17 raise reraise(type(error), error, _stacktrace) 12:04:17 elif connect is not None: 12:04:17 connect -= 1 12:04:17 12:04:17 elif error and self._is_read_error(error): 12:04:17 # Read retry? 12:04:17 if read is False or method is None or not self._is_method_retryable(method): 12:04:17 raise reraise(type(error), error, _stacktrace) 12:04:17 elif read is not None: 12:04:17 read -= 1 12:04:17 12:04:17 elif error: 12:04:17 # Other retry? 12:04:17 if other is not None: 12:04:17 other -= 1 12:04:17 12:04:17 elif response and response.get_redirect_location(): 12:04:17 # Redirect retry? 
12:04:17 if redirect is not None: 12:04:17 redirect -= 1 12:04:17 cause = "too many redirects" 12:04:17 response_redirect_location = response.get_redirect_location() 12:04:17 if response_redirect_location: 12:04:17 redirect_location = response_redirect_location 12:04:17 status = response.status 12:04:17 12:04:17 else: 12:04:17 # Incrementing because of a server error like a 500 in 12:04:17 # status_forcelist and the given method is in the allowed_methods 12:04:17 cause = ResponseError.GENERIC_ERROR 12:04:17 if response and response.status: 12:04:17 if status_count is not None: 12:04:17 status_count -= 1 12:04:17 cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status) 12:04:17 status = response.status 12:04:17 12:04:17 history = self.history + ( 12:04:17 RequestHistory(method, url, error, status, redirect_location), 12:04:17 ) 12:04:17 12:04:17 new_retry = self.new( 12:04:17 total=total, 12:04:17 connect=connect, 12:04:17 read=read, 12:04:17 redirect=redirect, 12:04:17 status=status_count, 12:04:17 other=other, 12:04:17 history=history, 12:04:17 ) 12:04:17 12:04:17 if new_retry.is_exhausted(): 12:04:17 reason = error or ResponseError(cause) 12:04:17 > raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type] 12:04:17 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 12:04:17 E urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=8183): Max retries exceeded with url: /rests/data/transportpce-portmapping:network/nodes=SPDR-SA1/mapping=XPDR2-NETWORK2 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 12:04:17 12:04:17 ../.tox/tests221/lib/python3.11/site-packages/urllib3/util/retry.py:519: MaxRetryError 12:04:17 12:04:17 During handling of the above exception, another exception occurred: 12:04:17 12:04:17 self = 12:04:17 12:04:17 def test_25_spdr_portmapping_XPDR2_NETWORK2(self): 12:04:17 > response = test_utils.get_portmapping_node_attr("SPDR-SA1", "mapping", "XPDR2-NETWORK2") 12:04:17 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 12:04:17 12:04:17 transportpce_tests/2.2.1/test01_portmapping.py:311: 12:04:17 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 12:04:17 transportpce_tests/common/test_utils.py:492: in get_portmapping_node_attr 12:04:17 response = get_request(target_url) 12:04:17 ^^^^^^^^^^^^^^^^^^^^^^^ 12:04:17 transportpce_tests/common/test_utils.py:116: in get_request 12:04:17 return requests.request( 12:04:17 ../.tox/tests221/lib/python3.11/site-packages/requests/api.py:59: in request 12:04:17 return session.request(method=method, url=url, **kwargs) 12:04:17 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 12:04:17 ../.tox/tests221/lib/python3.11/site-packages/requests/sessions.py:589: in request 12:04:17 resp = self.send(prep, **send_kwargs) 12:04:17 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 12:04:17 ../.tox/tests221/lib/python3.11/site-packages/requests/sessions.py:703: in send 12:04:17 r = adapter.send(request, **kwargs) 12:04:17 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 12:04:17 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 12:04:17 12:04:17 self = 12:04:17 request = , stream = False 12:04:17 timeout = Timeout(connect=30, read=30, total=None), verify = True, cert = None 12:04:17 proxies = OrderedDict() 12:04:17 12:04:17 def send( 12:04:17 self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 12:04:17 ): 12:04:17 """Sends PreparedRequest object. 
Returns Response object. 12:04:17 12:04:17 :param request: The :class:`PreparedRequest ` being sent. 12:04:17 :param stream: (optional) Whether to stream the request content. 12:04:17 :param timeout: (optional) How long to wait for the server to send 12:04:17 data before giving up, as a float, or a :ref:`(connect timeout, 12:04:17 read timeout) ` tuple. 12:04:17 :type timeout: float or tuple or urllib3 Timeout object 12:04:17 :param verify: (optional) Either a boolean, in which case it controls whether 12:04:17 we verify the server's TLS certificate, or a string, in which case it 12:04:17 must be a path to a CA bundle to use 12:04:17 :param cert: (optional) Any user-provided SSL certificate to be trusted. 12:04:17 :param proxies: (optional) The proxies dictionary to apply to the request. 12:04:17 :rtype: requests.Response 12:04:17 """ 12:04:17 12:04:17 try: 12:04:17 conn = self.get_connection_with_tls_context( 12:04:17 request, verify, proxies=proxies, cert=cert 12:04:17 ) 12:04:17 except LocationValueError as e: 12:04:17 raise InvalidURL(e, request=request) 12:04:17 12:04:17 self.cert_verify(conn, request.url, verify, cert) 12:04:17 url = self.request_url(request, proxies) 12:04:17 self.add_headers( 12:04:17 request, 12:04:17 stream=stream, 12:04:17 timeout=timeout, 12:04:17 verify=verify, 12:04:17 cert=cert, 12:04:17 proxies=proxies, 12:04:17 ) 12:04:17 12:04:17 chunked = not (request.body is None or "Content-Length" in request.headers) 12:04:17 12:04:17 if isinstance(timeout, tuple): 12:04:17 try: 12:04:17 connect, read = timeout 12:04:17 timeout = TimeoutSauce(connect=connect, read=read) 12:04:17 except ValueError: 12:04:17 raise ValueError( 12:04:17 f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 12:04:17 f"or a single float to set both timeouts to the same value." 12:04:17 ) 12:04:17 elif isinstance(timeout, TimeoutSauce): 12:04:17 pass 12:04:17 else: 12:04:17 timeout = TimeoutSauce(connect=timeout, read=timeout) 12:04:17 12:04:17 try: 12:04:17 resp = conn.urlopen( 12:04:17 method=request.method, 12:04:17 url=url, 12:04:17 body=request.body, 12:04:17 headers=request.headers, 12:04:17 redirect=False, 12:04:17 assert_same_host=False, 12:04:17 preload_content=False, 12:04:17 decode_content=False, 12:04:17 retries=self.max_retries, 12:04:17 timeout=timeout, 12:04:17 chunked=chunked, 12:04:17 ) 12:04:17 12:04:17 except (ProtocolError, OSError) as err: 12:04:17 raise ConnectionError(err, request=request) 12:04:17 12:04:17 except MaxRetryError as e: 12:04:17 if isinstance(e.reason, ConnectTimeoutError): 12:04:17 # TODO: Remove this in 3.0.0: see #2811 12:04:17 if not isinstance(e.reason, NewConnectionError): 12:04:17 raise ConnectTimeout(e, request=request) 12:04:17 12:04:17 if isinstance(e.reason, ResponseError): 12:04:17 raise RetryError(e, request=request) 12:04:17 12:04:17 if isinstance(e.reason, _ProxyError): 12:04:17 raise ProxyError(e, request=request) 12:04:17 12:04:17 if isinstance(e.reason, _SSLError): 12:04:17 # This branch is for urllib3 v1.22 and later. 
12:04:17 raise SSLError(e, request=request) 12:04:17 12:04:17 > raise ConnectionError(e, request=request) 12:04:17 E requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=8183): Max retries exceeded with url: /rests/data/transportpce-portmapping:network/nodes=SPDR-SA1/mapping=XPDR2-NETWORK2 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 12:04:17 12:04:17 ../.tox/tests221/lib/python3.11/site-packages/requests/adapters.py:700: ConnectionError 12:04:17 ----------------------------- Captured stdout call ----------------------------- 12:04:17 execution of test_25_spdr_portmapping_XPDR2_NETWORK2 12:04:17 ____ TransportPCEPortMappingTesting.test_26_spdr_portmapping_XPDR3_CLIENT3 _____ 12:04:17 12:04:17 self = 12:04:17 12:04:17 def _new_conn(self) -> socket.socket: 12:04:17 """Establish a socket connection and set nodelay settings on it. 12:04:17 12:04:17 :return: New socket connection. 12:04:17 """ 12:04:17 try: 12:04:17 > sock = connection.create_connection( 12:04:17 (self._dns_host, self.port), 12:04:17 self.timeout, 12:04:17 source_address=self.source_address, 12:04:17 socket_options=self.socket_options, 12:04:17 ) 12:04:17 12:04:17 ../.tox/tests221/lib/python3.11/site-packages/urllib3/connection.py:198: 12:04:17 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 12:04:17 ../.tox/tests221/lib/python3.11/site-packages/urllib3/util/connection.py:85: in create_connection 12:04:17 raise err 12:04:17 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 12:04:17 12:04:17 address = ('localhost', 8183), timeout = 30, source_address = None 12:04:17 socket_options = [(6, 1, 1)] 12:04:17 12:04:17 def create_connection( 12:04:17 address: tuple[str, int], 12:04:17 timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 12:04:17 source_address: tuple[str, int] | None = None, 12:04:17 socket_options: _TYPE_SOCKET_OPTIONS | None = None, 12:04:17 ) -> socket.socket: 12:04:17 """Connect to *address* and return the socket object. 12:04:17 12:04:17 Convenience function. Connect to *address* (a 2-tuple ``(host, 12:04:17 port)``) and return the socket object. Passing the optional 12:04:17 *timeout* parameter will set the timeout on the socket instance 12:04:17 before attempting to connect. If no *timeout* is supplied, the 12:04:17 global default timeout setting returned by :func:`socket.getdefaulttimeout` 12:04:17 is used. If *source_address* is set it must be a tuple of (host, port) 12:04:17 for the socket to bind as a source address before making the connection. 12:04:17 An host of '' or port 0 tells the OS to use the default. 12:04:17 """ 12:04:17 12:04:17 host, port = address 12:04:17 if host.startswith("["): 12:04:17 host = host.strip("[]") 12:04:17 err = None 12:04:17 12:04:17 # Using the value from allowed_gai_family() in the context of getaddrinfo lets 12:04:17 # us select whether to work with IPv4 DNS records, IPv6 records, or both. 12:04:17 # The original create_connection function always returns all records. 
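From this point on, every remaining portmapping test fails identically because nothing answers on port 8183 any more. A hypothetical guard such as wait_for_controller() below (not part of test_utils) would make that condition explicit before issuing further requests:

    import socket
    import time

    def wait_for_controller(host="localhost", port=8183, deadline_s=60.0, interval_s=2.0):
        """Poll until a TCP connect to host:port succeeds or the deadline passes."""
        deadline = time.monotonic() + deadline_s
        while time.monotonic() < deadline:
            try:
                with socket.create_connection((host, port), timeout=interval_s):
                    return True   # something is listening again
            except OSError:
                time.sleep(interval_s)  # refused or unreachable; try again
        return False

    if not wait_for_controller():
        raise RuntimeError("RESTCONF endpoint on localhost:8183 never came back up")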
12:04:17 family = allowed_gai_family() 12:04:17 12:04:17 try: 12:04:17 host.encode("idna") 12:04:17 except UnicodeError: 12:04:17 raise LocationParseError(f"'{host}', label empty or too long") from None 12:04:17 12:04:17 for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM): 12:04:17 af, socktype, proto, canonname, sa = res 12:04:17 sock = None 12:04:17 try: 12:04:17 sock = socket.socket(af, socktype, proto) 12:04:17 12:04:17 # If provided, set socket level options before connecting. 12:04:17 _set_socket_options(sock, socket_options) 12:04:17 12:04:17 if timeout is not _DEFAULT_TIMEOUT: 12:04:17 sock.settimeout(timeout) 12:04:17 if source_address: 12:04:17 sock.bind(source_address) 12:04:17 > sock.connect(sa) 12:04:17 E ConnectionRefusedError: [Errno 111] Connection refused 12:04:17 12:04:17 ../.tox/tests221/lib/python3.11/site-packages/urllib3/util/connection.py:73: ConnectionRefusedError 12:04:17 12:04:17 The above exception was the direct cause of the following exception: 12:04:17 12:04:17 self = 12:04:17 method = 'GET' 12:04:17 url = '/rests/data/transportpce-portmapping:network/nodes=SPDR-SA1/mapping=XPDR3-CLIENT3' 12:04:17 body = None 12:04:17 headers = {'User-Agent': 'python-requests/2.32.4', 'Accept-Encoding': 'gzip, deflate', 'Accept': 'application/json', 'Connection': 'keep-alive', 'Content-Type': 'application/json', 'Authorization': 'Basic YWRtaW46YWRtaW4='} 12:04:17 retries = Retry(total=0, connect=None, read=False, redirect=None, status=None) 12:04:17 redirect = False, assert_same_host = False 12:04:17 timeout = Timeout(connect=30, read=30, total=None), pool_timeout = None 12:04:17 release_conn = False, chunked = False, body_pos = None, preload_content = False 12:04:17 decode_content = False, response_kw = {} 12:04:17 parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/rests/data/transportpce-portmapping:network/nodes=SPDR-SA1/mapping=XPDR3-CLIENT3', query=None, fragment=None) 12:04:17 destination_scheme = None, conn = None, release_this_conn = True 12:04:17 http_tunnel_required = False, err = None, clean_exit = False 12:04:17 12:04:17 def urlopen( # type: ignore[override] 12:04:17 self, 12:04:17 method: str, 12:04:17 url: str, 12:04:17 body: _TYPE_BODY | None = None, 12:04:17 headers: typing.Mapping[str, str] | None = None, 12:04:17 retries: Retry | bool | int | None = None, 12:04:17 redirect: bool = True, 12:04:17 assert_same_host: bool = True, 12:04:17 timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 12:04:17 pool_timeout: int | None = None, 12:04:17 release_conn: bool | None = None, 12:04:17 chunked: bool = False, 12:04:17 body_pos: _TYPE_BODY_POSITION | None = None, 12:04:17 preload_content: bool = True, 12:04:17 decode_content: bool = True, 12:04:17 **response_kw: typing.Any, 12:04:17 ) -> BaseHTTPResponse: 12:04:17 """ 12:04:17 Get a connection from the pool and perform an HTTP request. This is the 12:04:17 lowest level call for making a request, so you'll need to specify all 12:04:17 the raw details. 12:04:17 12:04:17 .. note:: 12:04:17 12:04:17 More commonly, it's appropriate to use a convenience method 12:04:17 such as :meth:`request`. 12:04:17 12:04:17 .. note:: 12:04:17 12:04:17 `release_conn` will only behave as expected if 12:04:17 `preload_content=False` because we want to make 12:04:17 `preload_content=False` the default behaviour someday soon without 12:04:17 breaking backwards compatibility. 12:04:17 12:04:17 :param method: 12:04:17 HTTP request method (such as GET, POST, PUT, etc.) 
12:04:17 12:04:17 :param url: 12:04:17 The URL to perform the request on. 12:04:17 12:04:17 :param body: 12:04:17 Data to send in the request body, either :class:`str`, :class:`bytes`, 12:04:17 an iterable of :class:`str`/:class:`bytes`, or a file-like object. 12:04:17 12:04:17 :param headers: 12:04:17 Dictionary of custom headers to send, such as User-Agent, 12:04:17 If-None-Match, etc. If None, pool headers are used. If provided, 12:04:17 these headers completely replace any pool-specific headers. 12:04:17 12:04:17 :param retries: 12:04:17 Configure the number of retries to allow before raising a 12:04:17 :class:`~urllib3.exceptions.MaxRetryError` exception. 12:04:17 12:04:17 If ``None`` (default) will retry 3 times, see ``Retry.DEFAULT``. Pass a 12:04:17 :class:`~urllib3.util.retry.Retry` object for fine-grained control 12:04:17 over different types of retries. 12:04:17 Pass an integer number to retry connection errors that many times, 12:04:17 but no other types of errors. Pass zero to never retry. 12:04:17 12:04:17 If ``False``, then retries are disabled and any exception is raised 12:04:17 immediately. Also, instead of raising a MaxRetryError on redirects, 12:04:17 the redirect response will be returned. 12:04:17 12:04:17 :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. 12:04:17 12:04:17 :param redirect: 12:04:17 If True, automatically handle redirects (status codes 301, 302, 12:04:17 303, 307, 308). Each redirect counts as a retry. Disabling retries 12:04:17 will disable redirect, too. 12:04:17 12:04:17 :param assert_same_host: 12:04:17 If ``True``, will make sure that the host of the pool requests is 12:04:17 consistent else will raise HostChangedError. When ``False``, you can 12:04:17 use the pool on an HTTP proxy and request foreign hosts. 12:04:17 12:04:17 :param timeout: 12:04:17 If specified, overrides the default timeout for this one 12:04:17 request. It may be a float (in seconds) or an instance of 12:04:17 :class:`urllib3.util.Timeout`. 12:04:17 12:04:17 :param pool_timeout: 12:04:17 If set and the pool is set to block=True, then this method will 12:04:17 block for ``pool_timeout`` seconds and raise EmptyPoolError if no 12:04:17 connection is available within the time period. 12:04:17 12:04:17 :param bool preload_content: 12:04:17 If True, the response's body will be preloaded into memory. 12:04:17 12:04:17 :param bool decode_content: 12:04:17 If True, will attempt to decode the body based on the 12:04:17 'content-encoding' header. 12:04:17 12:04:17 :param release_conn: 12:04:17 If False, then the urlopen call will not release the connection 12:04:17 back into the pool once a response is received (but will release if 12:04:17 you read the entire contents of the response such as when 12:04:17 `preload_content=True`). This is useful if you're not preloading 12:04:17 the response's content immediately. You will need to call 12:04:17 ``r.release_conn()`` on the response ``r`` to return the connection 12:04:17 back into the pool. If None, it takes the value of ``preload_content`` 12:04:17 which defaults to ``True``. 12:04:17 12:04:17 :param bool chunked: 12:04:17 If True, urllib3 will send the body using chunked transfer 12:04:17 encoding. Otherwise, urllib3 will send the body using the standard 12:04:17 content-length form. Defaults to False. 12:04:17 12:04:17 :param int body_pos: 12:04:17 Position to seek to in file-like body in the event of a retry or 12:04:17 redirect. 
Typically this won't need to be set because urllib3 will 12:04:17 auto-populate the value when needed. 12:04:17 """ 12:04:17 parsed_url = parse_url(url) 12:04:17 destination_scheme = parsed_url.scheme 12:04:17 12:04:17 if headers is None: 12:04:17 headers = self.headers 12:04:17 12:04:17 if not isinstance(retries, Retry): 12:04:17 retries = Retry.from_int(retries, redirect=redirect, default=self.retries) 12:04:17 12:04:17 if release_conn is None: 12:04:17 release_conn = preload_content 12:04:17 12:04:17 # Check host 12:04:17 if assert_same_host and not self.is_same_host(url): 12:04:17 raise HostChangedError(self, url, retries) 12:04:17 12:04:17 # Ensure that the URL we're connecting to is properly encoded 12:04:17 if url.startswith("/"): 12:04:17 url = to_str(_encode_target(url)) 12:04:17 else: 12:04:17 url = to_str(parsed_url.url) 12:04:17 12:04:17 conn = None 12:04:17 12:04:17 # Track whether `conn` needs to be released before 12:04:17 # returning/raising/recursing. Update this variable if necessary, and 12:04:17 # leave `release_conn` constant throughout the function. That way, if 12:04:17 # the function recurses, the original value of `release_conn` will be 12:04:17 # passed down into the recursive call, and its value will be respected. 12:04:17 # 12:04:17 # See issue #651 [1] for details. 12:04:17 # 12:04:17 # [1] 12:04:17 release_this_conn = release_conn 12:04:17 12:04:17 http_tunnel_required = connection_requires_http_tunnel( 12:04:17 self.proxy, self.proxy_config, destination_scheme 12:04:17 ) 12:04:17 12:04:17 # Merge the proxy headers. Only done when not using HTTP CONNECT. We 12:04:17 # have to copy the headers dict so we can safely change it without those 12:04:17 # changes being reflected in anyone else's copy. 12:04:17 if not http_tunnel_required: 12:04:17 headers = headers.copy() # type: ignore[attr-defined] 12:04:17 headers.update(self.proxy_headers) # type: ignore[union-attr] 12:04:17 12:04:17 # Must keep the exception bound to a separate variable or else Python 3 12:04:17 # complains about UnboundLocalError. 12:04:17 err = None 12:04:17 12:04:17 # Keep track of whether we cleanly exited the except block. This 12:04:17 # ensures we do proper cleanup in finally. 12:04:17 clean_exit = False 12:04:17 12:04:17 # Rewind body position, if needed. Record current position 12:04:17 # for future rewinds in the event of a redirect/retry. 12:04:17 body_pos = set_file_position(body, body_pos) 12:04:17 12:04:17 try: 12:04:17 # Request a connection from the queue. 12:04:17 timeout_obj = self._get_timeout(timeout) 12:04:17 conn = self._get_conn(timeout=pool_timeout) 12:04:17 12:04:17 conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] 12:04:17 12:04:17 # Is this a closed/new connection that requires CONNECT tunnelling? 12:04:17 if self.proxy is not None and http_tunnel_required and conn.is_closed: 12:04:17 try: 12:04:17 self._prepare_proxy(conn) 12:04:17 except (BaseSSLError, OSError, SocketTimeout) as e: 12:04:17 self._raise_timeout( 12:04:17 err=e, url=self.proxy.url, timeout_value=conn.timeout 12:04:17 ) 12:04:17 raise 12:04:17 12:04:17 # If we're going to release the connection in ``finally:``, then 12:04:17 # the response doesn't need to know about the connection. Otherwise 12:04:17 # it will also try to release it and we'll have a double-release 12:04:17 # mess. 
12:04:17 response_conn = conn if not release_conn else None 12:04:17 12:04:17 # Make the request on the HTTPConnection object 12:04:17 > response = self._make_request( 12:04:17 conn, 12:04:17 method, 12:04:17 url, 12:04:17 timeout=timeout_obj, 12:04:17 body=body, 12:04:17 headers=headers, 12:04:17 chunked=chunked, 12:04:17 retries=retries, 12:04:17 response_conn=response_conn, 12:04:17 preload_content=preload_content, 12:04:17 decode_content=decode_content, 12:04:17 **response_kw, 12:04:17 ) 12:04:17 12:04:17 ../.tox/tests221/lib/python3.11/site-packages/urllib3/connectionpool.py:787: 12:04:17 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 12:04:17 ../.tox/tests221/lib/python3.11/site-packages/urllib3/connectionpool.py:493: in _make_request 12:04:17 conn.request( 12:04:17 ../.tox/tests221/lib/python3.11/site-packages/urllib3/connection.py:494: in request 12:04:17 self.endheaders() 12:04:17 /opt/pyenv/versions/3.11.10/lib/python3.11/http/client.py:1298: in endheaders 12:04:17 self._send_output(message_body, encode_chunked=encode_chunked) 12:04:17 /opt/pyenv/versions/3.11.10/lib/python3.11/http/client.py:1058: in _send_output 12:04:17 self.send(msg) 12:04:17 /opt/pyenv/versions/3.11.10/lib/python3.11/http/client.py:996: in send 12:04:17 self.connect() 12:04:17 ../.tox/tests221/lib/python3.11/site-packages/urllib3/connection.py:325: in connect 12:04:17 self.sock = self._new_conn() 12:04:17 ^^^^^^^^^^^^^^^^ 12:04:17 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 12:04:17 12:04:17 self = 12:04:17 12:04:17 def _new_conn(self) -> socket.socket: 12:04:17 """Establish a socket connection and set nodelay settings on it. 12:04:17 12:04:17 :return: New socket connection. 12:04:17 """ 12:04:17 try: 12:04:17 sock = connection.create_connection( 12:04:17 (self._dns_host, self.port), 12:04:17 self.timeout, 12:04:17 source_address=self.source_address, 12:04:17 socket_options=self.socket_options, 12:04:17 ) 12:04:17 except socket.gaierror as e: 12:04:17 raise NameResolutionError(self.host, self, e) from e 12:04:17 except SocketTimeout as e: 12:04:17 raise ConnectTimeoutError( 12:04:17 self, 12:04:17 f"Connection to {self.host} timed out. (connect timeout={self.timeout})", 12:04:17 ) from e 12:04:17 12:04:17 except OSError as e: 12:04:17 > raise NewConnectionError( 12:04:17 self, f"Failed to establish a new connection: {e}" 12:04:17 ) from e 12:04:17 E urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused 12:04:17 12:04:17 ../.tox/tests221/lib/python3.11/site-packages/urllib3/connection.py:213: NewConnectionError 12:04:17 12:04:17 The above exception was the direct cause of the following exception: 12:04:17 12:04:17 self = 12:04:17 request = , stream = False 12:04:17 timeout = Timeout(connect=30, read=30, total=None), verify = True, cert = None 12:04:17 proxies = OrderedDict() 12:04:17 12:04:17 def send( 12:04:17 self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 12:04:17 ): 12:04:17 """Sends PreparedRequest object. Returns Response object. 12:04:17 12:04:17 :param request: The :class:`PreparedRequest ` being sent. 12:04:17 :param stream: (optional) Whether to stream the request content. 12:04:17 :param timeout: (optional) How long to wait for the server to send 12:04:17 data before giving up, as a float, or a :ref:`(connect timeout, 12:04:17 read timeout) ` tuple. 
12:04:17 :type timeout: float or tuple or urllib3 Timeout object 12:04:17 :param verify: (optional) Either a boolean, in which case it controls whether 12:04:17 we verify the server's TLS certificate, or a string, in which case it 12:04:17 must be a path to a CA bundle to use 12:04:17 :param cert: (optional) Any user-provided SSL certificate to be trusted. 12:04:17 :param proxies: (optional) The proxies dictionary to apply to the request. 12:04:17 :rtype: requests.Response 12:04:17 """ 12:04:17 12:04:17 try: 12:04:17 conn = self.get_connection_with_tls_context( 12:04:17 request, verify, proxies=proxies, cert=cert 12:04:17 ) 12:04:17 except LocationValueError as e: 12:04:17 raise InvalidURL(e, request=request) 12:04:17 12:04:17 self.cert_verify(conn, request.url, verify, cert) 12:04:17 url = self.request_url(request, proxies) 12:04:17 self.add_headers( 12:04:17 request, 12:04:17 stream=stream, 12:04:17 timeout=timeout, 12:04:17 verify=verify, 12:04:17 cert=cert, 12:04:17 proxies=proxies, 12:04:17 ) 12:04:17 12:04:17 chunked = not (request.body is None or "Content-Length" in request.headers) 12:04:17 12:04:17 if isinstance(timeout, tuple): 12:04:17 try: 12:04:17 connect, read = timeout 12:04:17 timeout = TimeoutSauce(connect=connect, read=read) 12:04:17 except ValueError: 12:04:17 raise ValueError( 12:04:17 f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 12:04:17 f"or a single float to set both timeouts to the same value." 12:04:17 ) 12:04:17 elif isinstance(timeout, TimeoutSauce): 12:04:17 pass 12:04:17 else: 12:04:17 timeout = TimeoutSauce(connect=timeout, read=timeout) 12:04:17 12:04:17 try: 12:04:17 > resp = conn.urlopen( 12:04:17 method=request.method, 12:04:17 url=url, 12:04:17 body=request.body, 12:04:17 headers=request.headers, 12:04:17 redirect=False, 12:04:17 assert_same_host=False, 12:04:17 preload_content=False, 12:04:17 decode_content=False, 12:04:17 retries=self.max_retries, 12:04:17 timeout=timeout, 12:04:17 chunked=chunked, 12:04:17 ) 12:04:17 12:04:17 ../.tox/tests221/lib/python3.11/site-packages/requests/adapters.py:667: 12:04:17 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 12:04:17 ../.tox/tests221/lib/python3.11/site-packages/urllib3/connectionpool.py:841: in urlopen 12:04:17 retries = retries.increment( 12:04:17 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 12:04:17 12:04:17 self = Retry(total=0, connect=None, read=False, redirect=None, status=None) 12:04:17 method = 'GET' 12:04:17 url = '/rests/data/transportpce-portmapping:network/nodes=SPDR-SA1/mapping=XPDR3-CLIENT3' 12:04:17 response = None 12:04:17 error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused') 12:04:17 _pool = 12:04:17 _stacktrace = 12:04:17 12:04:17 def increment( 12:04:17 self, 12:04:17 method: str | None = None, 12:04:17 url: str | None = None, 12:04:17 response: BaseHTTPResponse | None = None, 12:04:17 error: Exception | None = None, 12:04:17 _pool: ConnectionPool | None = None, 12:04:17 _stacktrace: TracebackType | None = None, 12:04:17 ) -> Self: 12:04:17 """Return a new Retry object with incremented retry counters. 12:04:17 12:04:17 :param response: A response object, or None, if the server did not 12:04:17 return a response. 12:04:17 :type response: :class:`~urllib3.response.BaseHTTPResponse` 12:04:17 :param Exception error: An error encountered during the request, or 12:04:17 None if the response was received successfully. 
12:04:17 12:04:17 :return: A new ``Retry`` object. 12:04:17 """ 12:04:17 if self.total is False and error: 12:04:17 # Disabled, indicate to re-raise the error. 12:04:17 raise reraise(type(error), error, _stacktrace) 12:04:17 12:04:17 total = self.total 12:04:17 if total is not None: 12:04:17 total -= 1 12:04:17 12:04:17 connect = self.connect 12:04:17 read = self.read 12:04:17 redirect = self.redirect 12:04:17 status_count = self.status 12:04:17 other = self.other 12:04:17 cause = "unknown" 12:04:17 status = None 12:04:17 redirect_location = None 12:04:17 12:04:17 if error and self._is_connection_error(error): 12:04:17 # Connect retry? 12:04:17 if connect is False: 12:04:17 raise reraise(type(error), error, _stacktrace) 12:04:17 elif connect is not None: 12:04:17 connect -= 1 12:04:17 12:04:17 elif error and self._is_read_error(error): 12:04:17 # Read retry? 12:04:17 if read is False or method is None or not self._is_method_retryable(method): 12:04:17 raise reraise(type(error), error, _stacktrace) 12:04:17 elif read is not None: 12:04:17 read -= 1 12:04:17 12:04:17 elif error: 12:04:17 # Other retry? 12:04:17 if other is not None: 12:04:17 other -= 1 12:04:17 12:04:17 elif response and response.get_redirect_location(): 12:04:17 # Redirect retry? 12:04:17 if redirect is not None: 12:04:17 redirect -= 1 12:04:17 cause = "too many redirects" 12:04:17 response_redirect_location = response.get_redirect_location() 12:04:17 if response_redirect_location: 12:04:17 redirect_location = response_redirect_location 12:04:17 status = response.status 12:04:17 12:04:17 else: 12:04:17 # Incrementing because of a server error like a 500 in 12:04:17 # status_forcelist and the given method is in the allowed_methods 12:04:17 cause = ResponseError.GENERIC_ERROR 12:04:17 if response and response.status: 12:04:17 if status_count is not None: 12:04:17 status_count -= 1 12:04:17 cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status) 12:04:17 status = response.status 12:04:17 12:04:17 history = self.history + ( 12:04:17 RequestHistory(method, url, error, status, redirect_location), 12:04:17 ) 12:04:17 12:04:17 new_retry = self.new( 12:04:17 total=total, 12:04:17 connect=connect, 12:04:17 read=read, 12:04:17 redirect=redirect, 12:04:17 status=status_count, 12:04:17 other=other, 12:04:17 history=history, 12:04:17 ) 12:04:17 12:04:17 if new_retry.is_exhausted(): 12:04:17 reason = error or ResponseError(cause) 12:04:17 > raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type] 12:04:17 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 12:04:17 E urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=8183): Max retries exceeded with url: /rests/data/transportpce-portmapping:network/nodes=SPDR-SA1/mapping=XPDR3-CLIENT3 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 12:04:17 12:04:17 ../.tox/tests221/lib/python3.11/site-packages/urllib3/util/retry.py:519: MaxRetryError 12:04:17 12:04:17 During handling of the above exception, another exception occurred: 12:04:17 12:04:17 self = 12:04:17 12:04:17 def test_26_spdr_portmapping_XPDR3_CLIENT3(self): 12:04:17 > response = test_utils.get_portmapping_node_attr("SPDR-SA1", "mapping", "XPDR3-CLIENT3") 12:04:17 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 12:04:17 12:04:17 transportpce_tests/2.2.1/test01_portmapping.py:324: 12:04:17 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 12:04:17 
transportpce_tests/common/test_utils.py:492: in get_portmapping_node_attr 12:04:17 response = get_request(target_url) 12:04:17 ^^^^^^^^^^^^^^^^^^^^^^^ 12:04:17 transportpce_tests/common/test_utils.py:116: in get_request 12:04:17 return requests.request( 12:04:17 ../.tox/tests221/lib/python3.11/site-packages/requests/api.py:59: in request 12:04:17 return session.request(method=method, url=url, **kwargs) 12:04:17 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 12:04:17 ../.tox/tests221/lib/python3.11/site-packages/requests/sessions.py:589: in request 12:04:17 resp = self.send(prep, **send_kwargs) 12:04:17 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 12:04:17 ../.tox/tests221/lib/python3.11/site-packages/requests/sessions.py:703: in send 12:04:17 r = adapter.send(request, **kwargs) 12:04:17 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 12:04:17 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 12:04:17 12:04:17 self = 12:04:17 request = , stream = False 12:04:17 timeout = Timeout(connect=30, read=30, total=None), verify = True, cert = None 12:04:17 proxies = OrderedDict() 12:04:17 12:04:17 def send( 12:04:17 self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 12:04:17 ): 12:04:17 """Sends PreparedRequest object. Returns Response object. 12:04:17 12:04:17 :param request: The :class:`PreparedRequest ` being sent. 12:04:17 :param stream: (optional) Whether to stream the request content. 12:04:17 :param timeout: (optional) How long to wait for the server to send 12:04:17 data before giving up, as a float, or a :ref:`(connect timeout, 12:04:17 read timeout) ` tuple. 12:04:17 :type timeout: float or tuple or urllib3 Timeout object 12:04:17 :param verify: (optional) Either a boolean, in which case it controls whether 12:04:17 we verify the server's TLS certificate, or a string, in which case it 12:04:17 must be a path to a CA bundle to use 12:04:17 :param cert: (optional) Any user-provided SSL certificate to be trusted. 12:04:17 :param proxies: (optional) The proxies dictionary to apply to the request. 12:04:17 :rtype: requests.Response 12:04:17 """ 12:04:17 12:04:17 try: 12:04:17 conn = self.get_connection_with_tls_context( 12:04:17 request, verify, proxies=proxies, cert=cert 12:04:17 ) 12:04:17 except LocationValueError as e: 12:04:17 raise InvalidURL(e, request=request) 12:04:17 12:04:17 self.cert_verify(conn, request.url, verify, cert) 12:04:17 url = self.request_url(request, proxies) 12:04:17 self.add_headers( 12:04:17 request, 12:04:17 stream=stream, 12:04:17 timeout=timeout, 12:04:17 verify=verify, 12:04:17 cert=cert, 12:04:17 proxies=proxies, 12:04:17 ) 12:04:17 12:04:17 chunked = not (request.body is None or "Content-Length" in request.headers) 12:04:17 12:04:17 if isinstance(timeout, tuple): 12:04:17 try: 12:04:17 connect, read = timeout 12:04:17 timeout = TimeoutSauce(connect=connect, read=read) 12:04:17 except ValueError: 12:04:17 raise ValueError( 12:04:17 f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 12:04:17 f"or a single float to set both timeouts to the same value." 
12:04:17 ) 12:04:17 elif isinstance(timeout, TimeoutSauce): 12:04:17 pass 12:04:17 else: 12:04:17 timeout = TimeoutSauce(connect=timeout, read=timeout) 12:04:17 12:04:17 try: 12:04:17 resp = conn.urlopen( 12:04:17 method=request.method, 12:04:17 url=url, 12:04:17 body=request.body, 12:04:17 headers=request.headers, 12:04:17 redirect=False, 12:04:17 assert_same_host=False, 12:04:17 preload_content=False, 12:04:17 decode_content=False, 12:04:17 retries=self.max_retries, 12:04:17 timeout=timeout, 12:04:17 chunked=chunked, 12:04:17 ) 12:04:17 12:04:17 except (ProtocolError, OSError) as err: 12:04:17 raise ConnectionError(err, request=request) 12:04:17 12:04:17 except MaxRetryError as e: 12:04:17 if isinstance(e.reason, ConnectTimeoutError): 12:04:17 # TODO: Remove this in 3.0.0: see #2811 12:04:17 if not isinstance(e.reason, NewConnectionError): 12:04:17 raise ConnectTimeout(e, request=request) 12:04:17 12:04:17 if isinstance(e.reason, ResponseError): 12:04:17 raise RetryError(e, request=request) 12:04:17 12:04:17 if isinstance(e.reason, _ProxyError): 12:04:17 raise ProxyError(e, request=request) 12:04:17 12:04:17 if isinstance(e.reason, _SSLError): 12:04:17 # This branch is for urllib3 v1.22 and later. 12:04:17 raise SSLError(e, request=request) 12:04:17 12:04:17 > raise ConnectionError(e, request=request) 12:04:17 E requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=8183): Max retries exceeded with url: /rests/data/transportpce-portmapping:network/nodes=SPDR-SA1/mapping=XPDR3-CLIENT3 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 12:04:17 12:04:17 ../.tox/tests221/lib/python3.11/site-packages/requests/adapters.py:700: ConnectionError 12:04:17 ----------------------------- Captured stdout call ----------------------------- 12:04:17 execution of test_26_spdr_portmapping_XPDR3_CLIENT3 12:04:17 ____ TransportPCEPortMappingTesting.test_27_spdr_portmapping_XPDR3_NETWORK1 ____ 12:04:17 12:04:17 self = 12:04:17 12:04:17 def _new_conn(self) -> socket.socket: 12:04:17 """Establish a socket connection and set nodelay settings on it. 12:04:17 12:04:17 :return: New socket connection. 12:04:17 """ 12:04:17 try: 12:04:17 > sock = connection.create_connection( 12:04:17 (self._dns_host, self.port), 12:04:17 self.timeout, 12:04:17 source_address=self.source_address, 12:04:17 socket_options=self.socket_options, 12:04:17 ) 12:04:17 12:04:17 ../.tox/tests221/lib/python3.11/site-packages/urllib3/connection.py:198: 12:04:17 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 12:04:17 ../.tox/tests221/lib/python3.11/site-packages/urllib3/util/connection.py:85: in create_connection 12:04:17 raise err 12:04:17 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 12:04:17 12:04:17 address = ('localhost', 8183), timeout = 30, source_address = None 12:04:17 socket_options = [(6, 1, 1)] 12:04:17 12:04:17 def create_connection( 12:04:17 address: tuple[str, int], 12:04:17 timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 12:04:17 source_address: tuple[str, int] | None = None, 12:04:17 socket_options: _TYPE_SOCKET_OPTIONS | None = None, 12:04:17 ) -> socket.socket: 12:04:17 """Connect to *address* and return the socket object. 12:04:17 12:04:17 Convenience function. Connect to *address* (a 2-tuple ``(host, 12:04:17 port)``) and return the socket object. Passing the optional 12:04:17 *timeout* parameter will set the timeout on the socket instance 12:04:17 before attempting to connect. 
If no *timeout* is supplied, the 12:04:17 global default timeout setting returned by :func:`socket.getdefaulttimeout` 12:04:17 is used. If *source_address* is set it must be a tuple of (host, port) 12:04:17 for the socket to bind as a source address before making the connection. 12:04:17 An host of '' or port 0 tells the OS to use the default. 12:04:17 """ 12:04:17 12:04:17 host, port = address 12:04:17 if host.startswith("["): 12:04:17 host = host.strip("[]") 12:04:17 err = None 12:04:17 12:04:17 # Using the value from allowed_gai_family() in the context of getaddrinfo lets 12:04:17 # us select whether to work with IPv4 DNS records, IPv6 records, or both. 12:04:17 # The original create_connection function always returns all records. 12:04:17 family = allowed_gai_family() 12:04:17 12:04:17 try: 12:04:17 host.encode("idna") 12:04:17 except UnicodeError: 12:04:17 raise LocationParseError(f"'{host}', label empty or too long") from None 12:04:17 12:04:17 for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM): 12:04:17 af, socktype, proto, canonname, sa = res 12:04:17 sock = None 12:04:17 try: 12:04:17 sock = socket.socket(af, socktype, proto) 12:04:17 12:04:17 # If provided, set socket level options before connecting. 12:04:17 _set_socket_options(sock, socket_options) 12:04:17 12:04:17 if timeout is not _DEFAULT_TIMEOUT: 12:04:17 sock.settimeout(timeout) 12:04:17 if source_address: 12:04:17 sock.bind(source_address) 12:04:17 > sock.connect(sa) 12:04:17 E ConnectionRefusedError: [Errno 111] Connection refused 12:04:17 12:04:17 ../.tox/tests221/lib/python3.11/site-packages/urllib3/util/connection.py:73: ConnectionRefusedError 12:04:17 12:04:17 The above exception was the direct cause of the following exception: 12:04:17 12:04:17 self = 12:04:17 method = 'GET' 12:04:17 url = '/rests/data/transportpce-portmapping:network/nodes=SPDR-SA1/mapping=XPDR3-NETWORK1' 12:04:17 body = None 12:04:17 headers = {'User-Agent': 'python-requests/2.32.4', 'Accept-Encoding': 'gzip, deflate', 'Accept': 'application/json', 'Connection': 'keep-alive', 'Content-Type': 'application/json', 'Authorization': 'Basic YWRtaW46YWRtaW4='} 12:04:17 retries = Retry(total=0, connect=None, read=False, redirect=None, status=None) 12:04:17 redirect = False, assert_same_host = False 12:04:17 timeout = Timeout(connect=30, read=30, total=None), pool_timeout = None 12:04:17 release_conn = False, chunked = False, body_pos = None, preload_content = False 12:04:17 decode_content = False, response_kw = {} 12:04:17 parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/rests/data/transportpce-portmapping:network/nodes=SPDR-SA1/mapping=XPDR3-NETWORK1', query=None, fragment=None) 12:04:17 destination_scheme = None, conn = None, release_this_conn = True 12:04:17 http_tunnel_required = False, err = None, clean_exit = False 12:04:17 12:04:17 def urlopen( # type: ignore[override] 12:04:17 self, 12:04:17 method: str, 12:04:17 url: str, 12:04:17 body: _TYPE_BODY | None = None, 12:04:17 headers: typing.Mapping[str, str] | None = None, 12:04:17 retries: Retry | bool | int | None = None, 12:04:17 redirect: bool = True, 12:04:17 assert_same_host: bool = True, 12:04:17 timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 12:04:17 pool_timeout: int | None = None, 12:04:17 release_conn: bool | None = None, 12:04:17 chunked: bool = False, 12:04:17 body_pos: _TYPE_BODY_POSITION | None = None, 12:04:17 preload_content: bool = True, 12:04:17 decode_content: bool = True, 12:04:17 **response_kw: typing.Any, 12:04:17 ) -> BaseHTTPResponse: 
12:04:17 """ 12:04:17 Get a connection from the pool and perform an HTTP request. This is the 12:04:17 lowest level call for making a request, so you'll need to specify all 12:04:17 the raw details. 12:04:17 12:04:17 .. note:: 12:04:17 12:04:17 More commonly, it's appropriate to use a convenience method 12:04:17 such as :meth:`request`. 12:04:17 12:04:17 .. note:: 12:04:17 12:04:17 `release_conn` will only behave as expected if 12:04:17 `preload_content=False` because we want to make 12:04:17 `preload_content=False` the default behaviour someday soon without 12:04:17 breaking backwards compatibility. 12:04:17 12:04:17 :param method: 12:04:17 HTTP request method (such as GET, POST, PUT, etc.) 12:04:17 12:04:17 :param url: 12:04:17 The URL to perform the request on. 12:04:17 12:04:17 :param body: 12:04:17 Data to send in the request body, either :class:`str`, :class:`bytes`, 12:04:17 an iterable of :class:`str`/:class:`bytes`, or a file-like object. 12:04:17 12:04:17 :param headers: 12:04:17 Dictionary of custom headers to send, such as User-Agent, 12:04:17 If-None-Match, etc. If None, pool headers are used. If provided, 12:04:17 these headers completely replace any pool-specific headers. 12:04:17 12:04:17 :param retries: 12:04:17 Configure the number of retries to allow before raising a 12:04:17 :class:`~urllib3.exceptions.MaxRetryError` exception. 12:04:17 12:04:17 If ``None`` (default) will retry 3 times, see ``Retry.DEFAULT``. Pass a 12:04:17 :class:`~urllib3.util.retry.Retry` object for fine-grained control 12:04:17 over different types of retries. 12:04:17 Pass an integer number to retry connection errors that many times, 12:04:17 but no other types of errors. Pass zero to never retry. 12:04:17 12:04:17 If ``False``, then retries are disabled and any exception is raised 12:04:17 immediately. Also, instead of raising a MaxRetryError on redirects, 12:04:17 the redirect response will be returned. 12:04:17 12:04:17 :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. 12:04:17 12:04:17 :param redirect: 12:04:17 If True, automatically handle redirects (status codes 301, 302, 12:04:17 303, 307, 308). Each redirect counts as a retry. Disabling retries 12:04:17 will disable redirect, too. 12:04:17 12:04:17 :param assert_same_host: 12:04:17 If ``True``, will make sure that the host of the pool requests is 12:04:17 consistent else will raise HostChangedError. When ``False``, you can 12:04:17 use the pool on an HTTP proxy and request foreign hosts. 12:04:17 12:04:17 :param timeout: 12:04:17 If specified, overrides the default timeout for this one 12:04:17 request. It may be a float (in seconds) or an instance of 12:04:17 :class:`urllib3.util.Timeout`. 12:04:17 12:04:17 :param pool_timeout: 12:04:17 If set and the pool is set to block=True, then this method will 12:04:17 block for ``pool_timeout`` seconds and raise EmptyPoolError if no 12:04:17 connection is available within the time period. 12:04:17 12:04:17 :param bool preload_content: 12:04:17 If True, the response's body will be preloaded into memory. 12:04:17 12:04:17 :param bool decode_content: 12:04:17 If True, will attempt to decode the body based on the 12:04:17 'content-encoding' header. 12:04:17 12:04:17 :param release_conn: 12:04:17 If False, then the urlopen call will not release the connection 12:04:17 back into the pool once a response is received (but will release if 12:04:17 you read the entire contents of the response such as when 12:04:17 `preload_content=True`). 
This is useful if you're not preloading 12:04:17 the response's content immediately. You will need to call 12:04:17 ``r.release_conn()`` on the response ``r`` to return the connection 12:04:17 back into the pool. If None, it takes the value of ``preload_content`` 12:04:17 which defaults to ``True``. 12:04:17 12:04:17 :param bool chunked: 12:04:17 If True, urllib3 will send the body using chunked transfer 12:04:17 encoding. Otherwise, urllib3 will send the body using the standard 12:04:17 content-length form. Defaults to False. 12:04:17 12:04:17 :param int body_pos: 12:04:17 Position to seek to in file-like body in the event of a retry or 12:04:17 redirect. Typically this won't need to be set because urllib3 will 12:04:17 auto-populate the value when needed. 12:04:17 """ 12:04:17 parsed_url = parse_url(url) 12:04:17 destination_scheme = parsed_url.scheme 12:04:17 12:04:17 if headers is None: 12:04:17 headers = self.headers 12:04:17 12:04:17 if not isinstance(retries, Retry): 12:04:17 retries = Retry.from_int(retries, redirect=redirect, default=self.retries) 12:04:17 12:04:17 if release_conn is None: 12:04:17 release_conn = preload_content 12:04:17 12:04:17 # Check host 12:04:17 if assert_same_host and not self.is_same_host(url): 12:04:17 raise HostChangedError(self, url, retries) 12:04:17 12:04:17 # Ensure that the URL we're connecting to is properly encoded 12:04:17 if url.startswith("/"): 12:04:17 url = to_str(_encode_target(url)) 12:04:17 else: 12:04:17 url = to_str(parsed_url.url) 12:04:17 12:04:17 conn = None 12:04:17 12:04:17 # Track whether `conn` needs to be released before 12:04:17 # returning/raising/recursing. Update this variable if necessary, and 12:04:17 # leave `release_conn` constant throughout the function. That way, if 12:04:17 # the function recurses, the original value of `release_conn` will be 12:04:17 # passed down into the recursive call, and its value will be respected. 12:04:17 # 12:04:17 # See issue #651 [1] for details. 12:04:17 # 12:04:17 # [1] 12:04:17 release_this_conn = release_conn 12:04:17 12:04:17 http_tunnel_required = connection_requires_http_tunnel( 12:04:17 self.proxy, self.proxy_config, destination_scheme 12:04:17 ) 12:04:17 12:04:17 # Merge the proxy headers. Only done when not using HTTP CONNECT. We 12:04:17 # have to copy the headers dict so we can safely change it without those 12:04:17 # changes being reflected in anyone else's copy. 12:04:17 if not http_tunnel_required: 12:04:17 headers = headers.copy() # type: ignore[attr-defined] 12:04:17 headers.update(self.proxy_headers) # type: ignore[union-attr] 12:04:17 12:04:17 # Must keep the exception bound to a separate variable or else Python 3 12:04:17 # complains about UnboundLocalError. 12:04:17 err = None 12:04:17 12:04:17 # Keep track of whether we cleanly exited the except block. This 12:04:17 # ensures we do proper cleanup in finally. 12:04:17 clean_exit = False 12:04:17 12:04:17 # Rewind body position, if needed. Record current position 12:04:17 # for future rewinds in the event of a redirect/retry. 12:04:17 body_pos = set_file_position(body, body_pos) 12:04:17 12:04:17 try: 12:04:17 # Request a connection from the queue. 12:04:17 timeout_obj = self._get_timeout(timeout) 12:04:17 conn = self._get_conn(timeout=pool_timeout) 12:04:17 12:04:17 conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] 12:04:17 12:04:17 # Is this a closed/new connection that requires CONNECT tunnelling? 
12:04:17 if self.proxy is not None and http_tunnel_required and conn.is_closed: 12:04:17 try: 12:04:17 self._prepare_proxy(conn) 12:04:17 except (BaseSSLError, OSError, SocketTimeout) as e: 12:04:17 self._raise_timeout( 12:04:17 err=e, url=self.proxy.url, timeout_value=conn.timeout 12:04:17 ) 12:04:17 raise 12:04:17 12:04:17 # If we're going to release the connection in ``finally:``, then 12:04:17 # the response doesn't need to know about the connection. Otherwise 12:04:17 # it will also try to release it and we'll have a double-release 12:04:17 # mess. 12:04:17 response_conn = conn if not release_conn else None 12:04:17 12:04:17 # Make the request on the HTTPConnection object 12:04:17 > response = self._make_request( 12:04:17 conn, 12:04:17 method, 12:04:17 url, 12:04:17 timeout=timeout_obj, 12:04:17 body=body, 12:04:17 headers=headers, 12:04:17 chunked=chunked, 12:04:17 retries=retries, 12:04:17 response_conn=response_conn, 12:04:17 preload_content=preload_content, 12:04:17 decode_content=decode_content, 12:04:17 **response_kw, 12:04:17 ) 12:04:17 12:04:17 ../.tox/tests221/lib/python3.11/site-packages/urllib3/connectionpool.py:787: 12:04:17 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 12:04:17 ../.tox/tests221/lib/python3.11/site-packages/urllib3/connectionpool.py:493: in _make_request 12:04:17 conn.request( 12:04:17 ../.tox/tests221/lib/python3.11/site-packages/urllib3/connection.py:494: in request 12:04:17 self.endheaders() 12:04:17 /opt/pyenv/versions/3.11.10/lib/python3.11/http/client.py:1298: in endheaders 12:04:17 self._send_output(message_body, encode_chunked=encode_chunked) 12:04:17 /opt/pyenv/versions/3.11.10/lib/python3.11/http/client.py:1058: in _send_output 12:04:17 self.send(msg) 12:04:17 /opt/pyenv/versions/3.11.10/lib/python3.11/http/client.py:996: in send 12:04:17 self.connect() 12:04:17 ../.tox/tests221/lib/python3.11/site-packages/urllib3/connection.py:325: in connect 12:04:17 self.sock = self._new_conn() 12:04:17 ^^^^^^^^^^^^^^^^ 12:04:17 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 12:04:17 12:04:17 self = 12:04:17 12:04:17 def _new_conn(self) -> socket.socket: 12:04:17 """Establish a socket connection and set nodelay settings on it. 12:04:17 12:04:17 :return: New socket connection. 12:04:17 """ 12:04:17 try: 12:04:17 sock = connection.create_connection( 12:04:17 (self._dns_host, self.port), 12:04:17 self.timeout, 12:04:17 source_address=self.source_address, 12:04:17 socket_options=self.socket_options, 12:04:17 ) 12:04:17 except socket.gaierror as e: 12:04:17 raise NameResolutionError(self.host, self, e) from e 12:04:17 except SocketTimeout as e: 12:04:17 raise ConnectTimeoutError( 12:04:17 self, 12:04:17 f"Connection to {self.host} timed out. 
(connect timeout={self.timeout})", 12:04:17 ) from e 12:04:17 12:04:17 except OSError as e: 12:04:17 > raise NewConnectionError( 12:04:17 self, f"Failed to establish a new connection: {e}" 12:04:17 ) from e 12:04:17 E urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused 12:04:17 12:04:17 ../.tox/tests221/lib/python3.11/site-packages/urllib3/connection.py:213: NewConnectionError 12:04:17 12:04:17 The above exception was the direct cause of the following exception: 12:04:17 12:04:17 self = 12:04:17 request = , stream = False 12:04:17 timeout = Timeout(connect=30, read=30, total=None), verify = True, cert = None 12:04:17 proxies = OrderedDict() 12:04:17 12:04:17 def send( 12:04:17 self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 12:04:17 ): 12:04:17 """Sends PreparedRequest object. Returns Response object. 12:04:17 12:04:17 :param request: The :class:`PreparedRequest ` being sent. 12:04:17 :param stream: (optional) Whether to stream the request content. 12:04:17 :param timeout: (optional) How long to wait for the server to send 12:04:17 data before giving up, as a float, or a :ref:`(connect timeout, 12:04:17 read timeout) ` tuple. 12:04:17 :type timeout: float or tuple or urllib3 Timeout object 12:04:17 :param verify: (optional) Either a boolean, in which case it controls whether 12:04:17 we verify the server's TLS certificate, or a string, in which case it 12:04:17 must be a path to a CA bundle to use 12:04:17 :param cert: (optional) Any user-provided SSL certificate to be trusted. 12:04:17 :param proxies: (optional) The proxies dictionary to apply to the request. 12:04:17 :rtype: requests.Response 12:04:17 """ 12:04:17 12:04:17 try: 12:04:17 conn = self.get_connection_with_tls_context( 12:04:17 request, verify, proxies=proxies, cert=cert 12:04:17 ) 12:04:17 except LocationValueError as e: 12:04:17 raise InvalidURL(e, request=request) 12:04:17 12:04:17 self.cert_verify(conn, request.url, verify, cert) 12:04:17 url = self.request_url(request, proxies) 12:04:17 self.add_headers( 12:04:17 request, 12:04:17 stream=stream, 12:04:17 timeout=timeout, 12:04:17 verify=verify, 12:04:17 cert=cert, 12:04:17 proxies=proxies, 12:04:17 ) 12:04:17 12:04:17 chunked = not (request.body is None or "Content-Length" in request.headers) 12:04:17 12:04:17 if isinstance(timeout, tuple): 12:04:17 try: 12:04:17 connect, read = timeout 12:04:17 timeout = TimeoutSauce(connect=connect, read=read) 12:04:17 except ValueError: 12:04:17 raise ValueError( 12:04:17 f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 12:04:17 f"or a single float to set both timeouts to the same value." 
12:04:17 ) 12:04:17 elif isinstance(timeout, TimeoutSauce): 12:04:17 pass 12:04:17 else: 12:04:17 timeout = TimeoutSauce(connect=timeout, read=timeout) 12:04:17 12:04:17 try: 12:04:17 > resp = conn.urlopen( 12:04:17 method=request.method, 12:04:17 url=url, 12:04:17 body=request.body, 12:04:17 headers=request.headers, 12:04:17 redirect=False, 12:04:17 assert_same_host=False, 12:04:17 preload_content=False, 12:04:17 decode_content=False, 12:04:17 retries=self.max_retries, 12:04:17 timeout=timeout, 12:04:17 chunked=chunked, 12:04:17 ) 12:04:17 12:04:17 ../.tox/tests221/lib/python3.11/site-packages/requests/adapters.py:667: 12:04:17 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 12:04:17 ../.tox/tests221/lib/python3.11/site-packages/urllib3/connectionpool.py:841: in urlopen 12:04:17 retries = retries.increment( 12:04:17 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 12:04:17 12:04:17 self = Retry(total=0, connect=None, read=False, redirect=None, status=None) 12:04:17 method = 'GET' 12:04:17 url = '/rests/data/transportpce-portmapping:network/nodes=SPDR-SA1/mapping=XPDR3-NETWORK1' 12:04:17 response = None 12:04:17 error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused') 12:04:17 _pool = 12:04:17 _stacktrace = 12:04:17 12:04:17 def increment( 12:04:17 self, 12:04:17 method: str | None = None, 12:04:17 url: str | None = None, 12:04:17 response: BaseHTTPResponse | None = None, 12:04:17 error: Exception | None = None, 12:04:17 _pool: ConnectionPool | None = None, 12:04:17 _stacktrace: TracebackType | None = None, 12:04:17 ) -> Self: 12:04:17 """Return a new Retry object with incremented retry counters. 12:04:17 12:04:17 :param response: A response object, or None, if the server did not 12:04:17 return a response. 12:04:17 :type response: :class:`~urllib3.response.BaseHTTPResponse` 12:04:17 :param Exception error: An error encountered during the request, or 12:04:17 None if the response was received successfully. 12:04:17 12:04:17 :return: A new ``Retry`` object. 12:04:17 """ 12:04:17 if self.total is False and error: 12:04:17 # Disabled, indicate to re-raise the error. 12:04:17 raise reraise(type(error), error, _stacktrace) 12:04:17 12:04:17 total = self.total 12:04:17 if total is not None: 12:04:17 total -= 1 12:04:17 12:04:17 connect = self.connect 12:04:17 read = self.read 12:04:17 redirect = self.redirect 12:04:17 status_count = self.status 12:04:17 other = self.other 12:04:17 cause = "unknown" 12:04:17 status = None 12:04:17 redirect_location = None 12:04:17 12:04:17 if error and self._is_connection_error(error): 12:04:17 # Connect retry? 12:04:17 if connect is False: 12:04:17 raise reraise(type(error), error, _stacktrace) 12:04:17 elif connect is not None: 12:04:17 connect -= 1 12:04:17 12:04:17 elif error and self._is_read_error(error): 12:04:17 # Read retry? 12:04:17 if read is False or method is None or not self._is_method_retryable(method): 12:04:17 raise reraise(type(error), error, _stacktrace) 12:04:17 elif read is not None: 12:04:17 read -= 1 12:04:17 12:04:17 elif error: 12:04:17 # Other retry? 12:04:17 if other is not None: 12:04:17 other -= 1 12:04:17 12:04:17 elif response and response.get_redirect_location(): 12:04:17 # Redirect retry? 
12:04:17 if redirect is not None: 12:04:17 redirect -= 1 12:04:17 cause = "too many redirects" 12:04:17 response_redirect_location = response.get_redirect_location() 12:04:17 if response_redirect_location: 12:04:17 redirect_location = response_redirect_location 12:04:17 status = response.status 12:04:17 12:04:17 else: 12:04:17 # Incrementing because of a server error like a 500 in 12:04:17 # status_forcelist and the given method is in the allowed_methods 12:04:17 cause = ResponseError.GENERIC_ERROR 12:04:17 if response and response.status: 12:04:17 if status_count is not None: 12:04:17 status_count -= 1 12:04:17 cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status) 12:04:17 status = response.status 12:04:17 12:04:17 history = self.history + ( 12:04:17 RequestHistory(method, url, error, status, redirect_location), 12:04:17 ) 12:04:17 12:04:17 new_retry = self.new( 12:04:17 total=total, 12:04:17 connect=connect, 12:04:17 read=read, 12:04:17 redirect=redirect, 12:04:17 status=status_count, 12:04:17 other=other, 12:04:17 history=history, 12:04:17 ) 12:04:17 12:04:17 if new_retry.is_exhausted(): 12:04:17 reason = error or ResponseError(cause) 12:04:17 > raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type] 12:04:17 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 12:04:17 E urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=8183): Max retries exceeded with url: /rests/data/transportpce-portmapping:network/nodes=SPDR-SA1/mapping=XPDR3-NETWORK1 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 12:04:17 12:04:17 ../.tox/tests221/lib/python3.11/site-packages/urllib3/util/retry.py:519: MaxRetryError 12:04:17 12:04:17 During handling of the above exception, another exception occurred: 12:04:17 12:04:17 self = 12:04:17 12:04:17 def test_27_spdr_portmapping_XPDR3_NETWORK1(self): 12:04:17 > response = test_utils.get_portmapping_node_attr("SPDR-SA1", "mapping", "XPDR3-NETWORK1") 12:04:17 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 12:04:17 12:04:17 transportpce_tests/2.2.1/test01_portmapping.py:337: 12:04:17 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 12:04:17 transportpce_tests/common/test_utils.py:492: in get_portmapping_node_attr 12:04:17 response = get_request(target_url) 12:04:17 ^^^^^^^^^^^^^^^^^^^^^^^ 12:04:17 transportpce_tests/common/test_utils.py:116: in get_request 12:04:17 return requests.request( 12:04:17 ../.tox/tests221/lib/python3.11/site-packages/requests/api.py:59: in request 12:04:17 return session.request(method=method, url=url, **kwargs) 12:04:17 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 12:04:17 ../.tox/tests221/lib/python3.11/site-packages/requests/sessions.py:589: in request 12:04:17 resp = self.send(prep, **send_kwargs) 12:04:17 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 12:04:17 ../.tox/tests221/lib/python3.11/site-packages/requests/sessions.py:703: in send 12:04:17 r = adapter.send(request, **kwargs) 12:04:17 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 12:04:17 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 12:04:17 12:04:17 self = 12:04:17 request = , stream = False 12:04:17 timeout = Timeout(connect=30, read=30, total=None), verify = True, cert = None 12:04:17 proxies = OrderedDict() 12:04:17 12:04:17 def send( 12:04:17 self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 12:04:17 ): 12:04:17 """Sends PreparedRequest object. 
Returns Response object. 12:04:17 12:04:17 :param request: The :class:`PreparedRequest ` being sent. 12:04:17 :param stream: (optional) Whether to stream the request content. 12:04:17 :param timeout: (optional) How long to wait for the server to send 12:04:17 data before giving up, as a float, or a :ref:`(connect timeout, 12:04:17 read timeout) ` tuple. 12:04:17 :type timeout: float or tuple or urllib3 Timeout object 12:04:17 :param verify: (optional) Either a boolean, in which case it controls whether 12:04:17 we verify the server's TLS certificate, or a string, in which case it 12:04:17 must be a path to a CA bundle to use 12:04:17 :param cert: (optional) Any user-provided SSL certificate to be trusted. 12:04:17 :param proxies: (optional) The proxies dictionary to apply to the request. 12:04:17 :rtype: requests.Response 12:04:17 """ 12:04:17 12:04:17 try: 12:04:17 conn = self.get_connection_with_tls_context( 12:04:17 request, verify, proxies=proxies, cert=cert 12:04:17 ) 12:04:17 except LocationValueError as e: 12:04:17 raise InvalidURL(e, request=request) 12:04:17 12:04:17 self.cert_verify(conn, request.url, verify, cert) 12:04:17 url = self.request_url(request, proxies) 12:04:17 self.add_headers( 12:04:17 request, 12:04:17 stream=stream, 12:04:17 timeout=timeout, 12:04:17 verify=verify, 12:04:17 cert=cert, 12:04:17 proxies=proxies, 12:04:17 ) 12:04:17 12:04:17 chunked = not (request.body is None or "Content-Length" in request.headers) 12:04:17 12:04:17 if isinstance(timeout, tuple): 12:04:17 try: 12:04:17 connect, read = timeout 12:04:17 timeout = TimeoutSauce(connect=connect, read=read) 12:04:17 except ValueError: 12:04:17 raise ValueError( 12:04:17 f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 12:04:17 f"or a single float to set both timeouts to the same value." 12:04:17 ) 12:04:17 elif isinstance(timeout, TimeoutSauce): 12:04:17 pass 12:04:17 else: 12:04:17 timeout = TimeoutSauce(connect=timeout, read=timeout) 12:04:17 12:04:17 try: 12:04:17 resp = conn.urlopen( 12:04:17 method=request.method, 12:04:17 url=url, 12:04:17 body=request.body, 12:04:17 headers=request.headers, 12:04:17 redirect=False, 12:04:17 assert_same_host=False, 12:04:17 preload_content=False, 12:04:17 decode_content=False, 12:04:17 retries=self.max_retries, 12:04:17 timeout=timeout, 12:04:17 chunked=chunked, 12:04:17 ) 12:04:17 12:04:17 except (ProtocolError, OSError) as err: 12:04:17 raise ConnectionError(err, request=request) 12:04:17 12:04:17 except MaxRetryError as e: 12:04:17 if isinstance(e.reason, ConnectTimeoutError): 12:04:17 # TODO: Remove this in 3.0.0: see #2811 12:04:17 if not isinstance(e.reason, NewConnectionError): 12:04:17 raise ConnectTimeout(e, request=request) 12:04:17 12:04:17 if isinstance(e.reason, ResponseError): 12:04:17 raise RetryError(e, request=request) 12:04:17 12:04:17 if isinstance(e.reason, _ProxyError): 12:04:17 raise ProxyError(e, request=request) 12:04:17 12:04:17 if isinstance(e.reason, _SSLError): 12:04:17 # This branch is for urllib3 v1.22 and later. 
12:04:17 raise SSLError(e, request=request) 12:04:17 12:04:17 > raise ConnectionError(e, request=request) 12:04:17 E requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=8183): Max retries exceeded with url: /rests/data/transportpce-portmapping:network/nodes=SPDR-SA1/mapping=XPDR3-NETWORK1 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 12:04:17 12:04:17 ../.tox/tests221/lib/python3.11/site-packages/requests/adapters.py:700: ConnectionError 12:04:17 ----------------------------- Captured stdout call ----------------------------- 12:04:17 execution of test_27_spdr_portmapping_XPDR3_NETWORK1 12:04:17 _______ TransportPCEPortMappingTesting.test_28_spdr_device_disconnection _______ 12:04:17 12:04:17 self = 12:04:17 12:04:17 def _new_conn(self) -> socket.socket: 12:04:17 """Establish a socket connection and set nodelay settings on it. 12:04:17 12:04:17 :return: New socket connection. 12:04:17 """ 12:04:17 try: 12:04:17 > sock = connection.create_connection( 12:04:17 (self._dns_host, self.port), 12:04:17 self.timeout, 12:04:17 source_address=self.source_address, 12:04:17 socket_options=self.socket_options, 12:04:17 ) 12:04:17 12:04:17 ../.tox/tests221/lib/python3.11/site-packages/urllib3/connection.py:198: 12:04:17 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 12:04:17 ../.tox/tests221/lib/python3.11/site-packages/urllib3/util/connection.py:85: in create_connection 12:04:17 raise err 12:04:17 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 12:04:17 12:04:17 address = ('localhost', 8183), timeout = 30, source_address = None 12:04:17 socket_options = [(6, 1, 1)] 12:04:17 12:04:17 def create_connection( 12:04:17 address: tuple[str, int], 12:04:17 timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 12:04:17 source_address: tuple[str, int] | None = None, 12:04:17 socket_options: _TYPE_SOCKET_OPTIONS | None = None, 12:04:17 ) -> socket.socket: 12:04:17 """Connect to *address* and return the socket object. 12:04:17 12:04:17 Convenience function. Connect to *address* (a 2-tuple ``(host, 12:04:17 port)``) and return the socket object. Passing the optional 12:04:17 *timeout* parameter will set the timeout on the socket instance 12:04:17 before attempting to connect. If no *timeout* is supplied, the 12:04:17 global default timeout setting returned by :func:`socket.getdefaulttimeout` 12:04:17 is used. If *source_address* is set it must be a tuple of (host, port) 12:04:17 for the socket to bind as a source address before making the connection. 12:04:17 An host of '' or port 0 tells the OS to use the default. 12:04:17 """ 12:04:17 12:04:17 host, port = address 12:04:17 if host.startswith("["): 12:04:17 host = host.strip("[]") 12:04:17 err = None 12:04:17 12:04:17 # Using the value from allowed_gai_family() in the context of getaddrinfo lets 12:04:17 # us select whether to work with IPv4 DNS records, IPv6 records, or both. 12:04:17 # The original create_connection function always returns all records. 
12:04:17 family = allowed_gai_family() 12:04:17 12:04:17 try: 12:04:17 host.encode("idna") 12:04:17 except UnicodeError: 12:04:17 raise LocationParseError(f"'{host}', label empty or too long") from None 12:04:17 12:04:17 for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM): 12:04:17 af, socktype, proto, canonname, sa = res 12:04:17 sock = None 12:04:17 try: 12:04:17 sock = socket.socket(af, socktype, proto) 12:04:17 12:04:17 # If provided, set socket level options before connecting. 12:04:17 _set_socket_options(sock, socket_options) 12:04:17 12:04:17 if timeout is not _DEFAULT_TIMEOUT: 12:04:17 sock.settimeout(timeout) 12:04:17 if source_address: 12:04:17 sock.bind(source_address) 12:04:17 > sock.connect(sa) 12:04:17 E ConnectionRefusedError: [Errno 111] Connection refused 12:04:17 12:04:17 ../.tox/tests221/lib/python3.11/site-packages/urllib3/util/connection.py:73: ConnectionRefusedError 12:04:17 12:04:17 The above exception was the direct cause of the following exception: 12:04:17 12:04:17 self = 12:04:17 method = 'DELETE' 12:04:17 url = '/rests/data/network-topology:network-topology/topology=topology-netconf/node=SPDR-SA1' 12:04:17 body = None 12:04:17 headers = {'User-Agent': 'python-requests/2.32.4', 'Accept-Encoding': 'gzip, deflate', 'Accept': 'application/json', 'Connection': 'keep-alive', 'Content-Type': 'application/json', 'Content-Length': '0', 'Authorization': 'Basic YWRtaW46YWRtaW4='} 12:04:17 retries = Retry(total=0, connect=None, read=False, redirect=None, status=None) 12:04:17 redirect = False, assert_same_host = False 12:04:17 timeout = Timeout(connect=30, read=30, total=None), pool_timeout = None 12:04:17 release_conn = False, chunked = False, body_pos = None, preload_content = False 12:04:17 decode_content = False, response_kw = {} 12:04:17 parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/rests/data/network-topology:network-topology/topology=topology-netconf/node=SPDR-SA1', query=None, fragment=None) 12:04:17 destination_scheme = None, conn = None, release_this_conn = True 12:04:17 http_tunnel_required = False, err = None, clean_exit = False 12:04:17 12:04:17 def urlopen( # type: ignore[override] 12:04:17 self, 12:04:17 method: str, 12:04:17 url: str, 12:04:17 body: _TYPE_BODY | None = None, 12:04:17 headers: typing.Mapping[str, str] | None = None, 12:04:17 retries: Retry | bool | int | None = None, 12:04:17 redirect: bool = True, 12:04:17 assert_same_host: bool = True, 12:04:17 timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 12:04:17 pool_timeout: int | None = None, 12:04:17 release_conn: bool | None = None, 12:04:17 chunked: bool = False, 12:04:17 body_pos: _TYPE_BODY_POSITION | None = None, 12:04:17 preload_content: bool = True, 12:04:17 decode_content: bool = True, 12:04:17 **response_kw: typing.Any, 12:04:17 ) -> BaseHTTPResponse: 12:04:17 """ 12:04:17 Get a connection from the pool and perform an HTTP request. This is the 12:04:17 lowest level call for making a request, so you'll need to specify all 12:04:17 the raw details. 12:04:17 12:04:17 .. note:: 12:04:17 12:04:17 More commonly, it's appropriate to use a convenience method 12:04:17 such as :meth:`request`. 12:04:17 12:04:17 .. note:: 12:04:17 12:04:17 `release_conn` will only behave as expected if 12:04:17 `preload_content=False` because we want to make 12:04:17 `preload_content=False` the default behaviour someday soon without 12:04:17 breaking backwards compatibility. 12:04:17 12:04:17 :param method: 12:04:17 HTTP request method (such as GET, POST, PUT, etc.) 
12:04:17 12:04:17 :param url: 12:04:17 The URL to perform the request on. 12:04:17 12:04:17 :param body: 12:04:17 Data to send in the request body, either :class:`str`, :class:`bytes`, 12:04:17 an iterable of :class:`str`/:class:`bytes`, or a file-like object. 12:04:17 12:04:17 :param headers: 12:04:17 Dictionary of custom headers to send, such as User-Agent, 12:04:17 If-None-Match, etc. If None, pool headers are used. If provided, 12:04:17 these headers completely replace any pool-specific headers. 12:04:17 12:04:17 :param retries: 12:04:17 Configure the number of retries to allow before raising a 12:04:17 :class:`~urllib3.exceptions.MaxRetryError` exception. 12:04:17 12:04:17 If ``None`` (default) will retry 3 times, see ``Retry.DEFAULT``. Pass a 12:04:17 :class:`~urllib3.util.retry.Retry` object for fine-grained control 12:04:17 over different types of retries. 12:04:17 Pass an integer number to retry connection errors that many times, 12:04:17 but no other types of errors. Pass zero to never retry. 12:04:17 12:04:17 If ``False``, then retries are disabled and any exception is raised 12:04:17 immediately. Also, instead of raising a MaxRetryError on redirects, 12:04:17 the redirect response will be returned. 12:04:17 12:04:17 :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. 12:04:17 12:04:17 :param redirect: 12:04:17 If True, automatically handle redirects (status codes 301, 302, 12:04:17 303, 307, 308). Each redirect counts as a retry. Disabling retries 12:04:17 will disable redirect, too. 12:04:17 12:04:17 :param assert_same_host: 12:04:17 If ``True``, will make sure that the host of the pool requests is 12:04:17 consistent else will raise HostChangedError. When ``False``, you can 12:04:17 use the pool on an HTTP proxy and request foreign hosts. 12:04:17 12:04:17 :param timeout: 12:04:17 If specified, overrides the default timeout for this one 12:04:17 request. It may be a float (in seconds) or an instance of 12:04:17 :class:`urllib3.util.Timeout`. 12:04:17 12:04:17 :param pool_timeout: 12:04:17 If set and the pool is set to block=True, then this method will 12:04:17 block for ``pool_timeout`` seconds and raise EmptyPoolError if no 12:04:17 connection is available within the time period. 12:04:17 12:04:17 :param bool preload_content: 12:04:17 If True, the response's body will be preloaded into memory. 12:04:17 12:04:17 :param bool decode_content: 12:04:17 If True, will attempt to decode the body based on the 12:04:17 'content-encoding' header. 12:04:17 12:04:17 :param release_conn: 12:04:17 If False, then the urlopen call will not release the connection 12:04:17 back into the pool once a response is received (but will release if 12:04:17 you read the entire contents of the response such as when 12:04:17 `preload_content=True`). This is useful if you're not preloading 12:04:17 the response's content immediately. You will need to call 12:04:17 ``r.release_conn()`` on the response ``r`` to return the connection 12:04:17 back into the pool. If None, it takes the value of ``preload_content`` 12:04:17 which defaults to ``True``. 12:04:17 12:04:17 :param bool chunked: 12:04:17 If True, urllib3 will send the body using chunked transfer 12:04:17 encoding. Otherwise, urllib3 will send the body using the standard 12:04:17 content-length form. Defaults to False. 12:04:17 12:04:17 :param int body_pos: 12:04:17 Position to seek to in file-like body in the event of a retry or 12:04:17 redirect. 
Typically this won't need to be set because urllib3 will 12:04:17 auto-populate the value when needed. 12:04:17 """ 12:04:17 parsed_url = parse_url(url) 12:04:17 destination_scheme = parsed_url.scheme 12:04:17 12:04:17 if headers is None: 12:04:17 headers = self.headers 12:04:17 12:04:17 if not isinstance(retries, Retry): 12:04:17 retries = Retry.from_int(retries, redirect=redirect, default=self.retries) 12:04:17 12:04:17 if release_conn is None: 12:04:17 release_conn = preload_content 12:04:17 12:04:17 # Check host 12:04:17 if assert_same_host and not self.is_same_host(url): 12:04:17 raise HostChangedError(self, url, retries) 12:04:17 12:04:17 # Ensure that the URL we're connecting to is properly encoded 12:04:17 if url.startswith("/"): 12:04:17 url = to_str(_encode_target(url)) 12:04:17 else: 12:04:17 url = to_str(parsed_url.url) 12:04:17 12:04:17 conn = None 12:04:17 12:04:17 # Track whether `conn` needs to be released before 12:04:17 # returning/raising/recursing. Update this variable if necessary, and 12:04:17 # leave `release_conn` constant throughout the function. That way, if 12:04:17 # the function recurses, the original value of `release_conn` will be 12:04:17 # passed down into the recursive call, and its value will be respected. 12:04:17 # 12:04:17 # See issue #651 [1] for details. 12:04:17 # 12:04:17 # [1] 12:04:17 release_this_conn = release_conn 12:04:17 12:04:17 http_tunnel_required = connection_requires_http_tunnel( 12:04:17 self.proxy, self.proxy_config, destination_scheme 12:04:17 ) 12:04:17 12:04:17 # Merge the proxy headers. Only done when not using HTTP CONNECT. We 12:04:17 # have to copy the headers dict so we can safely change it without those 12:04:17 # changes being reflected in anyone else's copy. 12:04:17 if not http_tunnel_required: 12:04:17 headers = headers.copy() # type: ignore[attr-defined] 12:04:17 headers.update(self.proxy_headers) # type: ignore[union-attr] 12:04:17 12:04:17 # Must keep the exception bound to a separate variable or else Python 3 12:04:17 # complains about UnboundLocalError. 12:04:17 err = None 12:04:17 12:04:17 # Keep track of whether we cleanly exited the except block. This 12:04:17 # ensures we do proper cleanup in finally. 12:04:17 clean_exit = False 12:04:17 12:04:17 # Rewind body position, if needed. Record current position 12:04:17 # for future rewinds in the event of a redirect/retry. 12:04:17 body_pos = set_file_position(body, body_pos) 12:04:17 12:04:17 try: 12:04:17 # Request a connection from the queue. 12:04:17 timeout_obj = self._get_timeout(timeout) 12:04:17 conn = self._get_conn(timeout=pool_timeout) 12:04:17 12:04:17 conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] 12:04:17 12:04:17 # Is this a closed/new connection that requires CONNECT tunnelling? 12:04:17 if self.proxy is not None and http_tunnel_required and conn.is_closed: 12:04:17 try: 12:04:17 self._prepare_proxy(conn) 12:04:17 except (BaseSSLError, OSError, SocketTimeout) as e: 12:04:17 self._raise_timeout( 12:04:17 err=e, url=self.proxy.url, timeout_value=conn.timeout 12:04:17 ) 12:04:17 raise 12:04:17 12:04:17 # If we're going to release the connection in ``finally:``, then 12:04:17 # the response doesn't need to know about the connection. Otherwise 12:04:17 # it will also try to release it and we'll have a double-release 12:04:17 # mess. 
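The locals dumped above show retries = Retry(total=0, connect=None, read=False, ...): requests' default HTTPAdapter hands a zero-retry budget to urlopen, so the very first NewConnectionError exhausts it and is surfaced as MaxRetryError and then requests.exceptions.ConnectionError. If transient controller restarts were the problem (they are not here, the port is simply closed), a session could be mounted with a more forgiving Retry; this is only an illustrative sketch, not what transportpce_tests/common/test_utils.py actually does (the traceback shows it calls module-level requests.request):

import requests
from requests.adapters import HTTPAdapter
from urllib3.util.retry import Retry

# Retry refused connections a few times with exponential backoff instead of
# failing on the first attempt as Retry(total=0, ...) does in the log above.
retry = Retry(total=5, connect=5, backoff_factor=0.5,
              allowed_methods=None)  # None = retry regardless of HTTP method (incl. DELETE)
session = requests.Session()
session.mount("http://", HTTPAdapter(max_retries=retry))

# Hypothetical usage against the endpoint seen in the log:
# session.delete("http://localhost:8183/rests/data/network-topology:network-topology"
#                "/topology=topology-netconf/node=SPDR-SA1",
#                auth=("admin", "admin"), timeout=(30, 30))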
12:04:17 response_conn = conn if not release_conn else None 12:04:17 12:04:17 # Make the request on the HTTPConnection object 12:04:17 > response = self._make_request( 12:04:17 conn, 12:04:17 method, 12:04:17 url, 12:04:17 timeout=timeout_obj, 12:04:17 body=body, 12:04:17 headers=headers, 12:04:17 chunked=chunked, 12:04:17 retries=retries, 12:04:17 response_conn=response_conn, 12:04:17 preload_content=preload_content, 12:04:17 decode_content=decode_content, 12:04:17 **response_kw, 12:04:17 ) 12:04:17 12:04:17 ../.tox/tests221/lib/python3.11/site-packages/urllib3/connectionpool.py:787: 12:04:17 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 12:04:17 ../.tox/tests221/lib/python3.11/site-packages/urllib3/connectionpool.py:493: in _make_request 12:04:17 conn.request( 12:04:17 ../.tox/tests221/lib/python3.11/site-packages/urllib3/connection.py:494: in request 12:04:17 self.endheaders() 12:04:17 /opt/pyenv/versions/3.11.10/lib/python3.11/http/client.py:1298: in endheaders 12:04:17 self._send_output(message_body, encode_chunked=encode_chunked) 12:04:17 /opt/pyenv/versions/3.11.10/lib/python3.11/http/client.py:1058: in _send_output 12:04:17 self.send(msg) 12:04:17 /opt/pyenv/versions/3.11.10/lib/python3.11/http/client.py:996: in send 12:04:17 self.connect() 12:04:17 ../.tox/tests221/lib/python3.11/site-packages/urllib3/connection.py:325: in connect 12:04:17 self.sock = self._new_conn() 12:04:17 ^^^^^^^^^^^^^^^^ 12:04:17 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 12:04:17 12:04:17 self = 12:04:17 12:04:17 def _new_conn(self) -> socket.socket: 12:04:17 """Establish a socket connection and set nodelay settings on it. 12:04:17 12:04:17 :return: New socket connection. 12:04:17 """ 12:04:17 try: 12:04:17 sock = connection.create_connection( 12:04:17 (self._dns_host, self.port), 12:04:17 self.timeout, 12:04:17 source_address=self.source_address, 12:04:17 socket_options=self.socket_options, 12:04:17 ) 12:04:17 except socket.gaierror as e: 12:04:17 raise NameResolutionError(self.host, self, e) from e 12:04:17 except SocketTimeout as e: 12:04:17 raise ConnectTimeoutError( 12:04:17 self, 12:04:17 f"Connection to {self.host} timed out. (connect timeout={self.timeout})", 12:04:17 ) from e 12:04:17 12:04:17 except OSError as e: 12:04:17 > raise NewConnectionError( 12:04:17 self, f"Failed to establish a new connection: {e}" 12:04:17 ) from e 12:04:17 E urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused 12:04:17 12:04:17 ../.tox/tests221/lib/python3.11/site-packages/urllib3/connection.py:213: NewConnectionError 12:04:17 12:04:17 The above exception was the direct cause of the following exception: 12:04:17 12:04:17 self = 12:04:17 request = , stream = False 12:04:17 timeout = Timeout(connect=30, read=30, total=None), verify = True, cert = None 12:04:17 proxies = OrderedDict() 12:04:17 12:04:17 def send( 12:04:17 self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 12:04:17 ): 12:04:17 """Sends PreparedRequest object. Returns Response object. 12:04:17 12:04:17 :param request: The :class:`PreparedRequest ` being sent. 12:04:17 :param stream: (optional) Whether to stream the request content. 12:04:17 :param timeout: (optional) How long to wait for the server to send 12:04:17 data before giving up, as a float, or a :ref:`(connect timeout, 12:04:17 read timeout) ` tuple. 
12:04:17 :type timeout: float or tuple or urllib3 Timeout object 12:04:17 :param verify: (optional) Either a boolean, in which case it controls whether 12:04:17 we verify the server's TLS certificate, or a string, in which case it 12:04:17 must be a path to a CA bundle to use 12:04:17 :param cert: (optional) Any user-provided SSL certificate to be trusted. 12:04:17 :param proxies: (optional) The proxies dictionary to apply to the request. 12:04:17 :rtype: requests.Response 12:04:17 """ 12:04:17 12:04:17 try: 12:04:17 conn = self.get_connection_with_tls_context( 12:04:17 request, verify, proxies=proxies, cert=cert 12:04:17 ) 12:04:17 except LocationValueError as e: 12:04:17 raise InvalidURL(e, request=request) 12:04:17 12:04:17 self.cert_verify(conn, request.url, verify, cert) 12:04:17 url = self.request_url(request, proxies) 12:04:17 self.add_headers( 12:04:17 request, 12:04:17 stream=stream, 12:04:17 timeout=timeout, 12:04:17 verify=verify, 12:04:17 cert=cert, 12:04:17 proxies=proxies, 12:04:17 ) 12:04:17 12:04:17 chunked = not (request.body is None or "Content-Length" in request.headers) 12:04:17 12:04:17 if isinstance(timeout, tuple): 12:04:17 try: 12:04:17 connect, read = timeout 12:04:17 timeout = TimeoutSauce(connect=connect, read=read) 12:04:17 except ValueError: 12:04:17 raise ValueError( 12:04:17 f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 12:04:17 f"or a single float to set both timeouts to the same value." 12:04:17 ) 12:04:17 elif isinstance(timeout, TimeoutSauce): 12:04:17 pass 12:04:17 else: 12:04:17 timeout = TimeoutSauce(connect=timeout, read=timeout) 12:04:17 12:04:17 try: 12:04:17 > resp = conn.urlopen( 12:04:17 method=request.method, 12:04:17 url=url, 12:04:17 body=request.body, 12:04:17 headers=request.headers, 12:04:17 redirect=False, 12:04:17 assert_same_host=False, 12:04:17 preload_content=False, 12:04:17 decode_content=False, 12:04:17 retries=self.max_retries, 12:04:17 timeout=timeout, 12:04:17 chunked=chunked, 12:04:17 ) 12:04:17 12:04:17 ../.tox/tests221/lib/python3.11/site-packages/requests/adapters.py:667: 12:04:17 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 12:04:17 ../.tox/tests221/lib/python3.11/site-packages/urllib3/connectionpool.py:841: in urlopen 12:04:17 retries = retries.increment( 12:04:17 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 12:04:17 12:04:17 self = Retry(total=0, connect=None, read=False, redirect=None, status=None) 12:04:17 method = 'DELETE' 12:04:17 url = '/rests/data/network-topology:network-topology/topology=topology-netconf/node=SPDR-SA1' 12:04:17 response = None 12:04:17 error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused') 12:04:17 _pool = 12:04:17 _stacktrace = 12:04:17 12:04:17 def increment( 12:04:17 self, 12:04:17 method: str | None = None, 12:04:17 url: str | None = None, 12:04:17 response: BaseHTTPResponse | None = None, 12:04:17 error: Exception | None = None, 12:04:17 _pool: ConnectionPool | None = None, 12:04:17 _stacktrace: TracebackType | None = None, 12:04:17 ) -> Self: 12:04:17 """Return a new Retry object with incremented retry counters. 12:04:17 12:04:17 :param response: A response object, or None, if the server did not 12:04:17 return a response. 12:04:17 :type response: :class:`~urllib3.response.BaseHTTPResponse` 12:04:17 :param Exception error: An error encountered during the request, or 12:04:17 None if the response was received successfully. 
12:04:17 12:04:17 :return: A new ``Retry`` object. 12:04:17 """ 12:04:17 if self.total is False and error: 12:04:17 # Disabled, indicate to re-raise the error. 12:04:17 raise reraise(type(error), error, _stacktrace) 12:04:17 12:04:17 total = self.total 12:04:17 if total is not None: 12:04:17 total -= 1 12:04:17 12:04:17 connect = self.connect 12:04:17 read = self.read 12:04:17 redirect = self.redirect 12:04:17 status_count = self.status 12:04:17 other = self.other 12:04:17 cause = "unknown" 12:04:17 status = None 12:04:17 redirect_location = None 12:04:17 12:04:17 if error and self._is_connection_error(error): 12:04:17 # Connect retry? 12:04:17 if connect is False: 12:04:17 raise reraise(type(error), error, _stacktrace) 12:04:17 elif connect is not None: 12:04:17 connect -= 1 12:04:17 12:04:17 elif error and self._is_read_error(error): 12:04:17 # Read retry? 12:04:17 if read is False or method is None or not self._is_method_retryable(method): 12:04:17 raise reraise(type(error), error, _stacktrace) 12:04:17 elif read is not None: 12:04:17 read -= 1 12:04:17 12:04:17 elif error: 12:04:17 # Other retry? 12:04:17 if other is not None: 12:04:17 other -= 1 12:04:17 12:04:17 elif response and response.get_redirect_location(): 12:04:17 # Redirect retry? 12:04:17 if redirect is not None: 12:04:17 redirect -= 1 12:04:17 cause = "too many redirects" 12:04:17 response_redirect_location = response.get_redirect_location() 12:04:17 if response_redirect_location: 12:04:17 redirect_location = response_redirect_location 12:04:17 status = response.status 12:04:17 12:04:17 else: 12:04:17 # Incrementing because of a server error like a 500 in 12:04:17 # status_forcelist and the given method is in the allowed_methods 12:04:17 cause = ResponseError.GENERIC_ERROR 12:04:17 if response and response.status: 12:04:17 if status_count is not None: 12:04:17 status_count -= 1 12:04:17 cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status) 12:04:17 status = response.status 12:04:17 12:04:17 history = self.history + ( 12:04:17 RequestHistory(method, url, error, status, redirect_location), 12:04:17 ) 12:04:17 12:04:17 new_retry = self.new( 12:04:17 total=total, 12:04:17 connect=connect, 12:04:17 read=read, 12:04:17 redirect=redirect, 12:04:17 status=status_count, 12:04:17 other=other, 12:04:17 history=history, 12:04:17 ) 12:04:17 12:04:17 if new_retry.is_exhausted(): 12:04:17 reason = error or ResponseError(cause) 12:04:17 > raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type] 12:04:17 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 12:04:17 E urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=8183): Max retries exceeded with url: /rests/data/network-topology:network-topology/topology=topology-netconf/node=SPDR-SA1 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 12:04:17 12:04:17 ../.tox/tests221/lib/python3.11/site-packages/urllib3/util/retry.py:519: MaxRetryError 12:04:17 12:04:17 During handling of the above exception, another exception occurred: 12:04:17 12:04:17 self = 12:04:17 12:04:17 def test_28_spdr_device_disconnection(self): 12:04:17 > response = test_utils.unmount_device("SPDR-SA1") 12:04:17 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 12:04:17 12:04:17 transportpce_tests/2.2.1/test01_portmapping.py:350: 12:04:17 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 12:04:17 transportpce_tests/common/test_utils.py:379: in unmount_device 12:04:17 response = 
delete_request(url[RESTCONF_VERSION].format('{}', node)) 12:04:17 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 12:04:17 transportpce_tests/common/test_utils.py:133: in delete_request 12:04:17 return requests.request( 12:04:17 ../.tox/tests221/lib/python3.11/site-packages/requests/api.py:59: in request 12:04:17 return session.request(method=method, url=url, **kwargs) 12:04:17 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 12:04:17 ../.tox/tests221/lib/python3.11/site-packages/requests/sessions.py:589: in request 12:04:17 resp = self.send(prep, **send_kwargs) 12:04:17 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 12:04:17 ../.tox/tests221/lib/python3.11/site-packages/requests/sessions.py:703: in send 12:04:17 r = adapter.send(request, **kwargs) 12:04:17 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 12:04:17 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 12:04:17 12:04:17 self = 12:04:17 request = , stream = False 12:04:17 timeout = Timeout(connect=30, read=30, total=None), verify = True, cert = None 12:04:17 proxies = OrderedDict() 12:04:17 12:04:17 def send( 12:04:17 self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 12:04:17 ): 12:04:17 """Sends PreparedRequest object. Returns Response object. 12:04:17 12:04:17 :param request: The :class:`PreparedRequest ` being sent. 12:04:17 :param stream: (optional) Whether to stream the request content. 12:04:17 :param timeout: (optional) How long to wait for the server to send 12:04:17 data before giving up, as a float, or a :ref:`(connect timeout, 12:04:17 read timeout) ` tuple. 12:04:17 :type timeout: float or tuple or urllib3 Timeout object 12:04:17 :param verify: (optional) Either a boolean, in which case it controls whether 12:04:17 we verify the server's TLS certificate, or a string, in which case it 12:04:17 must be a path to a CA bundle to use 12:04:17 :param cert: (optional) Any user-provided SSL certificate to be trusted. 12:04:17 :param proxies: (optional) The proxies dictionary to apply to the request. 12:04:17 :rtype: requests.Response 12:04:17 """ 12:04:17 12:04:17 try: 12:04:17 conn = self.get_connection_with_tls_context( 12:04:17 request, verify, proxies=proxies, cert=cert 12:04:17 ) 12:04:17 except LocationValueError as e: 12:04:17 raise InvalidURL(e, request=request) 12:04:17 12:04:17 self.cert_verify(conn, request.url, verify, cert) 12:04:17 url = self.request_url(request, proxies) 12:04:17 self.add_headers( 12:04:17 request, 12:04:17 stream=stream, 12:04:17 timeout=timeout, 12:04:17 verify=verify, 12:04:17 cert=cert, 12:04:17 proxies=proxies, 12:04:17 ) 12:04:17 12:04:17 chunked = not (request.body is None or "Content-Length" in request.headers) 12:04:17 12:04:17 if isinstance(timeout, tuple): 12:04:17 try: 12:04:17 connect, read = timeout 12:04:17 timeout = TimeoutSauce(connect=connect, read=read) 12:04:17 except ValueError: 12:04:17 raise ValueError( 12:04:17 f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 12:04:17 f"or a single float to set both timeouts to the same value." 
12:04:17 ) 12:04:17 elif isinstance(timeout, TimeoutSauce): 12:04:17 pass 12:04:17 else: 12:04:17 timeout = TimeoutSauce(connect=timeout, read=timeout) 12:04:17 12:04:17 try: 12:04:17 resp = conn.urlopen( 12:04:17 method=request.method, 12:04:17 url=url, 12:04:17 body=request.body, 12:04:17 headers=request.headers, 12:04:17 redirect=False, 12:04:17 assert_same_host=False, 12:04:17 preload_content=False, 12:04:17 decode_content=False, 12:04:17 retries=self.max_retries, 12:04:17 timeout=timeout, 12:04:17 chunked=chunked, 12:04:17 ) 12:04:17 12:04:17 except (ProtocolError, OSError) as err: 12:04:17 raise ConnectionError(err, request=request) 12:04:17 12:04:17 except MaxRetryError as e: 12:04:17 if isinstance(e.reason, ConnectTimeoutError): 12:04:17 # TODO: Remove this in 3.0.0: see #2811 12:04:17 if not isinstance(e.reason, NewConnectionError): 12:04:17 raise ConnectTimeout(e, request=request) 12:04:17 12:04:17 if isinstance(e.reason, ResponseError): 12:04:17 raise RetryError(e, request=request) 12:04:17 12:04:17 if isinstance(e.reason, _ProxyError): 12:04:17 raise ProxyError(e, request=request) 12:04:17 12:04:17 if isinstance(e.reason, _SSLError): 12:04:17 # This branch is for urllib3 v1.22 and later. 12:04:17 raise SSLError(e, request=request) 12:04:17 12:04:17 > raise ConnectionError(e, request=request) 12:04:17 E requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=8183): Max retries exceeded with url: /rests/data/network-topology:network-topology/topology=topology-netconf/node=SPDR-SA1 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 12:04:17 12:04:17 ../.tox/tests221/lib/python3.11/site-packages/requests/adapters.py:700: ConnectionError 12:04:17 ----------------------------- Captured stdout call ----------------------------- 12:04:17 execution of test_28_spdr_device_disconnection 12:04:17 _______ TransportPCEPortMappingTesting.test_29_xpdr_device_disconnected ________ 12:04:17 12:04:17 self = 12:04:17 12:04:17 def _new_conn(self) -> socket.socket: 12:04:17 """Establish a socket connection and set nodelay settings on it. 12:04:17 12:04:17 :return: New socket connection. 12:04:17 """ 12:04:17 try: 12:04:17 > sock = connection.create_connection( 12:04:17 (self._dns_host, self.port), 12:04:17 self.timeout, 12:04:17 source_address=self.source_address, 12:04:17 socket_options=self.socket_options, 12:04:17 ) 12:04:17 12:04:17 ../.tox/tests221/lib/python3.11/site-packages/urllib3/connection.py:198: 12:04:17 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 12:04:17 ../.tox/tests221/lib/python3.11/site-packages/urllib3/util/connection.py:85: in create_connection 12:04:17 raise err 12:04:17 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 12:04:17 12:04:17 address = ('localhost', 8183), timeout = 30, source_address = None 12:04:17 socket_options = [(6, 1, 1)] 12:04:17 12:04:17 def create_connection( 12:04:17 address: tuple[str, int], 12:04:17 timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 12:04:17 source_address: tuple[str, int] | None = None, 12:04:17 socket_options: _TYPE_SOCKET_OPTIONS | None = None, 12:04:17 ) -> socket.socket: 12:04:17 """Connect to *address* and return the socket object. 12:04:17 12:04:17 Convenience function. Connect to *address* (a 2-tuple ``(host, 12:04:17 port)``) and return the socket object. Passing the optional 12:04:17 *timeout* parameter will set the timeout on the socket instance 12:04:17 before attempting to connect. 
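The test_28 failure above originates in transportpce_tests/common/test_utils.py: unmount_device() builds the topology-netconf URL for the node and hands it to delete_request(), which per the traceback is a thin wrapper around requests.request(). Reduced to what is visible in the log (method, path, headers, basic auth, 30 s timeouts), the call amounts to the following sketch; the function name, base-URL constant and return handling are illustrative, not the suite's actual code:

import requests

RESTCONF_BASE = "http://localhost:8183"  # host/port of the refusing connection pool above

def unmount_device_sketch(node: str) -> requests.Response:
    # DELETE the mounted NETCONF node, as unmount_device()/delete_request() do per the traceback.
    url = (f"{RESTCONF_BASE}/rests/data/network-topology:network-topology"
           f"/topology=topology-netconf/node={node}")
    return requests.request("DELETE", url,
                            headers={"Content-Type": "application/json",
                                     "Accept": "application/json"},
                            auth=("admin", "admin"),   # the Basic YWRtaW46YWRtaW4= header above
                            timeout=(30, 30))          # matches Timeout(connect=30, read=30)

# unmount_device_sketch("SPDR-SA1") would raise requests.exceptions.ConnectionError
# here for the same reason as the test does: nothing is listening on port 8183.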
If no *timeout* is supplied, the 12:04:17 global default timeout setting returned by :func:`socket.getdefaulttimeout` 12:04:17 is used. If *source_address* is set it must be a tuple of (host, port) 12:04:17 for the socket to bind as a source address before making the connection. 12:04:17 An host of '' or port 0 tells the OS to use the default. 12:04:17 """ 12:04:17 12:04:17 host, port = address 12:04:17 if host.startswith("["): 12:04:17 host = host.strip("[]") 12:04:17 err = None 12:04:17 12:04:17 # Using the value from allowed_gai_family() in the context of getaddrinfo lets 12:04:17 # us select whether to work with IPv4 DNS records, IPv6 records, or both. 12:04:17 # The original create_connection function always returns all records. 12:04:17 family = allowed_gai_family() 12:04:17 12:04:17 try: 12:04:17 host.encode("idna") 12:04:17 except UnicodeError: 12:04:17 raise LocationParseError(f"'{host}', label empty or too long") from None 12:04:17 12:04:17 for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM): 12:04:17 af, socktype, proto, canonname, sa = res 12:04:17 sock = None 12:04:17 try: 12:04:17 sock = socket.socket(af, socktype, proto) 12:04:17 12:04:17 # If provided, set socket level options before connecting. 12:04:17 _set_socket_options(sock, socket_options) 12:04:17 12:04:17 if timeout is not _DEFAULT_TIMEOUT: 12:04:17 sock.settimeout(timeout) 12:04:17 if source_address: 12:04:17 sock.bind(source_address) 12:04:17 > sock.connect(sa) 12:04:17 E ConnectionRefusedError: [Errno 111] Connection refused 12:04:17 12:04:17 ../.tox/tests221/lib/python3.11/site-packages/urllib3/util/connection.py:73: ConnectionRefusedError 12:04:17 12:04:17 The above exception was the direct cause of the following exception: 12:04:17 12:04:17 self = 12:04:17 method = 'GET' 12:04:17 url = '/rests/data/network-topology:network-topology/topology=topology-netconf/node=SPDR-SA1?content=nonconfig' 12:04:17 body = None 12:04:17 headers = {'User-Agent': 'python-requests/2.32.4', 'Accept-Encoding': 'gzip, deflate', 'Accept': 'application/json', 'Connection': 'keep-alive', 'Content-Type': 'application/json', 'Authorization': 'Basic YWRtaW46YWRtaW4='} 12:04:17 retries = Retry(total=0, connect=None, read=False, redirect=None, status=None) 12:04:17 redirect = False, assert_same_host = False 12:04:17 timeout = Timeout(connect=30, read=30, total=None), pool_timeout = None 12:04:17 release_conn = False, chunked = False, body_pos = None, preload_content = False 12:04:17 decode_content = False, response_kw = {} 12:04:17 parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/rests/data/network-topology:network-topology/topology=topology-netconf/node=SPDR-SA1', query='content=nonconfig', fragment=None) 12:04:17 destination_scheme = None, conn = None, release_this_conn = True 12:04:17 http_tunnel_required = False, err = None, clean_exit = False 12:04:17 12:04:17 def urlopen( # type: ignore[override] 12:04:17 self, 12:04:17 method: str, 12:04:17 url: str, 12:04:17 body: _TYPE_BODY | None = None, 12:04:17 headers: typing.Mapping[str, str] | None = None, 12:04:17 retries: Retry | bool | int | None = None, 12:04:17 redirect: bool = True, 12:04:17 assert_same_host: bool = True, 12:04:17 timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 12:04:17 pool_timeout: int | None = None, 12:04:17 release_conn: bool | None = None, 12:04:17 chunked: bool = False, 12:04:17 body_pos: _TYPE_BODY_POSITION | None = None, 12:04:17 preload_content: bool = True, 12:04:17 decode_content: bool = True, 12:04:17 **response_kw: 
typing.Any, 12:04:17 ) -> BaseHTTPResponse: 12:04:17 """ 12:04:17 Get a connection from the pool and perform an HTTP request. This is the 12:04:17 lowest level call for making a request, so you'll need to specify all 12:04:17 the raw details. 12:04:17 12:04:17 .. note:: 12:04:17 12:04:17 More commonly, it's appropriate to use a convenience method 12:04:17 such as :meth:`request`. 12:04:17 12:04:17 .. note:: 12:04:17 12:04:17 `release_conn` will only behave as expected if 12:04:17 `preload_content=False` because we want to make 12:04:17 `preload_content=False` the default behaviour someday soon without 12:04:17 breaking backwards compatibility. 12:04:17 12:04:17 :param method: 12:04:17 HTTP request method (such as GET, POST, PUT, etc.) 12:04:17 12:04:17 :param url: 12:04:17 The URL to perform the request on. 12:04:17 12:04:17 :param body: 12:04:17 Data to send in the request body, either :class:`str`, :class:`bytes`, 12:04:17 an iterable of :class:`str`/:class:`bytes`, or a file-like object. 12:04:17 12:04:17 :param headers: 12:04:17 Dictionary of custom headers to send, such as User-Agent, 12:04:17 If-None-Match, etc. If None, pool headers are used. If provided, 12:04:17 these headers completely replace any pool-specific headers. 12:04:17 12:04:17 :param retries: 12:04:17 Configure the number of retries to allow before raising a 12:04:17 :class:`~urllib3.exceptions.MaxRetryError` exception. 12:04:17 12:04:17 If ``None`` (default) will retry 3 times, see ``Retry.DEFAULT``. Pass a 12:04:17 :class:`~urllib3.util.retry.Retry` object for fine-grained control 12:04:17 over different types of retries. 12:04:17 Pass an integer number to retry connection errors that many times, 12:04:17 but no other types of errors. Pass zero to never retry. 12:04:17 12:04:17 If ``False``, then retries are disabled and any exception is raised 12:04:17 immediately. Also, instead of raising a MaxRetryError on redirects, 12:04:17 the redirect response will be returned. 12:04:17 12:04:17 :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. 12:04:17 12:04:17 :param redirect: 12:04:17 If True, automatically handle redirects (status codes 301, 302, 12:04:17 303, 307, 308). Each redirect counts as a retry. Disabling retries 12:04:17 will disable redirect, too. 12:04:17 12:04:17 :param assert_same_host: 12:04:17 If ``True``, will make sure that the host of the pool requests is 12:04:17 consistent else will raise HostChangedError. When ``False``, you can 12:04:17 use the pool on an HTTP proxy and request foreign hosts. 12:04:17 12:04:17 :param timeout: 12:04:17 If specified, overrides the default timeout for this one 12:04:17 request. It may be a float (in seconds) or an instance of 12:04:17 :class:`urllib3.util.Timeout`. 12:04:17 12:04:17 :param pool_timeout: 12:04:17 If set and the pool is set to block=True, then this method will 12:04:17 block for ``pool_timeout`` seconds and raise EmptyPoolError if no 12:04:17 connection is available within the time period. 12:04:17 12:04:17 :param bool preload_content: 12:04:17 If True, the response's body will be preloaded into memory. 12:04:17 12:04:17 :param bool decode_content: 12:04:17 If True, will attempt to decode the body based on the 12:04:17 'content-encoding' header. 12:04:17 12:04:17 :param release_conn: 12:04:17 If False, then the urlopen call will not release the connection 12:04:17 back into the pool once a response is received (but will release if 12:04:17 you read the entire contents of the response such as when 12:04:17 `preload_content=True`). 
This is useful if you're not preloading 12:04:17 the response's content immediately. You will need to call 12:04:17 ``r.release_conn()`` on the response ``r`` to return the connection 12:04:17 back into the pool. If None, it takes the value of ``preload_content`` 12:04:17 which defaults to ``True``. 12:04:17 12:04:17 :param bool chunked: 12:04:17 If True, urllib3 will send the body using chunked transfer 12:04:17 encoding. Otherwise, urllib3 will send the body using the standard 12:04:17 content-length form. Defaults to False. 12:04:17 12:04:17 :param int body_pos: 12:04:17 Position to seek to in file-like body in the event of a retry or 12:04:17 redirect. Typically this won't need to be set because urllib3 will 12:04:17 auto-populate the value when needed. 12:04:17 """ 12:04:17 parsed_url = parse_url(url) 12:04:17 destination_scheme = parsed_url.scheme 12:04:17 12:04:17 if headers is None: 12:04:17 headers = self.headers 12:04:17 12:04:17 if not isinstance(retries, Retry): 12:04:17 retries = Retry.from_int(retries, redirect=redirect, default=self.retries) 12:04:17 12:04:17 if release_conn is None: 12:04:17 release_conn = preload_content 12:04:17 12:04:17 # Check host 12:04:17 if assert_same_host and not self.is_same_host(url): 12:04:17 raise HostChangedError(self, url, retries) 12:04:17 12:04:17 # Ensure that the URL we're connecting to is properly encoded 12:04:17 if url.startswith("/"): 12:04:17 url = to_str(_encode_target(url)) 12:04:17 else: 12:04:17 url = to_str(parsed_url.url) 12:04:17 12:04:17 conn = None 12:04:17 12:04:17 # Track whether `conn` needs to be released before 12:04:17 # returning/raising/recursing. Update this variable if necessary, and 12:04:17 # leave `release_conn` constant throughout the function. That way, if 12:04:17 # the function recurses, the original value of `release_conn` will be 12:04:17 # passed down into the recursive call, and its value will be respected. 12:04:17 # 12:04:17 # See issue #651 [1] for details. 12:04:17 # 12:04:17 # [1] 12:04:17 release_this_conn = release_conn 12:04:17 12:04:17 http_tunnel_required = connection_requires_http_tunnel( 12:04:17 self.proxy, self.proxy_config, destination_scheme 12:04:17 ) 12:04:17 12:04:17 # Merge the proxy headers. Only done when not using HTTP CONNECT. We 12:04:17 # have to copy the headers dict so we can safely change it without those 12:04:17 # changes being reflected in anyone else's copy. 12:04:17 if not http_tunnel_required: 12:04:17 headers = headers.copy() # type: ignore[attr-defined] 12:04:17 headers.update(self.proxy_headers) # type: ignore[union-attr] 12:04:17 12:04:17 # Must keep the exception bound to a separate variable or else Python 3 12:04:17 # complains about UnboundLocalError. 12:04:17 err = None 12:04:17 12:04:17 # Keep track of whether we cleanly exited the except block. This 12:04:17 # ensures we do proper cleanup in finally. 12:04:17 clean_exit = False 12:04:17 12:04:17 # Rewind body position, if needed. Record current position 12:04:17 # for future rewinds in the event of a redirect/retry. 12:04:17 body_pos = set_file_position(body, body_pos) 12:04:17 12:04:17 try: 12:04:17 # Request a connection from the queue. 12:04:17 timeout_obj = self._get_timeout(timeout) 12:04:17 conn = self._get_conn(timeout=pool_timeout) 12:04:17 12:04:17 conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] 12:04:17 12:04:17 # Is this a closed/new connection that requires CONNECT tunnelling? 
12:04:17 if self.proxy is not None and http_tunnel_required and conn.is_closed: 12:04:17 try: 12:04:17 self._prepare_proxy(conn) 12:04:17 except (BaseSSLError, OSError, SocketTimeout) as e: 12:04:17 self._raise_timeout( 12:04:17 err=e, url=self.proxy.url, timeout_value=conn.timeout 12:04:17 ) 12:04:17 raise 12:04:17 12:04:17 # If we're going to release the connection in ``finally:``, then 12:04:17 # the response doesn't need to know about the connection. Otherwise 12:04:17 # it will also try to release it and we'll have a double-release 12:04:17 # mess. 12:04:17 response_conn = conn if not release_conn else None 12:04:17 12:04:17 # Make the request on the HTTPConnection object 12:04:17 > response = self._make_request( 12:04:17 conn, 12:04:17 method, 12:04:17 url, 12:04:17 timeout=timeout_obj, 12:04:17 body=body, 12:04:17 headers=headers, 12:04:17 chunked=chunked, 12:04:17 retries=retries, 12:04:17 response_conn=response_conn, 12:04:17 preload_content=preload_content, 12:04:17 decode_content=decode_content, 12:04:17 **response_kw, 12:04:17 ) 12:04:17 12:04:17 ../.tox/tests221/lib/python3.11/site-packages/urllib3/connectionpool.py:787: 12:04:17 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 12:04:17 ../.tox/tests221/lib/python3.11/site-packages/urllib3/connectionpool.py:493: in _make_request 12:04:17 conn.request( 12:04:17 ../.tox/tests221/lib/python3.11/site-packages/urllib3/connection.py:494: in request 12:04:17 self.endheaders() 12:04:17 /opt/pyenv/versions/3.11.10/lib/python3.11/http/client.py:1298: in endheaders 12:04:17 self._send_output(message_body, encode_chunked=encode_chunked) 12:04:17 /opt/pyenv/versions/3.11.10/lib/python3.11/http/client.py:1058: in _send_output 12:04:17 self.send(msg) 12:04:17 /opt/pyenv/versions/3.11.10/lib/python3.11/http/client.py:996: in send 12:04:17 self.connect() 12:04:17 ../.tox/tests221/lib/python3.11/site-packages/urllib3/connection.py:325: in connect 12:04:17 self.sock = self._new_conn() 12:04:17 ^^^^^^^^^^^^^^^^ 12:04:17 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 12:04:17 12:04:17 self = 12:04:17 12:04:17 def _new_conn(self) -> socket.socket: 12:04:17 """Establish a socket connection and set nodelay settings on it. 12:04:17 12:04:17 :return: New socket connection. 12:04:17 """ 12:04:17 try: 12:04:17 sock = connection.create_connection( 12:04:17 (self._dns_host, self.port), 12:04:17 self.timeout, 12:04:17 source_address=self.source_address, 12:04:17 socket_options=self.socket_options, 12:04:17 ) 12:04:17 except socket.gaierror as e: 12:04:17 raise NameResolutionError(self.host, self, e) from e 12:04:17 except SocketTimeout as e: 12:04:17 raise ConnectTimeoutError( 12:04:17 self, 12:04:17 f"Connection to {self.host} timed out. 
(connect timeout={self.timeout})", 12:04:17 ) from e 12:04:17 12:04:17 except OSError as e: 12:04:17 > raise NewConnectionError( 12:04:17 self, f"Failed to establish a new connection: {e}" 12:04:17 ) from e 12:04:17 E urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused 12:04:17 12:04:17 ../.tox/tests221/lib/python3.11/site-packages/urllib3/connection.py:213: NewConnectionError 12:04:17 12:04:17 The above exception was the direct cause of the following exception: 12:04:17 12:04:17 self = 12:04:17 request = , stream = False 12:04:17 timeout = Timeout(connect=30, read=30, total=None), verify = True, cert = None 12:04:17 proxies = OrderedDict() 12:04:17 12:04:17 def send( 12:04:17 self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 12:04:17 ): 12:04:17 """Sends PreparedRequest object. Returns Response object. 12:04:17 12:04:17 :param request: The :class:`PreparedRequest ` being sent. 12:04:17 :param stream: (optional) Whether to stream the request content. 12:04:17 :param timeout: (optional) How long to wait for the server to send 12:04:17 data before giving up, as a float, or a :ref:`(connect timeout, 12:04:17 read timeout) ` tuple. 12:04:17 :type timeout: float or tuple or urllib3 Timeout object 12:04:17 :param verify: (optional) Either a boolean, in which case it controls whether 12:04:17 we verify the server's TLS certificate, or a string, in which case it 12:04:17 must be a path to a CA bundle to use 12:04:17 :param cert: (optional) Any user-provided SSL certificate to be trusted. 12:04:17 :param proxies: (optional) The proxies dictionary to apply to the request. 12:04:17 :rtype: requests.Response 12:04:17 """ 12:04:17 12:04:17 try: 12:04:17 conn = self.get_connection_with_tls_context( 12:04:17 request, verify, proxies=proxies, cert=cert 12:04:17 ) 12:04:17 except LocationValueError as e: 12:04:17 raise InvalidURL(e, request=request) 12:04:17 12:04:17 self.cert_verify(conn, request.url, verify, cert) 12:04:17 url = self.request_url(request, proxies) 12:04:17 self.add_headers( 12:04:17 request, 12:04:17 stream=stream, 12:04:17 timeout=timeout, 12:04:17 verify=verify, 12:04:17 cert=cert, 12:04:17 proxies=proxies, 12:04:17 ) 12:04:17 12:04:17 chunked = not (request.body is None or "Content-Length" in request.headers) 12:04:17 12:04:17 if isinstance(timeout, tuple): 12:04:17 try: 12:04:17 connect, read = timeout 12:04:17 timeout = TimeoutSauce(connect=connect, read=read) 12:04:17 except ValueError: 12:04:17 raise ValueError( 12:04:17 f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 12:04:17 f"or a single float to set both timeouts to the same value." 
12:04:17 ) 12:04:17 elif isinstance(timeout, TimeoutSauce): 12:04:17 pass 12:04:17 else: 12:04:17 timeout = TimeoutSauce(connect=timeout, read=timeout) 12:04:17 12:04:17 try: 12:04:17 > resp = conn.urlopen( 12:04:17 method=request.method, 12:04:17 url=url, 12:04:17 body=request.body, 12:04:17 headers=request.headers, 12:04:17 redirect=False, 12:04:17 assert_same_host=False, 12:04:17 preload_content=False, 12:04:17 decode_content=False, 12:04:17 retries=self.max_retries, 12:04:17 timeout=timeout, 12:04:17 chunked=chunked, 12:04:17 ) 12:04:17 12:04:17 ../.tox/tests221/lib/python3.11/site-packages/requests/adapters.py:667: 12:04:17 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 12:04:17 ../.tox/tests221/lib/python3.11/site-packages/urllib3/connectionpool.py:841: in urlopen 12:04:17 retries = retries.increment( 12:04:17 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 12:04:17 12:04:17 self = Retry(total=0, connect=None, read=False, redirect=None, status=None) 12:04:17 method = 'GET' 12:04:17 url = '/rests/data/network-topology:network-topology/topology=topology-netconf/node=SPDR-SA1?content=nonconfig' 12:04:17 response = None 12:04:17 error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused') 12:04:17 _pool = 12:04:17 _stacktrace = 12:04:17 12:04:17 def increment( 12:04:17 self, 12:04:17 method: str | None = None, 12:04:17 url: str | None = None, 12:04:17 response: BaseHTTPResponse | None = None, 12:04:17 error: Exception | None = None, 12:04:17 _pool: ConnectionPool | None = None, 12:04:17 _stacktrace: TracebackType | None = None, 12:04:17 ) -> Self: 12:04:17 """Return a new Retry object with incremented retry counters. 12:04:17 12:04:17 :param response: A response object, or None, if the server did not 12:04:17 return a response. 12:04:17 :type response: :class:`~urllib3.response.BaseHTTPResponse` 12:04:17 :param Exception error: An error encountered during the request, or 12:04:17 None if the response was received successfully. 12:04:17 12:04:17 :return: A new ``Retry`` object. 12:04:17 """ 12:04:17 if self.total is False and error: 12:04:17 # Disabled, indicate to re-raise the error. 12:04:17 raise reraise(type(error), error, _stacktrace) 12:04:17 12:04:17 total = self.total 12:04:17 if total is not None: 12:04:17 total -= 1 12:04:17 12:04:17 connect = self.connect 12:04:17 read = self.read 12:04:17 redirect = self.redirect 12:04:17 status_count = self.status 12:04:17 other = self.other 12:04:17 cause = "unknown" 12:04:17 status = None 12:04:17 redirect_location = None 12:04:17 12:04:17 if error and self._is_connection_error(error): 12:04:17 # Connect retry? 12:04:17 if connect is False: 12:04:17 raise reraise(type(error), error, _stacktrace) 12:04:17 elif connect is not None: 12:04:17 connect -= 1 12:04:17 12:04:17 elif error and self._is_read_error(error): 12:04:17 # Read retry? 12:04:17 if read is False or method is None or not self._is_method_retryable(method): 12:04:17 raise reraise(type(error), error, _stacktrace) 12:04:17 elif read is not None: 12:04:17 read -= 1 12:04:17 12:04:17 elif error: 12:04:17 # Other retry? 12:04:17 if other is not None: 12:04:17 other -= 1 12:04:17 12:04:17 elif response and response.get_redirect_location(): 12:04:17 # Redirect retry? 
12:04:17 if redirect is not None: 12:04:17 redirect -= 1 12:04:17 cause = "too many redirects" 12:04:17 response_redirect_location = response.get_redirect_location() 12:04:17 if response_redirect_location: 12:04:17 redirect_location = response_redirect_location 12:04:17 status = response.status 12:04:17 12:04:17 else: 12:04:17 # Incrementing because of a server error like a 500 in 12:04:17 # status_forcelist and the given method is in the allowed_methods 12:04:17 cause = ResponseError.GENERIC_ERROR 12:04:17 if response and response.status: 12:04:17 if status_count is not None: 12:04:17 status_count -= 1 12:04:17 cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status) 12:04:17 status = response.status 12:04:17 12:04:17 history = self.history + ( 12:04:17 RequestHistory(method, url, error, status, redirect_location), 12:04:17 ) 12:04:17 12:04:17 new_retry = self.new( 12:04:17 total=total, 12:04:17 connect=connect, 12:04:17 read=read, 12:04:17 redirect=redirect, 12:04:17 status=status_count, 12:04:17 other=other, 12:04:17 history=history, 12:04:17 ) 12:04:17 12:04:17 if new_retry.is_exhausted(): 12:04:17 reason = error or ResponseError(cause) 12:04:17 > raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type] 12:04:17 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 12:04:17 E urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=8183): Max retries exceeded with url: /rests/data/network-topology:network-topology/topology=topology-netconf/node=SPDR-SA1?content=nonconfig (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 12:04:17 12:04:17 ../.tox/tests221/lib/python3.11/site-packages/urllib3/util/retry.py:519: MaxRetryError 12:04:17 12:04:17 During handling of the above exception, another exception occurred: 12:04:17 12:04:17 self = 12:04:17 12:04:17 def test_29_xpdr_device_disconnected(self): 12:04:17 > response = test_utils.check_device_connection("SPDR-SA1") 12:04:17 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 12:04:17 12:04:17 transportpce_tests/2.2.1/test01_portmapping.py:354: 12:04:17 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 12:04:17 transportpce_tests/common/test_utils.py:390: in check_device_connection 12:04:17 response = get_request(url[RESTCONF_VERSION].format('{}', node)) 12:04:17 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 12:04:17 transportpce_tests/common/test_utils.py:116: in get_request 12:04:17 return requests.request( 12:04:17 ../.tox/tests221/lib/python3.11/site-packages/requests/api.py:59: in request 12:04:17 return session.request(method=method, url=url, **kwargs) 12:04:17 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 12:04:17 ../.tox/tests221/lib/python3.11/site-packages/requests/sessions.py:589: in request 12:04:17 resp = self.send(prep, **send_kwargs) 12:04:17 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 12:04:17 ../.tox/tests221/lib/python3.11/site-packages/requests/sessions.py:703: in send 12:04:17 r = adapter.send(request, **kwargs) 12:04:17 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 12:04:17 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 12:04:17 12:04:17 self = 12:04:17 request = , stream = False 12:04:17 timeout = Timeout(connect=30, read=30, total=None), verify = True, cert = None 12:04:17 proxies = OrderedDict() 12:04:17 12:04:17 def send( 12:04:17 self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 12:04:17 ): 12:04:17 """Sends PreparedRequest object. 
Returns Response object. 12:04:17 12:04:17 :param request: The :class:`PreparedRequest ` being sent. 12:04:17 :param stream: (optional) Whether to stream the request content. 12:04:17 :param timeout: (optional) How long to wait for the server to send 12:04:17 data before giving up, as a float, or a :ref:`(connect timeout, 12:04:17 read timeout) ` tuple. 12:04:17 :type timeout: float or tuple or urllib3 Timeout object 12:04:17 :param verify: (optional) Either a boolean, in which case it controls whether 12:04:17 we verify the server's TLS certificate, or a string, in which case it 12:04:17 must be a path to a CA bundle to use 12:04:17 :param cert: (optional) Any user-provided SSL certificate to be trusted. 12:04:17 :param proxies: (optional) The proxies dictionary to apply to the request. 12:04:17 :rtype: requests.Response 12:04:17 """ 12:04:17 12:04:17 try: 12:04:17 conn = self.get_connection_with_tls_context( 12:04:17 request, verify, proxies=proxies, cert=cert 12:04:17 ) 12:04:17 except LocationValueError as e: 12:04:17 raise InvalidURL(e, request=request) 12:04:17 12:04:17 self.cert_verify(conn, request.url, verify, cert) 12:04:17 url = self.request_url(request, proxies) 12:04:17 self.add_headers( 12:04:17 request, 12:04:17 stream=stream, 12:04:17 timeout=timeout, 12:04:17 verify=verify, 12:04:17 cert=cert, 12:04:17 proxies=proxies, 12:04:17 ) 12:04:17 12:04:17 chunked = not (request.body is None or "Content-Length" in request.headers) 12:04:17 12:04:17 if isinstance(timeout, tuple): 12:04:17 try: 12:04:17 connect, read = timeout 12:04:17 timeout = TimeoutSauce(connect=connect, read=read) 12:04:17 except ValueError: 12:04:17 raise ValueError( 12:04:17 f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 12:04:17 f"or a single float to set both timeouts to the same value." 12:04:17 ) 12:04:17 elif isinstance(timeout, TimeoutSauce): 12:04:17 pass 12:04:17 else: 12:04:17 timeout = TimeoutSauce(connect=timeout, read=timeout) 12:04:17 12:04:17 try: 12:04:17 resp = conn.urlopen( 12:04:17 method=request.method, 12:04:17 url=url, 12:04:17 body=request.body, 12:04:17 headers=request.headers, 12:04:17 redirect=False, 12:04:17 assert_same_host=False, 12:04:17 preload_content=False, 12:04:17 decode_content=False, 12:04:17 retries=self.max_retries, 12:04:17 timeout=timeout, 12:04:17 chunked=chunked, 12:04:17 ) 12:04:17 12:04:17 except (ProtocolError, OSError) as err: 12:04:17 raise ConnectionError(err, request=request) 12:04:17 12:04:17 except MaxRetryError as e: 12:04:17 if isinstance(e.reason, ConnectTimeoutError): 12:04:17 # TODO: Remove this in 3.0.0: see #2811 12:04:17 if not isinstance(e.reason, NewConnectionError): 12:04:17 raise ConnectTimeout(e, request=request) 12:04:17 12:04:17 if isinstance(e.reason, ResponseError): 12:04:17 raise RetryError(e, request=request) 12:04:17 12:04:17 if isinstance(e.reason, _ProxyError): 12:04:17 raise ProxyError(e, request=request) 12:04:17 12:04:17 if isinstance(e.reason, _SSLError): 12:04:17 # This branch is for urllib3 v1.22 and later. 
12:04:17 raise SSLError(e, request=request) 12:04:17 12:04:17 > raise ConnectionError(e, request=request) 12:04:17 E requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=8183): Max retries exceeded with url: /rests/data/network-topology:network-topology/topology=topology-netconf/node=SPDR-SA1?content=nonconfig (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 12:04:17 12:04:17 ../.tox/tests221/lib/python3.11/site-packages/requests/adapters.py:700: ConnectionError 12:04:17 ----------------------------- Captured stdout call ----------------------------- 12:04:17 execution of test_29_xpdr_device_disconnected 12:04:17 _______ TransportPCEPortMappingTesting.test_30_xpdr_device_disconnection _______ 12:04:17 12:04:17 self = 12:04:17 12:04:17 def _new_conn(self) -> socket.socket: 12:04:17 """Establish a socket connection and set nodelay settings on it. 12:04:17 12:04:17 :return: New socket connection. 12:04:17 """ 12:04:17 try: 12:04:17 > sock = connection.create_connection( 12:04:17 (self._dns_host, self.port), 12:04:17 self.timeout, 12:04:17 source_address=self.source_address, 12:04:17 socket_options=self.socket_options, 12:04:17 ) 12:04:17 12:04:17 ../.tox/tests221/lib/python3.11/site-packages/urllib3/connection.py:198: 12:04:17 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 12:04:17 ../.tox/tests221/lib/python3.11/site-packages/urllib3/util/connection.py:85: in create_connection 12:04:17 raise err 12:04:17 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 12:04:17 12:04:17 address = ('localhost', 8183), timeout = 30, source_address = None 12:04:17 socket_options = [(6, 1, 1)] 12:04:17 12:04:17 def create_connection( 12:04:17 address: tuple[str, int], 12:04:17 timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 12:04:17 source_address: tuple[str, int] | None = None, 12:04:17 socket_options: _TYPE_SOCKET_OPTIONS | None = None, 12:04:17 ) -> socket.socket: 12:04:17 """Connect to *address* and return the socket object. 12:04:17 12:04:17 Convenience function. Connect to *address* (a 2-tuple ``(host, 12:04:17 port)``) and return the socket object. Passing the optional 12:04:17 *timeout* parameter will set the timeout on the socket instance 12:04:17 before attempting to connect. If no *timeout* is supplied, the 12:04:17 global default timeout setting returned by :func:`socket.getdefaulttimeout` 12:04:17 is used. If *source_address* is set it must be a tuple of (host, port) 12:04:17 for the socket to bind as a source address before making the connection. 12:04:17 An host of '' or port 0 tells the OS to use the default. 12:04:17 """ 12:04:17 12:04:17 host, port = address 12:04:17 if host.startswith("["): 12:04:17 host = host.strip("[]") 12:04:17 err = None 12:04:17 12:04:17 # Using the value from allowed_gai_family() in the context of getaddrinfo lets 12:04:17 # us select whether to work with IPv4 DNS records, IPv6 records, or both. 12:04:17 # The original create_connection function always returns all records. 
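test_29 fails the same way while reading device state rather than changing it: check_device_connection() issues a GET on the same topology-netconf node with ?content=nonconfig, i.e. it asks for operational data (presumably to read the netconf-node connection status once the controller answers). A hedged sketch of that read path, with the same caveats as above (names and JSON handling are illustrative):

import requests

RESTCONF_BASE = "http://localhost:8183"  # as in the failing requests above

def check_device_connection_sketch(node: str) -> dict:
    # GET the operational (nonconfig) view of the mounted node, as the traceback shows
    # check_device_connection()/get_request() doing via requests.request().
    url = (f"{RESTCONF_BASE}/rests/data/network-topology:network-topology"
           f"/topology=topology-netconf/node={node}?content=nonconfig")
    response = requests.get(url, auth=("admin", "admin"),
                            headers={"Accept": "application/json"},
                            timeout=(30, 30))
    response.raise_for_status()
    return response.json()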
12:04:17 family = allowed_gai_family() 12:04:17 12:04:17 try: 12:04:17 host.encode("idna") 12:04:17 except UnicodeError: 12:04:17 raise LocationParseError(f"'{host}', label empty or too long") from None 12:04:17 12:04:17 for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM): 12:04:17 af, socktype, proto, canonname, sa = res 12:04:17 sock = None 12:04:17 try: 12:04:17 sock = socket.socket(af, socktype, proto) 12:04:17 12:04:17 # If provided, set socket level options before connecting. 12:04:17 _set_socket_options(sock, socket_options) 12:04:17 12:04:17 if timeout is not _DEFAULT_TIMEOUT: 12:04:17 sock.settimeout(timeout) 12:04:17 if source_address: 12:04:17 sock.bind(source_address) 12:04:17 > sock.connect(sa) 12:04:17 E ConnectionRefusedError: [Errno 111] Connection refused 12:04:17 12:04:17 ../.tox/tests221/lib/python3.11/site-packages/urllib3/util/connection.py:73: ConnectionRefusedError 12:04:17 12:04:17 The above exception was the direct cause of the following exception: 12:04:17 12:04:17 self = 12:04:17 method = 'DELETE' 12:04:17 url = '/rests/data/network-topology:network-topology/topology=topology-netconf/node=XPDR-A1' 12:04:17 body = None 12:04:17 headers = {'User-Agent': 'python-requests/2.32.4', 'Accept-Encoding': 'gzip, deflate', 'Accept': 'application/json', 'Connection': 'keep-alive', 'Content-Type': 'application/json', 'Content-Length': '0', 'Authorization': 'Basic YWRtaW46YWRtaW4='} 12:04:17 retries = Retry(total=0, connect=None, read=False, redirect=None, status=None) 12:04:17 redirect = False, assert_same_host = False 12:04:17 timeout = Timeout(connect=30, read=30, total=None), pool_timeout = None 12:04:17 release_conn = False, chunked = False, body_pos = None, preload_content = False 12:04:17 decode_content = False, response_kw = {} 12:04:17 parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/rests/data/network-topology:network-topology/topology=topology-netconf/node=XPDR-A1', query=None, fragment=None) 12:04:17 destination_scheme = None, conn = None, release_this_conn = True 12:04:17 http_tunnel_required = False, err = None, clean_exit = False 12:04:17 12:04:17 def urlopen( # type: ignore[override] 12:04:17 self, 12:04:17 method: str, 12:04:17 url: str, 12:04:17 body: _TYPE_BODY | None = None, 12:04:17 headers: typing.Mapping[str, str] | None = None, 12:04:17 retries: Retry | bool | int | None = None, 12:04:17 redirect: bool = True, 12:04:17 assert_same_host: bool = True, 12:04:17 timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 12:04:17 pool_timeout: int | None = None, 12:04:17 release_conn: bool | None = None, 12:04:17 chunked: bool = False, 12:04:17 body_pos: _TYPE_BODY_POSITION | None = None, 12:04:17 preload_content: bool = True, 12:04:17 decode_content: bool = True, 12:04:17 **response_kw: typing.Any, 12:04:17 ) -> BaseHTTPResponse: 12:04:17 """ 12:04:17 Get a connection from the pool and perform an HTTP request. This is the 12:04:17 lowest level call for making a request, so you'll need to specify all 12:04:17 the raw details. 12:04:17 12:04:17 .. note:: 12:04:17 12:04:17 More commonly, it's appropriate to use a convenience method 12:04:17 such as :meth:`request`. 12:04:17 12:04:17 .. note:: 12:04:17 12:04:17 `release_conn` will only behave as expected if 12:04:17 `preload_content=False` because we want to make 12:04:17 `preload_content=False` the default behaviour someday soon without 12:04:17 breaking backwards compatibility. 12:04:17 12:04:17 :param method: 12:04:17 HTTP request method (such as GET, POST, PUT, etc.) 
12:04:17 12:04:17 :param url: 12:04:17 The URL to perform the request on. 12:04:17 12:04:17 :param body: 12:04:17 Data to send in the request body, either :class:`str`, :class:`bytes`, 12:04:17 an iterable of :class:`str`/:class:`bytes`, or a file-like object. 12:04:17 12:04:17 :param headers: 12:04:17 Dictionary of custom headers to send, such as User-Agent, 12:04:17 If-None-Match, etc. If None, pool headers are used. If provided, 12:04:17 these headers completely replace any pool-specific headers. 12:04:17 12:04:17 :param retries: 12:04:17 Configure the number of retries to allow before raising a 12:04:17 :class:`~urllib3.exceptions.MaxRetryError` exception. 12:04:17 12:04:17 If ``None`` (default) will retry 3 times, see ``Retry.DEFAULT``. Pass a 12:04:17 :class:`~urllib3.util.retry.Retry` object for fine-grained control 12:04:17 over different types of retries. 12:04:17 Pass an integer number to retry connection errors that many times, 12:04:17 but no other types of errors. Pass zero to never retry. 12:04:17 12:04:17 If ``False``, then retries are disabled and any exception is raised 12:04:17 immediately. Also, instead of raising a MaxRetryError on redirects, 12:04:17 the redirect response will be returned. 12:04:17 12:04:17 :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. 12:04:17 12:04:17 :param redirect: 12:04:17 If True, automatically handle redirects (status codes 301, 302, 12:04:17 303, 307, 308). Each redirect counts as a retry. Disabling retries 12:04:17 will disable redirect, too. 12:04:17 12:04:17 :param assert_same_host: 12:04:17 If ``True``, will make sure that the host of the pool requests is 12:04:17 consistent else will raise HostChangedError. When ``False``, you can 12:04:17 use the pool on an HTTP proxy and request foreign hosts. 12:04:17 12:04:17 :param timeout: 12:04:17 If specified, overrides the default timeout for this one 12:04:17 request. It may be a float (in seconds) or an instance of 12:04:17 :class:`urllib3.util.Timeout`. 12:04:17 12:04:17 :param pool_timeout: 12:04:17 If set and the pool is set to block=True, then this method will 12:04:17 block for ``pool_timeout`` seconds and raise EmptyPoolError if no 12:04:17 connection is available within the time period. 12:04:17 12:04:17 :param bool preload_content: 12:04:17 If True, the response's body will be preloaded into memory. 12:04:17 12:04:17 :param bool decode_content: 12:04:17 If True, will attempt to decode the body based on the 12:04:17 'content-encoding' header. 12:04:17 12:04:17 :param release_conn: 12:04:17 If False, then the urlopen call will not release the connection 12:04:17 back into the pool once a response is received (but will release if 12:04:17 you read the entire contents of the response such as when 12:04:17 `preload_content=True`). This is useful if you're not preloading 12:04:17 the response's content immediately. You will need to call 12:04:17 ``r.release_conn()`` on the response ``r`` to return the connection 12:04:17 back into the pool. If None, it takes the value of ``preload_content`` 12:04:17 which defaults to ``True``. 12:04:17 12:04:17 :param bool chunked: 12:04:17 If True, urllib3 will send the body using chunked transfer 12:04:17 encoding. Otherwise, urllib3 will send the body using the standard 12:04:17 content-length form. Defaults to False. 12:04:17 12:04:17 :param int body_pos: 12:04:17 Position to seek to in file-like body in the event of a retry or 12:04:17 redirect. 
Typically this won't need to be set because urllib3 will 12:04:17 auto-populate the value when needed. 12:04:17 """ 12:04:17 parsed_url = parse_url(url) 12:04:17 destination_scheme = parsed_url.scheme 12:04:17 12:04:17 if headers is None: 12:04:17 headers = self.headers 12:04:17 12:04:17 if not isinstance(retries, Retry): 12:04:17 retries = Retry.from_int(retries, redirect=redirect, default=self.retries) 12:04:17 12:04:17 if release_conn is None: 12:04:17 release_conn = preload_content 12:04:17 12:04:17 # Check host 12:04:17 if assert_same_host and not self.is_same_host(url): 12:04:17 raise HostChangedError(self, url, retries) 12:04:17 12:04:17 # Ensure that the URL we're connecting to is properly encoded 12:04:17 if url.startswith("/"): 12:04:17 url = to_str(_encode_target(url)) 12:04:17 else: 12:04:17 url = to_str(parsed_url.url) 12:04:17 12:04:17 conn = None 12:04:17 12:04:17 # Track whether `conn` needs to be released before 12:04:17 # returning/raising/recursing. Update this variable if necessary, and 12:04:17 # leave `release_conn` constant throughout the function. That way, if 12:04:17 # the function recurses, the original value of `release_conn` will be 12:04:17 # passed down into the recursive call, and its value will be respected. 12:04:17 # 12:04:17 # See issue #651 [1] for details. 12:04:17 # 12:04:17 # [1] 12:04:17 release_this_conn = release_conn 12:04:17 12:04:17 http_tunnel_required = connection_requires_http_tunnel( 12:04:17 self.proxy, self.proxy_config, destination_scheme 12:04:17 ) 12:04:17 12:04:17 # Merge the proxy headers. Only done when not using HTTP CONNECT. We 12:04:17 # have to copy the headers dict so we can safely change it without those 12:04:17 # changes being reflected in anyone else's copy. 12:04:17 if not http_tunnel_required: 12:04:17 headers = headers.copy() # type: ignore[attr-defined] 12:04:17 headers.update(self.proxy_headers) # type: ignore[union-attr] 12:04:17 12:04:17 # Must keep the exception bound to a separate variable or else Python 3 12:04:17 # complains about UnboundLocalError. 12:04:17 err = None 12:04:17 12:04:17 # Keep track of whether we cleanly exited the except block. This 12:04:17 # ensures we do proper cleanup in finally. 12:04:17 clean_exit = False 12:04:17 12:04:17 # Rewind body position, if needed. Record current position 12:04:17 # for future rewinds in the event of a redirect/retry. 12:04:17 body_pos = set_file_position(body, body_pos) 12:04:17 12:04:17 try: 12:04:17 # Request a connection from the queue. 12:04:17 timeout_obj = self._get_timeout(timeout) 12:04:17 conn = self._get_conn(timeout=pool_timeout) 12:04:17 12:04:17 conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] 12:04:17 12:04:17 # Is this a closed/new connection that requires CONNECT tunnelling? 12:04:17 if self.proxy is not None and http_tunnel_required and conn.is_closed: 12:04:17 try: 12:04:17 self._prepare_proxy(conn) 12:04:17 except (BaseSSLError, OSError, SocketTimeout) as e: 12:04:17 self._raise_timeout( 12:04:17 err=e, url=self.proxy.url, timeout_value=conn.timeout 12:04:17 ) 12:04:17 raise 12:04:17 12:04:17 # If we're going to release the connection in ``finally:``, then 12:04:17 # the response doesn't need to know about the connection. Otherwise 12:04:17 # it will also try to release it and we'll have a double-release 12:04:17 # mess. 
12:04:17 response_conn = conn if not release_conn else None 12:04:17 12:04:17 # Make the request on the HTTPConnection object 12:04:17 > response = self._make_request( 12:04:17 conn, 12:04:17 method, 12:04:17 url, 12:04:17 timeout=timeout_obj, 12:04:17 body=body, 12:04:17 headers=headers, 12:04:17 chunked=chunked, 12:04:17 retries=retries, 12:04:17 response_conn=response_conn, 12:04:17 preload_content=preload_content, 12:04:17 decode_content=decode_content, 12:04:17 **response_kw, 12:04:17 ) 12:04:17 12:04:17 ../.tox/tests221/lib/python3.11/site-packages/urllib3/connectionpool.py:787: 12:04:17 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 12:04:17 ../.tox/tests221/lib/python3.11/site-packages/urllib3/connectionpool.py:493: in _make_request 12:04:17 conn.request( 12:04:17 ../.tox/tests221/lib/python3.11/site-packages/urllib3/connection.py:494: in request 12:04:17 self.endheaders() 12:04:17 /opt/pyenv/versions/3.11.10/lib/python3.11/http/client.py:1298: in endheaders 12:04:17 self._send_output(message_body, encode_chunked=encode_chunked) 12:04:17 /opt/pyenv/versions/3.11.10/lib/python3.11/http/client.py:1058: in _send_output 12:04:17 self.send(msg) 12:04:17 /opt/pyenv/versions/3.11.10/lib/python3.11/http/client.py:996: in send 12:04:17 self.connect() 12:04:17 ../.tox/tests221/lib/python3.11/site-packages/urllib3/connection.py:325: in connect 12:04:17 self.sock = self._new_conn() 12:04:17 ^^^^^^^^^^^^^^^^ 12:04:17 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 12:04:17 12:04:17 self = 12:04:17 12:04:17 def _new_conn(self) -> socket.socket: 12:04:17 """Establish a socket connection and set nodelay settings on it. 12:04:17 12:04:17 :return: New socket connection. 12:04:17 """ 12:04:17 try: 12:04:17 sock = connection.create_connection( 12:04:17 (self._dns_host, self.port), 12:04:17 self.timeout, 12:04:17 source_address=self.source_address, 12:04:17 socket_options=self.socket_options, 12:04:17 ) 12:04:17 except socket.gaierror as e: 12:04:17 raise NameResolutionError(self.host, self, e) from e 12:04:17 except SocketTimeout as e: 12:04:17 raise ConnectTimeoutError( 12:04:17 self, 12:04:17 f"Connection to {self.host} timed out. (connect timeout={self.timeout})", 12:04:17 ) from e 12:04:17 12:04:17 except OSError as e: 12:04:17 > raise NewConnectionError( 12:04:17 self, f"Failed to establish a new connection: {e}" 12:04:17 ) from e 12:04:17 E urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused 12:04:17 12:04:17 ../.tox/tests221/lib/python3.11/site-packages/urllib3/connection.py:213: NewConnectionError 12:04:17 12:04:17 The above exception was the direct cause of the following exception: 12:04:17 12:04:17 self = 12:04:17 request = , stream = False 12:04:17 timeout = Timeout(connect=30, read=30, total=None), verify = True, cert = None 12:04:17 proxies = OrderedDict() 12:04:17 12:04:17 def send( 12:04:17 self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 12:04:17 ): 12:04:17 """Sends PreparedRequest object. Returns Response object. 12:04:17 12:04:17 :param request: The :class:`PreparedRequest ` being sent. 12:04:17 :param stream: (optional) Whether to stream the request content. 12:04:17 :param timeout: (optional) How long to wait for the server to send 12:04:17 data before giving up, as a float, or a :ref:`(connect timeout, 12:04:17 read timeout) ` tuple. 
12:04:17 :type timeout: float or tuple or urllib3 Timeout object 12:04:17 :param verify: (optional) Either a boolean, in which case it controls whether 12:04:17 we verify the server's TLS certificate, or a string, in which case it 12:04:17 must be a path to a CA bundle to use 12:04:17 :param cert: (optional) Any user-provided SSL certificate to be trusted. 12:04:17 :param proxies: (optional) The proxies dictionary to apply to the request. 12:04:17 :rtype: requests.Response 12:04:17 """ 12:04:17 12:04:17 try: 12:04:17 conn = self.get_connection_with_tls_context( 12:04:17 request, verify, proxies=proxies, cert=cert 12:04:17 ) 12:04:17 except LocationValueError as e: 12:04:17 raise InvalidURL(e, request=request) 12:04:17 12:04:17 self.cert_verify(conn, request.url, verify, cert) 12:04:17 url = self.request_url(request, proxies) 12:04:17 self.add_headers( 12:04:17 request, 12:04:17 stream=stream, 12:04:17 timeout=timeout, 12:04:17 verify=verify, 12:04:17 cert=cert, 12:04:17 proxies=proxies, 12:04:17 ) 12:04:17 12:04:17 chunked = not (request.body is None or "Content-Length" in request.headers) 12:04:17 12:04:17 if isinstance(timeout, tuple): 12:04:17 try: 12:04:17 connect, read = timeout 12:04:17 timeout = TimeoutSauce(connect=connect, read=read) 12:04:17 except ValueError: 12:04:17 raise ValueError( 12:04:17 f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 12:04:17 f"or a single float to set both timeouts to the same value." 12:04:17 ) 12:04:17 elif isinstance(timeout, TimeoutSauce): 12:04:17 pass 12:04:17 else: 12:04:17 timeout = TimeoutSauce(connect=timeout, read=timeout) 12:04:17 12:04:17 try: 12:04:17 > resp = conn.urlopen( 12:04:17 method=request.method, 12:04:17 url=url, 12:04:17 body=request.body, 12:04:17 headers=request.headers, 12:04:17 redirect=False, 12:04:17 assert_same_host=False, 12:04:17 preload_content=False, 12:04:17 decode_content=False, 12:04:17 retries=self.max_retries, 12:04:17 timeout=timeout, 12:04:17 chunked=chunked, 12:04:17 ) 12:04:17 12:04:17 ../.tox/tests221/lib/python3.11/site-packages/requests/adapters.py:667: 12:04:17 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 12:04:17 ../.tox/tests221/lib/python3.11/site-packages/urllib3/connectionpool.py:841: in urlopen 12:04:17 retries = retries.increment( 12:04:17 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 12:04:17 12:04:17 self = Retry(total=0, connect=None, read=False, redirect=None, status=None) 12:04:17 method = 'DELETE' 12:04:17 url = '/rests/data/network-topology:network-topology/topology=topology-netconf/node=XPDR-A1' 12:04:17 response = None 12:04:17 error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused') 12:04:17 _pool = 12:04:17 _stacktrace = 12:04:17 12:04:17 def increment( 12:04:17 self, 12:04:17 method: str | None = None, 12:04:17 url: str | None = None, 12:04:17 response: BaseHTTPResponse | None = None, 12:04:17 error: Exception | None = None, 12:04:17 _pool: ConnectionPool | None = None, 12:04:17 _stacktrace: TracebackType | None = None, 12:04:17 ) -> Self: 12:04:17 """Return a new Retry object with incremented retry counters. 12:04:17 12:04:17 :param response: A response object, or None, if the server did not 12:04:17 return a response. 12:04:17 :type response: :class:`~urllib3.response.BaseHTTPResponse` 12:04:17 :param Exception error: An error encountered during the request, or 12:04:17 None if the response was received successfully. 
12:04:17 12:04:17 :return: A new ``Retry`` object. 12:04:17 """ 12:04:17 if self.total is False and error: 12:04:17 # Disabled, indicate to re-raise the error. 12:04:17 raise reraise(type(error), error, _stacktrace) 12:04:17 12:04:17 total = self.total 12:04:17 if total is not None: 12:04:17 total -= 1 12:04:17 12:04:17 connect = self.connect 12:04:17 read = self.read 12:04:17 redirect = self.redirect 12:04:17 status_count = self.status 12:04:17 other = self.other 12:04:17 cause = "unknown" 12:04:17 status = None 12:04:17 redirect_location = None 12:04:17 12:04:17 if error and self._is_connection_error(error): 12:04:17 # Connect retry? 12:04:17 if connect is False: 12:04:17 raise reraise(type(error), error, _stacktrace) 12:04:17 elif connect is not None: 12:04:17 connect -= 1 12:04:17 12:04:17 elif error and self._is_read_error(error): 12:04:17 # Read retry? 12:04:17 if read is False or method is None or not self._is_method_retryable(method): 12:04:17 raise reraise(type(error), error, _stacktrace) 12:04:17 elif read is not None: 12:04:17 read -= 1 12:04:17 12:04:17 elif error: 12:04:17 # Other retry? 12:04:17 if other is not None: 12:04:17 other -= 1 12:04:17 12:04:17 elif response and response.get_redirect_location(): 12:04:17 # Redirect retry? 12:04:17 if redirect is not None: 12:04:17 redirect -= 1 12:04:17 cause = "too many redirects" 12:04:17 response_redirect_location = response.get_redirect_location() 12:04:17 if response_redirect_location: 12:04:17 redirect_location = response_redirect_location 12:04:17 status = response.status 12:04:17 12:04:17 else: 12:04:17 # Incrementing because of a server error like a 500 in 12:04:17 # status_forcelist and the given method is in the allowed_methods 12:04:17 cause = ResponseError.GENERIC_ERROR 12:04:17 if response and response.status: 12:04:17 if status_count is not None: 12:04:17 status_count -= 1 12:04:17 cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status) 12:04:17 status = response.status 12:04:17 12:04:17 history = self.history + ( 12:04:17 RequestHistory(method, url, error, status, redirect_location), 12:04:17 ) 12:04:17 12:04:17 new_retry = self.new( 12:04:17 total=total, 12:04:17 connect=connect, 12:04:17 read=read, 12:04:17 redirect=redirect, 12:04:17 status=status_count, 12:04:17 other=other, 12:04:17 history=history, 12:04:17 ) 12:04:17 12:04:17 if new_retry.is_exhausted(): 12:04:17 reason = error or ResponseError(cause) 12:04:17 > raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type] 12:04:17 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 12:04:17 E urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=8183): Max retries exceeded with url: /rests/data/network-topology:network-topology/topology=topology-netconf/node=XPDR-A1 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 12:04:17 12:04:17 ../.tox/tests221/lib/python3.11/site-packages/urllib3/util/retry.py:519: MaxRetryError 12:04:17 12:04:17 During handling of the above exception, another exception occurred: 12:04:17 12:04:17 self = 12:04:17 12:04:17 def test_30_xpdr_device_disconnection(self): 12:04:17 > response = test_utils.unmount_device("XPDR-A1") 12:04:17 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 12:04:17 12:04:17 transportpce_tests/2.2.1/test01_portmapping.py:362: 12:04:17 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 12:04:17 transportpce_tests/common/test_utils.py:379: in unmount_device 12:04:17 response = 
delete_request(url[RESTCONF_VERSION].format('{}', node)) 12:04:17 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 12:04:17 transportpce_tests/common/test_utils.py:133: in delete_request 12:04:17 return requests.request( 12:04:17 ../.tox/tests221/lib/python3.11/site-packages/requests/api.py:59: in request 12:04:17 return session.request(method=method, url=url, **kwargs) 12:04:17 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 12:04:17 ../.tox/tests221/lib/python3.11/site-packages/requests/sessions.py:589: in request 12:04:17 resp = self.send(prep, **send_kwargs) 12:04:17 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 12:04:17 ../.tox/tests221/lib/python3.11/site-packages/requests/sessions.py:703: in send 12:04:17 r = adapter.send(request, **kwargs) 12:04:17 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 12:04:17 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 12:04:17 12:04:17 self = 12:04:17 request = , stream = False 12:04:17 timeout = Timeout(connect=30, read=30, total=None), verify = True, cert = None 12:04:17 proxies = OrderedDict() 12:04:17 12:04:17 def send( 12:04:17 self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 12:04:17 ): 12:04:17 """Sends PreparedRequest object. Returns Response object. 12:04:17 12:04:17 :param request: The :class:`PreparedRequest ` being sent. 12:04:17 :param stream: (optional) Whether to stream the request content. 12:04:17 :param timeout: (optional) How long to wait for the server to send 12:04:17 data before giving up, as a float, or a :ref:`(connect timeout, 12:04:17 read timeout) ` tuple. 12:04:17 :type timeout: float or tuple or urllib3 Timeout object 12:04:17 :param verify: (optional) Either a boolean, in which case it controls whether 12:04:17 we verify the server's TLS certificate, or a string, in which case it 12:04:17 must be a path to a CA bundle to use 12:04:17 :param cert: (optional) Any user-provided SSL certificate to be trusted. 12:04:17 :param proxies: (optional) The proxies dictionary to apply to the request. 12:04:17 :rtype: requests.Response 12:04:17 """ 12:04:17 12:04:17 try: 12:04:17 conn = self.get_connection_with_tls_context( 12:04:17 request, verify, proxies=proxies, cert=cert 12:04:17 ) 12:04:17 except LocationValueError as e: 12:04:17 raise InvalidURL(e, request=request) 12:04:17 12:04:17 self.cert_verify(conn, request.url, verify, cert) 12:04:17 url = self.request_url(request, proxies) 12:04:17 self.add_headers( 12:04:17 request, 12:04:17 stream=stream, 12:04:17 timeout=timeout, 12:04:17 verify=verify, 12:04:17 cert=cert, 12:04:17 proxies=proxies, 12:04:17 ) 12:04:17 12:04:17 chunked = not (request.body is None or "Content-Length" in request.headers) 12:04:17 12:04:17 if isinstance(timeout, tuple): 12:04:17 try: 12:04:17 connect, read = timeout 12:04:17 timeout = TimeoutSauce(connect=connect, read=read) 12:04:17 except ValueError: 12:04:17 raise ValueError( 12:04:17 f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 12:04:17 f"or a single float to set both timeouts to the same value." 
12:04:17 ) 12:04:17 elif isinstance(timeout, TimeoutSauce): 12:04:17 pass 12:04:17 else: 12:04:17 timeout = TimeoutSauce(connect=timeout, read=timeout) 12:04:17 12:04:17 try: 12:04:17 resp = conn.urlopen( 12:04:17 method=request.method, 12:04:17 url=url, 12:04:17 body=request.body, 12:04:17 headers=request.headers, 12:04:17 redirect=False, 12:04:17 assert_same_host=False, 12:04:17 preload_content=False, 12:04:17 decode_content=False, 12:04:17 retries=self.max_retries, 12:04:17 timeout=timeout, 12:04:17 chunked=chunked, 12:04:17 ) 12:04:17 12:04:17 except (ProtocolError, OSError) as err: 12:04:17 raise ConnectionError(err, request=request) 12:04:17 12:04:17 except MaxRetryError as e: 12:04:17 if isinstance(e.reason, ConnectTimeoutError): 12:04:17 # TODO: Remove this in 3.0.0: see #2811 12:04:17 if not isinstance(e.reason, NewConnectionError): 12:04:17 raise ConnectTimeout(e, request=request) 12:04:17 12:04:17 if isinstance(e.reason, ResponseError): 12:04:17 raise RetryError(e, request=request) 12:04:17 12:04:17 if isinstance(e.reason, _ProxyError): 12:04:17 raise ProxyError(e, request=request) 12:04:17 12:04:17 if isinstance(e.reason, _SSLError): 12:04:17 # This branch is for urllib3 v1.22 and later. 12:04:17 raise SSLError(e, request=request) 12:04:17 12:04:17 > raise ConnectionError(e, request=request) 12:04:17 E requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=8183): Max retries exceeded with url: /rests/data/network-topology:network-topology/topology=topology-netconf/node=XPDR-A1 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 12:04:17 12:04:17 ../.tox/tests221/lib/python3.11/site-packages/requests/adapters.py:700: ConnectionError 12:04:17 ----------------------------- Captured stdout call ----------------------------- 12:04:17 execution of test_30_xpdr_device_disconnection 12:04:17 _______ TransportPCEPortMappingTesting.test_31_xpdr_device_disconnected ________ 12:04:17 12:04:17 self = 12:04:17 12:04:17 def _new_conn(self) -> socket.socket: 12:04:17 """Establish a socket connection and set nodelay settings on it. 12:04:17 12:04:17 :return: New socket connection. 12:04:17 """ 12:04:17 try: 12:04:17 > sock = connection.create_connection( 12:04:17 (self._dns_host, self.port), 12:04:17 self.timeout, 12:04:17 source_address=self.source_address, 12:04:17 socket_options=self.socket_options, 12:04:17 ) 12:04:17 12:04:17 ../.tox/tests221/lib/python3.11/site-packages/urllib3/connection.py:198: 12:04:17 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 12:04:17 ../.tox/tests221/lib/python3.11/site-packages/urllib3/util/connection.py:85: in create_connection 12:04:17 raise err 12:04:17 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 12:04:17 12:04:17 address = ('localhost', 8183), timeout = 30, source_address = None 12:04:17 socket_options = [(6, 1, 1)] 12:04:17 12:04:17 def create_connection( 12:04:17 address: tuple[str, int], 12:04:17 timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 12:04:17 source_address: tuple[str, int] | None = None, 12:04:17 socket_options: _TYPE_SOCKET_OPTIONS | None = None, 12:04:17 ) -> socket.socket: 12:04:17 """Connect to *address* and return the socket object. 12:04:17 12:04:17 Convenience function. Connect to *address* (a 2-tuple ``(host, 12:04:17 port)``) and return the socket object. Passing the optional 12:04:17 *timeout* parameter will set the timeout on the socket instance 12:04:17 before attempting to connect. 
If no *timeout* is supplied, the 12:04:17 global default timeout setting returned by :func:`socket.getdefaulttimeout` 12:04:17 is used. If *source_address* is set it must be a tuple of (host, port) 12:04:17 for the socket to bind as a source address before making the connection. 12:04:17 An host of '' or port 0 tells the OS to use the default. 12:04:17 """ 12:04:17 12:04:17 host, port = address 12:04:17 if host.startswith("["): 12:04:17 host = host.strip("[]") 12:04:17 err = None 12:04:17 12:04:17 # Using the value from allowed_gai_family() in the context of getaddrinfo lets 12:04:17 # us select whether to work with IPv4 DNS records, IPv6 records, or both. 12:04:17 # The original create_connection function always returns all records. 12:04:17 family = allowed_gai_family() 12:04:17 12:04:17 try: 12:04:17 host.encode("idna") 12:04:17 except UnicodeError: 12:04:17 raise LocationParseError(f"'{host}', label empty or too long") from None 12:04:17 12:04:17 for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM): 12:04:17 af, socktype, proto, canonname, sa = res 12:04:17 sock = None 12:04:17 try: 12:04:17 sock = socket.socket(af, socktype, proto) 12:04:17 12:04:17 # If provided, set socket level options before connecting. 12:04:17 _set_socket_options(sock, socket_options) 12:04:17 12:04:17 if timeout is not _DEFAULT_TIMEOUT: 12:04:17 sock.settimeout(timeout) 12:04:17 if source_address: 12:04:17 sock.bind(source_address) 12:04:17 > sock.connect(sa) 12:04:17 E ConnectionRefusedError: [Errno 111] Connection refused 12:04:17 12:04:17 ../.tox/tests221/lib/python3.11/site-packages/urllib3/util/connection.py:73: ConnectionRefusedError 12:04:17 12:04:17 The above exception was the direct cause of the following exception: 12:04:17 12:04:17 self = 12:04:17 method = 'GET' 12:04:17 url = '/rests/data/network-topology:network-topology/topology=topology-netconf/node=XPDR-A1?content=nonconfig' 12:04:17 body = None 12:04:17 headers = {'User-Agent': 'python-requests/2.32.4', 'Accept-Encoding': 'gzip, deflate', 'Accept': 'application/json', 'Connection': 'keep-alive', 'Content-Type': 'application/json', 'Authorization': 'Basic YWRtaW46YWRtaW4='} 12:04:17 retries = Retry(total=0, connect=None, read=False, redirect=None, status=None) 12:04:17 redirect = False, assert_same_host = False 12:04:17 timeout = Timeout(connect=30, read=30, total=None), pool_timeout = None 12:04:17 release_conn = False, chunked = False, body_pos = None, preload_content = False 12:04:17 decode_content = False, response_kw = {} 12:04:17 parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/rests/data/network-topology:network-topology/topology=topology-netconf/node=XPDR-A1', query='content=nonconfig', fragment=None) 12:04:17 destination_scheme = None, conn = None, release_this_conn = True 12:04:17 http_tunnel_required = False, err = None, clean_exit = False 12:04:17 12:04:17 def urlopen( # type: ignore[override] 12:04:17 self, 12:04:17 method: str, 12:04:17 url: str, 12:04:17 body: _TYPE_BODY | None = None, 12:04:17 headers: typing.Mapping[str, str] | None = None, 12:04:17 retries: Retry | bool | int | None = None, 12:04:17 redirect: bool = True, 12:04:17 assert_same_host: bool = True, 12:04:17 timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 12:04:17 pool_timeout: int | None = None, 12:04:17 release_conn: bool | None = None, 12:04:17 chunked: bool = False, 12:04:17 body_pos: _TYPE_BODY_POSITION | None = None, 12:04:17 preload_content: bool = True, 12:04:17 decode_content: bool = True, 12:04:17 **response_kw: typing.Any, 
12:04:17 ) -> BaseHTTPResponse: 12:04:17 """ 12:04:17 Get a connection from the pool and perform an HTTP request. This is the 12:04:17 lowest level call for making a request, so you'll need to specify all 12:04:17 the raw details. 12:04:17 12:04:17 .. note:: 12:04:17 12:04:17 More commonly, it's appropriate to use a convenience method 12:04:17 such as :meth:`request`. 12:04:17 12:04:17 .. note:: 12:04:17 12:04:17 `release_conn` will only behave as expected if 12:04:17 `preload_content=False` because we want to make 12:04:17 `preload_content=False` the default behaviour someday soon without 12:04:17 breaking backwards compatibility. 12:04:17 12:04:17 :param method: 12:04:17 HTTP request method (such as GET, POST, PUT, etc.) 12:04:17 12:04:17 :param url: 12:04:17 The URL to perform the request on. 12:04:17 12:04:17 :param body: 12:04:17 Data to send in the request body, either :class:`str`, :class:`bytes`, 12:04:17 an iterable of :class:`str`/:class:`bytes`, or a file-like object. 12:04:17 12:04:17 :param headers: 12:04:17 Dictionary of custom headers to send, such as User-Agent, 12:04:17 If-None-Match, etc. If None, pool headers are used. If provided, 12:04:17 these headers completely replace any pool-specific headers. 12:04:17 12:04:17 :param retries: 12:04:17 Configure the number of retries to allow before raising a 12:04:17 :class:`~urllib3.exceptions.MaxRetryError` exception. 12:04:17 12:04:17 If ``None`` (default) will retry 3 times, see ``Retry.DEFAULT``. Pass a 12:04:17 :class:`~urllib3.util.retry.Retry` object for fine-grained control 12:04:17 over different types of retries. 12:04:17 Pass an integer number to retry connection errors that many times, 12:04:17 but no other types of errors. Pass zero to never retry. 12:04:17 12:04:17 If ``False``, then retries are disabled and any exception is raised 12:04:17 immediately. Also, instead of raising a MaxRetryError on redirects, 12:04:17 the redirect response will be returned. 12:04:17 12:04:17 :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. 12:04:17 12:04:17 :param redirect: 12:04:17 If True, automatically handle redirects (status codes 301, 302, 12:04:17 303, 307, 308). Each redirect counts as a retry. Disabling retries 12:04:17 will disable redirect, too. 12:04:17 12:04:17 :param assert_same_host: 12:04:17 If ``True``, will make sure that the host of the pool requests is 12:04:17 consistent else will raise HostChangedError. When ``False``, you can 12:04:17 use the pool on an HTTP proxy and request foreign hosts. 12:04:17 12:04:17 :param timeout: 12:04:17 If specified, overrides the default timeout for this one 12:04:17 request. It may be a float (in seconds) or an instance of 12:04:17 :class:`urllib3.util.Timeout`. 12:04:17 12:04:17 :param pool_timeout: 12:04:17 If set and the pool is set to block=True, then this method will 12:04:17 block for ``pool_timeout`` seconds and raise EmptyPoolError if no 12:04:17 connection is available within the time period. 12:04:17 12:04:17 :param bool preload_content: 12:04:17 If True, the response's body will be preloaded into memory. 12:04:17 12:04:17 :param bool decode_content: 12:04:17 If True, will attempt to decode the body based on the 12:04:17 'content-encoding' header. 12:04:17 12:04:17 :param release_conn: 12:04:17 If False, then the urlopen call will not release the connection 12:04:17 back into the pool once a response is received (but will release if 12:04:17 you read the entire contents of the response such as when 12:04:17 `preload_content=True`). 
This is useful if you're not preloading 12:04:17 the response's content immediately. You will need to call 12:04:17 ``r.release_conn()`` on the response ``r`` to return the connection 12:04:17 back into the pool. If None, it takes the value of ``preload_content`` 12:04:17 which defaults to ``True``. 12:04:17 12:04:17 :param bool chunked: 12:04:17 If True, urllib3 will send the body using chunked transfer 12:04:17 encoding. Otherwise, urllib3 will send the body using the standard 12:04:17 content-length form. Defaults to False. 12:04:17 12:04:17 :param int body_pos: 12:04:17 Position to seek to in file-like body in the event of a retry or 12:04:17 redirect. Typically this won't need to be set because urllib3 will 12:04:17 auto-populate the value when needed. 12:04:17 """ 12:04:17 parsed_url = parse_url(url) 12:04:17 destination_scheme = parsed_url.scheme 12:04:17 12:04:17 if headers is None: 12:04:17 headers = self.headers 12:04:17 12:04:17 if not isinstance(retries, Retry): 12:04:17 retries = Retry.from_int(retries, redirect=redirect, default=self.retries) 12:04:17 12:04:17 if release_conn is None: 12:04:17 release_conn = preload_content 12:04:17 12:04:17 # Check host 12:04:17 if assert_same_host and not self.is_same_host(url): 12:04:17 raise HostChangedError(self, url, retries) 12:04:17 12:04:17 # Ensure that the URL we're connecting to is properly encoded 12:04:17 if url.startswith("/"): 12:04:17 url = to_str(_encode_target(url)) 12:04:17 else: 12:04:17 url = to_str(parsed_url.url) 12:04:17 12:04:17 conn = None 12:04:17 12:04:17 # Track whether `conn` needs to be released before 12:04:17 # returning/raising/recursing. Update this variable if necessary, and 12:04:17 # leave `release_conn` constant throughout the function. That way, if 12:04:17 # the function recurses, the original value of `release_conn` will be 12:04:17 # passed down into the recursive call, and its value will be respected. 12:04:17 # 12:04:17 # See issue #651 [1] for details. 12:04:17 # 12:04:17 # [1] 12:04:17 release_this_conn = release_conn 12:04:17 12:04:17 http_tunnel_required = connection_requires_http_tunnel( 12:04:17 self.proxy, self.proxy_config, destination_scheme 12:04:17 ) 12:04:17 12:04:17 # Merge the proxy headers. Only done when not using HTTP CONNECT. We 12:04:17 # have to copy the headers dict so we can safely change it without those 12:04:17 # changes being reflected in anyone else's copy. 12:04:17 if not http_tunnel_required: 12:04:17 headers = headers.copy() # type: ignore[attr-defined] 12:04:17 headers.update(self.proxy_headers) # type: ignore[union-attr] 12:04:17 12:04:17 # Must keep the exception bound to a separate variable or else Python 3 12:04:17 # complains about UnboundLocalError. 12:04:17 err = None 12:04:17 12:04:17 # Keep track of whether we cleanly exited the except block. This 12:04:17 # ensures we do proper cleanup in finally. 12:04:17 clean_exit = False 12:04:17 12:04:17 # Rewind body position, if needed. Record current position 12:04:17 # for future rewinds in the event of a redirect/retry. 12:04:17 body_pos = set_file_position(body, body_pos) 12:04:17 12:04:17 try: 12:04:17 # Request a connection from the queue. 12:04:17 timeout_obj = self._get_timeout(timeout) 12:04:17 conn = self._get_conn(timeout=pool_timeout) 12:04:17 12:04:17 conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] 12:04:17 12:04:17 # Is this a closed/new connection that requires CONNECT tunnelling? 
12:04:17 if self.proxy is not None and http_tunnel_required and conn.is_closed: 12:04:17 try: 12:04:17 self._prepare_proxy(conn) 12:04:17 except (BaseSSLError, OSError, SocketTimeout) as e: 12:04:17 self._raise_timeout( 12:04:17 err=e, url=self.proxy.url, timeout_value=conn.timeout 12:04:17 ) 12:04:17 raise 12:04:17 12:04:17 # If we're going to release the connection in ``finally:``, then 12:04:17 # the response doesn't need to know about the connection. Otherwise 12:04:17 # it will also try to release it and we'll have a double-release 12:04:17 # mess. 12:04:17 response_conn = conn if not release_conn else None 12:04:17 12:04:17 # Make the request on the HTTPConnection object 12:04:17 > response = self._make_request( 12:04:17 conn, 12:04:17 method, 12:04:17 url, 12:04:17 timeout=timeout_obj, 12:04:17 body=body, 12:04:17 headers=headers, 12:04:17 chunked=chunked, 12:04:17 retries=retries, 12:04:17 response_conn=response_conn, 12:04:17 preload_content=preload_content, 12:04:17 decode_content=decode_content, 12:04:17 **response_kw, 12:04:17 ) 12:04:17 12:04:17 ../.tox/tests221/lib/python3.11/site-packages/urllib3/connectionpool.py:787: 12:04:17 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 12:04:17 ../.tox/tests221/lib/python3.11/site-packages/urllib3/connectionpool.py:493: in _make_request 12:04:17 conn.request( 12:04:17 ../.tox/tests221/lib/python3.11/site-packages/urllib3/connection.py:494: in request 12:04:17 self.endheaders() 12:04:17 /opt/pyenv/versions/3.11.10/lib/python3.11/http/client.py:1298: in endheaders 12:04:17 self._send_output(message_body, encode_chunked=encode_chunked) 12:04:17 /opt/pyenv/versions/3.11.10/lib/python3.11/http/client.py:1058: in _send_output 12:04:17 self.send(msg) 12:04:17 /opt/pyenv/versions/3.11.10/lib/python3.11/http/client.py:996: in send 12:04:17 self.connect() 12:04:17 ../.tox/tests221/lib/python3.11/site-packages/urllib3/connection.py:325: in connect 12:04:17 self.sock = self._new_conn() 12:04:17 ^^^^^^^^^^^^^^^^ 12:04:17 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 12:04:17 12:04:17 self = 12:04:17 12:04:17 def _new_conn(self) -> socket.socket: 12:04:17 """Establish a socket connection and set nodelay settings on it. 12:04:17 12:04:17 :return: New socket connection. 12:04:17 """ 12:04:17 try: 12:04:17 sock = connection.create_connection( 12:04:17 (self._dns_host, self.port), 12:04:17 self.timeout, 12:04:17 source_address=self.source_address, 12:04:17 socket_options=self.socket_options, 12:04:17 ) 12:04:17 except socket.gaierror as e: 12:04:17 raise NameResolutionError(self.host, self, e) from e 12:04:17 except SocketTimeout as e: 12:04:17 raise ConnectTimeoutError( 12:04:17 self, 12:04:17 f"Connection to {self.host} timed out. 
(connect timeout={self.timeout})", 12:04:17 ) from e 12:04:17 12:04:17 except OSError as e: 12:04:17 > raise NewConnectionError( 12:04:17 self, f"Failed to establish a new connection: {e}" 12:04:17 ) from e 12:04:17 E urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused 12:04:17 12:04:17 ../.tox/tests221/lib/python3.11/site-packages/urllib3/connection.py:213: NewConnectionError 12:04:17 12:04:17 The above exception was the direct cause of the following exception: 12:04:17 12:04:17 self = 12:04:17 request = , stream = False 12:04:17 timeout = Timeout(connect=30, read=30, total=None), verify = True, cert = None 12:04:17 proxies = OrderedDict() 12:04:17 12:04:17 def send( 12:04:17 self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 12:04:17 ): 12:04:17 """Sends PreparedRequest object. Returns Response object. 12:04:17 12:04:17 :param request: The :class:`PreparedRequest ` being sent. 12:04:17 :param stream: (optional) Whether to stream the request content. 12:04:17 :param timeout: (optional) How long to wait for the server to send 12:04:17 data before giving up, as a float, or a :ref:`(connect timeout, 12:04:17 read timeout) ` tuple. 12:04:17 :type timeout: float or tuple or urllib3 Timeout object 12:04:17 :param verify: (optional) Either a boolean, in which case it controls whether 12:04:17 we verify the server's TLS certificate, or a string, in which case it 12:04:17 must be a path to a CA bundle to use 12:04:17 :param cert: (optional) Any user-provided SSL certificate to be trusted. 12:04:17 :param proxies: (optional) The proxies dictionary to apply to the request. 12:04:17 :rtype: requests.Response 12:04:17 """ 12:04:17 12:04:17 try: 12:04:17 conn = self.get_connection_with_tls_context( 12:04:17 request, verify, proxies=proxies, cert=cert 12:04:17 ) 12:04:17 except LocationValueError as e: 12:04:17 raise InvalidURL(e, request=request) 12:04:17 12:04:17 self.cert_verify(conn, request.url, verify, cert) 12:04:17 url = self.request_url(request, proxies) 12:04:17 self.add_headers( 12:04:17 request, 12:04:17 stream=stream, 12:04:17 timeout=timeout, 12:04:17 verify=verify, 12:04:17 cert=cert, 12:04:17 proxies=proxies, 12:04:17 ) 12:04:17 12:04:17 chunked = not (request.body is None or "Content-Length" in request.headers) 12:04:17 12:04:17 if isinstance(timeout, tuple): 12:04:17 try: 12:04:17 connect, read = timeout 12:04:17 timeout = TimeoutSauce(connect=connect, read=read) 12:04:17 except ValueError: 12:04:17 raise ValueError( 12:04:17 f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 12:04:17 f"or a single float to set both timeouts to the same value." 
12:04:17 ) 12:04:17 elif isinstance(timeout, TimeoutSauce): 12:04:17 pass 12:04:17 else: 12:04:17 timeout = TimeoutSauce(connect=timeout, read=timeout) 12:04:17 12:04:17 try: 12:04:17 > resp = conn.urlopen( 12:04:17 method=request.method, 12:04:17 url=url, 12:04:17 body=request.body, 12:04:17 headers=request.headers, 12:04:17 redirect=False, 12:04:17 assert_same_host=False, 12:04:17 preload_content=False, 12:04:17 decode_content=False, 12:04:17 retries=self.max_retries, 12:04:17 timeout=timeout, 12:04:17 chunked=chunked, 12:04:17 ) 12:04:17 12:04:17 ../.tox/tests221/lib/python3.11/site-packages/requests/adapters.py:667: 12:04:17 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 12:04:17 ../.tox/tests221/lib/python3.11/site-packages/urllib3/connectionpool.py:841: in urlopen 12:04:17 retries = retries.increment( 12:04:17 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 12:04:17 12:04:17 self = Retry(total=0, connect=None, read=False, redirect=None, status=None) 12:04:17 method = 'GET' 12:04:17 url = '/rests/data/network-topology:network-topology/topology=topology-netconf/node=XPDR-A1?content=nonconfig' 12:04:17 response = None 12:04:17 error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused') 12:04:17 _pool = 12:04:17 _stacktrace = 12:04:17 12:04:17 def increment( 12:04:17 self, 12:04:17 method: str | None = None, 12:04:17 url: str | None = None, 12:04:17 response: BaseHTTPResponse | None = None, 12:04:17 error: Exception | None = None, 12:04:17 _pool: ConnectionPool | None = None, 12:04:17 _stacktrace: TracebackType | None = None, 12:04:17 ) -> Self: 12:04:17 """Return a new Retry object with incremented retry counters. 12:04:17 12:04:17 :param response: A response object, or None, if the server did not 12:04:17 return a response. 12:04:17 :type response: :class:`~urllib3.response.BaseHTTPResponse` 12:04:17 :param Exception error: An error encountered during the request, or 12:04:17 None if the response was received successfully. 12:04:17 12:04:17 :return: A new ``Retry`` object. 12:04:17 """ 12:04:17 if self.total is False and error: 12:04:17 # Disabled, indicate to re-raise the error. 12:04:17 raise reraise(type(error), error, _stacktrace) 12:04:17 12:04:17 total = self.total 12:04:17 if total is not None: 12:04:17 total -= 1 12:04:17 12:04:17 connect = self.connect 12:04:17 read = self.read 12:04:17 redirect = self.redirect 12:04:17 status_count = self.status 12:04:17 other = self.other 12:04:17 cause = "unknown" 12:04:17 status = None 12:04:17 redirect_location = None 12:04:17 12:04:17 if error and self._is_connection_error(error): 12:04:17 # Connect retry? 12:04:17 if connect is False: 12:04:17 raise reraise(type(error), error, _stacktrace) 12:04:17 elif connect is not None: 12:04:17 connect -= 1 12:04:17 12:04:17 elif error and self._is_read_error(error): 12:04:17 # Read retry? 12:04:17 if read is False or method is None or not self._is_method_retryable(method): 12:04:17 raise reraise(type(error), error, _stacktrace) 12:04:17 elif read is not None: 12:04:17 read -= 1 12:04:17 12:04:17 elif error: 12:04:17 # Other retry? 12:04:17 if other is not None: 12:04:17 other -= 1 12:04:17 12:04:17 elif response and response.get_redirect_location(): 12:04:17 # Redirect retry? 
12:04:17 if redirect is not None: 12:04:17 redirect -= 1 12:04:17 cause = "too many redirects" 12:04:17 response_redirect_location = response.get_redirect_location() 12:04:17 if response_redirect_location: 12:04:17 redirect_location = response_redirect_location 12:04:17 status = response.status 12:04:17 12:04:17 else: 12:04:17 # Incrementing because of a server error like a 500 in 12:04:17 # status_forcelist and the given method is in the allowed_methods 12:04:17 cause = ResponseError.GENERIC_ERROR 12:04:17 if response and response.status: 12:04:17 if status_count is not None: 12:04:17 status_count -= 1 12:04:17 cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status) 12:04:17 status = response.status 12:04:17 12:04:17 history = self.history + ( 12:04:17 RequestHistory(method, url, error, status, redirect_location), 12:04:17 ) 12:04:17 12:04:17 new_retry = self.new( 12:04:17 total=total, 12:04:17 connect=connect, 12:04:17 read=read, 12:04:17 redirect=redirect, 12:04:17 status=status_count, 12:04:17 other=other, 12:04:17 history=history, 12:04:17 ) 12:04:17 12:04:17 if new_retry.is_exhausted(): 12:04:17 reason = error or ResponseError(cause) 12:04:17 > raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type] 12:04:17 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 12:04:17 E urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=8183): Max retries exceeded with url: /rests/data/network-topology:network-topology/topology=topology-netconf/node=XPDR-A1?content=nonconfig (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 12:04:17 12:04:17 ../.tox/tests221/lib/python3.11/site-packages/urllib3/util/retry.py:519: MaxRetryError 12:04:17 12:04:17 During handling of the above exception, another exception occurred: 12:04:17 12:04:17 self = 12:04:17 12:04:17 def test_31_xpdr_device_disconnected(self): 12:04:17 > response = test_utils.check_device_connection("XPDR-A1") 12:04:17 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 12:04:17 12:04:17 transportpce_tests/2.2.1/test01_portmapping.py:366: 12:04:17 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 12:04:17 transportpce_tests/common/test_utils.py:390: in check_device_connection 12:04:17 response = get_request(url[RESTCONF_VERSION].format('{}', node)) 12:04:17 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 12:04:17 transportpce_tests/common/test_utils.py:116: in get_request 12:04:17 return requests.request( 12:04:17 ../.tox/tests221/lib/python3.11/site-packages/requests/api.py:59: in request 12:04:17 return session.request(method=method, url=url, **kwargs) 12:04:17 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 12:04:17 ../.tox/tests221/lib/python3.11/site-packages/requests/sessions.py:589: in request 12:04:17 resp = self.send(prep, **send_kwargs) 12:04:17 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 12:04:17 ../.tox/tests221/lib/python3.11/site-packages/requests/sessions.py:703: in send 12:04:17 r = adapter.send(request, **kwargs) 12:04:17 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 12:04:17 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 12:04:17 12:04:17 self = 12:04:17 request = , stream = False 12:04:17 timeout = Timeout(connect=30, read=30, total=None), verify = True, cert = None 12:04:17 proxies = OrderedDict() 12:04:17 12:04:17 def send( 12:04:17 self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 12:04:17 ): 12:04:17 """Sends PreparedRequest object. 
Returns Response object. 12:04:17 12:04:17 :param request: The :class:`PreparedRequest ` being sent. 12:04:17 :param stream: (optional) Whether to stream the request content. 12:04:17 :param timeout: (optional) How long to wait for the server to send 12:04:17 data before giving up, as a float, or a :ref:`(connect timeout, 12:04:17 read timeout) ` tuple. 12:04:17 :type timeout: float or tuple or urllib3 Timeout object 12:04:17 :param verify: (optional) Either a boolean, in which case it controls whether 12:04:17 we verify the server's TLS certificate, or a string, in which case it 12:04:17 must be a path to a CA bundle to use 12:04:17 :param cert: (optional) Any user-provided SSL certificate to be trusted. 12:04:17 :param proxies: (optional) The proxies dictionary to apply to the request. 12:04:17 :rtype: requests.Response 12:04:17 """ 12:04:17 12:04:17 try: 12:04:17 conn = self.get_connection_with_tls_context( 12:04:17 request, verify, proxies=proxies, cert=cert 12:04:17 ) 12:04:17 except LocationValueError as e: 12:04:17 raise InvalidURL(e, request=request) 12:04:17 12:04:17 self.cert_verify(conn, request.url, verify, cert) 12:04:17 url = self.request_url(request, proxies) 12:04:17 self.add_headers( 12:04:17 request, 12:04:17 stream=stream, 12:04:17 timeout=timeout, 12:04:17 verify=verify, 12:04:17 cert=cert, 12:04:17 proxies=proxies, 12:04:17 ) 12:04:17 12:04:17 chunked = not (request.body is None or "Content-Length" in request.headers) 12:04:17 12:04:17 if isinstance(timeout, tuple): 12:04:17 try: 12:04:17 connect, read = timeout 12:04:17 timeout = TimeoutSauce(connect=connect, read=read) 12:04:17 except ValueError: 12:04:17 raise ValueError( 12:04:17 f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 12:04:17 f"or a single float to set both timeouts to the same value." 12:04:17 ) 12:04:17 elif isinstance(timeout, TimeoutSauce): 12:04:17 pass 12:04:17 else: 12:04:17 timeout = TimeoutSauce(connect=timeout, read=timeout) 12:04:17 12:04:17 try: 12:04:17 resp = conn.urlopen( 12:04:17 method=request.method, 12:04:17 url=url, 12:04:17 body=request.body, 12:04:17 headers=request.headers, 12:04:17 redirect=False, 12:04:17 assert_same_host=False, 12:04:17 preload_content=False, 12:04:17 decode_content=False, 12:04:17 retries=self.max_retries, 12:04:17 timeout=timeout, 12:04:17 chunked=chunked, 12:04:17 ) 12:04:17 12:04:17 except (ProtocolError, OSError) as err: 12:04:17 raise ConnectionError(err, request=request) 12:04:17 12:04:17 except MaxRetryError as e: 12:04:17 if isinstance(e.reason, ConnectTimeoutError): 12:04:17 # TODO: Remove this in 3.0.0: see #2811 12:04:17 if not isinstance(e.reason, NewConnectionError): 12:04:17 raise ConnectTimeout(e, request=request) 12:04:17 12:04:17 if isinstance(e.reason, ResponseError): 12:04:17 raise RetryError(e, request=request) 12:04:17 12:04:17 if isinstance(e.reason, _ProxyError): 12:04:17 raise ProxyError(e, request=request) 12:04:17 12:04:17 if isinstance(e.reason, _SSLError): 12:04:17 # This branch is for urllib3 v1.22 and later. 
12:04:17 raise SSLError(e, request=request) 12:04:17 12:04:17 > raise ConnectionError(e, request=request) 12:04:17 E requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=8183): Max retries exceeded with url: /rests/data/network-topology:network-topology/topology=topology-netconf/node=XPDR-A1?content=nonconfig (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 12:04:17 12:04:17 ../.tox/tests221/lib/python3.11/site-packages/requests/adapters.py:700: ConnectionError 12:04:17 ----------------------------- Captured stdout call ----------------------------- 12:04:17 execution of test_31_xpdr_device_disconnected 12:04:17 _______ TransportPCEPortMappingTesting.test_32_xpdr_device_not_connected _______ 12:04:17 12:04:17 self = 12:04:17 12:04:17 def _new_conn(self) -> socket.socket: 12:04:17 """Establish a socket connection and set nodelay settings on it. 12:04:17 12:04:17 :return: New socket connection. 12:04:17 """ 12:04:17 try: 12:04:17 > sock = connection.create_connection( 12:04:17 (self._dns_host, self.port), 12:04:17 self.timeout, 12:04:17 source_address=self.source_address, 12:04:17 socket_options=self.socket_options, 12:04:17 ) 12:04:17 12:04:17 ../.tox/tests221/lib/python3.11/site-packages/urllib3/connection.py:198: 12:04:17 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 12:04:17 ../.tox/tests221/lib/python3.11/site-packages/urllib3/util/connection.py:85: in create_connection 12:04:17 raise err 12:04:17 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 12:04:17 12:04:17 address = ('localhost', 8183), timeout = 30, source_address = None 12:04:17 socket_options = [(6, 1, 1)] 12:04:17 12:04:17 def create_connection( 12:04:17 address: tuple[str, int], 12:04:17 timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 12:04:17 source_address: tuple[str, int] | None = None, 12:04:17 socket_options: _TYPE_SOCKET_OPTIONS | None = None, 12:04:17 ) -> socket.socket: 12:04:17 """Connect to *address* and return the socket object. 12:04:17 12:04:17 Convenience function. Connect to *address* (a 2-tuple ``(host, 12:04:17 port)``) and return the socket object. Passing the optional 12:04:17 *timeout* parameter will set the timeout on the socket instance 12:04:17 before attempting to connect. If no *timeout* is supplied, the 12:04:17 global default timeout setting returned by :func:`socket.getdefaulttimeout` 12:04:17 is used. If *source_address* is set it must be a tuple of (host, port) 12:04:17 for the socket to bind as a source address before making the connection. 12:04:17 An host of '' or port 0 tells the OS to use the default. 12:04:17 """ 12:04:17 12:04:17 host, port = address 12:04:17 if host.startswith("["): 12:04:17 host = host.strip("[]") 12:04:17 err = None 12:04:17 12:04:17 # Using the value from allowed_gai_family() in the context of getaddrinfo lets 12:04:17 # us select whether to work with IPv4 DNS records, IPv6 records, or both. 12:04:17 # The original create_connection function always returns all records. 
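    # Illustrative aside, assuming nothing beyond what the traceback shows: the
    # allowed_gai_family() comment above is about restricting getaddrinfo() to
    # IPv4, IPv6, or both address families.  Host/port are copied from the log
    # purely as an example.
    import socket

    for family in (socket.AF_INET, socket.AF_UNSPEC):
        # AF_INET yields only IPv4 records; AF_UNSPEC yields every record the
        # resolver knows about, which is what the stock create_connection does.
        records = socket.getaddrinfo("localhost", 8183, family, socket.SOCK_STREAM)
        print(family, [sockaddr for *_rest, sockaddr in records])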
12:04:17 family = allowed_gai_family() 12:04:17 12:04:17 try: 12:04:17 host.encode("idna") 12:04:17 except UnicodeError: 12:04:17 raise LocationParseError(f"'{host}', label empty or too long") from None 12:04:17 12:04:17 for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM): 12:04:17 af, socktype, proto, canonname, sa = res 12:04:17 sock = None 12:04:17 try: 12:04:17 sock = socket.socket(af, socktype, proto) 12:04:17 12:04:17 # If provided, set socket level options before connecting. 12:04:17 _set_socket_options(sock, socket_options) 12:04:17 12:04:17 if timeout is not _DEFAULT_TIMEOUT: 12:04:17 sock.settimeout(timeout) 12:04:17 if source_address: 12:04:17 sock.bind(source_address) 12:04:17 > sock.connect(sa) 12:04:17 E ConnectionRefusedError: [Errno 111] Connection refused 12:04:17 12:04:17 ../.tox/tests221/lib/python3.11/site-packages/urllib3/util/connection.py:73: ConnectionRefusedError 12:04:17 12:04:17 The above exception was the direct cause of the following exception: 12:04:17 12:04:17 self = 12:04:17 method = 'GET' 12:04:17 url = '/rests/data/transportpce-portmapping:network/nodes=XPDR-A1/node-info' 12:04:17 body = None 12:04:17 headers = {'User-Agent': 'python-requests/2.32.4', 'Accept-Encoding': 'gzip, deflate', 'Accept': 'application/json', 'Connection': 'keep-alive', 'Content-Type': 'application/json', 'Authorization': 'Basic YWRtaW46YWRtaW4='} 12:04:17 retries = Retry(total=0, connect=None, read=False, redirect=None, status=None) 12:04:17 redirect = False, assert_same_host = False 12:04:17 timeout = Timeout(connect=30, read=30, total=None), pool_timeout = None 12:04:17 release_conn = False, chunked = False, body_pos = None, preload_content = False 12:04:17 decode_content = False, response_kw = {} 12:04:17 parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/rests/data/transportpce-portmapping:network/nodes=XPDR-A1/node-info', query=None, fragment=None) 12:04:17 destination_scheme = None, conn = None, release_this_conn = True 12:04:17 http_tunnel_required = False, err = None, clean_exit = False 12:04:17 12:04:17 def urlopen( # type: ignore[override] 12:04:17 self, 12:04:17 method: str, 12:04:17 url: str, 12:04:17 body: _TYPE_BODY | None = None, 12:04:17 headers: typing.Mapping[str, str] | None = None, 12:04:17 retries: Retry | bool | int | None = None, 12:04:17 redirect: bool = True, 12:04:17 assert_same_host: bool = True, 12:04:17 timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 12:04:17 pool_timeout: int | None = None, 12:04:17 release_conn: bool | None = None, 12:04:17 chunked: bool = False, 12:04:17 body_pos: _TYPE_BODY_POSITION | None = None, 12:04:17 preload_content: bool = True, 12:04:17 decode_content: bool = True, 12:04:17 **response_kw: typing.Any, 12:04:17 ) -> BaseHTTPResponse: 12:04:17 """ 12:04:17 Get a connection from the pool and perform an HTTP request. This is the 12:04:17 lowest level call for making a request, so you'll need to specify all 12:04:17 the raw details. 12:04:17 12:04:17 .. note:: 12:04:17 12:04:17 More commonly, it's appropriate to use a convenience method 12:04:17 such as :meth:`request`. 12:04:17 12:04:17 .. note:: 12:04:17 12:04:17 `release_conn` will only behave as expected if 12:04:17 `preload_content=False` because we want to make 12:04:17 `preload_content=False` the default behaviour someday soon without 12:04:17 breaking backwards compatibility. 12:04:17 12:04:17 :param method: 12:04:17 HTTP request method (such as GET, POST, PUT, etc.) 12:04:17 12:04:17 :param url: 12:04:17 The URL to perform the request on. 
12:04:17 12:04:17 :param body: 12:04:17 Data to send in the request body, either :class:`str`, :class:`bytes`, 12:04:17 an iterable of :class:`str`/:class:`bytes`, or a file-like object. 12:04:17 12:04:17 :param headers: 12:04:17 Dictionary of custom headers to send, such as User-Agent, 12:04:17 If-None-Match, etc. If None, pool headers are used. If provided, 12:04:17 these headers completely replace any pool-specific headers. 12:04:17 12:04:17 :param retries: 12:04:17 Configure the number of retries to allow before raising a 12:04:17 :class:`~urllib3.exceptions.MaxRetryError` exception. 12:04:17 12:04:17 If ``None`` (default) will retry 3 times, see ``Retry.DEFAULT``. Pass a 12:04:17 :class:`~urllib3.util.retry.Retry` object for fine-grained control 12:04:17 over different types of retries. 12:04:17 Pass an integer number to retry connection errors that many times, 12:04:17 but no other types of errors. Pass zero to never retry. 12:04:17 12:04:17 If ``False``, then retries are disabled and any exception is raised 12:04:17 immediately. Also, instead of raising a MaxRetryError on redirects, 12:04:17 the redirect response will be returned. 12:04:17 12:04:17 :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. 12:04:17 12:04:17 :param redirect: 12:04:17 If True, automatically handle redirects (status codes 301, 302, 12:04:17 303, 307, 308). Each redirect counts as a retry. Disabling retries 12:04:17 will disable redirect, too. 12:04:17 12:04:17 :param assert_same_host: 12:04:17 If ``True``, will make sure that the host of the pool requests is 12:04:17 consistent else will raise HostChangedError. When ``False``, you can 12:04:17 use the pool on an HTTP proxy and request foreign hosts. 12:04:17 12:04:17 :param timeout: 12:04:17 If specified, overrides the default timeout for this one 12:04:17 request. It may be a float (in seconds) or an instance of 12:04:17 :class:`urllib3.util.Timeout`. 12:04:17 12:04:17 :param pool_timeout: 12:04:17 If set and the pool is set to block=True, then this method will 12:04:17 block for ``pool_timeout`` seconds and raise EmptyPoolError if no 12:04:17 connection is available within the time period. 12:04:17 12:04:17 :param bool preload_content: 12:04:17 If True, the response's body will be preloaded into memory. 12:04:17 12:04:17 :param bool decode_content: 12:04:17 If True, will attempt to decode the body based on the 12:04:17 'content-encoding' header. 12:04:17 12:04:17 :param release_conn: 12:04:17 If False, then the urlopen call will not release the connection 12:04:17 back into the pool once a response is received (but will release if 12:04:17 you read the entire contents of the response such as when 12:04:17 `preload_content=True`). This is useful if you're not preloading 12:04:17 the response's content immediately. You will need to call 12:04:17 ``r.release_conn()`` on the response ``r`` to return the connection 12:04:17 back into the pool. If None, it takes the value of ``preload_content`` 12:04:17 which defaults to ``True``. 12:04:17 12:04:17 :param bool chunked: 12:04:17 If True, urllib3 will send the body using chunked transfer 12:04:17 encoding. Otherwise, urllib3 will send the body using the standard 12:04:17 content-length form. Defaults to False. 12:04:17 12:04:17 :param int body_pos: 12:04:17 Position to seek to in file-like body in the event of a retry or 12:04:17 redirect. Typically this won't need to be set because urllib3 will 12:04:17 auto-populate the value when needed. 
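The urlopen() documented above is the lowest layer these failures pass through. A minimal sketch of driving it directly with the same no-retry policy visible in the traceback (Retry(total=0, read=False)); the localhost:8183 endpoint and the XPDR-A1 path are simply copied from the log for illustration, not taken from the test code:

    import urllib3
    from urllib3.util.retry import Retry

    # One pool per host/port, much as requests does internally via its HTTPAdapter.
    pool = urllib3.HTTPConnectionPool(
        "localhost", 8183, timeout=urllib3.Timeout(connect=30, read=30)
    )
    try:
        resp = pool.urlopen(
            "GET",
            "/rests/data/transportpce-portmapping:network/nodes=XPDR-A1/node-info",
            headers={"Accept": "application/json"},
            retries=Retry(total=0, read=False),  # fail fast: no connect retries
        )
        print(resp.status)
    except urllib3.exceptions.MaxRetryError as exc:
        # With nothing listening on 8183, the refused TCP connection is wrapped
        # in NewConnectionError and surfaces here once the Retry is exhausted.
        print("connection failed:", exc.reason)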
12:04:17 """ 12:04:17 parsed_url = parse_url(url) 12:04:17 destination_scheme = parsed_url.scheme 12:04:17 12:04:17 if headers is None: 12:04:17 headers = self.headers 12:04:17 12:04:17 if not isinstance(retries, Retry): 12:04:17 retries = Retry.from_int(retries, redirect=redirect, default=self.retries) 12:04:17 12:04:17 if release_conn is None: 12:04:17 release_conn = preload_content 12:04:17 12:04:17 # Check host 12:04:17 if assert_same_host and not self.is_same_host(url): 12:04:17 raise HostChangedError(self, url, retries) 12:04:17 12:04:17 # Ensure that the URL we're connecting to is properly encoded 12:04:17 if url.startswith("/"): 12:04:17 url = to_str(_encode_target(url)) 12:04:17 else: 12:04:17 url = to_str(parsed_url.url) 12:04:17 12:04:17 conn = None 12:04:17 12:04:17 # Track whether `conn` needs to be released before 12:04:17 # returning/raising/recursing. Update this variable if necessary, and 12:04:17 # leave `release_conn` constant throughout the function. That way, if 12:04:17 # the function recurses, the original value of `release_conn` will be 12:04:17 # passed down into the recursive call, and its value will be respected. 12:04:17 # 12:04:17 # See issue #651 [1] for details. 12:04:17 # 12:04:17 # [1] 12:04:17 release_this_conn = release_conn 12:04:17 12:04:17 http_tunnel_required = connection_requires_http_tunnel( 12:04:17 self.proxy, self.proxy_config, destination_scheme 12:04:17 ) 12:04:17 12:04:17 # Merge the proxy headers. Only done when not using HTTP CONNECT. We 12:04:17 # have to copy the headers dict so we can safely change it without those 12:04:17 # changes being reflected in anyone else's copy. 12:04:17 if not http_tunnel_required: 12:04:17 headers = headers.copy() # type: ignore[attr-defined] 12:04:17 headers.update(self.proxy_headers) # type: ignore[union-attr] 12:04:17 12:04:17 # Must keep the exception bound to a separate variable or else Python 3 12:04:17 # complains about UnboundLocalError. 12:04:17 err = None 12:04:17 12:04:17 # Keep track of whether we cleanly exited the except block. This 12:04:17 # ensures we do proper cleanup in finally. 12:04:17 clean_exit = False 12:04:17 12:04:17 # Rewind body position, if needed. Record current position 12:04:17 # for future rewinds in the event of a redirect/retry. 12:04:17 body_pos = set_file_position(body, body_pos) 12:04:17 12:04:17 try: 12:04:17 # Request a connection from the queue. 12:04:17 timeout_obj = self._get_timeout(timeout) 12:04:17 conn = self._get_conn(timeout=pool_timeout) 12:04:17 12:04:17 conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] 12:04:17 12:04:17 # Is this a closed/new connection that requires CONNECT tunnelling? 12:04:17 if self.proxy is not None and http_tunnel_required and conn.is_closed: 12:04:17 try: 12:04:17 self._prepare_proxy(conn) 12:04:17 except (BaseSSLError, OSError, SocketTimeout) as e: 12:04:17 self._raise_timeout( 12:04:17 err=e, url=self.proxy.url, timeout_value=conn.timeout 12:04:17 ) 12:04:17 raise 12:04:17 12:04:17 # If we're going to release the connection in ``finally:``, then 12:04:17 # the response doesn't need to know about the connection. Otherwise 12:04:17 # it will also try to release it and we'll have a double-release 12:04:17 # mess. 
12:04:17 response_conn = conn if not release_conn else None 12:04:17 12:04:17 # Make the request on the HTTPConnection object 12:04:17 > response = self._make_request( 12:04:17 conn, 12:04:17 method, 12:04:17 url, 12:04:17 timeout=timeout_obj, 12:04:17 body=body, 12:04:17 headers=headers, 12:04:17 chunked=chunked, 12:04:17 retries=retries, 12:04:17 response_conn=response_conn, 12:04:17 preload_content=preload_content, 12:04:17 decode_content=decode_content, 12:04:17 **response_kw, 12:04:17 ) 12:04:17 12:04:17 ../.tox/tests221/lib/python3.11/site-packages/urllib3/connectionpool.py:787: 12:04:17 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 12:04:17 ../.tox/tests221/lib/python3.11/site-packages/urllib3/connectionpool.py:493: in _make_request 12:04:17 conn.request( 12:04:17 ../.tox/tests221/lib/python3.11/site-packages/urllib3/connection.py:494: in request 12:04:17 self.endheaders() 12:04:17 /opt/pyenv/versions/3.11.10/lib/python3.11/http/client.py:1298: in endheaders 12:04:17 self._send_output(message_body, encode_chunked=encode_chunked) 12:04:17 /opt/pyenv/versions/3.11.10/lib/python3.11/http/client.py:1058: in _send_output 12:04:17 self.send(msg) 12:04:17 /opt/pyenv/versions/3.11.10/lib/python3.11/http/client.py:996: in send 12:04:17 self.connect() 12:04:17 ../.tox/tests221/lib/python3.11/site-packages/urllib3/connection.py:325: in connect 12:04:17 self.sock = self._new_conn() 12:04:17 ^^^^^^^^^^^^^^^^ 12:04:17 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 12:04:17 12:04:17 self = 12:04:17 12:04:17 def _new_conn(self) -> socket.socket: 12:04:17 """Establish a socket connection and set nodelay settings on it. 12:04:17 12:04:17 :return: New socket connection. 12:04:17 """ 12:04:17 try: 12:04:17 sock = connection.create_connection( 12:04:17 (self._dns_host, self.port), 12:04:17 self.timeout, 12:04:17 source_address=self.source_address, 12:04:17 socket_options=self.socket_options, 12:04:17 ) 12:04:17 except socket.gaierror as e: 12:04:17 raise NameResolutionError(self.host, self, e) from e 12:04:17 except SocketTimeout as e: 12:04:17 raise ConnectTimeoutError( 12:04:17 self, 12:04:17 f"Connection to {self.host} timed out. (connect timeout={self.timeout})", 12:04:17 ) from e 12:04:17 12:04:17 except OSError as e: 12:04:17 > raise NewConnectionError( 12:04:17 self, f"Failed to establish a new connection: {e}" 12:04:17 ) from e 12:04:17 E urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused 12:04:17 12:04:17 ../.tox/tests221/lib/python3.11/site-packages/urllib3/connection.py:213: NewConnectionError 12:04:17 12:04:17 The above exception was the direct cause of the following exception: 12:04:17 12:04:17 self = 12:04:17 request = , stream = False 12:04:17 timeout = Timeout(connect=30, read=30, total=None), verify = True, cert = None 12:04:17 proxies = OrderedDict() 12:04:17 12:04:17 def send( 12:04:17 self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 12:04:17 ): 12:04:17 """Sends PreparedRequest object. Returns Response object. 12:04:17 12:04:17 :param request: The :class:`PreparedRequest ` being sent. 12:04:17 :param stream: (optional) Whether to stream the request content. 12:04:17 :param timeout: (optional) How long to wait for the server to send 12:04:17 data before giving up, as a float, or a :ref:`(connect timeout, 12:04:17 read timeout) ` tuple. 
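Editor's note: the frames above show the request dying inside ``HTTPConnection._new_conn`` with ``[Errno 111] Connection refused`` because nothing is listening on localhost:8183. A self-contained reproduction of that failure mode, reusing the host, port and path from the log:

from urllib3 import HTTPConnectionPool
from urllib3.exceptions import MaxRetryError, NewConnectionError
from urllib3.util.retry import Retry

pool = HTTPConnectionPool("localhost", 8183, timeout=5.0, retries=Retry(total=0))
try:
    pool.urlopen(
        "GET",
        "/rests/data/transportpce-portmapping:network/nodes=XPDR-A1/node-info",
    )
except MaxRetryError as exc:
    # The retry budget (total=0) is spent on the first connection error,
    # so the NewConnectionError ends up as the MaxRetryError's reason.
    assert isinstance(exc.reason, NewConnectionError)
    print("controller not reachable:", exc.reason)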
12:04:17 :type timeout: float or tuple or urllib3 Timeout object 12:04:17 :param verify: (optional) Either a boolean, in which case it controls whether 12:04:17 we verify the server's TLS certificate, or a string, in which case it 12:04:17 must be a path to a CA bundle to use 12:04:17 :param cert: (optional) Any user-provided SSL certificate to be trusted. 12:04:17 :param proxies: (optional) The proxies dictionary to apply to the request. 12:04:17 :rtype: requests.Response 12:04:17 """ 12:04:17 12:04:17 try: 12:04:17 conn = self.get_connection_with_tls_context( 12:04:17 request, verify, proxies=proxies, cert=cert 12:04:17 ) 12:04:17 except LocationValueError as e: 12:04:17 raise InvalidURL(e, request=request) 12:04:17 12:04:17 self.cert_verify(conn, request.url, verify, cert) 12:04:17 url = self.request_url(request, proxies) 12:04:17 self.add_headers( 12:04:17 request, 12:04:17 stream=stream, 12:04:17 timeout=timeout, 12:04:17 verify=verify, 12:04:17 cert=cert, 12:04:17 proxies=proxies, 12:04:17 ) 12:04:17 12:04:17 chunked = not (request.body is None or "Content-Length" in request.headers) 12:04:17 12:04:17 if isinstance(timeout, tuple): 12:04:17 try: 12:04:17 connect, read = timeout 12:04:17 timeout = TimeoutSauce(connect=connect, read=read) 12:04:17 except ValueError: 12:04:17 raise ValueError( 12:04:17 f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 12:04:17 f"or a single float to set both timeouts to the same value." 12:04:17 ) 12:04:17 elif isinstance(timeout, TimeoutSauce): 12:04:17 pass 12:04:17 else: 12:04:17 timeout = TimeoutSauce(connect=timeout, read=timeout) 12:04:17 12:04:17 try: 12:04:17 > resp = conn.urlopen( 12:04:17 method=request.method, 12:04:17 url=url, 12:04:17 body=request.body, 12:04:17 headers=request.headers, 12:04:17 redirect=False, 12:04:17 assert_same_host=False, 12:04:17 preload_content=False, 12:04:17 decode_content=False, 12:04:17 retries=self.max_retries, 12:04:17 timeout=timeout, 12:04:17 chunked=chunked, 12:04:17 ) 12:04:17 12:04:17 ../.tox/tests221/lib/python3.11/site-packages/requests/adapters.py:667: 12:04:17 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 12:04:17 ../.tox/tests221/lib/python3.11/site-packages/urllib3/connectionpool.py:841: in urlopen 12:04:17 retries = retries.increment( 12:04:17 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 12:04:17 12:04:17 self = Retry(total=0, connect=None, read=False, redirect=None, status=None) 12:04:17 method = 'GET' 12:04:17 url = '/rests/data/transportpce-portmapping:network/nodes=XPDR-A1/node-info' 12:04:17 response = None 12:04:17 error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused') 12:04:17 _pool = 12:04:17 _stacktrace = 12:04:17 12:04:17 def increment( 12:04:17 self, 12:04:17 method: str | None = None, 12:04:17 url: str | None = None, 12:04:17 response: BaseHTTPResponse | None = None, 12:04:17 error: Exception | None = None, 12:04:17 _pool: ConnectionPool | None = None, 12:04:17 _stacktrace: TracebackType | None = None, 12:04:17 ) -> Self: 12:04:17 """Return a new Retry object with incremented retry counters. 12:04:17 12:04:17 :param response: A response object, or None, if the server did not 12:04:17 return a response. 12:04:17 :type response: :class:`~urllib3.response.BaseHTTPResponse` 12:04:17 :param Exception error: An error encountered during the request, or 12:04:17 None if the response was received successfully. 12:04:17 12:04:17 :return: A new ``Retry`` object. 
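Editor's note: the adapter code above converts a ``(connect, read)`` tuple into urllib3's Timeout object, which is why the captured call shows ``Timeout(connect=30, read=30, total=None)``. A short illustration of the two accepted timeout forms with requests, using the host, port and path from the failing call:

import requests

url = ("http://localhost:8183/rests/data/"
       "transportpce-portmapping:network/nodes=XPDR-A1/node-info")
try:
    requests.get(url, timeout=30)        # one float: connect and read both 30 s
    requests.get(url, timeout=(30, 30))  # explicit (connect, read) tuple
except requests.exceptions.ConnectionError as exc:
    print("no controller listening on 8183:", exc)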
12:04:17 """ 12:04:17 if self.total is False and error: 12:04:17 # Disabled, indicate to re-raise the error. 12:04:17 raise reraise(type(error), error, _stacktrace) 12:04:17 12:04:17 total = self.total 12:04:17 if total is not None: 12:04:17 total -= 1 12:04:17 12:04:17 connect = self.connect 12:04:17 read = self.read 12:04:17 redirect = self.redirect 12:04:17 status_count = self.status 12:04:17 other = self.other 12:04:17 cause = "unknown" 12:04:17 status = None 12:04:17 redirect_location = None 12:04:17 12:04:17 if error and self._is_connection_error(error): 12:04:17 # Connect retry? 12:04:17 if connect is False: 12:04:17 raise reraise(type(error), error, _stacktrace) 12:04:17 elif connect is not None: 12:04:17 connect -= 1 12:04:17 12:04:17 elif error and self._is_read_error(error): 12:04:17 # Read retry? 12:04:17 if read is False or method is None or not self._is_method_retryable(method): 12:04:17 raise reraise(type(error), error, _stacktrace) 12:04:17 elif read is not None: 12:04:17 read -= 1 12:04:17 12:04:17 elif error: 12:04:17 # Other retry? 12:04:17 if other is not None: 12:04:17 other -= 1 12:04:17 12:04:17 elif response and response.get_redirect_location(): 12:04:17 # Redirect retry? 12:04:17 if redirect is not None: 12:04:17 redirect -= 1 12:04:17 cause = "too many redirects" 12:04:17 response_redirect_location = response.get_redirect_location() 12:04:17 if response_redirect_location: 12:04:17 redirect_location = response_redirect_location 12:04:17 status = response.status 12:04:17 12:04:17 else: 12:04:17 # Incrementing because of a server error like a 500 in 12:04:17 # status_forcelist and the given method is in the allowed_methods 12:04:17 cause = ResponseError.GENERIC_ERROR 12:04:17 if response and response.status: 12:04:17 if status_count is not None: 12:04:17 status_count -= 1 12:04:17 cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status) 12:04:17 status = response.status 12:04:17 12:04:17 history = self.history + ( 12:04:17 RequestHistory(method, url, error, status, redirect_location), 12:04:17 ) 12:04:17 12:04:17 new_retry = self.new( 12:04:17 total=total, 12:04:17 connect=connect, 12:04:17 read=read, 12:04:17 redirect=redirect, 12:04:17 status=status_count, 12:04:17 other=other, 12:04:17 history=history, 12:04:17 ) 12:04:17 12:04:17 if new_retry.is_exhausted(): 12:04:17 reason = error or ResponseError(cause) 12:04:17 > raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type] 12:04:17 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 12:04:17 E urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=8183): Max retries exceeded with url: /rests/data/transportpce-portmapping:network/nodes=XPDR-A1/node-info (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 12:04:17 12:04:17 ../.tox/tests221/lib/python3.11/site-packages/urllib3/util/retry.py:519: MaxRetryError 12:04:17 12:04:17 During handling of the above exception, another exception occurred: 12:04:17 12:04:17 self = 12:04:17 12:04:17 def test_32_xpdr_device_not_connected(self): 12:04:17 > response = test_utils.get_portmapping_node_attr("XPDR-A1", "node-info", None) 12:04:17 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 12:04:17 12:04:17 transportpce_tests/2.2.1/test01_portmapping.py:374: 12:04:17 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 12:04:17 transportpce_tests/common/test_utils.py:492: in get_portmapping_node_attr 12:04:17 response = 
get_request(target_url) 12:04:17 ^^^^^^^^^^^^^^^^^^^^^^^ 12:04:17 transportpce_tests/common/test_utils.py:116: in get_request 12:04:17 return requests.request( 12:04:17 ../.tox/tests221/lib/python3.11/site-packages/requests/api.py:59: in request 12:04:17 return session.request(method=method, url=url, **kwargs) 12:04:17 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 12:04:17 ../.tox/tests221/lib/python3.11/site-packages/requests/sessions.py:589: in request 12:04:17 resp = self.send(prep, **send_kwargs) 12:04:17 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 12:04:17 ../.tox/tests221/lib/python3.11/site-packages/requests/sessions.py:703: in send 12:04:17 r = adapter.send(request, **kwargs) 12:04:17 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 12:04:17 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 12:04:17 12:04:17 self = 12:04:17 request = , stream = False 12:04:17 timeout = Timeout(connect=30, read=30, total=None), verify = True, cert = None 12:04:17 proxies = OrderedDict() 12:04:17 12:04:17 def send( 12:04:17 self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 12:04:17 ): 12:04:17 """Sends PreparedRequest object. Returns Response object. 12:04:17 12:04:17 :param request: The :class:`PreparedRequest ` being sent. 12:04:17 :param stream: (optional) Whether to stream the request content. 12:04:17 :param timeout: (optional) How long to wait for the server to send 12:04:17 data before giving up, as a float, or a :ref:`(connect timeout, 12:04:17 read timeout) ` tuple. 12:04:17 :type timeout: float or tuple or urllib3 Timeout object 12:04:17 :param verify: (optional) Either a boolean, in which case it controls whether 12:04:17 we verify the server's TLS certificate, or a string, in which case it 12:04:17 must be a path to a CA bundle to use 12:04:17 :param cert: (optional) Any user-provided SSL certificate to be trusted. 12:04:17 :param proxies: (optional) The proxies dictionary to apply to the request. 12:04:17 :rtype: requests.Response 12:04:17 """ 12:04:17 12:04:17 try: 12:04:17 conn = self.get_connection_with_tls_context( 12:04:17 request, verify, proxies=proxies, cert=cert 12:04:17 ) 12:04:17 except LocationValueError as e: 12:04:17 raise InvalidURL(e, request=request) 12:04:17 12:04:17 self.cert_verify(conn, request.url, verify, cert) 12:04:17 url = self.request_url(request, proxies) 12:04:17 self.add_headers( 12:04:17 request, 12:04:17 stream=stream, 12:04:17 timeout=timeout, 12:04:17 verify=verify, 12:04:17 cert=cert, 12:04:17 proxies=proxies, 12:04:17 ) 12:04:17 12:04:17 chunked = not (request.body is None or "Content-Length" in request.headers) 12:04:17 12:04:17 if isinstance(timeout, tuple): 12:04:17 try: 12:04:17 connect, read = timeout 12:04:17 timeout = TimeoutSauce(connect=connect, read=read) 12:04:17 except ValueError: 12:04:17 raise ValueError( 12:04:17 f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 12:04:17 f"or a single float to set both timeouts to the same value." 
12:04:17 ) 12:04:17 elif isinstance(timeout, TimeoutSauce): 12:04:17 pass 12:04:17 else: 12:04:17 timeout = TimeoutSauce(connect=timeout, read=timeout) 12:04:17 12:04:17 try: 12:04:17 resp = conn.urlopen( 12:04:17 method=request.method, 12:04:17 url=url, 12:04:17 body=request.body, 12:04:17 headers=request.headers, 12:04:17 redirect=False, 12:04:17 assert_same_host=False, 12:04:17 preload_content=False, 12:04:17 decode_content=False, 12:04:17 retries=self.max_retries, 12:04:17 timeout=timeout, 12:04:17 chunked=chunked, 12:04:17 ) 12:04:17 12:04:17 except (ProtocolError, OSError) as err: 12:04:17 raise ConnectionError(err, request=request) 12:04:17 12:04:17 except MaxRetryError as e: 12:04:17 if isinstance(e.reason, ConnectTimeoutError): 12:04:17 # TODO: Remove this in 3.0.0: see #2811 12:04:17 if not isinstance(e.reason, NewConnectionError): 12:04:17 raise ConnectTimeout(e, request=request) 12:04:17 12:04:17 if isinstance(e.reason, ResponseError): 12:04:17 raise RetryError(e, request=request) 12:04:17 12:04:17 if isinstance(e.reason, _ProxyError): 12:04:17 raise ProxyError(e, request=request) 12:04:17 12:04:17 if isinstance(e.reason, _SSLError): 12:04:17 # This branch is for urllib3 v1.22 and later. 12:04:17 raise SSLError(e, request=request) 12:04:17 12:04:17 > raise ConnectionError(e, request=request) 12:04:17 E requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=8183): Max retries exceeded with url: /rests/data/transportpce-portmapping:network/nodes=XPDR-A1/node-info (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 12:04:17 12:04:17 ../.tox/tests221/lib/python3.11/site-packages/requests/adapters.py:700: ConnectionError 12:04:17 ----------------------------- Captured stdout call ----------------------------- 12:04:17 execution of test_32_xpdr_device_not_connected 12:04:17 _______ TransportPCEPortMappingTesting.test_33_rdm_device_disconnection ________ 12:04:17 12:04:17 self = 12:04:17 12:04:17 def _new_conn(self) -> socket.socket: 12:04:17 """Establish a socket connection and set nodelay settings on it. 12:04:17 12:04:17 :return: New socket connection. 12:04:17 """ 12:04:17 try: 12:04:17 > sock = connection.create_connection( 12:04:17 (self._dns_host, self.port), 12:04:17 self.timeout, 12:04:17 source_address=self.source_address, 12:04:17 socket_options=self.socket_options, 12:04:17 ) 12:04:17 12:04:17 ../.tox/tests221/lib/python3.11/site-packages/urllib3/connection.py:198: 12:04:17 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 12:04:17 ../.tox/tests221/lib/python3.11/site-packages/urllib3/util/connection.py:85: in create_connection 12:04:17 raise err 12:04:17 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 12:04:17 12:04:17 address = ('localhost', 8183), timeout = 30, source_address = None 12:04:17 socket_options = [(6, 1, 1)] 12:04:17 12:04:17 def create_connection( 12:04:17 address: tuple[str, int], 12:04:17 timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 12:04:17 source_address: tuple[str, int] | None = None, 12:04:17 socket_options: _TYPE_SOCKET_OPTIONS | None = None, 12:04:17 ) -> socket.socket: 12:04:17 """Connect to *address* and return the socket object. 12:04:17 12:04:17 Convenience function. Connect to *address* (a 2-tuple ``(host, 12:04:17 port)``) and return the socket object. Passing the optional 12:04:17 *timeout* parameter will set the timeout on the socket instance 12:04:17 before attempting to connect. 
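Editor's note: as the adapter code above shows, a MaxRetryError whose reason is a NewConnectionError surfaces to callers as ``requests.exceptions.ConnectionError`` — exactly what test_32 and test_33 hit in this run. The sketch below shows how a harness could wait for the controller to accept connections before issuing such calls; ``wait_for_controller`` and its parameters are illustrative only and are not part of transportpce's test_utils.

import time
import requests

def wait_for_controller(base="http://localhost:8183", attempts=30, delay=2.0):
    """Poll a RESTCONF base URL until it answers or the attempts run out (sketch)."""
    for _ in range(attempts):
        try:
            # Any HTTP status (even 401/404) proves the port accepts connections.
            requests.get(base + "/rests/data", auth=("admin", "admin"),
                         timeout=(5, 5))
            return True
        except requests.exceptions.ConnectionError:
            time.sleep(delay)
    return False

if not wait_for_controller():
    print("controller never came up on localhost:8183")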
If no *timeout* is supplied, the 12:04:17 global default timeout setting returned by :func:`socket.getdefaulttimeout` 12:04:17 is used. If *source_address* is set it must be a tuple of (host, port) 12:04:17 for the socket to bind as a source address before making the connection. 12:04:17 An host of '' or port 0 tells the OS to use the default. 12:04:17 """ 12:04:17 12:04:17 host, port = address 12:04:17 if host.startswith("["): 12:04:17 host = host.strip("[]") 12:04:17 err = None 12:04:17 12:04:17 # Using the value from allowed_gai_family() in the context of getaddrinfo lets 12:04:17 # us select whether to work with IPv4 DNS records, IPv6 records, or both. 12:04:17 # The original create_connection function always returns all records. 12:04:17 family = allowed_gai_family() 12:04:17 12:04:17 try: 12:04:17 host.encode("idna") 12:04:17 except UnicodeError: 12:04:17 raise LocationParseError(f"'{host}', label empty or too long") from None 12:04:17 12:04:17 for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM): 12:04:17 af, socktype, proto, canonname, sa = res 12:04:17 sock = None 12:04:17 try: 12:04:17 sock = socket.socket(af, socktype, proto) 12:04:17 12:04:17 # If provided, set socket level options before connecting. 12:04:17 _set_socket_options(sock, socket_options) 12:04:17 12:04:17 if timeout is not _DEFAULT_TIMEOUT: 12:04:17 sock.settimeout(timeout) 12:04:17 if source_address: 12:04:17 sock.bind(source_address) 12:04:17 > sock.connect(sa) 12:04:17 E ConnectionRefusedError: [Errno 111] Connection refused 12:04:17 12:04:17 ../.tox/tests221/lib/python3.11/site-packages/urllib3/util/connection.py:73: ConnectionRefusedError 12:04:17 12:04:17 The above exception was the direct cause of the following exception: 12:04:17 12:04:17 self = 12:04:17 method = 'DELETE' 12:04:17 url = '/rests/data/network-topology:network-topology/topology=topology-netconf/node=ROADM-A1' 12:04:17 body = None 12:04:17 headers = {'User-Agent': 'python-requests/2.32.4', 'Accept-Encoding': 'gzip, deflate', 'Accept': 'application/json', 'Connection': 'keep-alive', 'Content-Type': 'application/json', 'Content-Length': '0', 'Authorization': 'Basic YWRtaW46YWRtaW4='} 12:04:17 retries = Retry(total=0, connect=None, read=False, redirect=None, status=None) 12:04:17 redirect = False, assert_same_host = False 12:04:17 timeout = Timeout(connect=30, read=30, total=None), pool_timeout = None 12:04:17 release_conn = False, chunked = False, body_pos = None, preload_content = False 12:04:17 decode_content = False, response_kw = {} 12:04:17 parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/rests/data/network-topology:network-topology/topology=topology-netconf/node=ROADM-A1', query=None, fragment=None) 12:04:17 destination_scheme = None, conn = None, release_this_conn = True 12:04:17 http_tunnel_required = False, err = None, clean_exit = False 12:04:17 12:04:17 def urlopen( # type: ignore[override] 12:04:17 self, 12:04:17 method: str, 12:04:17 url: str, 12:04:17 body: _TYPE_BODY | None = None, 12:04:17 headers: typing.Mapping[str, str] | None = None, 12:04:17 retries: Retry | bool | int | None = None, 12:04:17 redirect: bool = True, 12:04:17 assert_same_host: bool = True, 12:04:17 timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 12:04:17 pool_timeout: int | None = None, 12:04:17 release_conn: bool | None = None, 12:04:17 chunked: bool = False, 12:04:17 body_pos: _TYPE_BODY_POSITION | None = None, 12:04:17 preload_content: bool = True, 12:04:17 decode_content: bool = True, 12:04:17 **response_kw: typing.Any, 
12:04:17 ) -> BaseHTTPResponse: 12:04:17 """ 12:04:17 Get a connection from the pool and perform an HTTP request. This is the 12:04:17 lowest level call for making a request, so you'll need to specify all 12:04:17 the raw details. 12:04:17 12:04:17 .. note:: 12:04:17 12:04:17 More commonly, it's appropriate to use a convenience method 12:04:17 such as :meth:`request`. 12:04:17 12:04:17 .. note:: 12:04:17 12:04:17 `release_conn` will only behave as expected if 12:04:17 `preload_content=False` because we want to make 12:04:17 `preload_content=False` the default behaviour someday soon without 12:04:17 breaking backwards compatibility. 12:04:17 12:04:17 :param method: 12:04:17 HTTP request method (such as GET, POST, PUT, etc.) 12:04:17 12:04:17 :param url: 12:04:17 The URL to perform the request on. 12:04:17 12:04:17 :param body: 12:04:17 Data to send in the request body, either :class:`str`, :class:`bytes`, 12:04:17 an iterable of :class:`str`/:class:`bytes`, or a file-like object. 12:04:17 12:04:17 :param headers: 12:04:17 Dictionary of custom headers to send, such as User-Agent, 12:04:17 If-None-Match, etc. If None, pool headers are used. If provided, 12:04:17 these headers completely replace any pool-specific headers. 12:04:17 12:04:17 :param retries: 12:04:17 Configure the number of retries to allow before raising a 12:04:17 :class:`~urllib3.exceptions.MaxRetryError` exception. 12:04:17 12:04:17 If ``None`` (default) will retry 3 times, see ``Retry.DEFAULT``. Pass a 12:04:17 :class:`~urllib3.util.retry.Retry` object for fine-grained control 12:04:17 over different types of retries. 12:04:17 Pass an integer number to retry connection errors that many times, 12:04:17 but no other types of errors. Pass zero to never retry. 12:04:17 12:04:17 If ``False``, then retries are disabled and any exception is raised 12:04:17 immediately. Also, instead of raising a MaxRetryError on redirects, 12:04:17 the redirect response will be returned. 12:04:17 12:04:17 :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. 12:04:17 12:04:17 :param redirect: 12:04:17 If True, automatically handle redirects (status codes 301, 302, 12:04:17 303, 307, 308). Each redirect counts as a retry. Disabling retries 12:04:17 will disable redirect, too. 12:04:17 12:04:17 :param assert_same_host: 12:04:17 If ``True``, will make sure that the host of the pool requests is 12:04:17 consistent else will raise HostChangedError. When ``False``, you can 12:04:17 use the pool on an HTTP proxy and request foreign hosts. 12:04:17 12:04:17 :param timeout: 12:04:17 If specified, overrides the default timeout for this one 12:04:17 request. It may be a float (in seconds) or an instance of 12:04:17 :class:`urllib3.util.Timeout`. 12:04:17 12:04:17 :param pool_timeout: 12:04:17 If set and the pool is set to block=True, then this method will 12:04:17 block for ``pool_timeout`` seconds and raise EmptyPoolError if no 12:04:17 connection is available within the time period. 12:04:17 12:04:17 :param bool preload_content: 12:04:17 If True, the response's body will be preloaded into memory. 12:04:17 12:04:17 :param bool decode_content: 12:04:17 If True, will attempt to decode the body based on the 12:04:17 'content-encoding' header. 12:04:17 12:04:17 :param release_conn: 12:04:17 If False, then the urlopen call will not release the connection 12:04:17 back into the pool once a response is received (but will release if 12:04:17 you read the entire contents of the response such as when 12:04:17 `preload_content=True`). 
This is useful if you're not preloading 12:04:17 the response's content immediately. You will need to call 12:04:17 ``r.release_conn()`` on the response ``r`` to return the connection 12:04:17 back into the pool. If None, it takes the value of ``preload_content`` 12:04:17 which defaults to ``True``. 12:04:17 12:04:17 :param bool chunked: 12:04:17 If True, urllib3 will send the body using chunked transfer 12:04:17 encoding. Otherwise, urllib3 will send the body using the standard 12:04:17 content-length form. Defaults to False. 12:04:17 12:04:17 :param int body_pos: 12:04:17 Position to seek to in file-like body in the event of a retry or 12:04:17 redirect. Typically this won't need to be set because urllib3 will 12:04:17 auto-populate the value when needed. 12:04:17 """ 12:04:17 parsed_url = parse_url(url) 12:04:17 destination_scheme = parsed_url.scheme 12:04:17 12:04:17 if headers is None: 12:04:17 headers = self.headers 12:04:17 12:04:17 if not isinstance(retries, Retry): 12:04:17 retries = Retry.from_int(retries, redirect=redirect, default=self.retries) 12:04:17 12:04:17 if release_conn is None: 12:04:17 release_conn = preload_content 12:04:17 12:04:17 # Check host 12:04:17 if assert_same_host and not self.is_same_host(url): 12:04:17 raise HostChangedError(self, url, retries) 12:04:17 12:04:17 # Ensure that the URL we're connecting to is properly encoded 12:04:17 if url.startswith("/"): 12:04:17 url = to_str(_encode_target(url)) 12:04:17 else: 12:04:17 url = to_str(parsed_url.url) 12:04:17 12:04:17 conn = None 12:04:17 12:04:17 # Track whether `conn` needs to be released before 12:04:17 # returning/raising/recursing. Update this variable if necessary, and 12:04:17 # leave `release_conn` constant throughout the function. That way, if 12:04:17 # the function recurses, the original value of `release_conn` will be 12:04:17 # passed down into the recursive call, and its value will be respected. 12:04:17 # 12:04:17 # See issue #651 [1] for details. 12:04:17 # 12:04:17 # [1] 12:04:17 release_this_conn = release_conn 12:04:17 12:04:17 http_tunnel_required = connection_requires_http_tunnel( 12:04:17 self.proxy, self.proxy_config, destination_scheme 12:04:17 ) 12:04:17 12:04:17 # Merge the proxy headers. Only done when not using HTTP CONNECT. We 12:04:17 # have to copy the headers dict so we can safely change it without those 12:04:17 # changes being reflected in anyone else's copy. 12:04:17 if not http_tunnel_required: 12:04:17 headers = headers.copy() # type: ignore[attr-defined] 12:04:17 headers.update(self.proxy_headers) # type: ignore[union-attr] 12:04:17 12:04:17 # Must keep the exception bound to a separate variable or else Python 3 12:04:17 # complains about UnboundLocalError. 12:04:17 err = None 12:04:17 12:04:17 # Keep track of whether we cleanly exited the except block. This 12:04:17 # ensures we do proper cleanup in finally. 12:04:17 clean_exit = False 12:04:17 12:04:17 # Rewind body position, if needed. Record current position 12:04:17 # for future rewinds in the event of a redirect/retry. 12:04:17 body_pos = set_file_position(body, body_pos) 12:04:17 12:04:17 try: 12:04:17 # Request a connection from the queue. 12:04:17 timeout_obj = self._get_timeout(timeout) 12:04:17 conn = self._get_conn(timeout=pool_timeout) 12:04:17 12:04:17 conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] 12:04:17 12:04:17 # Is this a closed/new connection that requires CONNECT tunnelling? 
12:04:17 if self.proxy is not None and http_tunnel_required and conn.is_closed: 12:04:17 try: 12:04:17 self._prepare_proxy(conn) 12:04:17 except (BaseSSLError, OSError, SocketTimeout) as e: 12:04:17 self._raise_timeout( 12:04:17 err=e, url=self.proxy.url, timeout_value=conn.timeout 12:04:17 ) 12:04:17 raise 12:04:17 12:04:17 # If we're going to release the connection in ``finally:``, then 12:04:17 # the response doesn't need to know about the connection. Otherwise 12:04:17 # it will also try to release it and we'll have a double-release 12:04:17 # mess. 12:04:17 response_conn = conn if not release_conn else None 12:04:17 12:04:17 # Make the request on the HTTPConnection object 12:04:17 > response = self._make_request( 12:04:17 conn, 12:04:17 method, 12:04:17 url, 12:04:17 timeout=timeout_obj, 12:04:17 body=body, 12:04:17 headers=headers, 12:04:17 chunked=chunked, 12:04:17 retries=retries, 12:04:17 response_conn=response_conn, 12:04:17 preload_content=preload_content, 12:04:17 decode_content=decode_content, 12:04:17 **response_kw, 12:04:17 ) 12:04:17 12:04:17 ../.tox/tests221/lib/python3.11/site-packages/urllib3/connectionpool.py:787: 12:04:17 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 12:04:17 ../.tox/tests221/lib/python3.11/site-packages/urllib3/connectionpool.py:493: in _make_request 12:04:17 conn.request( 12:04:17 ../.tox/tests221/lib/python3.11/site-packages/urllib3/connection.py:494: in request 12:04:17 self.endheaders() 12:04:17 /opt/pyenv/versions/3.11.10/lib/python3.11/http/client.py:1298: in endheaders 12:04:17 self._send_output(message_body, encode_chunked=encode_chunked) 12:04:17 /opt/pyenv/versions/3.11.10/lib/python3.11/http/client.py:1058: in _send_output 12:04:17 self.send(msg) 12:04:17 /opt/pyenv/versions/3.11.10/lib/python3.11/http/client.py:996: in send 12:04:17 self.connect() 12:04:17 ../.tox/tests221/lib/python3.11/site-packages/urllib3/connection.py:325: in connect 12:04:17 self.sock = self._new_conn() 12:04:17 ^^^^^^^^^^^^^^^^ 12:04:17 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 12:04:17 12:04:17 self = 12:04:17 12:04:17 def _new_conn(self) -> socket.socket: 12:04:17 """Establish a socket connection and set nodelay settings on it. 12:04:17 12:04:17 :return: New socket connection. 12:04:17 """ 12:04:17 try: 12:04:17 sock = connection.create_connection( 12:04:17 (self._dns_host, self.port), 12:04:17 self.timeout, 12:04:17 source_address=self.source_address, 12:04:17 socket_options=self.socket_options, 12:04:17 ) 12:04:17 except socket.gaierror as e: 12:04:17 raise NameResolutionError(self.host, self, e) from e 12:04:17 except SocketTimeout as e: 12:04:17 raise ConnectTimeoutError( 12:04:17 self, 12:04:17 f"Connection to {self.host} timed out. 
(connect timeout={self.timeout})", 12:04:17 ) from e 12:04:17 12:04:17 except OSError as e: 12:04:17 > raise NewConnectionError( 12:04:17 self, f"Failed to establish a new connection: {e}" 12:04:17 ) from e 12:04:17 E urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused 12:04:17 12:04:17 ../.tox/tests221/lib/python3.11/site-packages/urllib3/connection.py:213: NewConnectionError 12:04:17 12:04:17 The above exception was the direct cause of the following exception: 12:04:17 12:04:17 self = 12:04:17 request = , stream = False 12:04:17 timeout = Timeout(connect=30, read=30, total=None), verify = True, cert = None 12:04:17 proxies = OrderedDict() 12:04:17 12:04:17 def send( 12:04:17 self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 12:04:17 ): 12:04:17 """Sends PreparedRequest object. Returns Response object. 12:04:17 12:04:17 :param request: The :class:`PreparedRequest ` being sent. 12:04:17 :param stream: (optional) Whether to stream the request content. 12:04:17 :param timeout: (optional) How long to wait for the server to send 12:04:17 data before giving up, as a float, or a :ref:`(connect timeout, 12:04:17 read timeout) ` tuple. 12:04:17 :type timeout: float or tuple or urllib3 Timeout object 12:04:17 :param verify: (optional) Either a boolean, in which case it controls whether 12:04:17 we verify the server's TLS certificate, or a string, in which case it 12:04:17 must be a path to a CA bundle to use 12:04:17 :param cert: (optional) Any user-provided SSL certificate to be trusted. 12:04:17 :param proxies: (optional) The proxies dictionary to apply to the request. 12:04:17 :rtype: requests.Response 12:04:17 """ 12:04:17 12:04:17 try: 12:04:17 conn = self.get_connection_with_tls_context( 12:04:17 request, verify, proxies=proxies, cert=cert 12:04:17 ) 12:04:17 except LocationValueError as e: 12:04:17 raise InvalidURL(e, request=request) 12:04:17 12:04:17 self.cert_verify(conn, request.url, verify, cert) 12:04:17 url = self.request_url(request, proxies) 12:04:17 self.add_headers( 12:04:17 request, 12:04:17 stream=stream, 12:04:17 timeout=timeout, 12:04:17 verify=verify, 12:04:17 cert=cert, 12:04:17 proxies=proxies, 12:04:17 ) 12:04:17 12:04:17 chunked = not (request.body is None or "Content-Length" in request.headers) 12:04:17 12:04:17 if isinstance(timeout, tuple): 12:04:17 try: 12:04:17 connect, read = timeout 12:04:17 timeout = TimeoutSauce(connect=connect, read=read) 12:04:17 except ValueError: 12:04:17 raise ValueError( 12:04:17 f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 12:04:17 f"or a single float to set both timeouts to the same value." 
12:04:17 ) 12:04:17 elif isinstance(timeout, TimeoutSauce): 12:04:17 pass 12:04:17 else: 12:04:17 timeout = TimeoutSauce(connect=timeout, read=timeout) 12:04:17 12:04:17 try: 12:04:17 > resp = conn.urlopen( 12:04:17 method=request.method, 12:04:17 url=url, 12:04:17 body=request.body, 12:04:17 headers=request.headers, 12:04:17 redirect=False, 12:04:17 assert_same_host=False, 12:04:17 preload_content=False, 12:04:17 decode_content=False, 12:04:17 retries=self.max_retries, 12:04:17 timeout=timeout, 12:04:17 chunked=chunked, 12:04:17 ) 12:04:17 12:04:17 ../.tox/tests221/lib/python3.11/site-packages/requests/adapters.py:667: 12:04:17 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 12:04:17 ../.tox/tests221/lib/python3.11/site-packages/urllib3/connectionpool.py:841: in urlopen 12:04:17 retries = retries.increment( 12:04:17 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 12:04:17 12:04:17 self = Retry(total=0, connect=None, read=False, redirect=None, status=None) 12:04:17 method = 'DELETE' 12:04:17 url = '/rests/data/network-topology:network-topology/topology=topology-netconf/node=ROADM-A1' 12:04:17 response = None 12:04:17 error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused') 12:04:17 _pool = 12:04:17 _stacktrace = 12:04:17 12:04:17 def increment( 12:04:17 self, 12:04:17 method: str | None = None, 12:04:17 url: str | None = None, 12:04:17 response: BaseHTTPResponse | None = None, 12:04:17 error: Exception | None = None, 12:04:17 _pool: ConnectionPool | None = None, 12:04:17 _stacktrace: TracebackType | None = None, 12:04:17 ) -> Self: 12:04:17 """Return a new Retry object with incremented retry counters. 12:04:17 12:04:17 :param response: A response object, or None, if the server did not 12:04:17 return a response. 12:04:17 :type response: :class:`~urllib3.response.BaseHTTPResponse` 12:04:17 :param Exception error: An error encountered during the request, or 12:04:17 None if the response was received successfully. 12:04:17 12:04:17 :return: A new ``Retry`` object. 12:04:17 """ 12:04:17 if self.total is False and error: 12:04:17 # Disabled, indicate to re-raise the error. 12:04:17 raise reraise(type(error), error, _stacktrace) 12:04:17 12:04:17 total = self.total 12:04:17 if total is not None: 12:04:17 total -= 1 12:04:17 12:04:17 connect = self.connect 12:04:17 read = self.read 12:04:17 redirect = self.redirect 12:04:17 status_count = self.status 12:04:17 other = self.other 12:04:17 cause = "unknown" 12:04:17 status = None 12:04:17 redirect_location = None 12:04:17 12:04:17 if error and self._is_connection_error(error): 12:04:17 # Connect retry? 12:04:17 if connect is False: 12:04:17 raise reraise(type(error), error, _stacktrace) 12:04:17 elif connect is not None: 12:04:17 connect -= 1 12:04:17 12:04:17 elif error and self._is_read_error(error): 12:04:17 # Read retry? 12:04:17 if read is False or method is None or not self._is_method_retryable(method): 12:04:17 raise reraise(type(error), error, _stacktrace) 12:04:17 elif read is not None: 12:04:17 read -= 1 12:04:17 12:04:17 elif error: 12:04:17 # Other retry? 12:04:17 if other is not None: 12:04:17 other -= 1 12:04:17 12:04:17 elif response and response.get_redirect_location(): 12:04:17 # Redirect retry? 
12:04:17 if redirect is not None: 12:04:17 redirect -= 1 12:04:17 cause = "too many redirects" 12:04:17 response_redirect_location = response.get_redirect_location() 12:04:17 if response_redirect_location: 12:04:17 redirect_location = response_redirect_location 12:04:17 status = response.status 12:04:17 12:04:17 else: 12:04:17 # Incrementing because of a server error like a 500 in 12:04:17 # status_forcelist and the given method is in the allowed_methods 12:04:17 cause = ResponseError.GENERIC_ERROR 12:04:17 if response and response.status: 12:04:17 if status_count is not None: 12:04:17 status_count -= 1 12:04:17 cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status) 12:04:17 status = response.status 12:04:17 12:04:17 history = self.history + ( 12:04:17 RequestHistory(method, url, error, status, redirect_location), 12:04:17 ) 12:04:17 12:04:17 new_retry = self.new( 12:04:17 total=total, 12:04:17 connect=connect, 12:04:17 read=read, 12:04:17 redirect=redirect, 12:04:17 status=status_count, 12:04:17 other=other, 12:04:17 history=history, 12:04:17 ) 12:04:17 12:04:17 if new_retry.is_exhausted(): 12:04:17 reason = error or ResponseError(cause) 12:04:17 > raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type] 12:04:17 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 12:04:17 E urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=8183): Max retries exceeded with url: /rests/data/network-topology:network-topology/topology=topology-netconf/node=ROADM-A1 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 12:04:17 12:04:17 ../.tox/tests221/lib/python3.11/site-packages/urllib3/util/retry.py:519: MaxRetryError 12:04:17 12:04:17 During handling of the above exception, another exception occurred: 12:04:17 12:04:17 self = 12:04:17 12:04:17 def test_33_rdm_device_disconnection(self): 12:04:17 > response = test_utils.unmount_device("ROADM-A1") 12:04:17 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 12:04:17 12:04:17 transportpce_tests/2.2.1/test01_portmapping.py:382: 12:04:17 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 12:04:17 transportpce_tests/common/test_utils.py:379: in unmount_device 12:04:17 response = delete_request(url[RESTCONF_VERSION].format('{}', node)) 12:04:17 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 12:04:17 transportpce_tests/common/test_utils.py:133: in delete_request 12:04:17 return requests.request( 12:04:17 ../.tox/tests221/lib/python3.11/site-packages/requests/api.py:59: in request 12:04:17 return session.request(method=method, url=url, **kwargs) 12:04:17 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 12:04:17 ../.tox/tests221/lib/python3.11/site-packages/requests/sessions.py:589: in request 12:04:17 resp = self.send(prep, **send_kwargs) 12:04:17 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 12:04:17 ../.tox/tests221/lib/python3.11/site-packages/requests/sessions.py:703: in send 12:04:17 r = adapter.send(request, **kwargs) 12:04:17 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 12:04:17 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 12:04:17 12:04:17 self = 12:04:17 request = , stream = False 12:04:17 timeout = Timeout(connect=30, read=30, total=None), verify = True, cert = None 12:04:17 proxies = OrderedDict() 12:04:17 12:04:17 def send( 12:04:17 self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 12:04:17 ): 12:04:17 """Sends PreparedRequest object. Returns Response object. 
12:04:17 12:04:17 :param request: The :class:`PreparedRequest ` being sent. 12:04:17 :param stream: (optional) Whether to stream the request content. 12:04:17 :param timeout: (optional) How long to wait for the server to send 12:04:17 data before giving up, as a float, or a :ref:`(connect timeout, 12:04:17 read timeout) ` tuple. 12:04:17 :type timeout: float or tuple or urllib3 Timeout object 12:04:17 :param verify: (optional) Either a boolean, in which case it controls whether 12:04:17 we verify the server's TLS certificate, or a string, in which case it 12:04:17 must be a path to a CA bundle to use 12:04:17 :param cert: (optional) Any user-provided SSL certificate to be trusted. 12:04:17 :param proxies: (optional) The proxies dictionary to apply to the request. 12:04:17 :rtype: requests.Response 12:04:17 """ 12:04:17 12:04:17 try: 12:04:17 conn = self.get_connection_with_tls_context( 12:04:17 request, verify, proxies=proxies, cert=cert 12:04:17 ) 12:04:17 except LocationValueError as e: 12:04:17 raise InvalidURL(e, request=request) 12:04:17 12:04:17 self.cert_verify(conn, request.url, verify, cert) 12:04:17 url = self.request_url(request, proxies) 12:04:17 self.add_headers( 12:04:17 request, 12:04:17 stream=stream, 12:04:17 timeout=timeout, 12:04:17 verify=verify, 12:04:17 cert=cert, 12:04:17 proxies=proxies, 12:04:17 ) 12:04:17 12:04:17 chunked = not (request.body is None or "Content-Length" in request.headers) 12:04:17 12:04:17 if isinstance(timeout, tuple): 12:04:17 try: 12:04:17 connect, read = timeout 12:04:17 timeout = TimeoutSauce(connect=connect, read=read) 12:04:17 except ValueError: 12:04:17 raise ValueError( 12:04:17 f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 12:04:17 f"or a single float to set both timeouts to the same value." 12:04:17 ) 12:04:17 elif isinstance(timeout, TimeoutSauce): 12:04:17 pass 12:04:17 else: 12:04:17 timeout = TimeoutSauce(connect=timeout, read=timeout) 12:04:17 12:04:17 try: 12:04:17 resp = conn.urlopen( 12:04:17 method=request.method, 12:04:17 url=url, 12:04:17 body=request.body, 12:04:17 headers=request.headers, 12:04:17 redirect=False, 12:04:17 assert_same_host=False, 12:04:17 preload_content=False, 12:04:17 decode_content=False, 12:04:17 retries=self.max_retries, 12:04:17 timeout=timeout, 12:04:17 chunked=chunked, 12:04:17 ) 12:04:17 12:04:17 except (ProtocolError, OSError) as err: 12:04:17 raise ConnectionError(err, request=request) 12:04:17 12:04:17 except MaxRetryError as e: 12:04:17 if isinstance(e.reason, ConnectTimeoutError): 12:04:17 # TODO: Remove this in 3.0.0: see #2811 12:04:17 if not isinstance(e.reason, NewConnectionError): 12:04:17 raise ConnectTimeout(e, request=request) 12:04:17 12:04:17 if isinstance(e.reason, ResponseError): 12:04:17 raise RetryError(e, request=request) 12:04:17 12:04:17 if isinstance(e.reason, _ProxyError): 12:04:17 raise ProxyError(e, request=request) 12:04:17 12:04:17 if isinstance(e.reason, _SSLError): 12:04:17 # This branch is for urllib3 v1.22 and later. 
12:04:17 raise SSLError(e, request=request) 12:04:17 12:04:17 > raise ConnectionError(e, request=request) 12:04:17 E requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=8183): Max retries exceeded with url: /rests/data/network-topology:network-topology/topology=topology-netconf/node=ROADM-A1 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 12:04:17 12:04:17 ../.tox/tests221/lib/python3.11/site-packages/requests/adapters.py:700: ConnectionError 12:04:17 ----------------------------- Captured stdout call ----------------------------- 12:04:17 execution of test_33_rdm_device_disconnection 12:04:17 ________ TransportPCEPortMappingTesting.test_34_rdm_device_disconnected ________ 12:04:17 12:04:17 self = 12:04:17 12:04:17 def _new_conn(self) -> socket.socket: 12:04:17 """Establish a socket connection and set nodelay settings on it. 12:04:17 12:04:17 :return: New socket connection. 12:04:17 """ 12:04:17 try: 12:04:17 > sock = connection.create_connection( 12:04:17 (self._dns_host, self.port), 12:04:17 self.timeout, 12:04:17 source_address=self.source_address, 12:04:17 socket_options=self.socket_options, 12:04:17 ) 12:04:17 12:04:17 ../.tox/tests221/lib/python3.11/site-packages/urllib3/connection.py:198: 12:04:17 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 12:04:17 ../.tox/tests221/lib/python3.11/site-packages/urllib3/util/connection.py:85: in create_connection 12:04:17 raise err 12:04:17 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 12:04:17 12:04:17 address = ('localhost', 8183), timeout = 30, source_address = None 12:04:17 socket_options = [(6, 1, 1)] 12:04:17 12:04:17 def create_connection( 12:04:17 address: tuple[str, int], 12:04:17 timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 12:04:17 source_address: tuple[str, int] | None = None, 12:04:17 socket_options: _TYPE_SOCKET_OPTIONS | None = None, 12:04:17 ) -> socket.socket: 12:04:17 """Connect to *address* and return the socket object. 12:04:17 12:04:17 Convenience function. Connect to *address* (a 2-tuple ``(host, 12:04:17 port)``) and return the socket object. Passing the optional 12:04:17 *timeout* parameter will set the timeout on the socket instance 12:04:17 before attempting to connect. If no *timeout* is supplied, the 12:04:17 global default timeout setting returned by :func:`socket.getdefaulttimeout` 12:04:17 is used. If *source_address* is set it must be a tuple of (host, port) 12:04:17 for the socket to bind as a source address before making the connection. 12:04:17 An host of '' or port 0 tells the OS to use the default. 12:04:17 """ 12:04:17 12:04:17 host, port = address 12:04:17 if host.startswith("["): 12:04:17 host = host.strip("[]") 12:04:17 err = None 12:04:17 12:04:17 # Using the value from allowed_gai_family() in the context of getaddrinfo lets 12:04:17 # us select whether to work with IPv4 DNS records, IPv6 records, or both. 12:04:17 # The original create_connection function always returns all records. 
12:04:17 family = allowed_gai_family() 12:04:17 12:04:17 try: 12:04:17 host.encode("idna") 12:04:17 except UnicodeError: 12:04:17 raise LocationParseError(f"'{host}', label empty or too long") from None 12:04:17 12:04:17 for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM): 12:04:17 af, socktype, proto, canonname, sa = res 12:04:17 sock = None 12:04:17 try: 12:04:17 sock = socket.socket(af, socktype, proto) 12:04:17 12:04:17 # If provided, set socket level options before connecting. 12:04:17 _set_socket_options(sock, socket_options) 12:04:17 12:04:17 if timeout is not _DEFAULT_TIMEOUT: 12:04:17 sock.settimeout(timeout) 12:04:17 if source_address: 12:04:17 sock.bind(source_address) 12:04:17 > sock.connect(sa) 12:04:17 E ConnectionRefusedError: [Errno 111] Connection refused 12:04:17 12:04:17 ../.tox/tests221/lib/python3.11/site-packages/urllib3/util/connection.py:73: ConnectionRefusedError 12:04:17 12:04:17 The above exception was the direct cause of the following exception: 12:04:17 12:04:17 self = 12:04:17 method = 'GET' 12:04:17 url = '/rests/data/network-topology:network-topology/topology=topology-netconf/node=ROADM-A1?content=nonconfig' 12:04:17 body = None 12:04:17 headers = {'User-Agent': 'python-requests/2.32.4', 'Accept-Encoding': 'gzip, deflate', 'Accept': 'application/json', 'Connection': 'keep-alive', 'Content-Type': 'application/json', 'Authorization': 'Basic YWRtaW46YWRtaW4='} 12:04:17 retries = Retry(total=0, connect=None, read=False, redirect=None, status=None) 12:04:17 redirect = False, assert_same_host = False 12:04:17 timeout = Timeout(connect=30, read=30, total=None), pool_timeout = None 12:04:17 release_conn = False, chunked = False, body_pos = None, preload_content = False 12:04:17 decode_content = False, response_kw = {} 12:04:17 parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/rests/data/network-topology:network-topology/topology=topology-netconf/node=ROADM-A1', query='content=nonconfig', fragment=None) 12:04:17 destination_scheme = None, conn = None, release_this_conn = True 12:04:17 http_tunnel_required = False, err = None, clean_exit = False 12:04:17 12:04:17 def urlopen( # type: ignore[override] 12:04:17 self, 12:04:17 method: str, 12:04:17 url: str, 12:04:17 body: _TYPE_BODY | None = None, 12:04:17 headers: typing.Mapping[str, str] | None = None, 12:04:17 retries: Retry | bool | int | None = None, 12:04:17 redirect: bool = True, 12:04:17 assert_same_host: bool = True, 12:04:17 timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 12:04:17 pool_timeout: int | None = None, 12:04:17 release_conn: bool | None = None, 12:04:17 chunked: bool = False, 12:04:17 body_pos: _TYPE_BODY_POSITION | None = None, 12:04:17 preload_content: bool = True, 12:04:17 decode_content: bool = True, 12:04:17 **response_kw: typing.Any, 12:04:17 ) -> BaseHTTPResponse: 12:04:17 """ 12:04:17 Get a connection from the pool and perform an HTTP request. This is the 12:04:17 lowest level call for making a request, so you'll need to specify all 12:04:17 the raw details. 12:04:17 12:04:17 .. note:: 12:04:17 12:04:17 More commonly, it's appropriate to use a convenience method 12:04:17 such as :meth:`request`. 12:04:17 12:04:17 .. note:: 12:04:17 12:04:17 `release_conn` will only behave as expected if 12:04:17 `preload_content=False` because we want to make 12:04:17 `preload_content=False` the default behaviour someday soon without 12:04:17 breaking backwards compatibility. 12:04:17 12:04:17 :param method: 12:04:17 HTTP request method (such as GET, POST, PUT, etc.) 
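Editor's note: the locals above show the exact call test_34 attempts — a GET on the netconf topology node for ROADM-A1 with ``content=nonconfig``, authenticated against localhost:8183 (the captured Basic Authorization header corresponds to the default admin/admin credentials). A stand-alone equivalent with requests, grounded in those captured values:

import requests

url = ("http://localhost:8183/rests/data/network-topology:network-topology/"
       "topology=topology-netconf/node=ROADM-A1?content=nonconfig")
try:
    resp = requests.get(url, auth=("admin", "admin"),
                        headers={"Accept": "application/json"},
                        timeout=(30, 30))
    print(resp.status_code, resp.text[:200])
except requests.exceptions.ConnectionError:
    # The branch taken in this run: nothing is listening on port 8183.
    print("controller unreachable on localhost:8183")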
12:04:17 12:04:17 :param url: 12:04:17 The URL to perform the request on. 12:04:17 12:04:17 :param body: 12:04:17 Data to send in the request body, either :class:`str`, :class:`bytes`, 12:04:17 an iterable of :class:`str`/:class:`bytes`, or a file-like object. 12:04:17 12:04:17 :param headers: 12:04:17 Dictionary of custom headers to send, such as User-Agent, 12:04:17 If-None-Match, etc. If None, pool headers are used. If provided, 12:04:17 these headers completely replace any pool-specific headers. 12:04:17 12:04:17 :param retries: 12:04:17 Configure the number of retries to allow before raising a 12:04:17 :class:`~urllib3.exceptions.MaxRetryError` exception. 12:04:17 12:04:17 If ``None`` (default) will retry 3 times, see ``Retry.DEFAULT``. Pass a 12:04:17 :class:`~urllib3.util.retry.Retry` object for fine-grained control 12:04:17 over different types of retries. 12:04:17 Pass an integer number to retry connection errors that many times, 12:04:17 but no other types of errors. Pass zero to never retry. 12:04:17 12:04:17 If ``False``, then retries are disabled and any exception is raised 12:04:17 immediately. Also, instead of raising a MaxRetryError on redirects, 12:04:17 the redirect response will be returned. 12:04:17 12:04:17 :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. 12:04:17 12:04:17 :param redirect: 12:04:17 If True, automatically handle redirects (status codes 301, 302, 12:04:17 303, 307, 308). Each redirect counts as a retry. Disabling retries 12:04:17 will disable redirect, too. 12:04:17 12:04:17 :param assert_same_host: 12:04:17 If ``True``, will make sure that the host of the pool requests is 12:04:17 consistent else will raise HostChangedError. When ``False``, you can 12:04:17 use the pool on an HTTP proxy and request foreign hosts. 12:04:17 12:04:17 :param timeout: 12:04:17 If specified, overrides the default timeout for this one 12:04:17 request. It may be a float (in seconds) or an instance of 12:04:17 :class:`urllib3.util.Timeout`. 12:04:17 12:04:17 :param pool_timeout: 12:04:17 If set and the pool is set to block=True, then this method will 12:04:17 block for ``pool_timeout`` seconds and raise EmptyPoolError if no 12:04:17 connection is available within the time period. 12:04:17 12:04:17 :param bool preload_content: 12:04:17 If True, the response's body will be preloaded into memory. 12:04:17 12:04:17 :param bool decode_content: 12:04:17 If True, will attempt to decode the body based on the 12:04:17 'content-encoding' header. 12:04:17 12:04:17 :param release_conn: 12:04:17 If False, then the urlopen call will not release the connection 12:04:17 back into the pool once a response is received (but will release if 12:04:17 you read the entire contents of the response such as when 12:04:17 `preload_content=True`). This is useful if you're not preloading 12:04:17 the response's content immediately. You will need to call 12:04:17 ``r.release_conn()`` on the response ``r`` to return the connection 12:04:17 back into the pool. If None, it takes the value of ``preload_content`` 12:04:17 which defaults to ``True``. 12:04:17 12:04:17 :param bool chunked: 12:04:17 If True, urllib3 will send the body using chunked transfer 12:04:17 encoding. Otherwise, urllib3 will send the body using the standard 12:04:17 content-length form. Defaults to False. 12:04:17 12:04:17 :param int body_pos: 12:04:17 Position to seek to in file-like body in the event of a retry or 12:04:17 redirect. 
Typically this won't need to be set because urllib3 will 12:04:17 auto-populate the value when needed. 12:04:17 """ 12:04:17 parsed_url = parse_url(url) 12:04:17 destination_scheme = parsed_url.scheme 12:04:17 12:04:17 if headers is None: 12:04:17 headers = self.headers 12:04:17 12:04:17 if not isinstance(retries, Retry): 12:04:17 retries = Retry.from_int(retries, redirect=redirect, default=self.retries) 12:04:17 12:04:17 if release_conn is None: 12:04:17 release_conn = preload_content 12:04:17 12:04:17 # Check host 12:04:17 if assert_same_host and not self.is_same_host(url): 12:04:17 raise HostChangedError(self, url, retries) 12:04:17 12:04:17 # Ensure that the URL we're connecting to is properly encoded 12:04:17 if url.startswith("/"): 12:04:17 url = to_str(_encode_target(url)) 12:04:17 else: 12:04:17 url = to_str(parsed_url.url) 12:04:17 12:04:17 conn = None 12:04:17 12:04:17 # Track whether `conn` needs to be released before 12:04:17 # returning/raising/recursing. Update this variable if necessary, and 12:04:17 # leave `release_conn` constant throughout the function. That way, if 12:04:17 # the function recurses, the original value of `release_conn` will be 12:04:17 # passed down into the recursive call, and its value will be respected. 12:04:17 # 12:04:17 # See issue #651 [1] for details. 12:04:17 # 12:04:17 # [1] 12:04:17 release_this_conn = release_conn 12:04:17 12:04:17 http_tunnel_required = connection_requires_http_tunnel( 12:04:17 self.proxy, self.proxy_config, destination_scheme 12:04:17 ) 12:04:17 12:04:17 # Merge the proxy headers. Only done when not using HTTP CONNECT. We 12:04:17 # have to copy the headers dict so we can safely change it without those 12:04:17 # changes being reflected in anyone else's copy. 12:04:17 if not http_tunnel_required: 12:04:17 headers = headers.copy() # type: ignore[attr-defined] 12:04:17 headers.update(self.proxy_headers) # type: ignore[union-attr] 12:04:17 12:04:17 # Must keep the exception bound to a separate variable or else Python 3 12:04:17 # complains about UnboundLocalError. 12:04:17 err = None 12:04:17 12:04:17 # Keep track of whether we cleanly exited the except block. This 12:04:17 # ensures we do proper cleanup in finally. 12:04:17 clean_exit = False 12:04:17 12:04:17 # Rewind body position, if needed. Record current position 12:04:17 # for future rewinds in the event of a redirect/retry. 12:04:17 body_pos = set_file_position(body, body_pos) 12:04:17 12:04:17 try: 12:04:17 # Request a connection from the queue. 12:04:17 timeout_obj = self._get_timeout(timeout) 12:04:17 conn = self._get_conn(timeout=pool_timeout) 12:04:17 12:04:17 conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] 12:04:17 12:04:17 # Is this a closed/new connection that requires CONNECT tunnelling? 12:04:17 if self.proxy is not None and http_tunnel_required and conn.is_closed: 12:04:17 try: 12:04:17 self._prepare_proxy(conn) 12:04:17 except (BaseSSLError, OSError, SocketTimeout) as e: 12:04:17 self._raise_timeout( 12:04:17 err=e, url=self.proxy.url, timeout_value=conn.timeout 12:04:17 ) 12:04:17 raise 12:04:17 12:04:17 # If we're going to release the connection in ``finally:``, then 12:04:17 # the response doesn't need to know about the connection. Otherwise 12:04:17 # it will also try to release it and we'll have a double-release 12:04:17 # mess. 
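Editor's note: the ``release_conn``/``preload_content`` interplay described in the docstring above can be exercised directly. A small sketch with urllib3, streaming the body instead of preloading it (the URL reuses the controller endpoint from this log; the except branch covers the case where nothing is listening):

import urllib3
from urllib3.exceptions import MaxRetryError

http = urllib3.PoolManager()
try:
    # With preload_content=False the body is streamed, so the connection is not
    # returned to the pool until it is fully read or released explicitly.
    resp = http.request("GET", "http://localhost:8183/rests/data",
                        preload_content=False, retries=0)
    for _ in resp.stream(1024):
        pass                 # consume the body incrementally
    resp.release_conn()      # hand the connection back to the pool
except MaxRetryError:
    pass                     # nothing listening in this sketch's environment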
12:04:17 response_conn = conn if not release_conn else None 12:04:17 12:04:17 # Make the request on the HTTPConnection object 12:04:17 > response = self._make_request( 12:04:17 conn, 12:04:17 method, 12:04:17 url, 12:04:17 timeout=timeout_obj, 12:04:17 body=body, 12:04:17 headers=headers, 12:04:17 chunked=chunked, 12:04:17 retries=retries, 12:04:17 response_conn=response_conn, 12:04:17 preload_content=preload_content, 12:04:17 decode_content=decode_content, 12:04:17 **response_kw, 12:04:17 ) 12:04:17 12:04:17 ../.tox/tests221/lib/python3.11/site-packages/urllib3/connectionpool.py:787: 12:04:17 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 12:04:17 ../.tox/tests221/lib/python3.11/site-packages/urllib3/connectionpool.py:493: in _make_request 12:04:17 conn.request( 12:04:17 ../.tox/tests221/lib/python3.11/site-packages/urllib3/connection.py:494: in request 12:04:17 self.endheaders() 12:04:17 /opt/pyenv/versions/3.11.10/lib/python3.11/http/client.py:1298: in endheaders 12:04:17 self._send_output(message_body, encode_chunked=encode_chunked) 12:04:17 /opt/pyenv/versions/3.11.10/lib/python3.11/http/client.py:1058: in _send_output 12:04:17 self.send(msg) 12:04:17 /opt/pyenv/versions/3.11.10/lib/python3.11/http/client.py:996: in send 12:04:17 self.connect() 12:04:17 ../.tox/tests221/lib/python3.11/site-packages/urllib3/connection.py:325: in connect 12:04:17 self.sock = self._new_conn() 12:04:17 ^^^^^^^^^^^^^^^^ 12:04:17 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 12:04:17 12:04:17 self = 12:04:17 12:04:17 def _new_conn(self) -> socket.socket: 12:04:17 """Establish a socket connection and set nodelay settings on it. 12:04:17 12:04:17 :return: New socket connection. 12:04:17 """ 12:04:17 try: 12:04:17 sock = connection.create_connection( 12:04:17 (self._dns_host, self.port), 12:04:17 self.timeout, 12:04:17 source_address=self.source_address, 12:04:17 socket_options=self.socket_options, 12:04:17 ) 12:04:17 except socket.gaierror as e: 12:04:17 raise NameResolutionError(self.host, self, e) from e 12:04:17 except SocketTimeout as e: 12:04:17 raise ConnectTimeoutError( 12:04:17 self, 12:04:17 f"Connection to {self.host} timed out. (connect timeout={self.timeout})", 12:04:17 ) from e 12:04:17 12:04:17 except OSError as e: 12:04:17 > raise NewConnectionError( 12:04:17 self, f"Failed to establish a new connection: {e}" 12:04:17 ) from e 12:04:17 E urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused 12:04:17 12:04:17 ../.tox/tests221/lib/python3.11/site-packages/urllib3/connection.py:213: NewConnectionError 12:04:17 12:04:17 The above exception was the direct cause of the following exception: 12:04:17 12:04:17 self = 12:04:17 request = , stream = False 12:04:17 timeout = Timeout(connect=30, read=30, total=None), verify = True, cert = None 12:04:17 proxies = OrderedDict() 12:04:17 12:04:17 def send( 12:04:17 self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 12:04:17 ): 12:04:17 """Sends PreparedRequest object. Returns Response object. 12:04:17 12:04:17 :param request: The :class:`PreparedRequest ` being sent. 12:04:17 :param stream: (optional) Whether to stream the request content. 12:04:17 :param timeout: (optional) How long to wait for the server to send 12:04:17 data before giving up, as a float, or a :ref:`(connect timeout, 12:04:17 read timeout) ` tuple. 
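Editor's note: the frames above show the failure happening at the raw TCP connect inside urllib3's ``create_connection``. A stdlib-only probe for the same condition, using the host and port from the log:

import socket

try:
    # socket.create_connection mirrors what urllib3 does at the lowest level.
    with socket.create_connection(("localhost", 8183), timeout=5):
        print("something is listening on 8183")
except ConnectionRefusedError:
    print("connection refused: the controller is not running")
except OSError as exc:
    print("other socket-level failure:", exc)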
12:04:17 :type timeout: float or tuple or urllib3 Timeout object 12:04:17 :param verify: (optional) Either a boolean, in which case it controls whether 12:04:17 we verify the server's TLS certificate, or a string, in which case it 12:04:17 must be a path to a CA bundle to use 12:04:17 :param cert: (optional) Any user-provided SSL certificate to be trusted. 12:04:17 :param proxies: (optional) The proxies dictionary to apply to the request. 12:04:17 :rtype: requests.Response 12:04:17 """ 12:04:17 12:04:17 try: 12:04:17 conn = self.get_connection_with_tls_context( 12:04:17 request, verify, proxies=proxies, cert=cert 12:04:17 ) 12:04:17 except LocationValueError as e: 12:04:17 raise InvalidURL(e, request=request) 12:04:17 12:04:17 self.cert_verify(conn, request.url, verify, cert) 12:04:17 url = self.request_url(request, proxies) 12:04:17 self.add_headers( 12:04:17 request, 12:04:17 stream=stream, 12:04:17 timeout=timeout, 12:04:17 verify=verify, 12:04:17 cert=cert, 12:04:17 proxies=proxies, 12:04:17 ) 12:04:17 12:04:17 chunked = not (request.body is None or "Content-Length" in request.headers) 12:04:17 12:04:17 if isinstance(timeout, tuple): 12:04:17 try: 12:04:17 connect, read = timeout 12:04:17 timeout = TimeoutSauce(connect=connect, read=read) 12:04:17 except ValueError: 12:04:17 raise ValueError( 12:04:17 f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 12:04:17 f"or a single float to set both timeouts to the same value." 12:04:17 ) 12:04:17 elif isinstance(timeout, TimeoutSauce): 12:04:17 pass 12:04:17 else: 12:04:17 timeout = TimeoutSauce(connect=timeout, read=timeout) 12:04:17 12:04:17 try: 12:04:17 > resp = conn.urlopen( 12:04:17 method=request.method, 12:04:17 url=url, 12:04:17 body=request.body, 12:04:17 headers=request.headers, 12:04:17 redirect=False, 12:04:17 assert_same_host=False, 12:04:17 preload_content=False, 12:04:17 decode_content=False, 12:04:17 retries=self.max_retries, 12:04:17 timeout=timeout, 12:04:17 chunked=chunked, 12:04:17 ) 12:04:17 12:04:17 ../.tox/tests221/lib/python3.11/site-packages/requests/adapters.py:667: 12:04:17 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 12:04:17 ../.tox/tests221/lib/python3.11/site-packages/urllib3/connectionpool.py:841: in urlopen 12:04:17 retries = retries.increment( 12:04:17 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 12:04:17 12:04:17 self = Retry(total=0, connect=None, read=False, redirect=None, status=None) 12:04:17 method = 'GET' 12:04:17 url = '/rests/data/network-topology:network-topology/topology=topology-netconf/node=ROADM-A1?content=nonconfig' 12:04:17 response = None 12:04:17 error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused') 12:04:17 _pool = 12:04:17 _stacktrace = 12:04:17 12:04:17 def increment( 12:04:17 self, 12:04:17 method: str | None = None, 12:04:17 url: str | None = None, 12:04:17 response: BaseHTTPResponse | None = None, 12:04:17 error: Exception | None = None, 12:04:17 _pool: ConnectionPool | None = None, 12:04:17 _stacktrace: TracebackType | None = None, 12:04:17 ) -> Self: 12:04:17 """Return a new Retry object with incremented retry counters. 12:04:17 12:04:17 :param response: A response object, or None, if the server did not 12:04:17 return a response. 12:04:17 :type response: :class:`~urllib3.response.BaseHTTPResponse` 12:04:17 :param Exception error: An error encountered during the request, or 12:04:17 None if the response was received successfully. 
12:04:17 12:04:17 :return: A new ``Retry`` object. 12:04:17 """ 12:04:17 if self.total is False and error: 12:04:17 # Disabled, indicate to re-raise the error. 12:04:17 raise reraise(type(error), error, _stacktrace) 12:04:17 12:04:17 total = self.total 12:04:17 if total is not None: 12:04:17 total -= 1 12:04:17 12:04:17 connect = self.connect 12:04:17 read = self.read 12:04:17 redirect = self.redirect 12:04:17 status_count = self.status 12:04:17 other = self.other 12:04:17 cause = "unknown" 12:04:17 status = None 12:04:17 redirect_location = None 12:04:17 12:04:17 if error and self._is_connection_error(error): 12:04:17 # Connect retry? 12:04:17 if connect is False: 12:04:17 raise reraise(type(error), error, _stacktrace) 12:04:17 elif connect is not None: 12:04:17 connect -= 1 12:04:17 12:04:17 elif error and self._is_read_error(error): 12:04:17 # Read retry? 12:04:17 if read is False or method is None or not self._is_method_retryable(method): 12:04:17 raise reraise(type(error), error, _stacktrace) 12:04:17 elif read is not None: 12:04:17 read -= 1 12:04:17 12:04:17 elif error: 12:04:17 # Other retry? 12:04:17 if other is not None: 12:04:17 other -= 1 12:04:17 12:04:17 elif response and response.get_redirect_location(): 12:04:17 # Redirect retry? 12:04:17 if redirect is not None: 12:04:17 redirect -= 1 12:04:17 cause = "too many redirects" 12:04:17 response_redirect_location = response.get_redirect_location() 12:04:17 if response_redirect_location: 12:04:17 redirect_location = response_redirect_location 12:04:17 status = response.status 12:04:17 12:04:17 else: 12:04:17 # Incrementing because of a server error like a 500 in 12:04:17 # status_forcelist and the given method is in the allowed_methods 12:04:17 cause = ResponseError.GENERIC_ERROR 12:04:17 if response and response.status: 12:04:17 if status_count is not None: 12:04:17 status_count -= 1 12:04:17 cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status) 12:04:17 status = response.status 12:04:17 12:04:17 history = self.history + ( 12:04:17 RequestHistory(method, url, error, status, redirect_location), 12:04:17 ) 12:04:17 12:04:17 new_retry = self.new( 12:04:17 total=total, 12:04:17 connect=connect, 12:04:17 read=read, 12:04:17 redirect=redirect, 12:04:17 status=status_count, 12:04:17 other=other, 12:04:17 history=history, 12:04:17 ) 12:04:17 12:04:17 if new_retry.is_exhausted(): 12:04:17 reason = error or ResponseError(cause) 12:04:17 > raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type] 12:04:17 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 12:04:17 E urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=8183): Max retries exceeded with url: /rests/data/network-topology:network-topology/topology=topology-netconf/node=ROADM-A1?content=nonconfig (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 12:04:17 12:04:17 ../.tox/tests221/lib/python3.11/site-packages/urllib3/util/retry.py:519: MaxRetryError 12:04:17 12:04:17 During handling of the above exception, another exception occurred: 12:04:17 12:04:17 self = 12:04:17 12:04:17 def test_34_rdm_device_disconnected(self): 12:04:17 > response = test_utils.check_device_connection("ROADM-A1") 12:04:17 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 12:04:17 12:04:17 transportpce_tests/2.2.1/test01_portmapping.py:386: 12:04:17 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 12:04:17 transportpce_tests/common/test_utils.py:390: in 
check_device_connection 12:04:17 response = get_request(url[RESTCONF_VERSION].format('{}', node)) 12:04:17 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 12:04:17 transportpce_tests/common/test_utils.py:116: in get_request 12:04:17 return requests.request( 12:04:17 ../.tox/tests221/lib/python3.11/site-packages/requests/api.py:59: in request 12:04:17 return session.request(method=method, url=url, **kwargs) 12:04:17 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 12:04:17 ../.tox/tests221/lib/python3.11/site-packages/requests/sessions.py:589: in request 12:04:17 resp = self.send(prep, **send_kwargs) 12:04:17 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 12:04:17 ../.tox/tests221/lib/python3.11/site-packages/requests/sessions.py:703: in send 12:04:17 r = adapter.send(request, **kwargs) 12:04:17 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 12:04:17 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 12:04:17 12:04:17 self = 12:04:17 request = , stream = False 12:04:17 timeout = Timeout(connect=30, read=30, total=None), verify = True, cert = None 12:04:17 proxies = OrderedDict() 12:04:17 12:04:17 def send( 12:04:17 self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 12:04:17 ): 12:04:17 """Sends PreparedRequest object. Returns Response object. 12:04:17 12:04:17 :param request: The :class:`PreparedRequest ` being sent. 12:04:17 :param stream: (optional) Whether to stream the request content. 12:04:17 :param timeout: (optional) How long to wait for the server to send 12:04:17 data before giving up, as a float, or a :ref:`(connect timeout, 12:04:17 read timeout) ` tuple. 12:04:17 :type timeout: float or tuple or urllib3 Timeout object 12:04:17 :param verify: (optional) Either a boolean, in which case it controls whether 12:04:17 we verify the server's TLS certificate, or a string, in which case it 12:04:17 must be a path to a CA bundle to use 12:04:17 :param cert: (optional) Any user-provided SSL certificate to be trusted. 12:04:17 :param proxies: (optional) The proxies dictionary to apply to the request. 12:04:17 :rtype: requests.Response 12:04:17 """ 12:04:17 12:04:17 try: 12:04:17 conn = self.get_connection_with_tls_context( 12:04:17 request, verify, proxies=proxies, cert=cert 12:04:17 ) 12:04:17 except LocationValueError as e: 12:04:17 raise InvalidURL(e, request=request) 12:04:17 12:04:17 self.cert_verify(conn, request.url, verify, cert) 12:04:17 url = self.request_url(request, proxies) 12:04:17 self.add_headers( 12:04:17 request, 12:04:17 stream=stream, 12:04:17 timeout=timeout, 12:04:17 verify=verify, 12:04:17 cert=cert, 12:04:17 proxies=proxies, 12:04:17 ) 12:04:17 12:04:17 chunked = not (request.body is None or "Content-Length" in request.headers) 12:04:17 12:04:17 if isinstance(timeout, tuple): 12:04:17 try: 12:04:17 connect, read = timeout 12:04:17 timeout = TimeoutSauce(connect=connect, read=read) 12:04:17 except ValueError: 12:04:17 raise ValueError( 12:04:17 f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 12:04:17 f"or a single float to set both timeouts to the same value." 
12:04:17 ) 12:04:17 elif isinstance(timeout, TimeoutSauce): 12:04:17 pass 12:04:17 else: 12:04:17 timeout = TimeoutSauce(connect=timeout, read=timeout) 12:04:17 12:04:17 try: 12:04:17 resp = conn.urlopen( 12:04:17 method=request.method, 12:04:17 url=url, 12:04:17 body=request.body, 12:04:17 headers=request.headers, 12:04:17 redirect=False, 12:04:17 assert_same_host=False, 12:04:17 preload_content=False, 12:04:17 decode_content=False, 12:04:17 retries=self.max_retries, 12:04:17 timeout=timeout, 12:04:17 chunked=chunked, 12:04:17 ) 12:04:17 12:04:17 except (ProtocolError, OSError) as err: 12:04:17 raise ConnectionError(err, request=request) 12:04:17 12:04:17 except MaxRetryError as e: 12:04:17 if isinstance(e.reason, ConnectTimeoutError): 12:04:17 # TODO: Remove this in 3.0.0: see #2811 12:04:17 if not isinstance(e.reason, NewConnectionError): 12:04:17 raise ConnectTimeout(e, request=request) 12:04:17 12:04:17 if isinstance(e.reason, ResponseError): 12:04:17 raise RetryError(e, request=request) 12:04:17 12:04:17 if isinstance(e.reason, _ProxyError): 12:04:17 raise ProxyError(e, request=request) 12:04:17 12:04:17 if isinstance(e.reason, _SSLError): 12:04:17 # This branch is for urllib3 v1.22 and later. 12:04:17 raise SSLError(e, request=request) 12:04:17 12:04:17 > raise ConnectionError(e, request=request) 12:04:17 E requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=8183): Max retries exceeded with url: /rests/data/network-topology:network-topology/topology=topology-netconf/node=ROADM-A1?content=nonconfig (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 12:04:17 12:04:17 ../.tox/tests221/lib/python3.11/site-packages/requests/adapters.py:700: ConnectionError 12:04:17 ----------------------------- Captured stdout call ----------------------------- 12:04:17 execution of test_34_rdm_device_disconnected 12:04:17 _______ TransportPCEPortMappingTesting.test_35_rdm_device_not_connected ________ 12:04:17 12:04:17 self = 12:04:17 12:04:17 def _new_conn(self) -> socket.socket: 12:04:17 """Establish a socket connection and set nodelay settings on it. 12:04:17 12:04:17 :return: New socket connection. 12:04:17 """ 12:04:17 try: 12:04:17 > sock = connection.create_connection( 12:04:17 (self._dns_host, self.port), 12:04:17 self.timeout, 12:04:17 source_address=self.source_address, 12:04:17 socket_options=self.socket_options, 12:04:17 ) 12:04:17 12:04:17 ../.tox/tests221/lib/python3.11/site-packages/urllib3/connection.py:198: 12:04:17 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 12:04:17 ../.tox/tests221/lib/python3.11/site-packages/urllib3/util/connection.py:85: in create_connection 12:04:17 raise err 12:04:17 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 12:04:17 12:04:17 address = ('localhost', 8183), timeout = 30, source_address = None 12:04:17 socket_options = [(6, 1, 1)] 12:04:17 12:04:17 def create_connection( 12:04:17 address: tuple[str, int], 12:04:17 timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 12:04:17 source_address: tuple[str, int] | None = None, 12:04:17 socket_options: _TYPE_SOCKET_OPTIONS | None = None, 12:04:17 ) -> socket.socket: 12:04:17 """Connect to *address* and return the socket object. 12:04:17 12:04:17 Convenience function. Connect to *address* (a 2-tuple ``(host, 12:04:17 port)``) and return the socket object. 
Passing the optional 12:04:17 *timeout* parameter will set the timeout on the socket instance 12:04:17 before attempting to connect. If no *timeout* is supplied, the 12:04:17 global default timeout setting returned by :func:`socket.getdefaulttimeout` 12:04:17 is used. If *source_address* is set it must be a tuple of (host, port) 12:04:17 for the socket to bind as a source address before making the connection. 12:04:17 An host of '' or port 0 tells the OS to use the default. 12:04:17 """ 12:04:17 12:04:17 host, port = address 12:04:17 if host.startswith("["): 12:04:17 host = host.strip("[]") 12:04:17 err = None 12:04:17 12:04:17 # Using the value from allowed_gai_family() in the context of getaddrinfo lets 12:04:17 # us select whether to work with IPv4 DNS records, IPv6 records, or both. 12:04:17 # The original create_connection function always returns all records. 12:04:17 family = allowed_gai_family() 12:04:17 12:04:17 try: 12:04:17 host.encode("idna") 12:04:17 except UnicodeError: 12:04:17 raise LocationParseError(f"'{host}', label empty or too long") from None 12:04:17 12:04:17 for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM): 12:04:17 af, socktype, proto, canonname, sa = res 12:04:17 sock = None 12:04:17 try: 12:04:17 sock = socket.socket(af, socktype, proto) 12:04:17 12:04:17 # If provided, set socket level options before connecting. 12:04:17 _set_socket_options(sock, socket_options) 12:04:17 12:04:17 if timeout is not _DEFAULT_TIMEOUT: 12:04:17 sock.settimeout(timeout) 12:04:17 if source_address: 12:04:17 sock.bind(source_address) 12:04:17 > sock.connect(sa) 12:04:17 E ConnectionRefusedError: [Errno 111] Connection refused 12:04:17 12:04:17 ../.tox/tests221/lib/python3.11/site-packages/urllib3/util/connection.py:73: ConnectionRefusedError 12:04:17 12:04:17 The above exception was the direct cause of the following exception: 12:04:17 12:04:17 self = 12:04:17 method = 'GET' 12:04:17 url = '/rests/data/transportpce-portmapping:network/nodes=ROADM-A1/node-info' 12:04:17 body = None 12:04:17 headers = {'User-Agent': 'python-requests/2.32.4', 'Accept-Encoding': 'gzip, deflate', 'Accept': 'application/json', 'Connection': 'keep-alive', 'Content-Type': 'application/json', 'Authorization': 'Basic YWRtaW46YWRtaW4='} 12:04:17 retries = Retry(total=0, connect=None, read=False, redirect=None, status=None) 12:04:17 redirect = False, assert_same_host = False 12:04:17 timeout = Timeout(connect=30, read=30, total=None), pool_timeout = None 12:04:17 release_conn = False, chunked = False, body_pos = None, preload_content = False 12:04:17 decode_content = False, response_kw = {} 12:04:17 parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/rests/data/transportpce-portmapping:network/nodes=ROADM-A1/node-info', query=None, fragment=None) 12:04:17 destination_scheme = None, conn = None, release_this_conn = True 12:04:17 http_tunnel_required = False, err = None, clean_exit = False 12:04:17 12:04:17 def urlopen( # type: ignore[override] 12:04:17 self, 12:04:17 method: str, 12:04:17 url: str, 12:04:17 body: _TYPE_BODY | None = None, 12:04:17 headers: typing.Mapping[str, str] | None = None, 12:04:17 retries: Retry | bool | int | None = None, 12:04:17 redirect: bool = True, 12:04:17 assert_same_host: bool = True, 12:04:17 timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 12:04:17 pool_timeout: int | None = None, 12:04:17 release_conn: bool | None = None, 12:04:17 chunked: bool = False, 12:04:17 body_pos: _TYPE_BODY_POSITION | None = None, 12:04:17 preload_content: bool = True, 
12:04:17 decode_content: bool = True, 12:04:17 **response_kw: typing.Any, 12:04:17 ) -> BaseHTTPResponse: 12:04:17 """ 12:04:17 Get a connection from the pool and perform an HTTP request. This is the 12:04:17 lowest level call for making a request, so you'll need to specify all 12:04:17 the raw details. 12:04:17 12:04:17 .. note:: 12:04:17 12:04:17 More commonly, it's appropriate to use a convenience method 12:04:17 such as :meth:`request`. 12:04:17 12:04:17 .. note:: 12:04:17 12:04:17 `release_conn` will only behave as expected if 12:04:17 `preload_content=False` because we want to make 12:04:17 `preload_content=False` the default behaviour someday soon without 12:04:17 breaking backwards compatibility. 12:04:17 12:04:17 :param method: 12:04:17 HTTP request method (such as GET, POST, PUT, etc.) 12:04:17 12:04:17 :param url: 12:04:17 The URL to perform the request on. 12:04:17 12:04:17 :param body: 12:04:17 Data to send in the request body, either :class:`str`, :class:`bytes`, 12:04:17 an iterable of :class:`str`/:class:`bytes`, or a file-like object. 12:04:17 12:04:17 :param headers: 12:04:17 Dictionary of custom headers to send, such as User-Agent, 12:04:17 If-None-Match, etc. If None, pool headers are used. If provided, 12:04:17 these headers completely replace any pool-specific headers. 12:04:17 12:04:17 :param retries: 12:04:17 Configure the number of retries to allow before raising a 12:04:17 :class:`~urllib3.exceptions.MaxRetryError` exception. 12:04:17 12:04:17 If ``None`` (default) will retry 3 times, see ``Retry.DEFAULT``. Pass a 12:04:17 :class:`~urllib3.util.retry.Retry` object for fine-grained control 12:04:17 over different types of retries. 12:04:17 Pass an integer number to retry connection errors that many times, 12:04:17 but no other types of errors. Pass zero to never retry. 12:04:17 12:04:17 If ``False``, then retries are disabled and any exception is raised 12:04:17 immediately. Also, instead of raising a MaxRetryError on redirects, 12:04:17 the redirect response will be returned. 12:04:17 12:04:17 :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. 12:04:17 12:04:17 :param redirect: 12:04:17 If True, automatically handle redirects (status codes 301, 302, 12:04:17 303, 307, 308). Each redirect counts as a retry. Disabling retries 12:04:17 will disable redirect, too. 12:04:17 12:04:17 :param assert_same_host: 12:04:17 If ``True``, will make sure that the host of the pool requests is 12:04:17 consistent else will raise HostChangedError. When ``False``, you can 12:04:17 use the pool on an HTTP proxy and request foreign hosts. 12:04:17 12:04:17 :param timeout: 12:04:17 If specified, overrides the default timeout for this one 12:04:17 request. It may be a float (in seconds) or an instance of 12:04:17 :class:`urllib3.util.Timeout`. 12:04:17 12:04:17 :param pool_timeout: 12:04:17 If set and the pool is set to block=True, then this method will 12:04:17 block for ``pool_timeout`` seconds and raise EmptyPoolError if no 12:04:17 connection is available within the time period. 12:04:17 12:04:17 :param bool preload_content: 12:04:17 If True, the response's body will be preloaded into memory. 12:04:17 12:04:17 :param bool decode_content: 12:04:17 If True, will attempt to decode the body based on the 12:04:17 'content-encoding' header. 
12:04:17 12:04:17 :param release_conn: 12:04:17 If False, then the urlopen call will not release the connection 12:04:17 back into the pool once a response is received (but will release if 12:04:17 you read the entire contents of the response such as when 12:04:17 `preload_content=True`). This is useful if you're not preloading 12:04:17 the response's content immediately. You will need to call 12:04:17 ``r.release_conn()`` on the response ``r`` to return the connection 12:04:17 back into the pool. If None, it takes the value of ``preload_content`` 12:04:17 which defaults to ``True``. 12:04:17 12:04:17 :param bool chunked: 12:04:17 If True, urllib3 will send the body using chunked transfer 12:04:17 encoding. Otherwise, urllib3 will send the body using the standard 12:04:17 content-length form. Defaults to False. 12:04:17 12:04:17 :param int body_pos: 12:04:17 Position to seek to in file-like body in the event of a retry or 12:04:17 redirect. Typically this won't need to be set because urllib3 will 12:04:17 auto-populate the value when needed. 12:04:17 """ 12:04:17 parsed_url = parse_url(url) 12:04:17 destination_scheme = parsed_url.scheme 12:04:17 12:04:17 if headers is None: 12:04:17 headers = self.headers 12:04:17 12:04:17 if not isinstance(retries, Retry): 12:04:17 retries = Retry.from_int(retries, redirect=redirect, default=self.retries) 12:04:17 12:04:17 if release_conn is None: 12:04:17 release_conn = preload_content 12:04:17 12:04:17 # Check host 12:04:17 if assert_same_host and not self.is_same_host(url): 12:04:17 raise HostChangedError(self, url, retries) 12:04:17 12:04:17 # Ensure that the URL we're connecting to is properly encoded 12:04:17 if url.startswith("/"): 12:04:17 url = to_str(_encode_target(url)) 12:04:17 else: 12:04:17 url = to_str(parsed_url.url) 12:04:17 12:04:17 conn = None 12:04:17 12:04:17 # Track whether `conn` needs to be released before 12:04:17 # returning/raising/recursing. Update this variable if necessary, and 12:04:17 # leave `release_conn` constant throughout the function. That way, if 12:04:17 # the function recurses, the original value of `release_conn` will be 12:04:17 # passed down into the recursive call, and its value will be respected. 12:04:17 # 12:04:17 # See issue #651 [1] for details. 12:04:17 # 12:04:17 # [1] 12:04:17 release_this_conn = release_conn 12:04:17 12:04:17 http_tunnel_required = connection_requires_http_tunnel( 12:04:17 self.proxy, self.proxy_config, destination_scheme 12:04:17 ) 12:04:17 12:04:17 # Merge the proxy headers. Only done when not using HTTP CONNECT. We 12:04:17 # have to copy the headers dict so we can safely change it without those 12:04:17 # changes being reflected in anyone else's copy. 12:04:17 if not http_tunnel_required: 12:04:17 headers = headers.copy() # type: ignore[attr-defined] 12:04:17 headers.update(self.proxy_headers) # type: ignore[union-attr] 12:04:17 12:04:17 # Must keep the exception bound to a separate variable or else Python 3 12:04:17 # complains about UnboundLocalError. 12:04:17 err = None 12:04:17 12:04:17 # Keep track of whether we cleanly exited the except block. This 12:04:17 # ensures we do proper cleanup in finally. 12:04:17 clean_exit = False 12:04:17 12:04:17 # Rewind body position, if needed. Record current position 12:04:17 # for future rewinds in the event of a redirect/retry. 12:04:17 body_pos = set_file_position(body, body_pos) 12:04:17 12:04:17 try: 12:04:17 # Request a connection from the queue. 
12:04:17 timeout_obj = self._get_timeout(timeout) 12:04:17 conn = self._get_conn(timeout=pool_timeout) 12:04:17 12:04:17 conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] 12:04:17 12:04:17 # Is this a closed/new connection that requires CONNECT tunnelling? 12:04:17 if self.proxy is not None and http_tunnel_required and conn.is_closed: 12:04:17 try: 12:04:17 self._prepare_proxy(conn) 12:04:17 except (BaseSSLError, OSError, SocketTimeout) as e: 12:04:17 self._raise_timeout( 12:04:17 err=e, url=self.proxy.url, timeout_value=conn.timeout 12:04:17 ) 12:04:17 raise 12:04:17 12:04:17 # If we're going to release the connection in ``finally:``, then 12:04:17 # the response doesn't need to know about the connection. Otherwise 12:04:17 # it will also try to release it and we'll have a double-release 12:04:17 # mess. 12:04:17 response_conn = conn if not release_conn else None 12:04:17 12:04:17 # Make the request on the HTTPConnection object 12:04:17 > response = self._make_request( 12:04:17 conn, 12:04:17 method, 12:04:17 url, 12:04:17 timeout=timeout_obj, 12:04:17 body=body, 12:04:17 headers=headers, 12:04:17 chunked=chunked, 12:04:17 retries=retries, 12:04:17 response_conn=response_conn, 12:04:17 preload_content=preload_content, 12:04:17 decode_content=decode_content, 12:04:17 **response_kw, 12:04:17 ) 12:04:17 12:04:17 ../.tox/tests221/lib/python3.11/site-packages/urllib3/connectionpool.py:787: 12:04:17 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 12:04:17 ../.tox/tests221/lib/python3.11/site-packages/urllib3/connectionpool.py:493: in _make_request 12:04:17 conn.request( 12:04:17 ../.tox/tests221/lib/python3.11/site-packages/urllib3/connection.py:494: in request 12:04:17 self.endheaders() 12:04:17 /opt/pyenv/versions/3.11.10/lib/python3.11/http/client.py:1298: in endheaders 12:04:17 self._send_output(message_body, encode_chunked=encode_chunked) 12:04:17 /opt/pyenv/versions/3.11.10/lib/python3.11/http/client.py:1058: in _send_output 12:04:17 self.send(msg) 12:04:17 /opt/pyenv/versions/3.11.10/lib/python3.11/http/client.py:996: in send 12:04:17 self.connect() 12:04:17 ../.tox/tests221/lib/python3.11/site-packages/urllib3/connection.py:325: in connect 12:04:17 self.sock = self._new_conn() 12:04:17 ^^^^^^^^^^^^^^^^ 12:04:17 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 12:04:17 12:04:17 self = 12:04:17 12:04:17 def _new_conn(self) -> socket.socket: 12:04:17 """Establish a socket connection and set nodelay settings on it. 12:04:17 12:04:17 :return: New socket connection. 12:04:17 """ 12:04:17 try: 12:04:17 sock = connection.create_connection( 12:04:17 (self._dns_host, self.port), 12:04:17 self.timeout, 12:04:17 source_address=self.source_address, 12:04:17 socket_options=self.socket_options, 12:04:17 ) 12:04:17 except socket.gaierror as e: 12:04:17 raise NameResolutionError(self.host, self, e) from e 12:04:17 except SocketTimeout as e: 12:04:17 raise ConnectTimeoutError( 12:04:17 self, 12:04:17 f"Connection to {self.host} timed out. 
(connect timeout={self.timeout})", 12:04:17 ) from e 12:04:17 12:04:17 except OSError as e: 12:04:17 > raise NewConnectionError( 12:04:17 self, f"Failed to establish a new connection: {e}" 12:04:17 ) from e 12:04:17 E urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused 12:04:17 12:04:17 ../.tox/tests221/lib/python3.11/site-packages/urllib3/connection.py:213: NewConnectionError 12:04:17 12:04:17 The above exception was the direct cause of the following exception: 12:04:17 12:04:17 self = 12:04:17 request = , stream = False 12:04:17 timeout = Timeout(connect=30, read=30, total=None), verify = True, cert = None 12:04:17 proxies = OrderedDict() 12:04:17 12:04:17 def send( 12:04:17 self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 12:04:17 ): 12:04:17 """Sends PreparedRequest object. Returns Response object. 12:04:17 12:04:17 :param request: The :class:`PreparedRequest ` being sent. 12:04:17 :param stream: (optional) Whether to stream the request content. 12:04:17 :param timeout: (optional) How long to wait for the server to send 12:04:17 data before giving up, as a float, or a :ref:`(connect timeout, 12:04:17 read timeout) ` tuple. 12:04:17 :type timeout: float or tuple or urllib3 Timeout object 12:04:17 :param verify: (optional) Either a boolean, in which case it controls whether 12:04:17 we verify the server's TLS certificate, or a string, in which case it 12:04:17 must be a path to a CA bundle to use 12:04:17 :param cert: (optional) Any user-provided SSL certificate to be trusted. 12:04:17 :param proxies: (optional) The proxies dictionary to apply to the request. 12:04:17 :rtype: requests.Response 12:04:17 """ 12:04:17 12:04:17 try: 12:04:17 conn = self.get_connection_with_tls_context( 12:04:17 request, verify, proxies=proxies, cert=cert 12:04:17 ) 12:04:17 except LocationValueError as e: 12:04:17 raise InvalidURL(e, request=request) 12:04:17 12:04:17 self.cert_verify(conn, request.url, verify, cert) 12:04:17 url = self.request_url(request, proxies) 12:04:17 self.add_headers( 12:04:17 request, 12:04:17 stream=stream, 12:04:17 timeout=timeout, 12:04:17 verify=verify, 12:04:17 cert=cert, 12:04:17 proxies=proxies, 12:04:17 ) 12:04:17 12:04:17 chunked = not (request.body is None or "Content-Length" in request.headers) 12:04:17 12:04:17 if isinstance(timeout, tuple): 12:04:17 try: 12:04:17 connect, read = timeout 12:04:17 timeout = TimeoutSauce(connect=connect, read=read) 12:04:17 except ValueError: 12:04:17 raise ValueError( 12:04:17 f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 12:04:17 f"or a single float to set both timeouts to the same value." 
12:04:17 ) 12:04:17 elif isinstance(timeout, TimeoutSauce): 12:04:17 pass 12:04:17 else: 12:04:17 timeout = TimeoutSauce(connect=timeout, read=timeout) 12:04:17 12:04:17 try: 12:04:17 > resp = conn.urlopen( 12:04:17 method=request.method, 12:04:17 url=url, 12:04:17 body=request.body, 12:04:17 headers=request.headers, 12:04:17 redirect=False, 12:04:17 assert_same_host=False, 12:04:17 preload_content=False, 12:04:17 decode_content=False, 12:04:17 retries=self.max_retries, 12:04:17 timeout=timeout, 12:04:17 chunked=chunked, 12:04:17 ) 12:04:17 12:04:17 ../.tox/tests221/lib/python3.11/site-packages/requests/adapters.py:667: 12:04:17 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 12:04:17 ../.tox/tests221/lib/python3.11/site-packages/urllib3/connectionpool.py:841: in urlopen 12:04:17 retries = retries.increment( 12:04:17 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 12:04:17 12:04:17 self = Retry(total=0, connect=None, read=False, redirect=None, status=None) 12:04:17 method = 'GET' 12:04:17 url = '/rests/data/transportpce-portmapping:network/nodes=ROADM-A1/node-info' 12:04:17 response = None 12:04:17 error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused') 12:04:17 _pool = 12:04:17 _stacktrace = 12:04:17 12:04:17 def increment( 12:04:17 self, 12:04:17 method: str | None = None, 12:04:17 url: str | None = None, 12:04:17 response: BaseHTTPResponse | None = None, 12:04:17 error: Exception | None = None, 12:04:17 _pool: ConnectionPool | None = None, 12:04:17 _stacktrace: TracebackType | None = None, 12:04:17 ) -> Self: 12:04:17 """Return a new Retry object with incremented retry counters. 12:04:17 12:04:17 :param response: A response object, or None, if the server did not 12:04:17 return a response. 12:04:17 :type response: :class:`~urllib3.response.BaseHTTPResponse` 12:04:17 :param Exception error: An error encountered during the request, or 12:04:17 None if the response was received successfully. 12:04:17 12:04:17 :return: A new ``Retry`` object. 12:04:17 """ 12:04:17 if self.total is False and error: 12:04:17 # Disabled, indicate to re-raise the error. 12:04:17 raise reraise(type(error), error, _stacktrace) 12:04:17 12:04:17 total = self.total 12:04:17 if total is not None: 12:04:17 total -= 1 12:04:17 12:04:17 connect = self.connect 12:04:17 read = self.read 12:04:17 redirect = self.redirect 12:04:17 status_count = self.status 12:04:17 other = self.other 12:04:17 cause = "unknown" 12:04:17 status = None 12:04:17 redirect_location = None 12:04:17 12:04:17 if error and self._is_connection_error(error): 12:04:17 # Connect retry? 12:04:17 if connect is False: 12:04:17 raise reraise(type(error), error, _stacktrace) 12:04:17 elif connect is not None: 12:04:17 connect -= 1 12:04:17 12:04:17 elif error and self._is_read_error(error): 12:04:17 # Read retry? 12:04:17 if read is False or method is None or not self._is_method_retryable(method): 12:04:17 raise reraise(type(error), error, _stacktrace) 12:04:17 elif read is not None: 12:04:17 read -= 1 12:04:17 12:04:17 elif error: 12:04:17 # Other retry? 12:04:17 if other is not None: 12:04:17 other -= 1 12:04:17 12:04:17 elif response and response.get_redirect_location(): 12:04:17 # Redirect retry? 
12:04:17 if redirect is not None: 12:04:17 redirect -= 1 12:04:17 cause = "too many redirects" 12:04:17 response_redirect_location = response.get_redirect_location() 12:04:17 if response_redirect_location: 12:04:17 redirect_location = response_redirect_location 12:04:17 status = response.status 12:04:17 12:04:17 else: 12:04:17 # Incrementing because of a server error like a 500 in 12:04:17 # status_forcelist and the given method is in the allowed_methods 12:04:17 cause = ResponseError.GENERIC_ERROR 12:04:17 if response and response.status: 12:04:17 if status_count is not None: 12:04:17 status_count -= 1 12:04:17 cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status) 12:04:17 status = response.status 12:04:17 12:04:17 history = self.history + ( 12:04:17 RequestHistory(method, url, error, status, redirect_location), 12:04:17 ) 12:04:17 12:04:17 new_retry = self.new( 12:04:17 total=total, 12:04:17 connect=connect, 12:04:17 read=read, 12:04:17 redirect=redirect, 12:04:17 status=status_count, 12:04:17 other=other, 12:04:17 history=history, 12:04:17 ) 12:04:17 12:04:17 if new_retry.is_exhausted(): 12:04:17 reason = error or ResponseError(cause) 12:04:17 > raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type] 12:04:17 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 12:04:17 E urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=8183): Max retries exceeded with url: /rests/data/transportpce-portmapping:network/nodes=ROADM-A1/node-info (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 12:04:17 12:04:17 ../.tox/tests221/lib/python3.11/site-packages/urllib3/util/retry.py:519: MaxRetryError 12:04:17 12:04:17 During handling of the above exception, another exception occurred: 12:04:17 12:04:17 self = 12:04:17 12:04:17 def test_35_rdm_device_not_connected(self): 12:04:17 > response = test_utils.get_portmapping_node_attr("ROADM-A1", "node-info", None) 12:04:17 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 12:04:17 12:04:17 transportpce_tests/2.2.1/test01_portmapping.py:394: 12:04:17 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 12:04:17 transportpce_tests/common/test_utils.py:492: in get_portmapping_node_attr 12:04:17 response = get_request(target_url) 12:04:17 ^^^^^^^^^^^^^^^^^^^^^^^ 12:04:17 transportpce_tests/common/test_utils.py:116: in get_request 12:04:17 return requests.request( 12:04:17 ../.tox/tests221/lib/python3.11/site-packages/requests/api.py:59: in request 12:04:17 return session.request(method=method, url=url, **kwargs) 12:04:17 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 12:04:17 ../.tox/tests221/lib/python3.11/site-packages/requests/sessions.py:589: in request 12:04:17 resp = self.send(prep, **send_kwargs) 12:04:17 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 12:04:17 ../.tox/tests221/lib/python3.11/site-packages/requests/sessions.py:703: in send 12:04:17 r = adapter.send(request, **kwargs) 12:04:17 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 12:04:17 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 12:04:17 12:04:17 self = 12:04:17 request = , stream = False 12:04:17 timeout = Timeout(connect=30, read=30, total=None), verify = True, cert = None 12:04:17 proxies = OrderedDict() 12:04:17 12:04:17 def send( 12:04:17 self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 12:04:17 ): 12:04:17 """Sends PreparedRequest object. Returns Response object. 
12:04:17 12:04:17 :param request: The :class:`PreparedRequest ` being sent. 12:04:17 :param stream: (optional) Whether to stream the request content. 12:04:17 :param timeout: (optional) How long to wait for the server to send 12:04:17 data before giving up, as a float, or a :ref:`(connect timeout, 12:04:17 read timeout) ` tuple. 12:04:17 :type timeout: float or tuple or urllib3 Timeout object 12:04:17 :param verify: (optional) Either a boolean, in which case it controls whether 12:04:17 we verify the server's TLS certificate, or a string, in which case it 12:04:17 must be a path to a CA bundle to use 12:04:17 :param cert: (optional) Any user-provided SSL certificate to be trusted. 12:04:17 :param proxies: (optional) The proxies dictionary to apply to the request. 12:04:17 :rtype: requests.Response 12:04:17 """ 12:04:17 12:04:17 try: 12:04:17 conn = self.get_connection_with_tls_context( 12:04:17 request, verify, proxies=proxies, cert=cert 12:04:17 ) 12:04:17 except LocationValueError as e: 12:04:17 raise InvalidURL(e, request=request) 12:04:17 12:04:17 self.cert_verify(conn, request.url, verify, cert) 12:04:17 url = self.request_url(request, proxies) 12:04:17 self.add_headers( 12:04:17 request, 12:04:17 stream=stream, 12:04:17 timeout=timeout, 12:04:17 verify=verify, 12:04:17 cert=cert, 12:04:17 proxies=proxies, 12:04:17 ) 12:04:17 12:04:17 chunked = not (request.body is None or "Content-Length" in request.headers) 12:04:17 12:04:17 if isinstance(timeout, tuple): 12:04:17 try: 12:04:17 connect, read = timeout 12:04:17 timeout = TimeoutSauce(connect=connect, read=read) 12:04:17 except ValueError: 12:04:17 raise ValueError( 12:04:17 f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 12:04:17 f"or a single float to set both timeouts to the same value." 12:04:17 ) 12:04:17 elif isinstance(timeout, TimeoutSauce): 12:04:17 pass 12:04:17 else: 12:04:17 timeout = TimeoutSauce(connect=timeout, read=timeout) 12:04:17 12:04:17 try: 12:04:17 resp = conn.urlopen( 12:04:17 method=request.method, 12:04:17 url=url, 12:04:17 body=request.body, 12:04:17 headers=request.headers, 12:04:17 redirect=False, 12:04:17 assert_same_host=False, 12:04:17 preload_content=False, 12:04:17 decode_content=False, 12:04:17 retries=self.max_retries, 12:04:17 timeout=timeout, 12:04:17 chunked=chunked, 12:04:17 ) 12:04:17 12:04:17 except (ProtocolError, OSError) as err: 12:04:17 raise ConnectionError(err, request=request) 12:04:17 12:04:17 except MaxRetryError as e: 12:04:17 if isinstance(e.reason, ConnectTimeoutError): 12:04:17 # TODO: Remove this in 3.0.0: see #2811 12:04:17 if not isinstance(e.reason, NewConnectionError): 12:04:17 raise ConnectTimeout(e, request=request) 12:04:17 12:04:17 if isinstance(e.reason, ResponseError): 12:04:17 raise RetryError(e, request=request) 12:04:17 12:04:17 if isinstance(e.reason, _ProxyError): 12:04:17 raise ProxyError(e, request=request) 12:04:17 12:04:17 if isinstance(e.reason, _SSLError): 12:04:17 # This branch is for urllib3 v1.22 and later. 
12:04:17 raise SSLError(e, request=request) 12:04:17 12:04:17 > raise ConnectionError(e, request=request) 12:04:17 E requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=8183): Max retries exceeded with url: /rests/data/transportpce-portmapping:network/nodes=ROADM-A1/node-info (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 12:04:17 12:04:17 ../.tox/tests221/lib/python3.11/site-packages/requests/adapters.py:700: ConnectionError 12:04:17 ----------------------------- Captured stdout call ----------------------------- 12:04:17 execution of test_35_rdm_device_not_connected 12:04:17 --------------------------- Captured stdout teardown --------------------------- 12:04:17 all processes killed 12:04:17 =========================== short test summary info ============================ 12:04:17 FAILED transportpce_tests/2.2.1/test01_portmapping.py::TransportPCEPortMappingTesting::test_01_rdm_device_connection 12:04:17 FAILED transportpce_tests/2.2.1/test01_portmapping.py::TransportPCEPortMappingTesting::test_02_rdm_device_connected 12:04:17 FAILED transportpce_tests/2.2.1/test01_portmapping.py::TransportPCEPortMappingTesting::test_03_rdm_portmapping_info 12:04:17 FAILED transportpce_tests/2.2.1/test01_portmapping.py::TransportPCEPortMappingTesting::test_04_rdm_portmapping_DEG1_TTP_TXRX 12:04:17 FAILED transportpce_tests/2.2.1/test01_portmapping.py::TransportPCEPortMappingTesting::test_05_rdm_portmapping_DEG2_TTP_TXRX_with_ots_oms 12:04:17 FAILED transportpce_tests/2.2.1/test01_portmapping.py::TransportPCEPortMappingTesting::test_06_rdm_portmapping_SRG1_PP3_TXRX 12:04:17 FAILED transportpce_tests/2.2.1/test01_portmapping.py::TransportPCEPortMappingTesting::test_07_rdm_portmapping_SRG3_PP1_TXRX 12:04:17 FAILED transportpce_tests/2.2.1/test01_portmapping.py::TransportPCEPortMappingTesting::test_08_xpdr_device_connection 12:04:17 FAILED transportpce_tests/2.2.1/test01_portmapping.py::TransportPCEPortMappingTesting::test_09_xpdr_device_connected 12:04:17 FAILED transportpce_tests/2.2.1/test01_portmapping.py::TransportPCEPortMappingTesting::test_10_xpdr_portmapping_info 12:04:17 FAILED transportpce_tests/2.2.1/test01_portmapping.py::TransportPCEPortMappingTesting::test_11_xpdr_portmapping_NETWORK1 12:04:17 FAILED transportpce_tests/2.2.1/test01_portmapping.py::TransportPCEPortMappingTesting::test_12_xpdr_portmapping_XPDR2_NETWORK1 12:04:17 FAILED transportpce_tests/2.2.1/test01_portmapping.py::TransportPCEPortMappingTesting::test_13_xpdr_portmapping_XPDR1_CLIENT1 12:04:17 FAILED transportpce_tests/2.2.1/test01_portmapping.py::TransportPCEPortMappingTesting::test_14_xpdr_portmapping_XPDR1_CLIENT2 12:04:17 FAILED transportpce_tests/2.2.1/test01_portmapping.py::TransportPCEPortMappingTesting::test_15_spdr_device_connection 12:04:17 FAILED transportpce_tests/2.2.1/test01_portmapping.py::TransportPCEPortMappingTesting::test_16_spdr_device_connected 12:04:17 FAILED transportpce_tests/2.2.1/test01_portmapping.py::TransportPCEPortMappingTesting::test_17_spdr_portmapping_info 12:04:17 FAILED transportpce_tests/2.2.1/test01_portmapping.py::TransportPCEPortMappingTesting::test_18_spdr_switching_pool_1 12:04:17 FAILED transportpce_tests/2.2.1/test01_portmapping.py::TransportPCEPortMappingTesting::test_19_spdr_switching_pool_2 12:04:17 FAILED transportpce_tests/2.2.1/test01_portmapping.py::TransportPCEPortMappingTesting::test_20_spdr_switching_pool_3 12:04:17 FAILED 
transportpce_tests/2.2.1/test01_portmapping.py::TransportPCEPortMappingTesting::test_21_spdr_portmapping_mappings 12:04:17 FAILED transportpce_tests/2.2.1/test01_portmapping.py::TransportPCEPortMappingTesting::test_22_spdr_portmapping_XPDR1_CLIENT1 12:04:17 FAILED transportpce_tests/2.2.1/test01_portmapping.py::TransportPCEPortMappingTesting::test_23_spdr_portmapping_XPDR1_NETWORK1 12:04:17 FAILED transportpce_tests/2.2.1/test01_portmapping.py::TransportPCEPortMappingTesting::test_24_spdr_portmapping_XPDR2_CLIENT2 12:04:17 FAILED transportpce_tests/2.2.1/test01_portmapping.py::TransportPCEPortMappingTesting::test_25_spdr_portmapping_XPDR2_NETWORK2 12:04:17 FAILED transportpce_tests/2.2.1/test01_portmapping.py::TransportPCEPortMappingTesting::test_26_spdr_portmapping_XPDR3_CLIENT3 12:04:17 FAILED transportpce_tests/2.2.1/test01_portmapping.py::TransportPCEPortMappingTesting::test_27_spdr_portmapping_XPDR3_NETWORK1 12:04:17 FAILED transportpce_tests/2.2.1/test01_portmapping.py::TransportPCEPortMappingTesting::test_28_spdr_device_disconnection 12:04:17 FAILED transportpce_tests/2.2.1/test01_portmapping.py::TransportPCEPortMappingTesting::test_29_xpdr_device_disconnected 12:04:17 FAILED transportpce_tests/2.2.1/test01_portmapping.py::TransportPCEPortMappingTesting::test_30_xpdr_device_disconnection 12:04:17 FAILED transportpce_tests/2.2.1/test01_portmapping.py::TransportPCEPortMappingTesting::test_31_xpdr_device_disconnected 12:04:17 FAILED transportpce_tests/2.2.1/test01_portmapping.py::TransportPCEPortMappingTesting::test_32_xpdr_device_not_connected 12:04:17 FAILED transportpce_tests/2.2.1/test01_portmapping.py::TransportPCEPortMappingTesting::test_33_rdm_device_disconnection 12:04:17 FAILED transportpce_tests/2.2.1/test01_portmapping.py::TransportPCEPortMappingTesting::test_34_rdm_device_disconnected 12:04:17 FAILED transportpce_tests/2.2.1/test01_portmapping.py::TransportPCEPortMappingTesting::test_35_rdm_device_not_connected 12:04:17 35 failed in 75.86s (0:01:15) 12:04:17 tests221: exit 1 (76.18 seconds) /w/workspace/transportpce-tox-verify-transportpce-master/tests> ./launch_tests.sh 2.2.1 pid=29725 12:04:17 tests221: FAIL ✖ in 1 minute 22.71 seconds 12:04:17 tests121: install_deps> python -I -m pip install 'setuptools>=7.0' -r /w/workspace/transportpce-tox-verify-transportpce-master/tests/requirements.txt -r /w/workspace/transportpce-tox-verify-transportpce-master/tests/test-requirements.txt 12:04:24 tests121: freeze> python -m pip freeze --all 12:04:24 tests121: bcrypt==4.3.0,certifi==2025.6.15,cffi==1.17.1,charset-normalizer==3.4.2,cryptography==45.0.5,dict2xml==1.7.6,idna==3.10,iniconfig==2.1.0,lxml==5.4.0,netconf-client==3.2.0,packaging==25.0,paramiko==3.5.1,pip==25.1.1,pluggy==1.6.0,psutil==7.0.0,pycparser==2.22,Pygments==2.19.2,PyNaCl==1.5.0,pytest==8.4.1,requests==2.32.4,setuptools==80.3.1,urllib3==2.5.0 12:04:24 tests121: commands[0] /w/workspace/transportpce-tox-verify-transportpce-master/tests> ./launch_tests.sh 1.2.1 12:04:24 using environment variables from ./karaf121.env 12:04:24 pytest -q transportpce_tests/1.2.1/test01_portmapping.py 12:04:59 ..................... [100%] 12:05:49 21 passed in 84.57s (0:01:24) 12:05:49 pytest -q transportpce_tests/1.2.1/test02_topo_portmapping.py 12:06:33 ...... [100%] 12:06:47 6 passed in 57.79s 12:06:47 pytest -q transportpce_tests/1.2.1/test03_topology.py 12:07:29 ............................................ 
[100%] 12:09:04 44 passed in 137.07s (0:02:17) 12:09:04 pytest -q transportpce_tests/1.2.1/test04_renderer_service_path_nominal.py 12:09:35 ........................ [100%] 12:10:26 24 passed in 81.95s (0:01:21) 12:10:26 pytest -q transportpce_tests/1.2.1/test05_olm.py 12:11:05 ........................................ [100%] 12:13:27 40 passed in 180.82s (0:03:00) 12:13:27 pytest -q transportpce_tests/1.2.1/test06_end2end.py 12:14:07 ...................................................... [100%] 12:22:22 54 passed in 534.53s (0:08:54) 12:22:22 tests121: OK ✔ in 18 minutes 5.27 seconds 12:22:22 tests_hybrid: install_deps> python -I -m pip install 'setuptools>=7.0' -r /w/workspace/transportpce-tox-verify-transportpce-master/tests/requirements.txt -r /w/workspace/transportpce-tox-verify-transportpce-master/tests/test-requirements.txt 12:22:29 tests_hybrid: freeze> python -m pip freeze --all 12:22:30 tests_hybrid: bcrypt==4.3.0,certifi==2025.6.15,cffi==1.17.1,charset-normalizer==3.4.2,cryptography==45.0.5,dict2xml==1.7.6,idna==3.10,iniconfig==2.1.0,lxml==5.4.0,netconf-client==3.2.0,packaging==25.0,paramiko==3.5.1,pip==25.1.1,pluggy==1.6.0,psutil==7.0.0,pycparser==2.22,Pygments==2.19.2,PyNaCl==1.5.0,pytest==8.4.1,requests==2.32.4,setuptools==80.3.1,urllib3==2.5.0 12:22:30 tests_hybrid: commands[0] /w/workspace/transportpce-tox-verify-transportpce-master/tests> ./launch_tests.sh hybrid 12:22:30 using environment variables from ./karaf121.env 12:22:30 pytest -q transportpce_tests/hybrid/test01_device_change_notifications.py 12:23:15 ................................................... [100%] 12:25:01 51 passed in 151.34s (0:02:31) 12:25:01 pytest -q transportpce_tests/hybrid/test02_B100G_end2end.py 12:25:43 ........................................................................ [ 66%] 12:30:03 ..................................... [100%] 12:32:09 109 passed in 427.77s (0:07:07) 12:32:09 pytest -q transportpce_tests/hybrid/test03_autonomous_reroute.py 12:32:57 ..................................................... 
[100%] 12:36:30 53 passed in 260.14s (0:04:20) 12:36:30 tests_hybrid: OK ✔ in 14 minutes 7.51 seconds 12:36:30 buildlighty: install_deps> python -I -m pip install 'setuptools>=7.0' -r /w/workspace/transportpce-tox-verify-transportpce-master/tests/requirements.txt -r /w/workspace/transportpce-tox-verify-transportpce-master/tests/test-requirements.txt 12:36:36 buildlighty: freeze> python -m pip freeze --all 12:36:37 buildlighty: bcrypt==4.3.0,certifi==2025.6.15,cffi==1.17.1,charset-normalizer==3.4.2,cryptography==45.0.5,dict2xml==1.7.6,idna==3.10,iniconfig==2.1.0,lxml==5.4.0,netconf-client==3.2.0,packaging==25.0,paramiko==3.5.1,pip==25.1.1,pluggy==1.6.0,psutil==7.0.0,pycparser==2.22,Pygments==2.19.2,PyNaCl==1.5.0,pytest==8.4.1,requests==2.32.4,setuptools==80.3.1,urllib3==2.5.0 12:36:37 buildlighty: commands[0] /w/workspace/transportpce-tox-verify-transportpce-master/lighty> ./build.sh 12:36:37 NOTE: Picked up JDK_JAVA_OPTIONS: --add-opens=java.base/java.lang=ALL-UNNAMED --add-opens=java.base/java.nio=ALL-UNNAMED 12:36:58 buildcontroller: OK (113.88=setup[7.90]+cmd[105.97] seconds) 12:36:58 testsPCE: OK (305.75=setup[59.13]+cmd[246.62] seconds) 12:36:58 sims: OK (12.09=setup[7.55]+cmd[4.54] seconds) 12:36:58 build_karaf_tests121: OK (50.35=setup[7.64]+cmd[42.71] seconds) 12:36:58 tests121: OK (1085.27=setup[6.80]+cmd[1078.47] seconds) 12:36:58 build_karaf_tests221: OK (51.30=setup[7.51]+cmd[43.79] seconds) 12:36:58 tests_tapi: FAIL code 1 (149.07=setup[6.64]+cmd[142.43] seconds) 12:36:58 tests221: FAIL code 1 (82.71=setup[6.53]+cmd[76.18] seconds) 12:36:58 build_karaf_tests71: OK (50.37=setup[13.33]+cmd[37.04] seconds) 12:36:58 tests71: OK (415.40=setup[6.85]+cmd[408.56] seconds) 12:36:58 build_karaf_tests190: OK (46.89=setup[7.15]+cmd[39.74] seconds) 12:36:58 tests190: OK (162.14=setup[8.47]+cmd[153.67] seconds) 12:36:58 build_karaf_tests_hybrid: OK (47.60=setup[6.61]+cmd[40.99] seconds) 12:36:58 tests_hybrid: OK (847.51=setup[7.40]+cmd[840.10] seconds) 12:36:58 buildlighty: OK (27.81=setup[6.94]+cmd[20.88] seconds) 12:36:58 docs: OK (34.01=setup[31.25]+cmd[2.77] seconds) 12:36:58 docs-linkcheck: OK (34.90=setup[30.86]+cmd[4.04] seconds) 12:36:58 checkbashisms: OK (3.13=setup[1.86]+cmd[0.01,0.05,1.21] seconds) 12:36:58 pre-commit: FAIL code 1 (50.06=setup[2.76]+cmd[0.00,0.01,39.50,7.78] seconds) 12:36:58 pylint: OK (28.59=setup[3.56]+cmd[25.03] seconds) 12:36:58 evaluation failed :( (3027.50 seconds) 12:36:58 + tox_status=255 12:36:58 + echo '---> Completed tox runs' 12:36:58 ---> Completed tox runs 12:36:58 + for i in .tox/*/log 12:36:58 ++ echo .tox/build_karaf_tests121/log 12:36:58 ++ awk -F/ '{print $2}' 12:36:58 + tox_env=build_karaf_tests121 12:36:58 + cp -r .tox/build_karaf_tests121/log /w/workspace/transportpce-tox-verify-transportpce-master/archives/tox/build_karaf_tests121 12:36:58 + for i in .tox/*/log 12:36:58 ++ echo .tox/build_karaf_tests190/log 12:36:58 ++ awk -F/ '{print $2}' 12:36:58 + tox_env=build_karaf_tests190 12:36:58 + cp -r .tox/build_karaf_tests190/log /w/workspace/transportpce-tox-verify-transportpce-master/archives/tox/build_karaf_tests190 12:36:58 + for i in .tox/*/log 12:36:58 ++ echo .tox/build_karaf_tests221/log 12:36:58 ++ awk -F/ '{print $2}' 12:36:58 + tox_env=build_karaf_tests221 12:36:58 + cp -r .tox/build_karaf_tests221/log /w/workspace/transportpce-tox-verify-transportpce-master/archives/tox/build_karaf_tests221 12:36:58 + for i in .tox/*/log 12:36:58 ++ echo .tox/build_karaf_tests71/log 12:36:58 ++ awk -F/ '{print $2}' 12:36:58 + 
tox_env=build_karaf_tests71 12:36:58 + cp -r .tox/build_karaf_tests71/log /w/workspace/transportpce-tox-verify-transportpce-master/archives/tox/build_karaf_tests71 12:36:58 + for i in .tox/*/log 12:36:58 ++ echo .tox/build_karaf_tests_hybrid/log 12:36:58 ++ awk -F/ '{print $2}' 12:36:58 + tox_env=build_karaf_tests_hybrid 12:36:58 + cp -r .tox/build_karaf_tests_hybrid/log /w/workspace/transportpce-tox-verify-transportpce-master/archives/tox/build_karaf_tests_hybrid 12:36:58 + for i in .tox/*/log 12:36:58 ++ echo .tox/buildcontroller/log 12:36:58 ++ awk -F/ '{print $2}' 12:36:58 + tox_env=buildcontroller 12:36:58 + cp -r .tox/buildcontroller/log /w/workspace/transportpce-tox-verify-transportpce-master/archives/tox/buildcontroller 12:36:58 + for i in .tox/*/log 12:36:58 ++ echo .tox/buildlighty/log 12:36:58 ++ awk -F/ '{print $2}' 12:36:58 + tox_env=buildlighty 12:36:58 + cp -r .tox/buildlighty/log /w/workspace/transportpce-tox-verify-transportpce-master/archives/tox/buildlighty 12:36:58 + for i in .tox/*/log 12:36:58 ++ echo .tox/checkbashisms/log 12:36:58 ++ awk -F/ '{print $2}' 12:36:58 + tox_env=checkbashisms 12:36:58 + cp -r .tox/checkbashisms/log /w/workspace/transportpce-tox-verify-transportpce-master/archives/tox/checkbashisms 12:36:58 + for i in .tox/*/log 12:36:58 ++ echo .tox/docs-linkcheck/log 12:36:58 ++ awk -F/ '{print $2}' 12:36:58 + tox_env=docs-linkcheck 12:36:58 + cp -r .tox/docs-linkcheck/log /w/workspace/transportpce-tox-verify-transportpce-master/archives/tox/docs-linkcheck 12:36:58 + for i in .tox/*/log 12:36:58 ++ echo .tox/docs/log 12:36:58 ++ awk -F/ '{print $2}' 12:36:58 + tox_env=docs 12:36:58 + cp -r .tox/docs/log /w/workspace/transportpce-tox-verify-transportpce-master/archives/tox/docs 12:36:58 + for i in .tox/*/log 12:36:58 ++ echo .tox/pre-commit/log 12:36:58 ++ awk -F/ '{print $2}' 12:36:58 + tox_env=pre-commit 12:36:58 + cp -r .tox/pre-commit/log /w/workspace/transportpce-tox-verify-transportpce-master/archives/tox/pre-commit 12:36:58 + for i in .tox/*/log 12:36:58 ++ awk -F/ '{print $2}' 12:36:58 ++ echo .tox/pylint/log 12:36:58 + tox_env=pylint 12:36:58 + cp -r .tox/pylint/log /w/workspace/transportpce-tox-verify-transportpce-master/archives/tox/pylint 12:36:58 + for i in .tox/*/log 12:36:58 ++ echo .tox/sims/log 12:36:58 ++ awk -F/ '{print $2}' 12:36:58 + tox_env=sims 12:36:58 + cp -r .tox/sims/log /w/workspace/transportpce-tox-verify-transportpce-master/archives/tox/sims 12:36:58 + for i in .tox/*/log 12:36:58 ++ echo .tox/tests121/log 12:36:58 ++ awk -F/ '{print $2}' 12:36:58 + tox_env=tests121 12:36:58 + cp -r .tox/tests121/log /w/workspace/transportpce-tox-verify-transportpce-master/archives/tox/tests121 12:36:58 + for i in .tox/*/log 12:36:58 ++ echo .tox/tests190/log 12:36:58 ++ awk -F/ '{print $2}' 12:36:58 + tox_env=tests190 12:36:58 + cp -r .tox/tests190/log /w/workspace/transportpce-tox-verify-transportpce-master/archives/tox/tests190 12:36:58 + for i in .tox/*/log 12:36:58 ++ echo .tox/tests221/log 12:36:58 ++ awk -F/ '{print $2}' 12:36:58 + tox_env=tests221 12:36:58 + cp -r .tox/tests221/log /w/workspace/transportpce-tox-verify-transportpce-master/archives/tox/tests221 12:36:58 + for i in .tox/*/log 12:36:58 ++ echo .tox/tests71/log 12:36:58 ++ awk -F/ '{print $2}' 12:36:58 + tox_env=tests71 12:36:58 + cp -r .tox/tests71/log /w/workspace/transportpce-tox-verify-transportpce-master/archives/tox/tests71 12:36:58 + for i in .tox/*/log 12:36:58 ++ echo .tox/testsPCE/log 12:36:58 ++ awk -F/ '{print $2}' 12:36:58 + tox_env=testsPCE 12:36:58 + cp -r 
.tox/testsPCE/log /w/workspace/transportpce-tox-verify-transportpce-master/archives/tox/testsPCE 12:36:58 + for i in .tox/*/log 12:36:58 ++ echo .tox/tests_hybrid/log 12:36:58 ++ awk -F/ '{print $2}' 12:36:58 + tox_env=tests_hybrid 12:36:58 + cp -r .tox/tests_hybrid/log /w/workspace/transportpce-tox-verify-transportpce-master/archives/tox/tests_hybrid 12:36:58 + for i in .tox/*/log 12:36:58 ++ echo .tox/tests_tapi/log 12:36:58 ++ awk -F/ '{print $2}' 12:36:58 + tox_env=tests_tapi 12:36:58 + cp -r .tox/tests_tapi/log /w/workspace/transportpce-tox-verify-transportpce-master/archives/tox/tests_tapi 12:36:58 + DOC_DIR=docs/_build/html 12:36:58 + [[ -d docs/_build/html ]] 12:36:58 + echo '---> Archiving generated docs' 12:36:58 ---> Archiving generated docs 12:36:58 + mv docs/_build/html /w/workspace/transportpce-tox-verify-transportpce-master/archives/docs 12:36:58 + echo '---> tox-run.sh ends' 12:36:58 ---> tox-run.sh ends 12:36:58 + test 255 -eq 0 12:36:58 + exit 255 12:36:58 ++ '[' 1 = 1 ']' 12:36:58 ++ '[' -x /usr/bin/clear_console ']' 12:36:58 ++ /usr/bin/clear_console -q 12:36:58 Build step 'Execute shell' marked build as failure 12:36:58 $ ssh-agent -k 12:36:58 unset SSH_AUTH_SOCK; 12:36:58 unset SSH_AGENT_PID; 12:36:58 echo Agent pid 16163 killed; 12:36:58 [ssh-agent] Stopped. 12:36:58 [PostBuildScript] - [INFO] Executing post build scripts. 12:36:58 [transportpce-tox-verify-transportpce-master] $ /bin/bash /tmp/jenkins6980193053814657055.sh 12:36:58 ---> sysstat.sh 12:36:59 [transportpce-tox-verify-transportpce-master] $ /bin/bash /tmp/jenkins9214139380994033268.sh 12:36:59 ---> package-listing.sh 12:36:59 ++ facter osfamily 12:36:59 ++ tr '[:upper:]' '[:lower:]' 12:36:59 + OS_FAMILY=debian 12:36:59 + workspace=/w/workspace/transportpce-tox-verify-transportpce-master 12:36:59 + START_PACKAGES=/tmp/packages_start.txt 12:36:59 + END_PACKAGES=/tmp/packages_end.txt 12:36:59 + DIFF_PACKAGES=/tmp/packages_diff.txt 12:36:59 + PACKAGES=/tmp/packages_start.txt 12:36:59 + '[' /w/workspace/transportpce-tox-verify-transportpce-master ']' 12:36:59 + PACKAGES=/tmp/packages_end.txt 12:36:59 + case "${OS_FAMILY}" in 12:36:59 + grep '^ii' 12:36:59 + dpkg -l 12:36:59 + '[' -f /tmp/packages_start.txt ']' 12:36:59 + '[' -f /tmp/packages_end.txt ']' 12:36:59 + diff /tmp/packages_start.txt /tmp/packages_end.txt 12:36:59 + '[' /w/workspace/transportpce-tox-verify-transportpce-master ']' 12:36:59 + mkdir -p /w/workspace/transportpce-tox-verify-transportpce-master/archives/ 12:36:59 + cp -f /tmp/packages_diff.txt /tmp/packages_end.txt /tmp/packages_start.txt /w/workspace/transportpce-tox-verify-transportpce-master/archives/ 12:36:59 [transportpce-tox-verify-transportpce-master] $ /bin/bash /tmp/jenkins16142282663271573356.sh 12:36:59 ---> capture-instance-metadata.sh 12:36:59 Setup pyenv: 12:36:59 system 12:36:59 3.8.20 12:36:59 3.9.20 12:36:59 3.10.15 12:36:59 * 3.11.10 (set by /w/workspace/transportpce-tox-verify-transportpce-master/.python-version) 12:36:59 lf-activate-venv(): INFO: Reuse venv:/tmp/venv-WFYk from file:/tmp/.os_lf_venv 12:37:01 lf-activate-venv(): INFO: Installing: lftools 12:37:17 lf-activate-venv(): INFO: Adding /tmp/venv-WFYk/bin to PATH 12:37:17 INFO: Running in OpenStack, capturing instance metadata 12:37:17 [transportpce-tox-verify-transportpce-master] $ /bin/bash /tmp/jenkins8899301579303954640.sh 12:37:17 provisioning config files... 
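All 35 tests221 failures reported above share a single root cause: every request that transportpce_tests/common/test_utils.py get_request() sends to the RESTCONF endpoint on http://localhost:8183 is refused ([Errno 111] Connection refused), urllib3 turns that into NewConnectionError, the Retry(total=0) policy exhausts immediately into MaxRetryError, and requests finally surfaces requests.exceptions.ConnectionError. Below is a minimal readiness-probe sketch, not part of the job or of test_utils.py: it assumes the same endpoint and the admin/admin basic-auth credentials visible in the captured request headers, and the helper name wait_for_restconf is hypothetical.

import time
import requests

def wait_for_restconf(base_url="http://localhost:8183", deadline_s=60):
    """Poll the RESTCONF server until it accepts TCP connections.

    Any HTTP response (401/404 included) proves the controller is listening;
    only the 'Connection refused' case seen in the log keeps us waiting.
    """
    deadline = time.monotonic() + deadline_s
    while time.monotonic() < deadline:
        try:
            requests.get(base_url + "/rests/data",
                         auth=("admin", "admin"), timeout=5)
            return True
        except requests.exceptions.ConnectionError:
            # Same [Errno 111] as in the failed tests: nothing listening yet.
            time.sleep(2)
    return False

if __name__ == "__main__":
    print("RESTCONF reachable:", wait_for_restconf())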
12:37:17 Could not find credentials [logs] for transportpce-tox-verify-transportpce-master #3331 12:37:17 copy managed file [jenkins-log-archives-settings] to file:/w/workspace/transportpce-tox-verify-transportpce-master@tmp/config15800502744294651573tmp 12:37:17 Regular expression run condition: Expression=[^.*logs-s3.*], Label=[odl-logs-s3-cloudfront-index] 12:37:17 Run condition [Regular expression match] enabling perform for step [Provide Configuration files] 12:37:17 provisioning config files... 12:37:18 copy managed file [jenkins-s3-log-ship] to file:/home/jenkins/.aws/credentials 12:37:18 [EnvInject] - Injecting environment variables from a build step. 12:37:18 [EnvInject] - Injecting as environment variables the properties content 12:37:18 SERVER_ID=logs 12:37:18 12:37:18 [EnvInject] - Variables injected successfully. 12:37:18 [transportpce-tox-verify-transportpce-master] $ /bin/bash /tmp/jenkins2353449592926879485.sh 12:37:18 ---> create-netrc.sh 12:37:18 WARN: Log server credential not found. 12:37:18 [transportpce-tox-verify-transportpce-master] $ /bin/bash /tmp/jenkins949007012696164856.sh 12:37:18 ---> python-tools-install.sh 12:37:18 Setup pyenv: 12:37:18 system 12:37:18 3.8.20 12:37:18 3.9.20 12:37:18 3.10.15 12:37:18 * 3.11.10 (set by /w/workspace/transportpce-tox-verify-transportpce-master/.python-version) 12:37:18 lf-activate-venv(): INFO: Reuse venv:/tmp/venv-WFYk from file:/tmp/.os_lf_venv 12:37:20 lf-activate-venv(): INFO: Installing: lftools 12:37:31 lf-activate-venv(): INFO: Adding /tmp/venv-WFYk/bin to PATH 12:37:31 [transportpce-tox-verify-transportpce-master] $ /bin/bash /tmp/jenkins14646871985905339496.sh 12:37:31 ---> sudo-logs.sh 12:37:31 Archiving 'sudo' log.. 12:37:32 [transportpce-tox-verify-transportpce-master] $ /bin/bash /tmp/jenkins974831160386019614.sh 12:37:32 ---> job-cost.sh 12:37:32 Setup pyenv: 12:37:32 system 12:37:32 3.8.20 12:37:32 3.9.20 12:37:32 3.10.15 12:37:32 * 3.11.10 (set by /w/workspace/transportpce-tox-verify-transportpce-master/.python-version) 12:37:32 lf-activate-venv(): INFO: Reuse venv:/tmp/venv-WFYk from file:/tmp/.os_lf_venv 12:37:34 lf-activate-venv(): INFO: Installing: zipp==1.1.0 python-openstackclient urllib3~=1.26.15 12:37:41 lf-activate-venv(): INFO: Adding /tmp/venv-WFYk/bin to PATH 12:37:41 INFO: No Stack... 
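Every post-build helper above re-enters the same Python environment: the first lf-activate-venv() call of the job created /tmp/venv-WFYk and saved its path in /tmp/.os_lf_venv, and each later step reuses it and only tops up the packages it needs (lftools, or the job-cost pins). The real helper belongs to the LF releng scripts the job pulls in; the following is only an assumed sketch of the reuse pattern visible in these log lines, not its actual code:

    # Assumed reconstruction of the venv-reuse behaviour seen in the log lines above.
    lf_activate_venv() {
        local venv_file=/tmp/.os_lf_venv
        local venv
        if [ -f "$venv_file" ]; then
            venv=$(cat "$venv_file")
            echo "lf-activate-venv(): INFO: Reuse venv:$venv from file:$venv_file"
        else
            venv=$(mktemp -d /tmp/venv-XXXX)
            echo "lf-activate-venv(): INFO: Creating python3 venv at $venv"
            python3 -m venv "$venv"
            echo "$venv" > "$venv_file"
        fi
        if [ $# -gt 0 ]; then
            echo "lf-activate-venv(): INFO: Installing: $*"
            "$venv/bin/pip" install --quiet --upgrade "$@"
        fi
        export PATH="$venv/bin:$PATH"
        echo "lf-activate-venv(): INFO: Adding $venv/bin to PATH"
    }
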
12:37:41 INFO: Retrieving Pricing Info for: v3-standard-4 12:37:41 INFO: Archiving Costs 12:37:41 [transportpce-tox-verify-transportpce-master] $ /bin/bash -l /tmp/jenkins2653979097283020408.sh 12:37:41 ---> logs-deploy.sh 12:37:41 Setup pyenv: 12:37:41 system 12:37:41 3.8.20 12:37:41 3.9.20 12:37:41 3.10.15 12:37:41 * 3.11.10 (set by /w/workspace/transportpce-tox-verify-transportpce-master/.python-version) 12:37:42 lf-activate-venv(): INFO: Reuse venv:/tmp/venv-WFYk from file:/tmp/.os_lf_venv 12:37:44 lf-activate-venv(): INFO: Installing: lftools 12:37:55 lf-activate-venv(): INFO: Adding /tmp/venv-WFYk/bin to PATH 12:37:55 WARNING: Nexus logging server not set 12:37:55 INFO: S3 path logs/releng/vex-yul-odl-jenkins-1/transportpce-tox-verify-transportpce-master/3331/ 12:37:55 INFO: archiving logs to S3 12:37:57 ---> uname -a: 12:37:57 Linux prd-ubuntu2204-docker-4c-16g-39075 5.15.0-131-generic #141-Ubuntu SMP Fri Jan 10 21:18:28 UTC 2025 x86_64 x86_64 x86_64 GNU/Linux 12:37:57 12:37:57 12:37:57 ---> lscpu: 12:37:57 Architecture: x86_64 12:37:57 CPU op-mode(s): 32-bit, 64-bit 12:37:57 Address sizes: 40 bits physical, 48 bits virtual 12:37:57 Byte Order: Little Endian 12:37:57 CPU(s): 4 12:37:57 On-line CPU(s) list: 0-3 12:37:57 Vendor ID: AuthenticAMD 12:37:57 Model name: AMD EPYC-Rome Processor 12:37:57 CPU family: 23 12:37:57 Model: 49 12:37:57 Thread(s) per core: 1 12:37:57 Core(s) per socket: 1 12:37:57 Socket(s): 4 12:37:57 Stepping: 0 12:37:57 BogoMIPS: 5599.99 12:37:57 Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm rep_good nopl cpuid extd_apicid tsc_known_freq pni pclmulqdq ssse3 fma cx16 sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm cmp_legacy svm cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw topoext perfctr_core ssbd ibrs ibpb stibp vmmcall fsgsbase tsc_adjust bmi1 avx2 smep bmi2 rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 clzero xsaveerptr wbnoinvd arat npt nrip_save umip rdpid arch_capabilities 12:37:57 Virtualization: AMD-V 12:37:57 Hypervisor vendor: KVM 12:37:57 Virtualization type: full 12:37:57 L1d cache: 128 KiB (4 instances) 12:37:57 L1i cache: 128 KiB (4 instances) 12:37:57 L2 cache: 2 MiB (4 instances) 12:37:57 L3 cache: 64 MiB (4 instances) 12:37:57 NUMA node(s): 1 12:37:57 NUMA node0 CPU(s): 0-3 12:37:57 Vulnerability Gather data sampling: Not affected 12:37:57 Vulnerability Itlb multihit: Not affected 12:37:57 Vulnerability L1tf: Not affected 12:37:57 Vulnerability Mds: Not affected 12:37:57 Vulnerability Meltdown: Not affected 12:37:57 Vulnerability Mmio stale data: Not affected 12:37:57 Vulnerability Reg file data sampling: Not affected 12:37:57 Vulnerability Retbleed: Mitigation; untrained return thunk; SMT disabled 12:37:57 Vulnerability Spec rstack overflow: Mitigation; SMT disabled 12:37:57 Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp 12:37:57 Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization 12:37:57 Vulnerability Spectre v2: Mitigation; Retpolines; IBPB conditional; STIBP disabled; RSB filling; PBRSB-eIBRS Not affected; BHI Not affected 12:37:57 Vulnerability Srbds: Not affected 12:37:57 Vulnerability Tsx async abort: Not affected 12:37:57 12:37:57 12:37:57 ---> nproc: 12:37:57 4 12:37:57 12:37:57 12:37:57 ---> df -h: 12:37:57 Filesystem Size Used Avail Use% Mounted on 12:37:57 tmpfs 
1.6G 1.1M 1.6G 1% /run 12:37:57 /dev/vda1 78G 18G 60G 23% / 12:37:57 tmpfs 7.9G 0 7.9G 0% /dev/shm 12:37:57 tmpfs 5.0M 0 5.0M 0% /run/lock 12:37:57 /dev/vda15 105M 6.1M 99M 6% /boot/efi 12:37:57 tmpfs 1.6G 4.0K 1.6G 1% /run/user/1001 12:37:57 12:37:57 12:37:57 ---> free -m: 12:37:57 total used free shared buff/cache available 12:37:57 Mem: 15989 702 4543 3 10743 14944 12:37:57 Swap: 1023 0 1023 12:37:57 12:37:57 12:37:57 ---> ip addr: 12:37:57 1: lo: mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000 12:37:57 link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 12:37:57 inet 127.0.0.1/8 scope host lo 12:37:57 valid_lft forever preferred_lft forever 12:37:57 inet6 ::1/128 scope host 12:37:57 valid_lft forever preferred_lft forever 12:37:57 2: ens3: mtu 1458 qdisc mq state UP group default qlen 1000 12:37:57 link/ether fa:16:3e:01:31:29 brd ff:ff:ff:ff:ff:ff 12:37:57 altname enp0s3 12:37:57 inet 10.30.171.185/23 metric 100 brd 10.30.171.255 scope global dynamic ens3 12:37:57 valid_lft 83938sec preferred_lft 83938sec 12:37:57 inet6 fe80::f816:3eff:fe01:3129/64 scope link 12:37:57 valid_lft forever preferred_lft forever 12:37:57 3: docker0: mtu 1458 qdisc noqueue state DOWN group default 12:37:57 link/ether 02:42:82:62:55:76 brd ff:ff:ff:ff:ff:ff 12:37:57 inet 10.250.0.254/24 brd 10.250.0.255 scope global docker0 12:37:57 valid_lft forever preferred_lft forever 12:37:57 12:37:57 12:37:57 ---> sar -b -r -n DEV: 12:37:57 Linux 5.15.0-131-generic (prd-ubuntu2204-docker-4c-16g-39075) 07/04/25 _x86_64_ (4 CPU) 12:37:57 12:37:57 00:00:08 tps rtps wtps dtps bread/s bwrtn/s bdscd/s 12:37:57 00:10:08 2.16 0.00 2.09 0.07 0.00 35.08 971.17 12:37:57 00:20:08 1.76 0.00 1.75 0.01 0.00 18.51 0.05 12:37:57 00:30:08 1.81 0.00 1.80 0.01 0.00 19.27 0.07 12:37:57 00:40:08 1.76 0.00 1.75 0.01 0.00 18.69 0.05 12:37:57 00:50:08 1.87 0.00 1.86 0.01 0.00 19.91 0.07 12:37:57 01:00:08 1.83 0.00 1.82 0.01 0.00 19.55 0.07 12:37:57 01:10:08 1.75 0.00 1.74 0.01 0.00 18.59 0.05 12:37:57 01:20:08 1.74 0.00 1.74 0.01 0.00 18.60 0.07 12:37:57 01:30:08 1.75 0.00 1.75 0.01 0.00 18.69 0.05 12:37:57 01:40:01 1.74 0.00 1.73 0.01 0.00 18.51 0.04 12:37:57 01:50:08 1.82 0.00 1.82 0.00 0.00 19.13 0.00 12:37:57 02:00:08 1.70 0.00 1.70 0.00 0.00 17.83 0.00 12:37:57 02:10:08 1.75 0.00 1.75 0.00 0.00 18.77 0.00 12:37:57 02:20:08 1.80 0.00 1.80 0.00 0.00 18.95 0.00 12:37:57 02:30:08 1.78 0.00 1.77 0.00 0.00 18.99 0.48 12:37:57 02:40:08 1.70 0.00 1.70 0.00 0.00 18.29 0.00 12:37:57 02:50:08 1.64 0.00 1.64 0.00 0.00 17.49 0.00 12:37:57 03:00:08 1.69 0.00 1.69 0.00 0.00 18.08 0.00 12:37:57 03:10:08 1.61 0.00 1.61 0.00 0.00 17.32 0.00 12:37:57 03:20:08 1.72 0.00 1.72 0.00 0.00 18.16 0.00 12:37:57 03:30:08 1.73 0.00 1.73 0.00 0.00 18.49 0.00 12:37:57 03:40:08 1.65 0.00 1.65 0.00 0.00 17.52 0.00 12:37:57 03:50:08 1.61 0.00 1.61 0.00 0.00 17.33 0.00 12:37:57 04:00:08 1.78 0.00 1.78 0.00 0.00 18.72 0.00 12:37:57 04:10:08 1.70 0.00 1.70 0.00 0.00 17.89 0.00 12:37:57 04:20:08 1.81 0.00 1.81 0.00 0.00 19.11 0.00 12:37:57 04:30:08 1.60 0.00 1.60 0.00 0.00 17.15 0.00 12:37:57 04:40:08 1.85 0.00 1.85 0.00 0.00 19.49 0.00 12:37:57 04:50:08 1.78 0.00 1.78 0.00 0.00 18.85 0.00 12:37:57 05:00:08 1.63 0.00 1.63 0.00 0.00 17.29 0.00 12:37:57 05:10:08 1.80 0.00 1.80 0.00 0.00 18.85 0.00 12:37:57 05:20:08 1.82 0.00 1.82 0.00 0.00 19.08 0.00 12:37:57 05:30:08 1.74 0.00 1.74 0.00 0.00 18.29 0.00 12:37:57 05:40:08 1.80 0.00 1.80 0.00 0.00 19.08 0.00 12:37:57 05:50:08 1.71 0.00 1.71 0.00 0.00 18.09 0.00 12:37:57 06:00:08 1.76 0.00 1.76 0.00 0.00 18.63 0.00 
12:37:57 06:10:08 1.73 0.00 1.73 0.00 0.00 18.41 0.00 12:37:57 06:20:08 1.73 0.00 1.73 0.00 0.00 18.29 0.00 12:37:57 06:30:08 1.82 0.00 1.82 0.00 0.00 19.23 0.00 12:37:57 06:40:08 1.68 0.00 1.68 0.00 0.00 17.64 0.00 12:37:57 06:50:08 1.76 0.00 1.76 0.00 0.00 18.51 0.00 12:37:57 07:00:08 1.80 0.00 1.80 0.00 0.00 18.97 0.00 12:37:57 07:10:08 1.71 0.00 1.71 0.00 0.00 18.00 0.00 12:37:57 07:20:08 1.86 0.00 1.86 0.00 0.00 19.52 0.00 12:37:57 07:30:08 1.75 0.00 1.75 0.00 0.00 18.36 0.00 12:37:57 07:40:08 1.76 0.00 1.76 0.00 0.00 18.52 0.00 12:37:57 07:50:08 1.81 0.00 1.81 0.00 0.00 19.09 0.00 12:37:57 08:00:08 1.67 0.00 1.67 0.00 0.00 17.59 0.00 12:37:57 08:10:08 1.78 0.00 1.78 0.00 0.00 18.84 0.00 12:37:57 08:20:08 1.77 0.00 1.77 0.00 0.00 18.84 0.00 12:37:57 08:30:08 1.74 0.00 1.74 0.00 0.00 18.33 0.00 12:37:57 08:40:08 1.65 0.00 1.65 0.00 0.00 17.83 0.00 12:37:57 08:50:08 1.66 0.00 1.66 0.00 0.00 17.65 0.00 12:37:57 09:00:08 1.63 0.00 1.63 0.00 0.00 17.57 0.00 12:37:57 09:10:08 1.80 0.00 1.80 0.00 0.00 19.15 0.00 12:37:57 09:20:08 1.68 0.00 1.68 0.00 0.00 17.92 0.00 12:37:57 09:30:08 1.70 0.00 1.70 0.00 0.00 18.08 0.00 12:37:57 09:40:08 1.70 0.00 1.70 0.00 0.00 18.36 0.09 12:37:57 09:50:08 1.78 0.00 1.78 0.00 0.00 18.73 0.00 12:37:57 10:00:08 1.68 0.00 1.68 0.00 0.00 17.83 0.00 12:37:57 10:10:08 1.66 0.00 1.66 0.00 0.00 17.73 0.00 12:37:57 10:20:07 1.91 0.00 1.91 0.01 0.00 20.57 0.60 12:37:57 10:30:08 1.70 0.00 1.70 0.00 0.00 18.06 0.00 12:37:57 10:40:08 1.69 0.00 1.69 0.00 0.00 17.73 0.00 12:37:57 10:50:08 1.83 0.00 1.83 0.00 0.36 19.23 0.00 12:37:57 11:00:08 1.62 0.00 1.62 0.00 0.00 17.25 0.00 12:37:57 11:10:08 1.72 0.00 1.72 0.00 0.00 18.28 0.00 12:37:57 11:20:08 1.67 0.00 1.67 0.00 0.00 17.87 0.00 12:37:57 11:30:08 1.77 0.00 1.77 0.00 0.00 18.73 0.00 12:37:57 11:40:08 1.76 0.00 1.76 0.00 0.00 18.47 0.00 12:37:57 11:50:08 106.68 5.12 97.03 4.54 277.31 34946.81 11993.59 12:37:57 12:00:08 35.76 1.62 32.34 1.80 89.29 3736.76 3856.51 12:37:57 12:10:08 20.26 0.07 19.17 1.03 1.73 1665.85 1560.55 12:37:57 12:20:08 7.96 0.07 7.61 0.29 1.29 148.89 238.91 12:37:57 12:30:08 10.40 0.01 9.87 0.53 0.08 1163.89 223.33 12:37:57 Average: 4.04 0.09 3.84 0.11 4.93 572.93 251.28 12:37:57 12:37:57 00:00:08 kbmemfree kbavail kbmemused %memused kbbuffers kbcached kbcommit %commit kbactive kbinact kbdirty 12:37:57 00:10:08 13268780 15496704 508496 3.11 98076 2347856 1150588 6.60 493764 2330836 60 12:37:57 00:20:08 13268780 15496788 508424 3.11 98148 2347868 1150588 6.60 493836 2330864 32 12:37:57 00:30:08 13268528 15496572 508672 3.11 98180 2347876 1150588 6.60 493876 2330908 188 12:37:57 00:40:08 13268528 15496640 508512 3.11 98236 2347892 1150636 6.60 493928 2331040 44 12:37:57 00:50:08 13268304 15496488 508692 3.11 98300 2347900 1150588 6.60 493984 2330896 28 12:37:57 01:00:08 13268056 15496496 508664 3.11 98352 2348096 1150588 6.60 494040 2331108 32 12:37:57 01:10:08 13268056 15496696 508400 3.11 98408 2348236 1150588 6.60 494100 2331252 36 12:37:57 01:20:08 13267804 15496512 508612 3.11 98472 2348240 1150588 6.60 494164 2331324 216 12:37:57 01:30:08 13267580 15496344 508744 3.11 98508 2348256 1150588 6.60 494200 2331280 4 12:37:57 01:40:01 13267580 15496384 508640 3.11 98552 2348256 1150636 6.60 494236 2331480 4 12:37:57 01:50:08 13267116 15496148 508936 3.11 98632 2348396 1150588 6.60 494324 2331424 32 12:37:57 02:00:08 13266364 15495468 509536 3.11 98688 2348408 1150652 6.60 494384 2331580 188 12:37:57 02:10:08 13265608 15494800 510232 3.12 98752 2348420 1150652 6.60 494444 2331452 32 12:37:57 02:20:08 
13265384 15494776 510268 3.12 98816 2348556 1150652 6.60 494512 2331704 48 12:37:57 02:30:08 13265160 15494608 510396 3.12 98864 2348564 1150652 6.60 494556 2332680 28 12:37:57 02:40:08 13265692 15495196 509808 3.11 98892 2348592 1150652 6.60 494584 2331632 196 12:37:57 02:50:08 13266448 15496000 509084 3.11 98940 2348596 1150652 6.60 494632 2331624 4 12:37:57 03:00:08 13265944 15495668 509340 3.11 98972 2348736 1150652 6.60 494664 2331776 32 12:37:57 03:10:08 13265944 15495720 509268 3.11 99020 2348740 1150652 6.60 494712 2331824 272 12:37:57 03:20:08 13265440 15495244 509704 3.11 99048 2348756 1150652 6.60 494744 2332048 188 12:37:57 03:30:08 13265440 15495444 509540 3.11 99112 2348892 1150652 6.60 494804 2331984 32 12:37:57 03:40:08 13265188 15495240 509760 3.11 99148 2348904 1150652 6.60 494840 2331804 40 12:37:57 03:50:08 13264940 15495056 510004 3.11 99200 2348912 1150652 6.60 494892 2332000 228 12:37:57 04:00:08 13266032 15496324 508732 3.11 99240 2349048 1150652 6.60 494932 2332148 28 12:37:57 04:10:08 13265948 15496276 508712 3.11 99268 2349056 1150652 6.60 494960 2332160 32 12:37:57 04:20:08 13265948 15496356 508568 3.11 99336 2349064 1150700 6.60 495028 2332208 0 12:37:57 04:30:08 13265696 15496164 508716 3.11 99384 2349076 1150700 6.60 495076 2332448 244 12:37:57 04:40:08 13264768 15495400 509464 3.11 99420 2349208 1150652 6.60 495108 2332412 12 12:37:57 04:50:08 13264768 15495468 509324 3.11 99468 2349224 1150720 6.61 495160 2332456 28 12:37:57 05:00:08 13264768 15495520 509288 3.11 99512 2349232 1150720 6.61 495204 2332488 220 12:37:57 05:10:08 13264768 15495696 509176 3.11 99552 2349368 1150720 6.61 495244 2332776 36 12:37:57 05:20:08 13264768 15495784 508980 3.11 99600 2349400 1150720 6.61 495292 2332660 44 12:37:57 05:30:08 13264520 15495632 509240 3.11 99672 2349424 1150720 6.61 495364 2332684 32 12:37:57 05:40:08 13264296 15495444 509388 3.11 99696 2349436 1150720 6.61 495388 2332700 24 12:37:57 05:50:08 13262280 15493596 511156 3.12 99732 2349568 1150784 6.61 495424 2332872 16 12:37:57 06:00:08 13262280 15493648 511320 3.12 99772 2349580 1150784 6.61 495464 2332884 32 12:37:57 06:10:08 13262032 15493428 511548 3.12 99792 2349592 1150784 6.61 495484 2332892 28 12:37:57 06:20:08 13262032 15493608 511284 3.12 99832 2349728 1150784 6.61 495524 2333032 28 12:37:57 06:30:08 13262032 15493660 511200 3.12 99868 2349740 1150784 6.61 495560 2333048 16 12:37:57 06:40:08 13261784 15493436 511312 3.12 99876 2349752 1150784 6.61 495568 2333212 32 12:37:57 06:50:08 13261784 15493472 511228 3.12 99908 2349760 1150784 6.61 495600 2333256 172 12:37:57 07:00:08 13261532 15493396 511404 3.12 99940 2349892 1150832 6.61 495632 2333416 20 12:37:57 07:10:08 13261532 15493448 511388 3.12 99984 2349952 1150784 6.61 495676 2333224 24 12:37:57 07:20:08 13261284 15493284 511544 3.12 100008 2349964 1150784 6.61 495700 2333436 184 12:37:57 07:30:08 13261284 15493460 511352 3.12 100048 2350104 1150784 6.61 495744 2333464 36 12:37:57 07:40:08 13261284 15493500 511256 3.12 100084 2350108 1150784 6.61 495776 2333444 32 12:37:57 07:50:08 13261284 15493532 511076 3.12 100108 2350120 1150816 6.61 495800 2333620 32 12:37:57 08:00:08 13261036 15493316 511516 3.12 100128 2350132 1150784 6.61 495820 2333468 28 12:37:57 08:10:08 13260280 15492728 512116 3.13 100164 2350264 1150784 6.61 495856 2333600 32 12:37:57 08:20:08 13260112 15492608 512252 3.13 100196 2350276 1150784 6.61 495888 2333628 52 12:37:57 08:30:08 13259424 15491956 512792 3.13 100228 2350280 1150768 6.61 495920 2333612 28 12:37:57 08:40:08 
13259000 15491712 512968 3.13 100264 2350420 1150768 6.61 495956 2333732 216 12:37:57 08:50:08 13258972 15491768 512840 3.13 100340 2350428 1150784 6.61 496032 2333836 28 12:37:57 09:00:08 13258496 15491324 513380 3.14 100364 2350436 1150784 6.61 496056 2333832 212 12:37:57 09:10:08 13257268 15490268 514432 3.14 100400 2350576 1150784 6.61 496092 2333980 44 12:37:57 09:20:08 13256008 15489052 515600 3.15 100416 2350588 1150784 6.61 496108 2333996 32 12:37:57 09:30:08 13257924 15491012 513600 3.14 100436 2350596 1150784 6.61 496128 2334004 32 12:37:57 09:40:08 13258484 15491624 513044 3.13 100468 2350608 1150816 6.61 496160 2335180 36 12:37:57 09:50:08 13258232 15491540 513224 3.13 100500 2350740 1150784 6.61 496192 2334060 32 12:37:57 10:00:08 13260416 15493784 510952 3.12 100540 2350752 1150784 6.61 496232 2334208 184 12:37:57 10:10:08 13260952 15494344 510400 3.12 100552 2350760 1150784 6.61 496244 2334076 32 12:37:57 10:20:07 13244744 15478244 526452 3.22 100604 2350832 1150784 6.61 496304 2334152 32 12:37:57 10:30:08 13244200 15477980 526644 3.22 100620 2351104 1150784 6.61 496320 2334420 232 12:37:57 10:40:08 13244056 15477876 526732 3.22 100648 2351108 1150784 6.61 496348 2334428 212 12:37:57 10:50:08 13243640 15477596 527092 3.22 100668 2351224 1150784 6.61 496368 2334644 32 12:37:57 11:00:08 13243756 15477748 526900 3.22 100692 2351236 1150784 6.61 496392 2334556 32 12:37:57 11:10:08 13244104 15478268 526264 3.21 100732 2351368 1150784 6.61 496432 2334688 28 12:37:57 11:20:08 13244632 15478832 525752 3.21 100752 2351388 1150784 6.61 496452 2334704 204 12:37:57 11:30:08 13245084 15479316 525280 3.21 100776 2351388 1150784 6.61 496476 2334712 32 12:37:57 11:40:08 13244864 15479216 525320 3.21 100788 2351528 1150784 6.61 496488 2334808 32 12:37:57 11:50:08 3829868 14335868 1587300 9.69 261304 10031128 2379604 13.66 2184500 9593000 70028 12:37:57 12:00:08 4022840 12996252 2925968 17.87 273908 8481644 3652360 20.96 2491924 9084456 284 12:37:57 12:10:08 3604676 12937652 2983432 18.22 283080 8816972 3772632 21.65 2617344 9368504 136 12:37:57 12:20:08 1955452 11292092 4627928 28.27 283292 8820400 5312108 30.49 2619356 10997788 120 12:37:57 12:30:08 1714832 11332824 4586528 28.01 289772 9083468 5291000 30.37 2707744 11134648 124 12:37:57 Average: 12578783 15298151 701135 4.28 111534 2796053 1346108 7.73 630574 2846376 1011 12:37:57 12:37:57 00:00:08 IFACE rxpck/s txpck/s rxkB/s txkB/s rxcmp/s txcmp/s rxmcst/s %ifutil 12:37:57 00:10:08 lo 0.07 0.07 0.00 0.00 0.00 0.00 0.00 0.00 12:37:57 00:10:08 ens3 0.30 0.11 0.07 0.04 0.00 0.00 0.00 0.00 12:37:57 00:10:08 docker0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 12:37:57 00:20:08 lo 0.05 0.05 0.00 0.00 0.00 0.00 0.00 0.00 12:37:57 00:20:08 ens3 0.25 0.10 0.06 0.04 0.00 0.00 0.00 0.00 12:37:57 00:20:08 docker0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 12:37:57 00:30:08 lo 0.07 0.07 0.00 0.00 0.00 0.00 0.00 0.00 12:37:57 00:30:08 ens3 0.49 0.19 0.15 0.10 0.00 0.00 0.00 0.00 12:37:57 00:30:08 docker0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 12:37:57 00:40:08 lo 0.06 0.06 0.00 0.00 0.00 0.00 0.00 0.00 12:37:57 00:40:08 ens3 3.10 0.34 0.60 0.24 0.00 0.00 0.00 0.00 12:37:57 00:40:08 docker0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 12:37:57 00:50:08 lo 0.06 0.06 0.00 0.00 0.00 0.00 0.00 0.00 12:37:57 00:50:08 ens3 1.97 0.19 0.29 0.12 0.00 0.00 0.00 0.00 12:37:57 00:50:08 docker0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 12:37:57 01:00:08 lo 0.07 0.07 0.00 0.00 0.00 0.00 0.00 0.00 12:37:57 01:00:08 ens3 0.64 0.14 0.11 0.06 0.00 0.00 0.00 0.00 12:37:57 01:00:08 
docker0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 12:37:57 01:10:08 lo 0.05 0.05 0.00 0.00 0.00 0.00 0.00 0.00 12:37:57 01:10:08 ens3 0.50 0.13 0.11 0.06 0.00 0.00 0.00 0.00 12:37:57 01:10:08 docker0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 12:37:57 01:20:08 lo 0.07 0.07 0.00 0.00 0.00 0.00 0.00 0.00 12:37:57 01:20:08 ens3 0.40 0.15 0.09 0.06 0.00 0.00 0.00 0.00 12:37:57 01:20:08 docker0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 12:37:57 01:30:08 lo 0.07 0.07 0.00 0.00 0.00 0.00 0.00 0.00 12:37:57 01:30:08 ens3 0.43 0.12 0.10 0.06 0.00 0.00 0.00 0.00 12:37:57 01:30:08 docker0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 12:37:57 01:40:01 lo 0.05 0.05 0.00 0.00 0.00 0.00 0.00 0.00 12:37:57 01:40:01 ens3 0.72 0.21 0.19 0.11 0.00 0.00 0.00 0.00 12:37:57 01:40:01 docker0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 12:37:57 01:50:08 lo 0.07 0.07 0.00 0.00 0.00 0.00 0.00 0.00 12:37:57 01:50:08 ens3 2.55 0.26 0.45 0.17 0.00 0.00 0.00 0.00 12:37:57 01:50:08 docker0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 12:37:57 02:00:08 lo 0.05 0.05 0.00 0.00 0.00 0.00 0.00 0.00 12:37:57 02:00:08 ens3 1.79 0.14 0.23 0.08 0.00 0.00 0.00 0.00 12:37:57 02:00:08 docker0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 12:37:57 02:10:08 lo 0.07 0.07 0.00 0.00 0.00 0.00 0.00 0.00 12:37:57 02:10:08 ens3 0.56 0.10 0.08 0.04 0.00 0.00 0.00 0.00 12:37:57 02:10:08 docker0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 12:37:57 02:20:08 lo 0.07 0.07 0.00 0.00 0.00 0.00 0.00 0.00 12:37:57 02:20:08 ens3 0.39 0.12 0.07 0.04 0.00 0.00 0.00 0.00 12:37:57 02:20:08 docker0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 12:37:57 02:30:08 lo 0.06 0.06 0.00 0.00 0.00 0.00 0.00 0.00 12:37:57 02:30:08 ens3 0.31 0.09 0.08 0.03 0.00 0.00 0.00 0.00 12:37:57 02:30:08 docker0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 12:37:57 02:40:08 lo 0.07 0.07 0.00 0.00 0.00 0.00 0.00 0.00 12:37:57 02:40:08 ens3 0.52 0.17 0.13 0.08 0.00 0.00 0.00 0.00 12:37:57 02:40:08 docker0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 12:37:57 02:50:08 lo 0.05 0.05 0.00 0.00 0.00 0.00 0.00 0.00 12:37:57 02:50:08 ens3 0.49 0.16 0.14 0.08 0.00 0.00 0.00 0.00 12:37:57 02:50:08 docker0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 12:37:57 03:00:08 lo 0.07 0.07 0.00 0.00 0.00 0.00 0.00 0.00 12:37:57 03:00:08 ens3 0.29 0.07 0.04 0.02 0.00 0.00 0.00 0.00 12:37:57 03:00:08 docker0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 12:37:57 03:10:08 lo 0.07 0.07 0.00 0.00 0.00 0.00 0.00 0.00 12:37:57 03:10:08 ens3 0.29 0.08 0.04 0.02 0.00 0.00 0.00 0.00 12:37:57 03:10:08 docker0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 12:37:57 03:20:08 lo 0.05 0.05 0.00 0.00 0.00 0.00 0.00 0.00 12:37:57 03:20:08 ens3 0.28 0.09 0.06 0.04 0.00 0.00 0.00 0.00 12:37:57 03:20:08 docker0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 12:37:57 03:30:08 lo 0.07 0.07 0.00 0.00 0.00 0.00 0.00 0.00 12:37:57 03:30:08 ens3 0.47 0.11 0.09 0.04 0.00 0.00 0.00 0.00 12:37:57 03:30:08 docker0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 12:37:57 03:40:08 lo 0.06 0.06 0.00 0.00 0.00 0.00 0.00 0.00 12:37:57 03:40:08 ens3 2.77 0.20 0.42 0.12 0.00 0.00 0.00 0.00 12:37:57 03:40:08 docker0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 12:37:57 03:50:08 lo 0.06 0.06 0.00 0.00 0.00 0.00 0.00 0.00 12:37:57 03:50:08 ens3 1.45 0.12 0.20 0.06 0.00 0.00 0.00 0.00 12:37:57 03:50:08 docker0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 12:37:57 04:00:08 lo 0.07 0.07 0.00 0.00 0.00 0.00 0.00 0.00 12:37:57 04:00:08 ens3 0.53 0.08 0.06 0.02 0.00 0.00 0.00 0.00 12:37:57 04:00:08 docker0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 12:37:57 04:10:08 lo 0.05 0.05 0.00 0.00 0.00 0.00 0.00 0.00 12:37:57 04:10:08 ens3 
0.38 0.07 0.05 0.02 0.00 0.00 0.00 0.00 12:37:57 04:10:08 docker0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 12:37:57 04:20:08 lo 0.07 0.07 0.00 0.00 0.00 0.00 0.00 0.00 12:37:57 04:20:08 ens3 0.61 0.22 0.19 0.13 0.00 0.00 0.00 0.00 12:37:57 04:20:08 docker0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 12:37:57 04:30:08 lo 0.07 0.07 0.00 0.00 0.00 0.00 0.00 0.00 12:37:57 04:30:08 ens3 0.47 0.16 0.11 0.07 0.00 0.00 0.00 0.00 12:37:57 04:30:08 docker0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 12:37:57 04:40:08 lo 0.05 0.05 0.00 0.00 0.00 0.00 0.00 0.00 12:37:57 04:40:08 ens3 1.62 1.80 0.23 3.13 0.00 0.00 0.00 0.00 12:37:57 04:40:08 docker0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 12:37:57 04:50:08 lo 0.07 0.07 0.00 0.00 0.00 0.00 0.00 0.00 12:37:57 04:50:08 ens3 0.51 0.13 0.13 0.08 0.00 0.00 0.00 0.00 12:37:57 04:50:08 docker0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 12:37:57 05:00:08 lo 0.05 0.05 0.00 0.00 0.00 0.00 0.00 0.00 12:37:57 05:00:08 ens3 0.31 0.10 0.07 0.04 0.00 0.00 0.00 0.00 12:37:57 05:00:08 docker0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 12:37:57 05:10:08 lo 0.07 0.07 0.00 0.00 0.00 0.00 0.00 0.00 12:37:57 05:10:08 ens3 0.26 0.07 0.04 0.02 0.00 0.00 0.00 0.00 12:37:57 05:10:08 docker0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 12:37:57 05:20:08 lo 0.07 0.07 0.00 0.00 0.00 0.00 0.00 0.00 12:37:57 05:20:08 ens3 0.36 0.14 0.09 0.06 0.00 0.00 0.00 0.00 12:37:57 05:20:08 docker0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 12:37:57 05:30:08 lo 0.05 0.05 0.00 0.00 0.00 0.00 0.00 0.00 12:37:57 05:30:08 ens3 0.37 0.12 0.10 0.06 0.00 0.00 0.00 0.00 12:37:57 05:30:08 docker0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 12:37:57 05:40:08 lo 0.07 0.07 0.00 0.00 0.00 0.00 0.00 0.00 12:37:57 05:40:08 ens3 0.33 0.10 0.07 0.04 0.00 0.00 0.00 0.00 12:37:57 05:40:08 docker0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 12:37:57 05:50:08 lo 0.05 0.05 0.00 0.00 0.00 0.00 0.00 0.00 12:37:57 05:50:08 ens3 6.48 4.61 4.34 3.03 0.00 0.00 0.00 0.00 12:37:57 05:50:08 docker0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 12:37:57 06:00:08 lo 0.07 0.07 0.00 0.00 0.00 0.00 0.00 0.00 12:37:57 06:00:08 ens3 0.24 0.07 0.04 0.02 0.00 0.00 0.00 0.00 12:37:57 06:00:08 docker0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 12:37:57 06:10:08 lo 0.07 0.07 0.00 0.00 0.00 0.00 0.00 0.00 12:37:57 06:10:08 ens3 0.32 0.12 0.07 0.04 0.00 0.00 0.00 0.00 12:37:57 06:10:08 docker0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 12:37:57 06:20:08 lo 0.05 0.05 0.00 0.00 0.00 0.00 0.00 0.00 12:37:57 06:20:08 ens3 0.26 0.10 0.06 0.04 0.00 0.00 0.00 0.00 12:37:57 06:20:08 docker0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 12:37:57 06:30:08 lo 0.07 0.07 0.00 0.00 0.00 0.00 0.00 0.00 12:37:57 06:30:08 ens3 0.56 0.22 0.17 0.11 0.00 0.00 0.00 0.00 12:37:57 06:30:08 docker0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 12:37:57 06:40:08 lo 0.06 0.06 0.00 0.00 0.00 0.00 0.00 0.00 12:37:57 06:40:08 ens3 0.34 0.11 0.07 0.04 0.00 0.00 0.00 0.00 12:37:57 06:40:08 docker0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 12:37:57 06:50:08 lo 0.06 0.06 0.00 0.00 0.00 0.00 0.00 0.00 12:37:57 06:50:08 ens3 0.24 0.07 0.04 0.02 0.00 0.00 0.00 0.00 12:37:57 06:50:08 docker0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 12:37:57 07:00:08 lo 0.07 0.07 0.00 0.00 0.00 0.00 0.00 0.00 12:37:57 07:00:08 ens3 0.48 0.16 0.14 0.09 0.00 0.00 0.00 0.00 12:37:57 07:00:08 docker0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 12:37:57 07:10:08 lo 0.05 0.05 0.00 0.00 0.00 0.00 0.00 0.00 12:37:57 07:10:08 ens3 0.65 0.15 0.14 0.08 0.00 0.00 0.00 0.00 12:37:57 07:10:08 docker0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 12:37:57 07:20:08 lo 0.07 
0.07 0.00 0.00 0.00 0.00 0.00 0.00 12:37:57 07:20:08 ens3 0.36 0.13 0.09 0.06 0.00 0.00 0.00 0.00 12:37:57 07:20:08 docker0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 12:37:57 07:30:08 lo 0.07 0.07 0.00 0.00 0.00 0.00 0.00 0.00 12:37:57 07:30:08 ens3 0.40 0.15 0.10 0.06 0.00 0.00 0.00 0.00 12:37:57 07:30:08 docker0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 12:37:57 07:40:08 lo 0.05 0.05 0.00 0.00 0.00 0.00 0.00 0.00 12:37:57 07:40:08 ens3 0.39 0.12 0.10 0.06 0.00 0.00 0.00 0.00 12:37:57 07:40:08 docker0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 12:37:57 07:50:08 lo 0.07 0.07 0.00 0.00 0.00 0.00 0.00 0.00 12:37:57 07:50:08 ens3 0.34 0.12 0.07 0.04 0.00 0.00 0.00 0.00 12:37:57 07:50:08 docker0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 12:37:57 08:00:08 lo 0.05 0.05 0.00 0.00 0.00 0.00 0.00 0.00 12:37:57 08:00:08 ens3 0.23 0.07 0.04 0.02 0.00 0.00 0.00 0.00 12:37:57 08:00:08 docker0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 12:37:57 08:10:08 lo 0.07 0.07 0.00 0.00 0.00 0.00 0.00 0.00 12:37:57 08:10:08 ens3 0.34 0.10 0.07 0.04 0.00 0.00 0.00 0.00 12:37:57 08:10:08 docker0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 12:37:57 08:20:08 lo 0.07 0.07 0.00 0.00 0.00 0.00 0.00 0.00 12:37:57 08:20:08 ens3 0.31 0.10 0.07 0.04 0.00 0.00 0.00 0.00 12:37:57 08:20:08 docker0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 12:37:57 08:30:08 lo 0.05 0.05 0.00 0.00 0.00 0.00 0.00 0.00 12:37:57 08:30:08 ens3 1.39 2.06 0.20 2.95 0.00 0.00 0.00 0.00 12:37:57 08:30:08 docker0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 12:37:57 08:40:08 lo 0.07 0.07 0.00 0.00 0.00 0.00 0.00 0.00 12:37:57 08:40:08 ens3 0.93 1.10 0.11 3.38 0.00 0.00 0.00 0.00 12:37:57 08:40:08 docker0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 12:37:57 08:50:08 lo 0.05 0.05 0.00 0.00 0.00 0.00 0.00 0.00 12:37:57 08:50:08 ens3 0.28 0.17 0.04 0.29 0.00 0.00 0.00 0.00 12:37:57 08:50:08 docker0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 12:37:57 09:00:08 lo 0.07 0.07 0.00 0.00 0.00 0.00 0.00 0.00 12:37:57 09:00:08 ens3 0.34 0.11 0.07 0.04 0.00 0.00 0.00 0.00 12:37:57 09:00:08 docker0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 12:37:57 09:10:08 lo 0.07 0.07 0.00 0.00 0.00 0.00 0.00 0.00 12:37:57 09:10:08 ens3 0.41 0.14 0.10 0.06 0.00 0.00 0.00 0.00 12:37:57 09:10:08 docker0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 12:37:57 09:20:08 lo 0.05 0.05 0.00 0.00 0.00 0.00 0.00 0.00 12:37:57 09:20:08 ens3 0.55 0.21 0.19 0.12 0.00 0.00 0.00 0.00 12:37:57 09:20:08 docker0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 12:37:57 09:30:08 lo 0.07 0.07 0.00 0.00 0.00 0.00 0.00 0.00 12:37:57 09:30:08 ens3 0.34 0.11 0.07 0.04 0.00 0.00 0.00 0.00 12:37:57 09:30:08 docker0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 12:37:57 09:40:08 lo 0.07 0.07 0.01 0.01 0.00 0.00 0.00 0.00 12:37:57 09:40:08 ens3 0.47 0.17 0.15 0.08 0.00 0.00 0.00 0.00 12:37:57 09:40:08 docker0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 12:37:57 09:50:08 lo 0.06 0.06 0.00 0.00 0.00 0.00 0.00 0.00 12:37:57 09:50:08 ens3 0.41 0.11 0.08 0.04 0.00 0.00 0.00 0.00 12:37:57 09:50:08 docker0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 12:37:57 10:00:08 lo 0.07 0.07 0.00 0.00 0.00 0.00 0.00 0.00 12:37:57 10:00:08 ens3 0.24 0.08 0.04 0.02 0.00 0.00 0.00 0.00 12:37:57 10:00:08 docker0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 12:37:57 10:10:08 lo 0.05 0.05 0.00 0.00 0.00 0.00 0.00 0.00 12:37:57 10:10:08 ens3 0.22 0.07 0.04 0.02 0.00 0.00 0.00 0.00 12:37:57 10:10:08 docker0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 12:37:57 10:20:07 lo 0.07 0.07 0.01 0.01 0.00 0.00 0.00 0.00 12:37:57 10:20:07 ens3 0.35 0.19 0.10 0.07 0.00 0.00 0.00 0.00 12:37:57 10:20:07 docker0 0.00 0.00 
0.00 0.00 0.00 0.00 0.00 0.00 12:37:57 10:30:08 lo 0.07 0.07 0.00 0.00 0.00 0.00 0.00 0.00 12:37:57 10:30:08 ens3 0.37 0.13 0.08 0.04 0.00 0.00 0.00 0.00 12:37:57 10:30:08 docker0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 12:37:57 10:40:08 lo 0.05 0.05 0.00 0.00 0.00 0.00 0.00 0.00 12:37:57 10:40:08 ens3 0.45 0.13 0.11 0.06 0.00 0.00 0.00 0.00 12:37:57 10:40:08 docker0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 12:37:57 10:50:08 lo 0.07 0.07 0.00 0.00 0.00 0.00 0.00 0.00 12:37:57 10:50:08 ens3 0.28 0.11 0.07 0.04 0.00 0.00 0.00 0.00 12:37:57 10:50:08 docker0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 12:37:57 11:00:08 lo 0.05 0.05 0.00 0.00 0.00 0.00 0.00 0.00 12:37:57 11:00:08 ens3 0.26 0.08 0.04 0.02 0.00 0.00 0.00 0.00 12:37:57 11:00:08 docker0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 12:37:57 11:10:08 lo 0.07 0.07 0.00 0.00 0.00 0.00 0.00 0.00 12:37:57 11:10:08 ens3 0.48 0.18 0.13 0.08 0.00 0.00 0.00 0.00 12:37:57 11:10:08 docker0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 12:37:57 11:20:08 lo 0.07 0.07 0.00 0.00 0.00 0.00 0.00 0.00 12:37:57 11:20:08 ens3 0.28 0.12 0.06 0.04 0.00 0.00 0.00 0.00 12:37:57 11:20:08 docker0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 12:37:57 11:30:08 lo 0.05 0.05 0.00 0.00 0.00 0.00 0.00 0.00 12:37:57 11:30:08 ens3 0.30 0.10 0.07 0.04 0.00 0.00 0.00 0.00 12:37:57 11:30:08 docker0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 12:37:57 11:40:08 lo 0.07 0.07 0.00 0.00 0.00 0.00 0.00 0.00 12:37:57 11:40:08 ens3 0.56 0.18 0.16 0.10 0.00 0.00 0.00 0.00 12:37:57 11:40:08 docker0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 12:37:57 11:50:08 lo 1.01 1.01 0.10 0.10 0.00 0.00 0.00 0.00 12:37:57 11:50:08 ens3 119.42 91.39 1737.40 14.28 0.00 0.00 0.00 0.00 12:37:57 11:50:08 docker0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 12:37:57 12:00:08 lo 12.59 12.59 10.73 10.73 0.00 0.00 0.00 0.00 12:37:57 12:00:08 ens3 7.06 6.04 1.63 3.78 0.00 0.00 0.00 0.00 12:37:57 12:00:08 docker0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 12:37:57 12:10:08 lo 15.24 15.24 6.76 6.76 0.00 0.00 0.00 0.00 12:37:57 12:10:08 ens3 5.25 4.57 1.18 3.34 0.00 0.00 0.00 0.00 12:37:57 12:10:08 docker0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 12:37:57 12:20:08 lo 22.29 22.29 7.95 7.95 0.00 0.00 0.00 0.00 12:37:57 12:20:08 ens3 0.68 0.57 0.14 0.12 0.00 0.00 0.00 0.00 12:37:57 12:20:08 docker0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 12:37:57 12:30:08 lo 22.23 22.23 10.93 10.93 0.00 0.00 0.00 0.00 12:37:57 12:30:08 ens3 0.86 0.72 0.22 0.18 0.00 0.00 0.00 0.00 12:37:57 12:30:08 docker0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 12:37:57 Average: lo 1.04 1.04 0.49 0.49 0.00 0.00 0.00 0.00 12:37:57 Average: ens3 2.41 1.62 23.37 0.51 0.00 0.00 0.00 0.00 12:37:57 Average: docker0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 12:37:57 12:37:57 12:37:57 ---> sar -P ALL: 12:37:57 Linux 5.15.0-131-generic (prd-ubuntu2204-docker-4c-16g-39075) 07/04/25 _x86_64_ (4 CPU) 12:37:57 12:37:57 00:00:08 CPU %user %nice %system %iowait %steal %idle 12:37:57 00:10:08 all 0.08 0.01 0.02 0.01 0.01 99.86 12:37:57 00:10:08 0 0.01 0.02 0.02 0.01 0.01 99.93 12:37:57 00:10:08 1 0.28 0.00 0.01 0.00 0.01 99.70 12:37:57 00:10:08 2 0.03 0.00 0.03 0.01 0.01 99.91 12:37:57 00:10:08 3 0.02 0.00 0.03 0.04 0.01 99.91 12:37:57 00:20:08 all 0.21 0.00 0.02 0.01 0.01 99.76 12:37:57 00:20:08 0 0.01 0.00 0.02 0.01 0.01 99.96 12:37:57 00:20:08 1 0.77 0.00 0.01 0.00 0.00 99.21 12:37:57 00:20:08 2 0.02 0.00 0.02 0.01 0.01 99.95 12:37:57 00:20:08 3 0.03 0.00 0.03 0.03 0.02 99.89 12:37:57 00:30:08 all 0.03 0.00 0.02 0.01 0.01 99.93 12:37:57 00:30:08 0 0.01 0.00 0.01 0.02 0.00 99.96 12:37:57 
00:30:08 1 0.05 0.00 0.01 0.00 0.01 99.93 12:37:57 00:30:08 2 0.03 0.00 0.02 0.01 0.01 99.93 12:37:57 00:30:08 3 0.02 0.00 0.03 0.01 0.02 99.92 12:37:57 00:40:08 all 0.16 0.00 0.02 0.26 0.01 99.55 12:37:57 00:40:08 0 0.01 0.00 0.01 0.74 0.00 99.24 12:37:57 00:40:08 1 0.18 0.00 0.03 0.00 0.01 99.78 12:37:57 00:40:08 2 0.44 0.00 0.02 0.01 0.01 99.52 12:37:57 00:40:08 3 0.03 0.00 0.03 0.29 0.01 99.64 12:37:57 00:50:08 all 0.15 0.00 0.03 0.01 0.01 99.80 12:37:57 00:50:08 0 0.13 0.00 0.02 0.02 0.01 99.82 12:37:57 00:50:08 1 0.07 0.00 0.03 0.00 0.01 99.89 12:37:57 00:50:08 2 0.15 0.00 0.04 0.01 0.01 99.80 12:37:57 00:50:08 3 0.24 0.00 0.03 0.02 0.01 99.69 12:37:57 01:00:08 all 0.11 0.00 0.02 0.01 0.01 99.85 12:37:57 01:00:08 0 0.02 0.00 0.03 0.00 0.01 99.94 12:37:57 01:00:08 1 0.36 0.00 0.03 0.00 0.01 99.60 12:37:57 01:00:08 2 0.02 0.00 0.03 0.01 0.01 99.93 12:37:57 01:00:08 3 0.02 0.00 0.02 0.03 0.01 99.94 12:37:57 01:10:08 all 0.16 0.00 0.02 0.23 0.01 99.58 12:37:57 01:10:08 0 0.02 0.00 0.03 0.01 0.01 99.94 12:37:57 01:10:08 1 0.58 0.00 0.01 0.00 0.01 99.40 12:37:57 01:10:08 2 0.03 0.00 0.02 0.02 0.01 99.92 12:37:57 01:10:08 3 0.01 0.00 0.02 0.91 0.01 99.05 12:37:57 01:20:08 all 0.14 0.00 0.02 0.01 0.01 99.82 12:37:57 01:20:08 0 0.02 0.00 0.02 0.01 0.01 99.95 12:37:57 01:20:08 1 0.03 0.00 0.04 0.01 0.01 99.91 12:37:57 01:20:08 2 0.01 0.00 0.02 0.01 0.01 99.95 12:37:57 01:20:08 3 0.48 0.00 0.01 0.00 0.01 99.49 12:37:57 01:30:08 all 0.20 0.00 0.02 0.01 0.01 99.76 12:37:57 01:30:08 0 0.01 0.00 0.02 0.02 0.01 99.94 12:37:57 01:30:08 1 0.55 0.00 0.04 0.00 0.01 99.40 12:37:57 01:30:08 2 0.02 0.00 0.03 0.01 0.02 99.93 12:37:57 01:30:08 3 0.22 0.00 0.02 0.00 0.01 99.75 12:37:57 01:40:01 all 0.14 0.00 0.02 0.08 0.01 99.75 12:37:57 01:40:01 0 0.03 0.00 0.03 0.22 0.01 99.70 12:37:57 01:40:01 1 0.50 0.00 0.02 0.00 0.01 99.47 12:37:57 01:40:01 2 0.02 0.00 0.02 0.01 0.01 99.95 12:37:57 01:40:01 3 0.02 0.00 0.02 0.09 0.01 99.87 12:37:57 01:50:08 all 0.09 0.00 0.02 0.01 0.01 99.87 12:37:57 01:50:08 0 0.02 0.00 0.03 0.00 0.01 99.93 12:37:57 01:50:08 1 0.29 0.00 0.02 0.00 0.01 99.69 12:37:57 01:50:08 2 0.02 0.00 0.02 0.01 0.01 99.94 12:37:57 01:50:08 3 0.01 0.00 0.02 0.02 0.01 99.93 12:37:57 12:37:57 01:50:08 CPU %user %nice %system %iowait %steal %idle 12:37:57 02:00:08 all 0.04 0.00 0.02 0.01 0.01 99.92 12:37:57 02:00:08 0 0.04 0.00 0.04 0.02 0.01 99.89 12:37:57 02:00:08 1 0.09 0.00 0.01 0.00 0.01 99.89 12:37:57 02:00:08 2 0.01 0.00 0.02 0.02 0.01 99.94 12:37:57 02:00:08 3 0.01 0.00 0.01 0.01 0.01 99.96 12:37:57 02:10:08 all 0.13 0.00 0.02 0.01 0.01 99.83 12:37:57 02:10:08 0 0.02 0.00 0.04 0.01 0.01 99.92 12:37:57 02:10:08 1 0.49 0.00 0.01 0.00 0.01 99.49 12:37:57 02:10:08 2 0.01 0.00 0.02 0.02 0.01 99.95 12:37:57 02:10:08 3 0.02 0.00 0.01 0.00 0.00 99.96 12:37:57 02:20:08 all 0.14 0.00 0.02 0.01 0.01 99.82 12:37:57 02:20:08 0 0.02 0.00 0.03 0.03 0.01 99.92 12:37:57 02:20:08 1 0.17 0.00 0.01 0.00 0.00 99.81 12:37:57 02:20:08 2 0.02 0.00 0.03 0.01 0.01 99.92 12:37:57 02:20:08 3 0.35 0.00 0.01 0.01 0.00 99.62 12:37:57 02:30:08 all 0.15 0.00 0.02 0.01 0.01 99.82 12:37:57 02:30:08 0 0.52 0.00 0.01 0.00 0.00 99.45 12:37:57 02:30:08 1 0.02 0.00 0.03 0.01 0.01 99.94 12:37:57 02:30:08 2 0.02 0.00 0.02 0.01 0.01 99.94 12:37:57 02:30:08 3 0.02 0.00 0.02 0.02 0.01 99.93 12:37:57 02:40:08 all 0.03 0.00 0.02 0.01 0.01 99.94 12:37:57 02:40:08 0 0.05 0.00 0.01 0.02 0.01 99.92 12:37:57 02:40:08 1 0.02 0.00 0.03 0.00 0.01 99.94 12:37:57 02:40:08 2 0.03 0.00 0.03 0.01 0.02 99.92 12:37:57 02:40:08 3 0.02 0.00 0.01 0.00 0.00 
99.97 12:37:57 02:50:08 all 0.17 0.00 0.02 0.01 0.01 99.79 12:37:57 02:50:08 0 0.58 0.00 0.02 0.02 0.01 99.38 12:37:57 02:50:08 1 0.07 0.00 0.02 0.00 0.01 99.90 12:37:57 02:50:08 2 0.02 0.00 0.03 0.01 0.01 99.94 12:37:57 02:50:08 3 0.02 0.00 0.02 0.00 0.01 99.96 12:37:57 03:00:08 all 0.02 0.00 0.02 0.03 0.01 99.92 12:37:57 03:00:08 0 0.03 0.00 0.02 0.02 0.01 99.93 12:37:57 03:00:08 1 0.02 0.00 0.03 0.07 0.01 99.87 12:37:57 03:00:08 2 0.02 0.00 0.02 0.00 0.01 99.95 12:37:57 03:00:08 3 0.01 0.00 0.02 0.04 0.01 99.93 12:37:57 03:10:08 all 0.02 0.00 0.02 0.01 0.01 99.94 12:37:57 03:10:08 0 0.02 0.00 0.02 0.02 0.01 99.93 12:37:57 03:10:08 1 0.01 0.00 0.02 0.00 0.00 99.96 12:37:57 03:10:08 2 0.02 0.00 0.02 0.01 0.01 99.94 12:37:57 03:10:08 3 0.02 0.00 0.03 0.01 0.02 99.94 12:37:57 03:20:08 all 0.24 0.00 0.02 0.01 0.01 99.72 12:37:57 03:20:08 0 0.01 0.00 0.02 0.02 0.01 99.94 12:37:57 03:20:08 1 0.07 0.00 0.03 0.01 0.01 99.89 12:37:57 03:20:08 2 0.84 0.00 0.02 0.01 0.01 99.12 12:37:57 03:20:08 3 0.02 0.00 0.02 0.00 0.01 99.95 12:37:57 03:30:08 all 0.22 0.00 0.02 0.01 0.01 99.75 12:37:57 03:30:08 0 0.01 0.00 0.02 0.02 0.01 99.94 12:37:57 03:30:08 1 0.82 0.00 0.02 0.00 0.01 99.15 12:37:57 03:30:08 2 0.03 0.00 0.03 0.01 0.01 99.93 12:37:57 03:30:08 3 0.01 0.00 0.01 0.00 0.01 99.97 12:37:57 03:40:08 all 0.12 0.00 0.02 0.02 0.01 99.83 12:37:57 03:40:08 0 0.01 0.00 0.03 0.04 0.01 99.91 12:37:57 03:40:08 1 0.44 0.00 0.02 0.00 0.01 99.52 12:37:57 03:40:08 2 0.02 0.00 0.03 0.00 0.01 99.95 12:37:57 03:40:08 3 0.01 0.00 0.02 0.02 0.01 99.95 12:37:57 12:37:57 03:40:08 CPU %user %nice %system %iowait %steal %idle 12:37:57 03:50:08 all 0.26 0.00 0.02 0.01 0.01 99.70 12:37:57 03:50:08 0 0.02 0.00 0.03 0.02 0.01 99.91 12:37:57 03:50:08 1 0.59 0.00 0.02 0.00 0.01 99.39 12:37:57 03:50:08 2 0.43 0.00 0.02 0.01 0.01 99.53 12:37:57 03:50:08 3 0.01 0.00 0.00 0.00 0.00 99.99 12:37:57 04:00:08 all 0.02 0.00 0.02 0.01 0.01 99.94 12:37:57 04:00:08 0 0.02 0.00 0.03 0.01 0.01 99.93 12:37:57 04:00:08 1 0.02 0.00 0.02 0.02 0.01 99.93 12:37:57 04:00:08 2 0.01 0.00 0.02 0.01 0.01 99.95 12:37:57 04:00:08 3 0.01 0.00 0.01 0.00 0.01 99.97 12:37:57 04:10:08 all 0.08 0.00 0.02 0.01 0.01 99.88 12:37:57 04:10:08 0 0.03 0.00 0.03 0.01 0.01 99.93 12:37:57 04:10:08 1 0.02 0.00 0.02 0.03 0.01 99.93 12:37:57 04:10:08 2 0.24 0.00 0.03 0.01 0.01 99.71 12:37:57 04:10:08 3 0.02 0.00 0.02 0.00 0.01 99.95 12:37:57 04:20:08 all 0.02 0.00 0.03 0.01 0.01 99.93 12:37:57 04:20:08 0 0.02 0.00 0.02 0.00 0.00 99.96 12:37:57 04:20:08 1 0.02 0.00 0.02 0.03 0.01 99.92 12:37:57 04:20:08 2 0.03 0.00 0.02 0.01 0.01 99.94 12:37:57 04:20:08 3 0.03 0.00 0.04 0.02 0.01 99.91 12:37:57 04:30:08 all 0.10 0.00 0.02 0.01 0.01 99.86 12:37:57 04:30:08 0 0.02 0.00 0.01 0.00 0.00 99.97 12:37:57 04:30:08 1 0.01 0.00 0.02 0.02 0.01 99.95 12:37:57 04:30:08 2 0.02 0.00 0.02 0.01 0.01 99.94 12:37:57 04:30:08 3 0.36 0.00 0.02 0.01 0.01 99.60 12:37:57 04:40:08 all 0.04 0.00 0.02 0.01 0.01 99.92 12:37:57 04:40:08 0 0.02 0.00 0.02 0.01 0.01 99.96 12:37:57 04:40:08 1 0.12 0.00 0.02 0.01 0.01 99.84 12:37:57 04:40:08 2 0.02 0.00 0.03 0.02 0.01 99.92 12:37:57 04:40:08 3 0.02 0.00 0.02 0.00 0.01 99.95 12:37:57 04:50:08 all 0.30 0.00 0.02 0.01 0.01 99.65 12:37:57 04:50:08 0 0.02 0.00 0.02 0.02 0.01 99.93 12:37:57 04:50:08 1 0.51 0.00 0.02 0.00 0.01 99.44 12:37:57 04:50:08 2 0.64 0.00 0.02 0.01 0.01 99.32 12:37:57 04:50:08 3 0.02 0.00 0.03 0.00 0.01 99.94 12:37:57 05:00:08 all 0.09 0.00 0.02 0.01 0.01 99.87 12:37:57 05:00:08 0 0.10 0.00 0.01 0.02 0.00 99.87 12:37:57 05:00:08 1 0.19 0.00 
0.02 0.00 0.01 99.77 12:37:57 05:00:08 2 0.02 0.00 0.02 0.01 0.01 99.95 12:37:57 05:00:08 3 0.03 0.00 0.03 0.01 0.02 99.91 12:37:57 05:10:08 all 0.37 0.00 0.02 0.01 0.01 99.58 12:37:57 05:10:08 0 0.66 0.00 0.02 0.01 0.01 99.30 12:37:57 05:10:08 1 0.01 0.00 0.02 0.00 0.01 99.95 12:37:57 05:10:08 2 0.31 0.00 0.02 0.01 0.01 99.65 12:37:57 05:10:08 3 0.51 0.00 0.03 0.02 0.01 99.44 12:37:57 05:20:08 all 0.28 0.00 0.02 0.01 0.01 99.68 12:37:57 05:20:08 0 0.06 0.00 0.04 0.01 0.02 99.88 12:37:57 05:20:08 1 0.37 0.00 0.02 0.00 0.00 99.60 12:37:57 05:20:08 2 0.67 0.00 0.02 0.00 0.01 99.30 12:37:57 05:20:08 3 0.02 0.00 0.02 0.03 0.00 99.93 12:37:57 05:30:08 all 0.36 0.00 0.02 0.01 0.01 99.59 12:37:57 05:30:08 0 0.77 0.00 0.03 0.01 0.01 99.17 12:37:57 05:30:08 1 0.01 0.00 0.02 0.00 0.01 99.97 12:37:57 05:30:08 2 0.59 0.00 0.02 0.01 0.01 99.36 12:37:57 05:30:08 3 0.07 0.00 0.03 0.02 0.01 99.87 12:37:57 12:37:57 05:30:08 CPU %user %nice %system %iowait %steal %idle 12:37:57 05:40:08 all 0.04 0.00 0.02 0.01 0.01 99.92 12:37:57 05:40:08 0 0.01 0.00 0.02 0.01 0.00 99.96 12:37:57 05:40:08 1 0.10 0.00 0.02 0.00 0.01 99.88 12:37:57 05:40:08 2 0.04 0.00 0.03 0.01 0.02 99.91 12:37:57 05:40:08 3 0.02 0.00 0.02 0.02 0.01 99.93 12:37:57 05:50:08 all 0.17 0.00 0.03 0.01 0.01 99.78 12:37:57 05:50:08 0 0.19 0.00 0.04 0.03 0.01 99.73 12:37:57 05:50:08 1 0.46 0.00 0.02 0.00 0.01 99.51 12:37:57 05:50:08 2 0.02 0.00 0.02 0.01 0.01 99.94 12:37:57 05:50:08 3 0.03 0.00 0.03 0.01 0.01 99.93 12:37:57 06:00:08 all 0.19 0.00 0.02 0.01 0.01 99.77 12:37:57 06:00:08 0 0.01 0.00 0.02 0.03 0.00 99.93 12:37:57 06:00:08 1 0.66 0.00 0.03 0.00 0.01 99.30 12:37:57 06:00:08 2 0.09 0.00 0.03 0.01 0.01 99.87 12:37:57 06:00:08 3 0.01 0.00 0.01 0.00 0.01 99.97 12:37:57 06:10:08 all 0.13 0.00 0.02 0.01 0.01 99.83 12:37:57 06:10:08 0 0.01 0.00 0.02 0.02 0.01 99.94 12:37:57 06:10:08 1 0.48 0.00 0.04 0.00 0.01 99.46 12:37:57 06:10:08 2 0.02 0.00 0.01 0.01 0.01 99.95 12:37:57 06:10:08 3 0.02 0.00 0.02 0.00 0.00 99.96 12:37:57 06:20:08 all 0.13 0.00 0.02 0.01 0.01 99.83 12:37:57 06:20:08 0 0.01 0.00 0.02 0.02 0.01 99.95 12:37:57 06:20:08 1 0.12 0.00 0.03 0.01 0.01 99.84 12:37:57 06:20:08 2 0.37 0.00 0.03 0.01 0.01 99.59 12:37:57 06:20:08 3 0.02 0.00 0.01 0.01 0.01 99.96 12:37:57 06:30:08 all 0.02 0.00 0.02 0.01 0.01 99.94 12:37:57 06:30:08 0 0.01 0.00 0.01 0.03 0.00 99.95 12:37:57 06:30:08 1 0.02 0.00 0.03 0.01 0.01 99.93 12:37:57 06:30:08 2 0.03 0.00 0.01 0.00 0.01 99.95 12:37:57 06:30:08 3 0.03 0.00 0.03 0.01 0.01 99.93 12:37:57 06:40:08 all 0.08 0.00 0.02 0.01 0.01 99.88 12:37:57 06:40:08 0 0.01 0.00 0.01 0.02 0.00 99.96 12:37:57 06:40:08 1 0.29 0.00 0.03 0.01 0.01 99.67 12:37:57 06:40:08 2 0.02 0.00 0.03 0.00 0.02 99.93 12:37:57 06:40:08 3 0.02 0.00 0.02 0.01 0.00 99.96 12:37:57 06:50:08 all 0.09 0.00 0.02 0.01 0.01 99.87 12:37:57 06:50:08 0 0.01 0.00 0.03 0.02 0.01 99.94 12:37:57 06:50:08 1 0.30 0.00 0.02 0.00 0.00 99.68 12:37:57 06:50:08 2 0.04 0.00 0.02 0.00 0.01 99.93 12:37:57 06:50:08 3 0.01 0.00 0.01 0.02 0.00 99.95 12:37:57 07:00:08 all 0.33 0.00 0.02 0.01 0.01 99.64 12:37:57 07:00:08 0 0.01 0.00 0.02 0.00 0.01 99.97 12:37:57 07:00:08 1 0.85 0.00 0.01 0.00 0.00 99.13 12:37:57 07:00:08 2 0.43 0.00 0.03 0.01 0.01 99.52 12:37:57 07:00:08 3 0.02 0.00 0.03 0.02 0.01 99.92 12:37:57 07:10:08 all 0.16 0.00 0.02 0.01 0.01 99.81 12:37:57 07:10:08 0 0.01 0.00 0.01 0.00 0.00 99.97 12:37:57 07:10:08 1 0.01 0.00 0.01 0.00 0.01 99.97 12:37:57 07:10:08 2 0.59 0.00 0.03 0.02 0.02 99.33 12:37:57 07:10:08 3 0.02 0.00 0.02 0.01 0.00 99.96 12:37:57 07:20:08 
all 0.17 0.00 0.02 0.01 0.01 99.79 12:37:57 07:20:08 0 0.01 0.00 0.01 0.01 0.00 99.96 12:37:57 07:20:08 1 0.02 0.00 0.02 0.01 0.01 99.95 12:37:57 07:20:08 2 0.63 0.00 0.03 0.02 0.01 99.32 12:37:57 07:20:08 3 0.03 0.00 0.03 0.01 0.01 99.93 12:37:57 12:37:57 07:20:08 CPU %user %nice %system %iowait %steal %idle 12:37:57 07:30:08 all 0.16 0.00 0.02 0.01 0.01 99.80 12:37:57 07:30:08 0 0.01 0.00 0.01 0.02 0.00 99.95 12:37:57 07:30:08 1 0.02 0.00 0.02 0.01 0.01 99.95 12:37:57 07:30:08 2 0.58 0.00 0.05 0.01 0.01 99.35 12:37:57 07:30:08 3 0.02 0.00 0.02 0.00 0.00 99.96 12:37:57 07:40:08 all 0.12 0.00 0.02 0.01 0.01 99.84 12:37:57 07:40:08 0 0.01 0.00 0.01 0.02 0.00 99.96 12:37:57 07:40:08 1 0.02 0.00 0.02 0.00 0.01 99.95 12:37:57 07:40:08 2 0.44 0.00 0.04 0.01 0.02 99.50 12:37:57 07:40:08 3 0.02 0.00 0.02 0.00 0.01 99.96 12:37:57 07:50:08 all 0.19 0.00 0.02 0.01 0.01 99.77 12:37:57 07:50:08 0 0.28 0.00 0.01 0.01 0.01 99.69 12:37:57 07:50:08 1 0.02 0.00 0.02 0.02 0.01 99.94 12:37:57 07:50:08 2 0.45 0.00 0.03 0.01 0.01 99.49 12:37:57 07:50:08 3 0.02 0.00 0.02 0.00 0.01 99.96 12:37:57 08:00:08 all 0.07 0.00 0.02 0.01 0.01 99.88 12:37:57 08:00:08 0 0.23 0.00 0.01 0.01 0.03 99.73 12:37:57 08:00:08 1 0.01 0.00 0.02 0.01 0.00 99.96 12:37:57 08:00:08 2 0.04 0.00 0.05 0.01 0.02 99.88 12:37:57 08:00:08 3 0.01 0.00 0.01 0.01 0.00 99.97 12:37:57 08:10:08 all 0.14 0.00 0.03 0.01 0.01 99.82 12:37:57 08:10:08 0 0.29 0.00 0.03 0.00 0.01 99.68 12:37:57 08:10:08 1 0.01 0.00 0.01 0.00 0.01 99.96 12:37:57 08:10:08 2 0.24 0.00 0.05 0.01 0.01 99.68 12:37:57 08:10:08 3 0.01 0.00 0.01 0.03 0.01 99.95 12:37:57 08:20:08 all 0.29 0.00 0.02 0.01 0.01 99.67 12:37:57 08:20:08 0 1.11 0.00 0.03 0.00 0.01 98.85 12:37:57 08:20:08 1 0.02 0.00 0.03 0.00 0.01 99.95 12:37:57 08:20:08 2 0.02 0.00 0.02 0.01 0.01 99.94 12:37:57 08:20:08 3 0.01 0.00 0.01 0.03 0.00 99.94 12:37:57 08:30:08 all 0.29 0.00 0.02 0.01 0.01 99.67 12:37:57 08:30:08 0 1.09 0.00 0.02 0.00 0.00 98.88 12:37:57 08:30:08 1 0.02 0.00 0.01 0.00 0.00 99.96 12:37:57 08:30:08 2 0.02 0.00 0.03 0.01 0.01 99.93 12:37:57 08:30:08 3 0.03 0.00 0.03 0.02 0.01 99.91 12:37:57 08:40:08 all 0.01 0.00 0.02 0.22 0.12 99.63 12:37:57 08:40:08 0 0.02 0.00 0.01 0.01 0.14 99.82 12:37:57 08:40:08 1 0.02 0.00 0.02 0.84 0.02 99.10 12:37:57 08:40:08 2 0.01 0.00 0.02 0.01 0.16 99.80 12:37:57 08:40:08 3 0.01 0.00 0.03 0.01 0.15 99.80 12:37:57 08:50:08 all 0.24 0.00 0.02 0.01 0.01 99.72 12:37:57 08:50:08 0 0.02 0.00 0.01 0.00 0.00 99.97 12:37:57 08:50:08 1 0.67 0.00 0.02 0.01 0.00 99.29 12:37:57 08:50:08 2 0.02 0.00 0.03 0.02 0.01 99.92 12:37:57 08:50:08 3 0.27 0.00 0.02 0.00 0.01 99.70 12:37:57 09:00:08 all 0.08 0.00 0.02 0.01 0.01 99.88 12:37:57 09:00:08 0 0.01 0.00 0.01 0.00 0.00 99.98 12:37:57 09:00:08 1 0.16 0.00 0.03 0.00 0.01 99.80 12:37:57 09:00:08 2 0.14 0.00 0.02 0.03 0.01 99.80 12:37:57 09:00:08 3 0.02 0.00 0.02 0.01 0.01 99.95 12:37:57 09:10:08 all 0.24 0.00 0.02 0.06 0.01 99.67 12:37:57 09:10:08 0 0.01 0.00 0.02 0.01 0.01 99.96 12:37:57 09:10:08 1 0.05 0.00 0.03 0.01 0.01 99.90 12:37:57 09:10:08 2 0.88 0.00 0.02 0.21 0.01 98.87 12:37:57 09:10:08 3 0.01 0.00 0.03 0.00 0.01 99.95 12:37:57 12:37:57 09:10:08 CPU %user %nice %system %iowait %steal %idle 12:37:57 09:20:08 all 0.20 0.00 0.08 0.01 0.08 99.64 12:37:57 09:20:08 0 0.74 0.00 0.03 0.00 0.02 99.21 12:37:57 09:20:08 1 0.02 0.00 0.03 0.01 0.03 99.91 12:37:57 09:20:08 2 0.02 0.00 0.22 0.01 0.03 99.71 12:37:57 09:20:08 3 0.02 0.00 0.03 0.01 0.22 99.72 12:37:57 09:30:08 all 0.01 0.00 0.02 0.01 0.01 99.95 12:37:57 09:30:08 0 0.01 0.00 0.02 
0.00 0.00 99.96 12:37:57 09:30:08 1 0.01 0.00 0.03 0.03 0.01 99.93 12:37:57 09:30:08 2 0.01 0.00 0.01 0.00 0.01 99.98 12:37:57 09:30:08 3 0.02 0.00 0.03 0.00 0.01 99.94 12:37:57 09:40:08 all 0.15 0.00 0.02 0.01 0.01 99.81 12:37:57 09:40:08 0 0.03 0.00 0.02 0.01 0.01 99.94 12:37:57 09:40:08 1 0.02 0.00 0.03 0.01 0.01 99.93 12:37:57 09:40:08 2 0.01 0.00 0.01 0.01 0.01 99.97 12:37:57 09:40:08 3 0.54 0.00 0.03 0.01 0.01 99.42 12:37:57 09:50:08 all 0.14 0.00 0.02 0.01 0.01 99.82 12:37:57 09:50:08 0 0.02 0.00 0.02 0.01 0.01 99.95 12:37:57 09:50:08 1 0.02 0.00 0.02 0.03 0.01 99.93 12:37:57 09:50:08 2 0.02 0.00 0.03 0.01 0.01 99.94 12:37:57 09:50:08 3 0.50 0.00 0.02 0.00 0.01 99.47 12:37:57 10:00:08 all 0.20 0.00 0.03 0.01 0.01 99.75 12:37:57 10:00:08 0 0.01 0.00 0.02 0.01 0.00 99.96 12:37:57 10:00:08 1 0.02 0.00 0.03 0.02 0.01 99.93 12:37:57 10:00:08 2 0.02 0.00 0.02 0.00 0.01 99.96 12:37:57 10:00:08 3 0.76 0.00 0.05 0.01 0.01 99.18 12:37:57 10:10:08 all 0.04 0.00 0.02 0.01 0.02 99.92 12:37:57 10:10:08 0 0.01 0.00 0.01 0.00 0.01 99.97 12:37:57 10:10:08 1 0.03 0.00 0.04 0.01 0.03 99.90 12:37:57 10:10:08 2 0.09 0.00 0.02 0.01 0.02 99.87 12:37:57 10:10:08 3 0.02 0.00 0.01 0.02 0.02 99.93 12:37:57 10:20:07 all 0.16 0.00 0.03 0.01 0.01 99.80 12:37:57 10:20:07 0 0.02 0.00 0.03 0.01 0.01 99.94 12:37:57 10:20:07 1 0.55 0.00 0.02 0.02 0.01 99.41 12:37:57 10:20:07 2 0.06 0.00 0.03 0.02 0.01 99.88 12:37:57 10:20:07 3 0.02 0.00 0.02 0.00 0.00 99.96 12:37:57 10:30:08 all 0.08 0.00 0.02 0.01 0.01 99.88 12:37:57 10:30:08 0 0.04 0.00 0.02 0.00 0.00 99.94 12:37:57 10:30:08 1 0.27 0.00 0.03 0.01 0.01 99.68 12:37:57 10:30:08 2 0.01 0.00 0.01 0.03 0.01 99.94 12:37:57 10:30:08 3 0.01 0.00 0.02 0.00 0.00 99.97 12:37:57 10:40:08 all 0.39 0.00 0.02 0.01 0.01 99.57 12:37:57 10:40:08 0 0.64 0.00 0.02 0.00 0.00 99.33 12:37:57 10:40:08 1 0.01 0.00 0.02 0.02 0.01 99.94 12:37:57 10:40:08 2 0.90 0.00 0.02 0.01 0.01 99.06 12:37:57 10:40:08 3 0.02 0.00 0.02 0.00 0.01 99.95 12:37:57 10:50:08 all 0.09 0.00 0.02 0.01 0.01 99.87 12:37:57 10:50:08 0 0.01 0.00 0.01 0.01 0.00 99.97 12:37:57 10:50:08 1 0.11 0.00 0.02 0.02 0.01 99.85 12:37:57 10:50:08 2 0.23 0.00 0.03 0.02 0.01 99.71 12:37:57 10:50:08 3 0.02 0.00 0.01 0.00 0.00 99.96 12:37:57 11:00:08 all 0.18 0.00 0.02 0.01 0.01 99.78 12:37:57 11:00:08 0 0.62 0.00 0.02 0.00 0.00 99.36 12:37:57 11:00:08 1 0.08 0.00 0.03 0.01 0.01 99.88 12:37:57 11:00:08 2 0.02 0.00 0.02 0.02 0.01 99.93 12:37:57 11:00:08 3 0.01 0.00 0.02 0.01 0.00 99.97 12:37:57 12:37:57 11:00:08 CPU %user %nice %system %iowait %steal %idle 12:37:57 11:10:08 all 0.18 0.00 0.05 0.16 0.01 99.59 12:37:57 11:10:08 0 0.15 0.00 0.13 0.29 0.01 99.43 12:37:57 11:10:08 1 0.04 0.00 0.03 0.00 0.01 99.93 12:37:57 11:10:08 2 0.53 0.00 0.04 0.34 0.02 99.08 12:37:57 11:10:08 3 0.02 0.00 0.02 0.01 0.02 99.95 12:37:57 11:20:08 all 0.17 0.00 0.02 0.26 0.28 99.26 12:37:57 11:20:08 0 0.02 0.00 0.03 0.00 0.17 99.78 12:37:57 11:20:08 1 0.62 0.00 0.02 1.02 0.38 97.95 12:37:57 11:20:08 2 0.02 0.00 0.02 0.03 0.32 99.61 12:37:57 11:20:08 3 0.01 0.00 0.02 0.00 0.26 99.71 12:37:57 11:30:08 all 0.11 0.00 0.02 0.01 0.01 99.86 12:37:57 11:30:08 0 0.03 0.00 0.03 0.00 0.01 99.92 12:37:57 11:30:08 1 0.10 0.00 0.02 0.00 0.01 99.87 12:37:57 11:30:08 2 0.28 0.00 0.01 0.01 0.01 99.69 12:37:57 11:30:08 3 0.01 0.00 0.02 0.02 0.01 99.94 12:37:57 11:40:08 all 0.10 0.00 0.02 0.01 0.01 99.86 12:37:57 11:40:08 0 0.01 0.00 0.02 0.01 0.01 99.96 12:37:57 11:40:08 1 0.02 0.00 0.02 0.02 0.01 99.94 12:37:57 11:40:08 2 0.37 0.00 0.04 0.00 0.02 99.57 12:37:57 11:40:08 3 
0.01 0.00 0.01 0.02 0.00 99.96 12:37:57 11:50:08 all 30.76 0.00 1.93 0.46 0.06 66.79 12:37:57 11:50:08 0 29.52 0.00 1.66 0.23 0.06 68.53 12:37:57 11:50:08 1 32.74 0.00 2.04 0.49 0.06 64.68 12:37:57 11:50:08 2 31.84 0.00 2.18 0.75 0.06 65.16 12:37:57 11:50:08 3 28.94 0.00 1.86 0.37 0.05 68.78 12:37:57 12:00:08 all 36.93 0.00 1.32 0.16 0.08 61.52 12:37:57 12:00:08 0 35.64 0.00 1.33 0.14 0.09 62.81 12:37:57 12:00:08 1 36.55 0.00 1.41 0.19 0.08 61.78 12:37:57 12:00:08 2 36.75 0.00 1.27 0.17 0.08 61.73 12:37:57 12:00:08 3 38.78 0.00 1.26 0.13 0.08 59.75 12:37:57 12:10:08 all 25.71 0.00 0.98 0.09 0.07 73.16 12:37:57 12:10:08 0 25.33 0.00 0.88 0.16 0.06 73.56 12:37:57 12:10:08 1 25.07 0.00 1.09 0.13 0.07 73.64 12:37:57 12:10:08 2 25.96 0.00 1.05 0.04 0.07 72.88 12:37:57 12:10:08 3 26.48 0.00 0.89 0.01 0.06 72.57 12:37:57 12:20:08 all 13.20 0.00 0.66 0.03 0.06 86.04 12:37:57 12:20:08 0 12.90 0.00 0.68 0.02 0.05 86.35 12:37:57 12:20:08 1 13.21 0.00 0.64 0.07 0.06 86.04 12:37:57 12:20:08 2 13.37 0.00 0.60 0.03 0.06 85.94 12:37:57 12:20:08 3 13.34 0.00 0.74 0.01 0.06 85.85 12:37:57 12:30:08 all 14.31 0.00 0.67 0.06 0.05 84.90 12:37:57 12:30:08 0 14.81 0.00 0.81 0.10 0.06 84.23 12:37:57 12:30:08 1 14.18 0.00 0.64 0.02 0.06 85.11 12:37:57 12:30:08 2 13.79 0.00 0.65 0.05 0.05 85.46 12:37:57 12:30:08 3 14.46 0.00 0.59 0.08 0.05 84.82 12:37:57 Average: all 1.74 0.00 0.10 0.04 0.02 98.11 12:37:57 Average: 0 1.69 0.00 0.09 0.04 0.01 98.17 12:37:57 Average: 1 1.81 0.00 0.10 0.04 0.02 98.03 12:37:57 Average: 2 1.79 0.00 0.10 0.03 0.02 98.06 12:37:57 Average: 3 1.69 0.00 0.09 0.03 0.02 98.16 12:37:57 12:37:57 12:37:57
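The sar -P ALL table above gives the clearest picture of where the wall-clock time went: the node idles at ~99% until the tox run kicks in around 11:45-11:50, then sits between roughly 14% and 38% non-idle for the final five sampling intervals. To extract that programmatically from the sysstat data collected by sysstat.sh instead of eyeballing the dump, an awk one-liner along these lines should work ($SA_FILE is a placeholder for the saved sa data file):

    # Report the busiest sampling interval from the sar CPU data (placeholder input path).
    sar -u -f "$SA_FILE" | awk '
        $1 != "Average:" && $2 == "all" && $NF ~ /^[0-9.]+$/ {
            busy = 100 - $NF                     # last column is %idle
            if (busy > max) { max = busy; when = $1 }
        }
        END { printf "busiest interval: %s (%.1f%% non-idle)\n", when, max }'

On the numbers printed above, that would report the 12:00:08 interval at roughly 38% non-idle.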