06:11:50 Triggered by Gerrit: https://git.opendaylight.org/gerrit/c/transportpce/+/116679 06:11:50 Running as SYSTEM 06:11:50 [EnvInject] - Loading node environment variables. 06:11:50 Building remotely on prd-ubuntu2204-docker-4c-16g-38734 (ubuntu2204-docker-4c-16g) in workspace /w/workspace/transportpce-tox-verify-transportpce-master 06:11:50 [ssh-agent] Looking for ssh-agent implementation... 06:11:50 [ssh-agent] Exec ssh-agent (binary ssh-agent on a remote machine) 06:11:50 $ ssh-agent 06:11:50 SSH_AUTH_SOCK=/tmp/ssh-XXXXXX9i0kq0/agent.1561 06:11:50 SSH_AGENT_PID=1563 06:11:50 [ssh-agent] Started. 06:11:50 Running ssh-add (command line suppressed) 06:11:50 Identity added: /w/workspace/transportpce-tox-verify-transportpce-master@tmp/private_key_9702592141988911233.key (/w/workspace/transportpce-tox-verify-transportpce-master@tmp/private_key_9702592141988911233.key) 06:11:50 [ssh-agent] Using credentials jenkins (jenkins-ssh) 06:11:50 The recommended git tool is: NONE 06:11:52 using credential jenkins-ssh 06:11:52 Wiping out workspace first. 06:11:52 Cloning the remote Git repository 06:11:52 Cloning repository git://devvexx.opendaylight.org/mirror/transportpce 06:11:52 > git init /w/workspace/transportpce-tox-verify-transportpce-master # timeout=10 06:11:52 Fetching upstream changes from git://devvexx.opendaylight.org/mirror/transportpce 06:11:52 > git --version # timeout=10 06:11:52 > git --version # 'git version 2.34.1' 06:11:52 using GIT_SSH to set credentials jenkins-ssh 06:11:52 Verifying host key using known hosts file 06:11:52 You're using 'Known hosts file' strategy to verify ssh host keys, but your known_hosts file does not exist, please go to 'Manage Jenkins' -> 'Security' -> 'Git Host Key Verification Configuration' and configure host key verification. 06:11:52 > git fetch --tags --force --progress -- git://devvexx.opendaylight.org/mirror/transportpce +refs/heads/*:refs/remotes/origin/* # timeout=10 06:11:57 > git config remote.origin.url git://devvexx.opendaylight.org/mirror/transportpce # timeout=10 06:11:57 > git config --add remote.origin.fetch +refs/heads/*:refs/remotes/origin/* # timeout=10 06:11:57 > git config remote.origin.url git://devvexx.opendaylight.org/mirror/transportpce # timeout=10 06:11:57 Fetching upstream changes from git://devvexx.opendaylight.org/mirror/transportpce 06:11:57 using GIT_SSH to set credentials jenkins-ssh 06:11:57 Verifying host key using known hosts file 06:11:57 You're using 'Known hosts file' strategy to verify ssh host keys, but your known_hosts file does not exist, please go to 'Manage Jenkins' -> 'Security' -> 'Git Host Key Verification Configuration' and configure host key verification. 
06:11:57 > git fetch --tags --force --progress -- git://devvexx.opendaylight.org/mirror/transportpce refs/changes/79/116679/10 # timeout=10 06:11:57 > git rev-parse 5aeebea654bcbd9995f1cf028d62ca09ceea44b5^{commit} # timeout=10 06:11:57 JENKINS-19022: warning: possible memory leak due to Git plugin usage; see: https://plugins.jenkins.io/git/#remove-git-plugin-buildsbybranch-builddata-script 06:11:57 Checking out Revision 5aeebea654bcbd9995f1cf028d62ca09ceea44b5 (refs/changes/79/116679/10) 06:11:58 > git config core.sparsecheckout # timeout=10 06:11:58 > git checkout -f 5aeebea654bcbd9995f1cf028d62ca09ceea44b5 # timeout=10 06:12:01 Commit message: "TAPI converting SRG PP used wave length" 06:12:01 > git rev-parse FETCH_HEAD^{commit} # timeout=10 06:12:01 > git rev-list --no-walk 47aea407246dc2de93093d8bf77a3182d178ea26 # timeout=10 06:12:01 > git remote # timeout=10 06:12:01 > git submodule init # timeout=10 06:12:01 > git submodule sync # timeout=10 06:12:01 > git config --get remote.origin.url # timeout=10 06:12:01 > git submodule init # timeout=10 06:12:01 > git config -f .gitmodules --get-regexp ^submodule\.(.+)\.url # timeout=10 06:12:01 ERROR: No submodules found. 06:12:01 provisioning config files... 06:12:01 copy managed file [npmrc] to file:/home/jenkins/.npmrc 06:12:01 copy managed file [pipconf] to file:/home/jenkins/.config/pip/pip.conf 06:12:01 [transportpce-tox-verify-transportpce-master] $ /bin/bash /tmp/jenkins2630449940582640050.sh 06:12:01 ---> python-tools-install.sh 06:12:01 Setup pyenv: 06:12:01 * system (set by /opt/pyenv/version) 06:12:01 * 3.8.20 (set by /opt/pyenv/version) 06:12:01 * 3.9.20 (set by /opt/pyenv/version) 06:12:01 * 3.10.15 (set by /opt/pyenv/version) 06:12:02 * 3.11.10 (set by /opt/pyenv/version) 06:12:06 lf-activate-venv(): INFO: Creating python3 venv at /tmp/venv-M8wM 06:12:06 lf-activate-venv(): INFO: Save venv in file: /tmp/.os_lf_venv 06:12:10 lf-activate-venv(): INFO: Installing: lftools 06:12:35 lf-activate-venv(): INFO: Adding /tmp/venv-M8wM/bin to PATH 06:12:35 Generating Requirements File 06:12:57 Python 3.11.10 06:12:57 pip 25.1.1 from /tmp/venv-M8wM/lib/python3.11/site-packages/pip (python 3.11) 06:12:57 appdirs==1.4.4 06:12:57 argcomplete==3.6.2 06:12:57 aspy.yaml==1.3.0 06:12:57 attrs==25.3.0 06:12:57 autopage==0.5.2 06:12:57 beautifulsoup4==4.13.4 06:12:57 boto3==1.39.0 06:12:57 botocore==1.39.0 06:12:57 bs4==0.0.2 06:12:57 cachetools==5.5.2 06:12:57 certifi==2025.6.15 06:12:57 cffi==1.17.1 06:12:57 cfgv==3.4.0 06:12:57 chardet==5.2.0 06:12:57 charset-normalizer==3.4.2 06:12:57 click==8.2.1 06:12:57 cliff==4.10.0 06:12:57 cmd2==2.7.0 06:12:57 cryptography==3.3.2 06:12:57 debtcollector==3.0.0 06:12:57 decorator==5.2.1 06:12:57 defusedxml==0.7.1 06:12:57 Deprecated==1.2.18 06:12:57 distlib==0.3.9 06:12:57 dnspython==2.7.0 06:12:57 docker==7.1.0 06:12:57 dogpile.cache==1.4.0 06:12:57 durationpy==0.10 06:12:57 email_validator==2.2.0 06:12:57 filelock==3.18.0 06:12:57 future==1.0.0 06:12:57 gitdb==4.0.12 06:12:57 GitPython==3.1.44 06:12:57 google-auth==2.40.3 06:12:57 httplib2==0.22.0 06:12:57 identify==2.6.12 06:12:57 idna==3.10 06:12:57 importlib-resources==1.5.0 06:12:57 iso8601==2.1.0 06:12:57 Jinja2==3.1.6 06:12:57 jmespath==1.0.1 06:12:57 jsonpatch==1.33 06:12:57 jsonpointer==3.0.0 06:12:57 jsonschema==4.24.0 06:12:57 jsonschema-specifications==2025.4.1 06:12:57 keystoneauth1==5.11.1 06:12:57 kubernetes==33.1.0 06:12:57 lftools==0.37.13 06:12:57 lxml==6.0.0 06:12:57 markdown-it-py==3.0.0 06:12:57 MarkupSafe==3.0.2 06:12:57 mdurl==0.1.2 
06:12:57 msgpack==1.1.1 06:12:57 multi_key_dict==2.0.3 06:12:57 munch==4.0.0 06:12:57 netaddr==1.3.0 06:12:57 niet==1.4.2 06:12:57 nodeenv==1.9.1 06:12:57 oauth2client==4.1.3 06:12:57 oauthlib==3.3.1 06:12:57 openstacksdk==4.6.0 06:12:57 os-client-config==2.1.0 06:12:57 os-service-types==1.7.0 06:12:57 osc-lib==4.0.2 06:12:57 oslo.config==9.8.0 06:12:57 oslo.context==6.0.0 06:12:57 oslo.i18n==6.5.1 06:12:57 oslo.log==7.1.0 06:12:57 oslo.serialization==5.7.0 06:12:57 oslo.utils==9.0.0 06:12:57 packaging==25.0 06:12:57 pbr==6.1.1 06:12:57 platformdirs==4.3.8 06:12:57 prettytable==3.16.0 06:12:57 psutil==7.0.0 06:12:57 pyasn1==0.6.1 06:12:57 pyasn1_modules==0.4.2 06:12:57 pycparser==2.22 06:12:57 pygerrit2==2.0.15 06:12:57 PyGithub==2.6.1 06:12:57 Pygments==2.19.2 06:12:57 PyJWT==2.10.1 06:12:57 PyNaCl==1.5.0 06:12:57 pyparsing==2.4.7 06:12:57 pyperclip==1.9.0 06:12:57 pyrsistent==0.20.0 06:12:57 python-cinderclient==9.7.0 06:12:57 python-dateutil==2.9.0.post0 06:12:57 python-heatclient==4.2.0 06:12:57 python-jenkins==1.8.2 06:12:57 python-keystoneclient==5.6.0 06:12:57 python-magnumclient==4.8.1 06:12:57 python-openstackclient==8.1.0 06:12:57 python-swiftclient==4.8.0 06:12:57 PyYAML==6.0.2 06:12:57 referencing==0.36.2 06:12:57 requests==2.32.4 06:12:57 requests-oauthlib==2.0.0 06:12:57 requestsexceptions==1.4.0 06:12:57 rfc3986==2.0.0 06:12:57 rich==14.0.0 06:12:57 rich-argparse==1.7.1 06:12:57 rpds-py==0.25.1 06:12:57 rsa==4.9.1 06:12:57 ruamel.yaml==0.18.14 06:12:57 ruamel.yaml.clib==0.2.12 06:12:57 s3transfer==0.13.0 06:12:57 simplejson==3.20.1 06:12:57 six==1.17.0 06:12:57 smmap==5.0.2 06:12:57 soupsieve==2.7 06:12:57 stevedore==5.4.1 06:12:57 tabulate==0.9.0 06:12:57 toml==0.10.2 06:12:57 tomlkit==0.13.3 06:12:57 tqdm==4.67.1 06:12:57 typing_extensions==4.14.0 06:12:57 tzdata==2025.2 06:12:57 urllib3==1.26.20 06:12:57 virtualenv==20.31.2 06:12:57 wcwidth==0.2.13 06:12:57 websocket-client==1.8.0 06:12:57 wrapt==1.17.2 06:12:57 xdg==6.0.0 06:12:57 xmltodict==0.14.2 06:12:57 yq==3.4.3 06:12:58 [EnvInject] - Injecting environment variables from a build step. 06:12:58 [EnvInject] - Injecting as environment variables the properties content 06:12:58 PYTHON=python3 06:12:58 06:12:58 [EnvInject] - Variables injected successfully. 
06:12:58 [transportpce-tox-verify-transportpce-master] $ /bin/bash -l /tmp/jenkins16940290852326717073.sh 06:12:58 ---> tox-install.sh 06:12:58 + source /home/jenkins/lf-env.sh 06:12:58 + lf-activate-venv --venv-file /tmp/.toxenv tox virtualenv urllib3~=1.26.15 06:12:58 ++ mktemp -d /tmp/venv-XXXX 06:12:58 + lf_venv=/tmp/venv-OVva 06:12:58 + local venv_file=/tmp/.os_lf_venv 06:12:58 + local python=python3 06:12:58 + local options 06:12:58 + local set_path=true 06:12:58 + local install_args= 06:12:58 ++ getopt -o np:v: -l no-path,system-site-packages,python:,venv-file: -n lf-activate-venv -- --venv-file /tmp/.toxenv tox virtualenv urllib3~=1.26.15 06:12:58 + options=' --venv-file '\''/tmp/.toxenv'\'' -- '\''tox'\'' '\''virtualenv'\'' '\''urllib3~=1.26.15'\''' 06:12:58 + eval set -- ' --venv-file '\''/tmp/.toxenv'\'' -- '\''tox'\'' '\''virtualenv'\'' '\''urllib3~=1.26.15'\''' 06:12:58 ++ set -- --venv-file /tmp/.toxenv -- tox virtualenv urllib3~=1.26.15 06:12:58 + true 06:12:58 + case $1 in 06:12:58 + venv_file=/tmp/.toxenv 06:12:58 + shift 2 06:12:58 + true 06:12:58 + case $1 in 06:12:58 + shift 06:12:58 + break 06:12:58 + case $python in 06:12:58 + local pkg_list= 06:12:58 + [[ -d /opt/pyenv ]] 06:12:58 + echo 'Setup pyenv:' 06:12:58 Setup pyenv: 06:12:58 + export PYENV_ROOT=/opt/pyenv 06:12:58 + PYENV_ROOT=/opt/pyenv 06:12:58 + export PATH=/opt/pyenv/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/opt/puppetlabs/bin 06:12:58 + PATH=/opt/pyenv/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/opt/puppetlabs/bin 06:12:58 + pyenv versions 06:12:58 system 06:12:58 3.8.20 06:12:58 3.9.20 06:12:58 3.10.15 06:12:58 * 3.11.10 (set by /w/workspace/transportpce-tox-verify-transportpce-master/.python-version) 06:12:58 + command -v pyenv 06:12:58 ++ pyenv init - --no-rehash 06:12:58 + eval 'PATH="$(bash --norc -ec '\''IFS=:; paths=($PATH); 06:12:58 for i in ${!paths[@]}; do 06:12:58 if [[ ${paths[i]} == "'\'''\''/opt/pyenv/shims'\'''\''" ]]; then unset '\''\'\'''\''paths[i]'\''\'\'''\''; 06:12:58 fi; done; 06:12:58 echo "${paths[*]}"'\'')" 06:12:58 export PATH="/opt/pyenv/shims:${PATH}" 06:12:58 export PYENV_SHELL=bash 06:12:58 source '\''/opt/pyenv/libexec/../completions/pyenv.bash'\'' 06:12:58 pyenv() { 06:12:58 local command 06:12:58 command="${1:-}" 06:12:58 if [ "$#" -gt 0 ]; then 06:12:58 shift 06:12:58 fi 06:12:58 06:12:58 case "$command" in 06:12:58 rehash|shell) 06:12:58 eval "$(pyenv "sh-$command" "$@")" 06:12:58 ;; 06:12:58 *) 06:12:58 command pyenv "$command" "$@" 06:12:58 ;; 06:12:58 esac 06:12:58 }' 06:12:58 +++ bash --norc -ec 'IFS=:; paths=($PATH); 06:12:58 for i in ${!paths[@]}; do 06:12:58 if [[ ${paths[i]} == "/opt/pyenv/shims" ]]; then unset '\''paths[i]'\''; 06:12:58 fi; done; 06:12:58 echo "${paths[*]}"' 06:12:58 ++ PATH=/opt/pyenv/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/opt/puppetlabs/bin 06:12:58 ++ export PATH=/opt/pyenv/shims:/opt/pyenv/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/opt/puppetlabs/bin 06:12:58 ++ PATH=/opt/pyenv/shims:/opt/pyenv/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/opt/puppetlabs/bin 06:12:58 ++ export PYENV_SHELL=bash 06:12:58 ++ PYENV_SHELL=bash 06:12:58 ++ source /opt/pyenv/libexec/../completions/pyenv.bash 06:12:58 +++ complete -F _pyenv pyenv 06:12:58 ++ 
lf-pyver python3 06:12:58 ++ local py_version_xy=python3 06:12:58 ++ local py_version_xyz= 06:12:58 ++ pyenv versions 06:12:58 ++ local command 06:12:58 ++ command=versions 06:12:58 ++ grep -E '^[0-9.]*[0-9]$' 06:12:58 ++ sed 's/^[ *]* //' 06:12:58 ++ '[' 1 -gt 0 ']' 06:12:58 ++ shift 06:12:58 ++ case "$command" in 06:12:58 ++ command pyenv versions 06:12:58 ++ awk '{ print $1 }' 06:12:58 ++ [[ ! -s /tmp/.pyenv_versions ]] 06:12:58 +++ grep '^3' /tmp/.pyenv_versions 06:12:58 +++ sort -V 06:12:58 +++ tail -n 1 06:12:58 ++ py_version_xyz=3.11.10 06:12:58 ++ [[ -z 3.11.10 ]] 06:12:58 ++ echo 3.11.10 06:12:58 ++ return 0 06:12:58 + pyenv local 3.11.10 06:12:58 + local command 06:12:58 + command=local 06:12:58 + '[' 2 -gt 0 ']' 06:12:58 + shift 06:12:58 + case "$command" in 06:12:58 + command pyenv local 3.11.10 06:12:58 + for arg in "$@" 06:12:58 + case $arg in 06:12:58 + pkg_list+='tox ' 06:12:58 + for arg in "$@" 06:12:58 + case $arg in 06:12:58 + pkg_list+='virtualenv ' 06:12:58 + for arg in "$@" 06:12:58 + case $arg in 06:12:58 + pkg_list+='urllib3~=1.26.15 ' 06:12:58 + [[ -f /tmp/.toxenv ]] 06:12:58 + [[ ! -f /tmp/.toxenv ]] 06:12:58 + [[ -n '' ]] 06:12:58 + python3 -m venv /tmp/venv-OVva 06:13:02 + echo 'lf-activate-venv(): INFO: Creating python3 venv at /tmp/venv-OVva' 06:13:02 lf-activate-venv(): INFO: Creating python3 venv at /tmp/venv-OVva 06:13:02 + echo /tmp/venv-OVva 06:13:02 + echo 'lf-activate-venv(): INFO: Save venv in file: /tmp/.toxenv' 06:13:02 lf-activate-venv(): INFO: Save venv in file: /tmp/.toxenv 06:13:02 + /tmp/venv-OVva/bin/python3 -m pip install --upgrade --quiet pip 'setuptools<66' virtualenv 06:13:06 + [[ -z tox virtualenv urllib3~=1.26.15 ]] 06:13:06 + echo 'lf-activate-venv(): INFO: Installing: tox virtualenv urllib3~=1.26.15 ' 06:13:06 lf-activate-venv(): INFO: Installing: tox virtualenv urllib3~=1.26.15 06:13:06 + /tmp/venv-OVva/bin/python3 -m pip install --upgrade --quiet --upgrade-strategy eager tox virtualenv urllib3~=1.26.15 06:13:08 + type python3 06:13:08 + true 06:13:08 + echo 'lf-activate-venv(): INFO: Adding /tmp/venv-OVva/bin to PATH' 06:13:08 lf-activate-venv(): INFO: Adding /tmp/venv-OVva/bin to PATH 06:13:08 + PATH=/tmp/venv-OVva/bin:/opt/pyenv/shims:/opt/pyenv/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/opt/puppetlabs/bin 06:13:08 + return 0 06:13:08 + python3 --version 06:13:08 Python 3.11.10 06:13:08 + python3 -m pip --version 06:13:08 pip 25.1.1 from /tmp/venv-OVva/lib/python3.11/site-packages/pip (python 3.11) 06:13:08 + python3 -m pip freeze 06:13:08 cachetools==6.1.0 06:13:08 chardet==5.2.0 06:13:08 colorama==0.4.6 06:13:08 distlib==0.3.9 06:13:08 filelock==3.18.0 06:13:08 packaging==25.0 06:13:08 platformdirs==4.3.8 06:13:08 pluggy==1.6.0 06:13:08 pyproject-api==1.9.1 06:13:08 tox==4.27.0 06:13:08 urllib3==1.26.20 06:13:08 virtualenv==20.31.2 06:13:08 [transportpce-tox-verify-transportpce-master] $ /bin/sh -xe /tmp/jenkins5224556530827219976.sh 06:13:08 [EnvInject] - Injecting environment variables from a build step. 06:13:09 [EnvInject] - Injecting as environment variables the properties content 06:13:09 PARALLEL=True 06:13:09 06:13:09 [EnvInject] - Variables injected successfully. 
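For reference, the tox bootstrap traced above can be repeated outside the job with the same lf-env.sh helper; a minimal sketch, assuming the /home/jenkins/lf-env.sh helper and the /tmp/.toxenv cache file used by this job:

    #!/bin/bash -l
    # Sketch only: repeat the venv bootstrap that tox-install.sh performs above.
    source /home/jenkins/lf-env.sh            # defines lf-activate-venv()
    # Create (or reuse) the venv recorded in /tmp/.toxenv and install the pinned tools.
    lf-activate-venv --venv-file /tmp/.toxenv tox virtualenv 'urllib3~=1.26.15'
    python3 --version
    python3 -m pip --version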
06:13:09 [transportpce-tox-verify-transportpce-master] $ /bin/bash -l /tmp/jenkins4918402265204865734.sh 06:13:09 ---> tox-run.sh 06:13:09 + PATH=/home/jenkins/.local/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/opt/puppetlabs/bin 06:13:09 + ARCHIVE_TOX_DIR=/w/workspace/transportpce-tox-verify-transportpce-master/archives/tox 06:13:09 + ARCHIVE_DOC_DIR=/w/workspace/transportpce-tox-verify-transportpce-master/archives/docs 06:13:09 + mkdir -p /w/workspace/transportpce-tox-verify-transportpce-master/archives/tox 06:13:09 + cd /w/workspace/transportpce-tox-verify-transportpce-master/. 06:13:09 + source /home/jenkins/lf-env.sh 06:13:09 + lf-activate-venv --venv-file /tmp/.toxenv tox virtualenv urllib3~=1.26.15 06:13:09 ++ mktemp -d /tmp/venv-XXXX 06:13:09 + lf_venv=/tmp/venv-8cft 06:13:09 + local venv_file=/tmp/.os_lf_venv 06:13:09 + local python=python3 06:13:09 + local options 06:13:09 + local set_path=true 06:13:09 + local install_args= 06:13:09 ++ getopt -o np:v: -l no-path,system-site-packages,python:,venv-file: -n lf-activate-venv -- --venv-file /tmp/.toxenv tox virtualenv urllib3~=1.26.15 06:13:09 + options=' --venv-file '\''/tmp/.toxenv'\'' -- '\''tox'\'' '\''virtualenv'\'' '\''urllib3~=1.26.15'\''' 06:13:09 + eval set -- ' --venv-file '\''/tmp/.toxenv'\'' -- '\''tox'\'' '\''virtualenv'\'' '\''urllib3~=1.26.15'\''' 06:13:09 ++ set -- --venv-file /tmp/.toxenv -- tox virtualenv urllib3~=1.26.15 06:13:09 + true 06:13:09 + case $1 in 06:13:09 + venv_file=/tmp/.toxenv 06:13:09 + shift 2 06:13:09 + true 06:13:09 + case $1 in 06:13:09 + shift 06:13:09 + break 06:13:09 + case $python in 06:13:09 + local pkg_list= 06:13:09 + [[ -d /opt/pyenv ]] 06:13:09 + echo 'Setup pyenv:' 06:13:09 Setup pyenv: 06:13:09 + export PYENV_ROOT=/opt/pyenv 06:13:09 + PYENV_ROOT=/opt/pyenv 06:13:09 + export PATH=/opt/pyenv/bin:/home/jenkins/.local/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/opt/puppetlabs/bin 06:13:09 + PATH=/opt/pyenv/bin:/home/jenkins/.local/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/opt/puppetlabs/bin 06:13:09 + pyenv versions 06:13:09 system 06:13:09 3.8.20 06:13:09 3.9.20 06:13:09 3.10.15 06:13:09 * 3.11.10 (set by /w/workspace/transportpce-tox-verify-transportpce-master/.python-version) 06:13:09 + command -v pyenv 06:13:09 ++ pyenv init - --no-rehash 06:13:09 + eval 'PATH="$(bash --norc -ec '\''IFS=:; paths=($PATH); 06:13:09 for i in ${!paths[@]}; do 06:13:09 if [[ ${paths[i]} == "'\'''\''/opt/pyenv/shims'\'''\''" ]]; then unset '\''\'\'''\''paths[i]'\''\'\'''\''; 06:13:09 fi; done; 06:13:09 echo "${paths[*]}"'\'')" 06:13:09 export PATH="/opt/pyenv/shims:${PATH}" 06:13:09 export PYENV_SHELL=bash 06:13:09 source '\''/opt/pyenv/libexec/../completions/pyenv.bash'\'' 06:13:09 pyenv() { 06:13:09 local command 06:13:09 command="${1:-}" 06:13:09 if [ "$#" -gt 0 ]; then 06:13:09 shift 06:13:09 fi 06:13:09 06:13:09 case "$command" in 06:13:09 rehash|shell) 06:13:09 eval "$(pyenv "sh-$command" "$@")" 06:13:09 ;; 06:13:09 *) 06:13:09 command pyenv "$command" "$@" 06:13:09 ;; 06:13:09 esac 06:13:09 }' 06:13:09 +++ bash --norc -ec 'IFS=:; paths=($PATH); 06:13:09 for i in ${!paths[@]}; do 06:13:09 if [[ ${paths[i]} == "/opt/pyenv/shims" ]]; then unset '\''paths[i]'\''; 06:13:09 fi; done; 06:13:09 echo "${paths[*]}"' 06:13:09 ++ 
PATH=/opt/pyenv/bin:/home/jenkins/.local/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/opt/puppetlabs/bin 06:13:09 ++ export PATH=/opt/pyenv/shims:/opt/pyenv/bin:/home/jenkins/.local/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/opt/puppetlabs/bin 06:13:09 ++ PATH=/opt/pyenv/shims:/opt/pyenv/bin:/home/jenkins/.local/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/opt/puppetlabs/bin 06:13:09 ++ export PYENV_SHELL=bash 06:13:09 ++ PYENV_SHELL=bash 06:13:09 ++ source /opt/pyenv/libexec/../completions/pyenv.bash 06:13:09 +++ complete -F _pyenv pyenv 06:13:09 ++ lf-pyver python3 06:13:09 ++ local py_version_xy=python3 06:13:09 ++ local py_version_xyz= 06:13:09 ++ pyenv versions 06:13:09 ++ local command 06:13:09 ++ command=versions 06:13:09 ++ '[' 1 -gt 0 ']' 06:13:09 ++ shift 06:13:09 ++ case "$command" in 06:13:09 ++ command pyenv versions 06:13:09 ++ sed 's/^[ *]* //' 06:13:09 ++ grep -E '^[0-9.]*[0-9]$' 06:13:09 ++ awk '{ print $1 }' 06:13:09 ++ [[ ! -s /tmp/.pyenv_versions ]] 06:13:09 +++ grep '^3' /tmp/.pyenv_versions 06:13:09 +++ sort -V 06:13:09 +++ tail -n 1 06:13:09 ++ py_version_xyz=3.11.10 06:13:09 ++ [[ -z 3.11.10 ]] 06:13:09 ++ echo 3.11.10 06:13:09 ++ return 0 06:13:09 + pyenv local 3.11.10 06:13:09 + local command 06:13:09 + command=local 06:13:09 + '[' 2 -gt 0 ']' 06:13:09 + shift 06:13:09 + case "$command" in 06:13:09 + command pyenv local 3.11.10 06:13:09 + for arg in "$@" 06:13:09 + case $arg in 06:13:09 + pkg_list+='tox ' 06:13:09 + for arg in "$@" 06:13:09 + case $arg in 06:13:09 + pkg_list+='virtualenv ' 06:13:09 + for arg in "$@" 06:13:09 + case $arg in 06:13:09 + pkg_list+='urllib3~=1.26.15 ' 06:13:09 + [[ -f /tmp/.toxenv ]] 06:13:09 ++ cat /tmp/.toxenv 06:13:09 + lf_venv=/tmp/venv-OVva 06:13:09 + echo 'lf-activate-venv(): INFO: Reuse venv:/tmp/venv-OVva from' file:/tmp/.toxenv 06:13:09 lf-activate-venv(): INFO: Reuse venv:/tmp/venv-OVva from file:/tmp/.toxenv 06:13:09 + /tmp/venv-OVva/bin/python3 -m pip install --upgrade --quiet pip 'setuptools<66' virtualenv 06:13:10 + [[ -z tox virtualenv urllib3~=1.26.15 ]] 06:13:10 + echo 'lf-activate-venv(): INFO: Installing: tox virtualenv urllib3~=1.26.15 ' 06:13:10 lf-activate-venv(): INFO: Installing: tox virtualenv urllib3~=1.26.15 06:13:10 + /tmp/venv-OVva/bin/python3 -m pip install --upgrade --quiet --upgrade-strategy eager tox virtualenv urllib3~=1.26.15 06:13:11 + type python3 06:13:11 + true 06:13:11 + echo 'lf-activate-venv(): INFO: Adding /tmp/venv-OVva/bin to PATH' 06:13:11 lf-activate-venv(): INFO: Adding /tmp/venv-OVva/bin to PATH 06:13:11 + PATH=/tmp/venv-OVva/bin:/opt/pyenv/shims:/opt/pyenv/bin:/home/jenkins/.local/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/opt/puppetlabs/bin 06:13:11 + return 0 06:13:11 + [[ -d /opt/pyenv ]] 06:13:11 + echo '---> Setting up pyenv' 06:13:11 ---> Setting up pyenv 06:13:11 + export PYENV_ROOT=/opt/pyenv 06:13:11 + PYENV_ROOT=/opt/pyenv 06:13:11 + export PATH=/opt/pyenv/bin:/tmp/venv-OVva/bin:/opt/pyenv/shims:/opt/pyenv/bin:/home/jenkins/.local/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/opt/puppetlabs/bin 06:13:11 + 
PATH=/opt/pyenv/bin:/tmp/venv-OVva/bin:/opt/pyenv/shims:/opt/pyenv/bin:/home/jenkins/.local/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/opt/puppetlabs/bin 06:13:11 ++ pwd 06:13:11 + PYTHONPATH=/w/workspace/transportpce-tox-verify-transportpce-master 06:13:11 + export PYTHONPATH 06:13:11 + export TOX_TESTENV_PASSENV=PYTHONPATH 06:13:11 + TOX_TESTENV_PASSENV=PYTHONPATH 06:13:11 + tox --version 06:13:11 4.27.0 from /tmp/venv-OVva/lib/python3.11/site-packages/tox/__init__.py 06:13:11 + PARALLEL=True 06:13:11 + TOX_OPTIONS_LIST= 06:13:11 + [[ -n '' ]] 06:13:11 + case ${PARALLEL,,} in 06:13:11 + TOX_OPTIONS_LIST=' --parallel auto --parallel-live' 06:13:11 + tox --parallel auto --parallel-live 06:13:11 + tee -a /w/workspace/transportpce-tox-verify-transportpce-master/archives/tox/tox.log 06:13:13 buildcontroller: install_deps> python -I -m pip install 'setuptools>=7.0' -r /w/workspace/transportpce-tox-verify-transportpce-master/tests/requirements.txt -r /w/workspace/transportpce-tox-verify-transportpce-master/tests/test-requirements.txt 06:13:13 docs: install_deps> python -I -m pip install -r docs/requirements.txt 06:13:13 docs-linkcheck: install_deps> python -I -m pip install -r docs/requirements.txt 06:13:13 checkbashisms: freeze> python -m pip freeze --all 06:13:13 checkbashisms: pip==25.1.1,setuptools==80.3.1 06:13:13 checkbashisms: commands[0] /w/workspace/transportpce-tox-verify-transportpce-master/tests> ./fixCIcentOS8reposMirrors.sh 06:13:13 checkbashisms: commands[1] /w/workspace/transportpce-tox-verify-transportpce-master/tests> sh -c 'command checkbashisms>/dev/null || sudo yum install -y devscripts-checkbashisms || sudo yum install -y devscripts-minimal || sudo yum install -y devscripts || sudo yum install -y https://archives.fedoraproject.org/pub/archive/fedora/linux/releases/31/Everything/x86_64/os/Packages/d/devscripts-checkbashisms-2.19.6-2.fc31.x86_64.rpm || (echo "checkbashisms command not found - please install it (e.g. sudo apt-get install devscripts | yum install devscripts-minimal )" >&2 && exit 1)' 06:13:13 checkbashisms: commands[2] /w/workspace/transportpce-tox-verify-transportpce-master/tests> find . -not -path '*/\.*' -name '*.sh' -exec checkbashisms -f '{}' + 06:13:15 script ./reflectwarn.sh does not appear to have a #! interpreter line; 06:13:15 you may get strange results 06:13:15 checkbashisms: OK ✔ in 3.26 seconds 06:13:15 pre-commit: install_deps> python -I -m pip install pre-commit 06:13:17 pre-commit: freeze> python -m pip freeze --all 06:13:17 pre-commit: cfgv==3.4.0,distlib==0.3.9,filelock==3.18.0,identify==2.6.12,nodeenv==1.9.1,pip==25.1.1,platformdirs==4.3.8,pre_commit==4.2.0,PyYAML==6.0.2,setuptools==80.3.1,virtualenv==20.31.2 06:13:17 pre-commit: commands[0] /w/workspace/transportpce-tox-verify-transportpce-master/tests> ./fixCIcentOS8reposMirrors.sh 06:13:17 pre-commit: commands[1] /w/workspace/transportpce-tox-verify-transportpce-master/tests> sh -c 'which cpan || sudo yum install -y perl-CPAN || (echo "cpan command not found - please install it (e.g. sudo apt-get install perl-modules | yum install perl-CPAN )" >&2 && exit 1)' 06:13:17 /usr/bin/cpan 06:13:17 pre-commit: commands[2] /w/workspace/transportpce-tox-verify-transportpce-master/tests> pre-commit run --all-files --show-diff-on-failure 06:13:18 [WARNING] hook id `remove-tabs` uses deprecated stage names (commit) which will be removed in a future version. run: `pre-commit migrate-config` to automatically fix this. 
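The PARALLEL=True property injected at 06:13:09 is what enables the parallel tox run seen above; a minimal sketch of that mapping as traced in tox-run.sh (only the True branch is visible in the log, so any other accepted spellings are an assumption):

    # Sketch of the option handling traced above in tox-run.sh.
    ARCHIVE_TOX_DIR=/w/workspace/transportpce-tox-verify-transportpce-master/archives/tox
    PARALLEL=True
    TOX_OPTIONS_LIST=''
    case ${PARALLEL,,} in
        true) TOX_OPTIONS_LIST=' --parallel auto --parallel-live' ;;
    esac
    # Run every tox environment and keep a copy of the console output for the job archives.
    tox $TOX_OPTIONS_LIST | tee -a "$ARCHIVE_TOX_DIR/tox.log"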
06:13:18 [WARNING] hook id `perltidy` uses deprecated stage names (commit) which will be removed in a future version. run: `pre-commit migrate-config` to automatically fix this. 06:13:18 [INFO] Initializing environment for https://github.com/pre-commit/pre-commit-hooks. 06:13:18 [WARNING] repo `https://github.com/pre-commit/pre-commit-hooks` uses deprecated stage names (commit, push) which will be removed in a future version. Hint: often `pre-commit autoupdate --repo https://github.com/pre-commit/pre-commit-hooks` will fix this. if it does not -- consider reporting an issue to that repo. 06:13:18 [INFO] Initializing environment for https://github.com/jorisroovers/gitlint. 06:13:18 [INFO] Initializing environment for https://github.com/jorisroovers/gitlint:./gitlint-core[trusted-deps]. 06:13:19 [INFO] Initializing environment for https://github.com/Lucas-C/pre-commit-hooks. 06:13:19 buildcontroller: freeze> python -m pip freeze --all 06:13:19 [INFO] Initializing environment for https://github.com/pre-commit/mirrors-autopep8. 06:13:20 buildcontroller: bcrypt==4.3.0,certifi==2025.6.15,cffi==1.17.1,charset-normalizer==3.4.2,cryptography==45.0.4,dict2xml==1.7.6,idna==3.10,iniconfig==2.1.0,lxml==5.4.0,netconf-client==3.2.0,packaging==25.0,paramiko==3.5.1,pip==25.1.1,pluggy==1.6.0,psutil==7.0.0,pycparser==2.22,Pygments==2.19.2,PyNaCl==1.5.0,pytest==8.4.1,requests==2.32.4,setuptools==80.3.1,urllib3==2.5.0 06:13:20 buildcontroller: commands[0] /w/workspace/transportpce-tox-verify-transportpce-master/tests> ./build_controller.sh 06:13:20 + update-java-alternatives -l 06:13:20 java-1.11.0-openjdk-amd64 1111 /usr/lib/jvm/java-1.11.0-openjdk-amd64 06:13:20 java-1.17.0-openjdk-amd64 1711 /usr/lib/jvm/java-1.17.0-openjdk-amd64 06:13:20 java-1.21.0-openjdk-amd64 2111 /usr/lib/jvm/java-1.21.0-openjdk-amd64 06:13:20 + sudo update-java-alternatives -s java-1.21.0-openjdk-amd64 06:13:20 update-alternatives: error: no alternatives for jaotc 06:13:20 [INFO] Initializing environment for https://github.com/perltidy/perltidy. 06:13:20 update-alternatives: error: no alternatives for rmic 06:13:20 + + sed -n ;s/.* version "\(.*\)\.\(.*\)\..*".*$/\1/p; 06:13:20 java -version 06:13:20 + JAVA_VER=21 06:13:20 + echo 21 06:13:20 21 06:13:20 + sed -n ;s/javac \(.*\)\.\(.*\)\..*.*$/\1/p; 06:13:20 + javac -version 06:13:20 + JAVAC_VER=21 06:13:20 + echo 21 06:13:20 + [ 21 -ge 21 ] 06:13:20 + [ 21 -ge 21 ] 06:13:20 + echo ok, java is 21 or newer 06:13:20 + wget -nv https://dlcdn.apache.org/maven/maven-3/3.9.10/binaries/apache-maven-3.9.10-bin.tar.gz -P /tmp 06:13:20 21 06:13:20 ok, java is 21 or newer 06:13:20 2025-07-01 06:13:20 URL:https://dlcdn.apache.org/maven/maven-3/3.9.10/binaries/apache-maven-3.9.10-bin.tar.gz [8885210/8885210] -> "/tmp/apache-maven-3.9.10-bin.tar.gz" [1] 06:13:20 + sudo mkdir -p /opt 06:13:20 + sudo tar xf /tmp/apache-maven-3.9.10-bin.tar.gz -C /opt 06:13:20 + sudo ln -s /opt/apache-maven-3.9.10 /opt/maven 06:13:20 + sudo ln -s /opt/maven/bin/mvn /usr/bin/mvn 06:13:20 [INFO] Installing environment for https://github.com/pre-commit/pre-commit-hooks. 06:13:20 [INFO] Once installed this environment will be reused. 06:13:20 [INFO] This may take a few minutes... 
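build_controller.sh, traced above, verifies that both java and javac report version 21 or newer before downloading Maven; a minimal sketch of that gate, reusing the sed expressions shown in the trace:

    # Sketch of the Java version gate performed by build_controller.sh above.
    JAVA_VER=$(java -version 2>&1 | sed -n 's/.* version "\(.*\)\.\(.*\)\..*".*$/\1/p')
    JAVAC_VER=$(javac -version 2>&1 | sed -n 's/javac \(.*\)\.\(.*\)\..*.*$/\1/p')
    if [ "$JAVA_VER" -ge 21 ] && [ "$JAVAC_VER" -ge 21 ]; then
        echo "ok, java is $JAVA_VER"
    else
        echo "Java 21 or newer is required" >&2
        exit 1
    fi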
06:13:20 + mvn --version 06:13:21 Apache Maven 3.9.10 (5f519b97e944483d878815739f519b2eade0a91d) 06:13:21 Maven home: /opt/maven 06:13:21 Java version: 21.0.5, vendor: Ubuntu, runtime: /usr/lib/jvm/java-21-openjdk-amd64 06:13:21 Default locale: en, platform encoding: UTF-8 06:13:21 OS name: "linux", version: "5.15.0-131-generic", arch: "amd64", family: "unix" 06:13:21 NOTE: Picked up JDK_JAVA_OPTIONS: 06:13:21 --add-opens=java.base/java.io=ALL-UNNAMED 06:13:21 --add-opens=java.base/java.lang=ALL-UNNAMED 06:13:21 --add-opens=java.base/java.lang.invoke=ALL-UNNAMED 06:13:21 --add-opens=java.base/java.lang.reflect=ALL-UNNAMED 06:13:21 --add-opens=java.base/java.net=ALL-UNNAMED 06:13:21 --add-opens=java.base/java.nio=ALL-UNNAMED 06:13:21 --add-opens=java.base/java.nio.charset=ALL-UNNAMED 06:13:21 --add-opens=java.base/java.nio.file=ALL-UNNAMED 06:13:21 --add-opens=java.base/java.util=ALL-UNNAMED 06:13:21 --add-opens=java.base/java.util.jar=ALL-UNNAMED 06:13:21 --add-opens=java.base/java.util.stream=ALL-UNNAMED 06:13:21 --add-opens=java.base/java.util.zip=ALL-UNNAMED 06:13:21 --add-opens java.base/sun.nio.ch=ALL-UNNAMED 06:13:21 --add-opens java.base/sun.nio.fs=ALL-UNNAMED 06:13:21 -Xlog:disable 06:13:26 [INFO] Installing environment for https://github.com/Lucas-C/pre-commit-hooks. 06:13:26 [INFO] Once installed this environment will be reused. 06:13:26 [INFO] This may take a few minutes... 06:13:33 [INFO] Installing environment for https://github.com/pre-commit/mirrors-autopep8. 06:13:33 [INFO] Once installed this environment will be reused. 06:13:33 [INFO] This may take a few minutes... 06:13:36 docs-linkcheck: freeze> python -m pip freeze --all 06:13:36 docs-linkcheck: alabaster==1.0.0,attrs==25.3.0,babel==2.17.0,blockdiag==3.0.0,certifi==2025.6.15,charset-normalizer==3.4.2,contourpy==1.3.2,cycler==0.12.1,docutils==0.21.2,fonttools==4.58.4,funcparserlib==2.0.0a0,future==1.0.0,idna==3.10,imagesize==1.4.1,Jinja2==3.1.6,jsonschema==3.2.0,kiwisolver==1.4.8,lfdocs-conf==0.9.0,MarkupSafe==3.0.2,matplotlib==3.10.3,numpy==2.3.1,nwdiag==3.0.0,packaging==25.0,pillow==11.2.1,pip==25.1.1,Pygments==2.19.2,pyparsing==3.2.3,pyrsistent==0.20.0,python-dateutil==2.9.0.post0,PyYAML==6.0.2,requests==2.32.4,requests-file==1.5.1,roman-numerals-py==3.1.0,seqdiag==3.0.0,setuptools==80.3.1,six==1.17.0,snowballstemmer==3.0.1,Sphinx==8.2.3,sphinx-bootstrap-theme==0.8.1,sphinx-data-viewer==0.1.5,sphinx-rtd-theme==3.0.2,sphinx-tabs==3.4.7,sphinxcontrib-applehelp==2.0.0,sphinxcontrib-blockdiag==3.0.0,sphinxcontrib-devhelp==2.0.0,sphinxcontrib-htmlhelp==2.1.0,sphinxcontrib-jquery==4.1,sphinxcontrib-jsmath==1.0.1,sphinxcontrib-needs==0.7.9,sphinxcontrib-nwdiag==2.0.0,sphinxcontrib-plantuml==0.30,sphinxcontrib-qthelp==2.0.0,sphinxcontrib-seqdiag==3.0.0,sphinxcontrib-serializinghtml==2.0.0,sphinxcontrib-swaggerdoc==0.1.7,urllib3==2.5.0,webcolors==24.11.1 06:13:36 docs-linkcheck: commands[0] /w/workspace/transportpce-tox-verify-transportpce-master/tests> sphinx-build -q -b linkcheck -d /w/workspace/transportpce-tox-verify-transportpce-master/.tox/docs-linkcheck/tmp/doctrees ../docs/ /w/workspace/transportpce-tox-verify-transportpce-master/docs/_build/linkcheck 06:13:36 docs: freeze> python -m pip freeze --all 06:13:36 docs: 
alabaster==1.0.0,attrs==25.3.0,babel==2.17.0,blockdiag==3.0.0,certifi==2025.6.15,charset-normalizer==3.4.2,contourpy==1.3.2,cycler==0.12.1,docutils==0.21.2,fonttools==4.58.4,funcparserlib==2.0.0a0,future==1.0.0,idna==3.10,imagesize==1.4.1,Jinja2==3.1.6,jsonschema==3.2.0,kiwisolver==1.4.8,lfdocs-conf==0.9.0,MarkupSafe==3.0.2,matplotlib==3.10.3,numpy==2.3.1,nwdiag==3.0.0,packaging==25.0,pillow==11.2.1,pip==25.1.1,Pygments==2.19.2,pyparsing==3.2.3,pyrsistent==0.20.0,python-dateutil==2.9.0.post0,PyYAML==6.0.2,requests==2.32.4,requests-file==1.5.1,roman-numerals-py==3.1.0,seqdiag==3.0.0,setuptools==80.3.1,six==1.17.0,snowballstemmer==3.0.1,Sphinx==8.2.3,sphinx-bootstrap-theme==0.8.1,sphinx-data-viewer==0.1.5,sphinx-rtd-theme==3.0.2,sphinx-tabs==3.4.7,sphinxcontrib-applehelp==2.0.0,sphinxcontrib-blockdiag==3.0.0,sphinxcontrib-devhelp==2.0.0,sphinxcontrib-htmlhelp==2.1.0,sphinxcontrib-jquery==4.1,sphinxcontrib-jsmath==1.0.1,sphinxcontrib-needs==0.7.9,sphinxcontrib-nwdiag==2.0.0,sphinxcontrib-plantuml==0.30,sphinxcontrib-qthelp==2.0.0,sphinxcontrib-seqdiag==3.0.0,sphinxcontrib-serializinghtml==2.0.0,sphinxcontrib-swaggerdoc==0.1.7,urllib3==2.5.0,webcolors==24.11.1 06:13:36 docs: commands[0] /w/workspace/transportpce-tox-verify-transportpce-master/tests> sphinx-build -q -W --keep-going -b html -n -d /w/workspace/transportpce-tox-verify-transportpce-master/.tox/docs/tmp/doctrees ../docs/ /w/workspace/transportpce-tox-verify-transportpce-master/docs/_build/html 06:13:38 [INFO] Installing environment for https://github.com/perltidy/perltidy. 06:13:38 [INFO] Once installed this environment will be reused. 06:13:38 [INFO] This may take a few minutes... 06:13:38 docs: OK ✔ in 26.8 seconds 06:13:38 pylint: install_deps> python -I -m pip install 'pylint>=2.6.0' 06:13:41 docs-linkcheck: OK ✔ in 27.19 seconds 06:13:41 pylint: freeze> python -m pip freeze --all 06:13:42 pylint: astroid==3.3.10,dill==0.4.0,isort==6.0.1,mccabe==0.7.0,pip==25.1.1,platformdirs==4.3.8,pylint==3.3.7,setuptools==80.3.1,tomlkit==0.13.3 06:13:42 pylint: commands[0] /w/workspace/transportpce-tox-verify-transportpce-master/tests> find transportpce_tests/ -name '*.py' -exec pylint --fail-under=10 --max-line-length=120 --disable=missing-docstring,import-error --disable=fixme --disable=duplicate-code '--module-rgx=([a-z0-9_]+$)|([0-9.]{1,30}$)' '--method-rgx=(([a-z_][a-zA-Z0-9_]{2,})|(_[a-z0-9_]*)|(__[a-zA-Z][a-zA-Z0-9_]+__))$' '--variable-rgx=[a-zA-Z_][a-zA-Z0-9_]{1,30}$' '{}' + 06:13:48 trim trailing whitespace.................................................Passed 06:13:48 Tabs remover.............................................................Passed 06:13:48 autopep8.................................................................Passed 06:13:54 perltidy.................................................................Passed 06:13:55 pre-commit: commands[3] /w/workspace/transportpce-tox-verify-transportpce-master/tests> pre-commit run gitlint-ci --hook-stage manual 06:13:55 [WARNING] hook id `remove-tabs` uses deprecated stage names (commit) which will be removed in a future version. run: `pre-commit migrate-config` to automatically fix this. 06:13:55 [WARNING] hook id `perltidy` uses deprecated stage names (commit) which will be removed in a future version. run: `pre-commit migrate-config` to automatically fix this. 06:13:55 [INFO] Installing environment for https://github.com/jorisroovers/gitlint. 06:13:55 [INFO] Once installed this environment will be reused. 06:13:55 [INFO] This may take a few minutes... 
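The pylint environment applies one fixed set of options to every Python file under transportpce_tests/; the same check can be replayed locally from the tests/ directory with the command shown above:

    # Sketch: rerun the pylint gate locally from the tests/ directory.
    find transportpce_tests/ -name '*.py' -exec pylint \
        --fail-under=10 --max-line-length=120 \
        --disable=missing-docstring,import-error --disable=fixme --disable=duplicate-code \
        --module-rgx='([a-z0-9_]+$)|([0-9.]{1,30}$)' \
        --method-rgx='(([a-z_][a-zA-Z0-9_]{2,})|(_[a-z0-9_]*)|(__[a-zA-Z][a-zA-Z0-9_]+__))$' \
        --variable-rgx='[a-zA-Z_][a-zA-Z0-9_]{1,30}$' \
        '{}' +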
06:14:04 gitlint..................................................................Passed 06:14:07 06:14:07 ------------------------------------ 06:14:07 Your code has been rated at 10.00/10 06:14:07 06:15:05 pre-commit: OK ✔ in 49.03 seconds 06:15:05 pylint: OK ✔ in 30.07 seconds 06:15:05 buildcontroller: OK ✔ in 1 minute 52.71 seconds 06:15:05 build_karaf_tests121: install_deps> python -I -m pip install 'setuptools>=7.0' -r /w/workspace/transportpce-tox-verify-transportpce-master/tests/requirements.txt -r /w/workspace/transportpce-tox-verify-transportpce-master/tests/test-requirements.txt 06:15:05 sims: install_deps> python -I -m pip install 'setuptools>=7.0' -r /w/workspace/transportpce-tox-verify-transportpce-master/tests/requirements.txt -r /w/workspace/transportpce-tox-verify-transportpce-master/tests/test-requirements.txt 06:15:05 build_karaf_tests221: install_deps> python -I -m pip install 'setuptools>=7.0' -r /w/workspace/transportpce-tox-verify-transportpce-master/tests/requirements.txt -r /w/workspace/transportpce-tox-verify-transportpce-master/tests/test-requirements.txt 06:15:05 testsPCE: install_deps> python -I -m pip install gnpy4tpce==2.4.7 'setuptools>=7.0' -r /w/workspace/transportpce-tox-verify-transportpce-master/tests/requirements.txt -r /w/workspace/transportpce-tox-verify-transportpce-master/tests/test-requirements.txt 06:15:11 sims: freeze> python -m pip freeze --all 06:15:11 build_karaf_tests221: freeze> python -m pip freeze --all 06:15:11 build_karaf_tests121: freeze> python -m pip freeze --all 06:15:11 sims: bcrypt==4.3.0,certifi==2025.6.15,cffi==1.17.1,charset-normalizer==3.4.2,cryptography==45.0.4,dict2xml==1.7.6,idna==3.10,iniconfig==2.1.0,lxml==5.4.0,netconf-client==3.2.0,packaging==25.0,paramiko==3.5.1,pip==25.1.1,pluggy==1.6.0,psutil==7.0.0,pycparser==2.22,Pygments==2.19.2,PyNaCl==1.5.0,pytest==8.4.1,requests==2.32.4,setuptools==80.3.1,urllib3==2.5.0 06:15:11 sims: commands[0] /w/workspace/transportpce-tox-verify-transportpce-master/tests> ./install_lightynode.sh 06:15:11 Using lighynode version 20.1.0.5 06:15:11 Installing lightynode device to ./lightynode/lightynode-openroadm-device directory 06:15:11 build_karaf_tests221: bcrypt==4.3.0,certifi==2025.6.15,cffi==1.17.1,charset-normalizer==3.4.2,cryptography==45.0.4,dict2xml==1.7.6,idna==3.10,iniconfig==2.1.0,lxml==5.4.0,netconf-client==3.2.0,packaging==25.0,paramiko==3.5.1,pip==25.1.1,pluggy==1.6.0,psutil==7.0.0,pycparser==2.22,Pygments==2.19.2,PyNaCl==1.5.0,pytest==8.4.1,requests==2.32.4,setuptools==80.3.1,urllib3==2.5.0 06:15:11 build_karaf_tests221: commands[0] /w/workspace/transportpce-tox-verify-transportpce-master/tests> ./build_karaf_for_tests.sh 06:15:11 build_karaf_tests121: bcrypt==4.3.0,certifi==2025.6.15,cffi==1.17.1,charset-normalizer==3.4.2,cryptography==45.0.4,dict2xml==1.7.6,idna==3.10,iniconfig==2.1.0,lxml==5.4.0,netconf-client==3.2.0,packaging==25.0,paramiko==3.5.1,pip==25.1.1,pluggy==1.6.0,psutil==7.0.0,pycparser==2.22,Pygments==2.19.2,PyNaCl==1.5.0,pytest==8.4.1,requests==2.32.4,setuptools==80.3.1,urllib3==2.5.0 06:15:11 build_karaf_tests121: commands[0] /w/workspace/transportpce-tox-verify-transportpce-master/tests> ./build_karaf_for_tests.sh 06:15:11 NOTE: Picked up JDK_JAVA_OPTIONS: 06:15:11 --add-opens=java.base/java.io=ALL-UNNAMED 06:15:11 --add-opens=java.base/java.lang=ALL-UNNAMED 06:15:11 --add-opens=java.base/java.lang.invoke=ALL-UNNAMED 06:15:11 --add-opens=java.base/java.lang.reflect=ALL-UNNAMED 06:15:11 --add-opens=java.base/java.net=ALL-UNNAMED 06:15:11 
--add-opens=java.base/java.nio=ALL-UNNAMED 06:15:11 --add-opens=java.base/java.nio.charset=ALL-UNNAMED 06:15:11 --add-opens=java.base/java.nio.file=ALL-UNNAMED 06:15:11 --add-opens=java.base/java.util=ALL-UNNAMED 06:15:11 --add-opens=java.base/java.util.jar=ALL-UNNAMED 06:15:11 --add-opens=java.base/java.util.stream=ALL-UNNAMED 06:15:11 --add-opens=java.base/java.util.zip=ALL-UNNAMED 06:15:11 --add-opens java.base/sun.nio.ch=ALL-UNNAMED 06:15:11 --add-opens java.base/sun.nio.fs=ALL-UNNAMED 06:15:11 -Xlog:disable 06:15:11 NOTE: Picked up JDK_JAVA_OPTIONS: 06:15:11 --add-opens=java.base/java.io=ALL-UNNAMED 06:15:11 --add-opens=java.base/java.lang=ALL-UNNAMED 06:15:11 --add-opens=java.base/java.lang.invoke=ALL-UNNAMED 06:15:11 --add-opens=java.base/java.lang.reflect=ALL-UNNAMED 06:15:11 --add-opens=java.base/java.net=ALL-UNNAMED 06:15:11 --add-opens=java.base/java.nio=ALL-UNNAMED 06:15:11 --add-opens=java.base/java.nio.charset=ALL-UNNAMED 06:15:11 --add-opens=java.base/java.nio.file=ALL-UNNAMED 06:15:11 --add-opens=java.base/java.util=ALL-UNNAMED 06:15:11 --add-opens=java.base/java.util.jar=ALL-UNNAMED 06:15:11 --add-opens=java.base/java.util.stream=ALL-UNNAMED 06:15:11 --add-opens=java.base/java.util.zip=ALL-UNNAMED 06:15:11 --add-opens java.base/sun.nio.ch=ALL-UNNAMED 06:15:11 --add-opens java.base/sun.nio.fs=ALL-UNNAMED 06:15:11 -Xlog:disable 06:15:15 sims: OK ✔ in 10.47 seconds 06:15:15 build_karaf_tests71: install_deps> python -I -m pip install 'setuptools>=7.0' -r /w/workspace/transportpce-tox-verify-transportpce-master/tests/requirements.txt -r /w/workspace/transportpce-tox-verify-transportpce-master/tests/test-requirements.txt 06:15:28 build_karaf_tests71: freeze> python -m pip freeze --all 06:15:29 build_karaf_tests71: bcrypt==4.3.0,certifi==2025.6.15,cffi==1.17.1,charset-normalizer==3.4.2,cryptography==45.0.4,dict2xml==1.7.6,idna==3.10,iniconfig==2.1.0,lxml==5.4.0,netconf-client==3.2.0,packaging==25.0,paramiko==3.5.1,pip==25.1.1,pluggy==1.6.0,psutil==7.0.0,pycparser==2.22,Pygments==2.19.2,PyNaCl==1.5.0,pytest==8.4.1,requests==2.32.4,setuptools==80.3.1,urllib3==2.5.0 06:15:29 build_karaf_tests71: commands[0] /w/workspace/transportpce-tox-verify-transportpce-master/tests> ./build_karaf_for_tests.sh 06:15:29 NOTE: Picked up JDK_JAVA_OPTIONS: 06:15:29 --add-opens=java.base/java.io=ALL-UNNAMED 06:15:29 --add-opens=java.base/java.lang=ALL-UNNAMED 06:15:29 --add-opens=java.base/java.lang.invoke=ALL-UNNAMED 06:15:29 --add-opens=java.base/java.lang.reflect=ALL-UNNAMED 06:15:29 --add-opens=java.base/java.net=ALL-UNNAMED 06:15:29 --add-opens=java.base/java.nio=ALL-UNNAMED 06:15:29 --add-opens=java.base/java.nio.charset=ALL-UNNAMED 06:15:29 --add-opens=java.base/java.nio.file=ALL-UNNAMED 06:15:29 --add-opens=java.base/java.util=ALL-UNNAMED 06:15:29 --add-opens=java.base/java.util.jar=ALL-UNNAMED 06:15:29 --add-opens=java.base/java.util.stream=ALL-UNNAMED 06:15:29 --add-opens=java.base/java.util.zip=ALL-UNNAMED 06:15:29 --add-opens java.base/sun.nio.ch=ALL-UNNAMED 06:15:29 --add-opens java.base/sun.nio.fs=ALL-UNNAMED 06:15:29 -Xlog:disable 06:15:59 build_karaf_tests121: OK ✔ in 53.85 seconds 06:15:59 build_karaf_tests190: install_deps> python -I -m pip install 'setuptools>=7.0' -r /w/workspace/transportpce-tox-verify-transportpce-master/tests/requirements.txt -r /w/workspace/transportpce-tox-verify-transportpce-master/tests/test-requirements.txt 06:15:59 build_karaf_tests221: OK ✔ in 54.77 seconds 06:15:59 build_karaf_tests_hybrid: install_deps> python -I -m pip install 'setuptools>=7.0' -r 
/w/workspace/transportpce-tox-verify-transportpce-master/tests/requirements.txt -r /w/workspace/transportpce-tox-verify-transportpce-master/tests/test-requirements.txt 06:16:07 build_karaf_tests71: OK ✔ in 49.06 seconds 06:16:07 build_karaf_tests190: freeze> python -m pip freeze --all 06:16:07 build_karaf_tests_hybrid: freeze> python -m pip freeze --all 06:16:07 build_karaf_tests190: bcrypt==4.3.0,certifi==2025.6.15,cffi==1.17.1,charset-normalizer==3.4.2,cryptography==45.0.4,dict2xml==1.7.6,idna==3.10,iniconfig==2.1.0,lxml==5.4.0,netconf-client==3.2.0,packaging==25.0,paramiko==3.5.1,pip==25.1.1,pluggy==1.6.0,psutil==7.0.0,pycparser==2.22,Pygments==2.19.2,PyNaCl==1.5.0,pytest==8.4.1,requests==2.32.4,setuptools==80.3.1,urllib3==2.5.0 06:16:07 build_karaf_tests190: commands[0] /w/workspace/transportpce-tox-verify-transportpce-master/tests> ./build_karaf_for_tests.sh 06:16:07 build_karaf_tests_hybrid: bcrypt==4.3.0,certifi==2025.6.15,cffi==1.17.1,charset-normalizer==3.4.2,cryptography==45.0.4,dict2xml==1.7.6,idna==3.10,iniconfig==2.1.0,lxml==5.4.0,netconf-client==3.2.0,packaging==25.0,paramiko==3.5.1,pip==25.1.1,pluggy==1.6.0,psutil==7.0.0,pycparser==2.22,Pygments==2.19.2,PyNaCl==1.5.0,pytest==8.4.1,requests==2.32.4,setuptools==80.3.1,urllib3==2.5.0 06:16:07 build_karaf_tests_hybrid: commands[0] /w/workspace/transportpce-tox-verify-transportpce-master/tests> ./build_karaf_for_tests.sh 06:16:07 NOTE: Picked up JDK_JAVA_OPTIONS: 06:16:07 --add-opens=java.base/java.io=ALL-UNNAMED 06:16:07 --add-opens=java.base/java.lang=ALL-UNNAMED 06:16:07 --add-opens=java.base/java.lang.invoke=ALL-UNNAMED 06:16:07 --add-opens=java.base/java.lang.reflect=ALL-UNNAMED 06:16:07 --add-opens=java.base/java.net=ALL-UNNAMED 06:16:07 --add-opens=java.base/java.nio=ALL-UNNAMED 06:16:07 --add-opens=java.base/java.nio.charset=ALL-UNNAMED 06:16:07 --add-opens=java.base/java.nio.file=ALL-UNNAMED 06:16:07 --add-opens=java.base/java.util=ALL-UNNAMED 06:16:07 --add-opens=java.base/java.util.jar=ALL-UNNAMED 06:16:07 --add-opens=java.base/java.util.stream=ALL-UNNAMED 06:16:07 --add-opens=java.base/java.util.zip=ALL-UNNAMED 06:16:07 --add-opens java.base/sun.nio.ch=ALL-UNNAMED 06:16:07 --add-opens java.base/sun.nio.fs=ALL-UNNAMED 06:16:07 -Xlog:disable 06:16:07 testsPCE: freeze> python -m pip freeze --all 06:16:07 NOTE: Picked up JDK_JAVA_OPTIONS: 06:16:07 --add-opens=java.base/java.io=ALL-UNNAMED 06:16:07 --add-opens=java.base/java.lang=ALL-UNNAMED 06:16:07 --add-opens=java.base/java.lang.invoke=ALL-UNNAMED 06:16:07 --add-opens=java.base/java.lang.reflect=ALL-UNNAMED 06:16:07 --add-opens=java.base/java.net=ALL-UNNAMED 06:16:07 --add-opens=java.base/java.nio=ALL-UNNAMED 06:16:07 --add-opens=java.base/java.nio.charset=ALL-UNNAMED 06:16:07 --add-opens=java.base/java.nio.file=ALL-UNNAMED 06:16:07 --add-opens=java.base/java.util=ALL-UNNAMED 06:16:07 --add-opens=java.base/java.util.jar=ALL-UNNAMED 06:16:07 --add-opens=java.base/java.util.stream=ALL-UNNAMED 06:16:07 --add-opens=java.base/java.util.zip=ALL-UNNAMED 06:16:07 --add-opens java.base/sun.nio.ch=ALL-UNNAMED 06:16:07 --add-opens java.base/sun.nio.fs=ALL-UNNAMED 06:16:07 -Xlog:disable 06:16:08 testsPCE: 
bcrypt==4.3.0,certifi==2025.6.15,cffi==1.17.1,charset-normalizer==3.4.2,click==8.2.1,contourpy==1.3.2,cryptography==3.3.2,cycler==0.12.1,dict2xml==1.7.6,Flask==2.1.3,Flask-Injector==0.14.0,fonttools==4.58.4,gnpy4tpce==2.4.7,idna==3.10,iniconfig==2.1.0,injector==0.22.0,itsdangerous==2.2.0,Jinja2==3.1.6,kiwisolver==1.4.8,lxml==5.4.0,MarkupSafe==3.0.2,matplotlib==3.10.3,netconf-client==3.2.0,networkx==2.8.8,numpy==1.26.4,packaging==25.0,pandas==1.5.3,paramiko==3.5.1,pbr==5.11.1,pillow==11.2.1,pip==25.1.1,pluggy==1.6.0,psutil==7.0.0,pycparser==2.22,Pygments==2.19.2,PyNaCl==1.5.0,pyparsing==3.2.3,pytest==8.4.1,python-dateutil==2.9.0.post0,pytz==2025.2,requests==2.32.4,scipy==1.16.0,setuptools==50.3.2,six==1.17.0,urllib3==2.5.0,Werkzeug==2.0.3,xlrd==1.2.0 06:16:08 testsPCE: commands[0] /w/workspace/transportpce-tox-verify-transportpce-master/tests> ./launch_tests.sh pce 06:16:08 pytest -q transportpce_tests/pce/test01_pce.py 06:16:52 build_karaf_tests190: OK ✔ in 53.52 seconds 06:16:52 tests190: install_deps> python -I -m pip install 'setuptools>=7.0' -r /w/workspace/transportpce-tox-verify-transportpce-master/tests/requirements.txt -r /w/workspace/transportpce-tox-verify-transportpce-master/tests/test-requirements.txt 06:17:00 build_karaf_tests_hybrid: OK ✔ in 53.71 seconds 06:17:00 tests190: freeze> python -m pip freeze --all 06:17:00 tests190: bcrypt==4.3.0,certifi==2025.6.15,cffi==1.17.1,charset-normalizer==3.4.2,cryptography==45.0.4,dict2xml==1.7.6,idna==3.10,iniconfig==2.1.0,lxml==5.4.0,netconf-client==3.2.0,packaging==25.0,paramiko==3.5.1,pip==25.1.1,pluggy==1.6.0,psutil==7.0.0,pycparser==2.22,Pygments==2.19.2,PyNaCl==1.5.0,pytest==8.4.1,requests==2.32.4,setuptools==80.3.1,urllib3==2.5.0 06:17:00 tests190: commands[0] /w/workspace/transportpce-tox-verify-transportpce-master/tests> ./launch_tests.sh oc 06:17:00 using environment variables from ./karafoc.env 06:17:00 pytest -q transportpce_tests/oc/test01_portmapping.py 06:17:08 .........FFF.FF.FF.FF.FE [100%] 06:17:38 ==================================== ERRORS ==================================== 06:17:38 _ ERROR at teardown of TransportpceOCPortMappingTesting.test_10_xpdr_device_disconnection _ 06:17:38 06:17:38 self = 06:17:38 06:17:38 def _new_conn(self) -> socket.socket: 06:17:38 """Establish a socket connection and set nodelay settings on it. 06:17:38 06:17:38 :return: New socket connection. 06:17:38 """ 06:17:38 try: 06:17:38 > sock = connection.create_connection( 06:17:38 (self._dns_host, self.port), 06:17:38 self.timeout, 06:17:38 source_address=self.source_address, 06:17:38 socket_options=self.socket_options, 06:17:38 ) 06:17:38 06:17:38 ../.tox/tests190/lib/python3.11/site-packages/urllib3/connection.py:198: 06:17:38 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 06:17:38 ../.tox/tests190/lib/python3.11/site-packages/urllib3/util/connection.py:85: in create_connection 06:17:38 raise err 06:17:38 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 06:17:38 06:17:38 address = ('localhost', 8190), timeout = 30, source_address = None 06:17:38 socket_options = [(6, 1, 1)] 06:17:38 06:17:38 def create_connection( 06:17:38 address: tuple[str, int], 06:17:38 timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 06:17:38 source_address: tuple[str, int] | None = None, 06:17:38 socket_options: _TYPE_SOCKET_OPTIONS | None = None, 06:17:38 ) -> socket.socket: 06:17:38 """Connect to *address* and return the socket object. 06:17:38 06:17:38 Convenience function. 
Connect to *address* (a 2-tuple ``(host, 06:17:38 port)``) and return the socket object. Passing the optional 06:17:38 *timeout* parameter will set the timeout on the socket instance 06:17:38 before attempting to connect. If no *timeout* is supplied, the 06:17:38 global default timeout setting returned by :func:`socket.getdefaulttimeout` 06:17:38 is used. If *source_address* is set it must be a tuple of (host, port) 06:17:38 for the socket to bind as a source address before making the connection. 06:17:38 An host of '' or port 0 tells the OS to use the default. 06:17:38 """ 06:17:38 06:17:38 host, port = address 06:17:38 if host.startswith("["): 06:17:38 host = host.strip("[]") 06:17:38 err = None 06:17:38 06:17:38 # Using the value from allowed_gai_family() in the context of getaddrinfo lets 06:17:38 # us select whether to work with IPv4 DNS records, IPv6 records, or both. 06:17:38 # The original create_connection function always returns all records. 06:17:38 family = allowed_gai_family() 06:17:38 06:17:38 try: 06:17:38 host.encode("idna") 06:17:38 except UnicodeError: 06:17:38 raise LocationParseError(f"'{host}', label empty or too long") from None 06:17:38 06:17:38 for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM): 06:17:38 af, socktype, proto, canonname, sa = res 06:17:38 sock = None 06:17:38 try: 06:17:38 sock = socket.socket(af, socktype, proto) 06:17:38 06:17:38 # If provided, set socket level options before connecting. 06:17:38 _set_socket_options(sock, socket_options) 06:17:38 06:17:38 if timeout is not _DEFAULT_TIMEOUT: 06:17:38 sock.settimeout(timeout) 06:17:38 if source_address: 06:17:38 sock.bind(source_address) 06:17:38 > sock.connect(sa) 06:17:38 E ConnectionRefusedError: [Errno 111] Connection refused 06:17:38 06:17:38 ../.tox/tests190/lib/python3.11/site-packages/urllib3/util/connection.py:73: ConnectionRefusedError 06:17:38 06:17:38 The above exception was the direct cause of the following exception: 06:17:38 06:17:38 self = 06:17:38 method = 'DELETE' 06:17:38 url = '/rests/data/open-terminal-meta-data:open-terminal-meta-data', body = None 06:17:38 headers = {'User-Agent': 'python-requests/2.32.4', 'Accept-Encoding': 'gzip, deflate', 'Accept': 'application/json', 'Connection': 'keep-alive', 'Content-Type': 'application/json', 'Content-Length': '0', 'Authorization': 'Basic YWRtaW46YWRtaW4='} 06:17:38 retries = Retry(total=0, connect=None, read=False, redirect=None, status=None) 06:17:38 redirect = False, assert_same_host = False 06:17:38 timeout = Timeout(connect=30, read=30, total=None), pool_timeout = None 06:17:38 release_conn = False, chunked = False, body_pos = None, preload_content = False 06:17:38 decode_content = False, response_kw = {} 06:17:38 parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/rests/data/open-terminal-meta-data:open-terminal-meta-data', query=None, fragment=None) 06:17:38 destination_scheme = None, conn = None, release_this_conn = True 06:17:38 http_tunnel_required = False, err = None, clean_exit = False 06:17:38 06:17:38 def urlopen( # type: ignore[override] 06:17:38 self, 06:17:38 method: str, 06:17:38 url: str, 06:17:38 body: _TYPE_BODY | None = None, 06:17:38 headers: typing.Mapping[str, str] | None = None, 06:17:38 retries: Retry | bool | int | None = None, 06:17:38 redirect: bool = True, 06:17:38 assert_same_host: bool = True, 06:17:38 timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 06:17:38 pool_timeout: int | None = None, 06:17:38 release_conn: bool | None = None, 06:17:38 chunked: bool = False, 06:17:38 
body_pos: _TYPE_BODY_POSITION | None = None, 06:17:38 preload_content: bool = True, 06:17:38 decode_content: bool = True, 06:17:38 **response_kw: typing.Any, 06:17:38 ) -> BaseHTTPResponse: 06:17:38 """ 06:17:38 Get a connection from the pool and perform an HTTP request. This is the 06:17:38 lowest level call for making a request, so you'll need to specify all 06:17:38 the raw details. 06:17:38 06:17:38 .. note:: 06:17:38 06:17:38 More commonly, it's appropriate to use a convenience method 06:17:38 such as :meth:`request`. 06:17:38 06:17:38 .. note:: 06:17:38 06:17:38 `release_conn` will only behave as expected if 06:17:38 `preload_content=False` because we want to make 06:17:38 `preload_content=False` the default behaviour someday soon without 06:17:38 breaking backwards compatibility. 06:17:38 06:17:38 :param method: 06:17:38 HTTP request method (such as GET, POST, PUT, etc.) 06:17:38 06:17:38 :param url: 06:17:38 The URL to perform the request on. 06:17:38 06:17:38 :param body: 06:17:38 Data to send in the request body, either :class:`str`, :class:`bytes`, 06:17:38 an iterable of :class:`str`/:class:`bytes`, or a file-like object. 06:17:38 06:17:38 :param headers: 06:17:38 Dictionary of custom headers to send, such as User-Agent, 06:17:38 If-None-Match, etc. If None, pool headers are used. If provided, 06:17:38 these headers completely replace any pool-specific headers. 06:17:38 06:17:38 :param retries: 06:17:38 Configure the number of retries to allow before raising a 06:17:38 :class:`~urllib3.exceptions.MaxRetryError` exception. 06:17:38 06:17:38 If ``None`` (default) will retry 3 times, see ``Retry.DEFAULT``. Pass a 06:17:38 :class:`~urllib3.util.retry.Retry` object for fine-grained control 06:17:38 over different types of retries. 06:17:38 Pass an integer number to retry connection errors that many times, 06:17:38 but no other types of errors. Pass zero to never retry. 06:17:38 06:17:38 If ``False``, then retries are disabled and any exception is raised 06:17:38 immediately. Also, instead of raising a MaxRetryError on redirects, 06:17:38 the redirect response will be returned. 06:17:38 06:17:38 :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. 06:17:38 06:17:38 :param redirect: 06:17:38 If True, automatically handle redirects (status codes 301, 302, 06:17:38 303, 307, 308). Each redirect counts as a retry. Disabling retries 06:17:38 will disable redirect, too. 06:17:38 06:17:38 :param assert_same_host: 06:17:38 If ``True``, will make sure that the host of the pool requests is 06:17:38 consistent else will raise HostChangedError. When ``False``, you can 06:17:38 use the pool on an HTTP proxy and request foreign hosts. 06:17:38 06:17:38 :param timeout: 06:17:38 If specified, overrides the default timeout for this one 06:17:38 request. It may be a float (in seconds) or an instance of 06:17:38 :class:`urllib3.util.Timeout`. 06:17:38 06:17:38 :param pool_timeout: 06:17:38 If set and the pool is set to block=True, then this method will 06:17:38 block for ``pool_timeout`` seconds and raise EmptyPoolError if no 06:17:38 connection is available within the time period. 06:17:38 06:17:38 :param bool preload_content: 06:17:38 If True, the response's body will be preloaded into memory. 06:17:38 06:17:38 :param bool decode_content: 06:17:38 If True, will attempt to decode the body based on the 06:17:38 'content-encoding' header. 
06:17:38 06:17:38 :param release_conn: 06:17:38 If False, then the urlopen call will not release the connection 06:17:38 back into the pool once a response is received (but will release if 06:17:38 you read the entire contents of the response such as when 06:17:38 `preload_content=True`). This is useful if you're not preloading 06:17:38 the response's content immediately. You will need to call 06:17:38 ``r.release_conn()`` on the response ``r`` to return the connection 06:17:38 back into the pool. If None, it takes the value of ``preload_content`` 06:17:38 which defaults to ``True``. 06:17:38 06:17:38 :param bool chunked: 06:17:38 If True, urllib3 will send the body using chunked transfer 06:17:38 encoding. Otherwise, urllib3 will send the body using the standard 06:17:38 content-length form. Defaults to False. 06:17:38 06:17:38 :param int body_pos: 06:17:38 Position to seek to in file-like body in the event of a retry or 06:17:38 redirect. Typically this won't need to be set because urllib3 will 06:17:38 auto-populate the value when needed. 06:17:38 """ 06:17:38 parsed_url = parse_url(url) 06:17:38 destination_scheme = parsed_url.scheme 06:17:38 06:17:38 if headers is None: 06:17:38 headers = self.headers 06:17:38 06:17:38 if not isinstance(retries, Retry): 06:17:38 retries = Retry.from_int(retries, redirect=redirect, default=self.retries) 06:17:38 06:17:38 if release_conn is None: 06:17:38 release_conn = preload_content 06:17:38 06:17:38 # Check host 06:17:38 if assert_same_host and not self.is_same_host(url): 06:17:38 raise HostChangedError(self, url, retries) 06:17:38 06:17:38 # Ensure that the URL we're connecting to is properly encoded 06:17:38 if url.startswith("/"): 06:17:38 url = to_str(_encode_target(url)) 06:17:38 else: 06:17:38 url = to_str(parsed_url.url) 06:17:38 06:17:38 conn = None 06:17:38 06:17:38 # Track whether `conn` needs to be released before 06:17:38 # returning/raising/recursing. Update this variable if necessary, and 06:17:38 # leave `release_conn` constant throughout the function. That way, if 06:17:38 # the function recurses, the original value of `release_conn` will be 06:17:38 # passed down into the recursive call, and its value will be respected. 06:17:38 # 06:17:38 # See issue #651 [1] for details. 06:17:38 # 06:17:38 # [1] 06:17:38 release_this_conn = release_conn 06:17:38 06:17:38 http_tunnel_required = connection_requires_http_tunnel( 06:17:38 self.proxy, self.proxy_config, destination_scheme 06:17:38 ) 06:17:38 06:17:38 # Merge the proxy headers. Only done when not using HTTP CONNECT. We 06:17:38 # have to copy the headers dict so we can safely change it without those 06:17:38 # changes being reflected in anyone else's copy. 06:17:38 if not http_tunnel_required: 06:17:38 headers = headers.copy() # type: ignore[attr-defined] 06:17:38 headers.update(self.proxy_headers) # type: ignore[union-attr] 06:17:38 06:17:38 # Must keep the exception bound to a separate variable or else Python 3 06:17:38 # complains about UnboundLocalError. 06:17:38 err = None 06:17:38 06:17:38 # Keep track of whether we cleanly exited the except block. This 06:17:38 # ensures we do proper cleanup in finally. 06:17:38 clean_exit = False 06:17:38 06:17:38 # Rewind body position, if needed. Record current position 06:17:38 # for future rewinds in the event of a redirect/retry. 06:17:38 body_pos = set_file_position(body, body_pos) 06:17:38 06:17:38 try: 06:17:38 # Request a connection from the queue. 
06:17:38 timeout_obj = self._get_timeout(timeout) 06:17:38 conn = self._get_conn(timeout=pool_timeout) 06:17:38 06:17:38 conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] 06:17:38 06:17:38 # Is this a closed/new connection that requires CONNECT tunnelling? 06:17:38 if self.proxy is not None and http_tunnel_required and conn.is_closed: 06:17:38 try: 06:17:38 self._prepare_proxy(conn) 06:17:38 except (BaseSSLError, OSError, SocketTimeout) as e: 06:17:38 self._raise_timeout( 06:17:38 err=e, url=self.proxy.url, timeout_value=conn.timeout 06:17:38 ) 06:17:38 raise 06:17:38 06:17:38 # If we're going to release the connection in ``finally:``, then 06:17:38 # the response doesn't need to know about the connection. Otherwise 06:17:38 # it will also try to release it and we'll have a double-release 06:17:38 # mess. 06:17:38 response_conn = conn if not release_conn else None 06:17:38 06:17:38 # Make the request on the HTTPConnection object 06:17:38 > response = self._make_request( 06:17:38 conn, 06:17:38 method, 06:17:38 url, 06:17:38 timeout=timeout_obj, 06:17:38 body=body, 06:17:38 headers=headers, 06:17:38 chunked=chunked, 06:17:38 retries=retries, 06:17:38 response_conn=response_conn, 06:17:38 preload_content=preload_content, 06:17:38 decode_content=decode_content, 06:17:38 **response_kw, 06:17:38 ) 06:17:38 06:17:38 ../.tox/tests190/lib/python3.11/site-packages/urllib3/connectionpool.py:787: 06:17:38 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 06:17:38 ../.tox/tests190/lib/python3.11/site-packages/urllib3/connectionpool.py:493: in _make_request 06:17:38 conn.request( 06:17:38 ../.tox/tests190/lib/python3.11/site-packages/urllib3/connection.py:494: in request 06:17:38 self.endheaders() 06:17:38 /opt/pyenv/versions/3.11.10/lib/python3.11/http/client.py:1298: in endheaders 06:17:38 self._send_output(message_body, encode_chunked=encode_chunked) 06:17:38 /opt/pyenv/versions/3.11.10/lib/python3.11/http/client.py:1058: in _send_output 06:17:38 self.send(msg) 06:17:38 /opt/pyenv/versions/3.11.10/lib/python3.11/http/client.py:996: in send 06:17:38 self.connect() 06:17:38 ../.tox/tests190/lib/python3.11/site-packages/urllib3/connection.py:325: in connect 06:17:38 self.sock = self._new_conn() 06:17:38 ^^^^^^^^^^^^^^^^ 06:17:38 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 06:17:38 06:17:38 self = 06:17:38 06:17:38 def _new_conn(self) -> socket.socket: 06:17:38 """Establish a socket connection and set nodelay settings on it. 06:17:38 06:17:38 :return: New socket connection. 06:17:38 """ 06:17:38 try: 06:17:38 sock = connection.create_connection( 06:17:38 (self._dns_host, self.port), 06:17:38 self.timeout, 06:17:38 source_address=self.source_address, 06:17:38 socket_options=self.socket_options, 06:17:38 ) 06:17:38 except socket.gaierror as e: 06:17:38 raise NameResolutionError(self.host, self, e) from e 06:17:38 except SocketTimeout as e: 06:17:38 raise ConnectTimeoutError( 06:17:38 self, 06:17:38 f"Connection to {self.host} timed out. 
(connect timeout={self.timeout})", 06:17:38 ) from e 06:17:38 06:17:38 except OSError as e: 06:17:38 > raise NewConnectionError( 06:17:38 self, f"Failed to establish a new connection: {e}" 06:17:38 ) from e 06:17:38 E urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused 06:17:38 06:17:38 ../.tox/tests190/lib/python3.11/site-packages/urllib3/connection.py:213: NewConnectionError 06:17:38 06:17:38 The above exception was the direct cause of the following exception: 06:17:38 06:17:38 self = 06:17:38 request = , stream = False 06:17:38 timeout = Timeout(connect=30, read=30, total=None), verify = True, cert = None 06:17:38 proxies = OrderedDict() 06:17:38 06:17:38 def send( 06:17:38 self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 06:17:38 ): 06:17:38 """Sends PreparedRequest object. Returns Response object. 06:17:38 06:17:38 :param request: The :class:`PreparedRequest ` being sent. 06:17:38 :param stream: (optional) Whether to stream the request content. 06:17:38 :param timeout: (optional) How long to wait for the server to send 06:17:38 data before giving up, as a float, or a :ref:`(connect timeout, 06:17:38 read timeout) ` tuple. 06:17:38 :type timeout: float or tuple or urllib3 Timeout object 06:17:38 :param verify: (optional) Either a boolean, in which case it controls whether 06:17:38 we verify the server's TLS certificate, or a string, in which case it 06:17:38 must be a path to a CA bundle to use 06:17:38 :param cert: (optional) Any user-provided SSL certificate to be trusted. 06:17:38 :param proxies: (optional) The proxies dictionary to apply to the request. 06:17:38 :rtype: requests.Response 06:17:38 """ 06:17:38 06:17:38 try: 06:17:38 conn = self.get_connection_with_tls_context( 06:17:38 request, verify, proxies=proxies, cert=cert 06:17:38 ) 06:17:38 except LocationValueError as e: 06:17:38 raise InvalidURL(e, request=request) 06:17:38 06:17:38 self.cert_verify(conn, request.url, verify, cert) 06:17:38 url = self.request_url(request, proxies) 06:17:38 self.add_headers( 06:17:38 request, 06:17:38 stream=stream, 06:17:38 timeout=timeout, 06:17:38 verify=verify, 06:17:38 cert=cert, 06:17:38 proxies=proxies, 06:17:38 ) 06:17:38 06:17:38 chunked = not (request.body is None or "Content-Length" in request.headers) 06:17:38 06:17:38 if isinstance(timeout, tuple): 06:17:38 try: 06:17:38 connect, read = timeout 06:17:38 timeout = TimeoutSauce(connect=connect, read=read) 06:17:38 except ValueError: 06:17:38 raise ValueError( 06:17:38 f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 06:17:38 f"or a single float to set both timeouts to the same value." 
06:17:38 ) 06:17:38 elif isinstance(timeout, TimeoutSauce): 06:17:38 pass 06:17:38 else: 06:17:38 timeout = TimeoutSauce(connect=timeout, read=timeout) 06:17:38 06:17:38 try: 06:17:38 > resp = conn.urlopen( 06:17:38 method=request.method, 06:17:38 url=url, 06:17:38 body=request.body, 06:17:38 headers=request.headers, 06:17:38 redirect=False, 06:17:38 assert_same_host=False, 06:17:38 preload_content=False, 06:17:38 decode_content=False, 06:17:38 retries=self.max_retries, 06:17:38 timeout=timeout, 06:17:38 chunked=chunked, 06:17:38 ) 06:17:38 06:17:38 ../.tox/tests190/lib/python3.11/site-packages/requests/adapters.py:667: 06:17:38 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 06:17:38 ../.tox/tests190/lib/python3.11/site-packages/urllib3/connectionpool.py:841: in urlopen 06:17:38 retries = retries.increment( 06:17:38 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 06:17:38 06:17:38 self = Retry(total=0, connect=None, read=False, redirect=None, status=None) 06:17:38 method = 'DELETE' 06:17:38 url = '/rests/data/open-terminal-meta-data:open-terminal-meta-data' 06:17:38 response = None 06:17:38 error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused') 06:17:38 _pool = 06:17:38 _stacktrace = 06:17:38 06:17:38 def increment( 06:17:38 self, 06:17:38 method: str | None = None, 06:17:38 url: str | None = None, 06:17:38 response: BaseHTTPResponse | None = None, 06:17:38 error: Exception | None = None, 06:17:38 _pool: ConnectionPool | None = None, 06:17:38 _stacktrace: TracebackType | None = None, 06:17:38 ) -> Self: 06:17:38 """Return a new Retry object with incremented retry counters. 06:17:38 06:17:38 :param response: A response object, or None, if the server did not 06:17:38 return a response. 06:17:38 :type response: :class:`~urllib3.response.BaseHTTPResponse` 06:17:38 :param Exception error: An error encountered during the request, or 06:17:38 None if the response was received successfully. 06:17:38 06:17:38 :return: A new ``Retry`` object. 06:17:38 """ 06:17:38 if self.total is False and error: 06:17:38 # Disabled, indicate to re-raise the error. 06:17:38 raise reraise(type(error), error, _stacktrace) 06:17:38 06:17:38 total = self.total 06:17:38 if total is not None: 06:17:38 total -= 1 06:17:38 06:17:38 connect = self.connect 06:17:38 read = self.read 06:17:38 redirect = self.redirect 06:17:38 status_count = self.status 06:17:38 other = self.other 06:17:38 cause = "unknown" 06:17:38 status = None 06:17:38 redirect_location = None 06:17:38 06:17:38 if error and self._is_connection_error(error): 06:17:38 # Connect retry? 06:17:38 if connect is False: 06:17:38 raise reraise(type(error), error, _stacktrace) 06:17:38 elif connect is not None: 06:17:38 connect -= 1 06:17:38 06:17:38 elif error and self._is_read_error(error): 06:17:38 # Read retry? 06:17:38 if read is False or method is None or not self._is_method_retryable(method): 06:17:38 raise reraise(type(error), error, _stacktrace) 06:17:38 elif read is not None: 06:17:38 read -= 1 06:17:38 06:17:38 elif error: 06:17:38 # Other retry? 06:17:38 if other is not None: 06:17:38 other -= 1 06:17:38 06:17:38 elif response and response.get_redirect_location(): 06:17:38 # Redirect retry? 
06:17:38 if redirect is not None: 06:17:38 redirect -= 1 06:17:38 cause = "too many redirects" 06:17:38 response_redirect_location = response.get_redirect_location() 06:17:38 if response_redirect_location: 06:17:38 redirect_location = response_redirect_location 06:17:38 status = response.status 06:17:38 06:17:38 else: 06:17:38 # Incrementing because of a server error like a 500 in 06:17:38 # status_forcelist and the given method is in the allowed_methods 06:17:38 cause = ResponseError.GENERIC_ERROR 06:17:38 if response and response.status: 06:17:38 if status_count is not None: 06:17:38 status_count -= 1 06:17:38 cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status) 06:17:38 status = response.status 06:17:38 06:17:38 history = self.history + ( 06:17:38 RequestHistory(method, url, error, status, redirect_location), 06:17:38 ) 06:17:38 06:17:38 new_retry = self.new( 06:17:38 total=total, 06:17:38 connect=connect, 06:17:38 read=read, 06:17:38 redirect=redirect, 06:17:38 status=status_count, 06:17:38 other=other, 06:17:38 history=history, 06:17:38 ) 06:17:38 06:17:38 if new_retry.is_exhausted(): 06:17:38 reason = error or ResponseError(cause) 06:17:38 > raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type] 06:17:38 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 06:17:38 E urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=8190): Max retries exceeded with url: /rests/data/open-terminal-meta-data:open-terminal-meta-data (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 06:17:38 06:17:38 ../.tox/tests190/lib/python3.11/site-packages/urllib3/util/retry.py:519: MaxRetryError 06:17:38 06:17:38 During handling of the above exception, another exception occurred: 06:17:38 06:17:38 cls = 06:17:38 06:17:38 @classmethod 06:17:38 def tearDownClass(cls): 06:17:38 # pylint: disable=not-an-iterable 06:17:38 > test_utils_oc.del_metadata() 06:17:38 06:17:38 transportpce_tests/oc/test01_portmapping.py:35: 06:17:38 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 06:17:38 transportpce_tests/common/test_utils_oc.py:287: in del_metadata 06:17:38 response = test_utils.delete_request(url[test_utils.RESTCONF_VERSION].format('{}')) 06:17:38 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 06:17:38 transportpce_tests/common/test_utils.py:133: in delete_request 06:17:38 return requests.request( 06:17:38 ../.tox/tests190/lib/python3.11/site-packages/requests/api.py:59: in request 06:17:38 return session.request(method=method, url=url, **kwargs) 06:17:38 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 06:17:38 ../.tox/tests190/lib/python3.11/site-packages/requests/sessions.py:589: in request 06:17:38 resp = self.send(prep, **send_kwargs) 06:17:38 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 06:17:38 ../.tox/tests190/lib/python3.11/site-packages/requests/sessions.py:703: in send 06:17:38 r = adapter.send(request, **kwargs) 06:17:38 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 06:17:38 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 06:17:38 06:17:38 self = 06:17:38 request = , stream = False 06:17:38 timeout = Timeout(connect=30, read=30, total=None), verify = True, cert = None 06:17:38 proxies = OrderedDict() 06:17:38 06:17:38 def send( 06:17:38 self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 06:17:38 ): 06:17:38 """Sends PreparedRequest object. Returns Response object. 
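# A minimal sketch (not part of this log) of why the Retry(total=0, ...) shown in
# this frame gives up immediately: the first connection error drives total to -1,
# the new Retry is exhausted, and increment() raises MaxRetryError. Assumes
# urllib3 v2.x; the conn argument of NewConnectionError is left as None here.
from urllib3.exceptions import MaxRetryError, NewConnectionError
from urllib3.util.retry import Retry

retry = Retry(total=0, connect=None, read=False, redirect=None, status=None)
error = NewConnectionError(None, "Failed to establish a new connection")
try:
    retry.increment(method="DELETE", url="/rests/data/...", error=error)
except MaxRetryError as exc:
    print(exc.reason is error)  # True: the original error is carried as the reason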
06:17:38 06:17:38 :param request: The :class:`PreparedRequest ` being sent. 06:17:38 :param stream: (optional) Whether to stream the request content. 06:17:38 :param timeout: (optional) How long to wait for the server to send 06:17:38 data before giving up, as a float, or a :ref:`(connect timeout, 06:17:38 read timeout) ` tuple. 06:17:38 :type timeout: float or tuple or urllib3 Timeout object 06:17:38 :param verify: (optional) Either a boolean, in which case it controls whether 06:17:38 we verify the server's TLS certificate, or a string, in which case it 06:17:38 must be a path to a CA bundle to use 06:17:38 :param cert: (optional) Any user-provided SSL certificate to be trusted. 06:17:38 :param proxies: (optional) The proxies dictionary to apply to the request. 06:17:38 :rtype: requests.Response 06:17:38 """ 06:17:38 06:17:38 try: 06:17:38 conn = self.get_connection_with_tls_context( 06:17:38 request, verify, proxies=proxies, cert=cert 06:17:38 ) 06:17:38 except LocationValueError as e: 06:17:38 raise InvalidURL(e, request=request) 06:17:38 06:17:38 self.cert_verify(conn, request.url, verify, cert) 06:17:38 url = self.request_url(request, proxies) 06:17:38 self.add_headers( 06:17:38 request, 06:17:38 stream=stream, 06:17:38 timeout=timeout, 06:17:38 verify=verify, 06:17:38 cert=cert, 06:17:38 proxies=proxies, 06:17:38 ) 06:17:38 06:17:38 chunked = not (request.body is None or "Content-Length" in request.headers) 06:17:38 06:17:38 if isinstance(timeout, tuple): 06:17:38 try: 06:17:38 connect, read = timeout 06:17:38 timeout = TimeoutSauce(connect=connect, read=read) 06:17:38 except ValueError: 06:17:38 raise ValueError( 06:17:38 f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 06:17:38 f"or a single float to set both timeouts to the same value." 06:17:38 ) 06:17:38 elif isinstance(timeout, TimeoutSauce): 06:17:38 pass 06:17:38 else: 06:17:38 timeout = TimeoutSauce(connect=timeout, read=timeout) 06:17:38 06:17:38 try: 06:17:38 resp = conn.urlopen( 06:17:38 method=request.method, 06:17:38 url=url, 06:17:38 body=request.body, 06:17:38 headers=request.headers, 06:17:38 redirect=False, 06:17:38 assert_same_host=False, 06:17:38 preload_content=False, 06:17:38 decode_content=False, 06:17:38 retries=self.max_retries, 06:17:38 timeout=timeout, 06:17:38 chunked=chunked, 06:17:38 ) 06:17:38 06:17:38 except (ProtocolError, OSError) as err: 06:17:38 raise ConnectionError(err, request=request) 06:17:38 06:17:38 except MaxRetryError as e: 06:17:38 if isinstance(e.reason, ConnectTimeoutError): 06:17:38 # TODO: Remove this in 3.0.0: see #2811 06:17:38 if not isinstance(e.reason, NewConnectionError): 06:17:38 raise ConnectTimeout(e, request=request) 06:17:38 06:17:38 if isinstance(e.reason, ResponseError): 06:17:38 raise RetryError(e, request=request) 06:17:38 06:17:38 if isinstance(e.reason, _ProxyError): 06:17:38 raise ProxyError(e, request=request) 06:17:38 06:17:38 if isinstance(e.reason, _SSLError): 06:17:38 # This branch is for urllib3 v1.22 and later. 
06:17:38 raise SSLError(e, request=request) 06:17:38 06:17:38 > raise ConnectionError(e, request=request) 06:17:38 E requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=8190): Max retries exceeded with url: /rests/data/open-terminal-meta-data:open-terminal-meta-data (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 06:17:38 06:17:38 ../.tox/tests190/lib/python3.11/site-packages/requests/adapters.py:700: ConnectionError 06:17:38 ----------------------------- Captured stdout call ----------------------------- 06:17:38 execution of test_10_xpdr_device_disconnection 06:17:38 =================================== FAILURES =================================== 06:17:38 _________ TransportpceOCPortMappingTesting.test_01_meta_data_insertion _________ 06:17:38 06:17:38 self = 06:17:38 06:17:38 def _new_conn(self) -> socket.socket: 06:17:38 """Establish a socket connection and set nodelay settings on it. 06:17:38 06:17:38 :return: New socket connection. 06:17:38 """ 06:17:38 try: 06:17:38 > sock = connection.create_connection( 06:17:38 (self._dns_host, self.port), 06:17:38 self.timeout, 06:17:38 source_address=self.source_address, 06:17:38 socket_options=self.socket_options, 06:17:38 ) 06:17:38 06:17:38 ../.tox/tests190/lib/python3.11/site-packages/urllib3/connection.py:198: 06:17:38 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 06:17:38 ../.tox/tests190/lib/python3.11/site-packages/urllib3/util/connection.py:85: in create_connection 06:17:38 raise err 06:17:38 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 06:17:38 06:17:38 address = ('localhost', 8190), timeout = 30, source_address = None 06:17:38 socket_options = [(6, 1, 1)] 06:17:38 06:17:38 def create_connection( 06:17:38 address: tuple[str, int], 06:17:38 timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 06:17:38 source_address: tuple[str, int] | None = None, 06:17:38 socket_options: _TYPE_SOCKET_OPTIONS | None = None, 06:17:38 ) -> socket.socket: 06:17:38 """Connect to *address* and return the socket object. 06:17:38 06:17:38 Convenience function. Connect to *address* (a 2-tuple ``(host, 06:17:38 port)``) and return the socket object. Passing the optional 06:17:38 *timeout* parameter will set the timeout on the socket instance 06:17:38 before attempting to connect. If no *timeout* is supplied, the 06:17:38 global default timeout setting returned by :func:`socket.getdefaulttimeout` 06:17:38 is used. If *source_address* is set it must be a tuple of (host, port) 06:17:38 for the socket to bind as a source address before making the connection. 06:17:38 An host of '' or port 0 tells the OS to use the default. 06:17:38 """ 06:17:38 06:17:38 host, port = address 06:17:38 if host.startswith("["): 06:17:38 host = host.strip("[]") 06:17:38 err = None 06:17:38 06:17:38 # Using the value from allowed_gai_family() in the context of getaddrinfo lets 06:17:38 # us select whether to work with IPv4 DNS records, IPv6 records, or both. 06:17:38 # The original create_connection function always returns all records. 
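# A minimal sketch (not part of this log) of the socket-level step that fails in
# this frame: nothing is listening on localhost:8190, so connect() gets
# ECONNREFUSED. Host and port are taken from the traceback above;
# socket.create_connection is the stdlib counterpart of the urllib3 helper quoted here.
import socket

try:
    with socket.create_connection(("localhost", 8190), timeout=30):
        print("RESTCONF endpoint is reachable")
except ConnectionRefusedError as exc:
    print(f"nothing listening on localhost:8190: {exc}")  # [Errno 111] Connection refused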
06:17:38 family = allowed_gai_family() 06:17:38 06:17:38 try: 06:17:38 host.encode("idna") 06:17:38 except UnicodeError: 06:17:38 raise LocationParseError(f"'{host}', label empty or too long") from None 06:17:38 06:17:38 for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM): 06:17:38 af, socktype, proto, canonname, sa = res 06:17:38 sock = None 06:17:38 try: 06:17:38 sock = socket.socket(af, socktype, proto) 06:17:38 06:17:38 # If provided, set socket level options before connecting. 06:17:38 _set_socket_options(sock, socket_options) 06:17:38 06:17:38 if timeout is not _DEFAULT_TIMEOUT: 06:17:38 sock.settimeout(timeout) 06:17:38 if source_address: 06:17:38 sock.bind(source_address) 06:17:38 > sock.connect(sa) 06:17:38 E ConnectionRefusedError: [Errno 111] Connection refused 06:17:38 06:17:38 ../.tox/tests190/lib/python3.11/site-packages/urllib3/util/connection.py:73: ConnectionRefusedError 06:17:38 06:17:38 The above exception was the direct cause of the following exception: 06:17:38 06:17:38 self = 06:17:38 method = 'PUT' 06:17:38 url = '/rests/data/open-terminal-meta-data:open-terminal-meta-data' 06:17:38 body = '{"open-terminal-meta-data:open-terminal-meta-data": {"transceiver-info": {"transceiver": [{"part-no": "Transceiver-pa...nnectable-port": [5, 2]}, {"nbl-id": 3, "connectable-port": [5, 3]}, {"nbl-id": 4, "connectable-port": [5, 4]}]}]}]}}}' 06:17:38 headers = {'User-Agent': 'python-requests/2.32.4', 'Accept-Encoding': 'gzip, deflate', 'Accept': 'application/json', 'Connection': 'keep-alive', 'Content-Type': 'application/json', 'Content-Length': '1801', 'Authorization': 'Basic YWRtaW46YWRtaW4='} 06:17:38 retries = Retry(total=0, connect=None, read=False, redirect=None, status=None) 06:17:38 redirect = False, assert_same_host = False 06:17:38 timeout = Timeout(connect=30, read=30, total=None), pool_timeout = None 06:17:38 release_conn = False, chunked = False, body_pos = None, preload_content = False 06:17:38 decode_content = False, response_kw = {} 06:17:38 parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/rests/data/open-terminal-meta-data:open-terminal-meta-data', query=None, fragment=None) 06:17:38 destination_scheme = None, conn = None, release_this_conn = True 06:17:38 http_tunnel_required = False, err = None, clean_exit = False 06:17:38 06:17:38 def urlopen( # type: ignore[override] 06:17:38 self, 06:17:38 method: str, 06:17:38 url: str, 06:17:38 body: _TYPE_BODY | None = None, 06:17:38 headers: typing.Mapping[str, str] | None = None, 06:17:38 retries: Retry | bool | int | None = None, 06:17:38 redirect: bool = True, 06:17:38 assert_same_host: bool = True, 06:17:38 timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 06:17:38 pool_timeout: int | None = None, 06:17:38 release_conn: bool | None = None, 06:17:38 chunked: bool = False, 06:17:38 body_pos: _TYPE_BODY_POSITION | None = None, 06:17:38 preload_content: bool = True, 06:17:38 decode_content: bool = True, 06:17:38 **response_kw: typing.Any, 06:17:38 ) -> BaseHTTPResponse: 06:17:38 """ 06:17:38 Get a connection from the pool and perform an HTTP request. This is the 06:17:38 lowest level call for making a request, so you'll need to specify all 06:17:38 the raw details. 06:17:38 06:17:38 .. note:: 06:17:38 06:17:38 More commonly, it's appropriate to use a convenience method 06:17:38 such as :meth:`request`. 06:17:38 06:17:38 .. 
note:: 06:17:38 06:17:38 `release_conn` will only behave as expected if 06:17:38 `preload_content=False` because we want to make 06:17:38 `preload_content=False` the default behaviour someday soon without 06:17:38 breaking backwards compatibility. 06:17:38 06:17:38 :param method: 06:17:38 HTTP request method (such as GET, POST, PUT, etc.) 06:17:38 06:17:38 :param url: 06:17:38 The URL to perform the request on. 06:17:38 06:17:38 :param body: 06:17:38 Data to send in the request body, either :class:`str`, :class:`bytes`, 06:17:38 an iterable of :class:`str`/:class:`bytes`, or a file-like object. 06:17:38 06:17:38 :param headers: 06:17:38 Dictionary of custom headers to send, such as User-Agent, 06:17:38 If-None-Match, etc. If None, pool headers are used. If provided, 06:17:38 these headers completely replace any pool-specific headers. 06:17:38 06:17:38 :param retries: 06:17:38 Configure the number of retries to allow before raising a 06:17:38 :class:`~urllib3.exceptions.MaxRetryError` exception. 06:17:38 06:17:38 If ``None`` (default) will retry 3 times, see ``Retry.DEFAULT``. Pass a 06:17:38 :class:`~urllib3.util.retry.Retry` object for fine-grained control 06:17:38 over different types of retries. 06:17:38 Pass an integer number to retry connection errors that many times, 06:17:38 but no other types of errors. Pass zero to never retry. 06:17:38 06:17:38 If ``False``, then retries are disabled and any exception is raised 06:17:38 immediately. Also, instead of raising a MaxRetryError on redirects, 06:17:38 the redirect response will be returned. 06:17:38 06:17:38 :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. 06:17:38 06:17:38 :param redirect: 06:17:38 If True, automatically handle redirects (status codes 301, 302, 06:17:38 303, 307, 308). Each redirect counts as a retry. Disabling retries 06:17:38 will disable redirect, too. 06:17:38 06:17:38 :param assert_same_host: 06:17:38 If ``True``, will make sure that the host of the pool requests is 06:17:38 consistent else will raise HostChangedError. When ``False``, you can 06:17:38 use the pool on an HTTP proxy and request foreign hosts. 06:17:38 06:17:38 :param timeout: 06:17:38 If specified, overrides the default timeout for this one 06:17:38 request. It may be a float (in seconds) or an instance of 06:17:38 :class:`urllib3.util.Timeout`. 06:17:38 06:17:38 :param pool_timeout: 06:17:38 If set and the pool is set to block=True, then this method will 06:17:38 block for ``pool_timeout`` seconds and raise EmptyPoolError if no 06:17:38 connection is available within the time period. 06:17:38 06:17:38 :param bool preload_content: 06:17:38 If True, the response's body will be preloaded into memory. 06:17:38 06:17:38 :param bool decode_content: 06:17:38 If True, will attempt to decode the body based on the 06:17:38 'content-encoding' header. 06:17:38 06:17:38 :param release_conn: 06:17:38 If False, then the urlopen call will not release the connection 06:17:38 back into the pool once a response is received (but will release if 06:17:38 you read the entire contents of the response such as when 06:17:38 `preload_content=True`). This is useful if you're not preloading 06:17:38 the response's content immediately. You will need to call 06:17:38 ``r.release_conn()`` on the response ``r`` to return the connection 06:17:38 back into the pool. If None, it takes the value of ``preload_content`` 06:17:38 which defaults to ``True``. 
06:17:38 06:17:38 :param bool chunked: 06:17:38 If True, urllib3 will send the body using chunked transfer 06:17:38 encoding. Otherwise, urllib3 will send the body using the standard 06:17:38 content-length form. Defaults to False. 06:17:38 06:17:38 :param int body_pos: 06:17:38 Position to seek to in file-like body in the event of a retry or 06:17:38 redirect. Typically this won't need to be set because urllib3 will 06:17:38 auto-populate the value when needed. 06:17:38 """ 06:17:38 parsed_url = parse_url(url) 06:17:38 destination_scheme = parsed_url.scheme 06:17:38 06:17:38 if headers is None: 06:17:38 headers = self.headers 06:17:38 06:17:38 if not isinstance(retries, Retry): 06:17:38 retries = Retry.from_int(retries, redirect=redirect, default=self.retries) 06:17:38 06:17:38 if release_conn is None: 06:17:38 release_conn = preload_content 06:17:38 06:17:38 # Check host 06:17:38 if assert_same_host and not self.is_same_host(url): 06:17:38 raise HostChangedError(self, url, retries) 06:17:38 06:17:38 # Ensure that the URL we're connecting to is properly encoded 06:17:38 if url.startswith("/"): 06:17:38 url = to_str(_encode_target(url)) 06:17:38 else: 06:17:38 url = to_str(parsed_url.url) 06:17:38 06:17:38 conn = None 06:17:38 06:17:38 # Track whether `conn` needs to be released before 06:17:38 # returning/raising/recursing. Update this variable if necessary, and 06:17:38 # leave `release_conn` constant throughout the function. That way, if 06:17:38 # the function recurses, the original value of `release_conn` will be 06:17:38 # passed down into the recursive call, and its value will be respected. 06:17:38 # 06:17:38 # See issue #651 [1] for details. 06:17:38 # 06:17:38 # [1] 06:17:38 release_this_conn = release_conn 06:17:38 06:17:38 http_tunnel_required = connection_requires_http_tunnel( 06:17:38 self.proxy, self.proxy_config, destination_scheme 06:17:38 ) 06:17:38 06:17:38 # Merge the proxy headers. Only done when not using HTTP CONNECT. We 06:17:38 # have to copy the headers dict so we can safely change it without those 06:17:38 # changes being reflected in anyone else's copy. 06:17:38 if not http_tunnel_required: 06:17:38 headers = headers.copy() # type: ignore[attr-defined] 06:17:38 headers.update(self.proxy_headers) # type: ignore[union-attr] 06:17:38 06:17:38 # Must keep the exception bound to a separate variable or else Python 3 06:17:38 # complains about UnboundLocalError. 06:17:38 err = None 06:17:38 06:17:38 # Keep track of whether we cleanly exited the except block. This 06:17:38 # ensures we do proper cleanup in finally. 06:17:38 clean_exit = False 06:17:38 06:17:38 # Rewind body position, if needed. Record current position 06:17:38 # for future rewinds in the event of a redirect/retry. 06:17:38 body_pos = set_file_position(body, body_pos) 06:17:38 06:17:38 try: 06:17:38 # Request a connection from the queue. 06:17:38 timeout_obj = self._get_timeout(timeout) 06:17:38 conn = self._get_conn(timeout=pool_timeout) 06:17:38 06:17:38 conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] 06:17:38 06:17:38 # Is this a closed/new connection that requires CONNECT tunnelling? 
06:17:38 if self.proxy is not None and http_tunnel_required and conn.is_closed: 06:17:38 try: 06:17:38 self._prepare_proxy(conn) 06:17:38 except (BaseSSLError, OSError, SocketTimeout) as e: 06:17:38 self._raise_timeout( 06:17:38 err=e, url=self.proxy.url, timeout_value=conn.timeout 06:17:38 ) 06:17:38 raise 06:17:38 06:17:38 # If we're going to release the connection in ``finally:``, then 06:17:38 # the response doesn't need to know about the connection. Otherwise 06:17:38 # it will also try to release it and we'll have a double-release 06:17:38 # mess. 06:17:38 response_conn = conn if not release_conn else None 06:17:38 06:17:38 # Make the request on the HTTPConnection object 06:17:38 > response = self._make_request( 06:17:38 conn, 06:17:38 method, 06:17:38 url, 06:17:38 timeout=timeout_obj, 06:17:38 body=body, 06:17:38 headers=headers, 06:17:38 chunked=chunked, 06:17:38 retries=retries, 06:17:38 response_conn=response_conn, 06:17:38 preload_content=preload_content, 06:17:38 decode_content=decode_content, 06:17:38 **response_kw, 06:17:38 ) 06:17:38 06:17:38 ../.tox/tests190/lib/python3.11/site-packages/urllib3/connectionpool.py:787: 06:17:38 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 06:17:38 ../.tox/tests190/lib/python3.11/site-packages/urllib3/connectionpool.py:493: in _make_request 06:17:38 conn.request( 06:17:38 ../.tox/tests190/lib/python3.11/site-packages/urllib3/connection.py:494: in request 06:17:38 self.endheaders() 06:17:38 /opt/pyenv/versions/3.11.10/lib/python3.11/http/client.py:1298: in endheaders 06:17:38 self._send_output(message_body, encode_chunked=encode_chunked) 06:17:38 /opt/pyenv/versions/3.11.10/lib/python3.11/http/client.py:1058: in _send_output 06:17:38 self.send(msg) 06:17:38 /opt/pyenv/versions/3.11.10/lib/python3.11/http/client.py:996: in send 06:17:38 self.connect() 06:17:38 ../.tox/tests190/lib/python3.11/site-packages/urllib3/connection.py:325: in connect 06:17:38 self.sock = self._new_conn() 06:17:38 ^^^^^^^^^^^^^^^^ 06:17:38 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 06:17:38 06:17:38 self = 06:17:38 06:17:38 def _new_conn(self) -> socket.socket: 06:17:38 """Establish a socket connection and set nodelay settings on it. 06:17:38 06:17:38 :return: New socket connection. 06:17:38 """ 06:17:38 try: 06:17:38 sock = connection.create_connection( 06:17:38 (self._dns_host, self.port), 06:17:38 self.timeout, 06:17:38 source_address=self.source_address, 06:17:38 socket_options=self.socket_options, 06:17:38 ) 06:17:38 except socket.gaierror as e: 06:17:38 raise NameResolutionError(self.host, self, e) from e 06:17:38 except SocketTimeout as e: 06:17:38 raise ConnectTimeoutError( 06:17:38 self, 06:17:38 f"Connection to {self.host} timed out. 
(connect timeout={self.timeout})", 06:17:38 ) from e 06:17:38 06:17:38 except OSError as e: 06:17:38 > raise NewConnectionError( 06:17:38 self, f"Failed to establish a new connection: {e}" 06:17:38 ) from e 06:17:38 E urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused 06:17:38 06:17:38 ../.tox/tests190/lib/python3.11/site-packages/urllib3/connection.py:213: NewConnectionError 06:17:38 06:17:38 The above exception was the direct cause of the following exception: 06:17:38 06:17:38 self = 06:17:38 request = , stream = False 06:17:38 timeout = Timeout(connect=30, read=30, total=None), verify = True, cert = None 06:17:38 proxies = OrderedDict() 06:17:38 06:17:38 def send( 06:17:38 self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 06:17:38 ): 06:17:38 """Sends PreparedRequest object. Returns Response object. 06:17:38 06:17:38 :param request: The :class:`PreparedRequest ` being sent. 06:17:38 :param stream: (optional) Whether to stream the request content. 06:17:38 :param timeout: (optional) How long to wait for the server to send 06:17:38 data before giving up, as a float, or a :ref:`(connect timeout, 06:17:38 read timeout) ` tuple. 06:17:38 :type timeout: float or tuple or urllib3 Timeout object 06:17:38 :param verify: (optional) Either a boolean, in which case it controls whether 06:17:38 we verify the server's TLS certificate, or a string, in which case it 06:17:38 must be a path to a CA bundle to use 06:17:38 :param cert: (optional) Any user-provided SSL certificate to be trusted. 06:17:38 :param proxies: (optional) The proxies dictionary to apply to the request. 06:17:38 :rtype: requests.Response 06:17:38 """ 06:17:38 06:17:38 try: 06:17:38 conn = self.get_connection_with_tls_context( 06:17:38 request, verify, proxies=proxies, cert=cert 06:17:38 ) 06:17:38 except LocationValueError as e: 06:17:38 raise InvalidURL(e, request=request) 06:17:38 06:17:38 self.cert_verify(conn, request.url, verify, cert) 06:17:38 url = self.request_url(request, proxies) 06:17:38 self.add_headers( 06:17:38 request, 06:17:38 stream=stream, 06:17:38 timeout=timeout, 06:17:38 verify=verify, 06:17:38 cert=cert, 06:17:38 proxies=proxies, 06:17:38 ) 06:17:38 06:17:38 chunked = not (request.body is None or "Content-Length" in request.headers) 06:17:38 06:17:38 if isinstance(timeout, tuple): 06:17:38 try: 06:17:38 connect, read = timeout 06:17:38 timeout = TimeoutSauce(connect=connect, read=read) 06:17:38 except ValueError: 06:17:38 raise ValueError( 06:17:38 f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 06:17:38 f"or a single float to set both timeouts to the same value." 
06:17:38 ) 06:17:38 elif isinstance(timeout, TimeoutSauce): 06:17:38 pass 06:17:38 else: 06:17:38 timeout = TimeoutSauce(connect=timeout, read=timeout) 06:17:38 06:17:38 try: 06:17:38 > resp = conn.urlopen( 06:17:38 method=request.method, 06:17:38 url=url, 06:17:38 body=request.body, 06:17:38 headers=request.headers, 06:17:38 redirect=False, 06:17:38 assert_same_host=False, 06:17:38 preload_content=False, 06:17:38 decode_content=False, 06:17:38 retries=self.max_retries, 06:17:38 timeout=timeout, 06:17:38 chunked=chunked, 06:17:38 ) 06:17:38 06:17:38 ../.tox/tests190/lib/python3.11/site-packages/requests/adapters.py:667: 06:17:38 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 06:17:38 ../.tox/tests190/lib/python3.11/site-packages/urllib3/connectionpool.py:841: in urlopen 06:17:38 retries = retries.increment( 06:17:38 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 06:17:38 06:17:38 self = Retry(total=0, connect=None, read=False, redirect=None, status=None) 06:17:38 method = 'PUT' 06:17:38 url = '/rests/data/open-terminal-meta-data:open-terminal-meta-data' 06:17:38 response = None 06:17:38 error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused') 06:17:38 _pool = 06:17:38 _stacktrace = 06:17:38 06:17:38 def increment( 06:17:38 self, 06:17:38 method: str | None = None, 06:17:38 url: str | None = None, 06:17:38 response: BaseHTTPResponse | None = None, 06:17:38 error: Exception | None = None, 06:17:38 _pool: ConnectionPool | None = None, 06:17:38 _stacktrace: TracebackType | None = None, 06:17:38 ) -> Self: 06:17:38 """Return a new Retry object with incremented retry counters. 06:17:38 06:17:38 :param response: A response object, or None, if the server did not 06:17:38 return a response. 06:17:38 :type response: :class:`~urllib3.response.BaseHTTPResponse` 06:17:38 :param Exception error: An error encountered during the request, or 06:17:38 None if the response was received successfully. 06:17:38 06:17:38 :return: A new ``Retry`` object. 06:17:38 """ 06:17:38 if self.total is False and error: 06:17:38 # Disabled, indicate to re-raise the error. 06:17:38 raise reraise(type(error), error, _stacktrace) 06:17:38 06:17:38 total = self.total 06:17:38 if total is not None: 06:17:38 total -= 1 06:17:38 06:17:38 connect = self.connect 06:17:38 read = self.read 06:17:38 redirect = self.redirect 06:17:38 status_count = self.status 06:17:38 other = self.other 06:17:38 cause = "unknown" 06:17:38 status = None 06:17:38 redirect_location = None 06:17:38 06:17:38 if error and self._is_connection_error(error): 06:17:38 # Connect retry? 06:17:38 if connect is False: 06:17:38 raise reraise(type(error), error, _stacktrace) 06:17:38 elif connect is not None: 06:17:38 connect -= 1 06:17:38 06:17:38 elif error and self._is_read_error(error): 06:17:38 # Read retry? 06:17:38 if read is False or method is None or not self._is_method_retryable(method): 06:17:38 raise reraise(type(error), error, _stacktrace) 06:17:38 elif read is not None: 06:17:38 read -= 1 06:17:38 06:17:38 elif error: 06:17:38 # Other retry? 06:17:38 if other is not None: 06:17:38 other -= 1 06:17:38 06:17:38 elif response and response.get_redirect_location(): 06:17:38 # Redirect retry? 
06:17:38 if redirect is not None: 06:17:38 redirect -= 1 06:17:38 cause = "too many redirects" 06:17:38 response_redirect_location = response.get_redirect_location() 06:17:38 if response_redirect_location: 06:17:38 redirect_location = response_redirect_location 06:17:38 status = response.status 06:17:38 06:17:38 else: 06:17:38 # Incrementing because of a server error like a 500 in 06:17:38 # status_forcelist and the given method is in the allowed_methods 06:17:38 cause = ResponseError.GENERIC_ERROR 06:17:38 if response and response.status: 06:17:38 if status_count is not None: 06:17:38 status_count -= 1 06:17:38 cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status) 06:17:38 status = response.status 06:17:38 06:17:38 history = self.history + ( 06:17:38 RequestHistory(method, url, error, status, redirect_location), 06:17:38 ) 06:17:38 06:17:38 new_retry = self.new( 06:17:38 total=total, 06:17:38 connect=connect, 06:17:38 read=read, 06:17:38 redirect=redirect, 06:17:38 status=status_count, 06:17:38 other=other, 06:17:38 history=history, 06:17:38 ) 06:17:38 06:17:38 if new_retry.is_exhausted(): 06:17:38 reason = error or ResponseError(cause) 06:17:38 > raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type] 06:17:38 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 06:17:38 E urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=8190): Max retries exceeded with url: /rests/data/open-terminal-meta-data:open-terminal-meta-data (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 06:17:38 06:17:38 ../.tox/tests190/lib/python3.11/site-packages/urllib3/util/retry.py:519: MaxRetryError 06:17:38 06:17:38 During handling of the above exception, another exception occurred: 06:17:38 06:17:38 self = 06:17:38 06:17:38 def test_01_meta_data_insertion(self): 06:17:38 > response = test_utils_oc.metadata_input() 06:17:38 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 06:17:38 06:17:38 transportpce_tests/oc/test01_portmapping.py:46: 06:17:38 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 06:17:38 transportpce_tests/common/test_utils_oc.py:163: in metadata_input 06:17:38 response = test_utils.put_request(url[test_utils.RESTCONF_VERSION], body) 06:17:38 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 06:17:38 transportpce_tests/common/test_utils.py:124: in put_request 06:17:38 return requests.request( 06:17:38 ../.tox/tests190/lib/python3.11/site-packages/requests/api.py:59: in request 06:17:38 return session.request(method=method, url=url, **kwargs) 06:17:38 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 06:17:38 ../.tox/tests190/lib/python3.11/site-packages/requests/sessions.py:589: in request 06:17:38 resp = self.send(prep, **send_kwargs) 06:17:38 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 06:17:38 ../.tox/tests190/lib/python3.11/site-packages/requests/sessions.py:703: in send 06:17:38 r = adapter.send(request, **kwargs) 06:17:38 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 06:17:38 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 06:17:38 06:17:38 self = 06:17:38 request = , stream = False 06:17:38 timeout = Timeout(connect=30, read=30, total=None), verify = True, cert = None 06:17:38 proxies = OrderedDict() 06:17:38 06:17:38 def send( 06:17:38 self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 06:17:38 ): 06:17:38 """Sends PreparedRequest object. Returns Response object. 
06:17:38 06:17:38 :param request: The :class:`PreparedRequest ` being sent. 06:17:38 :param stream: (optional) Whether to stream the request content. 06:17:38 :param timeout: (optional) How long to wait for the server to send 06:17:38 data before giving up, as a float, or a :ref:`(connect timeout, 06:17:38 read timeout) ` tuple. 06:17:38 :type timeout: float or tuple or urllib3 Timeout object 06:17:38 :param verify: (optional) Either a boolean, in which case it controls whether 06:17:38 we verify the server's TLS certificate, or a string, in which case it 06:17:38 must be a path to a CA bundle to use 06:17:38 :param cert: (optional) Any user-provided SSL certificate to be trusted. 06:17:38 :param proxies: (optional) The proxies dictionary to apply to the request. 06:17:38 :rtype: requests.Response 06:17:38 """ 06:17:38 06:17:38 try: 06:17:38 conn = self.get_connection_with_tls_context( 06:17:38 request, verify, proxies=proxies, cert=cert 06:17:38 ) 06:17:38 except LocationValueError as e: 06:17:38 raise InvalidURL(e, request=request) 06:17:38 06:17:38 self.cert_verify(conn, request.url, verify, cert) 06:17:38 url = self.request_url(request, proxies) 06:17:38 self.add_headers( 06:17:38 request, 06:17:38 stream=stream, 06:17:38 timeout=timeout, 06:17:38 verify=verify, 06:17:38 cert=cert, 06:17:38 proxies=proxies, 06:17:38 ) 06:17:38 06:17:38 chunked = not (request.body is None or "Content-Length" in request.headers) 06:17:38 06:17:38 if isinstance(timeout, tuple): 06:17:38 try: 06:17:38 connect, read = timeout 06:17:38 timeout = TimeoutSauce(connect=connect, read=read) 06:17:38 except ValueError: 06:17:38 raise ValueError( 06:17:38 f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 06:17:38 f"or a single float to set both timeouts to the same value." 06:17:38 ) 06:17:38 elif isinstance(timeout, TimeoutSauce): 06:17:38 pass 06:17:38 else: 06:17:38 timeout = TimeoutSauce(connect=timeout, read=timeout) 06:17:38 06:17:38 try: 06:17:38 resp = conn.urlopen( 06:17:38 method=request.method, 06:17:38 url=url, 06:17:38 body=request.body, 06:17:38 headers=request.headers, 06:17:38 redirect=False, 06:17:38 assert_same_host=False, 06:17:38 preload_content=False, 06:17:38 decode_content=False, 06:17:38 retries=self.max_retries, 06:17:38 timeout=timeout, 06:17:38 chunked=chunked, 06:17:38 ) 06:17:38 06:17:38 except (ProtocolError, OSError) as err: 06:17:38 raise ConnectionError(err, request=request) 06:17:38 06:17:38 except MaxRetryError as e: 06:17:38 if isinstance(e.reason, ConnectTimeoutError): 06:17:38 # TODO: Remove this in 3.0.0: see #2811 06:17:38 if not isinstance(e.reason, NewConnectionError): 06:17:38 raise ConnectTimeout(e, request=request) 06:17:38 06:17:38 if isinstance(e.reason, ResponseError): 06:17:38 raise RetryError(e, request=request) 06:17:38 06:17:38 if isinstance(e.reason, _ProxyError): 06:17:38 raise ProxyError(e, request=request) 06:17:38 06:17:38 if isinstance(e.reason, _SSLError): 06:17:38 # This branch is for urllib3 v1.22 and later. 
06:17:38 raise SSLError(e, request=request) 06:17:38 06:17:38 > raise ConnectionError(e, request=request) 06:17:38 E requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=8190): Max retries exceeded with url: /rests/data/open-terminal-meta-data:open-terminal-meta-data (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 06:17:38 06:17:38 ../.tox/tests190/lib/python3.11/site-packages/requests/adapters.py:700: ConnectionError 06:17:38 ---------------------------- Captured stdout setup ----------------------------- 06:17:38 starting OpenDaylight... 06:17:38 starting KARAF TransportPCE build... 06:17:38 Searching for patterns in karaf.log... Pattern found! OpenDaylight started ! 06:17:38 starting simulator oc-mpdr in OpenROADM device version oc... 06:17:38 Searching for patterns in sample-openconfig-mpdr.log... Pattern found! simulator for oc-mpdr started 06:17:38 ----------------------------- Captured stdout call ----------------------------- 06:17:38 execution of test_01_meta_data_insertion 06:17:38 _______ TransportpceOCPortMappingTesting.test_02_catlog_input_insertion ________ 06:17:38 06:17:38 self = 06:17:38 06:17:38 def _new_conn(self) -> socket.socket: 06:17:38 """Establish a socket connection and set nodelay settings on it. 06:17:38 06:17:38 :return: New socket connection. 06:17:38 """ 06:17:38 try: 06:17:38 > sock = connection.create_connection( 06:17:38 (self._dns_host, self.port), 06:17:38 self.timeout, 06:17:38 source_address=self.source_address, 06:17:38 socket_options=self.socket_options, 06:17:38 ) 06:17:38 06:17:38 ../.tox/tests190/lib/python3.11/site-packages/urllib3/connection.py:198: 06:17:38 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 06:17:38 ../.tox/tests190/lib/python3.11/site-packages/urllib3/util/connection.py:85: in create_connection 06:17:38 raise err 06:17:38 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 06:17:38 06:17:38 address = ('localhost', 8190), timeout = 30, source_address = None 06:17:38 socket_options = [(6, 1, 1)] 06:17:38 06:17:38 def create_connection( 06:17:38 address: tuple[str, int], 06:17:38 timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 06:17:38 source_address: tuple[str, int] | None = None, 06:17:38 socket_options: _TYPE_SOCKET_OPTIONS | None = None, 06:17:38 ) -> socket.socket: 06:17:38 """Connect to *address* and return the socket object. 06:17:38 06:17:38 Convenience function. Connect to *address* (a 2-tuple ``(host, 06:17:38 port)``) and return the socket object. Passing the optional 06:17:38 *timeout* parameter will set the timeout on the socket instance 06:17:38 before attempting to connect. If no *timeout* is supplied, the 06:17:38 global default timeout setting returned by :func:`socket.getdefaulttimeout` 06:17:38 is used. If *source_address* is set it must be a tuple of (host, port) 06:17:38 for the socket to bind as a source address before making the connection. 06:17:38 An host of '' or port 0 tells the OS to use the default. 06:17:38 """ 06:17:38 06:17:38 host, port = address 06:17:38 if host.startswith("["): 06:17:38 host = host.strip("[]") 06:17:38 err = None 06:17:38 06:17:38 # Using the value from allowed_gai_family() in the context of getaddrinfo lets 06:17:38 # us select whether to work with IPv4 DNS records, IPv6 records, or both. 06:17:38 # The original create_connection function always returns all records. 
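# A minimal sketch (not part of this log) reproducing the failure mode reported for
# these tests: with no listener on localhost:8190, requests wraps the urllib3
# MaxRetryError into requests.exceptions.ConnectionError. The URL matches the PUT
# in the traceback; the JSON payload is a placeholder, and the credentials mirror
# the Basic auth header shown above.
import requests

URL = "http://localhost:8190/rests/data/open-terminal-meta-data:open-terminal-meta-data"
try:
    requests.put(URL, json={}, auth=("admin", "admin"), timeout=(30, 30))
except requests.exceptions.ConnectionError as exc:
    print(f"controller not reachable: {exc}")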
06:17:38 family = allowed_gai_family() 06:17:38 06:17:38 try: 06:17:38 host.encode("idna") 06:17:38 except UnicodeError: 06:17:38 raise LocationParseError(f"'{host}', label empty or too long") from None 06:17:38 06:17:38 for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM): 06:17:38 af, socktype, proto, canonname, sa = res 06:17:38 sock = None 06:17:38 try: 06:17:38 sock = socket.socket(af, socktype, proto) 06:17:38 06:17:38 # If provided, set socket level options before connecting. 06:17:38 _set_socket_options(sock, socket_options) 06:17:38 06:17:38 if timeout is not _DEFAULT_TIMEOUT: 06:17:38 sock.settimeout(timeout) 06:17:38 if source_address: 06:17:38 sock.bind(source_address) 06:17:38 > sock.connect(sa) 06:17:38 E ConnectionRefusedError: [Errno 111] Connection refused 06:17:38 06:17:38 ../.tox/tests190/lib/python3.11/site-packages/urllib3/util/connection.py:73: ConnectionRefusedError 06:17:38 06:17:38 The above exception was the direct cause of the following exception: 06:17:38 06:17:38 self = 06:17:38 method = 'POST' 06:17:38 url = '/rests/operations/org-openroadm-service:add-specific-operational-modes-to-catalog' 06:17:38 body = '{"input": {"sdnc-request-header": {"request-id": "load-specific-OM-Catalog", "rpc-action": "fill-catalog-with-specifi...and-unit": "colorless-drop-adjacent-channel-crosstalk-GHz", "up-to-boundary": "4.10", "penalty-value": "0.500"}]}]}}}}' 06:17:38 headers = {'User-Agent': 'python-requests/2.32.4', 'Accept-Encoding': 'gzip, deflate', 'Accept': 'application/json', 'Connection': 'keep-alive', 'Content-Type': 'application/json', 'Content-Length': '2308', 'Authorization': 'Basic YWRtaW46YWRtaW4='} 06:17:38 retries = Retry(total=0, connect=None, read=False, redirect=None, status=None) 06:17:38 redirect = False, assert_same_host = False 06:17:38 timeout = Timeout(connect=30, read=30, total=None), pool_timeout = None 06:17:38 release_conn = False, chunked = False, body_pos = None, preload_content = False 06:17:38 decode_content = False, response_kw = {} 06:17:38 parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/rests/operations/org-openroadm-service:add-specific-operational-modes-to-catalog', query=None, fragment=None) 06:17:38 destination_scheme = None, conn = None, release_this_conn = True 06:17:38 http_tunnel_required = False, err = None, clean_exit = False 06:17:38 06:17:38 def urlopen( # type: ignore[override] 06:17:38 self, 06:17:38 method: str, 06:17:38 url: str, 06:17:38 body: _TYPE_BODY | None = None, 06:17:38 headers: typing.Mapping[str, str] | None = None, 06:17:38 retries: Retry | bool | int | None = None, 06:17:38 redirect: bool = True, 06:17:38 assert_same_host: bool = True, 06:17:38 timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 06:17:38 pool_timeout: int | None = None, 06:17:38 release_conn: bool | None = None, 06:17:38 chunked: bool = False, 06:17:38 body_pos: _TYPE_BODY_POSITION | None = None, 06:17:38 preload_content: bool = True, 06:17:38 decode_content: bool = True, 06:17:38 **response_kw: typing.Any, 06:17:38 ) -> BaseHTTPResponse: 06:17:38 """ 06:17:38 Get a connection from the pool and perform an HTTP request. This is the 06:17:38 lowest level call for making a request, so you'll need to specify all 06:17:38 the raw details. 06:17:38 06:17:38 .. note:: 06:17:38 06:17:38 More commonly, it's appropriate to use a convenience method 06:17:38 such as :meth:`request`. 06:17:38 06:17:38 .. 
note:: 06:17:38 06:17:38 `release_conn` will only behave as expected if 06:17:38 `preload_content=False` because we want to make 06:17:38 `preload_content=False` the default behaviour someday soon without 06:17:38 breaking backwards compatibility. 06:17:38 06:17:38 :param method: 06:17:38 HTTP request method (such as GET, POST, PUT, etc.) 06:17:38 06:17:38 :param url: 06:17:38 The URL to perform the request on. 06:17:38 06:17:38 :param body: 06:17:38 Data to send in the request body, either :class:`str`, :class:`bytes`, 06:17:38 an iterable of :class:`str`/:class:`bytes`, or a file-like object. 06:17:38 06:17:38 :param headers: 06:17:38 Dictionary of custom headers to send, such as User-Agent, 06:17:38 If-None-Match, etc. If None, pool headers are used. If provided, 06:17:38 these headers completely replace any pool-specific headers. 06:17:38 06:17:38 :param retries: 06:17:38 Configure the number of retries to allow before raising a 06:17:38 :class:`~urllib3.exceptions.MaxRetryError` exception. 06:17:38 06:17:38 If ``None`` (default) will retry 3 times, see ``Retry.DEFAULT``. Pass a 06:17:38 :class:`~urllib3.util.retry.Retry` object for fine-grained control 06:17:38 over different types of retries. 06:17:38 Pass an integer number to retry connection errors that many times, 06:17:38 but no other types of errors. Pass zero to never retry. 06:17:38 06:17:38 If ``False``, then retries are disabled and any exception is raised 06:17:38 immediately. Also, instead of raising a MaxRetryError on redirects, 06:17:38 the redirect response will be returned. 06:17:38 06:17:38 :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. 06:17:38 06:17:38 :param redirect: 06:17:38 If True, automatically handle redirects (status codes 301, 302, 06:17:38 303, 307, 308). Each redirect counts as a retry. Disabling retries 06:17:38 will disable redirect, too. 06:17:38 06:17:38 :param assert_same_host: 06:17:38 If ``True``, will make sure that the host of the pool requests is 06:17:38 consistent else will raise HostChangedError. When ``False``, you can 06:17:38 use the pool on an HTTP proxy and request foreign hosts. 06:17:38 06:17:38 :param timeout: 06:17:38 If specified, overrides the default timeout for this one 06:17:38 request. It may be a float (in seconds) or an instance of 06:17:38 :class:`urllib3.util.Timeout`. 06:17:38 06:17:38 :param pool_timeout: 06:17:38 If set and the pool is set to block=True, then this method will 06:17:38 block for ``pool_timeout`` seconds and raise EmptyPoolError if no 06:17:38 connection is available within the time period. 06:17:38 06:17:38 :param bool preload_content: 06:17:38 If True, the response's body will be preloaded into memory. 06:17:38 06:17:38 :param bool decode_content: 06:17:38 If True, will attempt to decode the body based on the 06:17:38 'content-encoding' header. 06:17:38 06:17:38 :param release_conn: 06:17:38 If False, then the urlopen call will not release the connection 06:17:38 back into the pool once a response is received (but will release if 06:17:38 you read the entire contents of the response such as when 06:17:38 `preload_content=True`). This is useful if you're not preloading 06:17:38 the response's content immediately. You will need to call 06:17:38 ``r.release_conn()`` on the response ``r`` to return the connection 06:17:38 back into the pool. If None, it takes the value of ``preload_content`` 06:17:38 which defaults to ``True``. 
06:17:38 06:17:38 :param bool chunked: 06:17:38 If True, urllib3 will send the body using chunked transfer 06:17:38 encoding. Otherwise, urllib3 will send the body using the standard 06:17:38 content-length form. Defaults to False. 06:17:38 06:17:38 :param int body_pos: 06:17:38 Position to seek to in file-like body in the event of a retry or 06:17:38 redirect. Typically this won't need to be set because urllib3 will 06:17:38 auto-populate the value when needed. 06:17:38 """ 06:17:38 parsed_url = parse_url(url) 06:17:38 destination_scheme = parsed_url.scheme 06:17:38 06:17:38 if headers is None: 06:17:38 headers = self.headers 06:17:38 06:17:38 if not isinstance(retries, Retry): 06:17:38 retries = Retry.from_int(retries, redirect=redirect, default=self.retries) 06:17:38 06:17:38 if release_conn is None: 06:17:38 release_conn = preload_content 06:17:38 06:17:38 # Check host 06:17:38 if assert_same_host and not self.is_same_host(url): 06:17:38 raise HostChangedError(self, url, retries) 06:17:38 06:17:38 # Ensure that the URL we're connecting to is properly encoded 06:17:38 if url.startswith("/"): 06:17:38 url = to_str(_encode_target(url)) 06:17:38 else: 06:17:38 url = to_str(parsed_url.url) 06:17:38 06:17:38 conn = None 06:17:38 06:17:38 # Track whether `conn` needs to be released before 06:17:38 # returning/raising/recursing. Update this variable if necessary, and 06:17:38 # leave `release_conn` constant throughout the function. That way, if 06:17:38 # the function recurses, the original value of `release_conn` will be 06:17:38 # passed down into the recursive call, and its value will be respected. 06:17:38 # 06:17:38 # See issue #651 [1] for details. 06:17:38 # 06:17:38 # [1] 06:17:38 release_this_conn = release_conn 06:17:38 06:17:38 http_tunnel_required = connection_requires_http_tunnel( 06:17:38 self.proxy, self.proxy_config, destination_scheme 06:17:38 ) 06:17:38 06:17:38 # Merge the proxy headers. Only done when not using HTTP CONNECT. We 06:17:38 # have to copy the headers dict so we can safely change it without those 06:17:38 # changes being reflected in anyone else's copy. 06:17:38 if not http_tunnel_required: 06:17:38 headers = headers.copy() # type: ignore[attr-defined] 06:17:38 headers.update(self.proxy_headers) # type: ignore[union-attr] 06:17:38 06:17:38 # Must keep the exception bound to a separate variable or else Python 3 06:17:38 # complains about UnboundLocalError. 06:17:38 err = None 06:17:38 06:17:38 # Keep track of whether we cleanly exited the except block. This 06:17:38 # ensures we do proper cleanup in finally. 06:17:38 clean_exit = False 06:17:38 06:17:38 # Rewind body position, if needed. Record current position 06:17:38 # for future rewinds in the event of a redirect/retry. 06:17:38 body_pos = set_file_position(body, body_pos) 06:17:38 06:17:38 try: 06:17:38 # Request a connection from the queue. 06:17:38 timeout_obj = self._get_timeout(timeout) 06:17:38 conn = self._get_conn(timeout=pool_timeout) 06:17:38 06:17:38 conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] 06:17:38 06:17:38 # Is this a closed/new connection that requires CONNECT tunnelling? 
06:17:38 if self.proxy is not None and http_tunnel_required and conn.is_closed: 06:17:38 try: 06:17:38 self._prepare_proxy(conn) 06:17:38 except (BaseSSLError, OSError, SocketTimeout) as e: 06:17:38 self._raise_timeout( 06:17:38 err=e, url=self.proxy.url, timeout_value=conn.timeout 06:17:38 ) 06:17:38 raise 06:17:38 06:17:38 # If we're going to release the connection in ``finally:``, then 06:17:38 # the response doesn't need to know about the connection. Otherwise 06:17:38 # it will also try to release it and we'll have a double-release 06:17:38 # mess. 06:17:38 response_conn = conn if not release_conn else None 06:17:38 06:17:38 # Make the request on the HTTPConnection object 06:17:38 > response = self._make_request( 06:17:38 conn, 06:17:38 method, 06:17:38 url, 06:17:38 timeout=timeout_obj, 06:17:38 body=body, 06:17:38 headers=headers, 06:17:38 chunked=chunked, 06:17:38 retries=retries, 06:17:38 response_conn=response_conn, 06:17:38 preload_content=preload_content, 06:17:38 decode_content=decode_content, 06:17:38 **response_kw, 06:17:38 ) 06:17:38 06:17:38 ../.tox/tests190/lib/python3.11/site-packages/urllib3/connectionpool.py:787: 06:17:38 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 06:17:38 ../.tox/tests190/lib/python3.11/site-packages/urllib3/connectionpool.py:493: in _make_request 06:17:38 conn.request( 06:17:38 ../.tox/tests190/lib/python3.11/site-packages/urllib3/connection.py:494: in request 06:17:38 self.endheaders() 06:17:38 /opt/pyenv/versions/3.11.10/lib/python3.11/http/client.py:1298: in endheaders 06:17:38 self._send_output(message_body, encode_chunked=encode_chunked) 06:17:38 /opt/pyenv/versions/3.11.10/lib/python3.11/http/client.py:1058: in _send_output 06:17:38 self.send(msg) 06:17:38 /opt/pyenv/versions/3.11.10/lib/python3.11/http/client.py:996: in send 06:17:38 self.connect() 06:17:38 ../.tox/tests190/lib/python3.11/site-packages/urllib3/connection.py:325: in connect 06:17:38 self.sock = self._new_conn() 06:17:38 ^^^^^^^^^^^^^^^^ 06:17:38 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 06:17:38 06:17:38 self = 06:17:38 06:17:38 def _new_conn(self) -> socket.socket: 06:17:38 """Establish a socket connection and set nodelay settings on it. 06:17:38 06:17:38 :return: New socket connection. 06:17:38 """ 06:17:38 try: 06:17:38 sock = connection.create_connection( 06:17:38 (self._dns_host, self.port), 06:17:38 self.timeout, 06:17:38 source_address=self.source_address, 06:17:38 socket_options=self.socket_options, 06:17:38 ) 06:17:38 except socket.gaierror as e: 06:17:38 raise NameResolutionError(self.host, self, e) from e 06:17:38 except SocketTimeout as e: 06:17:38 raise ConnectTimeoutError( 06:17:38 self, 06:17:38 f"Connection to {self.host} timed out. 
(connect timeout={self.timeout})", 06:17:38 ) from e 06:17:38 06:17:38 except OSError as e: 06:17:38 > raise NewConnectionError( 06:17:38 self, f"Failed to establish a new connection: {e}" 06:17:38 ) from e 06:17:38 E urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused 06:17:38 06:17:38 ../.tox/tests190/lib/python3.11/site-packages/urllib3/connection.py:213: NewConnectionError 06:17:38 06:17:38 The above exception was the direct cause of the following exception: 06:17:38 06:17:38 self = 06:17:38 request = , stream = False 06:17:38 timeout = Timeout(connect=30, read=30, total=None), verify = True, cert = None 06:17:38 proxies = OrderedDict() 06:17:38 06:17:38 def send( 06:17:38 self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 06:17:38 ): 06:17:38 """Sends PreparedRequest object. Returns Response object. 06:17:38 06:17:38 :param request: The :class:`PreparedRequest ` being sent. 06:17:38 :param stream: (optional) Whether to stream the request content. 06:17:38 :param timeout: (optional) How long to wait for the server to send 06:17:38 data before giving up, as a float, or a :ref:`(connect timeout, 06:17:38 read timeout) ` tuple. 06:17:38 :type timeout: float or tuple or urllib3 Timeout object 06:17:38 :param verify: (optional) Either a boolean, in which case it controls whether 06:17:38 we verify the server's TLS certificate, or a string, in which case it 06:17:38 must be a path to a CA bundle to use 06:17:38 :param cert: (optional) Any user-provided SSL certificate to be trusted. 06:17:38 :param proxies: (optional) The proxies dictionary to apply to the request. 06:17:38 :rtype: requests.Response 06:17:38 """ 06:17:38 06:17:38 try: 06:17:38 conn = self.get_connection_with_tls_context( 06:17:38 request, verify, proxies=proxies, cert=cert 06:17:38 ) 06:17:38 except LocationValueError as e: 06:17:38 raise InvalidURL(e, request=request) 06:17:38 06:17:38 self.cert_verify(conn, request.url, verify, cert) 06:17:38 url = self.request_url(request, proxies) 06:17:38 self.add_headers( 06:17:38 request, 06:17:38 stream=stream, 06:17:38 timeout=timeout, 06:17:38 verify=verify, 06:17:38 cert=cert, 06:17:38 proxies=proxies, 06:17:38 ) 06:17:38 06:17:38 chunked = not (request.body is None or "Content-Length" in request.headers) 06:17:38 06:17:38 if isinstance(timeout, tuple): 06:17:38 try: 06:17:38 connect, read = timeout 06:17:38 timeout = TimeoutSauce(connect=connect, read=read) 06:17:38 except ValueError: 06:17:38 raise ValueError( 06:17:38 f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 06:17:38 f"or a single float to set both timeouts to the same value." 
06:17:38 ) 06:17:38 elif isinstance(timeout, TimeoutSauce): 06:17:38 pass 06:17:38 else: 06:17:38 timeout = TimeoutSauce(connect=timeout, read=timeout) 06:17:38 06:17:38 try: 06:17:38 > resp = conn.urlopen( 06:17:38 method=request.method, 06:17:38 url=url, 06:17:38 body=request.body, 06:17:38 headers=request.headers, 06:17:38 redirect=False, 06:17:38 assert_same_host=False, 06:17:38 preload_content=False, 06:17:38 decode_content=False, 06:17:38 retries=self.max_retries, 06:17:38 timeout=timeout, 06:17:38 chunked=chunked, 06:17:38 ) 06:17:38 06:17:38 ../.tox/tests190/lib/python3.11/site-packages/requests/adapters.py:667: 06:17:38 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 06:17:38 ../.tox/tests190/lib/python3.11/site-packages/urllib3/connectionpool.py:841: in urlopen 06:17:38 retries = retries.increment( 06:17:38 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 06:17:38 06:17:38 self = Retry(total=0, connect=None, read=False, redirect=None, status=None) 06:17:38 method = 'POST' 06:17:38 url = '/rests/operations/org-openroadm-service:add-specific-operational-modes-to-catalog' 06:17:38 response = None 06:17:38 error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused') 06:17:38 _pool = 06:17:38 _stacktrace = 06:17:38 06:17:38 def increment( 06:17:38 self, 06:17:38 method: str | None = None, 06:17:38 url: str | None = None, 06:17:38 response: BaseHTTPResponse | None = None, 06:17:38 error: Exception | None = None, 06:17:38 _pool: ConnectionPool | None = None, 06:17:38 _stacktrace: TracebackType | None = None, 06:17:38 ) -> Self: 06:17:38 """Return a new Retry object with incremented retry counters. 06:17:38 06:17:38 :param response: A response object, or None, if the server did not 06:17:38 return a response. 06:17:38 :type response: :class:`~urllib3.response.BaseHTTPResponse` 06:17:38 :param Exception error: An error encountered during the request, or 06:17:38 None if the response was received successfully. 06:17:38 06:17:38 :return: A new ``Retry`` object. 06:17:38 """ 06:17:38 if self.total is False and error: 06:17:38 # Disabled, indicate to re-raise the error. 06:17:38 raise reraise(type(error), error, _stacktrace) 06:17:38 06:17:38 total = self.total 06:17:38 if total is not None: 06:17:38 total -= 1 06:17:38 06:17:38 connect = self.connect 06:17:38 read = self.read 06:17:38 redirect = self.redirect 06:17:38 status_count = self.status 06:17:38 other = self.other 06:17:38 cause = "unknown" 06:17:38 status = None 06:17:38 redirect_location = None 06:17:38 06:17:38 if error and self._is_connection_error(error): 06:17:38 # Connect retry? 06:17:38 if connect is False: 06:17:38 raise reraise(type(error), error, _stacktrace) 06:17:38 elif connect is not None: 06:17:38 connect -= 1 06:17:38 06:17:38 elif error and self._is_read_error(error): 06:17:38 # Read retry? 06:17:38 if read is False or method is None or not self._is_method_retryable(method): 06:17:38 raise reraise(type(error), error, _stacktrace) 06:17:38 elif read is not None: 06:17:38 read -= 1 06:17:38 06:17:38 elif error: 06:17:38 # Other retry? 06:17:38 if other is not None: 06:17:38 other -= 1 06:17:38 06:17:38 elif response and response.get_redirect_location(): 06:17:38 # Redirect retry? 
06:17:38 if redirect is not None: 06:17:38 redirect -= 1 06:17:38 cause = "too many redirects" 06:17:38 response_redirect_location = response.get_redirect_location() 06:17:38 if response_redirect_location: 06:17:38 redirect_location = response_redirect_location 06:17:38 status = response.status 06:17:38 06:17:38 else: 06:17:38 # Incrementing because of a server error like a 500 in 06:17:38 # status_forcelist and the given method is in the allowed_methods 06:17:38 cause = ResponseError.GENERIC_ERROR 06:17:38 if response and response.status: 06:17:38 if status_count is not None: 06:17:38 status_count -= 1 06:17:38 cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status) 06:17:38 status = response.status 06:17:38 06:17:38 history = self.history + ( 06:17:38 RequestHistory(method, url, error, status, redirect_location), 06:17:38 ) 06:17:38 06:17:38 new_retry = self.new( 06:17:38 total=total, 06:17:38 connect=connect, 06:17:38 read=read, 06:17:38 redirect=redirect, 06:17:38 status=status_count, 06:17:38 other=other, 06:17:38 history=history, 06:17:38 ) 06:17:38 06:17:38 if new_retry.is_exhausted(): 06:17:38 reason = error or ResponseError(cause) 06:17:38 > raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type] 06:17:38 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 06:17:38 E urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=8190): Max retries exceeded with url: /rests/operations/org-openroadm-service:add-specific-operational-modes-to-catalog (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 06:17:38 06:17:38 ../.tox/tests190/lib/python3.11/site-packages/urllib3/util/retry.py:519: MaxRetryError 06:17:38 06:17:38 During handling of the above exception, another exception occurred: 06:17:38 06:17:38 self = 06:17:38 06:17:38 def test_02_catlog_input_insertion(self): 06:17:38 > response = test_utils_oc.catlog_input() 06:17:38 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 06:17:38 06:17:38 transportpce_tests/oc/test01_portmapping.py:51: 06:17:38 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 06:17:38 transportpce_tests/common/test_utils_oc.py:280: in catlog_input 06:17:38 response = test_utils.post_request(url[test_utils.RESTCONF_VERSION], body) 06:17:38 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 06:17:38 transportpce_tests/common/test_utils.py:142: in post_request 06:17:38 return requests.request( 06:17:38 ../.tox/tests190/lib/python3.11/site-packages/requests/api.py:59: in request 06:17:38 return session.request(method=method, url=url, **kwargs) 06:17:38 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 06:17:38 ../.tox/tests190/lib/python3.11/site-packages/requests/sessions.py:589: in request 06:17:38 resp = self.send(prep, **send_kwargs) 06:17:38 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 06:17:38 ../.tox/tests190/lib/python3.11/site-packages/requests/sessions.py:703: in send 06:17:38 r = adapter.send(request, **kwargs) 06:17:38 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 06:17:38 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 06:17:38 06:17:38 self = 06:17:38 request = , stream = False 06:17:38 timeout = Timeout(connect=30, read=30, total=None), verify = True, cert = None 06:17:38 proxies = OrderedDict() 06:17:38 06:17:38 def send( 06:17:38 self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 06:17:38 ): 06:17:38 """Sends PreparedRequest object. Returns Response object. 
06:17:38 06:17:38 :param request: The :class:`PreparedRequest ` being sent. 06:17:38 :param stream: (optional) Whether to stream the request content. 06:17:38 :param timeout: (optional) How long to wait for the server to send 06:17:38 data before giving up, as a float, or a :ref:`(connect timeout, 06:17:38 read timeout) ` tuple. 06:17:38 :type timeout: float or tuple or urllib3 Timeout object 06:17:38 :param verify: (optional) Either a boolean, in which case it controls whether 06:17:38 we verify the server's TLS certificate, or a string, in which case it 06:17:38 must be a path to a CA bundle to use 06:17:38 :param cert: (optional) Any user-provided SSL certificate to be trusted. 06:17:38 :param proxies: (optional) The proxies dictionary to apply to the request. 06:17:38 :rtype: requests.Response 06:17:38 """ 06:17:38 06:17:38 try: 06:17:38 conn = self.get_connection_with_tls_context( 06:17:38 request, verify, proxies=proxies, cert=cert 06:17:38 ) 06:17:38 except LocationValueError as e: 06:17:38 raise InvalidURL(e, request=request) 06:17:38 06:17:38 self.cert_verify(conn, request.url, verify, cert) 06:17:38 url = self.request_url(request, proxies) 06:17:38 self.add_headers( 06:17:38 request, 06:17:38 stream=stream, 06:17:38 timeout=timeout, 06:17:38 verify=verify, 06:17:38 cert=cert, 06:17:38 proxies=proxies, 06:17:38 ) 06:17:38 06:17:38 chunked = not (request.body is None or "Content-Length" in request.headers) 06:17:38 06:17:38 if isinstance(timeout, tuple): 06:17:38 try: 06:17:38 connect, read = timeout 06:17:38 timeout = TimeoutSauce(connect=connect, read=read) 06:17:38 except ValueError: 06:17:38 raise ValueError( 06:17:38 f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 06:17:38 f"or a single float to set both timeouts to the same value." 06:17:38 ) 06:17:38 elif isinstance(timeout, TimeoutSauce): 06:17:38 pass 06:17:38 else: 06:17:38 timeout = TimeoutSauce(connect=timeout, read=timeout) 06:17:38 06:17:38 try: 06:17:38 resp = conn.urlopen( 06:17:38 method=request.method, 06:17:38 url=url, 06:17:38 body=request.body, 06:17:38 headers=request.headers, 06:17:38 redirect=False, 06:17:38 assert_same_host=False, 06:17:38 preload_content=False, 06:17:38 decode_content=False, 06:17:38 retries=self.max_retries, 06:17:38 timeout=timeout, 06:17:38 chunked=chunked, 06:17:38 ) 06:17:38 06:17:38 except (ProtocolError, OSError) as err: 06:17:38 raise ConnectionError(err, request=request) 06:17:38 06:17:38 except MaxRetryError as e: 06:17:38 if isinstance(e.reason, ConnectTimeoutError): 06:17:38 # TODO: Remove this in 3.0.0: see #2811 06:17:38 if not isinstance(e.reason, NewConnectionError): 06:17:38 raise ConnectTimeout(e, request=request) 06:17:38 06:17:38 if isinstance(e.reason, ResponseError): 06:17:38 raise RetryError(e, request=request) 06:17:38 06:17:38 if isinstance(e.reason, _ProxyError): 06:17:38 raise ProxyError(e, request=request) 06:17:38 06:17:38 if isinstance(e.reason, _SSLError): 06:17:38 # This branch is for urllib3 v1.22 and later. 
06:17:38 raise SSLError(e, request=request) 06:17:38 06:17:38 > raise ConnectionError(e, request=request) 06:17:38 E requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=8190): Max retries exceeded with url: /rests/operations/org-openroadm-service:add-specific-operational-modes-to-catalog (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 06:17:38 06:17:38 ../.tox/tests190/lib/python3.11/site-packages/requests/adapters.py:700: ConnectionError 06:17:38 ----------------------------- Captured stdout call ----------------------------- 06:17:38 execution of test_02_catlog_input_insertion 06:17:38 _______ TransportpceOCPortMappingTesting.test_03_xpdr_device_connection ________ 06:17:38 06:17:38 self = 06:17:38 06:17:38 def _new_conn(self) -> socket.socket: 06:17:38 """Establish a socket connection and set nodelay settings on it. 06:17:38 06:17:38 :return: New socket connection. 06:17:38 """ 06:17:38 try: 06:17:38 > sock = connection.create_connection( 06:17:38 (self._dns_host, self.port), 06:17:38 self.timeout, 06:17:38 source_address=self.source_address, 06:17:38 socket_options=self.socket_options, 06:17:38 ) 06:17:38 06:17:38 ../.tox/tests190/lib/python3.11/site-packages/urllib3/connection.py:198: 06:17:38 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 06:17:38 ../.tox/tests190/lib/python3.11/site-packages/urllib3/util/connection.py:85: in create_connection 06:17:38 raise err 06:17:38 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 06:17:38 06:17:38 address = ('localhost', 8190), timeout = 30, source_address = None 06:17:38 socket_options = [(6, 1, 1)] 06:17:38 06:17:38 def create_connection( 06:17:38 address: tuple[str, int], 06:17:38 timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 06:17:38 source_address: tuple[str, int] | None = None, 06:17:38 socket_options: _TYPE_SOCKET_OPTIONS | None = None, 06:17:38 ) -> socket.socket: 06:17:38 """Connect to *address* and return the socket object. 06:17:38 06:17:38 Convenience function. Connect to *address* (a 2-tuple ``(host, 06:17:38 port)``) and return the socket object. Passing the optional 06:17:38 *timeout* parameter will set the timeout on the socket instance 06:17:38 before attempting to connect. If no *timeout* is supplied, the 06:17:38 global default timeout setting returned by :func:`socket.getdefaulttimeout` 06:17:38 is used. If *source_address* is set it must be a tuple of (host, port) 06:17:38 for the socket to bind as a source address before making the connection. 06:17:38 An host of '' or port 0 tells the OS to use the default. 06:17:38 """ 06:17:38 06:17:38 host, port = address 06:17:38 if host.startswith("["): 06:17:38 host = host.strip("[]") 06:17:38 err = None 06:17:38 06:17:38 # Using the value from allowed_gai_family() in the context of getaddrinfo lets 06:17:38 # us select whether to work with IPv4 DNS records, IPv6 records, or both. 06:17:38 # The original create_connection function always returns all records. 
06:17:38 family = allowed_gai_family() 06:17:38 06:17:38 try: 06:17:38 host.encode("idna") 06:17:38 except UnicodeError: 06:17:38 raise LocationParseError(f"'{host}', label empty or too long") from None 06:17:38 06:17:38 for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM): 06:17:38 af, socktype, proto, canonname, sa = res 06:17:38 sock = None 06:17:38 try: 06:17:38 sock = socket.socket(af, socktype, proto) 06:17:38 06:17:38 # If provided, set socket level options before connecting. 06:17:38 _set_socket_options(sock, socket_options) 06:17:38 06:17:38 if timeout is not _DEFAULT_TIMEOUT: 06:17:38 sock.settimeout(timeout) 06:17:38 if source_address: 06:17:38 sock.bind(source_address) 06:17:38 > sock.connect(sa) 06:17:38 E ConnectionRefusedError: [Errno 111] Connection refused 06:17:38 06:17:38 ../.tox/tests190/lib/python3.11/site-packages/urllib3/util/connection.py:73: ConnectionRefusedError 06:17:38 06:17:38 The above exception was the direct cause of the following exception: 06:17:38 06:17:38 self = 06:17:38 method = 'PUT' 06:17:38 url = '/rests/data/network-topology:network-topology/topology=topology-netconf/node=XPDR-OC' 06:17:38 body = '{"node": [{"node-id": "XPDR-OC", "netconf-node-topology:netconf-node": {"netconf-node-topology:host": "127.0.0.1", "n...ff-millis": 1800000, "netconf-node-topology:backoff-multiplier": 1.5, "netconf-node-topology:keepalive-delay": 120}}]}' 06:17:38 headers = {'User-Agent': 'python-requests/2.32.4', 'Accept-Encoding': 'gzip, deflate', 'Accept': 'application/json', 'Connection': 'keep-alive', 'Content-Type': 'application/json', 'Content-Length': '709', 'Authorization': 'Basic YWRtaW46YWRtaW4='} 06:17:38 retries = Retry(total=0, connect=None, read=False, redirect=None, status=None) 06:17:38 redirect = False, assert_same_host = False 06:17:38 timeout = Timeout(connect=30, read=30, total=None), pool_timeout = None 06:17:38 release_conn = False, chunked = False, body_pos = None, preload_content = False 06:17:38 decode_content = False, response_kw = {} 06:17:38 parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/rests/data/network-topology:network-topology/topology=topology-netconf/node=XPDR-OC', query=None, fragment=None) 06:17:38 destination_scheme = None, conn = None, release_this_conn = True 06:17:38 http_tunnel_required = False, err = None, clean_exit = False 06:17:38 06:17:38 def urlopen( # type: ignore[override] 06:17:38 self, 06:17:38 method: str, 06:17:38 url: str, 06:17:38 body: _TYPE_BODY | None = None, 06:17:38 headers: typing.Mapping[str, str] | None = None, 06:17:38 retries: Retry | bool | int | None = None, 06:17:38 redirect: bool = True, 06:17:38 assert_same_host: bool = True, 06:17:38 timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 06:17:38 pool_timeout: int | None = None, 06:17:38 release_conn: bool | None = None, 06:17:38 chunked: bool = False, 06:17:38 body_pos: _TYPE_BODY_POSITION | None = None, 06:17:38 preload_content: bool = True, 06:17:38 decode_content: bool = True, 06:17:38 **response_kw: typing.Any, 06:17:38 ) -> BaseHTTPResponse: 06:17:38 """ 06:17:38 Get a connection from the pool and perform an HTTP request. This is the 06:17:38 lowest level call for making a request, so you'll need to specify all 06:17:38 the raw details. 06:17:38 06:17:38 .. note:: 06:17:38 06:17:38 More commonly, it's appropriate to use a convenience method 06:17:38 such as :meth:`request`. 06:17:38 06:17:38 .. 
note:: 06:17:38 06:17:38 `release_conn` will only behave as expected if 06:17:38 `preload_content=False` because we want to make 06:17:38 `preload_content=False` the default behaviour someday soon without 06:17:38 breaking backwards compatibility. 06:17:38 06:17:38 :param method: 06:17:38 HTTP request method (such as GET, POST, PUT, etc.) 06:17:38 06:17:38 :param url: 06:17:38 The URL to perform the request on. 06:17:38 06:17:38 :param body: 06:17:38 Data to send in the request body, either :class:`str`, :class:`bytes`, 06:17:38 an iterable of :class:`str`/:class:`bytes`, or a file-like object. 06:17:38 06:17:38 :param headers: 06:17:38 Dictionary of custom headers to send, such as User-Agent, 06:17:38 If-None-Match, etc. If None, pool headers are used. If provided, 06:17:38 these headers completely replace any pool-specific headers. 06:17:38 06:17:38 :param retries: 06:17:38 Configure the number of retries to allow before raising a 06:17:38 :class:`~urllib3.exceptions.MaxRetryError` exception. 06:17:38 06:17:38 If ``None`` (default) will retry 3 times, see ``Retry.DEFAULT``. Pass a 06:17:38 :class:`~urllib3.util.retry.Retry` object for fine-grained control 06:17:38 over different types of retries. 06:17:38 Pass an integer number to retry connection errors that many times, 06:17:38 but no other types of errors. Pass zero to never retry. 06:17:38 06:17:38 If ``False``, then retries are disabled and any exception is raised 06:17:38 immediately. Also, instead of raising a MaxRetryError on redirects, 06:17:38 the redirect response will be returned. 06:17:38 06:17:38 :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. 06:17:38 06:17:38 :param redirect: 06:17:38 If True, automatically handle redirects (status codes 301, 302, 06:17:38 303, 307, 308). Each redirect counts as a retry. Disabling retries 06:17:38 will disable redirect, too. 06:17:38 06:17:38 :param assert_same_host: 06:17:38 If ``True``, will make sure that the host of the pool requests is 06:17:38 consistent else will raise HostChangedError. When ``False``, you can 06:17:38 use the pool on an HTTP proxy and request foreign hosts. 06:17:38 06:17:38 :param timeout: 06:17:38 If specified, overrides the default timeout for this one 06:17:38 request. It may be a float (in seconds) or an instance of 06:17:38 :class:`urllib3.util.Timeout`. 06:17:38 06:17:38 :param pool_timeout: 06:17:38 If set and the pool is set to block=True, then this method will 06:17:38 block for ``pool_timeout`` seconds and raise EmptyPoolError if no 06:17:38 connection is available within the time period. 06:17:38 06:17:38 :param bool preload_content: 06:17:38 If True, the response's body will be preloaded into memory. 06:17:38 06:17:38 :param bool decode_content: 06:17:38 If True, will attempt to decode the body based on the 06:17:38 'content-encoding' header. 06:17:38 06:17:38 :param release_conn: 06:17:38 If False, then the urlopen call will not release the connection 06:17:38 back into the pool once a response is received (but will release if 06:17:38 you read the entire contents of the response such as when 06:17:38 `preload_content=True`). This is useful if you're not preloading 06:17:38 the response's content immediately. You will need to call 06:17:38 ``r.release_conn()`` on the response ``r`` to return the connection 06:17:38 back into the pool. If None, it takes the value of ``preload_content`` 06:17:38 which defaults to ``True``. 
06:17:38 06:17:38 :param bool chunked: 06:17:38 If True, urllib3 will send the body using chunked transfer 06:17:38 encoding. Otherwise, urllib3 will send the body using the standard 06:17:38 content-length form. Defaults to False. 06:17:38 06:17:38 :param int body_pos: 06:17:38 Position to seek to in file-like body in the event of a retry or 06:17:38 redirect. Typically this won't need to be set because urllib3 will 06:17:38 auto-populate the value when needed. 06:17:38 """ 06:17:38 parsed_url = parse_url(url) 06:17:38 destination_scheme = parsed_url.scheme 06:17:38 06:17:38 if headers is None: 06:17:38 headers = self.headers 06:17:38 06:17:38 if not isinstance(retries, Retry): 06:17:38 retries = Retry.from_int(retries, redirect=redirect, default=self.retries) 06:17:38 06:17:38 if release_conn is None: 06:17:38 release_conn = preload_content 06:17:38 06:17:38 # Check host 06:17:38 if assert_same_host and not self.is_same_host(url): 06:17:38 raise HostChangedError(self, url, retries) 06:17:38 06:17:38 # Ensure that the URL we're connecting to is properly encoded 06:17:38 if url.startswith("/"): 06:17:38 url = to_str(_encode_target(url)) 06:17:38 else: 06:17:38 url = to_str(parsed_url.url) 06:17:38 06:17:38 conn = None 06:17:38 06:17:38 # Track whether `conn` needs to be released before 06:17:38 # returning/raising/recursing. Update this variable if necessary, and 06:17:38 # leave `release_conn` constant throughout the function. That way, if 06:17:38 # the function recurses, the original value of `release_conn` will be 06:17:38 # passed down into the recursive call, and its value will be respected. 06:17:38 # 06:17:38 # See issue #651 [1] for details. 06:17:38 # 06:17:38 # [1] 06:17:38 release_this_conn = release_conn 06:17:38 06:17:38 http_tunnel_required = connection_requires_http_tunnel( 06:17:38 self.proxy, self.proxy_config, destination_scheme 06:17:38 ) 06:17:38 06:17:38 # Merge the proxy headers. Only done when not using HTTP CONNECT. We 06:17:38 # have to copy the headers dict so we can safely change it without those 06:17:38 # changes being reflected in anyone else's copy. 06:17:38 if not http_tunnel_required: 06:17:38 headers = headers.copy() # type: ignore[attr-defined] 06:17:38 headers.update(self.proxy_headers) # type: ignore[union-attr] 06:17:38 06:17:38 # Must keep the exception bound to a separate variable or else Python 3 06:17:38 # complains about UnboundLocalError. 06:17:38 err = None 06:17:38 06:17:38 # Keep track of whether we cleanly exited the except block. This 06:17:38 # ensures we do proper cleanup in finally. 06:17:38 clean_exit = False 06:17:38 06:17:38 # Rewind body position, if needed. Record current position 06:17:38 # for future rewinds in the event of a redirect/retry. 06:17:38 body_pos = set_file_position(body, body_pos) 06:17:38 06:17:38 try: 06:17:38 # Request a connection from the queue. 06:17:38 timeout_obj = self._get_timeout(timeout) 06:17:38 conn = self._get_conn(timeout=pool_timeout) 06:17:38 06:17:38 conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] 06:17:38 06:17:38 # Is this a closed/new connection that requires CONNECT tunnelling? 
06:17:38 if self.proxy is not None and http_tunnel_required and conn.is_closed: 06:17:38 try: 06:17:38 self._prepare_proxy(conn) 06:17:38 except (BaseSSLError, OSError, SocketTimeout) as e: 06:17:38 self._raise_timeout( 06:17:38 err=e, url=self.proxy.url, timeout_value=conn.timeout 06:17:38 ) 06:17:38 raise 06:17:38 06:17:38 # If we're going to release the connection in ``finally:``, then 06:17:38 # the response doesn't need to know about the connection. Otherwise 06:17:38 # it will also try to release it and we'll have a double-release 06:17:38 # mess. 06:17:38 response_conn = conn if not release_conn else None 06:17:38 06:17:38 # Make the request on the HTTPConnection object 06:17:38 > response = self._make_request( 06:17:38 conn, 06:17:38 method, 06:17:38 url, 06:17:38 timeout=timeout_obj, 06:17:38 body=body, 06:17:38 headers=headers, 06:17:38 chunked=chunked, 06:17:38 retries=retries, 06:17:38 response_conn=response_conn, 06:17:38 preload_content=preload_content, 06:17:38 decode_content=decode_content, 06:17:38 **response_kw, 06:17:38 ) 06:17:38 06:17:38 ../.tox/tests190/lib/python3.11/site-packages/urllib3/connectionpool.py:787: 06:17:38 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 06:17:38 ../.tox/tests190/lib/python3.11/site-packages/urllib3/connectionpool.py:493: in _make_request 06:17:38 conn.request( 06:17:38 ../.tox/tests190/lib/python3.11/site-packages/urllib3/connection.py:494: in request 06:17:38 self.endheaders() 06:17:38 /opt/pyenv/versions/3.11.10/lib/python3.11/http/client.py:1298: in endheaders 06:17:38 self._send_output(message_body, encode_chunked=encode_chunked) 06:17:38 /opt/pyenv/versions/3.11.10/lib/python3.11/http/client.py:1058: in _send_output 06:17:38 self.send(msg) 06:17:38 /opt/pyenv/versions/3.11.10/lib/python3.11/http/client.py:996: in send 06:17:38 self.connect() 06:17:38 ../.tox/tests190/lib/python3.11/site-packages/urllib3/connection.py:325: in connect 06:17:38 self.sock = self._new_conn() 06:17:38 ^^^^^^^^^^^^^^^^ 06:17:38 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 06:17:38 06:17:38 self = 06:17:38 06:17:38 def _new_conn(self) -> socket.socket: 06:17:38 """Establish a socket connection and set nodelay settings on it. 06:17:38 06:17:38 :return: New socket connection. 06:17:38 """ 06:17:38 try: 06:17:38 sock = connection.create_connection( 06:17:38 (self._dns_host, self.port), 06:17:38 self.timeout, 06:17:38 source_address=self.source_address, 06:17:38 socket_options=self.socket_options, 06:17:38 ) 06:17:38 except socket.gaierror as e: 06:17:38 raise NameResolutionError(self.host, self, e) from e 06:17:38 except SocketTimeout as e: 06:17:38 raise ConnectTimeoutError( 06:17:38 self, 06:17:38 f"Connection to {self.host} timed out. 
(connect timeout={self.timeout})", 06:17:38 ) from e 06:17:38 06:17:38 except OSError as e: 06:17:38 > raise NewConnectionError( 06:17:38 self, f"Failed to establish a new connection: {e}" 06:17:38 ) from e 06:17:38 E urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused 06:17:38 06:17:38 ../.tox/tests190/lib/python3.11/site-packages/urllib3/connection.py:213: NewConnectionError 06:17:38 06:17:38 The above exception was the direct cause of the following exception: 06:17:38 06:17:38 self = 06:17:38 request = , stream = False 06:17:38 timeout = Timeout(connect=30, read=30, total=None), verify = True, cert = None 06:17:38 proxies = OrderedDict() 06:17:38 06:17:38 def send( 06:17:38 self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 06:17:38 ): 06:17:38 """Sends PreparedRequest object. Returns Response object. 06:17:38 06:17:38 :param request: The :class:`PreparedRequest ` being sent. 06:17:38 :param stream: (optional) Whether to stream the request content. 06:17:38 :param timeout: (optional) How long to wait for the server to send 06:17:38 data before giving up, as a float, or a :ref:`(connect timeout, 06:17:38 read timeout) ` tuple. 06:17:38 :type timeout: float or tuple or urllib3 Timeout object 06:17:38 :param verify: (optional) Either a boolean, in which case it controls whether 06:17:38 we verify the server's TLS certificate, or a string, in which case it 06:17:38 must be a path to a CA bundle to use 06:17:38 :param cert: (optional) Any user-provided SSL certificate to be trusted. 06:17:38 :param proxies: (optional) The proxies dictionary to apply to the request. 06:17:38 :rtype: requests.Response 06:17:38 """ 06:17:38 06:17:38 try: 06:17:38 conn = self.get_connection_with_tls_context( 06:17:38 request, verify, proxies=proxies, cert=cert 06:17:38 ) 06:17:38 except LocationValueError as e: 06:17:38 raise InvalidURL(e, request=request) 06:17:38 06:17:38 self.cert_verify(conn, request.url, verify, cert) 06:17:38 url = self.request_url(request, proxies) 06:17:38 self.add_headers( 06:17:38 request, 06:17:38 stream=stream, 06:17:38 timeout=timeout, 06:17:38 verify=verify, 06:17:38 cert=cert, 06:17:38 proxies=proxies, 06:17:38 ) 06:17:38 06:17:38 chunked = not (request.body is None or "Content-Length" in request.headers) 06:17:38 06:17:38 if isinstance(timeout, tuple): 06:17:38 try: 06:17:38 connect, read = timeout 06:17:38 timeout = TimeoutSauce(connect=connect, read=read) 06:17:38 except ValueError: 06:17:38 raise ValueError( 06:17:38 f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 06:17:38 f"or a single float to set both timeouts to the same value." 
06:17:38 ) 06:17:38 elif isinstance(timeout, TimeoutSauce): 06:17:38 pass 06:17:38 else: 06:17:38 timeout = TimeoutSauce(connect=timeout, read=timeout) 06:17:38 06:17:38 try: 06:17:38 > resp = conn.urlopen( 06:17:38 method=request.method, 06:17:38 url=url, 06:17:38 body=request.body, 06:17:38 headers=request.headers, 06:17:38 redirect=False, 06:17:38 assert_same_host=False, 06:17:38 preload_content=False, 06:17:38 decode_content=False, 06:17:38 retries=self.max_retries, 06:17:38 timeout=timeout, 06:17:38 chunked=chunked, 06:17:38 ) 06:17:38 06:17:38 ../.tox/tests190/lib/python3.11/site-packages/requests/adapters.py:667: 06:17:38 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 06:17:38 ../.tox/tests190/lib/python3.11/site-packages/urllib3/connectionpool.py:841: in urlopen 06:17:38 retries = retries.increment( 06:17:38 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 06:17:38 06:17:38 self = Retry(total=0, connect=None, read=False, redirect=None, status=None) 06:17:38 method = 'PUT' 06:17:38 url = '/rests/data/network-topology:network-topology/topology=topology-netconf/node=XPDR-OC' 06:17:38 response = None 06:17:38 error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused') 06:17:38 _pool = 06:17:38 _stacktrace = 06:17:38 06:17:38 def increment( 06:17:38 self, 06:17:38 method: str | None = None, 06:17:38 url: str | None = None, 06:17:38 response: BaseHTTPResponse | None = None, 06:17:38 error: Exception | None = None, 06:17:38 _pool: ConnectionPool | None = None, 06:17:38 _stacktrace: TracebackType | None = None, 06:17:38 ) -> Self: 06:17:38 """Return a new Retry object with incremented retry counters. 06:17:38 06:17:38 :param response: A response object, or None, if the server did not 06:17:38 return a response. 06:17:38 :type response: :class:`~urllib3.response.BaseHTTPResponse` 06:17:38 :param Exception error: An error encountered during the request, or 06:17:38 None if the response was received successfully. 06:17:38 06:17:38 :return: A new ``Retry`` object. 06:17:38 """ 06:17:38 if self.total is False and error: 06:17:38 # Disabled, indicate to re-raise the error. 06:17:38 raise reraise(type(error), error, _stacktrace) 06:17:38 06:17:38 total = self.total 06:17:38 if total is not None: 06:17:38 total -= 1 06:17:38 06:17:38 connect = self.connect 06:17:38 read = self.read 06:17:38 redirect = self.redirect 06:17:38 status_count = self.status 06:17:38 other = self.other 06:17:38 cause = "unknown" 06:17:38 status = None 06:17:38 redirect_location = None 06:17:38 06:17:38 if error and self._is_connection_error(error): 06:17:38 # Connect retry? 06:17:38 if connect is False: 06:17:38 raise reraise(type(error), error, _stacktrace) 06:17:38 elif connect is not None: 06:17:38 connect -= 1 06:17:38 06:17:38 elif error and self._is_read_error(error): 06:17:38 # Read retry? 06:17:38 if read is False or method is None or not self._is_method_retryable(method): 06:17:38 raise reraise(type(error), error, _stacktrace) 06:17:38 elif read is not None: 06:17:38 read -= 1 06:17:38 06:17:38 elif error: 06:17:38 # Other retry? 06:17:38 if other is not None: 06:17:38 other -= 1 06:17:38 06:17:38 elif response and response.get_redirect_location(): 06:17:38 # Redirect retry? 
06:17:38 if redirect is not None: 06:17:38 redirect -= 1 06:17:38 cause = "too many redirects" 06:17:38 response_redirect_location = response.get_redirect_location() 06:17:38 if response_redirect_location: 06:17:38 redirect_location = response_redirect_location 06:17:38 status = response.status 06:17:38 06:17:38 else: 06:17:38 # Incrementing because of a server error like a 500 in 06:17:38 # status_forcelist and the given method is in the allowed_methods 06:17:38 cause = ResponseError.GENERIC_ERROR 06:17:38 if response and response.status: 06:17:38 if status_count is not None: 06:17:38 status_count -= 1 06:17:38 cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status) 06:17:38 status = response.status 06:17:38 06:17:38 history = self.history + ( 06:17:38 RequestHistory(method, url, error, status, redirect_location), 06:17:38 ) 06:17:38 06:17:38 new_retry = self.new( 06:17:38 total=total, 06:17:38 connect=connect, 06:17:38 read=read, 06:17:38 redirect=redirect, 06:17:38 status=status_count, 06:17:38 other=other, 06:17:38 history=history, 06:17:38 ) 06:17:38 06:17:38 if new_retry.is_exhausted(): 06:17:38 reason = error or ResponseError(cause) 06:17:38 > raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type] 06:17:38 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 06:17:38 E urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=8190): Max retries exceeded with url: /rests/data/network-topology:network-topology/topology=topology-netconf/node=XPDR-OC (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 06:17:38 06:17:38 ../.tox/tests190/lib/python3.11/site-packages/urllib3/util/retry.py:519: MaxRetryError 06:17:38 06:17:38 During handling of the above exception, another exception occurred: 06:17:38 06:17:38 self = 06:17:38 06:17:38 def test_03_xpdr_device_connection(self): 06:17:38 > response = test_utils.mount_device("XPDR-OC", 06:17:38 ('oc-mpdr', self.NODE_VERSION)) 06:17:38 06:17:38 transportpce_tests/oc/test01_portmapping.py:56: 06:17:38 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 06:17:38 transportpce_tests/common/test_utils.py:362: in mount_device 06:17:38 response = put_request(url[RESTCONF_VERSION].format('{}', node), body) 06:17:38 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 06:17:38 transportpce_tests/common/test_utils.py:124: in put_request 06:17:38 return requests.request( 06:17:38 ../.tox/tests190/lib/python3.11/site-packages/requests/api.py:59: in request 06:17:38 return session.request(method=method, url=url, **kwargs) 06:17:38 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 06:17:38 ../.tox/tests190/lib/python3.11/site-packages/requests/sessions.py:589: in request 06:17:38 resp = self.send(prep, **send_kwargs) 06:17:38 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 06:17:38 ../.tox/tests190/lib/python3.11/site-packages/requests/sessions.py:703: in send 06:17:38 r = adapter.send(request, **kwargs) 06:17:38 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 06:17:38 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 06:17:38 06:17:38 self = 06:17:38 request = , stream = False 06:17:38 timeout = Timeout(connect=30, read=30, total=None), verify = True, cert = None 06:17:38 proxies = OrderedDict() 06:17:38 06:17:38 def send( 06:17:38 self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 06:17:38 ): 06:17:38 """Sends PreparedRequest object. Returns Response object. 
06:17:38 06:17:38 :param request: The :class:`PreparedRequest ` being sent. 06:17:38 :param stream: (optional) Whether to stream the request content. 06:17:38 :param timeout: (optional) How long to wait for the server to send 06:17:38 data before giving up, as a float, or a :ref:`(connect timeout, 06:17:38 read timeout) ` tuple. 06:17:38 :type timeout: float or tuple or urllib3 Timeout object 06:17:38 :param verify: (optional) Either a boolean, in which case it controls whether 06:17:38 we verify the server's TLS certificate, or a string, in which case it 06:17:38 must be a path to a CA bundle to use 06:17:38 :param cert: (optional) Any user-provided SSL certificate to be trusted. 06:17:38 :param proxies: (optional) The proxies dictionary to apply to the request. 06:17:38 :rtype: requests.Response 06:17:38 """ 06:17:38 06:17:38 try: 06:17:38 conn = self.get_connection_with_tls_context( 06:17:38 request, verify, proxies=proxies, cert=cert 06:17:38 ) 06:17:38 except LocationValueError as e: 06:17:38 raise InvalidURL(e, request=request) 06:17:38 06:17:38 self.cert_verify(conn, request.url, verify, cert) 06:17:38 url = self.request_url(request, proxies) 06:17:38 self.add_headers( 06:17:38 request, 06:17:38 stream=stream, 06:17:38 timeout=timeout, 06:17:38 verify=verify, 06:17:38 cert=cert, 06:17:38 proxies=proxies, 06:17:38 ) 06:17:38 06:17:38 chunked = not (request.body is None or "Content-Length" in request.headers) 06:17:38 06:17:38 if isinstance(timeout, tuple): 06:17:38 try: 06:17:38 connect, read = timeout 06:17:38 timeout = TimeoutSauce(connect=connect, read=read) 06:17:38 except ValueError: 06:17:38 raise ValueError( 06:17:38 f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 06:17:38 f"or a single float to set both timeouts to the same value." 06:17:38 ) 06:17:38 elif isinstance(timeout, TimeoutSauce): 06:17:38 pass 06:17:38 else: 06:17:38 timeout = TimeoutSauce(connect=timeout, read=timeout) 06:17:38 06:17:38 try: 06:17:38 resp = conn.urlopen( 06:17:38 method=request.method, 06:17:38 url=url, 06:17:38 body=request.body, 06:17:38 headers=request.headers, 06:17:38 redirect=False, 06:17:38 assert_same_host=False, 06:17:38 preload_content=False, 06:17:38 decode_content=False, 06:17:38 retries=self.max_retries, 06:17:38 timeout=timeout, 06:17:38 chunked=chunked, 06:17:38 ) 06:17:38 06:17:38 except (ProtocolError, OSError) as err: 06:17:38 raise ConnectionError(err, request=request) 06:17:38 06:17:38 except MaxRetryError as e: 06:17:38 if isinstance(e.reason, ConnectTimeoutError): 06:17:38 # TODO: Remove this in 3.0.0: see #2811 06:17:38 if not isinstance(e.reason, NewConnectionError): 06:17:38 raise ConnectTimeout(e, request=request) 06:17:38 06:17:38 if isinstance(e.reason, ResponseError): 06:17:38 raise RetryError(e, request=request) 06:17:38 06:17:38 if isinstance(e.reason, _ProxyError): 06:17:38 raise ProxyError(e, request=request) 06:17:38 06:17:38 if isinstance(e.reason, _SSLError): 06:17:38 # This branch is for urllib3 v1.22 and later. 
06:17:38 raise SSLError(e, request=request) 06:17:38 06:17:38 > raise ConnectionError(e, request=request) 06:17:38 E requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=8190): Max retries exceeded with url: /rests/data/network-topology:network-topology/topology=topology-netconf/node=XPDR-OC (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 06:17:38 06:17:38 ../.tox/tests190/lib/python3.11/site-packages/requests/adapters.py:700: ConnectionError 06:17:38 ----------------------------- Captured stdout call ----------------------------- 06:17:38 execution of test_03_xpdr_device_connection 06:17:38 ________ TransportpceOCPortMappingTesting.test_04_xpdr_device_connected ________ 06:17:38 06:17:38 self = 06:17:38 06:17:38 def _new_conn(self) -> socket.socket: 06:17:38 """Establish a socket connection and set nodelay settings on it. 06:17:38 06:17:38 :return: New socket connection. 06:17:38 """ 06:17:38 try: 06:17:38 > sock = connection.create_connection( 06:17:38 (self._dns_host, self.port), 06:17:38 self.timeout, 06:17:38 source_address=self.source_address, 06:17:38 socket_options=self.socket_options, 06:17:38 ) 06:17:38 06:17:38 ../.tox/tests190/lib/python3.11/site-packages/urllib3/connection.py:198: 06:17:38 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 06:17:38 ../.tox/tests190/lib/python3.11/site-packages/urllib3/util/connection.py:85: in create_connection 06:17:38 raise err 06:17:38 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 06:17:38 06:17:38 address = ('localhost', 8190), timeout = 30, source_address = None 06:17:38 socket_options = [(6, 1, 1)] 06:17:38 06:17:38 def create_connection( 06:17:38 address: tuple[str, int], 06:17:38 timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 06:17:38 source_address: tuple[str, int] | None = None, 06:17:38 socket_options: _TYPE_SOCKET_OPTIONS | None = None, 06:17:38 ) -> socket.socket: 06:17:38 """Connect to *address* and return the socket object. 06:17:38 06:17:38 Convenience function. Connect to *address* (a 2-tuple ``(host, 06:17:38 port)``) and return the socket object. Passing the optional 06:17:38 *timeout* parameter will set the timeout on the socket instance 06:17:38 before attempting to connect. If no *timeout* is supplied, the 06:17:38 global default timeout setting returned by :func:`socket.getdefaulttimeout` 06:17:38 is used. If *source_address* is set it must be a tuple of (host, port) 06:17:38 for the socket to bind as a source address before making the connection. 06:17:38 An host of '' or port 0 tells the OS to use the default. 06:17:38 """ 06:17:38 06:17:38 host, port = address 06:17:38 if host.startswith("["): 06:17:38 host = host.strip("[]") 06:17:38 err = None 06:17:38 06:17:38 # Using the value from allowed_gai_family() in the context of getaddrinfo lets 06:17:38 # us select whether to work with IPv4 DNS records, IPv6 records, or both. 06:17:38 # The original create_connection function always returns all records. 
06:17:38 family = allowed_gai_family() 06:17:38 06:17:38 try: 06:17:38 host.encode("idna") 06:17:38 except UnicodeError: 06:17:38 raise LocationParseError(f"'{host}', label empty or too long") from None 06:17:38 06:17:38 for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM): 06:17:38 af, socktype, proto, canonname, sa = res 06:17:38 sock = None 06:17:38 try: 06:17:38 sock = socket.socket(af, socktype, proto) 06:17:38 06:17:38 # If provided, set socket level options before connecting. 06:17:38 _set_socket_options(sock, socket_options) 06:17:38 06:17:38 if timeout is not _DEFAULT_TIMEOUT: 06:17:38 sock.settimeout(timeout) 06:17:38 if source_address: 06:17:38 sock.bind(source_address) 06:17:38 > sock.connect(sa) 06:17:38 E ConnectionRefusedError: [Errno 111] Connection refused 06:17:38 06:17:38 ../.tox/tests190/lib/python3.11/site-packages/urllib3/util/connection.py:73: ConnectionRefusedError 06:17:38 06:17:38 The above exception was the direct cause of the following exception: 06:17:38 06:17:38 self = 06:17:38 method = 'GET' 06:17:38 url = '/rests/data/network-topology:network-topology/topology=topology-netconf/node=XPDR-OC?content=nonconfig' 06:17:38 body = None 06:17:38 headers = {'User-Agent': 'python-requests/2.32.4', 'Accept-Encoding': 'gzip, deflate', 'Accept': 'application/json', 'Connection': 'keep-alive', 'Content-Type': 'application/json', 'Authorization': 'Basic YWRtaW46YWRtaW4='} 06:17:38 retries = Retry(total=0, connect=None, read=False, redirect=None, status=None) 06:17:38 redirect = False, assert_same_host = False 06:17:38 timeout = Timeout(connect=30, read=30, total=None), pool_timeout = None 06:17:38 release_conn = False, chunked = False, body_pos = None, preload_content = False 06:17:38 decode_content = False, response_kw = {} 06:17:38 parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/rests/data/network-topology:network-topology/topology=topology-netconf/node=XPDR-OC', query='content=nonconfig', fragment=None) 06:17:38 destination_scheme = None, conn = None, release_this_conn = True 06:17:38 http_tunnel_required = False, err = None, clean_exit = False 06:17:38 06:17:38 def urlopen( # type: ignore[override] 06:17:38 self, 06:17:38 method: str, 06:17:38 url: str, 06:17:38 body: _TYPE_BODY | None = None, 06:17:38 headers: typing.Mapping[str, str] | None = None, 06:17:38 retries: Retry | bool | int | None = None, 06:17:38 redirect: bool = True, 06:17:38 assert_same_host: bool = True, 06:17:38 timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 06:17:38 pool_timeout: int | None = None, 06:17:38 release_conn: bool | None = None, 06:17:38 chunked: bool = False, 06:17:38 body_pos: _TYPE_BODY_POSITION | None = None, 06:17:38 preload_content: bool = True, 06:17:38 decode_content: bool = True, 06:17:38 **response_kw: typing.Any, 06:17:38 ) -> BaseHTTPResponse: 06:17:38 """ 06:17:38 Get a connection from the pool and perform an HTTP request. This is the 06:17:38 lowest level call for making a request, so you'll need to specify all 06:17:38 the raw details. 06:17:38 06:17:38 .. note:: 06:17:38 06:17:38 More commonly, it's appropriate to use a convenience method 06:17:38 such as :meth:`request`. 06:17:38 06:17:38 .. note:: 06:17:38 06:17:38 `release_conn` will only behave as expected if 06:17:38 `preload_content=False` because we want to make 06:17:38 `preload_content=False` the default behaviour someday soon without 06:17:38 breaking backwards compatibility. 06:17:38 06:17:38 :param method: 06:17:38 HTTP request method (such as GET, POST, PUT, etc.) 
06:17:38 06:17:38 :param url: 06:17:38 The URL to perform the request on. 06:17:38 06:17:38 :param body: 06:17:38 Data to send in the request body, either :class:`str`, :class:`bytes`, 06:17:38 an iterable of :class:`str`/:class:`bytes`, or a file-like object. 06:17:38 06:17:38 :param headers: 06:17:38 Dictionary of custom headers to send, such as User-Agent, 06:17:38 If-None-Match, etc. If None, pool headers are used. If provided, 06:17:38 these headers completely replace any pool-specific headers. 06:17:38 06:17:38 :param retries: 06:17:38 Configure the number of retries to allow before raising a 06:17:38 :class:`~urllib3.exceptions.MaxRetryError` exception. 06:17:38 06:17:38 If ``None`` (default) will retry 3 times, see ``Retry.DEFAULT``. Pass a 06:17:38 :class:`~urllib3.util.retry.Retry` object for fine-grained control 06:17:38 over different types of retries. 06:17:38 Pass an integer number to retry connection errors that many times, 06:17:38 but no other types of errors. Pass zero to never retry. 06:17:38 06:17:38 If ``False``, then retries are disabled and any exception is raised 06:17:38 immediately. Also, instead of raising a MaxRetryError on redirects, 06:17:38 the redirect response will be returned. 06:17:38 06:17:38 :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. 06:17:38 06:17:38 :param redirect: 06:17:38 If True, automatically handle redirects (status codes 301, 302, 06:17:38 303, 307, 308). Each redirect counts as a retry. Disabling retries 06:17:38 will disable redirect, too. 06:17:38 06:17:38 :param assert_same_host: 06:17:38 If ``True``, will make sure that the host of the pool requests is 06:17:38 consistent else will raise HostChangedError. When ``False``, you can 06:17:38 use the pool on an HTTP proxy and request foreign hosts. 06:17:38 06:17:38 :param timeout: 06:17:38 If specified, overrides the default timeout for this one 06:17:38 request. It may be a float (in seconds) or an instance of 06:17:38 :class:`urllib3.util.Timeout`. 06:17:38 06:17:38 :param pool_timeout: 06:17:38 If set and the pool is set to block=True, then this method will 06:17:38 block for ``pool_timeout`` seconds and raise EmptyPoolError if no 06:17:38 connection is available within the time period. 06:17:38 06:17:38 :param bool preload_content: 06:17:38 If True, the response's body will be preloaded into memory. 06:17:38 06:17:38 :param bool decode_content: 06:17:38 If True, will attempt to decode the body based on the 06:17:38 'content-encoding' header. 06:17:38 06:17:38 :param release_conn: 06:17:38 If False, then the urlopen call will not release the connection 06:17:38 back into the pool once a response is received (but will release if 06:17:38 you read the entire contents of the response such as when 06:17:38 `preload_content=True`). This is useful if you're not preloading 06:17:38 the response's content immediately. You will need to call 06:17:38 ``r.release_conn()`` on the response ``r`` to return the connection 06:17:38 back into the pool. If None, it takes the value of ``preload_content`` 06:17:38 which defaults to ``True``. 06:17:38 06:17:38 :param bool chunked: 06:17:38 If True, urllib3 will send the body using chunked transfer 06:17:38 encoding. Otherwise, urllib3 will send the body using the standard 06:17:38 content-length form. Defaults to False. 06:17:38 06:17:38 :param int body_pos: 06:17:38 Position to seek to in file-like body in the event of a retry or 06:17:38 redirect. 
Typically this won't need to be set because urllib3 will 06:17:38 auto-populate the value when needed. 06:17:38 """ 06:17:38 parsed_url = parse_url(url) 06:17:38 destination_scheme = parsed_url.scheme 06:17:38 06:17:38 if headers is None: 06:17:38 headers = self.headers 06:17:38 06:17:38 if not isinstance(retries, Retry): 06:17:38 retries = Retry.from_int(retries, redirect=redirect, default=self.retries) 06:17:38 06:17:38 if release_conn is None: 06:17:38 release_conn = preload_content 06:17:38 06:17:38 # Check host 06:17:38 if assert_same_host and not self.is_same_host(url): 06:17:38 raise HostChangedError(self, url, retries) 06:17:38 06:17:38 # Ensure that the URL we're connecting to is properly encoded 06:17:38 if url.startswith("/"): 06:17:38 url = to_str(_encode_target(url)) 06:17:38 else: 06:17:38 url = to_str(parsed_url.url) 06:17:38 06:17:38 conn = None 06:17:38 06:17:38 # Track whether `conn` needs to be released before 06:17:38 # returning/raising/recursing. Update this variable if necessary, and 06:17:38 # leave `release_conn` constant throughout the function. That way, if 06:17:38 # the function recurses, the original value of `release_conn` will be 06:17:38 # passed down into the recursive call, and its value will be respected. 06:17:38 # 06:17:38 # See issue #651 [1] for details. 06:17:38 # 06:17:38 # [1] 06:17:38 release_this_conn = release_conn 06:17:38 06:17:38 http_tunnel_required = connection_requires_http_tunnel( 06:17:38 self.proxy, self.proxy_config, destination_scheme 06:17:38 ) 06:17:38 06:17:38 # Merge the proxy headers. Only done when not using HTTP CONNECT. We 06:17:38 # have to copy the headers dict so we can safely change it without those 06:17:38 # changes being reflected in anyone else's copy. 06:17:38 if not http_tunnel_required: 06:17:38 headers = headers.copy() # type: ignore[attr-defined] 06:17:38 headers.update(self.proxy_headers) # type: ignore[union-attr] 06:17:38 06:17:38 # Must keep the exception bound to a separate variable or else Python 3 06:17:38 # complains about UnboundLocalError. 06:17:38 err = None 06:17:38 06:17:38 # Keep track of whether we cleanly exited the except block. This 06:17:38 # ensures we do proper cleanup in finally. 06:17:38 clean_exit = False 06:17:38 06:17:38 # Rewind body position, if needed. Record current position 06:17:38 # for future rewinds in the event of a redirect/retry. 06:17:38 body_pos = set_file_position(body, body_pos) 06:17:38 06:17:38 try: 06:17:38 # Request a connection from the queue. 06:17:38 timeout_obj = self._get_timeout(timeout) 06:17:38 conn = self._get_conn(timeout=pool_timeout) 06:17:38 06:17:38 conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] 06:17:38 06:17:38 # Is this a closed/new connection that requires CONNECT tunnelling? 06:17:38 if self.proxy is not None and http_tunnel_required and conn.is_closed: 06:17:38 try: 06:17:38 self._prepare_proxy(conn) 06:17:38 except (BaseSSLError, OSError, SocketTimeout) as e: 06:17:38 self._raise_timeout( 06:17:38 err=e, url=self.proxy.url, timeout_value=conn.timeout 06:17:38 ) 06:17:38 raise 06:17:38 06:17:38 # If we're going to release the connection in ``finally:``, then 06:17:38 # the response doesn't need to know about the connection. Otherwise 06:17:38 # it will also try to release it and we'll have a double-release 06:17:38 # mess. 
06:17:38 response_conn = conn if not release_conn else None 06:17:38 06:17:38 # Make the request on the HTTPConnection object 06:17:38 > response = self._make_request( 06:17:38 conn, 06:17:38 method, 06:17:38 url, 06:17:38 timeout=timeout_obj, 06:17:38 body=body, 06:17:38 headers=headers, 06:17:38 chunked=chunked, 06:17:38 retries=retries, 06:17:38 response_conn=response_conn, 06:17:38 preload_content=preload_content, 06:17:38 decode_content=decode_content, 06:17:38 **response_kw, 06:17:38 ) 06:17:38 06:17:38 ../.tox/tests190/lib/python3.11/site-packages/urllib3/connectionpool.py:787: 06:17:38 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 06:17:38 ../.tox/tests190/lib/python3.11/site-packages/urllib3/connectionpool.py:493: in _make_request 06:17:38 conn.request( 06:17:38 ../.tox/tests190/lib/python3.11/site-packages/urllib3/connection.py:494: in request 06:17:38 self.endheaders() 06:17:38 /opt/pyenv/versions/3.11.10/lib/python3.11/http/client.py:1298: in endheaders 06:17:38 self._send_output(message_body, encode_chunked=encode_chunked) 06:17:38 /opt/pyenv/versions/3.11.10/lib/python3.11/http/client.py:1058: in _send_output 06:17:38 self.send(msg) 06:17:38 /opt/pyenv/versions/3.11.10/lib/python3.11/http/client.py:996: in send 06:17:38 self.connect() 06:17:38 ../.tox/tests190/lib/python3.11/site-packages/urllib3/connection.py:325: in connect 06:17:38 self.sock = self._new_conn() 06:17:38 ^^^^^^^^^^^^^^^^ 06:17:38 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 06:17:38 06:17:38 self = 06:17:38 06:17:38 def _new_conn(self) -> socket.socket: 06:17:38 """Establish a socket connection and set nodelay settings on it. 06:17:38 06:17:38 :return: New socket connection. 06:17:38 """ 06:17:38 try: 06:17:38 sock = connection.create_connection( 06:17:38 (self._dns_host, self.port), 06:17:38 self.timeout, 06:17:38 source_address=self.source_address, 06:17:38 socket_options=self.socket_options, 06:17:38 ) 06:17:38 except socket.gaierror as e: 06:17:38 raise NameResolutionError(self.host, self, e) from e 06:17:38 except SocketTimeout as e: 06:17:38 raise ConnectTimeoutError( 06:17:38 self, 06:17:38 f"Connection to {self.host} timed out. (connect timeout={self.timeout})", 06:17:38 ) from e 06:17:38 06:17:38 except OSError as e: 06:17:38 > raise NewConnectionError( 06:17:38 self, f"Failed to establish a new connection: {e}" 06:17:38 ) from e 06:17:38 E urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused 06:17:38 06:17:38 ../.tox/tests190/lib/python3.11/site-packages/urllib3/connection.py:213: NewConnectionError 06:17:38 06:17:38 The above exception was the direct cause of the following exception: 06:17:38 06:17:38 self = 06:17:38 request = , stream = False 06:17:38 timeout = Timeout(connect=30, read=30, total=None), verify = True, cert = None 06:17:38 proxies = OrderedDict() 06:17:38 06:17:38 def send( 06:17:38 self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 06:17:38 ): 06:17:38 """Sends PreparedRequest object. Returns Response object. 06:17:38 06:17:38 :param request: The :class:`PreparedRequest ` being sent. 06:17:38 :param stream: (optional) Whether to stream the request content. 06:17:38 :param timeout: (optional) How long to wait for the server to send 06:17:38 data before giving up, as a float, or a :ref:`(connect timeout, 06:17:38 read timeout) ` tuple. 
06:17:38 :type timeout: float or tuple or urllib3 Timeout object 06:17:38 :param verify: (optional) Either a boolean, in which case it controls whether 06:17:38 we verify the server's TLS certificate, or a string, in which case it 06:17:38 must be a path to a CA bundle to use 06:17:38 :param cert: (optional) Any user-provided SSL certificate to be trusted. 06:17:38 :param proxies: (optional) The proxies dictionary to apply to the request. 06:17:38 :rtype: requests.Response 06:17:38 """ 06:17:38 06:17:38 try: 06:17:38 conn = self.get_connection_with_tls_context( 06:17:38 request, verify, proxies=proxies, cert=cert 06:17:38 ) 06:17:38 except LocationValueError as e: 06:17:38 raise InvalidURL(e, request=request) 06:17:38 06:17:38 self.cert_verify(conn, request.url, verify, cert) 06:17:38 url = self.request_url(request, proxies) 06:17:38 self.add_headers( 06:17:38 request, 06:17:38 stream=stream, 06:17:38 timeout=timeout, 06:17:38 verify=verify, 06:17:38 cert=cert, 06:17:38 proxies=proxies, 06:17:38 ) 06:17:38 06:17:38 chunked = not (request.body is None or "Content-Length" in request.headers) 06:17:38 06:17:38 if isinstance(timeout, tuple): 06:17:38 try: 06:17:38 connect, read = timeout 06:17:38 timeout = TimeoutSauce(connect=connect, read=read) 06:17:38 except ValueError: 06:17:38 raise ValueError( 06:17:38 f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 06:17:38 f"or a single float to set both timeouts to the same value." 06:17:38 ) 06:17:38 elif isinstance(timeout, TimeoutSauce): 06:17:38 pass 06:17:38 else: 06:17:38 timeout = TimeoutSauce(connect=timeout, read=timeout) 06:17:38 06:17:38 try: 06:17:38 > resp = conn.urlopen( 06:17:38 method=request.method, 06:17:38 url=url, 06:17:38 body=request.body, 06:17:38 headers=request.headers, 06:17:38 redirect=False, 06:17:38 assert_same_host=False, 06:17:38 preload_content=False, 06:17:38 decode_content=False, 06:17:38 retries=self.max_retries, 06:17:38 timeout=timeout, 06:17:38 chunked=chunked, 06:17:38 ) 06:17:38 06:17:38 ../.tox/tests190/lib/python3.11/site-packages/requests/adapters.py:667: 06:17:38 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 06:17:38 ../.tox/tests190/lib/python3.11/site-packages/urllib3/connectionpool.py:841: in urlopen 06:17:38 retries = retries.increment( 06:17:38 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 06:17:38 06:17:38 self = Retry(total=0, connect=None, read=False, redirect=None, status=None) 06:17:38 method = 'GET' 06:17:38 url = '/rests/data/network-topology:network-topology/topology=topology-netconf/node=XPDR-OC?content=nonconfig' 06:17:38 response = None 06:17:38 error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused') 06:17:38 _pool = 06:17:38 _stacktrace = 06:17:38 06:17:38 def increment( 06:17:38 self, 06:17:38 method: str | None = None, 06:17:38 url: str | None = None, 06:17:38 response: BaseHTTPResponse | None = None, 06:17:38 error: Exception | None = None, 06:17:38 _pool: ConnectionPool | None = None, 06:17:38 _stacktrace: TracebackType | None = None, 06:17:38 ) -> Self: 06:17:38 """Return a new Retry object with incremented retry counters. 06:17:38 06:17:38 :param response: A response object, or None, if the server did not 06:17:38 return a response. 06:17:38 :type response: :class:`~urllib3.response.BaseHTTPResponse` 06:17:38 :param Exception error: An error encountered during the request, or 06:17:38 None if the response was received successfully. 
06:17:38 06:17:38 :return: A new ``Retry`` object. 06:17:38 """ 06:17:38 if self.total is False and error: 06:17:38 # Disabled, indicate to re-raise the error. 06:17:38 raise reraise(type(error), error, _stacktrace) 06:17:38 06:17:38 total = self.total 06:17:38 if total is not None: 06:17:38 total -= 1 06:17:38 06:17:38 connect = self.connect 06:17:38 read = self.read 06:17:38 redirect = self.redirect 06:17:38 status_count = self.status 06:17:38 other = self.other 06:17:38 cause = "unknown" 06:17:38 status = None 06:17:38 redirect_location = None 06:17:38 06:17:38 if error and self._is_connection_error(error): 06:17:38 # Connect retry? 06:17:38 if connect is False: 06:17:38 raise reraise(type(error), error, _stacktrace) 06:17:38 elif connect is not None: 06:17:38 connect -= 1 06:17:38 06:17:38 elif error and self._is_read_error(error): 06:17:38 # Read retry? 06:17:38 if read is False or method is None or not self._is_method_retryable(method): 06:17:38 raise reraise(type(error), error, _stacktrace) 06:17:38 elif read is not None: 06:17:38 read -= 1 06:17:38 06:17:38 elif error: 06:17:38 # Other retry? 06:17:38 if other is not None: 06:17:38 other -= 1 06:17:38 06:17:38 elif response and response.get_redirect_location(): 06:17:38 # Redirect retry? 06:17:38 if redirect is not None: 06:17:38 redirect -= 1 06:17:38 cause = "too many redirects" 06:17:38 response_redirect_location = response.get_redirect_location() 06:17:38 if response_redirect_location: 06:17:38 redirect_location = response_redirect_location 06:17:38 status = response.status 06:17:38 06:17:38 else: 06:17:38 # Incrementing because of a server error like a 500 in 06:17:38 # status_forcelist and the given method is in the allowed_methods 06:17:38 cause = ResponseError.GENERIC_ERROR 06:17:38 if response and response.status: 06:17:38 if status_count is not None: 06:17:38 status_count -= 1 06:17:38 cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status) 06:17:38 status = response.status 06:17:38 06:17:38 history = self.history + ( 06:17:38 RequestHistory(method, url, error, status, redirect_location), 06:17:38 ) 06:17:38 06:17:38 new_retry = self.new( 06:17:38 total=total, 06:17:38 connect=connect, 06:17:38 read=read, 06:17:38 redirect=redirect, 06:17:38 status=status_count, 06:17:38 other=other, 06:17:38 history=history, 06:17:38 ) 06:17:38 06:17:38 if new_retry.is_exhausted(): 06:17:38 reason = error or ResponseError(cause) 06:17:38 > raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type] 06:17:38 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 06:17:38 E urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=8190): Max retries exceeded with url: /rests/data/network-topology:network-topology/topology=topology-netconf/node=XPDR-OC?content=nonconfig (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 06:17:38 06:17:38 ../.tox/tests190/lib/python3.11/site-packages/urllib3/util/retry.py:519: MaxRetryError 06:17:38 06:17:38 During handling of the above exception, another exception occurred: 06:17:38 06:17:38 self = 06:17:38 06:17:38 def test_04_xpdr_device_connected(self): 06:17:38 > response = test_utils.check_device_connection("XPDR-OC") 06:17:38 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 06:17:38 06:17:38 transportpce_tests/oc/test01_portmapping.py:62: 06:17:38 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 06:17:38 transportpce_tests/common/test_utils.py:390: in 
check_device_connection 06:17:38 response = get_request(url[RESTCONF_VERSION].format('{}', node)) 06:17:38 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 06:17:38 transportpce_tests/common/test_utils.py:116: in get_request 06:17:38 return requests.request( 06:17:38 ../.tox/tests190/lib/python3.11/site-packages/requests/api.py:59: in request 06:17:38 return session.request(method=method, url=url, **kwargs) 06:17:38 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 06:17:38 ../.tox/tests190/lib/python3.11/site-packages/requests/sessions.py:589: in request 06:17:38 resp = self.send(prep, **send_kwargs) 06:17:38 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 06:17:38 ../.tox/tests190/lib/python3.11/site-packages/requests/sessions.py:703: in send 06:17:38 r = adapter.send(request, **kwargs) 06:17:38 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 06:17:38 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 06:17:38 06:17:38 self = 06:17:38 request = , stream = False 06:17:38 timeout = Timeout(connect=30, read=30, total=None), verify = True, cert = None 06:17:38 proxies = OrderedDict() 06:17:38 06:17:38 def send( 06:17:38 self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 06:17:38 ): 06:17:38 """Sends PreparedRequest object. Returns Response object. 06:17:38 06:17:38 :param request: The :class:`PreparedRequest ` being sent. 06:17:38 :param stream: (optional) Whether to stream the request content. 06:17:38 :param timeout: (optional) How long to wait for the server to send 06:17:38 data before giving up, as a float, or a :ref:`(connect timeout, 06:17:38 read timeout) ` tuple. 06:17:38 :type timeout: float or tuple or urllib3 Timeout object 06:17:38 :param verify: (optional) Either a boolean, in which case it controls whether 06:17:38 we verify the server's TLS certificate, or a string, in which case it 06:17:38 must be a path to a CA bundle to use 06:17:38 :param cert: (optional) Any user-provided SSL certificate to be trusted. 06:17:38 :param proxies: (optional) The proxies dictionary to apply to the request. 06:17:38 :rtype: requests.Response 06:17:38 """ 06:17:38 06:17:38 try: 06:17:38 conn = self.get_connection_with_tls_context( 06:17:38 request, verify, proxies=proxies, cert=cert 06:17:38 ) 06:17:38 except LocationValueError as e: 06:17:38 raise InvalidURL(e, request=request) 06:17:38 06:17:38 self.cert_verify(conn, request.url, verify, cert) 06:17:38 url = self.request_url(request, proxies) 06:17:38 self.add_headers( 06:17:38 request, 06:17:38 stream=stream, 06:17:38 timeout=timeout, 06:17:38 verify=verify, 06:17:38 cert=cert, 06:17:38 proxies=proxies, 06:17:38 ) 06:17:38 06:17:38 chunked = not (request.body is None or "Content-Length" in request.headers) 06:17:38 06:17:38 if isinstance(timeout, tuple): 06:17:38 try: 06:17:38 connect, read = timeout 06:17:38 timeout = TimeoutSauce(connect=connect, read=read) 06:17:38 except ValueError: 06:17:38 raise ValueError( 06:17:38 f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 06:17:38 f"or a single float to set both timeouts to the same value." 
06:17:38 ) 06:17:38 elif isinstance(timeout, TimeoutSauce): 06:17:38 pass 06:17:38 else: 06:17:38 timeout = TimeoutSauce(connect=timeout, read=timeout) 06:17:38 06:17:38 try: 06:17:38 resp = conn.urlopen( 06:17:38 method=request.method, 06:17:38 url=url, 06:17:38 body=request.body, 06:17:38 headers=request.headers, 06:17:38 redirect=False, 06:17:38 assert_same_host=False, 06:17:38 preload_content=False, 06:17:38 decode_content=False, 06:17:38 retries=self.max_retries, 06:17:38 timeout=timeout, 06:17:38 chunked=chunked, 06:17:38 ) 06:17:38 06:17:38 except (ProtocolError, OSError) as err: 06:17:38 raise ConnectionError(err, request=request) 06:17:38 06:17:38 except MaxRetryError as e: 06:17:38 if isinstance(e.reason, ConnectTimeoutError): 06:17:38 # TODO: Remove this in 3.0.0: see #2811 06:17:38 if not isinstance(e.reason, NewConnectionError): 06:17:38 raise ConnectTimeout(e, request=request) 06:17:38 06:17:38 if isinstance(e.reason, ResponseError): 06:17:38 raise RetryError(e, request=request) 06:17:38 06:17:38 if isinstance(e.reason, _ProxyError): 06:17:38 raise ProxyError(e, request=request) 06:17:38 06:17:38 if isinstance(e.reason, _SSLError): 06:17:38 # This branch is for urllib3 v1.22 and later. 06:17:38 raise SSLError(e, request=request) 06:17:38 06:17:38 > raise ConnectionError(e, request=request) 06:17:38 E requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=8190): Max retries exceeded with url: /rests/data/network-topology:network-topology/topology=topology-netconf/node=XPDR-OC?content=nonconfig (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 06:17:38 06:17:38 ../.tox/tests190/lib/python3.11/site-packages/requests/adapters.py:700: ConnectionError 06:17:38 ----------------------------- Captured stdout call ----------------------------- 06:17:38 execution of test_04_xpdr_device_connected 06:17:38 ________ TransportpceOCPortMappingTesting.test_05_xpdr_portmapping_info ________ 06:17:38 06:17:38 self = 06:17:38 06:17:38 def _new_conn(self) -> socket.socket: 06:17:38 """Establish a socket connection and set nodelay settings on it. 06:17:38 06:17:38 :return: New socket connection. 06:17:38 """ 06:17:38 try: 06:17:38 > sock = connection.create_connection( 06:17:38 (self._dns_host, self.port), 06:17:38 self.timeout, 06:17:38 source_address=self.source_address, 06:17:38 socket_options=self.socket_options, 06:17:38 ) 06:17:38 06:17:38 ../.tox/tests190/lib/python3.11/site-packages/urllib3/connection.py:198: 06:17:38 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 06:17:38 ../.tox/tests190/lib/python3.11/site-packages/urllib3/util/connection.py:85: in create_connection 06:17:38 raise err 06:17:38 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 06:17:38 06:17:38 address = ('localhost', 8190), timeout = 30, source_address = None 06:17:38 socket_options = [(6, 1, 1)] 06:17:38 06:17:38 def create_connection( 06:17:38 address: tuple[str, int], 06:17:38 timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 06:17:38 source_address: tuple[str, int] | None = None, 06:17:38 socket_options: _TYPE_SOCKET_OPTIONS | None = None, 06:17:38 ) -> socket.socket: 06:17:38 """Connect to *address* and return the socket object. 06:17:38 06:17:38 Convenience function. Connect to *address* (a 2-tuple ``(host, 06:17:38 port)``) and return the socket object. 
Passing the optional 06:17:38 *timeout* parameter will set the timeout on the socket instance 06:17:38 before attempting to connect. If no *timeout* is supplied, the 06:17:38 global default timeout setting returned by :func:`socket.getdefaulttimeout` 06:17:38 is used. If *source_address* is set it must be a tuple of (host, port) 06:17:38 for the socket to bind as a source address before making the connection. 06:17:38 An host of '' or port 0 tells the OS to use the default. 06:17:38 """ 06:17:38 06:17:38 host, port = address 06:17:38 if host.startswith("["): 06:17:38 host = host.strip("[]") 06:17:38 err = None 06:17:38 06:17:38 # Using the value from allowed_gai_family() in the context of getaddrinfo lets 06:17:38 # us select whether to work with IPv4 DNS records, IPv6 records, or both. 06:17:38 # The original create_connection function always returns all records. 06:17:38 family = allowed_gai_family() 06:17:38 06:17:38 try: 06:17:38 host.encode("idna") 06:17:38 except UnicodeError: 06:17:38 raise LocationParseError(f"'{host}', label empty or too long") from None 06:17:38 06:17:38 for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM): 06:17:38 af, socktype, proto, canonname, sa = res 06:17:38 sock = None 06:17:38 try: 06:17:38 sock = socket.socket(af, socktype, proto) 06:17:38 06:17:38 # If provided, set socket level options before connecting. 06:17:38 _set_socket_options(sock, socket_options) 06:17:38 06:17:38 if timeout is not _DEFAULT_TIMEOUT: 06:17:38 sock.settimeout(timeout) 06:17:38 if source_address: 06:17:38 sock.bind(source_address) 06:17:38 > sock.connect(sa) 06:17:38 E ConnectionRefusedError: [Errno 111] Connection refused 06:17:38 06:17:38 ../.tox/tests190/lib/python3.11/site-packages/urllib3/util/connection.py:73: ConnectionRefusedError 06:17:38 06:17:38 The above exception was the direct cause of the following exception: 06:17:38 06:17:38 self = 06:17:38 method = 'GET' 06:17:38 url = '/rests/data/transportpce-portmapping:network/nodes=XPDR-OC/node-info' 06:17:38 body = None 06:17:38 headers = {'User-Agent': 'python-requests/2.32.4', 'Accept-Encoding': 'gzip, deflate', 'Accept': 'application/json', 'Connection': 'keep-alive', 'Content-Type': 'application/json', 'Authorization': 'Basic YWRtaW46YWRtaW4='} 06:17:38 retries = Retry(total=0, connect=None, read=False, redirect=None, status=None) 06:17:38 redirect = False, assert_same_host = False 06:17:38 timeout = Timeout(connect=30, read=30, total=None), pool_timeout = None 06:17:38 release_conn = False, chunked = False, body_pos = None, preload_content = False 06:17:38 decode_content = False, response_kw = {} 06:17:38 parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/rests/data/transportpce-portmapping:network/nodes=XPDR-OC/node-info', query=None, fragment=None) 06:17:38 destination_scheme = None, conn = None, release_this_conn = True 06:17:38 http_tunnel_required = False, err = None, clean_exit = False 06:17:38 06:17:38 def urlopen( # type: ignore[override] 06:17:38 self, 06:17:38 method: str, 06:17:38 url: str, 06:17:38 body: _TYPE_BODY | None = None, 06:17:38 headers: typing.Mapping[str, str] | None = None, 06:17:38 retries: Retry | bool | int | None = None, 06:17:38 redirect: bool = True, 06:17:38 assert_same_host: bool = True, 06:17:38 timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 06:17:38 pool_timeout: int | None = None, 06:17:38 release_conn: bool | None = None, 06:17:38 chunked: bool = False, 06:17:38 body_pos: _TYPE_BODY_POSITION | None = None, 06:17:38 preload_content: bool = True, 
06:17:38 decode_content: bool = True, 06:17:38 **response_kw: typing.Any, 06:17:38 ) -> BaseHTTPResponse: 06:17:38 """ 06:17:38 Get a connection from the pool and perform an HTTP request. This is the 06:17:38 lowest level call for making a request, so you'll need to specify all 06:17:38 the raw details. 06:17:38 06:17:38 .. note:: 06:17:38 06:17:38 More commonly, it's appropriate to use a convenience method 06:17:38 such as :meth:`request`. 06:17:38 06:17:38 .. note:: 06:17:38 06:17:38 `release_conn` will only behave as expected if 06:17:38 `preload_content=False` because we want to make 06:17:38 `preload_content=False` the default behaviour someday soon without 06:17:38 breaking backwards compatibility. 06:17:38 06:17:38 :param method: 06:17:38 HTTP request method (such as GET, POST, PUT, etc.) 06:17:38 06:17:38 :param url: 06:17:38 The URL to perform the request on. 06:17:38 06:17:38 :param body: 06:17:38 Data to send in the request body, either :class:`str`, :class:`bytes`, 06:17:38 an iterable of :class:`str`/:class:`bytes`, or a file-like object. 06:17:38 06:17:38 :param headers: 06:17:38 Dictionary of custom headers to send, such as User-Agent, 06:17:38 If-None-Match, etc. If None, pool headers are used. If provided, 06:17:38 these headers completely replace any pool-specific headers. 06:17:38 06:17:38 :param retries: 06:17:38 Configure the number of retries to allow before raising a 06:17:38 :class:`~urllib3.exceptions.MaxRetryError` exception. 06:17:38 06:17:38 If ``None`` (default) will retry 3 times, see ``Retry.DEFAULT``. Pass a 06:17:38 :class:`~urllib3.util.retry.Retry` object for fine-grained control 06:17:38 over different types of retries. 06:17:38 Pass an integer number to retry connection errors that many times, 06:17:38 but no other types of errors. Pass zero to never retry. 06:17:38 06:17:38 If ``False``, then retries are disabled and any exception is raised 06:17:38 immediately. Also, instead of raising a MaxRetryError on redirects, 06:17:38 the redirect response will be returned. 06:17:38 06:17:38 :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. 06:17:38 06:17:38 :param redirect: 06:17:38 If True, automatically handle redirects (status codes 301, 302, 06:17:38 303, 307, 308). Each redirect counts as a retry. Disabling retries 06:17:38 will disable redirect, too. 06:17:38 06:17:38 :param assert_same_host: 06:17:38 If ``True``, will make sure that the host of the pool requests is 06:17:38 consistent else will raise HostChangedError. When ``False``, you can 06:17:38 use the pool on an HTTP proxy and request foreign hosts. 06:17:38 06:17:38 :param timeout: 06:17:38 If specified, overrides the default timeout for this one 06:17:38 request. It may be a float (in seconds) or an instance of 06:17:38 :class:`urllib3.util.Timeout`. 06:17:38 06:17:38 :param pool_timeout: 06:17:38 If set and the pool is set to block=True, then this method will 06:17:38 block for ``pool_timeout`` seconds and raise EmptyPoolError if no 06:17:38 connection is available within the time period. 06:17:38 06:17:38 :param bool preload_content: 06:17:38 If True, the response's body will be preloaded into memory. 06:17:38 06:17:38 :param bool decode_content: 06:17:38 If True, will attempt to decode the body based on the 06:17:38 'content-encoding' header. 
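The ``Timeout(connect=30, read=30, total=None)`` value that appears in the locals of every frame here is the urllib3 object built from the 30-second connect/read timeout used by these requests; per the ``timeout`` parameter above, the same thing can be given directly as a float or as a Timeout instance. A short, network-free sketch of the equivalence:

    from urllib3.util import Timeout

    # What requests builds internally from timeout=30 or timeout=(30, 30):
    t = Timeout(connect=30, read=30)
    print(t.connect_timeout, t.read_timeout)   # 30 30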
06:17:38 06:17:38 :param release_conn: 06:17:38 If False, then the urlopen call will not release the connection 06:17:38 back into the pool once a response is received (but will release if 06:17:38 you read the entire contents of the response such as when 06:17:38 `preload_content=True`). This is useful if you're not preloading 06:17:38 the response's content immediately. You will need to call 06:17:38 ``r.release_conn()`` on the response ``r`` to return the connection 06:17:38 back into the pool. If None, it takes the value of ``preload_content`` 06:17:38 which defaults to ``True``. 06:17:38 06:17:38 :param bool chunked: 06:17:38 If True, urllib3 will send the body using chunked transfer 06:17:38 encoding. Otherwise, urllib3 will send the body using the standard 06:17:38 content-length form. Defaults to False. 06:17:38 06:17:38 :param int body_pos: 06:17:38 Position to seek to in file-like body in the event of a retry or 06:17:38 redirect. Typically this won't need to be set because urllib3 will 06:17:38 auto-populate the value when needed. 06:17:38 """ 06:17:38 parsed_url = parse_url(url) 06:17:38 destination_scheme = parsed_url.scheme 06:17:38 06:17:38 if headers is None: 06:17:38 headers = self.headers 06:17:38 06:17:38 if not isinstance(retries, Retry): 06:17:38 retries = Retry.from_int(retries, redirect=redirect, default=self.retries) 06:17:38 06:17:38 if release_conn is None: 06:17:38 release_conn = preload_content 06:17:38 06:17:38 # Check host 06:17:38 if assert_same_host and not self.is_same_host(url): 06:17:38 raise HostChangedError(self, url, retries) 06:17:38 06:17:38 # Ensure that the URL we're connecting to is properly encoded 06:17:38 if url.startswith("/"): 06:17:38 url = to_str(_encode_target(url)) 06:17:38 else: 06:17:38 url = to_str(parsed_url.url) 06:17:38 06:17:38 conn = None 06:17:38 06:17:38 # Track whether `conn` needs to be released before 06:17:38 # returning/raising/recursing. Update this variable if necessary, and 06:17:38 # leave `release_conn` constant throughout the function. That way, if 06:17:38 # the function recurses, the original value of `release_conn` will be 06:17:38 # passed down into the recursive call, and its value will be respected. 06:17:38 # 06:17:38 # See issue #651 [1] for details. 06:17:38 # 06:17:38 # [1] 06:17:38 release_this_conn = release_conn 06:17:38 06:17:38 http_tunnel_required = connection_requires_http_tunnel( 06:17:38 self.proxy, self.proxy_config, destination_scheme 06:17:38 ) 06:17:38 06:17:38 # Merge the proxy headers. Only done when not using HTTP CONNECT. We 06:17:38 # have to copy the headers dict so we can safely change it without those 06:17:38 # changes being reflected in anyone else's copy. 06:17:38 if not http_tunnel_required: 06:17:38 headers = headers.copy() # type: ignore[attr-defined] 06:17:38 headers.update(self.proxy_headers) # type: ignore[union-attr] 06:17:38 06:17:38 # Must keep the exception bound to a separate variable or else Python 3 06:17:38 # complains about UnboundLocalError. 06:17:38 err = None 06:17:38 06:17:38 # Keep track of whether we cleanly exited the except block. This 06:17:38 # ensures we do proper cleanup in finally. 06:17:38 clean_exit = False 06:17:38 06:17:38 # Rewind body position, if needed. Record current position 06:17:38 # for future rewinds in the event of a redirect/retry. 06:17:38 body_pos = set_file_position(body, body_pos) 06:17:38 06:17:38 try: 06:17:38 # Request a connection from the queue. 
06:17:38 timeout_obj = self._get_timeout(timeout) 06:17:38 conn = self._get_conn(timeout=pool_timeout) 06:17:38 06:17:38 conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] 06:17:38 06:17:38 # Is this a closed/new connection that requires CONNECT tunnelling? 06:17:38 if self.proxy is not None and http_tunnel_required and conn.is_closed: 06:17:38 try: 06:17:38 self._prepare_proxy(conn) 06:17:38 except (BaseSSLError, OSError, SocketTimeout) as e: 06:17:38 self._raise_timeout( 06:17:38 err=e, url=self.proxy.url, timeout_value=conn.timeout 06:17:38 ) 06:17:38 raise 06:17:38 06:17:38 # If we're going to release the connection in ``finally:``, then 06:17:38 # the response doesn't need to know about the connection. Otherwise 06:17:38 # it will also try to release it and we'll have a double-release 06:17:38 # mess. 06:17:38 response_conn = conn if not release_conn else None 06:17:38 06:17:38 # Make the request on the HTTPConnection object 06:17:38 > response = self._make_request( 06:17:38 conn, 06:17:38 method, 06:17:38 url, 06:17:38 timeout=timeout_obj, 06:17:38 body=body, 06:17:38 headers=headers, 06:17:38 chunked=chunked, 06:17:38 retries=retries, 06:17:38 response_conn=response_conn, 06:17:38 preload_content=preload_content, 06:17:38 decode_content=decode_content, 06:17:38 **response_kw, 06:17:38 ) 06:17:38 06:17:38 ../.tox/tests190/lib/python3.11/site-packages/urllib3/connectionpool.py:787: 06:17:38 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 06:17:38 ../.tox/tests190/lib/python3.11/site-packages/urllib3/connectionpool.py:493: in _make_request 06:17:38 conn.request( 06:17:38 ../.tox/tests190/lib/python3.11/site-packages/urllib3/connection.py:494: in request 06:17:38 self.endheaders() 06:17:38 /opt/pyenv/versions/3.11.10/lib/python3.11/http/client.py:1298: in endheaders 06:17:38 self._send_output(message_body, encode_chunked=encode_chunked) 06:17:38 /opt/pyenv/versions/3.11.10/lib/python3.11/http/client.py:1058: in _send_output 06:17:38 self.send(msg) 06:17:38 /opt/pyenv/versions/3.11.10/lib/python3.11/http/client.py:996: in send 06:17:38 self.connect() 06:17:38 ../.tox/tests190/lib/python3.11/site-packages/urllib3/connection.py:325: in connect 06:17:38 self.sock = self._new_conn() 06:17:38 ^^^^^^^^^^^^^^^^ 06:17:38 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 06:17:38 06:17:38 self = 06:17:38 06:17:38 def _new_conn(self) -> socket.socket: 06:17:38 """Establish a socket connection and set nodelay settings on it. 06:17:38 06:17:38 :return: New socket connection. 06:17:38 """ 06:17:38 try: 06:17:38 sock = connection.create_connection( 06:17:38 (self._dns_host, self.port), 06:17:38 self.timeout, 06:17:38 source_address=self.source_address, 06:17:38 socket_options=self.socket_options, 06:17:38 ) 06:17:38 except socket.gaierror as e: 06:17:38 raise NameResolutionError(self.host, self, e) from e 06:17:38 except SocketTimeout as e: 06:17:38 raise ConnectTimeoutError( 06:17:38 self, 06:17:38 f"Connection to {self.host} timed out. 
(connect timeout={self.timeout})", 06:17:38 ) from e 06:17:38 06:17:38 except OSError as e: 06:17:38 > raise NewConnectionError( 06:17:38 self, f"Failed to establish a new connection: {e}" 06:17:38 ) from e 06:17:38 E urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused 06:17:38 06:17:38 ../.tox/tests190/lib/python3.11/site-packages/urllib3/connection.py:213: NewConnectionError 06:17:38 06:17:38 The above exception was the direct cause of the following exception: 06:17:38 06:17:38 self = 06:17:38 request = , stream = False 06:17:38 timeout = Timeout(connect=30, read=30, total=None), verify = True, cert = None 06:17:38 proxies = OrderedDict() 06:17:38 06:17:38 def send( 06:17:38 self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 06:17:38 ): 06:17:38 """Sends PreparedRequest object. Returns Response object. 06:17:38 06:17:38 :param request: The :class:`PreparedRequest ` being sent. 06:17:38 :param stream: (optional) Whether to stream the request content. 06:17:38 :param timeout: (optional) How long to wait for the server to send 06:17:38 data before giving up, as a float, or a :ref:`(connect timeout, 06:17:38 read timeout) ` tuple. 06:17:38 :type timeout: float or tuple or urllib3 Timeout object 06:17:38 :param verify: (optional) Either a boolean, in which case it controls whether 06:17:38 we verify the server's TLS certificate, or a string, in which case it 06:17:38 must be a path to a CA bundle to use 06:17:38 :param cert: (optional) Any user-provided SSL certificate to be trusted. 06:17:38 :param proxies: (optional) The proxies dictionary to apply to the request. 06:17:38 :rtype: requests.Response 06:17:38 """ 06:17:38 06:17:38 try: 06:17:38 conn = self.get_connection_with_tls_context( 06:17:38 request, verify, proxies=proxies, cert=cert 06:17:38 ) 06:17:38 except LocationValueError as e: 06:17:38 raise InvalidURL(e, request=request) 06:17:38 06:17:38 self.cert_verify(conn, request.url, verify, cert) 06:17:38 url = self.request_url(request, proxies) 06:17:38 self.add_headers( 06:17:38 request, 06:17:38 stream=stream, 06:17:38 timeout=timeout, 06:17:38 verify=verify, 06:17:38 cert=cert, 06:17:38 proxies=proxies, 06:17:38 ) 06:17:38 06:17:38 chunked = not (request.body is None or "Content-Length" in request.headers) 06:17:38 06:17:38 if isinstance(timeout, tuple): 06:17:38 try: 06:17:38 connect, read = timeout 06:17:38 timeout = TimeoutSauce(connect=connect, read=read) 06:17:38 except ValueError: 06:17:38 raise ValueError( 06:17:38 f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 06:17:38 f"or a single float to set both timeouts to the same value." 
06:17:38 ) 06:17:38 elif isinstance(timeout, TimeoutSauce): 06:17:38 pass 06:17:38 else: 06:17:38 timeout = TimeoutSauce(connect=timeout, read=timeout) 06:17:38 06:17:38 try: 06:17:38 > resp = conn.urlopen( 06:17:38 method=request.method, 06:17:38 url=url, 06:17:38 body=request.body, 06:17:38 headers=request.headers, 06:17:38 redirect=False, 06:17:38 assert_same_host=False, 06:17:38 preload_content=False, 06:17:38 decode_content=False, 06:17:38 retries=self.max_retries, 06:17:38 timeout=timeout, 06:17:38 chunked=chunked, 06:17:38 ) 06:17:38 06:17:38 ../.tox/tests190/lib/python3.11/site-packages/requests/adapters.py:667: 06:17:38 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 06:17:38 ../.tox/tests190/lib/python3.11/site-packages/urllib3/connectionpool.py:841: in urlopen 06:17:38 retries = retries.increment( 06:17:38 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 06:17:38 06:17:38 self = Retry(total=0, connect=None, read=False, redirect=None, status=None) 06:17:38 method = 'GET' 06:17:38 url = '/rests/data/transportpce-portmapping:network/nodes=XPDR-OC/node-info' 06:17:38 response = None 06:17:38 error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused') 06:17:38 _pool = 06:17:38 _stacktrace = 06:17:38 06:17:38 def increment( 06:17:38 self, 06:17:38 method: str | None = None, 06:17:38 url: str | None = None, 06:17:38 response: BaseHTTPResponse | None = None, 06:17:38 error: Exception | None = None, 06:17:38 _pool: ConnectionPool | None = None, 06:17:38 _stacktrace: TracebackType | None = None, 06:17:38 ) -> Self: 06:17:38 """Return a new Retry object with incremented retry counters. 06:17:38 06:17:38 :param response: A response object, or None, if the server did not 06:17:38 return a response. 06:17:38 :type response: :class:`~urllib3.response.BaseHTTPResponse` 06:17:38 :param Exception error: An error encountered during the request, or 06:17:38 None if the response was received successfully. 06:17:38 06:17:38 :return: A new ``Retry`` object. 06:17:38 """ 06:17:38 if self.total is False and error: 06:17:38 # Disabled, indicate to re-raise the error. 06:17:38 raise reraise(type(error), error, _stacktrace) 06:17:38 06:17:38 total = self.total 06:17:38 if total is not None: 06:17:38 total -= 1 06:17:38 06:17:38 connect = self.connect 06:17:38 read = self.read 06:17:38 redirect = self.redirect 06:17:38 status_count = self.status 06:17:38 other = self.other 06:17:38 cause = "unknown" 06:17:38 status = None 06:17:38 redirect_location = None 06:17:38 06:17:38 if error and self._is_connection_error(error): 06:17:38 # Connect retry? 06:17:38 if connect is False: 06:17:38 raise reraise(type(error), error, _stacktrace) 06:17:38 elif connect is not None: 06:17:38 connect -= 1 06:17:38 06:17:38 elif error and self._is_read_error(error): 06:17:38 # Read retry? 06:17:38 if read is False or method is None or not self._is_method_retryable(method): 06:17:38 raise reraise(type(error), error, _stacktrace) 06:17:38 elif read is not None: 06:17:38 read -= 1 06:17:38 06:17:38 elif error: 06:17:38 # Other retry? 06:17:38 if other is not None: 06:17:38 other -= 1 06:17:38 06:17:38 elif response and response.get_redirect_location(): 06:17:38 # Redirect retry? 
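``Retry(total=0, connect=None, read=False)``, the object shown in the locals here, corresponds to requests' default of zero retries, so a single failed connection attempt is enough to exhaust the policy and make ``increment()`` raise ``MaxRetryError``, as seen further down. A self-contained sketch of that behaviour, with a plain ``OSError`` standing in for the ``NewConnectionError`` from this trace:

    from urllib3.util.retry import Retry
    from urllib3.exceptions import MaxRetryError

    retry = Retry(total=0, connect=None, read=False)
    print(retry.is_exhausted())   # False: no counter has gone negative yet
    try:
        retry.increment(method="GET", url="/example", error=OSError("refused"))
    except MaxRetryError as exc:
        print(exc.reason)         # the error that exhausted the policy (total went to -1)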
06:17:38 if redirect is not None: 06:17:38 redirect -= 1 06:17:38 cause = "too many redirects" 06:17:38 response_redirect_location = response.get_redirect_location() 06:17:38 if response_redirect_location: 06:17:38 redirect_location = response_redirect_location 06:17:38 status = response.status 06:17:38 06:17:38 else: 06:17:38 # Incrementing because of a server error like a 500 in 06:17:38 # status_forcelist and the given method is in the allowed_methods 06:17:38 cause = ResponseError.GENERIC_ERROR 06:17:38 if response and response.status: 06:17:38 if status_count is not None: 06:17:38 status_count -= 1 06:17:38 cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status) 06:17:38 status = response.status 06:17:38 06:17:38 history = self.history + ( 06:17:38 RequestHistory(method, url, error, status, redirect_location), 06:17:38 ) 06:17:38 06:17:38 new_retry = self.new( 06:17:38 total=total, 06:17:38 connect=connect, 06:17:38 read=read, 06:17:38 redirect=redirect, 06:17:38 status=status_count, 06:17:38 other=other, 06:17:38 history=history, 06:17:38 ) 06:17:38 06:17:38 if new_retry.is_exhausted(): 06:17:38 reason = error or ResponseError(cause) 06:17:38 > raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type] 06:17:38 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 06:17:38 E urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=8190): Max retries exceeded with url: /rests/data/transportpce-portmapping:network/nodes=XPDR-OC/node-info (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 06:17:38 06:17:38 ../.tox/tests190/lib/python3.11/site-packages/urllib3/util/retry.py:519: MaxRetryError 06:17:38 06:17:38 During handling of the above exception, another exception occurred: 06:17:38 06:17:38 self = 06:17:38 06:17:38 def test_05_xpdr_portmapping_info(self): 06:17:38 > response = test_utils.get_portmapping_node_attr("XPDR-OC", "node-info", None) 06:17:38 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 06:17:38 06:17:38 transportpce_tests/oc/test01_portmapping.py:67: 06:17:38 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 06:17:38 transportpce_tests/common/test_utils.py:492: in get_portmapping_node_attr 06:17:38 response = get_request(target_url) 06:17:38 ^^^^^^^^^^^^^^^^^^^^^^^ 06:17:38 transportpce_tests/common/test_utils.py:116: in get_request 06:17:38 return requests.request( 06:17:38 ../.tox/tests190/lib/python3.11/site-packages/requests/api.py:59: in request 06:17:38 return session.request(method=method, url=url, **kwargs) 06:17:38 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 06:17:38 ../.tox/tests190/lib/python3.11/site-packages/requests/sessions.py:589: in request 06:17:38 resp = self.send(prep, **send_kwargs) 06:17:38 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 06:17:38 ../.tox/tests190/lib/python3.11/site-packages/requests/sessions.py:703: in send 06:17:38 r = adapter.send(request, **kwargs) 06:17:38 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 06:17:38 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 06:17:38 06:17:38 self = 06:17:38 request = , stream = False 06:17:38 timeout = Timeout(connect=30, read=30, total=None), verify = True, cert = None 06:17:38 proxies = OrderedDict() 06:17:38 06:17:38 def send( 06:17:38 self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 06:17:38 ): 06:17:38 """Sends PreparedRequest object. Returns Response object. 
06:17:38 06:17:38 :param request: The :class:`PreparedRequest ` being sent. 06:17:38 :param stream: (optional) Whether to stream the request content. 06:17:38 :param timeout: (optional) How long to wait for the server to send 06:17:38 data before giving up, as a float, or a :ref:`(connect timeout, 06:17:38 read timeout) ` tuple. 06:17:38 :type timeout: float or tuple or urllib3 Timeout object 06:17:38 :param verify: (optional) Either a boolean, in which case it controls whether 06:17:38 we verify the server's TLS certificate, or a string, in which case it 06:17:38 must be a path to a CA bundle to use 06:17:38 :param cert: (optional) Any user-provided SSL certificate to be trusted. 06:17:38 :param proxies: (optional) The proxies dictionary to apply to the request. 06:17:38 :rtype: requests.Response 06:17:38 """ 06:17:38 06:17:38 try: 06:17:38 conn = self.get_connection_with_tls_context( 06:17:38 request, verify, proxies=proxies, cert=cert 06:17:38 ) 06:17:38 except LocationValueError as e: 06:17:38 raise InvalidURL(e, request=request) 06:17:38 06:17:38 self.cert_verify(conn, request.url, verify, cert) 06:17:38 url = self.request_url(request, proxies) 06:17:38 self.add_headers( 06:17:38 request, 06:17:38 stream=stream, 06:17:38 timeout=timeout, 06:17:38 verify=verify, 06:17:38 cert=cert, 06:17:38 proxies=proxies, 06:17:38 ) 06:17:38 06:17:38 chunked = not (request.body is None or "Content-Length" in request.headers) 06:17:38 06:17:38 if isinstance(timeout, tuple): 06:17:38 try: 06:17:38 connect, read = timeout 06:17:38 timeout = TimeoutSauce(connect=connect, read=read) 06:17:38 except ValueError: 06:17:38 raise ValueError( 06:17:38 f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 06:17:38 f"or a single float to set both timeouts to the same value." 06:17:38 ) 06:17:38 elif isinstance(timeout, TimeoutSauce): 06:17:38 pass 06:17:38 else: 06:17:38 timeout = TimeoutSauce(connect=timeout, read=timeout) 06:17:38 06:17:38 try: 06:17:38 resp = conn.urlopen( 06:17:38 method=request.method, 06:17:38 url=url, 06:17:38 body=request.body, 06:17:38 headers=request.headers, 06:17:38 redirect=False, 06:17:38 assert_same_host=False, 06:17:38 preload_content=False, 06:17:38 decode_content=False, 06:17:38 retries=self.max_retries, 06:17:38 timeout=timeout, 06:17:38 chunked=chunked, 06:17:38 ) 06:17:38 06:17:38 except (ProtocolError, OSError) as err: 06:17:38 raise ConnectionError(err, request=request) 06:17:38 06:17:38 except MaxRetryError as e: 06:17:38 if isinstance(e.reason, ConnectTimeoutError): 06:17:38 # TODO: Remove this in 3.0.0: see #2811 06:17:38 if not isinstance(e.reason, NewConnectionError): 06:17:38 raise ConnectTimeout(e, request=request) 06:17:38 06:17:38 if isinstance(e.reason, ResponseError): 06:17:38 raise RetryError(e, request=request) 06:17:38 06:17:38 if isinstance(e.reason, _ProxyError): 06:17:38 raise ProxyError(e, request=request) 06:17:38 06:17:38 if isinstance(e.reason, _SSLError): 06:17:38 # This branch is for urllib3 v1.22 and later. 
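Because the adapter hands ``retries=self.max_retries`` straight to ``urlopen``, the usual way to get more than the default zero attempts is a session-level retry policy. A hedged sketch using the public requests/urllib3 API; the prefix and numbers are arbitrary, and nothing here implies the TransportPCE test utilities do this:

    import requests
    from requests.adapters import HTTPAdapter
    from urllib3.util.retry import Retry

    session = requests.Session()
    session.mount(
        "http://",
        HTTPAdapter(max_retries=Retry(total=5, connect=5, backoff_factor=0.5)),
    )
    # session.get(...) now retries refused or failed connections before giving up.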
06:17:38 raise SSLError(e, request=request) 06:17:38 06:17:38 > raise ConnectionError(e, request=request) 06:17:38 E requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=8190): Max retries exceeded with url: /rests/data/transportpce-portmapping:network/nodes=XPDR-OC/node-info (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 06:17:38 06:17:38 ../.tox/tests190/lib/python3.11/site-packages/requests/adapters.py:700: ConnectionError 06:17:38 ----------------------------- Captured stdout call ----------------------------- 06:17:38 execution of test_05_xpdr_portmapping_info 06:17:38 ______ TransportpceOCPortMappingTesting.test_06_mpdr_portmapping_NETWORK5 ______ 06:17:38 06:17:38 self = 06:17:38 06:17:38 def _new_conn(self) -> socket.socket: 06:17:38 """Establish a socket connection and set nodelay settings on it. 06:17:38 06:17:38 :return: New socket connection. 06:17:38 """ 06:17:38 try: 06:17:38 > sock = connection.create_connection( 06:17:38 (self._dns_host, self.port), 06:17:38 self.timeout, 06:17:38 source_address=self.source_address, 06:17:38 socket_options=self.socket_options, 06:17:38 ) 06:17:38 06:17:38 ../.tox/tests190/lib/python3.11/site-packages/urllib3/connection.py:198: 06:17:38 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 06:17:38 ../.tox/tests190/lib/python3.11/site-packages/urllib3/util/connection.py:85: in create_connection 06:17:38 raise err 06:17:38 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 06:17:38 06:17:38 address = ('localhost', 8190), timeout = 30, source_address = None 06:17:38 socket_options = [(6, 1, 1)] 06:17:38 06:17:38 def create_connection( 06:17:38 address: tuple[str, int], 06:17:38 timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 06:17:38 source_address: tuple[str, int] | None = None, 06:17:38 socket_options: _TYPE_SOCKET_OPTIONS | None = None, 06:17:38 ) -> socket.socket: 06:17:38 """Connect to *address* and return the socket object. 06:17:38 06:17:38 Convenience function. Connect to *address* (a 2-tuple ``(host, 06:17:38 port)``) and return the socket object. Passing the optional 06:17:38 *timeout* parameter will set the timeout on the socket instance 06:17:38 before attempting to connect. If no *timeout* is supplied, the 06:17:38 global default timeout setting returned by :func:`socket.getdefaulttimeout` 06:17:38 is used. If *source_address* is set it must be a tuple of (host, port) 06:17:38 for the socket to bind as a source address before making the connection. 06:17:38 An host of '' or port 0 tells the OS to use the default. 06:17:38 """ 06:17:38 06:17:38 host, port = address 06:17:38 if host.startswith("["): 06:17:38 host = host.strip("[]") 06:17:38 err = None 06:17:38 06:17:38 # Using the value from allowed_gai_family() in the context of getaddrinfo lets 06:17:38 # us select whether to work with IPv4 DNS records, IPv6 records, or both. 06:17:38 # The original create_connection function always returns all records. 06:17:38 family = allowed_gai_family() 06:17:38 06:17:38 try: 06:17:38 host.encode("idna") 06:17:38 except UnicodeError: 06:17:38 raise LocationParseError(f"'{host}', label empty or too long") from None 06:17:38 06:17:38 for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM): 06:17:38 af, socktype, proto, canonname, sa = res 06:17:38 sock = None 06:17:38 try: 06:17:38 sock = socket.socket(af, socktype, proto) 06:17:38 06:17:38 # If provided, set socket level options before connecting. 
06:17:38 _set_socket_options(sock, socket_options) 06:17:38 06:17:38 if timeout is not _DEFAULT_TIMEOUT: 06:17:38 sock.settimeout(timeout) 06:17:38 if source_address: 06:17:38 sock.bind(source_address) 06:17:38 > sock.connect(sa) 06:17:38 E ConnectionRefusedError: [Errno 111] Connection refused 06:17:38 06:17:38 ../.tox/tests190/lib/python3.11/site-packages/urllib3/util/connection.py:73: ConnectionRefusedError 06:17:38 06:17:38 The above exception was the direct cause of the following exception: 06:17:38 06:17:38 self = 06:17:38 method = 'GET' 06:17:38 url = '/rests/data/transportpce-portmapping:network/nodes=XPDR-OC/mapping=XPDR1-NETWORK5' 06:17:38 body = None 06:17:38 headers = {'User-Agent': 'python-requests/2.32.4', 'Accept-Encoding': 'gzip, deflate', 'Accept': 'application/json', 'Connection': 'keep-alive', 'Content-Type': 'application/json', 'Authorization': 'Basic YWRtaW46YWRtaW4='} 06:17:38 retries = Retry(total=0, connect=None, read=False, redirect=None, status=None) 06:17:38 redirect = False, assert_same_host = False 06:17:38 timeout = Timeout(connect=30, read=30, total=None), pool_timeout = None 06:17:38 release_conn = False, chunked = False, body_pos = None, preload_content = False 06:17:38 decode_content = False, response_kw = {} 06:17:38 parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/rests/data/transportpce-portmapping:network/nodes=XPDR-OC/mapping=XPDR1-NETWORK5', query=None, fragment=None) 06:17:38 destination_scheme = None, conn = None, release_this_conn = True 06:17:38 http_tunnel_required = False, err = None, clean_exit = False 06:17:38 06:17:38 def urlopen( # type: ignore[override] 06:17:38 self, 06:17:38 method: str, 06:17:38 url: str, 06:17:38 body: _TYPE_BODY | None = None, 06:17:38 headers: typing.Mapping[str, str] | None = None, 06:17:38 retries: Retry | bool | int | None = None, 06:17:38 redirect: bool = True, 06:17:38 assert_same_host: bool = True, 06:17:38 timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 06:17:38 pool_timeout: int | None = None, 06:17:38 release_conn: bool | None = None, 06:17:38 chunked: bool = False, 06:17:38 body_pos: _TYPE_BODY_POSITION | None = None, 06:17:38 preload_content: bool = True, 06:17:38 decode_content: bool = True, 06:17:38 **response_kw: typing.Any, 06:17:38 ) -> BaseHTTPResponse: 06:17:38 """ 06:17:38 Get a connection from the pool and perform an HTTP request. This is the 06:17:38 lowest level call for making a request, so you'll need to specify all 06:17:38 the raw details. 06:17:38 06:17:38 .. note:: 06:17:38 06:17:38 More commonly, it's appropriate to use a convenience method 06:17:38 such as :meth:`request`. 06:17:38 06:17:38 .. note:: 06:17:38 06:17:38 `release_conn` will only behave as expected if 06:17:38 `preload_content=False` because we want to make 06:17:38 `preload_content=False` the default behaviour someday soon without 06:17:38 breaking backwards compatibility. 06:17:38 06:17:38 :param method: 06:17:38 HTTP request method (such as GET, POST, PUT, etc.) 06:17:38 06:17:38 :param url: 06:17:38 The URL to perform the request on. 06:17:38 06:17:38 :param body: 06:17:38 Data to send in the request body, either :class:`str`, :class:`bytes`, 06:17:38 an iterable of :class:`str`/:class:`bytes`, or a file-like object. 06:17:38 06:17:38 :param headers: 06:17:38 Dictionary of custom headers to send, such as User-Agent, 06:17:38 If-None-Match, etc. If None, pool headers are used. If provided, 06:17:38 these headers completely replace any pool-specific headers. 
06:17:38 06:17:38 :param retries: 06:17:38 Configure the number of retries to allow before raising a 06:17:38 :class:`~urllib3.exceptions.MaxRetryError` exception. 06:17:38 06:17:38 If ``None`` (default) will retry 3 times, see ``Retry.DEFAULT``. Pass a 06:17:38 :class:`~urllib3.util.retry.Retry` object for fine-grained control 06:17:38 over different types of retries. 06:17:38 Pass an integer number to retry connection errors that many times, 06:17:38 but no other types of errors. Pass zero to never retry. 06:17:38 06:17:38 If ``False``, then retries are disabled and any exception is raised 06:17:38 immediately. Also, instead of raising a MaxRetryError on redirects, 06:17:38 the redirect response will be returned. 06:17:38 06:17:38 :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. 06:17:38 06:17:38 :param redirect: 06:17:38 If True, automatically handle redirects (status codes 301, 302, 06:17:38 303, 307, 308). Each redirect counts as a retry. Disabling retries 06:17:38 will disable redirect, too. 06:17:38 06:17:38 :param assert_same_host: 06:17:38 If ``True``, will make sure that the host of the pool requests is 06:17:38 consistent else will raise HostChangedError. When ``False``, you can 06:17:38 use the pool on an HTTP proxy and request foreign hosts. 06:17:38 06:17:38 :param timeout: 06:17:38 If specified, overrides the default timeout for this one 06:17:38 request. It may be a float (in seconds) or an instance of 06:17:38 :class:`urllib3.util.Timeout`. 06:17:38 06:17:38 :param pool_timeout: 06:17:38 If set and the pool is set to block=True, then this method will 06:17:38 block for ``pool_timeout`` seconds and raise EmptyPoolError if no 06:17:38 connection is available within the time period. 06:17:38 06:17:38 :param bool preload_content: 06:17:38 If True, the response's body will be preloaded into memory. 06:17:38 06:17:38 :param bool decode_content: 06:17:38 If True, will attempt to decode the body based on the 06:17:38 'content-encoding' header. 06:17:38 06:17:38 :param release_conn: 06:17:38 If False, then the urlopen call will not release the connection 06:17:38 back into the pool once a response is received (but will release if 06:17:38 you read the entire contents of the response such as when 06:17:38 `preload_content=True`). This is useful if you're not preloading 06:17:38 the response's content immediately. You will need to call 06:17:38 ``r.release_conn()`` on the response ``r`` to return the connection 06:17:38 back into the pool. If None, it takes the value of ``preload_content`` 06:17:38 which defaults to ``True``. 06:17:38 06:17:38 :param bool chunked: 06:17:38 If True, urllib3 will send the body using chunked transfer 06:17:38 encoding. Otherwise, urllib3 will send the body using the standard 06:17:38 content-length form. Defaults to False. 06:17:38 06:17:38 :param int body_pos: 06:17:38 Position to seek to in file-like body in the event of a retry or 06:17:38 redirect. Typically this won't need to be set because urllib3 will 06:17:38 auto-populate the value when needed. 
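The ``retries`` parameter documented above accepts three forms, all normalized through ``Retry.from_int`` in the body of ``urlopen`` (visible earlier in this trace). A network-free sketch of how those forms are interpreted (values are illustrative):

    from urllib3.util.retry import Retry

    print(Retry.from_int(2))       # plain integer -> Retry(total=2, ...)
    print(Retry.from_int(None))    # None          -> Retry.DEFAULT, i.e. 3 total retries
    print(Retry.from_int(False))   # False         -> Retry(total=False, ...): never retry, re-raise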
06:17:38 """ 06:17:38 parsed_url = parse_url(url) 06:17:38 destination_scheme = parsed_url.scheme 06:17:38 06:17:38 if headers is None: 06:17:38 headers = self.headers 06:17:38 06:17:38 if not isinstance(retries, Retry): 06:17:38 retries = Retry.from_int(retries, redirect=redirect, default=self.retries) 06:17:38 06:17:38 if release_conn is None: 06:17:38 release_conn = preload_content 06:17:38 06:17:38 # Check host 06:17:38 if assert_same_host and not self.is_same_host(url): 06:17:38 raise HostChangedError(self, url, retries) 06:17:38 06:17:38 # Ensure that the URL we're connecting to is properly encoded 06:17:38 if url.startswith("/"): 06:17:38 url = to_str(_encode_target(url)) 06:17:38 else: 06:17:38 url = to_str(parsed_url.url) 06:17:38 06:17:38 conn = None 06:17:38 06:17:38 # Track whether `conn` needs to be released before 06:17:38 # returning/raising/recursing. Update this variable if necessary, and 06:17:38 # leave `release_conn` constant throughout the function. That way, if 06:17:38 # the function recurses, the original value of `release_conn` will be 06:17:38 # passed down into the recursive call, and its value will be respected. 06:17:38 # 06:17:38 # See issue #651 [1] for details. 06:17:38 # 06:17:38 # [1] 06:17:38 release_this_conn = release_conn 06:17:38 06:17:38 http_tunnel_required = connection_requires_http_tunnel( 06:17:38 self.proxy, self.proxy_config, destination_scheme 06:17:38 ) 06:17:38 06:17:38 # Merge the proxy headers. Only done when not using HTTP CONNECT. We 06:17:38 # have to copy the headers dict so we can safely change it without those 06:17:38 # changes being reflected in anyone else's copy. 06:17:38 if not http_tunnel_required: 06:17:38 headers = headers.copy() # type: ignore[attr-defined] 06:17:38 headers.update(self.proxy_headers) # type: ignore[union-attr] 06:17:38 06:17:38 # Must keep the exception bound to a separate variable or else Python 3 06:17:38 # complains about UnboundLocalError. 06:17:38 err = None 06:17:38 06:17:38 # Keep track of whether we cleanly exited the except block. This 06:17:38 # ensures we do proper cleanup in finally. 06:17:38 clean_exit = False 06:17:38 06:17:38 # Rewind body position, if needed. Record current position 06:17:38 # for future rewinds in the event of a redirect/retry. 06:17:38 body_pos = set_file_position(body, body_pos) 06:17:38 06:17:38 try: 06:17:38 # Request a connection from the queue. 06:17:38 timeout_obj = self._get_timeout(timeout) 06:17:38 conn = self._get_conn(timeout=pool_timeout) 06:17:38 06:17:38 conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] 06:17:38 06:17:38 # Is this a closed/new connection that requires CONNECT tunnelling? 06:17:38 if self.proxy is not None and http_tunnel_required and conn.is_closed: 06:17:38 try: 06:17:38 self._prepare_proxy(conn) 06:17:38 except (BaseSSLError, OSError, SocketTimeout) as e: 06:17:38 self._raise_timeout( 06:17:38 err=e, url=self.proxy.url, timeout_value=conn.timeout 06:17:38 ) 06:17:38 raise 06:17:38 06:17:38 # If we're going to release the connection in ``finally:``, then 06:17:38 # the response doesn't need to know about the connection. Otherwise 06:17:38 # it will also try to release it and we'll have a double-release 06:17:38 # mess. 
06:17:38 response_conn = conn if not release_conn else None 06:17:38 06:17:38 # Make the request on the HTTPConnection object 06:17:38 > response = self._make_request( 06:17:38 conn, 06:17:38 method, 06:17:38 url, 06:17:38 timeout=timeout_obj, 06:17:38 body=body, 06:17:38 headers=headers, 06:17:38 chunked=chunked, 06:17:38 retries=retries, 06:17:38 response_conn=response_conn, 06:17:38 preload_content=preload_content, 06:17:38 decode_content=decode_content, 06:17:38 **response_kw, 06:17:38 ) 06:17:38 06:17:38 ../.tox/tests190/lib/python3.11/site-packages/urllib3/connectionpool.py:787: 06:17:38 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 06:17:38 ../.tox/tests190/lib/python3.11/site-packages/urllib3/connectionpool.py:493: in _make_request 06:17:38 conn.request( 06:17:38 ../.tox/tests190/lib/python3.11/site-packages/urllib3/connection.py:494: in request 06:17:38 self.endheaders() 06:17:38 /opt/pyenv/versions/3.11.10/lib/python3.11/http/client.py:1298: in endheaders 06:17:38 self._send_output(message_body, encode_chunked=encode_chunked) 06:17:38 /opt/pyenv/versions/3.11.10/lib/python3.11/http/client.py:1058: in _send_output 06:17:38 self.send(msg) 06:17:38 /opt/pyenv/versions/3.11.10/lib/python3.11/http/client.py:996: in send 06:17:38 self.connect() 06:17:38 ../.tox/tests190/lib/python3.11/site-packages/urllib3/connection.py:325: in connect 06:17:38 self.sock = self._new_conn() 06:17:38 ^^^^^^^^^^^^^^^^ 06:17:38 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 06:17:38 06:17:38 self = 06:17:38 06:17:38 def _new_conn(self) -> socket.socket: 06:17:38 """Establish a socket connection and set nodelay settings on it. 06:17:38 06:17:38 :return: New socket connection. 06:17:38 """ 06:17:38 try: 06:17:38 sock = connection.create_connection( 06:17:38 (self._dns_host, self.port), 06:17:38 self.timeout, 06:17:38 source_address=self.source_address, 06:17:38 socket_options=self.socket_options, 06:17:38 ) 06:17:38 except socket.gaierror as e: 06:17:38 raise NameResolutionError(self.host, self, e) from e 06:17:38 except SocketTimeout as e: 06:17:38 raise ConnectTimeoutError( 06:17:38 self, 06:17:38 f"Connection to {self.host} timed out. (connect timeout={self.timeout})", 06:17:38 ) from e 06:17:38 06:17:38 except OSError as e: 06:17:38 > raise NewConnectionError( 06:17:38 self, f"Failed to establish a new connection: {e}" 06:17:38 ) from e 06:17:38 E urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused 06:17:38 06:17:38 ../.tox/tests190/lib/python3.11/site-packages/urllib3/connection.py:213: NewConnectionError 06:17:38 06:17:38 The above exception was the direct cause of the following exception: 06:17:38 06:17:38 self = 06:17:38 request = , stream = False 06:17:38 timeout = Timeout(connect=30, read=30, total=None), verify = True, cert = None 06:17:38 proxies = OrderedDict() 06:17:38 06:17:38 def send( 06:17:38 self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 06:17:38 ): 06:17:38 """Sends PreparedRequest object. Returns Response object. 06:17:38 06:17:38 :param request: The :class:`PreparedRequest ` being sent. 06:17:38 :param stream: (optional) Whether to stream the request content. 06:17:38 :param timeout: (optional) How long to wait for the server to send 06:17:38 data before giving up, as a float, or a :ref:`(connect timeout, 06:17:38 read timeout) ` tuple. 
06:17:38 :type timeout: float or tuple or urllib3 Timeout object 06:17:38 :param verify: (optional) Either a boolean, in which case it controls whether 06:17:38 we verify the server's TLS certificate, or a string, in which case it 06:17:38 must be a path to a CA bundle to use 06:17:38 :param cert: (optional) Any user-provided SSL certificate to be trusted. 06:17:38 :param proxies: (optional) The proxies dictionary to apply to the request. 06:17:38 :rtype: requests.Response 06:17:38 """ 06:17:38 06:17:38 try: 06:17:38 conn = self.get_connection_with_tls_context( 06:17:38 request, verify, proxies=proxies, cert=cert 06:17:38 ) 06:17:38 except LocationValueError as e: 06:17:38 raise InvalidURL(e, request=request) 06:17:38 06:17:38 self.cert_verify(conn, request.url, verify, cert) 06:17:38 url = self.request_url(request, proxies) 06:17:38 self.add_headers( 06:17:38 request, 06:17:38 stream=stream, 06:17:38 timeout=timeout, 06:17:38 verify=verify, 06:17:38 cert=cert, 06:17:38 proxies=proxies, 06:17:38 ) 06:17:38 06:17:38 chunked = not (request.body is None or "Content-Length" in request.headers) 06:17:38 06:17:38 if isinstance(timeout, tuple): 06:17:38 try: 06:17:38 connect, read = timeout 06:17:38 timeout = TimeoutSauce(connect=connect, read=read) 06:17:38 except ValueError: 06:17:38 raise ValueError( 06:17:38 f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 06:17:38 f"or a single float to set both timeouts to the same value." 06:17:38 ) 06:17:38 elif isinstance(timeout, TimeoutSauce): 06:17:38 pass 06:17:38 else: 06:17:38 timeout = TimeoutSauce(connect=timeout, read=timeout) 06:17:38 06:17:38 try: 06:17:38 > resp = conn.urlopen( 06:17:38 method=request.method, 06:17:38 url=url, 06:17:38 body=request.body, 06:17:38 headers=request.headers, 06:17:38 redirect=False, 06:17:38 assert_same_host=False, 06:17:38 preload_content=False, 06:17:38 decode_content=False, 06:17:38 retries=self.max_retries, 06:17:38 timeout=timeout, 06:17:38 chunked=chunked, 06:17:38 ) 06:17:38 06:17:38 ../.tox/tests190/lib/python3.11/site-packages/requests/adapters.py:667: 06:17:38 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 06:17:38 ../.tox/tests190/lib/python3.11/site-packages/urllib3/connectionpool.py:841: in urlopen 06:17:38 retries = retries.increment( 06:17:38 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 06:17:38 06:17:38 self = Retry(total=0, connect=None, read=False, redirect=None, status=None) 06:17:38 method = 'GET' 06:17:38 url = '/rests/data/transportpce-portmapping:network/nodes=XPDR-OC/mapping=XPDR1-NETWORK5' 06:17:38 response = None 06:17:38 error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused') 06:17:38 _pool = 06:17:38 _stacktrace = 06:17:38 06:17:38 def increment( 06:17:38 self, 06:17:38 method: str | None = None, 06:17:38 url: str | None = None, 06:17:38 response: BaseHTTPResponse | None = None, 06:17:38 error: Exception | None = None, 06:17:38 _pool: ConnectionPool | None = None, 06:17:38 _stacktrace: TracebackType | None = None, 06:17:38 ) -> Self: 06:17:38 """Return a new Retry object with incremented retry counters. 06:17:38 06:17:38 :param response: A response object, or None, if the server did not 06:17:38 return a response. 06:17:38 :type response: :class:`~urllib3.response.BaseHTTPResponse` 06:17:38 :param Exception error: An error encountered during the request, or 06:17:38 None if the response was received successfully. 
06:17:38 06:17:38 :return: A new ``Retry`` object. 06:17:38 """ 06:17:38 if self.total is False and error: 06:17:38 # Disabled, indicate to re-raise the error. 06:17:38 raise reraise(type(error), error, _stacktrace) 06:17:38 06:17:38 total = self.total 06:17:38 if total is not None: 06:17:38 total -= 1 06:17:38 06:17:38 connect = self.connect 06:17:38 read = self.read 06:17:38 redirect = self.redirect 06:17:38 status_count = self.status 06:17:38 other = self.other 06:17:38 cause = "unknown" 06:17:38 status = None 06:17:38 redirect_location = None 06:17:38 06:17:38 if error and self._is_connection_error(error): 06:17:38 # Connect retry? 06:17:38 if connect is False: 06:17:38 raise reraise(type(error), error, _stacktrace) 06:17:38 elif connect is not None: 06:17:38 connect -= 1 06:17:38 06:17:38 elif error and self._is_read_error(error): 06:17:38 # Read retry? 06:17:38 if read is False or method is None or not self._is_method_retryable(method): 06:17:38 raise reraise(type(error), error, _stacktrace) 06:17:38 elif read is not None: 06:17:38 read -= 1 06:17:38 06:17:38 elif error: 06:17:38 # Other retry? 06:17:38 if other is not None: 06:17:38 other -= 1 06:17:38 06:17:38 elif response and response.get_redirect_location(): 06:17:38 # Redirect retry? 06:17:38 if redirect is not None: 06:17:38 redirect -= 1 06:17:38 cause = "too many redirects" 06:17:38 response_redirect_location = response.get_redirect_location() 06:17:38 if response_redirect_location: 06:17:38 redirect_location = response_redirect_location 06:17:38 status = response.status 06:17:38 06:17:38 else: 06:17:38 # Incrementing because of a server error like a 500 in 06:17:38 # status_forcelist and the given method is in the allowed_methods 06:17:38 cause = ResponseError.GENERIC_ERROR 06:17:38 if response and response.status: 06:17:38 if status_count is not None: 06:17:38 status_count -= 1 06:17:38 cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status) 06:17:38 status = response.status 06:17:38 06:17:38 history = self.history + ( 06:17:38 RequestHistory(method, url, error, status, redirect_location), 06:17:38 ) 06:17:38 06:17:38 new_retry = self.new( 06:17:38 total=total, 06:17:38 connect=connect, 06:17:38 read=read, 06:17:38 redirect=redirect, 06:17:38 status=status_count, 06:17:38 other=other, 06:17:38 history=history, 06:17:38 ) 06:17:38 06:17:38 if new_retry.is_exhausted(): 06:17:38 reason = error or ResponseError(cause) 06:17:38 > raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type] 06:17:38 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 06:17:38 E urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=8190): Max retries exceeded with url: /rests/data/transportpce-portmapping:network/nodes=XPDR-OC/mapping=XPDR1-NETWORK5 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 06:17:38 06:17:38 ../.tox/tests190/lib/python3.11/site-packages/urllib3/util/retry.py:519: MaxRetryError 06:17:38 06:17:38 During handling of the above exception, another exception occurred: 06:17:38 06:17:38 self = 06:17:38 06:17:38 def test_06_mpdr_portmapping_NETWORK5(self): 06:17:38 > response = test_utils.get_portmapping_node_attr("XPDR-OC", "mapping", "XPDR1-NETWORK5") 06:17:38 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 06:17:38 06:17:38 transportpce_tests/oc/test01_portmapping.py:80: 06:17:38 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 06:17:38 
transportpce_tests/common/test_utils.py:492: in get_portmapping_node_attr 06:17:38 response = get_request(target_url) 06:17:38 ^^^^^^^^^^^^^^^^^^^^^^^ 06:17:38 transportpce_tests/common/test_utils.py:116: in get_request 06:17:38 return requests.request( 06:17:38 ../.tox/tests190/lib/python3.11/site-packages/requests/api.py:59: in request 06:17:38 return session.request(method=method, url=url, **kwargs) 06:17:38 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 06:17:38 ../.tox/tests190/lib/python3.11/site-packages/requests/sessions.py:589: in request 06:17:38 resp = self.send(prep, **send_kwargs) 06:17:38 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 06:17:38 ../.tox/tests190/lib/python3.11/site-packages/requests/sessions.py:703: in send 06:17:38 r = adapter.send(request, **kwargs) 06:17:38 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 06:17:38 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 06:17:38 06:17:38 self = 06:17:38 request = , stream = False 06:17:38 timeout = Timeout(connect=30, read=30, total=None), verify = True, cert = None 06:17:38 proxies = OrderedDict() 06:17:38 06:17:38 def send( 06:17:38 self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 06:17:38 ): 06:17:38 """Sends PreparedRequest object. Returns Response object. 06:17:38 06:17:38 :param request: The :class:`PreparedRequest ` being sent. 06:17:38 :param stream: (optional) Whether to stream the request content. 06:17:38 :param timeout: (optional) How long to wait for the server to send 06:17:38 data before giving up, as a float, or a :ref:`(connect timeout, 06:17:38 read timeout) ` tuple. 06:17:38 :type timeout: float or tuple or urllib3 Timeout object 06:17:38 :param verify: (optional) Either a boolean, in which case it controls whether 06:17:38 we verify the server's TLS certificate, or a string, in which case it 06:17:38 must be a path to a CA bundle to use 06:17:38 :param cert: (optional) Any user-provided SSL certificate to be trusted. 06:17:38 :param proxies: (optional) The proxies dictionary to apply to the request. 06:17:38 :rtype: requests.Response 06:17:38 """ 06:17:38 06:17:38 try: 06:17:38 conn = self.get_connection_with_tls_context( 06:17:38 request, verify, proxies=proxies, cert=cert 06:17:38 ) 06:17:38 except LocationValueError as e: 06:17:38 raise InvalidURL(e, request=request) 06:17:38 06:17:38 self.cert_verify(conn, request.url, verify, cert) 06:17:38 url = self.request_url(request, proxies) 06:17:38 self.add_headers( 06:17:38 request, 06:17:38 stream=stream, 06:17:38 timeout=timeout, 06:17:38 verify=verify, 06:17:38 cert=cert, 06:17:38 proxies=proxies, 06:17:38 ) 06:17:38 06:17:38 chunked = not (request.body is None or "Content-Length" in request.headers) 06:17:38 06:17:38 if isinstance(timeout, tuple): 06:17:38 try: 06:17:38 connect, read = timeout 06:17:38 timeout = TimeoutSauce(connect=connect, read=read) 06:17:38 except ValueError: 06:17:38 raise ValueError( 06:17:38 f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 06:17:38 f"or a single float to set both timeouts to the same value." 
06:17:38 ) 06:17:38 elif isinstance(timeout, TimeoutSauce): 06:17:38 pass 06:17:38 else: 06:17:38 timeout = TimeoutSauce(connect=timeout, read=timeout) 06:17:38 06:17:38 try: 06:17:38 resp = conn.urlopen( 06:17:38 method=request.method, 06:17:38 url=url, 06:17:38 body=request.body, 06:17:38 headers=request.headers, 06:17:38 redirect=False, 06:17:38 assert_same_host=False, 06:17:38 preload_content=False, 06:17:38 decode_content=False, 06:17:38 retries=self.max_retries, 06:17:38 timeout=timeout, 06:17:38 chunked=chunked, 06:17:38 ) 06:17:38 06:17:38 except (ProtocolError, OSError) as err: 06:17:38 raise ConnectionError(err, request=request) 06:17:38 06:17:38 except MaxRetryError as e: 06:17:38 if isinstance(e.reason, ConnectTimeoutError): 06:17:38 # TODO: Remove this in 3.0.0: see #2811 06:17:38 if not isinstance(e.reason, NewConnectionError): 06:17:38 raise ConnectTimeout(e, request=request) 06:17:38 06:17:38 if isinstance(e.reason, ResponseError): 06:17:38 raise RetryError(e, request=request) 06:17:38 06:17:38 if isinstance(e.reason, _ProxyError): 06:17:38 raise ProxyError(e, request=request) 06:17:38 06:17:38 if isinstance(e.reason, _SSLError): 06:17:38 # This branch is for urllib3 v1.22 and later. 06:17:38 raise SSLError(e, request=request) 06:17:38 06:17:38 > raise ConnectionError(e, request=request) 06:17:38 E requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=8190): Max retries exceeded with url: /rests/data/transportpce-portmapping:network/nodes=XPDR-OC/mapping=XPDR1-NETWORK5 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 06:17:38 06:17:38 ../.tox/tests190/lib/python3.11/site-packages/requests/adapters.py:700: ConnectionError 06:17:38 ----------------------------- Captured stdout call ----------------------------- 06:17:38 execution of test_06_mpdr_portmapping_NETWORK5 06:17:38 ______ TransportpceOCPortMappingTesting.test_07_mpdr_portmapping_CLIENT1 _______ 06:17:38 06:17:38 self = 06:17:38 06:17:38 def _new_conn(self) -> socket.socket: 06:17:38 """Establish a socket connection and set nodelay settings on it. 06:17:38 06:17:38 :return: New socket connection. 06:17:38 """ 06:17:38 try: 06:17:38 > sock = connection.create_connection( 06:17:38 (self._dns_host, self.port), 06:17:38 self.timeout, 06:17:38 source_address=self.source_address, 06:17:38 socket_options=self.socket_options, 06:17:38 ) 06:17:38 06:17:38 ../.tox/tests190/lib/python3.11/site-packages/urllib3/connection.py:198: 06:17:38 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 06:17:38 ../.tox/tests190/lib/python3.11/site-packages/urllib3/util/connection.py:85: in create_connection 06:17:38 raise err 06:17:38 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 06:17:38 06:17:38 address = ('localhost', 8190), timeout = 30, source_address = None 06:17:38 socket_options = [(6, 1, 1)] 06:17:38 06:17:38 def create_connection( 06:17:38 address: tuple[str, int], 06:17:38 timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 06:17:38 source_address: tuple[str, int] | None = None, 06:17:38 socket_options: _TYPE_SOCKET_OPTIONS | None = None, 06:17:38 ) -> socket.socket: 06:17:38 """Connect to *address* and return the socket object. 06:17:38 06:17:38 Convenience function. Connect to *address* (a 2-tuple ``(host, 06:17:38 port)``) and return the socket object. Passing the optional 06:17:38 *timeout* parameter will set the timeout on the socket instance 06:17:38 before attempting to connect. 
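Every failure above and below reduces to the same root cause: nothing is listening on localhost:8190, so each GET is refused at the TCP level (errno 111) before any RESTCONF request is made. A minimal sketch of a reachability probe for that endpoint, assuming the host and port shown in the tracebacks; the helper name is hypothetical and not part of transportpce_tests:

    import socket

    def restconf_reachable(host: str = "localhost", port: int = 8190, timeout: float = 2.0) -> bool:
        """Return True if a TCP connection to host:port succeeds within timeout."""
        try:
            # socket.create_connection raises ConnectionRefusedError (errno 111)
            # when nothing listens on the port, mirroring the failures in this log.
            with socket.create_connection((host, port), timeout=timeout):
                return True
        except OSError:
            return False

    print(restconf_reachable())  # False while the controller on port 8190 is down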
If no *timeout* is supplied, the 06:17:38 global default timeout setting returned by :func:`socket.getdefaulttimeout` 06:17:38 is used. If *source_address* is set it must be a tuple of (host, port) 06:17:38 for the socket to bind as a source address before making the connection. 06:17:38 An host of '' or port 0 tells the OS to use the default. 06:17:38 """ 06:17:38 06:17:38 host, port = address 06:17:38 if host.startswith("["): 06:17:38 host = host.strip("[]") 06:17:38 err = None 06:17:38 06:17:38 # Using the value from allowed_gai_family() in the context of getaddrinfo lets 06:17:38 # us select whether to work with IPv4 DNS records, IPv6 records, or both. 06:17:38 # The original create_connection function always returns all records. 06:17:38 family = allowed_gai_family() 06:17:38 06:17:38 try: 06:17:38 host.encode("idna") 06:17:38 except UnicodeError: 06:17:38 raise LocationParseError(f"'{host}', label empty or too long") from None 06:17:38 06:17:38 for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM): 06:17:38 af, socktype, proto, canonname, sa = res 06:17:38 sock = None 06:17:38 try: 06:17:38 sock = socket.socket(af, socktype, proto) 06:17:38 06:17:38 # If provided, set socket level options before connecting. 06:17:38 _set_socket_options(sock, socket_options) 06:17:38 06:17:38 if timeout is not _DEFAULT_TIMEOUT: 06:17:38 sock.settimeout(timeout) 06:17:38 if source_address: 06:17:38 sock.bind(source_address) 06:17:38 > sock.connect(sa) 06:17:38 E ConnectionRefusedError: [Errno 111] Connection refused 06:17:38 06:17:38 ../.tox/tests190/lib/python3.11/site-packages/urllib3/util/connection.py:73: ConnectionRefusedError 06:17:38 06:17:38 The above exception was the direct cause of the following exception: 06:17:38 06:17:38 self = 06:17:38 method = 'GET' 06:17:38 url = '/rests/data/transportpce-portmapping:network/nodes=XPDR-OC/mapping=XPDR1-CLIENT1' 06:17:38 body = None 06:17:38 headers = {'User-Agent': 'python-requests/2.32.4', 'Accept-Encoding': 'gzip, deflate', 'Accept': 'application/json', 'Connection': 'keep-alive', 'Content-Type': 'application/json', 'Authorization': 'Basic YWRtaW46YWRtaW4='} 06:17:38 retries = Retry(total=0, connect=None, read=False, redirect=None, status=None) 06:17:38 redirect = False, assert_same_host = False 06:17:38 timeout = Timeout(connect=30, read=30, total=None), pool_timeout = None 06:17:38 release_conn = False, chunked = False, body_pos = None, preload_content = False 06:17:38 decode_content = False, response_kw = {} 06:17:38 parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/rests/data/transportpce-portmapping:network/nodes=XPDR-OC/mapping=XPDR1-CLIENT1', query=None, fragment=None) 06:17:38 destination_scheme = None, conn = None, release_this_conn = True 06:17:38 http_tunnel_required = False, err = None, clean_exit = False 06:17:38 06:17:38 def urlopen( # type: ignore[override] 06:17:38 self, 06:17:38 method: str, 06:17:38 url: str, 06:17:38 body: _TYPE_BODY | None = None, 06:17:38 headers: typing.Mapping[str, str] | None = None, 06:17:38 retries: Retry | bool | int | None = None, 06:17:38 redirect: bool = True, 06:17:38 assert_same_host: bool = True, 06:17:38 timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 06:17:38 pool_timeout: int | None = None, 06:17:38 release_conn: bool | None = None, 06:17:38 chunked: bool = False, 06:17:38 body_pos: _TYPE_BODY_POSITION | None = None, 06:17:38 preload_content: bool = True, 06:17:38 decode_content: bool = True, 06:17:38 **response_kw: typing.Any, 06:17:38 ) -> BaseHTTPResponse: 06:17:38 
""" 06:17:38 Get a connection from the pool and perform an HTTP request. This is the 06:17:38 lowest level call for making a request, so you'll need to specify all 06:17:38 the raw details. 06:17:38 06:17:38 .. note:: 06:17:38 06:17:38 More commonly, it's appropriate to use a convenience method 06:17:38 such as :meth:`request`. 06:17:38 06:17:38 .. note:: 06:17:38 06:17:38 `release_conn` will only behave as expected if 06:17:38 `preload_content=False` because we want to make 06:17:38 `preload_content=False` the default behaviour someday soon without 06:17:38 breaking backwards compatibility. 06:17:38 06:17:38 :param method: 06:17:38 HTTP request method (such as GET, POST, PUT, etc.) 06:17:38 06:17:38 :param url: 06:17:38 The URL to perform the request on. 06:17:38 06:17:38 :param body: 06:17:38 Data to send in the request body, either :class:`str`, :class:`bytes`, 06:17:38 an iterable of :class:`str`/:class:`bytes`, or a file-like object. 06:17:38 06:17:38 :param headers: 06:17:38 Dictionary of custom headers to send, such as User-Agent, 06:17:38 If-None-Match, etc. If None, pool headers are used. If provided, 06:17:38 these headers completely replace any pool-specific headers. 06:17:38 06:17:38 :param retries: 06:17:38 Configure the number of retries to allow before raising a 06:17:38 :class:`~urllib3.exceptions.MaxRetryError` exception. 06:17:38 06:17:38 If ``None`` (default) will retry 3 times, see ``Retry.DEFAULT``. Pass a 06:17:38 :class:`~urllib3.util.retry.Retry` object for fine-grained control 06:17:38 over different types of retries. 06:17:38 Pass an integer number to retry connection errors that many times, 06:17:38 but no other types of errors. Pass zero to never retry. 06:17:38 06:17:38 If ``False``, then retries are disabled and any exception is raised 06:17:38 immediately. Also, instead of raising a MaxRetryError on redirects, 06:17:38 the redirect response will be returned. 06:17:38 06:17:38 :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. 06:17:38 06:17:38 :param redirect: 06:17:38 If True, automatically handle redirects (status codes 301, 302, 06:17:38 303, 307, 308). Each redirect counts as a retry. Disabling retries 06:17:38 will disable redirect, too. 06:17:38 06:17:38 :param assert_same_host: 06:17:38 If ``True``, will make sure that the host of the pool requests is 06:17:38 consistent else will raise HostChangedError. When ``False``, you can 06:17:38 use the pool on an HTTP proxy and request foreign hosts. 06:17:38 06:17:38 :param timeout: 06:17:38 If specified, overrides the default timeout for this one 06:17:38 request. It may be a float (in seconds) or an instance of 06:17:38 :class:`urllib3.util.Timeout`. 06:17:38 06:17:38 :param pool_timeout: 06:17:38 If set and the pool is set to block=True, then this method will 06:17:38 block for ``pool_timeout`` seconds and raise EmptyPoolError if no 06:17:38 connection is available within the time period. 06:17:38 06:17:38 :param bool preload_content: 06:17:38 If True, the response's body will be preloaded into memory. 06:17:38 06:17:38 :param bool decode_content: 06:17:38 If True, will attempt to decode the body based on the 06:17:38 'content-encoding' header. 06:17:38 06:17:38 :param release_conn: 06:17:38 If False, then the urlopen call will not release the connection 06:17:38 back into the pool once a response is received (but will release if 06:17:38 you read the entire contents of the response such as when 06:17:38 `preload_content=True`). 
This is useful if you're not preloading 06:17:38 the response's content immediately. You will need to call 06:17:38 ``r.release_conn()`` on the response ``r`` to return the connection 06:17:38 back into the pool. If None, it takes the value of ``preload_content`` 06:17:38 which defaults to ``True``. 06:17:38 06:17:38 :param bool chunked: 06:17:38 If True, urllib3 will send the body using chunked transfer 06:17:38 encoding. Otherwise, urllib3 will send the body using the standard 06:17:38 content-length form. Defaults to False. 06:17:38 06:17:38 :param int body_pos: 06:17:38 Position to seek to in file-like body in the event of a retry or 06:17:38 redirect. Typically this won't need to be set because urllib3 will 06:17:38 auto-populate the value when needed. 06:17:38 """ 06:17:38 parsed_url = parse_url(url) 06:17:38 destination_scheme = parsed_url.scheme 06:17:38 06:17:38 if headers is None: 06:17:38 headers = self.headers 06:17:38 06:17:38 if not isinstance(retries, Retry): 06:17:38 retries = Retry.from_int(retries, redirect=redirect, default=self.retries) 06:17:38 06:17:38 if release_conn is None: 06:17:38 release_conn = preload_content 06:17:38 06:17:38 # Check host 06:17:38 if assert_same_host and not self.is_same_host(url): 06:17:38 raise HostChangedError(self, url, retries) 06:17:38 06:17:38 # Ensure that the URL we're connecting to is properly encoded 06:17:38 if url.startswith("/"): 06:17:38 url = to_str(_encode_target(url)) 06:17:38 else: 06:17:38 url = to_str(parsed_url.url) 06:17:38 06:17:38 conn = None 06:17:38 06:17:38 # Track whether `conn` needs to be released before 06:17:38 # returning/raising/recursing. Update this variable if necessary, and 06:17:38 # leave `release_conn` constant throughout the function. That way, if 06:17:38 # the function recurses, the original value of `release_conn` will be 06:17:38 # passed down into the recursive call, and its value will be respected. 06:17:38 # 06:17:38 # See issue #651 [1] for details. 06:17:38 # 06:17:38 # [1] 06:17:38 release_this_conn = release_conn 06:17:38 06:17:38 http_tunnel_required = connection_requires_http_tunnel( 06:17:38 self.proxy, self.proxy_config, destination_scheme 06:17:38 ) 06:17:38 06:17:38 # Merge the proxy headers. Only done when not using HTTP CONNECT. We 06:17:38 # have to copy the headers dict so we can safely change it without those 06:17:38 # changes being reflected in anyone else's copy. 06:17:38 if not http_tunnel_required: 06:17:38 headers = headers.copy() # type: ignore[attr-defined] 06:17:38 headers.update(self.proxy_headers) # type: ignore[union-attr] 06:17:38 06:17:38 # Must keep the exception bound to a separate variable or else Python 3 06:17:38 # complains about UnboundLocalError. 06:17:38 err = None 06:17:38 06:17:38 # Keep track of whether we cleanly exited the except block. This 06:17:38 # ensures we do proper cleanup in finally. 06:17:38 clean_exit = False 06:17:38 06:17:38 # Rewind body position, if needed. Record current position 06:17:38 # for future rewinds in the event of a redirect/retry. 06:17:38 body_pos = set_file_position(body, body_pos) 06:17:38 06:17:38 try: 06:17:38 # Request a connection from the queue. 06:17:38 timeout_obj = self._get_timeout(timeout) 06:17:38 conn = self._get_conn(timeout=pool_timeout) 06:17:38 06:17:38 conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] 06:17:38 06:17:38 # Is this a closed/new connection that requires CONNECT tunnelling? 
06:17:38 if self.proxy is not None and http_tunnel_required and conn.is_closed: 06:17:38 try: 06:17:38 self._prepare_proxy(conn) 06:17:38 except (BaseSSLError, OSError, SocketTimeout) as e: 06:17:38 self._raise_timeout( 06:17:38 err=e, url=self.proxy.url, timeout_value=conn.timeout 06:17:38 ) 06:17:38 raise 06:17:38 06:17:38 # If we're going to release the connection in ``finally:``, then 06:17:38 # the response doesn't need to know about the connection. Otherwise 06:17:38 # it will also try to release it and we'll have a double-release 06:17:38 # mess. 06:17:38 response_conn = conn if not release_conn else None 06:17:38 06:17:38 # Make the request on the HTTPConnection object 06:17:38 > response = self._make_request( 06:17:38 conn, 06:17:38 method, 06:17:38 url, 06:17:38 timeout=timeout_obj, 06:17:38 body=body, 06:17:38 headers=headers, 06:17:38 chunked=chunked, 06:17:38 retries=retries, 06:17:38 response_conn=response_conn, 06:17:38 preload_content=preload_content, 06:17:38 decode_content=decode_content, 06:17:38 **response_kw, 06:17:38 ) 06:17:38 06:17:38 ../.tox/tests190/lib/python3.11/site-packages/urllib3/connectionpool.py:787: 06:17:38 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 06:17:38 ../.tox/tests190/lib/python3.11/site-packages/urllib3/connectionpool.py:493: in _make_request 06:17:38 conn.request( 06:17:38 ../.tox/tests190/lib/python3.11/site-packages/urllib3/connection.py:494: in request 06:17:38 self.endheaders() 06:17:38 /opt/pyenv/versions/3.11.10/lib/python3.11/http/client.py:1298: in endheaders 06:17:38 self._send_output(message_body, encode_chunked=encode_chunked) 06:17:38 /opt/pyenv/versions/3.11.10/lib/python3.11/http/client.py:1058: in _send_output 06:17:38 self.send(msg) 06:17:38 /opt/pyenv/versions/3.11.10/lib/python3.11/http/client.py:996: in send 06:17:38 self.connect() 06:17:38 ../.tox/tests190/lib/python3.11/site-packages/urllib3/connection.py:325: in connect 06:17:38 self.sock = self._new_conn() 06:17:38 ^^^^^^^^^^^^^^^^ 06:17:38 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 06:17:38 06:17:38 self = 06:17:38 06:17:38 def _new_conn(self) -> socket.socket: 06:17:38 """Establish a socket connection and set nodelay settings on it. 06:17:38 06:17:38 :return: New socket connection. 06:17:38 """ 06:17:38 try: 06:17:38 sock = connection.create_connection( 06:17:38 (self._dns_host, self.port), 06:17:38 self.timeout, 06:17:38 source_address=self.source_address, 06:17:38 socket_options=self.socket_options, 06:17:38 ) 06:17:38 except socket.gaierror as e: 06:17:38 raise NameResolutionError(self.host, self, e) from e 06:17:38 except SocketTimeout as e: 06:17:38 raise ConnectTimeoutError( 06:17:38 self, 06:17:38 f"Connection to {self.host} timed out. 
(connect timeout={self.timeout})", 06:17:38 ) from e 06:17:38 06:17:38 except OSError as e: 06:17:38 > raise NewConnectionError( 06:17:38 self, f"Failed to establish a new connection: {e}" 06:17:38 ) from e 06:17:38 E urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused 06:17:38 06:17:38 ../.tox/tests190/lib/python3.11/site-packages/urllib3/connection.py:213: NewConnectionError 06:17:38 06:17:38 The above exception was the direct cause of the following exception: 06:17:38 06:17:38 self = 06:17:38 request = , stream = False 06:17:38 timeout = Timeout(connect=30, read=30, total=None), verify = True, cert = None 06:17:38 proxies = OrderedDict() 06:17:38 06:17:38 def send( 06:17:38 self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 06:17:38 ): 06:17:38 """Sends PreparedRequest object. Returns Response object. 06:17:38 06:17:38 :param request: The :class:`PreparedRequest ` being sent. 06:17:38 :param stream: (optional) Whether to stream the request content. 06:17:38 :param timeout: (optional) How long to wait for the server to send 06:17:38 data before giving up, as a float, or a :ref:`(connect timeout, 06:17:38 read timeout) ` tuple. 06:17:38 :type timeout: float or tuple or urllib3 Timeout object 06:17:38 :param verify: (optional) Either a boolean, in which case it controls whether 06:17:38 we verify the server's TLS certificate, or a string, in which case it 06:17:38 must be a path to a CA bundle to use 06:17:38 :param cert: (optional) Any user-provided SSL certificate to be trusted. 06:17:38 :param proxies: (optional) The proxies dictionary to apply to the request. 06:17:38 :rtype: requests.Response 06:17:38 """ 06:17:38 06:17:38 try: 06:17:38 conn = self.get_connection_with_tls_context( 06:17:38 request, verify, proxies=proxies, cert=cert 06:17:38 ) 06:17:38 except LocationValueError as e: 06:17:38 raise InvalidURL(e, request=request) 06:17:38 06:17:38 self.cert_verify(conn, request.url, verify, cert) 06:17:38 url = self.request_url(request, proxies) 06:17:38 self.add_headers( 06:17:38 request, 06:17:38 stream=stream, 06:17:38 timeout=timeout, 06:17:38 verify=verify, 06:17:38 cert=cert, 06:17:38 proxies=proxies, 06:17:38 ) 06:17:38 06:17:38 chunked = not (request.body is None or "Content-Length" in request.headers) 06:17:38 06:17:38 if isinstance(timeout, tuple): 06:17:38 try: 06:17:38 connect, read = timeout 06:17:38 timeout = TimeoutSauce(connect=connect, read=read) 06:17:38 except ValueError: 06:17:38 raise ValueError( 06:17:38 f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 06:17:38 f"or a single float to set both timeouts to the same value." 
06:17:38 ) 06:17:38 elif isinstance(timeout, TimeoutSauce): 06:17:38 pass 06:17:38 else: 06:17:38 timeout = TimeoutSauce(connect=timeout, read=timeout) 06:17:38 06:17:38 try: 06:17:38 > resp = conn.urlopen( 06:17:38 method=request.method, 06:17:38 url=url, 06:17:38 body=request.body, 06:17:38 headers=request.headers, 06:17:38 redirect=False, 06:17:38 assert_same_host=False, 06:17:38 preload_content=False, 06:17:38 decode_content=False, 06:17:38 retries=self.max_retries, 06:17:38 timeout=timeout, 06:17:38 chunked=chunked, 06:17:38 ) 06:17:38 06:17:38 ../.tox/tests190/lib/python3.11/site-packages/requests/adapters.py:667: 06:17:38 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 06:17:38 ../.tox/tests190/lib/python3.11/site-packages/urllib3/connectionpool.py:841: in urlopen 06:17:38 retries = retries.increment( 06:17:38 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 06:17:38 06:17:38 self = Retry(total=0, connect=None, read=False, redirect=None, status=None) 06:17:38 method = 'GET' 06:17:38 url = '/rests/data/transportpce-portmapping:network/nodes=XPDR-OC/mapping=XPDR1-CLIENT1' 06:17:38 response = None 06:17:38 error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused') 06:17:38 _pool = 06:17:38 _stacktrace = 06:17:38 06:17:38 def increment( 06:17:38 self, 06:17:38 method: str | None = None, 06:17:38 url: str | None = None, 06:17:38 response: BaseHTTPResponse | None = None, 06:17:38 error: Exception | None = None, 06:17:38 _pool: ConnectionPool | None = None, 06:17:38 _stacktrace: TracebackType | None = None, 06:17:38 ) -> Self: 06:17:38 """Return a new Retry object with incremented retry counters. 06:17:38 06:17:38 :param response: A response object, or None, if the server did not 06:17:38 return a response. 06:17:38 :type response: :class:`~urllib3.response.BaseHTTPResponse` 06:17:38 :param Exception error: An error encountered during the request, or 06:17:38 None if the response was received successfully. 06:17:38 06:17:38 :return: A new ``Retry`` object. 06:17:38 """ 06:17:38 if self.total is False and error: 06:17:38 # Disabled, indicate to re-raise the error. 06:17:38 raise reraise(type(error), error, _stacktrace) 06:17:38 06:17:38 total = self.total 06:17:38 if total is not None: 06:17:38 total -= 1 06:17:38 06:17:38 connect = self.connect 06:17:38 read = self.read 06:17:38 redirect = self.redirect 06:17:38 status_count = self.status 06:17:38 other = self.other 06:17:38 cause = "unknown" 06:17:38 status = None 06:17:38 redirect_location = None 06:17:38 06:17:38 if error and self._is_connection_error(error): 06:17:38 # Connect retry? 06:17:38 if connect is False: 06:17:38 raise reraise(type(error), error, _stacktrace) 06:17:38 elif connect is not None: 06:17:38 connect -= 1 06:17:38 06:17:38 elif error and self._is_read_error(error): 06:17:38 # Read retry? 06:17:38 if read is False or method is None or not self._is_method_retryable(method): 06:17:38 raise reraise(type(error), error, _stacktrace) 06:17:38 elif read is not None: 06:17:38 read -= 1 06:17:38 06:17:38 elif error: 06:17:38 # Other retry? 06:17:38 if other is not None: 06:17:38 other -= 1 06:17:38 06:17:38 elif response and response.get_redirect_location(): 06:17:38 # Redirect retry? 
06:17:38 if redirect is not None: 06:17:38 redirect -= 1 06:17:38 cause = "too many redirects" 06:17:38 response_redirect_location = response.get_redirect_location() 06:17:38 if response_redirect_location: 06:17:38 redirect_location = response_redirect_location 06:17:38 status = response.status 06:17:38 06:17:38 else: 06:17:38 # Incrementing because of a server error like a 500 in 06:17:38 # status_forcelist and the given method is in the allowed_methods 06:17:38 cause = ResponseError.GENERIC_ERROR 06:17:38 if response and response.status: 06:17:38 if status_count is not None: 06:17:38 status_count -= 1 06:17:38 cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status) 06:17:38 status = response.status 06:17:38 06:17:38 history = self.history + ( 06:17:38 RequestHistory(method, url, error, status, redirect_location), 06:17:38 ) 06:17:38 06:17:38 new_retry = self.new( 06:17:38 total=total, 06:17:38 connect=connect, 06:17:38 read=read, 06:17:38 redirect=redirect, 06:17:38 status=status_count, 06:17:38 other=other, 06:17:38 history=history, 06:17:38 ) 06:17:38 06:17:38 if new_retry.is_exhausted(): 06:17:38 reason = error or ResponseError(cause) 06:17:38 > raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type] 06:17:38 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 06:17:38 E urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=8190): Max retries exceeded with url: /rests/data/transportpce-portmapping:network/nodes=XPDR-OC/mapping=XPDR1-CLIENT1 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 06:17:38 06:17:38 ../.tox/tests190/lib/python3.11/site-packages/urllib3/util/retry.py:519: MaxRetryError 06:17:38 06:17:38 During handling of the above exception, another exception occurred: 06:17:38 06:17:38 self = 06:17:38 06:17:38 def test_07_mpdr_portmapping_CLIENT1(self): 06:17:38 > response = test_utils.get_portmapping_node_attr("XPDR-OC", "mapping", "XPDR1-CLIENT1") 06:17:38 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 06:17:38 06:17:38 transportpce_tests/oc/test01_portmapping.py:102: 06:17:38 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 06:17:38 transportpce_tests/common/test_utils.py:492: in get_portmapping_node_attr 06:17:38 response = get_request(target_url) 06:17:38 ^^^^^^^^^^^^^^^^^^^^^^^ 06:17:38 transportpce_tests/common/test_utils.py:116: in get_request 06:17:38 return requests.request( 06:17:38 ../.tox/tests190/lib/python3.11/site-packages/requests/api.py:59: in request 06:17:38 return session.request(method=method, url=url, **kwargs) 06:17:38 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 06:17:38 ../.tox/tests190/lib/python3.11/site-packages/requests/sessions.py:589: in request 06:17:38 resp = self.send(prep, **send_kwargs) 06:17:38 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 06:17:38 ../.tox/tests190/lib/python3.11/site-packages/requests/sessions.py:703: in send 06:17:38 r = adapter.send(request, **kwargs) 06:17:38 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 06:17:38 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 06:17:38 06:17:38 self = 06:17:38 request = , stream = False 06:17:38 timeout = Timeout(connect=30, read=30, total=None), verify = True, cert = None 06:17:38 proxies = OrderedDict() 06:17:38 06:17:38 def send( 06:17:38 self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 06:17:38 ): 06:17:38 """Sends PreparedRequest object. Returns Response object. 
06:17:38 06:17:38 :param request: The :class:`PreparedRequest ` being sent. 06:17:38 :param stream: (optional) Whether to stream the request content. 06:17:38 :param timeout: (optional) How long to wait for the server to send 06:17:38 data before giving up, as a float, or a :ref:`(connect timeout, 06:17:38 read timeout) ` tuple. 06:17:38 :type timeout: float or tuple or urllib3 Timeout object 06:17:38 :param verify: (optional) Either a boolean, in which case it controls whether 06:17:38 we verify the server's TLS certificate, or a string, in which case it 06:17:38 must be a path to a CA bundle to use 06:17:38 :param cert: (optional) Any user-provided SSL certificate to be trusted. 06:17:38 :param proxies: (optional) The proxies dictionary to apply to the request. 06:17:38 :rtype: requests.Response 06:17:38 """ 06:17:38 06:17:38 try: 06:17:38 conn = self.get_connection_with_tls_context( 06:17:38 request, verify, proxies=proxies, cert=cert 06:17:38 ) 06:17:38 except LocationValueError as e: 06:17:38 raise InvalidURL(e, request=request) 06:17:38 06:17:38 self.cert_verify(conn, request.url, verify, cert) 06:17:38 url = self.request_url(request, proxies) 06:17:38 self.add_headers( 06:17:38 request, 06:17:38 stream=stream, 06:17:38 timeout=timeout, 06:17:38 verify=verify, 06:17:38 cert=cert, 06:17:38 proxies=proxies, 06:17:38 ) 06:17:38 06:17:38 chunked = not (request.body is None or "Content-Length" in request.headers) 06:17:38 06:17:38 if isinstance(timeout, tuple): 06:17:38 try: 06:17:38 connect, read = timeout 06:17:38 timeout = TimeoutSauce(connect=connect, read=read) 06:17:38 except ValueError: 06:17:38 raise ValueError( 06:17:38 f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 06:17:38 f"or a single float to set both timeouts to the same value." 06:17:38 ) 06:17:38 elif isinstance(timeout, TimeoutSauce): 06:17:38 pass 06:17:38 else: 06:17:38 timeout = TimeoutSauce(connect=timeout, read=timeout) 06:17:38 06:17:38 try: 06:17:38 resp = conn.urlopen( 06:17:38 method=request.method, 06:17:38 url=url, 06:17:38 body=request.body, 06:17:38 headers=request.headers, 06:17:38 redirect=False, 06:17:38 assert_same_host=False, 06:17:38 preload_content=False, 06:17:38 decode_content=False, 06:17:38 retries=self.max_retries, 06:17:38 timeout=timeout, 06:17:38 chunked=chunked, 06:17:38 ) 06:17:38 06:17:38 except (ProtocolError, OSError) as err: 06:17:38 raise ConnectionError(err, request=request) 06:17:38 06:17:38 except MaxRetryError as e: 06:17:38 if isinstance(e.reason, ConnectTimeoutError): 06:17:38 # TODO: Remove this in 3.0.0: see #2811 06:17:38 if not isinstance(e.reason, NewConnectionError): 06:17:38 raise ConnectTimeout(e, request=request) 06:17:38 06:17:38 if isinstance(e.reason, ResponseError): 06:17:38 raise RetryError(e, request=request) 06:17:38 06:17:38 if isinstance(e.reason, _ProxyError): 06:17:38 raise ProxyError(e, request=request) 06:17:38 06:17:38 if isinstance(e.reason, _SSLError): 06:17:38 # This branch is for urllib3 v1.22 and later. 
06:17:38 raise SSLError(e, request=request) 06:17:38 06:17:38 > raise ConnectionError(e, request=request) 06:17:38 E requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=8190): Max retries exceeded with url: /rests/data/transportpce-portmapping:network/nodes=XPDR-OC/mapping=XPDR1-CLIENT1 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 06:17:38 06:17:38 ../.tox/tests190/lib/python3.11/site-packages/requests/adapters.py:700: ConnectionError 06:17:38 ----------------------------- Captured stdout call ----------------------------- 06:17:38 execution of test_07_mpdr_portmapping_CLIENT1 06:17:38 _________ TransportpceOCPortMappingTesting.test_08_mpdr_switching_pool _________ 06:17:38 06:17:38 self = 06:17:38 06:17:38 def _new_conn(self) -> socket.socket: 06:17:38 """Establish a socket connection and set nodelay settings on it. 06:17:38 06:17:38 :return: New socket connection. 06:17:38 """ 06:17:38 try: 06:17:38 > sock = connection.create_connection( 06:17:38 (self._dns_host, self.port), 06:17:38 self.timeout, 06:17:38 source_address=self.source_address, 06:17:38 socket_options=self.socket_options, 06:17:38 ) 06:17:38 06:17:38 ../.tox/tests190/lib/python3.11/site-packages/urllib3/connection.py:198: 06:17:38 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 06:17:38 ../.tox/tests190/lib/python3.11/site-packages/urllib3/util/connection.py:85: in create_connection 06:17:38 raise err 06:17:38 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 06:17:38 06:17:38 address = ('localhost', 8190), timeout = 30, source_address = None 06:17:38 socket_options = [(6, 1, 1)] 06:17:38 06:17:38 def create_connection( 06:17:38 address: tuple[str, int], 06:17:38 timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 06:17:38 source_address: tuple[str, int] | None = None, 06:17:38 socket_options: _TYPE_SOCKET_OPTIONS | None = None, 06:17:38 ) -> socket.socket: 06:17:38 """Connect to *address* and return the socket object. 06:17:38 06:17:38 Convenience function. Connect to *address* (a 2-tuple ``(host, 06:17:38 port)``) and return the socket object. Passing the optional 06:17:38 *timeout* parameter will set the timeout on the socket instance 06:17:38 before attempting to connect. If no *timeout* is supplied, the 06:17:38 global default timeout setting returned by :func:`socket.getdefaulttimeout` 06:17:38 is used. If *source_address* is set it must be a tuple of (host, port) 06:17:38 for the socket to bind as a source address before making the connection. 06:17:38 An host of '' or port 0 tells the OS to use the default. 06:17:38 """ 06:17:38 06:17:38 host, port = address 06:17:38 if host.startswith("["): 06:17:38 host = host.strip("[]") 06:17:38 err = None 06:17:38 06:17:38 # Using the value from allowed_gai_family() in the context of getaddrinfo lets 06:17:38 # us select whether to work with IPv4 DNS records, IPv6 records, or both. 06:17:38 # The original create_connection function always returns all records. 
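The Retry object in these tracebacks is Retry(total=0, connect=None, read=False, redirect=None, status=None), which appears to be the default policy requests installs on its adapter: a single connection error exhausts the budget, Retry.increment() raises MaxRetryError at once, and the adapter rewraps it as requests.exceptions.ConnectionError. A minimal sketch of that path against the same endpoint, assuming the URL from the log; everything else is illustrative rather than the test suite's code:

    import requests
    from requests.adapters import HTTPAdapter
    from urllib3.util.retry import Retry

    session = requests.Session()
    # Same retry policy as in the traceback: no retries, fail fast on connect errors.
    session.mount("http://", HTTPAdapter(max_retries=Retry(total=0, read=False)))

    try:
        session.get(
            "http://localhost:8190/rests/data/transportpce-portmapping:network"
            "/nodes=XPDR-OC/mapping=XPDR1-CLIENT1",
            timeout=(30, 30),  # Timeout(connect=30, read=30) as shown in the log
        )
    except requests.exceptions.ConnectionError as exc:
        # urllib3's MaxRetryError (caused by NewConnectionError) surfaces here.
        print("controller unreachable:", exc)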
06:17:38 family = allowed_gai_family() 06:17:38 06:17:38 try: 06:17:38 host.encode("idna") 06:17:38 except UnicodeError: 06:17:38 raise LocationParseError(f"'{host}', label empty or too long") from None 06:17:38 06:17:38 for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM): 06:17:38 af, socktype, proto, canonname, sa = res 06:17:38 sock = None 06:17:38 try: 06:17:38 sock = socket.socket(af, socktype, proto) 06:17:38 06:17:38 # If provided, set socket level options before connecting. 06:17:38 _set_socket_options(sock, socket_options) 06:17:38 06:17:38 if timeout is not _DEFAULT_TIMEOUT: 06:17:38 sock.settimeout(timeout) 06:17:38 if source_address: 06:17:38 sock.bind(source_address) 06:17:38 > sock.connect(sa) 06:17:38 E ConnectionRefusedError: [Errno 111] Connection refused 06:17:38 06:17:38 ../.tox/tests190/lib/python3.11/site-packages/urllib3/util/connection.py:73: ConnectionRefusedError 06:17:38 06:17:38 The above exception was the direct cause of the following exception: 06:17:38 06:17:38 self = 06:17:38 method = 'GET' 06:17:38 url = '/rests/data/transportpce-portmapping:network/nodes=XPDR-OC/switching-pool-lcp=1' 06:17:38 body = None 06:17:38 headers = {'User-Agent': 'python-requests/2.32.4', 'Accept-Encoding': 'gzip, deflate', 'Accept': 'application/json', 'Connection': 'keep-alive', 'Content-Type': 'application/json', 'Authorization': 'Basic YWRtaW46YWRtaW4='} 06:17:38 retries = Retry(total=0, connect=None, read=False, redirect=None, status=None) 06:17:38 redirect = False, assert_same_host = False 06:17:38 timeout = Timeout(connect=30, read=30, total=None), pool_timeout = None 06:17:38 release_conn = False, chunked = False, body_pos = None, preload_content = False 06:17:38 decode_content = False, response_kw = {} 06:17:38 parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/rests/data/transportpce-portmapping:network/nodes=XPDR-OC/switching-pool-lcp=1', query=None, fragment=None) 06:17:38 destination_scheme = None, conn = None, release_this_conn = True 06:17:38 http_tunnel_required = False, err = None, clean_exit = False 06:17:38 06:17:38 def urlopen( # type: ignore[override] 06:17:38 self, 06:17:38 method: str, 06:17:38 url: str, 06:17:38 body: _TYPE_BODY | None = None, 06:17:38 headers: typing.Mapping[str, str] | None = None, 06:17:38 retries: Retry | bool | int | None = None, 06:17:38 redirect: bool = True, 06:17:38 assert_same_host: bool = True, 06:17:38 timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 06:17:38 pool_timeout: int | None = None, 06:17:38 release_conn: bool | None = None, 06:17:38 chunked: bool = False, 06:17:38 body_pos: _TYPE_BODY_POSITION | None = None, 06:17:38 preload_content: bool = True, 06:17:38 decode_content: bool = True, 06:17:38 **response_kw: typing.Any, 06:17:38 ) -> BaseHTTPResponse: 06:17:38 """ 06:17:38 Get a connection from the pool and perform an HTTP request. This is the 06:17:38 lowest level call for making a request, so you'll need to specify all 06:17:38 the raw details. 06:17:38 06:17:38 .. note:: 06:17:38 06:17:38 More commonly, it's appropriate to use a convenience method 06:17:38 such as :meth:`request`. 06:17:38 06:17:38 .. note:: 06:17:38 06:17:38 `release_conn` will only behave as expected if 06:17:38 `preload_content=False` because we want to make 06:17:38 `preload_content=False` the default behaviour someday soon without 06:17:38 breaking backwards compatibility. 06:17:38 06:17:38 :param method: 06:17:38 HTTP request method (such as GET, POST, PUT, etc.) 
06:17:38 06:17:38 :param url: 06:17:38 The URL to perform the request on. 06:17:38 06:17:38 :param body: 06:17:38 Data to send in the request body, either :class:`str`, :class:`bytes`, 06:17:38 an iterable of :class:`str`/:class:`bytes`, or a file-like object. 06:17:38 06:17:38 :param headers: 06:17:38 Dictionary of custom headers to send, such as User-Agent, 06:17:38 If-None-Match, etc. If None, pool headers are used. If provided, 06:17:38 these headers completely replace any pool-specific headers. 06:17:38 06:17:38 :param retries: 06:17:38 Configure the number of retries to allow before raising a 06:17:38 :class:`~urllib3.exceptions.MaxRetryError` exception. 06:17:38 06:17:38 If ``None`` (default) will retry 3 times, see ``Retry.DEFAULT``. Pass a 06:17:38 :class:`~urllib3.util.retry.Retry` object for fine-grained control 06:17:38 over different types of retries. 06:17:38 Pass an integer number to retry connection errors that many times, 06:17:38 but no other types of errors. Pass zero to never retry. 06:17:38 06:17:38 If ``False``, then retries are disabled and any exception is raised 06:17:38 immediately. Also, instead of raising a MaxRetryError on redirects, 06:17:38 the redirect response will be returned. 06:17:38 06:17:38 :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. 06:17:38 06:17:38 :param redirect: 06:17:38 If True, automatically handle redirects (status codes 301, 302, 06:17:38 303, 307, 308). Each redirect counts as a retry. Disabling retries 06:17:38 will disable redirect, too. 06:17:38 06:17:38 :param assert_same_host: 06:17:38 If ``True``, will make sure that the host of the pool requests is 06:17:38 consistent else will raise HostChangedError. When ``False``, you can 06:17:38 use the pool on an HTTP proxy and request foreign hosts. 06:17:38 06:17:38 :param timeout: 06:17:38 If specified, overrides the default timeout for this one 06:17:38 request. It may be a float (in seconds) or an instance of 06:17:38 :class:`urllib3.util.Timeout`. 06:17:38 06:17:38 :param pool_timeout: 06:17:38 If set and the pool is set to block=True, then this method will 06:17:38 block for ``pool_timeout`` seconds and raise EmptyPoolError if no 06:17:38 connection is available within the time period. 06:17:38 06:17:38 :param bool preload_content: 06:17:38 If True, the response's body will be preloaded into memory. 06:17:38 06:17:38 :param bool decode_content: 06:17:38 If True, will attempt to decode the body based on the 06:17:38 'content-encoding' header. 06:17:38 06:17:38 :param release_conn: 06:17:38 If False, then the urlopen call will not release the connection 06:17:38 back into the pool once a response is received (but will release if 06:17:38 you read the entire contents of the response such as when 06:17:38 `preload_content=True`). This is useful if you're not preloading 06:17:38 the response's content immediately. You will need to call 06:17:38 ``r.release_conn()`` on the response ``r`` to return the connection 06:17:38 back into the pool. If None, it takes the value of ``preload_content`` 06:17:38 which defaults to ``True``. 06:17:38 06:17:38 :param bool chunked: 06:17:38 If True, urllib3 will send the body using chunked transfer 06:17:38 encoding. Otherwise, urllib3 will send the body using the standard 06:17:38 content-length form. Defaults to False. 06:17:38 06:17:38 :param int body_pos: 06:17:38 Position to seek to in file-like body in the event of a retry or 06:17:38 redirect. 
Typically this won't need to be set because urllib3 will 06:17:38 auto-populate the value when needed. 06:17:38 """ 06:17:38 parsed_url = parse_url(url) 06:17:38 destination_scheme = parsed_url.scheme 06:17:38 06:17:38 if headers is None: 06:17:38 headers = self.headers 06:17:38 06:17:38 if not isinstance(retries, Retry): 06:17:38 retries = Retry.from_int(retries, redirect=redirect, default=self.retries) 06:17:38 06:17:38 if release_conn is None: 06:17:38 release_conn = preload_content 06:17:38 06:17:38 # Check host 06:17:38 if assert_same_host and not self.is_same_host(url): 06:17:38 raise HostChangedError(self, url, retries) 06:17:38 06:17:38 # Ensure that the URL we're connecting to is properly encoded 06:17:38 if url.startswith("/"): 06:17:38 url = to_str(_encode_target(url)) 06:17:38 else: 06:17:38 url = to_str(parsed_url.url) 06:17:38 06:17:38 conn = None 06:17:38 06:17:38 # Track whether `conn` needs to be released before 06:17:38 # returning/raising/recursing. Update this variable if necessary, and 06:17:38 # leave `release_conn` constant throughout the function. That way, if 06:17:38 # the function recurses, the original value of `release_conn` will be 06:17:38 # passed down into the recursive call, and its value will be respected. 06:17:38 # 06:17:38 # See issue #651 [1] for details. 06:17:38 # 06:17:38 # [1] 06:17:38 release_this_conn = release_conn 06:17:38 06:17:38 http_tunnel_required = connection_requires_http_tunnel( 06:17:38 self.proxy, self.proxy_config, destination_scheme 06:17:38 ) 06:17:38 06:17:38 # Merge the proxy headers. Only done when not using HTTP CONNECT. We 06:17:38 # have to copy the headers dict so we can safely change it without those 06:17:38 # changes being reflected in anyone else's copy. 06:17:38 if not http_tunnel_required: 06:17:38 headers = headers.copy() # type: ignore[attr-defined] 06:17:38 headers.update(self.proxy_headers) # type: ignore[union-attr] 06:17:38 06:17:38 # Must keep the exception bound to a separate variable or else Python 3 06:17:38 # complains about UnboundLocalError. 06:17:38 err = None 06:17:38 06:17:38 # Keep track of whether we cleanly exited the except block. This 06:17:38 # ensures we do proper cleanup in finally. 06:17:38 clean_exit = False 06:17:38 06:17:38 # Rewind body position, if needed. Record current position 06:17:38 # for future rewinds in the event of a redirect/retry. 06:17:38 body_pos = set_file_position(body, body_pos) 06:17:38 06:17:38 try: 06:17:38 # Request a connection from the queue. 06:17:38 timeout_obj = self._get_timeout(timeout) 06:17:38 conn = self._get_conn(timeout=pool_timeout) 06:17:38 06:17:38 conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] 06:17:38 06:17:38 # Is this a closed/new connection that requires CONNECT tunnelling? 06:17:38 if self.proxy is not None and http_tunnel_required and conn.is_closed: 06:17:38 try: 06:17:38 self._prepare_proxy(conn) 06:17:38 except (BaseSSLError, OSError, SocketTimeout) as e: 06:17:38 self._raise_timeout( 06:17:38 err=e, url=self.proxy.url, timeout_value=conn.timeout 06:17:38 ) 06:17:38 raise 06:17:38 06:17:38 # If we're going to release the connection in ``finally:``, then 06:17:38 # the response doesn't need to know about the connection. Otherwise 06:17:38 # it will also try to release it and we'll have a double-release 06:17:38 # mess. 
06:17:38 response_conn = conn if not release_conn else None 06:17:38 06:17:38 # Make the request on the HTTPConnection object 06:17:38 > response = self._make_request( 06:17:38 conn, 06:17:38 method, 06:17:38 url, 06:17:38 timeout=timeout_obj, 06:17:38 body=body, 06:17:38 headers=headers, 06:17:38 chunked=chunked, 06:17:38 retries=retries, 06:17:38 response_conn=response_conn, 06:17:38 preload_content=preload_content, 06:17:38 decode_content=decode_content, 06:17:38 **response_kw, 06:17:38 ) 06:17:38 06:17:38 ../.tox/tests190/lib/python3.11/site-packages/urllib3/connectionpool.py:787: 06:17:38 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 06:17:38 ../.tox/tests190/lib/python3.11/site-packages/urllib3/connectionpool.py:493: in _make_request 06:17:38 conn.request( 06:17:38 ../.tox/tests190/lib/python3.11/site-packages/urllib3/connection.py:494: in request 06:17:38 self.endheaders() 06:17:38 /opt/pyenv/versions/3.11.10/lib/python3.11/http/client.py:1298: in endheaders 06:17:38 self._send_output(message_body, encode_chunked=encode_chunked) 06:17:38 /opt/pyenv/versions/3.11.10/lib/python3.11/http/client.py:1058: in _send_output 06:17:38 self.send(msg) 06:17:38 /opt/pyenv/versions/3.11.10/lib/python3.11/http/client.py:996: in send 06:17:38 self.connect() 06:17:38 ../.tox/tests190/lib/python3.11/site-packages/urllib3/connection.py:325: in connect 06:17:38 self.sock = self._new_conn() 06:17:38 ^^^^^^^^^^^^^^^^ 06:17:38 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 06:17:38 06:17:38 self = 06:17:38 06:17:38 def _new_conn(self) -> socket.socket: 06:17:38 """Establish a socket connection and set nodelay settings on it. 06:17:38 06:17:38 :return: New socket connection. 06:17:38 """ 06:17:38 try: 06:17:38 sock = connection.create_connection( 06:17:38 (self._dns_host, self.port), 06:17:38 self.timeout, 06:17:38 source_address=self.source_address, 06:17:38 socket_options=self.socket_options, 06:17:38 ) 06:17:38 except socket.gaierror as e: 06:17:38 raise NameResolutionError(self.host, self, e) from e 06:17:38 except SocketTimeout as e: 06:17:38 raise ConnectTimeoutError( 06:17:38 self, 06:17:38 f"Connection to {self.host} timed out. (connect timeout={self.timeout})", 06:17:38 ) from e 06:17:38 06:17:38 except OSError as e: 06:17:38 > raise NewConnectionError( 06:17:38 self, f"Failed to establish a new connection: {e}" 06:17:38 ) from e 06:17:38 E urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused 06:17:38 06:17:38 ../.tox/tests190/lib/python3.11/site-packages/urllib3/connection.py:213: NewConnectionError 06:17:38 06:17:38 The above exception was the direct cause of the following exception: 06:17:38 06:17:38 self = 06:17:38 request = , stream = False 06:17:38 timeout = Timeout(connect=30, read=30, total=None), verify = True, cert = None 06:17:38 proxies = OrderedDict() 06:17:38 06:17:38 def send( 06:17:38 self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 06:17:38 ): 06:17:38 """Sends PreparedRequest object. Returns Response object. 06:17:38 06:17:38 :param request: The :class:`PreparedRequest ` being sent. 06:17:38 :param stream: (optional) Whether to stream the request content. 06:17:38 :param timeout: (optional) How long to wait for the server to send 06:17:38 data before giving up, as a float, or a :ref:`(connect timeout, 06:17:38 read timeout) ` tuple. 
06:17:38 :type timeout: float or tuple or urllib3 Timeout object 06:17:38 :param verify: (optional) Either a boolean, in which case it controls whether 06:17:38 we verify the server's TLS certificate, or a string, in which case it 06:17:38 must be a path to a CA bundle to use 06:17:38 :param cert: (optional) Any user-provided SSL certificate to be trusted. 06:17:38 :param proxies: (optional) The proxies dictionary to apply to the request. 06:17:38 :rtype: requests.Response 06:17:38 """ 06:17:38 06:17:38 try: 06:17:38 conn = self.get_connection_with_tls_context( 06:17:38 request, verify, proxies=proxies, cert=cert 06:17:38 ) 06:17:38 except LocationValueError as e: 06:17:38 raise InvalidURL(e, request=request) 06:17:38 06:17:38 self.cert_verify(conn, request.url, verify, cert) 06:17:38 url = self.request_url(request, proxies) 06:17:38 self.add_headers( 06:17:38 request, 06:17:38 stream=stream, 06:17:38 timeout=timeout, 06:17:38 verify=verify, 06:17:38 cert=cert, 06:17:38 proxies=proxies, 06:17:38 ) 06:17:38 06:17:38 chunked = not (request.body is None or "Content-Length" in request.headers) 06:17:38 06:17:38 if isinstance(timeout, tuple): 06:17:38 try: 06:17:38 connect, read = timeout 06:17:38 timeout = TimeoutSauce(connect=connect, read=read) 06:17:38 except ValueError: 06:17:38 raise ValueError( 06:17:38 f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 06:17:38 f"or a single float to set both timeouts to the same value." 06:17:38 ) 06:17:38 elif isinstance(timeout, TimeoutSauce): 06:17:38 pass 06:17:38 else: 06:17:38 timeout = TimeoutSauce(connect=timeout, read=timeout) 06:17:38 06:17:38 try: 06:17:38 > resp = conn.urlopen( 06:17:38 method=request.method, 06:17:38 url=url, 06:17:38 body=request.body, 06:17:38 headers=request.headers, 06:17:38 redirect=False, 06:17:38 assert_same_host=False, 06:17:38 preload_content=False, 06:17:38 decode_content=False, 06:17:38 retries=self.max_retries, 06:17:38 timeout=timeout, 06:17:38 chunked=chunked, 06:17:38 ) 06:17:38 06:17:38 ../.tox/tests190/lib/python3.11/site-packages/requests/adapters.py:667: 06:17:38 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 06:17:38 ../.tox/tests190/lib/python3.11/site-packages/urllib3/connectionpool.py:841: in urlopen 06:17:38 retries = retries.increment( 06:17:38 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 06:17:38 06:17:38 self = Retry(total=0, connect=None, read=False, redirect=None, status=None) 06:17:38 method = 'GET' 06:17:38 url = '/rests/data/transportpce-portmapping:network/nodes=XPDR-OC/switching-pool-lcp=1' 06:17:38 response = None 06:17:38 error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused') 06:17:38 _pool = 06:17:38 _stacktrace = 06:17:38 06:17:38 def increment( 06:17:38 self, 06:17:38 method: str | None = None, 06:17:38 url: str | None = None, 06:17:38 response: BaseHTTPResponse | None = None, 06:17:38 error: Exception | None = None, 06:17:38 _pool: ConnectionPool | None = None, 06:17:38 _stacktrace: TracebackType | None = None, 06:17:38 ) -> Self: 06:17:38 """Return a new Retry object with incremented retry counters. 06:17:38 06:17:38 :param response: A response object, or None, if the server did not 06:17:38 return a response. 06:17:38 :type response: :class:`~urllib3.response.BaseHTTPResponse` 06:17:38 :param Exception error: An error encountered during the request, or 06:17:38 None if the response was received successfully. 
06:17:38 06:17:38 :return: A new ``Retry`` object. 06:17:38 """ 06:17:38 if self.total is False and error: 06:17:38 # Disabled, indicate to re-raise the error. 06:17:38 raise reraise(type(error), error, _stacktrace) 06:17:38 06:17:38 total = self.total 06:17:38 if total is not None: 06:17:38 total -= 1 06:17:38 06:17:38 connect = self.connect 06:17:38 read = self.read 06:17:38 redirect = self.redirect 06:17:38 status_count = self.status 06:17:38 other = self.other 06:17:38 cause = "unknown" 06:17:38 status = None 06:17:38 redirect_location = None 06:17:38 06:17:38 if error and self._is_connection_error(error): 06:17:38 # Connect retry? 06:17:38 if connect is False: 06:17:38 raise reraise(type(error), error, _stacktrace) 06:17:38 elif connect is not None: 06:17:38 connect -= 1 06:17:38 06:17:38 elif error and self._is_read_error(error): 06:17:38 # Read retry? 06:17:38 if read is False or method is None or not self._is_method_retryable(method): 06:17:38 raise reraise(type(error), error, _stacktrace) 06:17:38 elif read is not None: 06:17:38 read -= 1 06:17:38 06:17:38 elif error: 06:17:38 # Other retry? 06:17:38 if other is not None: 06:17:38 other -= 1 06:17:38 06:17:38 elif response and response.get_redirect_location(): 06:17:38 # Redirect retry? 06:17:38 if redirect is not None: 06:17:38 redirect -= 1 06:17:38 cause = "too many redirects" 06:17:38 response_redirect_location = response.get_redirect_location() 06:17:38 if response_redirect_location: 06:17:38 redirect_location = response_redirect_location 06:17:38 status = response.status 06:17:38 06:17:38 else: 06:17:38 # Incrementing because of a server error like a 500 in 06:17:38 # status_forcelist and the given method is in the allowed_methods 06:17:38 cause = ResponseError.GENERIC_ERROR 06:17:38 if response and response.status: 06:17:38 if status_count is not None: 06:17:38 status_count -= 1 06:17:38 cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status) 06:17:38 status = response.status 06:17:38 06:17:38 history = self.history + ( 06:17:38 RequestHistory(method, url, error, status, redirect_location), 06:17:38 ) 06:17:38 06:17:38 new_retry = self.new( 06:17:38 total=total, 06:17:38 connect=connect, 06:17:38 read=read, 06:17:38 redirect=redirect, 06:17:38 status=status_count, 06:17:38 other=other, 06:17:38 history=history, 06:17:38 ) 06:17:38 06:17:38 if new_retry.is_exhausted(): 06:17:38 reason = error or ResponseError(cause) 06:17:38 > raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type] 06:17:38 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 06:17:38 E urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=8190): Max retries exceeded with url: /rests/data/transportpce-portmapping:network/nodes=XPDR-OC/switching-pool-lcp=1 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 06:17:38 06:17:38 ../.tox/tests190/lib/python3.11/site-packages/urllib3/util/retry.py:519: MaxRetryError 06:17:38 06:17:38 During handling of the above exception, another exception occurred: 06:17:38 06:17:38 self = 06:17:38 06:17:38 def test_08_mpdr_switching_pool(self): 06:17:38 > response = test_utils.get_portmapping_node_attr("XPDR-OC", "switching-pool-lcp", "1") 06:17:38 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 06:17:38 06:17:38 transportpce_tests/oc/test01_portmapping.py:128: 06:17:38 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 06:17:38 
transportpce_tests/common/test_utils.py:492: in get_portmapping_node_attr 06:17:38 response = get_request(target_url) 06:17:38 ^^^^^^^^^^^^^^^^^^^^^^^ 06:17:38 transportpce_tests/common/test_utils.py:116: in get_request 06:17:38 return requests.request( 06:17:38 ../.tox/tests190/lib/python3.11/site-packages/requests/api.py:59: in request 06:17:38 return session.request(method=method, url=url, **kwargs) 06:17:38 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 06:17:38 ../.tox/tests190/lib/python3.11/site-packages/requests/sessions.py:589: in request 06:17:38 resp = self.send(prep, **send_kwargs) 06:17:38 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 06:17:38 ../.tox/tests190/lib/python3.11/site-packages/requests/sessions.py:703: in send 06:17:38 r = adapter.send(request, **kwargs) 06:17:38 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 06:17:38 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 06:17:38 06:17:38 self = 06:17:38 request = , stream = False 06:17:38 timeout = Timeout(connect=30, read=30, total=None), verify = True, cert = None 06:17:38 proxies = OrderedDict() 06:17:38 06:17:38 def send( 06:17:38 self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 06:17:38 ): 06:17:38 """Sends PreparedRequest object. Returns Response object. 06:17:38 06:17:38 :param request: The :class:`PreparedRequest ` being sent. 06:17:38 :param stream: (optional) Whether to stream the request content. 06:17:38 :param timeout: (optional) How long to wait for the server to send 06:17:38 data before giving up, as a float, or a :ref:`(connect timeout, 06:17:38 read timeout) ` tuple. 06:17:38 :type timeout: float or tuple or urllib3 Timeout object 06:17:38 :param verify: (optional) Either a boolean, in which case it controls whether 06:17:38 we verify the server's TLS certificate, or a string, in which case it 06:17:38 must be a path to a CA bundle to use 06:17:38 :param cert: (optional) Any user-provided SSL certificate to be trusted. 06:17:38 :param proxies: (optional) The proxies dictionary to apply to the request. 06:17:38 :rtype: requests.Response 06:17:38 """ 06:17:38 06:17:38 try: 06:17:38 conn = self.get_connection_with_tls_context( 06:17:38 request, verify, proxies=proxies, cert=cert 06:17:38 ) 06:17:38 except LocationValueError as e: 06:17:38 raise InvalidURL(e, request=request) 06:17:38 06:17:38 self.cert_verify(conn, request.url, verify, cert) 06:17:38 url = self.request_url(request, proxies) 06:17:38 self.add_headers( 06:17:38 request, 06:17:38 stream=stream, 06:17:38 timeout=timeout, 06:17:38 verify=verify, 06:17:38 cert=cert, 06:17:38 proxies=proxies, 06:17:38 ) 06:17:38 06:17:38 chunked = not (request.body is None or "Content-Length" in request.headers) 06:17:38 06:17:38 if isinstance(timeout, tuple): 06:17:38 try: 06:17:38 connect, read = timeout 06:17:38 timeout = TimeoutSauce(connect=connect, read=read) 06:17:38 except ValueError: 06:17:38 raise ValueError( 06:17:38 f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 06:17:38 f"or a single float to set both timeouts to the same value." 
06:17:38 ) 06:17:38 elif isinstance(timeout, TimeoutSauce): 06:17:38 pass 06:17:38 else: 06:17:38 timeout = TimeoutSauce(connect=timeout, read=timeout) 06:17:38 06:17:38 try: 06:17:38 resp = conn.urlopen( 06:17:38 method=request.method, 06:17:38 url=url, 06:17:38 body=request.body, 06:17:38 headers=request.headers, 06:17:38 redirect=False, 06:17:38 assert_same_host=False, 06:17:38 preload_content=False, 06:17:38 decode_content=False, 06:17:38 retries=self.max_retries, 06:17:38 timeout=timeout, 06:17:38 chunked=chunked, 06:17:38 ) 06:17:38 06:17:38 except (ProtocolError, OSError) as err: 06:17:38 raise ConnectionError(err, request=request) 06:17:38 06:17:38 except MaxRetryError as e: 06:17:38 if isinstance(e.reason, ConnectTimeoutError): 06:17:38 # TODO: Remove this in 3.0.0: see #2811 06:17:38 if not isinstance(e.reason, NewConnectionError): 06:17:38 raise ConnectTimeout(e, request=request) 06:17:38 06:17:38 if isinstance(e.reason, ResponseError): 06:17:38 raise RetryError(e, request=request) 06:17:38 06:17:38 if isinstance(e.reason, _ProxyError): 06:17:38 raise ProxyError(e, request=request) 06:17:38 06:17:38 if isinstance(e.reason, _SSLError): 06:17:38 # This branch is for urllib3 v1.22 and later. 06:17:38 raise SSLError(e, request=request) 06:17:38 06:17:38 > raise ConnectionError(e, request=request) 06:17:38 E requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=8190): Max retries exceeded with url: /rests/data/transportpce-portmapping:network/nodes=XPDR-OC/switching-pool-lcp=1 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 06:17:38 06:17:38 ../.tox/tests190/lib/python3.11/site-packages/requests/adapters.py:700: ConnectionError 06:17:38 ----------------------------- Captured stdout call ----------------------------- 06:17:38 execution of test_08_mpdr_switching_pool 06:17:38 _________ TransportpceOCPortMappingTesting.test_09_check_mccapprofile __________ 06:17:38 06:17:38 self = 06:17:38 06:17:38 def _new_conn(self) -> socket.socket: 06:17:38 """Establish a socket connection and set nodelay settings on it. 06:17:38 06:17:38 :return: New socket connection. 06:17:38 """ 06:17:38 try: 06:17:38 > sock = connection.create_connection( 06:17:38 (self._dns_host, self.port), 06:17:38 self.timeout, 06:17:38 source_address=self.source_address, 06:17:38 socket_options=self.socket_options, 06:17:38 ) 06:17:38 06:17:38 ../.tox/tests190/lib/python3.11/site-packages/urllib3/connection.py:198: 06:17:38 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 06:17:38 ../.tox/tests190/lib/python3.11/site-packages/urllib3/util/connection.py:85: in create_connection 06:17:38 raise err 06:17:38 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 06:17:38 06:17:38 address = ('localhost', 8190), timeout = 30, source_address = None 06:17:38 socket_options = [(6, 1, 1)] 06:17:38 06:17:38 def create_connection( 06:17:38 address: tuple[str, int], 06:17:38 timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 06:17:38 source_address: tuple[str, int] | None = None, 06:17:38 socket_options: _TYPE_SOCKET_OPTIONS | None = None, 06:17:38 ) -> socket.socket: 06:17:38 """Connect to *address* and return the socket object. 06:17:38 06:17:38 Convenience function. Connect to *address* (a 2-tuple ``(host, 06:17:38 port)``) and return the socket object. Passing the optional 06:17:38 *timeout* parameter will set the timeout on the socket instance 06:17:38 before attempting to connect. 
If no *timeout* is supplied, the 06:17:38 global default timeout setting returned by :func:`socket.getdefaulttimeout` 06:17:38 is used. If *source_address* is set it must be a tuple of (host, port) 06:17:38 for the socket to bind as a source address before making the connection. 06:17:38 An host of '' or port 0 tells the OS to use the default. 06:17:38 """ 06:17:38 06:17:38 host, port = address 06:17:38 if host.startswith("["): 06:17:38 host = host.strip("[]") 06:17:38 err = None 06:17:38 06:17:38 # Using the value from allowed_gai_family() in the context of getaddrinfo lets 06:17:38 # us select whether to work with IPv4 DNS records, IPv6 records, or both. 06:17:38 # The original create_connection function always returns all records. 06:17:38 family = allowed_gai_family() 06:17:38 06:17:38 try: 06:17:38 host.encode("idna") 06:17:38 except UnicodeError: 06:17:38 raise LocationParseError(f"'{host}', label empty or too long") from None 06:17:38 06:17:38 for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM): 06:17:38 af, socktype, proto, canonname, sa = res 06:17:38 sock = None 06:17:38 try: 06:17:38 sock = socket.socket(af, socktype, proto) 06:17:38 06:17:38 # If provided, set socket level options before connecting. 06:17:38 _set_socket_options(sock, socket_options) 06:17:38 06:17:38 if timeout is not _DEFAULT_TIMEOUT: 06:17:38 sock.settimeout(timeout) 06:17:38 if source_address: 06:17:38 sock.bind(source_address) 06:17:38 > sock.connect(sa) 06:17:38 E ConnectionRefusedError: [Errno 111] Connection refused 06:17:38 06:17:38 ../.tox/tests190/lib/python3.11/site-packages/urllib3/util/connection.py:73: ConnectionRefusedError 06:17:38 06:17:38 The above exception was the direct cause of the following exception: 06:17:38 06:17:38 self = 06:17:38 method = 'GET' 06:17:38 url = '/rests/data/transportpce-portmapping:network/nodes=XPDR-OC/mc-capabilities=XPDR-mcprofile' 06:17:38 body = None 06:17:38 headers = {'User-Agent': 'python-requests/2.32.4', 'Accept-Encoding': 'gzip, deflate', 'Accept': 'application/json', 'Connection': 'keep-alive', 'Content-Type': 'application/json', 'Authorization': 'Basic YWRtaW46YWRtaW4='} 06:17:38 retries = Retry(total=0, connect=None, read=False, redirect=None, status=None) 06:17:38 redirect = False, assert_same_host = False 06:17:38 timeout = Timeout(connect=30, read=30, total=None), pool_timeout = None 06:17:38 release_conn = False, chunked = False, body_pos = None, preload_content = False 06:17:38 decode_content = False, response_kw = {} 06:17:38 parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/rests/data/transportpce-portmapping:network/nodes=XPDR-OC/mc-capabilities=XPDR-mcprofile', query=None, fragment=None) 06:17:38 destination_scheme = None, conn = None, release_this_conn = True 06:17:38 http_tunnel_required = False, err = None, clean_exit = False 06:17:38 06:17:38 def urlopen( # type: ignore[override] 06:17:38 self, 06:17:38 method: str, 06:17:38 url: str, 06:17:38 body: _TYPE_BODY | None = None, 06:17:38 headers: typing.Mapping[str, str] | None = None, 06:17:38 retries: Retry | bool | int | None = None, 06:17:38 redirect: bool = True, 06:17:38 assert_same_host: bool = True, 06:17:38 timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 06:17:38 pool_timeout: int | None = None, 06:17:38 release_conn: bool | None = None, 06:17:38 chunked: bool = False, 06:17:38 body_pos: _TYPE_BODY_POSITION | None = None, 06:17:38 preload_content: bool = True, 06:17:38 decode_content: bool = True, 06:17:38 **response_kw: typing.Any, 06:17:38 ) -> 
BaseHTTPResponse: 06:17:38 """ 06:17:38 Get a connection from the pool and perform an HTTP request. This is the 06:17:38 lowest level call for making a request, so you'll need to specify all 06:17:38 the raw details. 06:17:38 06:17:38 .. note:: 06:17:38 06:17:38 More commonly, it's appropriate to use a convenience method 06:17:38 such as :meth:`request`. 06:17:38 06:17:38 .. note:: 06:17:38 06:17:38 `release_conn` will only behave as expected if 06:17:38 `preload_content=False` because we want to make 06:17:38 `preload_content=False` the default behaviour someday soon without 06:17:38 breaking backwards compatibility. 06:17:38 06:17:38 :param method: 06:17:38 HTTP request method (such as GET, POST, PUT, etc.) 06:17:38 06:17:38 :param url: 06:17:38 The URL to perform the request on. 06:17:38 06:17:38 :param body: 06:17:38 Data to send in the request body, either :class:`str`, :class:`bytes`, 06:17:38 an iterable of :class:`str`/:class:`bytes`, or a file-like object. 06:17:38 06:17:38 :param headers: 06:17:38 Dictionary of custom headers to send, such as User-Agent, 06:17:38 If-None-Match, etc. If None, pool headers are used. If provided, 06:17:38 these headers completely replace any pool-specific headers. 06:17:38 06:17:38 :param retries: 06:17:38 Configure the number of retries to allow before raising a 06:17:38 :class:`~urllib3.exceptions.MaxRetryError` exception. 06:17:38 06:17:38 If ``None`` (default) will retry 3 times, see ``Retry.DEFAULT``. Pass a 06:17:38 :class:`~urllib3.util.retry.Retry` object for fine-grained control 06:17:38 over different types of retries. 06:17:38 Pass an integer number to retry connection errors that many times, 06:17:38 but no other types of errors. Pass zero to never retry. 06:17:38 06:17:38 If ``False``, then retries are disabled and any exception is raised 06:17:38 immediately. Also, instead of raising a MaxRetryError on redirects, 06:17:38 the redirect response will be returned. 06:17:38 06:17:38 :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. 06:17:38 06:17:38 :param redirect: 06:17:38 If True, automatically handle redirects (status codes 301, 302, 06:17:38 303, 307, 308). Each redirect counts as a retry. Disabling retries 06:17:38 will disable redirect, too. 06:17:38 06:17:38 :param assert_same_host: 06:17:38 If ``True``, will make sure that the host of the pool requests is 06:17:38 consistent else will raise HostChangedError. When ``False``, you can 06:17:38 use the pool on an HTTP proxy and request foreign hosts. 06:17:38 06:17:38 :param timeout: 06:17:38 If specified, overrides the default timeout for this one 06:17:38 request. It may be a float (in seconds) or an instance of 06:17:38 :class:`urllib3.util.Timeout`. 06:17:38 06:17:38 :param pool_timeout: 06:17:38 If set and the pool is set to block=True, then this method will 06:17:38 block for ``pool_timeout`` seconds and raise EmptyPoolError if no 06:17:38 connection is available within the time period. 06:17:38 06:17:38 :param bool preload_content: 06:17:38 If True, the response's body will be preloaded into memory. 06:17:38 06:17:38 :param bool decode_content: 06:17:38 If True, will attempt to decode the body based on the 06:17:38 'content-encoding' header. 06:17:38 06:17:38 :param release_conn: 06:17:38 If False, then the urlopen call will not release the connection 06:17:38 back into the pool once a response is received (but will release if 06:17:38 you read the entire contents of the response such as when 06:17:38 `preload_content=True`). 
This is useful if you're not preloading 06:17:38 the response's content immediately. You will need to call 06:17:38 ``r.release_conn()`` on the response ``r`` to return the connection 06:17:38 back into the pool. If None, it takes the value of ``preload_content`` 06:17:38 which defaults to ``True``. 06:17:38 06:17:38 :param bool chunked: 06:17:38 If True, urllib3 will send the body using chunked transfer 06:17:38 encoding. Otherwise, urllib3 will send the body using the standard 06:17:38 content-length form. Defaults to False. 06:17:38 06:17:38 :param int body_pos: 06:17:38 Position to seek to in file-like body in the event of a retry or 06:17:38 redirect. Typically this won't need to be set because urllib3 will 06:17:38 auto-populate the value when needed. 06:17:38 """ 06:17:38 parsed_url = parse_url(url) 06:17:38 destination_scheme = parsed_url.scheme 06:17:38 06:17:38 if headers is None: 06:17:38 headers = self.headers 06:17:38 06:17:38 if not isinstance(retries, Retry): 06:17:38 retries = Retry.from_int(retries, redirect=redirect, default=self.retries) 06:17:38 06:17:38 if release_conn is None: 06:17:38 release_conn = preload_content 06:17:38 06:17:38 # Check host 06:17:38 if assert_same_host and not self.is_same_host(url): 06:17:38 raise HostChangedError(self, url, retries) 06:17:38 06:17:38 # Ensure that the URL we're connecting to is properly encoded 06:17:38 if url.startswith("/"): 06:17:38 url = to_str(_encode_target(url)) 06:17:38 else: 06:17:38 url = to_str(parsed_url.url) 06:17:38 06:17:38 conn = None 06:17:38 06:17:38 # Track whether `conn` needs to be released before 06:17:38 # returning/raising/recursing. Update this variable if necessary, and 06:17:38 # leave `release_conn` constant throughout the function. That way, if 06:17:38 # the function recurses, the original value of `release_conn` will be 06:17:38 # passed down into the recursive call, and its value will be respected. 06:17:38 # 06:17:38 # See issue #651 [1] for details. 06:17:38 # 06:17:38 # [1] 06:17:38 release_this_conn = release_conn 06:17:38 06:17:38 http_tunnel_required = connection_requires_http_tunnel( 06:17:38 self.proxy, self.proxy_config, destination_scheme 06:17:38 ) 06:17:38 06:17:38 # Merge the proxy headers. Only done when not using HTTP CONNECT. We 06:17:38 # have to copy the headers dict so we can safely change it without those 06:17:38 # changes being reflected in anyone else's copy. 06:17:38 if not http_tunnel_required: 06:17:38 headers = headers.copy() # type: ignore[attr-defined] 06:17:38 headers.update(self.proxy_headers) # type: ignore[union-attr] 06:17:38 06:17:38 # Must keep the exception bound to a separate variable or else Python 3 06:17:38 # complains about UnboundLocalError. 06:17:38 err = None 06:17:38 06:17:38 # Keep track of whether we cleanly exited the except block. This 06:17:38 # ensures we do proper cleanup in finally. 06:17:38 clean_exit = False 06:17:38 06:17:38 # Rewind body position, if needed. Record current position 06:17:38 # for future rewinds in the event of a redirect/retry. 06:17:38 body_pos = set_file_position(body, body_pos) 06:17:38 06:17:38 try: 06:17:38 # Request a connection from the queue. 06:17:38 timeout_obj = self._get_timeout(timeout) 06:17:38 conn = self._get_conn(timeout=pool_timeout) 06:17:38 06:17:38 conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] 06:17:38 06:17:38 # Is this a closed/new connection that requires CONNECT tunnelling? 
06:17:38 if self.proxy is not None and http_tunnel_required and conn.is_closed: 06:17:38 try: 06:17:38 self._prepare_proxy(conn) 06:17:38 except (BaseSSLError, OSError, SocketTimeout) as e: 06:17:38 self._raise_timeout( 06:17:38 err=e, url=self.proxy.url, timeout_value=conn.timeout 06:17:38 ) 06:17:38 raise 06:17:38 06:17:38 # If we're going to release the connection in ``finally:``, then 06:17:38 # the response doesn't need to know about the connection. Otherwise 06:17:38 # it will also try to release it and we'll have a double-release 06:17:38 # mess. 06:17:38 response_conn = conn if not release_conn else None 06:17:38 06:17:38 # Make the request on the HTTPConnection object 06:17:38 > response = self._make_request( 06:17:38 conn, 06:17:38 method, 06:17:38 url, 06:17:38 timeout=timeout_obj, 06:17:38 body=body, 06:17:38 headers=headers, 06:17:38 chunked=chunked, 06:17:38 retries=retries, 06:17:38 response_conn=response_conn, 06:17:38 preload_content=preload_content, 06:17:38 decode_content=decode_content, 06:17:38 **response_kw, 06:17:38 ) 06:17:38 06:17:38 ../.tox/tests190/lib/python3.11/site-packages/urllib3/connectionpool.py:787: 06:17:38 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 06:17:38 ../.tox/tests190/lib/python3.11/site-packages/urllib3/connectionpool.py:493: in _make_request 06:17:38 conn.request( 06:17:38 ../.tox/tests190/lib/python3.11/site-packages/urllib3/connection.py:494: in request 06:17:38 self.endheaders() 06:17:38 /opt/pyenv/versions/3.11.10/lib/python3.11/http/client.py:1298: in endheaders 06:17:38 self._send_output(message_body, encode_chunked=encode_chunked) 06:17:38 /opt/pyenv/versions/3.11.10/lib/python3.11/http/client.py:1058: in _send_output 06:17:38 self.send(msg) 06:17:38 /opt/pyenv/versions/3.11.10/lib/python3.11/http/client.py:996: in send 06:17:38 self.connect() 06:17:38 ../.tox/tests190/lib/python3.11/site-packages/urllib3/connection.py:325: in connect 06:17:38 self.sock = self._new_conn() 06:17:38 ^^^^^^^^^^^^^^^^ 06:17:38 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 06:17:38 06:17:38 self = 06:17:38 06:17:38 def _new_conn(self) -> socket.socket: 06:17:38 """Establish a socket connection and set nodelay settings on it. 06:17:38 06:17:38 :return: New socket connection. 06:17:38 """ 06:17:38 try: 06:17:38 sock = connection.create_connection( 06:17:38 (self._dns_host, self.port), 06:17:38 self.timeout, 06:17:38 source_address=self.source_address, 06:17:38 socket_options=self.socket_options, 06:17:38 ) 06:17:38 except socket.gaierror as e: 06:17:38 raise NameResolutionError(self.host, self, e) from e 06:17:38 except SocketTimeout as e: 06:17:38 raise ConnectTimeoutError( 06:17:38 self, 06:17:38 f"Connection to {self.host} timed out. 
(connect timeout={self.timeout})", 06:17:38 ) from e 06:17:38 06:17:38 except OSError as e: 06:17:38 > raise NewConnectionError( 06:17:38 self, f"Failed to establish a new connection: {e}" 06:17:38 ) from e 06:17:38 E urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused 06:17:38 06:17:38 ../.tox/tests190/lib/python3.11/site-packages/urllib3/connection.py:213: NewConnectionError 06:17:38 06:17:38 The above exception was the direct cause of the following exception: 06:17:38 06:17:38 self = 06:17:38 request = , stream = False 06:17:38 timeout = Timeout(connect=30, read=30, total=None), verify = True, cert = None 06:17:38 proxies = OrderedDict() 06:17:38 06:17:38 def send( 06:17:38 self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 06:17:38 ): 06:17:38 """Sends PreparedRequest object. Returns Response object. 06:17:38 06:17:38 :param request: The :class:`PreparedRequest ` being sent. 06:17:38 :param stream: (optional) Whether to stream the request content. 06:17:38 :param timeout: (optional) How long to wait for the server to send 06:17:38 data before giving up, as a float, or a :ref:`(connect timeout, 06:17:38 read timeout) ` tuple. 06:17:38 :type timeout: float or tuple or urllib3 Timeout object 06:17:38 :param verify: (optional) Either a boolean, in which case it controls whether 06:17:38 we verify the server's TLS certificate, or a string, in which case it 06:17:38 must be a path to a CA bundle to use 06:17:38 :param cert: (optional) Any user-provided SSL certificate to be trusted. 06:17:38 :param proxies: (optional) The proxies dictionary to apply to the request. 06:17:38 :rtype: requests.Response 06:17:38 """ 06:17:38 06:17:38 try: 06:17:38 conn = self.get_connection_with_tls_context( 06:17:38 request, verify, proxies=proxies, cert=cert 06:17:38 ) 06:17:38 except LocationValueError as e: 06:17:38 raise InvalidURL(e, request=request) 06:17:38 06:17:38 self.cert_verify(conn, request.url, verify, cert) 06:17:38 url = self.request_url(request, proxies) 06:17:38 self.add_headers( 06:17:38 request, 06:17:38 stream=stream, 06:17:38 timeout=timeout, 06:17:38 verify=verify, 06:17:38 cert=cert, 06:17:38 proxies=proxies, 06:17:38 ) 06:17:38 06:17:38 chunked = not (request.body is None or "Content-Length" in request.headers) 06:17:38 06:17:38 if isinstance(timeout, tuple): 06:17:38 try: 06:17:38 connect, read = timeout 06:17:38 timeout = TimeoutSauce(connect=connect, read=read) 06:17:38 except ValueError: 06:17:38 raise ValueError( 06:17:38 f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 06:17:38 f"or a single float to set both timeouts to the same value." 
06:17:38 ) 06:17:38 elif isinstance(timeout, TimeoutSauce): 06:17:38 pass 06:17:38 else: 06:17:38 timeout = TimeoutSauce(connect=timeout, read=timeout) 06:17:38 06:17:38 try: 06:17:38 > resp = conn.urlopen( 06:17:38 method=request.method, 06:17:38 url=url, 06:17:38 body=request.body, 06:17:38 headers=request.headers, 06:17:38 redirect=False, 06:17:38 assert_same_host=False, 06:17:38 preload_content=False, 06:17:38 decode_content=False, 06:17:38 retries=self.max_retries, 06:17:38 timeout=timeout, 06:17:38 chunked=chunked, 06:17:38 ) 06:17:38 06:17:38 ../.tox/tests190/lib/python3.11/site-packages/requests/adapters.py:667: 06:17:38 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 06:17:38 ../.tox/tests190/lib/python3.11/site-packages/urllib3/connectionpool.py:841: in urlopen 06:17:38 retries = retries.increment( 06:17:38 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 06:17:38 06:17:38 self = Retry(total=0, connect=None, read=False, redirect=None, status=None) 06:17:38 method = 'GET' 06:17:38 url = '/rests/data/transportpce-portmapping:network/nodes=XPDR-OC/mc-capabilities=XPDR-mcprofile' 06:17:38 response = None 06:17:38 error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused') 06:17:38 _pool = 06:17:38 _stacktrace = 06:17:38 06:17:38 def increment( 06:17:38 self, 06:17:38 method: str | None = None, 06:17:38 url: str | None = None, 06:17:38 response: BaseHTTPResponse | None = None, 06:17:38 error: Exception | None = None, 06:17:38 _pool: ConnectionPool | None = None, 06:17:38 _stacktrace: TracebackType | None = None, 06:17:38 ) -> Self: 06:17:38 """Return a new Retry object with incremented retry counters. 06:17:38 06:17:38 :param response: A response object, or None, if the server did not 06:17:38 return a response. 06:17:38 :type response: :class:`~urllib3.response.BaseHTTPResponse` 06:17:38 :param Exception error: An error encountered during the request, or 06:17:38 None if the response was received successfully. 06:17:38 06:17:38 :return: A new ``Retry`` object. 06:17:38 """ 06:17:38 if self.total is False and error: 06:17:38 # Disabled, indicate to re-raise the error. 06:17:38 raise reraise(type(error), error, _stacktrace) 06:17:38 06:17:38 total = self.total 06:17:38 if total is not None: 06:17:38 total -= 1 06:17:38 06:17:38 connect = self.connect 06:17:38 read = self.read 06:17:38 redirect = self.redirect 06:17:38 status_count = self.status 06:17:38 other = self.other 06:17:38 cause = "unknown" 06:17:38 status = None 06:17:38 redirect_location = None 06:17:38 06:17:38 if error and self._is_connection_error(error): 06:17:38 # Connect retry? 06:17:38 if connect is False: 06:17:38 raise reraise(type(error), error, _stacktrace) 06:17:38 elif connect is not None: 06:17:38 connect -= 1 06:17:38 06:17:38 elif error and self._is_read_error(error): 06:17:38 # Read retry? 06:17:38 if read is False or method is None or not self._is_method_retryable(method): 06:17:38 raise reraise(type(error), error, _stacktrace) 06:17:38 elif read is not None: 06:17:38 read -= 1 06:17:38 06:17:38 elif error: 06:17:38 # Other retry? 06:17:38 if other is not None: 06:17:38 other -= 1 06:17:38 06:17:38 elif response and response.get_redirect_location(): 06:17:38 # Redirect retry? 
06:17:38 if redirect is not None: 06:17:38 redirect -= 1 06:17:38 cause = "too many redirects" 06:17:38 response_redirect_location = response.get_redirect_location() 06:17:38 if response_redirect_location: 06:17:38 redirect_location = response_redirect_location 06:17:38 status = response.status 06:17:38 06:17:38 else: 06:17:38 # Incrementing because of a server error like a 500 in 06:17:38 # status_forcelist and the given method is in the allowed_methods 06:17:38 cause = ResponseError.GENERIC_ERROR 06:17:38 if response and response.status: 06:17:38 if status_count is not None: 06:17:38 status_count -= 1 06:17:38 cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status) 06:17:38 status = response.status 06:17:38 06:17:38 history = self.history + ( 06:17:38 RequestHistory(method, url, error, status, redirect_location), 06:17:38 ) 06:17:38 06:17:38 new_retry = self.new( 06:17:38 total=total, 06:17:38 connect=connect, 06:17:38 read=read, 06:17:38 redirect=redirect, 06:17:38 status=status_count, 06:17:38 other=other, 06:17:38 history=history, 06:17:38 ) 06:17:38 06:17:38 if new_retry.is_exhausted(): 06:17:38 reason = error or ResponseError(cause) 06:17:38 > raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type] 06:17:38 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 06:17:38 E urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=8190): Max retries exceeded with url: /rests/data/transportpce-portmapping:network/nodes=XPDR-OC/mc-capabilities=XPDR-mcprofile (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 06:17:38 06:17:38 ../.tox/tests190/lib/python3.11/site-packages/urllib3/util/retry.py:519: MaxRetryError 06:17:38 06:17:38 During handling of the above exception, another exception occurred: 06:17:38 06:17:38 self = 06:17:38 06:17:38 def test_09_check_mccapprofile(self): 06:17:38 > res = test_utils.get_portmapping_node_attr("XPDR-OC", "mc-capabilities", "XPDR-mcprofile") 06:17:38 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 06:17:38 06:17:38 transportpce_tests/oc/test01_portmapping.py:140: 06:17:38 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 06:17:38 transportpce_tests/common/test_utils.py:492: in get_portmapping_node_attr 06:17:38 response = get_request(target_url) 06:17:38 ^^^^^^^^^^^^^^^^^^^^^^^ 06:17:38 transportpce_tests/common/test_utils.py:116: in get_request 06:17:38 return requests.request( 06:17:38 ../.tox/tests190/lib/python3.11/site-packages/requests/api.py:59: in request 06:17:38 return session.request(method=method, url=url, **kwargs) 06:17:38 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 06:17:38 ../.tox/tests190/lib/python3.11/site-packages/requests/sessions.py:589: in request 06:17:38 resp = self.send(prep, **send_kwargs) 06:17:38 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 06:17:38 ../.tox/tests190/lib/python3.11/site-packages/requests/sessions.py:703: in send 06:17:38 r = adapter.send(request, **kwargs) 06:17:38 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 06:17:38 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 06:17:38 06:17:38 self = 06:17:38 request = , stream = False 06:17:38 timeout = Timeout(connect=30, read=30, total=None), verify = True, cert = None 06:17:38 proxies = OrderedDict() 06:17:38 06:17:38 def send( 06:17:38 self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 06:17:38 ): 06:17:38 """Sends PreparedRequest object. 
Returns Response object. 06:17:38 06:17:38 :param request: The :class:`PreparedRequest ` being sent. 06:17:38 :param stream: (optional) Whether to stream the request content. 06:17:38 :param timeout: (optional) How long to wait for the server to send 06:17:38 data before giving up, as a float, or a :ref:`(connect timeout, 06:17:38 read timeout) ` tuple. 06:17:38 :type timeout: float or tuple or urllib3 Timeout object 06:17:38 :param verify: (optional) Either a boolean, in which case it controls whether 06:17:38 we verify the server's TLS certificate, or a string, in which case it 06:17:38 must be a path to a CA bundle to use 06:17:38 :param cert: (optional) Any user-provided SSL certificate to be trusted. 06:17:38 :param proxies: (optional) The proxies dictionary to apply to the request. 06:17:38 :rtype: requests.Response 06:17:38 """ 06:17:38 06:17:38 try: 06:17:38 conn = self.get_connection_with_tls_context( 06:17:38 request, verify, proxies=proxies, cert=cert 06:17:38 ) 06:17:38 except LocationValueError as e: 06:17:38 raise InvalidURL(e, request=request) 06:17:38 06:17:38 self.cert_verify(conn, request.url, verify, cert) 06:17:38 url = self.request_url(request, proxies) 06:17:38 self.add_headers( 06:17:38 request, 06:17:38 stream=stream, 06:17:38 timeout=timeout, 06:17:38 verify=verify, 06:17:38 cert=cert, 06:17:38 proxies=proxies, 06:17:38 ) 06:17:38 06:17:38 chunked = not (request.body is None or "Content-Length" in request.headers) 06:17:38 06:17:38 if isinstance(timeout, tuple): 06:17:38 try: 06:17:38 connect, read = timeout 06:17:38 timeout = TimeoutSauce(connect=connect, read=read) 06:17:38 except ValueError: 06:17:38 raise ValueError( 06:17:38 f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 06:17:38 f"or a single float to set both timeouts to the same value." 06:17:38 ) 06:17:38 elif isinstance(timeout, TimeoutSauce): 06:17:38 pass 06:17:38 else: 06:17:38 timeout = TimeoutSauce(connect=timeout, read=timeout) 06:17:38 06:17:38 try: 06:17:38 resp = conn.urlopen( 06:17:38 method=request.method, 06:17:38 url=url, 06:17:38 body=request.body, 06:17:38 headers=request.headers, 06:17:38 redirect=False, 06:17:38 assert_same_host=False, 06:17:38 preload_content=False, 06:17:38 decode_content=False, 06:17:38 retries=self.max_retries, 06:17:38 timeout=timeout, 06:17:38 chunked=chunked, 06:17:38 ) 06:17:38 06:17:38 except (ProtocolError, OSError) as err: 06:17:38 raise ConnectionError(err, request=request) 06:17:38 06:17:38 except MaxRetryError as e: 06:17:38 if isinstance(e.reason, ConnectTimeoutError): 06:17:38 # TODO: Remove this in 3.0.0: see #2811 06:17:38 if not isinstance(e.reason, NewConnectionError): 06:17:38 raise ConnectTimeout(e, request=request) 06:17:38 06:17:38 if isinstance(e.reason, ResponseError): 06:17:38 raise RetryError(e, request=request) 06:17:38 06:17:38 if isinstance(e.reason, _ProxyError): 06:17:38 raise ProxyError(e, request=request) 06:17:38 06:17:38 if isinstance(e.reason, _SSLError): 06:17:38 # This branch is for urllib3 v1.22 and later. 
06:17:38 raise SSLError(e, request=request) 06:17:38 06:17:38 > raise ConnectionError(e, request=request) 06:17:38 E requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=8190): Max retries exceeded with url: /rests/data/transportpce-portmapping:network/nodes=XPDR-OC/mc-capabilities=XPDR-mcprofile (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 06:17:38 06:17:38 ../.tox/tests190/lib/python3.11/site-packages/requests/adapters.py:700: ConnectionError 06:17:38 ----------------------------- Captured stdout call ----------------------------- 06:17:38 execution of test_09_check_mccapprofile 06:17:38 ______ TransportpceOCPortMappingTesting.test_10_xpdr_device_disconnection ______ 06:17:38 06:17:38 self = 06:17:38 06:17:38 def _new_conn(self) -> socket.socket: 06:17:38 """Establish a socket connection and set nodelay settings on it. 06:17:38 06:17:38 :return: New socket connection. 06:17:38 """ 06:17:38 try: 06:17:38 > sock = connection.create_connection( 06:17:38 (self._dns_host, self.port), 06:17:38 self.timeout, 06:17:38 source_address=self.source_address, 06:17:38 socket_options=self.socket_options, 06:17:38 ) 06:17:38 06:17:38 ../.tox/tests190/lib/python3.11/site-packages/urllib3/connection.py:198: 06:17:38 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 06:17:38 ../.tox/tests190/lib/python3.11/site-packages/urllib3/util/connection.py:85: in create_connection 06:17:38 raise err 06:17:38 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 06:17:38 06:17:38 address = ('localhost', 8190), timeout = 30, source_address = None 06:17:38 socket_options = [(6, 1, 1)] 06:17:38 06:17:38 def create_connection( 06:17:38 address: tuple[str, int], 06:17:38 timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 06:17:38 source_address: tuple[str, int] | None = None, 06:17:38 socket_options: _TYPE_SOCKET_OPTIONS | None = None, 06:17:38 ) -> socket.socket: 06:17:38 """Connect to *address* and return the socket object. 06:17:38 06:17:38 Convenience function. Connect to *address* (a 2-tuple ``(host, 06:17:38 port)``) and return the socket object. Passing the optional 06:17:38 *timeout* parameter will set the timeout on the socket instance 06:17:38 before attempting to connect. If no *timeout* is supplied, the 06:17:38 global default timeout setting returned by :func:`socket.getdefaulttimeout` 06:17:38 is used. If *source_address* is set it must be a tuple of (host, port) 06:17:38 for the socket to bind as a source address before making the connection. 06:17:38 An host of '' or port 0 tells the OS to use the default. 06:17:38 """ 06:17:38 06:17:38 host, port = address 06:17:38 if host.startswith("["): 06:17:38 host = host.strip("[]") 06:17:38 err = None 06:17:38 06:17:38 # Using the value from allowed_gai_family() in the context of getaddrinfo lets 06:17:38 # us select whether to work with IPv4 DNS records, IPv6 records, or both. 06:17:38 # The original create_connection function always returns all records. 
06:17:38 family = allowed_gai_family() 06:17:38 06:17:38 try: 06:17:38 host.encode("idna") 06:17:38 except UnicodeError: 06:17:38 raise LocationParseError(f"'{host}', label empty or too long") from None 06:17:38 06:17:38 for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM): 06:17:38 af, socktype, proto, canonname, sa = res 06:17:38 sock = None 06:17:38 try: 06:17:38 sock = socket.socket(af, socktype, proto) 06:17:38 06:17:38 # If provided, set socket level options before connecting. 06:17:38 _set_socket_options(sock, socket_options) 06:17:38 06:17:38 if timeout is not _DEFAULT_TIMEOUT: 06:17:38 sock.settimeout(timeout) 06:17:38 if source_address: 06:17:38 sock.bind(source_address) 06:17:38 > sock.connect(sa) 06:17:38 E ConnectionRefusedError: [Errno 111] Connection refused 06:17:38 06:17:38 ../.tox/tests190/lib/python3.11/site-packages/urllib3/util/connection.py:73: ConnectionRefusedError 06:17:38 06:17:38 The above exception was the direct cause of the following exception: 06:17:38 06:17:38 self = 06:17:38 method = 'DELETE' 06:17:38 url = '/rests/data/network-topology:network-topology/topology=topology-netconf/node=XPDR-OC' 06:17:38 body = None 06:17:38 headers = {'User-Agent': 'python-requests/2.32.4', 'Accept-Encoding': 'gzip, deflate', 'Accept': 'application/json', 'Connection': 'keep-alive', 'Content-Type': 'application/json', 'Content-Length': '0', 'Authorization': 'Basic YWRtaW46YWRtaW4='} 06:17:38 retries = Retry(total=0, connect=None, read=False, redirect=None, status=None) 06:17:38 redirect = False, assert_same_host = False 06:17:38 timeout = Timeout(connect=30, read=30, total=None), pool_timeout = None 06:17:38 release_conn = False, chunked = False, body_pos = None, preload_content = False 06:17:38 decode_content = False, response_kw = {} 06:17:38 parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/rests/data/network-topology:network-topology/topology=topology-netconf/node=XPDR-OC', query=None, fragment=None) 06:17:38 destination_scheme = None, conn = None, release_this_conn = True 06:17:38 http_tunnel_required = False, err = None, clean_exit = False 06:17:38 06:17:38 def urlopen( # type: ignore[override] 06:17:38 self, 06:17:38 method: str, 06:17:38 url: str, 06:17:38 body: _TYPE_BODY | None = None, 06:17:38 headers: typing.Mapping[str, str] | None = None, 06:17:38 retries: Retry | bool | int | None = None, 06:17:38 redirect: bool = True, 06:17:38 assert_same_host: bool = True, 06:17:38 timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 06:17:38 pool_timeout: int | None = None, 06:17:38 release_conn: bool | None = None, 06:17:38 chunked: bool = False, 06:17:38 body_pos: _TYPE_BODY_POSITION | None = None, 06:17:38 preload_content: bool = True, 06:17:38 decode_content: bool = True, 06:17:38 **response_kw: typing.Any, 06:17:38 ) -> BaseHTTPResponse: 06:17:38 """ 06:17:38 Get a connection from the pool and perform an HTTP request. This is the 06:17:38 lowest level call for making a request, so you'll need to specify all 06:17:38 the raw details. 06:17:38 06:17:38 .. note:: 06:17:38 06:17:38 More commonly, it's appropriate to use a convenience method 06:17:38 such as :meth:`request`. 06:17:38 06:17:38 .. note:: 06:17:38 06:17:38 `release_conn` will only behave as expected if 06:17:38 `preload_content=False` because we want to make 06:17:38 `preload_content=False` the default behaviour someday soon without 06:17:38 breaking backwards compatibility. 06:17:38 06:17:38 :param method: 06:17:38 HTTP request method (such as GET, POST, PUT, etc.) 
06:17:38 06:17:38 :param url: 06:17:38 The URL to perform the request on. 06:17:38 06:17:38 :param body: 06:17:38 Data to send in the request body, either :class:`str`, :class:`bytes`, 06:17:38 an iterable of :class:`str`/:class:`bytes`, or a file-like object. 06:17:38 06:17:38 :param headers: 06:17:38 Dictionary of custom headers to send, such as User-Agent, 06:17:38 If-None-Match, etc. If None, pool headers are used. If provided, 06:17:38 these headers completely replace any pool-specific headers. 06:17:38 06:17:38 :param retries: 06:17:38 Configure the number of retries to allow before raising a 06:17:38 :class:`~urllib3.exceptions.MaxRetryError` exception. 06:17:38 06:17:38 If ``None`` (default) will retry 3 times, see ``Retry.DEFAULT``. Pass a 06:17:38 :class:`~urllib3.util.retry.Retry` object for fine-grained control 06:17:38 over different types of retries. 06:17:38 Pass an integer number to retry connection errors that many times, 06:17:38 but no other types of errors. Pass zero to never retry. 06:17:38 06:17:38 If ``False``, then retries are disabled and any exception is raised 06:17:38 immediately. Also, instead of raising a MaxRetryError on redirects, 06:17:38 the redirect response will be returned. 06:17:38 06:17:38 :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. 06:17:38 06:17:38 :param redirect: 06:17:38 If True, automatically handle redirects (status codes 301, 302, 06:17:38 303, 307, 308). Each redirect counts as a retry. Disabling retries 06:17:38 will disable redirect, too. 06:17:38 06:17:38 :param assert_same_host: 06:17:38 If ``True``, will make sure that the host of the pool requests is 06:17:38 consistent else will raise HostChangedError. When ``False``, you can 06:17:38 use the pool on an HTTP proxy and request foreign hosts. 06:17:38 06:17:38 :param timeout: 06:17:38 If specified, overrides the default timeout for this one 06:17:38 request. It may be a float (in seconds) or an instance of 06:17:38 :class:`urllib3.util.Timeout`. 06:17:38 06:17:38 :param pool_timeout: 06:17:38 If set and the pool is set to block=True, then this method will 06:17:38 block for ``pool_timeout`` seconds and raise EmptyPoolError if no 06:17:38 connection is available within the time period. 06:17:38 06:17:38 :param bool preload_content: 06:17:38 If True, the response's body will be preloaded into memory. 06:17:38 06:17:38 :param bool decode_content: 06:17:38 If True, will attempt to decode the body based on the 06:17:38 'content-encoding' header. 06:17:38 06:17:38 :param release_conn: 06:17:38 If False, then the urlopen call will not release the connection 06:17:38 back into the pool once a response is received (but will release if 06:17:38 you read the entire contents of the response such as when 06:17:38 `preload_content=True`). This is useful if you're not preloading 06:17:38 the response's content immediately. You will need to call 06:17:38 ``r.release_conn()`` on the response ``r`` to return the connection 06:17:38 back into the pool. If None, it takes the value of ``preload_content`` 06:17:38 which defaults to ``True``. 06:17:38 06:17:38 :param bool chunked: 06:17:38 If True, urllib3 will send the body using chunked transfer 06:17:38 encoding. Otherwise, urllib3 will send the body using the standard 06:17:38 content-length form. Defaults to False. 06:17:38 06:17:38 :param int body_pos: 06:17:38 Position to seek to in file-like body in the event of a retry or 06:17:38 redirect. 
Typically this won't need to be set because urllib3 will 06:17:38 auto-populate the value when needed. 06:17:38 """ 06:17:38 parsed_url = parse_url(url) 06:17:38 destination_scheme = parsed_url.scheme 06:17:38 06:17:38 if headers is None: 06:17:38 headers = self.headers 06:17:38 06:17:38 if not isinstance(retries, Retry): 06:17:38 retries = Retry.from_int(retries, redirect=redirect, default=self.retries) 06:17:38 06:17:38 if release_conn is None: 06:17:38 release_conn = preload_content 06:17:38 06:17:38 # Check host 06:17:38 if assert_same_host and not self.is_same_host(url): 06:17:38 raise HostChangedError(self, url, retries) 06:17:38 06:17:38 # Ensure that the URL we're connecting to is properly encoded 06:17:38 if url.startswith("/"): 06:17:38 url = to_str(_encode_target(url)) 06:17:38 else: 06:17:38 url = to_str(parsed_url.url) 06:17:38 06:17:38 conn = None 06:17:38 06:17:38 # Track whether `conn` needs to be released before 06:17:38 # returning/raising/recursing. Update this variable if necessary, and 06:17:38 # leave `release_conn` constant throughout the function. That way, if 06:17:38 # the function recurses, the original value of `release_conn` will be 06:17:38 # passed down into the recursive call, and its value will be respected. 06:17:38 # 06:17:38 # See issue #651 [1] for details. 06:17:38 # 06:17:38 # [1] 06:17:38 release_this_conn = release_conn 06:17:38 06:17:38 http_tunnel_required = connection_requires_http_tunnel( 06:17:38 self.proxy, self.proxy_config, destination_scheme 06:17:38 ) 06:17:38 06:17:38 # Merge the proxy headers. Only done when not using HTTP CONNECT. We 06:17:38 # have to copy the headers dict so we can safely change it without those 06:17:38 # changes being reflected in anyone else's copy. 06:17:38 if not http_tunnel_required: 06:17:38 headers = headers.copy() # type: ignore[attr-defined] 06:17:38 headers.update(self.proxy_headers) # type: ignore[union-attr] 06:17:38 06:17:38 # Must keep the exception bound to a separate variable or else Python 3 06:17:38 # complains about UnboundLocalError. 06:17:38 err = None 06:17:38 06:17:38 # Keep track of whether we cleanly exited the except block. This 06:17:38 # ensures we do proper cleanup in finally. 06:17:38 clean_exit = False 06:17:38 06:17:38 # Rewind body position, if needed. Record current position 06:17:38 # for future rewinds in the event of a redirect/retry. 06:17:38 body_pos = set_file_position(body, body_pos) 06:17:38 06:17:38 try: 06:17:38 # Request a connection from the queue. 06:17:38 timeout_obj = self._get_timeout(timeout) 06:17:38 conn = self._get_conn(timeout=pool_timeout) 06:17:38 06:17:38 conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] 06:17:38 06:17:38 # Is this a closed/new connection that requires CONNECT tunnelling? 06:17:38 if self.proxy is not None and http_tunnel_required and conn.is_closed: 06:17:38 try: 06:17:38 self._prepare_proxy(conn) 06:17:38 except (BaseSSLError, OSError, SocketTimeout) as e: 06:17:38 self._raise_timeout( 06:17:38 err=e, url=self.proxy.url, timeout_value=conn.timeout 06:17:38 ) 06:17:38 raise 06:17:38 06:17:38 # If we're going to release the connection in ``finally:``, then 06:17:38 # the response doesn't need to know about the connection. Otherwise 06:17:38 # it will also try to release it and we'll have a double-release 06:17:38 # mess. 
06:17:38 response_conn = conn if not release_conn else None 06:17:38 06:17:38 # Make the request on the HTTPConnection object 06:17:38 > response = self._make_request( 06:17:38 conn, 06:17:38 method, 06:17:38 url, 06:17:38 timeout=timeout_obj, 06:17:38 body=body, 06:17:38 headers=headers, 06:17:38 chunked=chunked, 06:17:38 retries=retries, 06:17:38 response_conn=response_conn, 06:17:38 preload_content=preload_content, 06:17:38 decode_content=decode_content, 06:17:38 **response_kw, 06:17:38 ) 06:17:38 06:17:38 ../.tox/tests190/lib/python3.11/site-packages/urllib3/connectionpool.py:787: 06:17:38 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 06:17:38 ../.tox/tests190/lib/python3.11/site-packages/urllib3/connectionpool.py:493: in _make_request 06:17:38 conn.request( 06:17:38 ../.tox/tests190/lib/python3.11/site-packages/urllib3/connection.py:494: in request 06:17:38 self.endheaders() 06:17:38 /opt/pyenv/versions/3.11.10/lib/python3.11/http/client.py:1298: in endheaders 06:17:38 self._send_output(message_body, encode_chunked=encode_chunked) 06:17:38 /opt/pyenv/versions/3.11.10/lib/python3.11/http/client.py:1058: in _send_output 06:17:38 self.send(msg) 06:17:38 /opt/pyenv/versions/3.11.10/lib/python3.11/http/client.py:996: in send 06:17:38 self.connect() 06:17:38 ../.tox/tests190/lib/python3.11/site-packages/urllib3/connection.py:325: in connect 06:17:38 self.sock = self._new_conn() 06:17:38 ^^^^^^^^^^^^^^^^ 06:17:38 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 06:17:38 06:17:38 self = 06:17:38 06:17:38 def _new_conn(self) -> socket.socket: 06:17:38 """Establish a socket connection and set nodelay settings on it. 06:17:38 06:17:38 :return: New socket connection. 06:17:38 """ 06:17:38 try: 06:17:38 sock = connection.create_connection( 06:17:38 (self._dns_host, self.port), 06:17:38 self.timeout, 06:17:38 source_address=self.source_address, 06:17:38 socket_options=self.socket_options, 06:17:38 ) 06:17:38 except socket.gaierror as e: 06:17:38 raise NameResolutionError(self.host, self, e) from e 06:17:38 except SocketTimeout as e: 06:17:38 raise ConnectTimeoutError( 06:17:38 self, 06:17:38 f"Connection to {self.host} timed out. (connect timeout={self.timeout})", 06:17:38 ) from e 06:17:38 06:17:38 except OSError as e: 06:17:38 > raise NewConnectionError( 06:17:38 self, f"Failed to establish a new connection: {e}" 06:17:38 ) from e 06:17:38 E urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused 06:17:38 06:17:38 ../.tox/tests190/lib/python3.11/site-packages/urllib3/connection.py:213: NewConnectionError 06:17:38 06:17:38 The above exception was the direct cause of the following exception: 06:17:38 06:17:38 self = 06:17:38 request = , stream = False 06:17:38 timeout = Timeout(connect=30, read=30, total=None), verify = True, cert = None 06:17:38 proxies = OrderedDict() 06:17:38 06:17:38 def send( 06:17:38 self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 06:17:38 ): 06:17:38 """Sends PreparedRequest object. Returns Response object. 06:17:38 06:17:38 :param request: The :class:`PreparedRequest ` being sent. 06:17:38 :param stream: (optional) Whether to stream the request content. 06:17:38 :param timeout: (optional) How long to wait for the server to send 06:17:38 data before giving up, as a float, or a :ref:`(connect timeout, 06:17:38 read timeout) ` tuple. 
06:17:38 :type timeout: float or tuple or urllib3 Timeout object 06:17:38 :param verify: (optional) Either a boolean, in which case it controls whether 06:17:38 we verify the server's TLS certificate, or a string, in which case it 06:17:38 must be a path to a CA bundle to use 06:17:38 :param cert: (optional) Any user-provided SSL certificate to be trusted. 06:17:38 :param proxies: (optional) The proxies dictionary to apply to the request. 06:17:38 :rtype: requests.Response 06:17:38 """ 06:17:38 06:17:38 try: 06:17:38 conn = self.get_connection_with_tls_context( 06:17:38 request, verify, proxies=proxies, cert=cert 06:17:38 ) 06:17:38 except LocationValueError as e: 06:17:38 raise InvalidURL(e, request=request) 06:17:38 06:17:38 self.cert_verify(conn, request.url, verify, cert) 06:17:38 url = self.request_url(request, proxies) 06:17:38 self.add_headers( 06:17:38 request, 06:17:38 stream=stream, 06:17:38 timeout=timeout, 06:17:38 verify=verify, 06:17:38 cert=cert, 06:17:38 proxies=proxies, 06:17:38 ) 06:17:38 06:17:38 chunked = not (request.body is None or "Content-Length" in request.headers) 06:17:38 06:17:38 if isinstance(timeout, tuple): 06:17:38 try: 06:17:38 connect, read = timeout 06:17:38 timeout = TimeoutSauce(connect=connect, read=read) 06:17:38 except ValueError: 06:17:38 raise ValueError( 06:17:38 f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 06:17:38 f"or a single float to set both timeouts to the same value." 06:17:38 ) 06:17:38 elif isinstance(timeout, TimeoutSauce): 06:17:38 pass 06:17:38 else: 06:17:38 timeout = TimeoutSauce(connect=timeout, read=timeout) 06:17:38 06:17:38 try: 06:17:38 > resp = conn.urlopen( 06:17:38 method=request.method, 06:17:38 url=url, 06:17:38 body=request.body, 06:17:38 headers=request.headers, 06:17:38 redirect=False, 06:17:38 assert_same_host=False, 06:17:38 preload_content=False, 06:17:38 decode_content=False, 06:17:38 retries=self.max_retries, 06:17:38 timeout=timeout, 06:17:38 chunked=chunked, 06:17:38 ) 06:17:38 06:17:38 ../.tox/tests190/lib/python3.11/site-packages/requests/adapters.py:667: 06:17:38 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 06:17:38 ../.tox/tests190/lib/python3.11/site-packages/urllib3/connectionpool.py:841: in urlopen 06:17:38 retries = retries.increment( 06:17:38 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 06:17:38 06:17:38 self = Retry(total=0, connect=None, read=False, redirect=None, status=None) 06:17:38 method = 'DELETE' 06:17:38 url = '/rests/data/network-topology:network-topology/topology=topology-netconf/node=XPDR-OC' 06:17:38 response = None 06:17:38 error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused') 06:17:38 _pool = 06:17:38 _stacktrace = 06:17:38 06:17:38 def increment( 06:17:38 self, 06:17:38 method: str | None = None, 06:17:38 url: str | None = None, 06:17:38 response: BaseHTTPResponse | None = None, 06:17:38 error: Exception | None = None, 06:17:38 _pool: ConnectionPool | None = None, 06:17:38 _stacktrace: TracebackType | None = None, 06:17:38 ) -> Self: 06:17:38 """Return a new Retry object with incremented retry counters. 06:17:38 06:17:38 :param response: A response object, or None, if the server did not 06:17:38 return a response. 06:17:38 :type response: :class:`~urllib3.response.BaseHTTPResponse` 06:17:38 :param Exception error: An error encountered during the request, or 06:17:38 None if the response was received successfully. 
06:17:38 06:17:38 :return: A new ``Retry`` object. 06:17:38 """ 06:17:38 if self.total is False and error: 06:17:38 # Disabled, indicate to re-raise the error. 06:17:38 raise reraise(type(error), error, _stacktrace) 06:17:38 06:17:38 total = self.total 06:17:38 if total is not None: 06:17:38 total -= 1 06:17:38 06:17:38 connect = self.connect 06:17:38 read = self.read 06:17:38 redirect = self.redirect 06:17:38 status_count = self.status 06:17:38 other = self.other 06:17:38 cause = "unknown" 06:17:38 status = None 06:17:38 redirect_location = None 06:17:38 06:17:38 if error and self._is_connection_error(error): 06:17:38 # Connect retry? 06:17:38 if connect is False: 06:17:38 raise reraise(type(error), error, _stacktrace) 06:17:38 elif connect is not None: 06:17:38 connect -= 1 06:17:38 06:17:38 elif error and self._is_read_error(error): 06:17:38 # Read retry? 06:17:38 if read is False or method is None or not self._is_method_retryable(method): 06:17:38 raise reraise(type(error), error, _stacktrace) 06:17:38 elif read is not None: 06:17:38 read -= 1 06:17:38 06:17:38 elif error: 06:17:38 # Other retry? 06:17:38 if other is not None: 06:17:38 other -= 1 06:17:38 06:17:38 elif response and response.get_redirect_location(): 06:17:38 # Redirect retry? 06:17:38 if redirect is not None: 06:17:38 redirect -= 1 06:17:38 cause = "too many redirects" 06:17:38 response_redirect_location = response.get_redirect_location() 06:17:38 if response_redirect_location: 06:17:38 redirect_location = response_redirect_location 06:17:38 status = response.status 06:17:38 06:17:38 else: 06:17:38 # Incrementing because of a server error like a 500 in 06:17:38 # status_forcelist and the given method is in the allowed_methods 06:17:38 cause = ResponseError.GENERIC_ERROR 06:17:38 if response and response.status: 06:17:38 if status_count is not None: 06:17:38 status_count -= 1 06:17:38 cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status) 06:17:38 status = response.status 06:17:38 06:17:38 history = self.history + ( 06:17:38 RequestHistory(method, url, error, status, redirect_location), 06:17:38 ) 06:17:38 06:17:38 new_retry = self.new( 06:17:38 total=total, 06:17:38 connect=connect, 06:17:38 read=read, 06:17:38 redirect=redirect, 06:17:38 status=status_count, 06:17:38 other=other, 06:17:38 history=history, 06:17:38 ) 06:17:38 06:17:38 if new_retry.is_exhausted(): 06:17:38 reason = error or ResponseError(cause) 06:17:38 > raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type] 06:17:38 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 06:17:38 E urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=8190): Max retries exceeded with url: /rests/data/network-topology:network-topology/topology=topology-netconf/node=XPDR-OC (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 06:17:38 06:17:38 ../.tox/tests190/lib/python3.11/site-packages/urllib3/util/retry.py:519: MaxRetryError 06:17:38 06:17:38 During handling of the above exception, another exception occurred: 06:17:38 06:17:38 self = 06:17:38 06:17:38 def test_10_xpdr_device_disconnection(self): 06:17:38 > response = test_utils.unmount_device("XPDR-OC") 06:17:38 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 06:17:38 06:17:38 transportpce_tests/oc/test01_portmapping.py:147: 06:17:38 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 06:17:38 transportpce_tests/common/test_utils.py:379: in unmount_device 06:17:38 response = 
delete_request(url[RESTCONF_VERSION].format('{}', node)) 06:17:38 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 06:17:38 transportpce_tests/common/test_utils.py:133: in delete_request 06:17:38 return requests.request( 06:17:38 ../.tox/tests190/lib/python3.11/site-packages/requests/api.py:59: in request 06:17:38 return session.request(method=method, url=url, **kwargs) 06:17:38 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 06:17:38 ../.tox/tests190/lib/python3.11/site-packages/requests/sessions.py:589: in request 06:17:38 resp = self.send(prep, **send_kwargs) 06:17:38 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 06:17:38 ../.tox/tests190/lib/python3.11/site-packages/requests/sessions.py:703: in send 06:17:38 r = adapter.send(request, **kwargs) 06:17:38 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 06:17:38 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 06:17:38 06:17:38 self = 06:17:38 request = , stream = False 06:17:38 timeout = Timeout(connect=30, read=30, total=None), verify = True, cert = None 06:17:38 proxies = OrderedDict() 06:17:38 06:17:38 def send( 06:17:38 self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 06:17:38 ): 06:17:38 """Sends PreparedRequest object. Returns Response object. 06:17:38 06:17:38 :param request: The :class:`PreparedRequest ` being sent. 06:17:38 :param stream: (optional) Whether to stream the request content. 06:17:38 :param timeout: (optional) How long to wait for the server to send 06:17:38 data before giving up, as a float, or a :ref:`(connect timeout, 06:17:38 read timeout) ` tuple. 06:17:38 :type timeout: float or tuple or urllib3 Timeout object 06:17:38 :param verify: (optional) Either a boolean, in which case it controls whether 06:17:38 we verify the server's TLS certificate, or a string, in which case it 06:17:38 must be a path to a CA bundle to use 06:17:38 :param cert: (optional) Any user-provided SSL certificate to be trusted. 06:17:38 :param proxies: (optional) The proxies dictionary to apply to the request. 06:17:38 :rtype: requests.Response 06:17:38 """ 06:17:38 06:17:38 try: 06:17:38 conn = self.get_connection_with_tls_context( 06:17:38 request, verify, proxies=proxies, cert=cert 06:17:38 ) 06:17:38 except LocationValueError as e: 06:17:38 raise InvalidURL(e, request=request) 06:17:38 06:17:38 self.cert_verify(conn, request.url, verify, cert) 06:17:38 url = self.request_url(request, proxies) 06:17:38 self.add_headers( 06:17:38 request, 06:17:38 stream=stream, 06:17:38 timeout=timeout, 06:17:38 verify=verify, 06:17:38 cert=cert, 06:17:38 proxies=proxies, 06:17:38 ) 06:17:38 06:17:38 chunked = not (request.body is None or "Content-Length" in request.headers) 06:17:38 06:17:38 if isinstance(timeout, tuple): 06:17:38 try: 06:17:38 connect, read = timeout 06:17:38 timeout = TimeoutSauce(connect=connect, read=read) 06:17:38 except ValueError: 06:17:38 raise ValueError( 06:17:38 f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 06:17:38 f"or a single float to set both timeouts to the same value." 
06:17:38                 )
06:17:38         elif isinstance(timeout, TimeoutSauce):
06:17:38             pass
06:17:38         else:
06:17:38             timeout = TimeoutSauce(connect=timeout, read=timeout)
06:17:38
06:17:38         try:
06:17:38             resp = conn.urlopen(
06:17:38                 method=request.method,
06:17:38                 url=url,
06:17:38                 body=request.body,
06:17:38                 headers=request.headers,
06:17:38                 redirect=False,
06:17:38                 assert_same_host=False,
06:17:38                 preload_content=False,
06:17:38                 decode_content=False,
06:17:38                 retries=self.max_retries,
06:17:38                 timeout=timeout,
06:17:38                 chunked=chunked,
06:17:38             )
06:17:38
06:17:38         except (ProtocolError, OSError) as err:
06:17:38             raise ConnectionError(err, request=request)
06:17:38
06:17:38         except MaxRetryError as e:
06:17:38             if isinstance(e.reason, ConnectTimeoutError):
06:17:38                 # TODO: Remove this in 3.0.0: see #2811
06:17:38                 if not isinstance(e.reason, NewConnectionError):
06:17:38                     raise ConnectTimeout(e, request=request)
06:17:38
06:17:38             if isinstance(e.reason, ResponseError):
06:17:38                 raise RetryError(e, request=request)
06:17:38
06:17:38             if isinstance(e.reason, _ProxyError):
06:17:38                 raise ProxyError(e, request=request)
06:17:38
06:17:38             if isinstance(e.reason, _SSLError):
06:17:38                 # This branch is for urllib3 v1.22 and later.
06:17:38                 raise SSLError(e, request=request)
06:17:38
06:17:38 >           raise ConnectionError(e, request=request)
06:17:38 E           requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=8190): Max retries exceeded with url: /rests/data/network-topology:network-topology/topology=topology-netconf/node=XPDR-OC (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused'))
06:17:38
06:17:38 ../.tox/tests190/lib/python3.11/site-packages/requests/adapters.py:700: ConnectionError
06:17:38 ----------------------------- Captured stdout call -----------------------------
06:17:38 execution of test_10_xpdr_device_disconnection
06:17:38 =========================== short test summary info ============================
06:17:38 FAILED transportpce_tests/oc/test01_portmapping.py::TransportpceOCPortMappingTesting::test_01_meta_data_insertion
06:17:38 FAILED transportpce_tests/oc/test01_portmapping.py::TransportpceOCPortMappingTesting::test_02_catlog_input_insertion
06:17:38 FAILED transportpce_tests/oc/test01_portmapping.py::TransportpceOCPortMappingTesting::test_03_xpdr_device_connection
06:17:38 FAILED transportpce_tests/oc/test01_portmapping.py::TransportpceOCPortMappingTesting::test_04_xpdr_device_connected
06:17:38 FAILED transportpce_tests/oc/test01_portmapping.py::TransportpceOCPortMappingTesting::test_05_xpdr_portmapping_info
06:17:38 FAILED transportpce_tests/oc/test01_portmapping.py::TransportpceOCPortMappingTesting::test_06_mpdr_portmapping_NETWORK5
06:17:38 FAILED transportpce_tests/oc/test01_portmapping.py::TransportpceOCPortMappingTesting::test_07_mpdr_portmapping_CLIENT1
06:17:38 FAILED transportpce_tests/oc/test01_portmapping.py::TransportpceOCPortMappingTesting::test_08_mpdr_switching_pool
06:17:38 FAILED transportpce_tests/oc/test01_portmapping.py::TransportpceOCPortMappingTesting::test_09_check_mccapprofile
06:17:38 FAILED transportpce_tests/oc/test01_portmapping.py::TransportpceOCPortMappingTesting::test_10_xpdr_device_disconnection
06:17:38 ERROR transportpce_tests/oc/test01_portmapping.py::TransportpceOCPortMappingTesting::test_10_xpdr_device_disconnection
06:17:38 10 failed, 1 error in 37.81s
06:17:38 tests190: exit 1 (38.11 seconds) /w/workspace/transportpce-tox-verify-transportpce-master/tests> ./launch_tests.sh oc pid=5610
06:17:40
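All ten failures above stem from the same "[Errno 111] Connection refused" against localhost:8190, including the tests that merely connect the simulated device, which suggests the controller instance backing the tests190 (oc) suite never became reachable on its RESTCONF port rather than a problem in the disconnection step itself. Below is a minimal stand-alone sketch of how the failing unmount call could be reproduced by hand once the controller is up; the base URL, the port 8190 and the admin/admin credentials are assumptions for a local run, not values taken from the repository.

    # Hypothetical stand-alone reproduction of the DELETE issued by
    # test_utils.unmount_device("XPDR-OC"); not part of the test suite.
    import time
    import requests

    BASE = "http://localhost:8190/rests/data"  # assumed RESTCONF root for the tests190 env
    NODE = BASE + "/network-topology:network-topology/topology=topology-netconf/node=XPDR-OC"
    AUTH = ("admin", "admin")  # assumption: default credentials

    def wait_for_controller(timeout_s=60):
        """Poll the RESTCONF root until the port accepts TCP connections."""
        deadline = time.time() + timeout_s
        while time.time() < deadline:
            try:
                requests.get(BASE, auth=AUTH, timeout=5)
                return True  # port is open, even if the reply is a 4xx
            except requests.exceptions.ConnectionError:
                time.sleep(2)  # still getting Errno 111, keep waiting
        return False

    if wait_for_controller():
        resp = requests.delete(NODE, auth=AUTH, timeout=(30, 30))
        print(resp.status_code)  # RESTCONF normally answers 204 on a successful delete
    else:
        print("port 8190 never opened; check the controller start-up for this tox env")

If wait_for_controller() never succeeds even when the environment is started manually, the Errno 111 points at the controller launch for this environment rather than at the test code.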
....... [100%] 06:18:11 20 passed in 123.14s (0:02:03) 06:18:11 pytest -q transportpce_tests/pce/test02_pce_400G.py 06:18:28 ............ [100%] 06:18:58 12 passed in 46.47s 06:18:58 pytest -q transportpce_tests/pce/test03_gnpy.py 06:19:14 ........ [100%] 06:19:35 8 passed in 37.16s 06:19:35 pytest -q transportpce_tests/pce/test04_pce_bug_fix.py 06:20:07 ... [100%] 06:20:12 3 passed in 36.19s 06:20:12 tests190: FAIL ✖ in 46.7 seconds 06:20:12 testsPCE: OK ✔ in 5 minutes 7.77 seconds 06:20:12 tests_tapi: install_deps> python -I -m pip install 'setuptools>=7.0' -r /w/workspace/transportpce-tox-verify-transportpce-master/tests/requirements.txt -r /w/workspace/transportpce-tox-verify-transportpce-master/tests/test-requirements.txt 06:20:18 tests_tapi: freeze> python -m pip freeze --all 06:20:18 tests_tapi: bcrypt==4.3.0,certifi==2025.6.15,cffi==1.17.1,charset-normalizer==3.4.2,cryptography==45.0.4,dict2xml==1.7.6,idna==3.10,iniconfig==2.1.0,lxml==5.4.0,netconf-client==3.2.0,packaging==25.0,paramiko==3.5.1,pip==25.1.1,pluggy==1.6.0,psutil==7.0.0,pycparser==2.22,Pygments==2.19.2,PyNaCl==1.5.0,pytest==8.4.1,requests==2.32.4,setuptools==80.3.1,urllib3==2.5.0 06:20:18 tests_tapi: commands[0] /w/workspace/transportpce-tox-verify-transportpce-master/tests> ./launch_tests.sh tapi 06:20:18 using environment variables from ./karaf221.env 06:20:18 pytest -q transportpce_tests/tapi/test01_abstracted_topology.py 06:21:26 ................................................... [100%] 06:30:32 51 passed in 613.93s (0:10:13) 06:30:33 pytest -q transportpce_tests/tapi/test02_full_topology.py 06:31:23 .................................... [100%] 06:35:58 36 passed in 325.12s (0:05:25) 06:35:58 pytest -q transportpce_tests/tapi/test03_tapi_device_change_notifications.py 06:36:44 ....................................................................... [100%] 06:44:03 71 passed in 484.98s (0:08:04) 06:44:03 pytest -q transportpce_tests/tapi/test04_topo_extension.py 06:44:54 ................... [100%] 06:46:24 19 passed in 140.52s (0:02:20) 06:46:24 tests_tapi: OK ✔ in 26 minutes 12.26 seconds 06:46:24 tests71: install_deps> python -I -m pip install 'setuptools>=7.0' -r /w/workspace/transportpce-tox-verify-transportpce-master/tests/requirements.txt -r /w/workspace/transportpce-tox-verify-transportpce-master/tests/test-requirements.txt 06:46:31 tests71: freeze> python -m pip freeze --all 06:46:31 tests71: bcrypt==4.3.0,certifi==2025.6.15,cffi==1.17.1,charset-normalizer==3.4.2,cryptography==45.0.4,dict2xml==1.7.6,idna==3.10,iniconfig==2.1.0,lxml==5.4.0,netconf-client==3.2.0,packaging==25.0,paramiko==3.5.1,pip==25.1.1,pluggy==1.6.0,psutil==7.0.0,pycparser==2.22,Pygments==2.19.2,PyNaCl==1.5.0,pytest==8.4.1,requests==2.32.4,setuptools==80.3.1,urllib3==2.5.0 06:46:31 tests71: commands[0] /w/workspace/transportpce-tox-verify-transportpce-master/tests> ./launch_tests.sh 7.1 06:46:31 using environment variables from ./karaf71.env 06:46:31 pytest -q transportpce_tests/7.1/test01_portmapping.py 06:47:02 ............ [100%] 06:47:15 12 passed in 44.24s 06:47:15 pytest -q transportpce_tests/7.1/test02_otn_renderer.py 06:47:41 .............................................................. [100%] 06:49:51 62 passed in 155.72s (0:02:35) 06:49:51 pytest -q transportpce_tests/7.1/test03_renderer_or_modes.py 06:50:23 ................................................ [100%] 06:52:07 48 passed in 135.56s (0:02:15) 06:52:07 pytest -q transportpce_tests/7.1/test04_renderer_regen_mode.py 06:52:33 ...................... 
[100%] 06:53:20 22 passed in 72.85s (0:01:12) 06:53:20 tests71: OK ✔ in 6 minutes 56.17 seconds 06:53:20 tests221: install_deps> python -I -m pip install 'setuptools>=7.0' -r /w/workspace/transportpce-tox-verify-transportpce-master/tests/requirements.txt -r /w/workspace/transportpce-tox-verify-transportpce-master/tests/test-requirements.txt 06:53:27 tests221: freeze> python -m pip freeze --all 06:53:27 tests221: bcrypt==4.3.0,certifi==2025.6.15,cffi==1.17.1,charset-normalizer==3.4.2,cryptography==45.0.4,dict2xml==1.7.6,idna==3.10,iniconfig==2.1.0,lxml==5.4.0,netconf-client==3.2.0,packaging==25.0,paramiko==3.5.1,pip==25.1.1,pluggy==1.6.0,psutil==7.0.0,pycparser==2.22,Pygments==2.19.2,PyNaCl==1.5.0,pytest==8.4.1,requests==2.32.4,setuptools==80.3.1,urllib3==2.5.0 06:53:27 tests221: commands[0] /w/workspace/transportpce-tox-verify-transportpce-master/tests> ./launch_tests.sh 2.2.1 06:53:27 using environment variables from ./karaf221.env 06:53:27 pytest -q transportpce_tests/2.2.1/test01_portmapping.py 06:54:03 ................................... [100%] 06:54:43 35 passed in 75.73s (0:01:15) 06:54:43 pytest -q transportpce_tests/2.2.1/test02_topo_portmapping.py 06:55:14 ...... [100%] 06:55:27 6 passed in 44.04s 06:55:27 pytest -q transportpce_tests/2.2.1/test03_topology.py 06:56:10 ............................................ [100%] 06:57:44 44 passed in 136.53s (0:02:16) 06:57:44 pytest -q transportpce_tests/2.2.1/test04_otn_topology.py 06:58:20 ............ [100%] 06:58:44 12 passed in 60.11s (0:01:00) 06:58:44 pytest -q transportpce_tests/2.2.1/test05_flex_grid.py 06:59:10 ................ [100%] 07:00:39 16 passed in 114.67s (0:01:54) 07:00:39 pytest -q transportpce_tests/2.2.1/test06_renderer_service_path_nominal.py 07:01:08 ............................... [100%] 07:01:15 31 passed in 34.87s 07:01:15 pytest -q transportpce_tests/2.2.1/test07_otn_renderer.py 07:01:49 .......................... [100%] 07:02:45 26 passed in 90.18s (0:01:30) 07:02:45 pytest -q transportpce_tests/2.2.1/test08_otn_sh_renderer.py 07:03:21 ...................... [100%] 07:04:24 22 passed in 98.53s (0:01:38) 07:04:24 pytest -q transportpce_tests/2.2.1/test09_olm.py 07:05:04 ........................................ [100%] 07:07:26 40 passed in 181.46s (0:03:01) 07:07:26 pytest -q transportpce_tests/2.2.1/test11_otn_end2end.py 07:08:08 ........................................................................ [ 74%] 07:13:44 ......................... [100%] 07:15:36 97 passed in 490.03s (0:08:10) 07:15:36 pytest -q transportpce_tests/2.2.1/test12_end2end.py 07:16:15 ...................................................... [100%] 07:23:01 54 passed in 445.20s (0:07:25) 07:23:01 pytest -q transportpce_tests/2.2.1/test14_otn_switch_end2end.py 07:23:56 ........................................................................ [ 71%] 07:29:05 ............................. [100%] 07:31:14 101 passed in 492.32s (0:08:12) 07:31:14 pytest -q transportpce_tests/2.2.1/test15_otn_end2end_with_intermediate_switch.py 07:32:08 ........................................................................ [ 67%] 07:37:54 ................................... [100%] 07:41:15 107 passed in 600.48s (0:10:00) 07:41:15 pytest -q transportpce_tests/2.2.1/test16_freq_end2end.py 07:41:57 ............................................. 
[100%] 07:44:34 45 passed in 198.64s (0:03:18) 07:44:34 tests221: OK ✔ in 51 minutes 13.42 seconds 07:44:34 tests121: install_deps> python -I -m pip install 'setuptools>=7.0' -r /w/workspace/transportpce-tox-verify-transportpce-master/tests/requirements.txt -r /w/workspace/transportpce-tox-verify-transportpce-master/tests/test-requirements.txt 07:44:41 tests121: freeze> python -m pip freeze --all 07:44:41 tests121: bcrypt==4.3.0,certifi==2025.6.15,cffi==1.17.1,charset-normalizer==3.4.2,cryptography==45.0.4,dict2xml==1.7.6,idna==3.10,iniconfig==2.1.0,lxml==5.4.0,netconf-client==3.2.0,packaging==25.0,paramiko==3.5.1,pip==25.1.1,pluggy==1.6.0,psutil==7.0.0,pycparser==2.22,Pygments==2.19.2,PyNaCl==1.5.0,pytest==8.4.1,requests==2.32.4,setuptools==80.9.0,urllib3==2.5.0 07:44:41 tests121: commands[0] /w/workspace/transportpce-tox-verify-transportpce-master/tests> ./launch_tests.sh 1.2.1 07:44:41 using environment variables from ./karaf121.env 07:44:41 pytest -q transportpce_tests/1.2.1/test01_portmapping.py 07:45:17 ..................... [100%] 07:46:07 21 passed in 85.30s (0:01:25) 07:46:07 pytest -q transportpce_tests/1.2.1/test02_topo_portmapping.py 07:46:38 ...... [100%] 07:46:51 6 passed in 44.21s 07:46:51 pytest -q transportpce_tests/1.2.1/test03_topology.py 07:47:33 ............................................ [100%] 07:49:08 44 passed in 136.12s (0:02:16) 07:49:08 pytest -q transportpce_tests/1.2.1/test04_renderer_service_path_nominal.py 07:49:38 ........................ [100%] 07:50:31 24 passed in 82.50s (0:01:22) 07:50:31 pytest -q transportpce_tests/1.2.1/test05_olm.py 07:51:10 ........................................ [100%] 07:53:32 40 passed in 180.92s (0:03:00) 07:53:32 pytest -q transportpce_tests/1.2.1/test06_end2end.py 07:54:11 ...................................................... [100%] 08:02:25 54 passed in 533.39s (0:08:53) 08:02:26 tests121: OK ✔ in 17 minutes 51.82 seconds 08:02:26 tests_hybrid: install_deps> python -I -m pip install 'setuptools>=7.0' -r /w/workspace/transportpce-tox-verify-transportpce-master/tests/requirements.txt -r /w/workspace/transportpce-tox-verify-transportpce-master/tests/test-requirements.txt 08:02:33 tests_hybrid: freeze> python -m pip freeze --all 08:02:33 tests_hybrid: bcrypt==4.3.0,certifi==2025.6.15,cffi==1.17.1,charset-normalizer==3.4.2,cryptography==45.0.4,dict2xml==1.7.6,idna==3.10,iniconfig==2.1.0,lxml==5.4.0,netconf-client==3.2.0,packaging==25.0,paramiko==3.5.1,pip==25.1.1,pluggy==1.6.0,psutil==7.0.0,pycparser==2.22,Pygments==2.19.2,PyNaCl==1.5.0,pytest==8.4.1,requests==2.32.4,setuptools==80.9.0,urllib3==2.5.0 08:02:33 tests_hybrid: commands[0] /w/workspace/transportpce-tox-verify-transportpce-master/tests> ./launch_tests.sh hybrid 08:02:33 using environment variables from ./karaf121.env 08:02:33 pytest -q transportpce_tests/hybrid/test01_device_change_notifications.py 08:03:18 ................................................... [100%] 08:08:04 51 passed in 331.22s (0:05:31) 08:08:04 pytest -q transportpce_tests/hybrid/test02_B100G_end2end.py 08:08:46 ........................................................................ [ 66%] 08:13:06 ..................................... [100%] 08:18:12 109 passed in 607.78s (0:10:07) 08:18:13 pytest -q transportpce_tests/hybrid/test03_autonomous_reroute.py 08:19:00 ..................................................... 
[100%]
08:22:32 53 passed in 259.39s (0:04:19)
08:22:32 tests_hybrid: OK ✔ in 20 minutes 6.69 seconds
08:22:32 buildlighty: install_deps> python -I -m pip install 'setuptools>=7.0' -r /w/workspace/transportpce-tox-verify-transportpce-master/tests/requirements.txt -r /w/workspace/transportpce-tox-verify-transportpce-master/tests/test-requirements.txt
08:22:39 buildlighty: freeze> python -m pip freeze --all
08:22:40 buildlighty: bcrypt==4.3.0,certifi==2025.6.15,cffi==1.17.1,charset-normalizer==3.4.2,cryptography==45.0.4,dict2xml==1.7.6,idna==3.10,iniconfig==2.1.0,lxml==5.4.0,netconf-client==3.2.0,packaging==25.0,paramiko==3.5.1,pip==25.1.1,pluggy==1.6.0,psutil==7.0.0,pycparser==2.22,Pygments==2.19.2,PyNaCl==1.5.0,pytest==8.4.1,requests==2.32.4,setuptools==80.9.0,urllib3==2.5.0
08:22:40 buildlighty: commands[0] /w/workspace/transportpce-tox-verify-transportpce-master/lighty> ./build.sh
08:22:40 NOTE: Picked up JDK_JAVA_OPTIONS: --add-opens=java.base/java.lang=ALL-UNNAMED --add-opens=java.base/java.nio=ALL-UNNAMED
08:23:01 buildcontroller: OK (112.71=setup[8.20]+cmd[104.51] seconds)
08:23:01 testsPCE: OK (307.77=setup[63.61]+cmd[244.16] seconds)
08:23:01 sims: OK (10.47=setup[7.19]+cmd[3.28] seconds)
08:23:01 build_karaf_tests121: OK (53.85=setup[7.29]+cmd[46.56] seconds)
08:23:01 tests121: OK (1071.82=setup[7.69]+cmd[1064.13] seconds)
08:23:01 build_karaf_tests221: OK (54.77=setup[7.25]+cmd[47.52] seconds)
08:23:01 tests_tapi: OK (1572.26=setup[6.44]+cmd[1565.82] seconds)
08:23:01 tests221: OK (3073.42=setup[6.59]+cmd[3066.83] seconds)
08:23:01 build_karaf_tests71: OK (49.06=setup[13.94]+cmd[35.11] seconds)
08:23:01 tests71: OK (416.17=setup[6.69]+cmd[409.48] seconds)
08:23:01 build_karaf_tests190: OK (53.52=setup[9.30]+cmd[44.22] seconds)
08:23:01 tests190: FAIL code 1 (46.70=setup[8.59]+cmd[38.11] seconds)
08:23:01 build_karaf_tests_hybrid: OK (53.71=setup[8.44]+cmd[45.27] seconds)
08:23:01 tests_hybrid: OK (1206.69=setup[7.42]+cmd[1199.28] seconds)
08:23:01 buildlighty: OK (28.99=setup[7.43]+cmd[21.56] seconds)
08:23:01 docs: OK (26.80=setup[24.90]+cmd[1.89] seconds)
08:23:01 docs-linkcheck: OK (27.19=setup[24.54]+cmd[2.65] seconds)
08:23:01 checkbashisms: OK (3.26=setup[1.99]+cmd[0.01,0.06,1.20] seconds)
08:23:01 pre-commit: OK (49.03=setup[2.70]+cmd[0.01,0.01,37.48,8.83] seconds)
08:23:01 pylint: OK (30.07=setup[3.36]+cmd[26.71] seconds)
08:23:01 evaluation failed :( (7789.92 seconds)
08:23:01 + tox_status=255
08:23:01 + echo '---> Completed tox runs'
08:23:01 ---> Completed tox runs
08:23:01 + for i in .tox/*/log
08:23:01 ++ echo .tox/build_karaf_tests121/log
08:23:01 ++ awk -F/ '{print $2}'
08:23:01 + tox_env=build_karaf_tests121
08:23:01 + cp -r .tox/build_karaf_tests121/log /w/workspace/transportpce-tox-verify-transportpce-master/archives/tox/build_karaf_tests121
08:23:01 + for i in .tox/*/log
08:23:01 ++ echo .tox/build_karaf_tests190/log
08:23:01 ++ awk -F/ '{print $2}'
08:23:01 + tox_env=build_karaf_tests190
08:23:01 + cp -r .tox/build_karaf_tests190/log /w/workspace/transportpce-tox-verify-transportpce-master/archives/tox/build_karaf_tests190
08:23:01 + for i in .tox/*/log
08:23:01 ++ echo .tox/build_karaf_tests221/log
08:23:01 ++ awk -F/ '{print $2}'
08:23:01 + tox_env=build_karaf_tests221
08:23:01 + cp -r .tox/build_karaf_tests221/log /w/workspace/transportpce-tox-verify-transportpce-master/archives/tox/build_karaf_tests221
08:23:01 + for i in .tox/*/log
08:23:01 ++ echo .tox/build_karaf_tests71/log
08:23:01 ++ awk -F/ '{print $2}'
08:23:01 + tox_env=build_karaf_tests71
08:23:01 + cp -r .tox/build_karaf_tests71/log /w/workspace/transportpce-tox-verify-transportpce-master/archives/tox/build_karaf_tests71 08:23:01 + for i in .tox/*/log 08:23:01 ++ echo .tox/build_karaf_tests_hybrid/log 08:23:01 ++ awk -F/ '{print $2}' 08:23:01 + tox_env=build_karaf_tests_hybrid 08:23:01 + cp -r .tox/build_karaf_tests_hybrid/log /w/workspace/transportpce-tox-verify-transportpce-master/archives/tox/build_karaf_tests_hybrid 08:23:01 + for i in .tox/*/log 08:23:01 ++ echo .tox/buildcontroller/log 08:23:01 ++ awk -F/ '{print $2}' 08:23:01 + tox_env=buildcontroller 08:23:01 + cp -r .tox/buildcontroller/log /w/workspace/transportpce-tox-verify-transportpce-master/archives/tox/buildcontroller 08:23:01 + for i in .tox/*/log 08:23:01 ++ echo .tox/buildlighty/log 08:23:01 ++ awk -F/ '{print $2}' 08:23:01 + tox_env=buildlighty 08:23:01 + cp -r .tox/buildlighty/log /w/workspace/transportpce-tox-verify-transportpce-master/archives/tox/buildlighty 08:23:01 + for i in .tox/*/log 08:23:01 ++ echo .tox/checkbashisms/log 08:23:01 ++ awk -F/ '{print $2}' 08:23:01 + tox_env=checkbashisms 08:23:01 + cp -r .tox/checkbashisms/log /w/workspace/transportpce-tox-verify-transportpce-master/archives/tox/checkbashisms 08:23:01 + for i in .tox/*/log 08:23:01 ++ echo .tox/docs-linkcheck/log 08:23:01 ++ awk -F/ '{print $2}' 08:23:01 + tox_env=docs-linkcheck 08:23:01 + cp -r .tox/docs-linkcheck/log /w/workspace/transportpce-tox-verify-transportpce-master/archives/tox/docs-linkcheck 08:23:01 + for i in .tox/*/log 08:23:01 ++ echo .tox/docs/log 08:23:01 ++ awk -F/ '{print $2}' 08:23:01 + tox_env=docs 08:23:01 + cp -r .tox/docs/log /w/workspace/transportpce-tox-verify-transportpce-master/archives/tox/docs 08:23:01 + for i in .tox/*/log 08:23:01 ++ echo .tox/pre-commit/log 08:23:01 ++ awk -F/ '{print $2}' 08:23:01 + tox_env=pre-commit 08:23:01 + cp -r .tox/pre-commit/log /w/workspace/transportpce-tox-verify-transportpce-master/archives/tox/pre-commit 08:23:01 + for i in .tox/*/log 08:23:01 ++ echo .tox/pylint/log 08:23:01 ++ awk -F/ '{print $2}' 08:23:01 + tox_env=pylint 08:23:01 + cp -r .tox/pylint/log /w/workspace/transportpce-tox-verify-transportpce-master/archives/tox/pylint 08:23:01 + for i in .tox/*/log 08:23:01 ++ echo .tox/sims/log 08:23:01 ++ awk -F/ '{print $2}' 08:23:01 + tox_env=sims 08:23:01 + cp -r .tox/sims/log /w/workspace/transportpce-tox-verify-transportpce-master/archives/tox/sims 08:23:01 + for i in .tox/*/log 08:23:01 ++ echo .tox/tests121/log 08:23:01 ++ awk -F/ '{print $2}' 08:23:01 + tox_env=tests121 08:23:01 + cp -r .tox/tests121/log /w/workspace/transportpce-tox-verify-transportpce-master/archives/tox/tests121 08:23:01 + for i in .tox/*/log 08:23:01 ++ echo .tox/tests190/log 08:23:01 ++ awk -F/ '{print $2}' 08:23:01 + tox_env=tests190 08:23:01 + cp -r .tox/tests190/log /w/workspace/transportpce-tox-verify-transportpce-master/archives/tox/tests190 08:23:01 + for i in .tox/*/log 08:23:01 ++ echo .tox/tests221/log 08:23:01 ++ awk -F/ '{print $2}' 08:23:01 + tox_env=tests221 08:23:01 + cp -r .tox/tests221/log /w/workspace/transportpce-tox-verify-transportpce-master/archives/tox/tests221 08:23:01 + for i in .tox/*/log 08:23:01 ++ echo .tox/tests71/log 08:23:01 ++ awk -F/ '{print $2}' 08:23:01 + tox_env=tests71 08:23:01 + cp -r .tox/tests71/log /w/workspace/transportpce-tox-verify-transportpce-master/archives/tox/tests71 08:23:01 + for i in .tox/*/log 08:23:01 ++ echo .tox/testsPCE/log 08:23:01 ++ awk -F/ '{print $2}' 08:23:01 + tox_env=testsPCE 08:23:01 + cp -r .tox/testsPCE/log 
/w/workspace/transportpce-tox-verify-transportpce-master/archives/tox/testsPCE 08:23:01 + for i in .tox/*/log 08:23:01 ++ echo .tox/tests_hybrid/log 08:23:01 ++ awk -F/ '{print $2}' 08:23:01 + tox_env=tests_hybrid 08:23:01 + cp -r .tox/tests_hybrid/log /w/workspace/transportpce-tox-verify-transportpce-master/archives/tox/tests_hybrid 08:23:01 + for i in .tox/*/log 08:23:01 ++ echo .tox/tests_tapi/log 08:23:01 ++ awk -F/ '{print $2}' 08:23:01 + tox_env=tests_tapi 08:23:01 + cp -r .tox/tests_tapi/log /w/workspace/transportpce-tox-verify-transportpce-master/archives/tox/tests_tapi 08:23:01 + DOC_DIR=docs/_build/html 08:23:01 + [[ -d docs/_build/html ]] 08:23:01 + echo '---> Archiving generated docs' 08:23:01 ---> Archiving generated docs 08:23:01 + mv docs/_build/html /w/workspace/transportpce-tox-verify-transportpce-master/archives/docs 08:23:01 + echo '---> tox-run.sh ends' 08:23:01 ---> tox-run.sh ends 08:23:01 + test 255 -eq 0 08:23:01 + exit 255 08:23:01 ++ '[' 1 = 1 ']' 08:23:01 ++ '[' -x /usr/bin/clear_console ']' 08:23:01 ++ /usr/bin/clear_console -q 08:23:01 Build step 'Execute shell' marked build as failure 08:23:01 $ ssh-agent -k 08:23:01 unset SSH_AUTH_SOCK; 08:23:01 unset SSH_AGENT_PID; 08:23:01 echo Agent pid 1563 killed; 08:23:01 [ssh-agent] Stopped. 08:23:02 [PostBuildScript] - [INFO] Executing post build scripts. 08:23:02 [transportpce-tox-verify-transportpce-master] $ /bin/bash /tmp/jenkins16529416433572447236.sh 08:23:02 ---> sysstat.sh 08:23:02 [transportpce-tox-verify-transportpce-master] $ /bin/bash /tmp/jenkins4641250078477946398.sh 08:23:02 ---> package-listing.sh 08:23:02 ++ facter osfamily 08:23:02 ++ tr '[:upper:]' '[:lower:]' 08:23:02 + OS_FAMILY=debian 08:23:02 + workspace=/w/workspace/transportpce-tox-verify-transportpce-master 08:23:02 + START_PACKAGES=/tmp/packages_start.txt 08:23:02 + END_PACKAGES=/tmp/packages_end.txt 08:23:02 + DIFF_PACKAGES=/tmp/packages_diff.txt 08:23:02 + PACKAGES=/tmp/packages_start.txt 08:23:02 + '[' /w/workspace/transportpce-tox-verify-transportpce-master ']' 08:23:02 + PACKAGES=/tmp/packages_end.txt 08:23:02 + case "${OS_FAMILY}" in 08:23:02 + dpkg -l 08:23:02 + grep '^ii' 08:23:02 + '[' -f /tmp/packages_start.txt ']' 08:23:02 + '[' -f /tmp/packages_end.txt ']' 08:23:02 + diff /tmp/packages_start.txt /tmp/packages_end.txt 08:23:03 + '[' /w/workspace/transportpce-tox-verify-transportpce-master ']' 08:23:03 + mkdir -p /w/workspace/transportpce-tox-verify-transportpce-master/archives/ 08:23:03 + cp -f /tmp/packages_diff.txt /tmp/packages_end.txt /tmp/packages_start.txt /w/workspace/transportpce-tox-verify-transportpce-master/archives/ 08:23:03 [transportpce-tox-verify-transportpce-master] $ /bin/bash /tmp/jenkins3383355767704411729.sh 08:23:03 ---> capture-instance-metadata.sh 08:23:03 Setup pyenv: 08:23:03 system 08:23:03 3.8.20 08:23:03 3.9.20 08:23:03 3.10.15 08:23:03 * 3.11.10 (set by /w/workspace/transportpce-tox-verify-transportpce-master/.python-version) 08:23:03 lf-activate-venv(): INFO: Reuse venv:/tmp/venv-M8wM from file:/tmp/.os_lf_venv 08:23:05 lf-activate-venv(): INFO: Installing: lftools 08:23:18 lf-activate-venv(): INFO: Adding /tmp/venv-M8wM/bin to PATH 08:23:18 INFO: Running in OpenStack, capturing instance metadata 08:23:18 [transportpce-tox-verify-transportpce-master] $ /bin/bash /tmp/jenkins15097587120390130155.sh 08:23:18 provisioning config files... 
08:23:18 Could not find credentials [logs] for transportpce-tox-verify-transportpce-master #3309 08:23:18 copy managed file [jenkins-log-archives-settings] to file:/w/workspace/transportpce-tox-verify-transportpce-master@tmp/config4547116086102741244tmp 08:23:18 Regular expression run condition: Expression=[^.*logs-s3.*], Label=[odl-logs-s3-cloudfront-index] 08:23:18 Run condition [Regular expression match] enabling perform for step [Provide Configuration files] 08:23:18 provisioning config files... 08:23:19 copy managed file [jenkins-s3-log-ship] to file:/home/jenkins/.aws/credentials 08:23:19 [EnvInject] - Injecting environment variables from a build step. 08:23:19 [EnvInject] - Injecting as environment variables the properties content 08:23:19 SERVER_ID=logs 08:23:19 08:23:19 [EnvInject] - Variables injected successfully. 08:23:19 [transportpce-tox-verify-transportpce-master] $ /bin/bash /tmp/jenkins6077207236769323293.sh 08:23:19 ---> create-netrc.sh 08:23:19 WARN: Log server credential not found. 08:23:19 [transportpce-tox-verify-transportpce-master] $ /bin/bash /tmp/jenkins11721762206462107370.sh 08:23:19 ---> python-tools-install.sh 08:23:19 Setup pyenv: 08:23:19 system 08:23:19 3.8.20 08:23:19 3.9.20 08:23:19 3.10.15 08:23:19 * 3.11.10 (set by /w/workspace/transportpce-tox-verify-transportpce-master/.python-version) 08:23:19 lf-activate-venv(): INFO: Reuse venv:/tmp/venv-M8wM from file:/tmp/.os_lf_venv 08:23:21 lf-activate-venv(): INFO: Installing: lftools 08:23:34 lf-activate-venv(): INFO: Adding /tmp/venv-M8wM/bin to PATH 08:23:34 [transportpce-tox-verify-transportpce-master] $ /bin/bash /tmp/jenkins7478383275022810980.sh 08:23:34 ---> sudo-logs.sh 08:23:34 Archiving 'sudo' log.. 08:23:34 [transportpce-tox-verify-transportpce-master] $ /bin/bash /tmp/jenkins3817572022748990436.sh 08:23:34 ---> job-cost.sh 08:23:34 Setup pyenv: 08:23:34 system 08:23:34 3.8.20 08:23:34 3.9.20 08:23:34 3.10.15 08:23:34 * 3.11.10 (set by /w/workspace/transportpce-tox-verify-transportpce-master/.python-version) 08:23:34 lf-activate-venv(): INFO: Reuse venv:/tmp/venv-M8wM from file:/tmp/.os_lf_venv 08:23:36 lf-activate-venv(): INFO: Installing: zipp==1.1.0 python-openstackclient urllib3~=1.26.15 08:23:43 lf-activate-venv(): INFO: Adding /tmp/venv-M8wM/bin to PATH 08:23:43 INFO: No Stack... 
08:23:44 INFO: Retrieving Pricing Info for: v3-standard-4 08:23:44 INFO: Archiving Costs 08:23:44 [transportpce-tox-verify-transportpce-master] $ /bin/bash -l /tmp/jenkins12150877048089490557.sh 08:23:44 ---> logs-deploy.sh 08:23:44 Setup pyenv: 08:23:44 system 08:23:44 3.8.20 08:23:44 3.9.20 08:23:44 3.10.15 08:23:44 * 3.11.10 (set by /w/workspace/transportpce-tox-verify-transportpce-master/.python-version) 08:23:44 lf-activate-venv(): INFO: Reuse venv:/tmp/venv-M8wM from file:/tmp/.os_lf_venv 08:23:46 lf-activate-venv(): INFO: Installing: lftools 08:23:56 lf-activate-venv(): INFO: Adding /tmp/venv-M8wM/bin to PATH 08:23:56 WARNING: Nexus logging server not set 08:23:56 INFO: S3 path logs/releng/vex-yul-odl-jenkins-1/transportpce-tox-verify-transportpce-master/3309/ 08:23:56 INFO: archiving logs to S3 08:23:58 ---> uname -a: 08:23:58 Linux prd-ubuntu2204-docker-4c-16g-38734 5.15.0-131-generic #141-Ubuntu SMP Fri Jan 10 21:18:28 UTC 2025 x86_64 x86_64 x86_64 GNU/Linux 08:23:58 08:23:58 08:23:58 ---> lscpu: 08:23:58 Architecture: x86_64 08:23:58 CPU op-mode(s): 32-bit, 64-bit 08:23:58 Address sizes: 40 bits physical, 48 bits virtual 08:23:58 Byte Order: Little Endian 08:23:58 CPU(s): 4 08:23:58 On-line CPU(s) list: 0-3 08:23:58 Vendor ID: AuthenticAMD 08:23:58 Model name: AMD EPYC-Rome Processor 08:23:58 CPU family: 23 08:23:58 Model: 49 08:23:58 Thread(s) per core: 1 08:23:58 Core(s) per socket: 1 08:23:58 Socket(s): 4 08:23:58 Stepping: 0 08:23:58 BogoMIPS: 5599.99 08:23:58 Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm rep_good nopl cpuid extd_apicid tsc_known_freq pni pclmulqdq ssse3 fma cx16 sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm cmp_legacy svm cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw topoext perfctr_core ssbd ibrs ibpb stibp vmmcall fsgsbase tsc_adjust bmi1 avx2 smep bmi2 rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 clzero xsaveerptr wbnoinvd arat npt nrip_save umip rdpid arch_capabilities 08:23:58 Virtualization: AMD-V 08:23:58 Hypervisor vendor: KVM 08:23:58 Virtualization type: full 08:23:58 L1d cache: 128 KiB (4 instances) 08:23:58 L1i cache: 128 KiB (4 instances) 08:23:58 L2 cache: 2 MiB (4 instances) 08:23:58 L3 cache: 64 MiB (4 instances) 08:23:58 NUMA node(s): 1 08:23:58 NUMA node0 CPU(s): 0-3 08:23:58 Vulnerability Gather data sampling: Not affected 08:23:58 Vulnerability Itlb multihit: Not affected 08:23:58 Vulnerability L1tf: Not affected 08:23:58 Vulnerability Mds: Not affected 08:23:58 Vulnerability Meltdown: Not affected 08:23:58 Vulnerability Mmio stale data: Not affected 08:23:58 Vulnerability Reg file data sampling: Not affected 08:23:58 Vulnerability Retbleed: Mitigation; untrained return thunk; SMT disabled 08:23:58 Vulnerability Spec rstack overflow: Mitigation; SMT disabled 08:23:58 Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp 08:23:58 Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization 08:23:58 Vulnerability Spectre v2: Mitigation; Retpolines; IBPB conditional; STIBP disabled; RSB filling; PBRSB-eIBRS Not affected; BHI Not affected 08:23:58 Vulnerability Srbds: Not affected 08:23:58 Vulnerability Tsx async abort: Not affected 08:23:58 08:23:58 08:23:58 ---> nproc: 08:23:58 4 08:23:58 08:23:58 08:23:58 ---> df -h: 08:23:58 Filesystem Size Used Avail Use% Mounted on 08:23:58 tmpfs 
1.6G 1.1M 1.6G 1% /run 08:23:58 /dev/vda1 78G 18G 60G 23% / 08:23:58 tmpfs 7.9G 0 7.9G 0% /dev/shm 08:23:58 tmpfs 5.0M 0 5.0M 0% /run/lock 08:23:58 /dev/vda15 105M 6.1M 99M 6% /boot/efi 08:23:58 tmpfs 1.6G 4.0K 1.6G 1% /run/user/1001 08:23:58 08:23:58 08:23:58 ---> free -m: 08:23:58 total used free shared buff/cache available 08:23:58 Mem: 15989 2274 5024 3 8690 13371 08:23:58 Swap: 1023 0 1023 08:23:58 08:23:58 08:23:58 ---> ip addr: 08:23:58 1: lo: mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000 08:23:58 link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 08:23:58 inet 127.0.0.1/8 scope host lo 08:23:58 valid_lft forever preferred_lft forever 08:23:58 inet6 ::1/128 scope host 08:23:58 valid_lft forever preferred_lft forever 08:23:58 2: ens3: mtu 1458 qdisc mq state UP group default qlen 1000 08:23:58 link/ether fa:16:3e:ae:65:c8 brd ff:ff:ff:ff:ff:ff 08:23:58 altname enp0s3 08:23:58 inet 10.30.170.203/23 metric 100 brd 10.30.171.255 scope global dynamic ens3 08:23:58 valid_lft 78434sec preferred_lft 78434sec 08:23:58 inet6 fe80::f816:3eff:feae:65c8/64 scope link 08:23:58 valid_lft forever preferred_lft forever 08:23:58 3: docker0: mtu 1458 qdisc noqueue state DOWN group default 08:23:58 link/ether 02:42:22:98:20:36 brd ff:ff:ff:ff:ff:ff 08:23:58 inet 10.250.0.254/24 brd 10.250.0.255 scope global docker0 08:23:58 valid_lft forever preferred_lft forever 08:23:58 08:23:58 08:23:58 ---> sar -b -r -n DEV: 08:23:58 Linux 5.15.0-131-generic (prd-ubuntu2204-docker-4c-16g-38734) 07/01/25 _x86_64_ (4 CPU) 08:23:58 08:23:58 06:11:15 LINUX RESTART (4 CPU) 08:23:58 08:23:58 06:20:12 tps rtps wtps dtps bread/s bwrtn/s bdscd/s 08:23:58 06:30:33 9.00 1.37 7.23 0.40 79.11 1106.62 926.66 08:23:58 06:40:42 8.21 0.50 7.37 0.33 9.75 219.09 1744.28 08:23:58 06:50:20 35.23 20.26 14.13 0.85 174.86 1360.32 2199.92 08:23:58 07:00:40 31.48 0.09 16.02 15.37 1.29 568.07 210440.47 08:23:58 07:10:49 14.43 0.00 13.77 0.66 0.00 246.21 724.93 08:23:58 07:20:42 6.02 0.00 5.79 0.23 0.00 106.16 230.35 08:23:58 07:30:49 5.15 0.00 4.92 0.23 0.00 95.69 472.29 08:23:58 07:40:38 4.97 0.00 4.78 0.19 0.01 116.71 216.87 08:23:58 07:50:31 36.00 18.94 16.25 0.82 159.08 1312.60 672.59 08:23:58 08:00:10 7.38 0.01 7.07 0.30 1.09 159.20 302.64 08:23:58 08:10:49 11.17 0.00 10.67 0.50 0.04 1130.66 657.71 08:23:58 08:20:45 6.12 0.01 5.93 0.17 0.43 141.54 352.04 08:23:58 Average: 14.55 3.34 9.51 1.70 34.87 549.73 18727.41 08:23:58 08:23:58 06:20:12 kbmemfree kbavail kbmemused %memused kbbuffers kbcached kbcommit %commit kbactive kbinact kbdirty 08:23:58 06:30:33 7269768 13721024 2206464 13.48 224968 6071692 2931412 16.83 2153148 6231760 248 08:23:58 06:40:42 2925588 9385484 6540348 39.95 226520 6078832 7230864 41.50 2199948 10531464 60 08:23:58 06:50:20 4625196 11412776 4513920 27.57 235676 6389868 5282368 30.32 2365648 8668084 140 08:23:58 07:00:40 6857748 13734832 2191864 13.39 242956 6466792 2939704 16.87 2394668 6405396 112 08:23:58 07:10:49 2886144 9768412 6156556 37.60 245452 6469432 6870960 39.44 2397536 10365448 288 08:23:58 07:20:42 3018260 9904592 6020392 36.77 246388 6472560 6803560 39.05 2398512 10224940 52 08:23:58 07:30:49 1373788 8263512 7660592 46.79 247048 6475256 8303016 47.66 2399252 11861192 120 08:23:58 07:40:38 1222104 8114276 7809776 47.70 247764 6476992 8513564 48.87 2399996 12014672 252 08:23:58 07:50:31 6508468 13732664 2192880 13.39 256540 6790300 2932276 16.83 2545508 6594360 132 08:23:58 08:00:10 2511668 9740416 6183744 37.77 257552 6793828 6870068 39.43 2548288 10553812 164 08:23:58 08:10:49 
2297804 9806176 6117220 37.36 265324 7056372 6852208 39.33 2642824 10677256 88 08:23:58 08:20:45 1519812 9031132 6891740 42.09 265880 7058688 7591948 43.58 2655516 11439300 136 08:23:58 Average: 3584696 10551275 5373791 32.82 246839 6550051 6093496 34.98 2425070 9630640 149 08:23:58 08:23:58 06:20:12 IFACE rxpck/s txpck/s rxkB/s txkB/s rxcmp/s txcmp/s rxmcst/s %ifutil 08:23:58 06:30:33 lo 7.37 7.37 5.06 5.06 0.00 0.00 0.00 0.00 08:23:58 06:30:33 ens3 0.57 0.42 0.17 0.13 0.00 0.00 0.00 0.00 08:23:58 06:30:33 docker0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 08:23:58 06:40:42 lo 14.76 14.76 8.01 8.01 0.00 0.00 0.00 0.00 08:23:58 06:40:42 ens3 0.71 0.50 0.15 0.12 0.00 0.00 0.00 0.00 08:23:58 06:40:42 docker0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 08:23:58 06:50:20 lo 9.19 9.19 5.81 5.81 0.00 0.00 0.00 0.00 08:23:58 06:50:20 ens3 0.80 0.68 0.21 0.17 0.00 0.00 0.00 0.00 08:23:58 06:50:20 docker0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 08:23:58 07:00:40 lo 15.33 15.33 7.32 7.32 0.00 0.00 0.00 0.00 08:23:58 07:00:40 ens3 1.07 0.97 0.26 0.23 0.00 0.00 0.00 0.00 08:23:58 07:00:40 docker0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 08:23:58 07:10:49 lo 20.74 20.74 10.92 10.92 0.00 0.00 0.00 0.00 08:23:58 07:10:49 ens3 0.89 0.70 0.19 0.16 0.00 0.00 0.00 0.00 08:23:58 07:10:49 docker0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 08:23:58 07:20:42 lo 23.21 23.21 8.81 8.81 0.00 0.00 0.00 0.00 08:23:58 07:20:42 ens3 0.72 0.52 0.18 0.14 0.00 0.00 0.00 0.00 08:23:58 07:20:42 docker0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 08:23:58 07:30:49 lo 25.71 25.71 11.33 11.33 0.00 0.00 0.00 0.00 08:23:58 07:30:49 ens3 0.59 0.46 0.12 0.09 0.00 0.00 0.00 0.00 08:23:58 07:30:49 docker0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 08:23:58 07:40:38 lo 18.19 18.19 10.40 10.40 0.00 0.00 0.00 0.00 08:23:58 07:40:38 ens3 0.71 0.52 0.16 0.12 0.00 0.00 0.00 0.00 08:23:58 07:40:38 docker0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 08:23:58 07:50:31 lo 13.04 13.04 6.93 6.93 0.00 0.00 0.00 0.00 08:23:58 07:50:31 ens3 1.05 0.92 0.26 0.22 0.00 0.00 0.00 0.00 08:23:58 07:50:31 docker0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 08:23:58 08:00:10 lo 22.86 22.86 8.19 8.19 0.00 0.00 0.00 0.00 08:23:58 08:00:10 ens3 0.60 0.46 0.11 0.09 0.00 0.00 0.00 0.00 08:23:58 08:00:10 docker0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 08:23:58 08:10:49 lo 15.20 15.20 8.05 8.05 0.00 0.00 0.00 0.00 08:23:58 08:10:49 ens3 0.83 0.62 0.24 0.19 0.00 0.00 0.00 0.00 08:23:58 08:10:49 docker0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 08:23:58 08:20:45 lo 20.09 20.09 8.18 8.18 0.00 0.00 0.00 0.00 08:23:58 08:20:45 ens3 1.02 0.61 0.29 0.21 0.00 0.00 0.00 0.00 08:23:58 08:20:45 docker0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 08:23:58 Average: lo 17.11 17.11 8.25 8.25 0.00 0.00 0.00 0.00 08:23:58 Average: ens3 0.80 0.61 0.19 0.16 0.00 0.00 0.00 0.00 08:23:58 Average: docker0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 08:23:58 08:23:58 08:23:58 ---> sar -P ALL: 08:23:58 Linux 5.15.0-131-generic (prd-ubuntu2204-docker-4c-16g-38734) 07/01/25 _x86_64_ (4 CPU) 08:23:58 08:23:58 06:11:15 LINUX RESTART (4 CPU) 08:23:58 08:23:58 06:20:12 CPU %user %nice %system %iowait %steal %idle 08:23:58 06:30:33 all 9.38 0.00 0.71 0.07 0.07 89.77 08:23:58 06:30:33 0 9.92 0.00 0.69 0.07 0.07 89.25 08:23:58 06:30:33 1 8.10 0.00 0.74 0.04 0.08 91.04 08:23:58 06:30:33 2 9.90 0.00 0.65 0.11 0.07 89.27 08:23:58 06:30:33 3 9.61 0.00 0.75 0.05 0.07 89.51 08:23:58 06:40:42 all 14.12 0.00 0.59 0.04 0.07 85.17 08:23:58 06:40:42 0 14.56 0.00 0.53 0.07 0.07 84.76 08:23:58 06:40:42 1 14.30 0.00 0.59 0.05 0.08 84.99 08:23:58 
06:40:42 2 14.56 0.00 0.61 0.01 0.08 84.74 08:23:58 06:40:42 3 13.06 0.00 0.62 0.03 0.07 86.21 08:23:58 06:50:20 all 18.65 0.00 0.78 0.13 0.07 80.37 08:23:58 06:50:20 0 19.01 0.00 0.81 0.17 0.07 79.94 08:23:58 06:50:20 1 19.12 0.00 0.72 0.18 0.07 79.91 08:23:58 06:50:20 2 17.64 0.00 0.74 0.11 0.07 81.44 08:23:58 06:50:20 3 18.82 0.00 0.85 0.08 0.07 80.18 08:23:58 07:00:40 all 23.96 0.00 0.96 0.14 0.08 74.87 08:23:58 07:00:40 0 23.80 0.00 0.87 0.11 0.08 75.14 08:23:58 07:00:40 1 23.63 0.00 1.05 0.17 0.08 75.07 08:23:58 07:00:40 2 23.52 0.00 0.96 0.24 0.08 75.20 08:23:58 07:00:40 3 24.87 0.00 0.96 0.02 0.08 74.06 08:23:58 07:10:49 all 23.63 0.00 0.91 0.07 0.08 75.32 08:23:58 07:10:49 0 24.56 0.00 0.84 0.03 0.08 74.49 08:23:58 07:10:49 1 23.16 0.00 1.03 0.04 0.08 75.68 08:23:58 07:10:49 2 22.48 0.00 0.84 0.11 0.08 76.49 08:23:58 07:10:49 3 24.31 0.00 0.93 0.08 0.08 74.60 08:23:58 07:20:42 all 7.78 0.00 0.42 0.03 0.07 91.70 08:23:58 07:20:42 0 8.28 0.00 0.47 0.03 0.07 91.15 08:23:58 07:20:42 1 7.51 0.00 0.41 0.02 0.07 92.00 08:23:58 07:20:42 2 7.70 0.00 0.44 0.02 0.07 91.77 08:23:58 07:20:42 3 7.62 0.00 0.38 0.06 0.07 91.88 08:23:58 07:30:49 all 10.08 0.00 0.56 0.03 0.07 89.25 08:23:58 07:30:49 0 10.02 0.00 0.54 0.04 0.07 89.33 08:23:58 07:30:49 1 10.25 0.00 0.59 0.04 0.07 89.05 08:23:58 07:30:49 2 10.10 0.00 0.54 0.00 0.08 89.28 08:23:58 07:30:49 3 9.96 0.00 0.58 0.04 0.07 89.35 08:23:58 07:40:38 all 9.59 0.00 0.62 0.03 0.07 89.69 08:23:58 07:40:38 0 9.78 0.00 0.61 0.09 0.07 89.45 08:23:58 07:40:38 1 9.25 0.00 0.65 0.01 0.07 90.02 08:23:58 07:40:38 2 9.28 0.00 0.57 0.01 0.07 90.07 08:23:58 07:40:38 3 10.04 0.00 0.64 0.03 0.07 89.22 08:23:58 07:50:31 all 23.40 0.00 0.96 0.12 0.08 75.43 08:23:58 07:50:31 0 23.13 0.00 0.94 0.12 0.08 75.74 08:23:58 07:50:31 1 23.18 0.00 0.95 0.04 0.08 75.74 08:23:58 07:50:31 2 24.09 0.00 0.89 0.07 0.08 74.88 08:23:58 07:50:31 3 23.22 0.00 1.08 0.24 0.08 75.37 08:23:58 08:00:10 all 13.06 0.00 0.61 0.06 0.07 86.20 08:23:58 08:00:10 0 12.84 0.00 0.60 0.04 0.07 86.45 08:23:58 08:00:10 1 12.88 0.00 0.62 0.10 0.07 86.33 08:23:58 08:00:10 2 13.47 0.00 0.62 0.08 0.08 85.75 08:23:58 08:00:10 3 13.05 0.00 0.59 0.02 0.07 86.27 08:23:58 08:10:49 all 12.71 0.00 0.60 0.06 0.07 86.55 08:23:58 08:10:49 0 12.38 0.00 0.71 0.06 0.06 86.79 08:23:58 08:10:49 1 12.87 0.00 0.55 0.03 0.07 86.48 08:23:58 08:10:49 2 11.83 0.00 0.59 0.10 0.07 87.41 08:23:58 08:10:49 3 13.78 0.00 0.56 0.05 0.07 85.54 08:23:58 08:23:58 08:10:49 CPU %user %nice %system %iowait %steal %idle 08:23:58 08:20:45 all 8.81 0.00 0.54 0.04 0.07 90.55 08:23:58 08:20:45 0 8.74 0.00 0.52 0.10 0.07 90.58 08:23:58 08:20:45 1 9.12 0.00 0.53 0.01 0.08 90.26 08:23:58 08:20:45 2 8.80 0.00 0.57 0.01 0.07 90.55 08:23:58 08:20:45 3 8.56 0.00 0.54 0.03 0.07 90.80 08:23:58 Average: all 14.60 0.00 0.69 0.07 0.07 84.57 08:23:58 Average: 0 14.76 0.00 0.68 0.08 0.07 84.42 08:23:58 Average: 1 14.45 0.00 0.70 0.06 0.08 84.71 08:23:58 Average: 2 14.45 0.00 0.67 0.07 0.08 84.73 08:23:58 Average: 3 14.75 0.00 0.71 0.06 0.07 84.41 08:23:58 08:23:58 08:23:58