17:45:33 Triggered by Gerrit: https://git.opendaylight.org/gerrit/c/transportpce/+/113937
17:45:33 Running as SYSTEM
17:45:33 [EnvInject] - Loading node environment variables.
17:45:33 Building remotely on prd-ubuntu2004-docker-4c-16g-43174 (ubuntu2004-docker-4c-16g) in workspace /w/workspace/transportpce-tox-verify-transportpce-master
17:45:34 [ssh-agent] Looking for ssh-agent implementation...
17:45:34 [ssh-agent] Exec ssh-agent (binary ssh-agent on a remote machine)
17:45:34 $ ssh-agent
17:45:34 SSH_AUTH_SOCK=/tmp/ssh-7iVjufiEbOsf/agent.12355
17:45:34 SSH_AGENT_PID=12357
17:45:34 [ssh-agent] Started.
17:45:34 Running ssh-add (command line suppressed)
17:45:34 Identity added: /w/workspace/transportpce-tox-verify-transportpce-master@tmp/private_key_8477390528873627628.key (/w/workspace/transportpce-tox-verify-transportpce-master@tmp/private_key_8477390528873627628.key)
17:45:34 [ssh-agent] Using credentials jenkins (jenkins-ssh)
17:45:34 The recommended git tool is: NONE
17:45:36 using credential jenkins-ssh
17:45:36 Wiping out workspace first.
17:45:36 Cloning the remote Git repository
17:45:36 Cloning repository git://devvexx.opendaylight.org/mirror/transportpce
17:45:36 > git init /w/workspace/transportpce-tox-verify-transportpce-master # timeout=10
17:45:36 Fetching upstream changes from git://devvexx.opendaylight.org/mirror/transportpce
17:45:36 > git --version # timeout=10
17:45:36 > git --version # 'git version 2.25.1'
17:45:36 using GIT_SSH to set credentials jenkins-ssh
17:45:36 Verifying host key using known hosts file
17:45:36 You're using 'Known hosts file' strategy to verify ssh host keys, but your known_hosts file does not exist, please go to 'Manage Jenkins' -> 'Security' -> 'Git Host Key Verification Configuration' and configure host key verification.
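The fetch that follows pulls Gerrit change 113937, patch set 8, as a change ref. Gerrit shards change refs under the last two digits of the change number, which is why the ref reads refs/changes/37/113937/8. A minimal sketch of that layout (the gerrit_ref helper is hypothetical, not part of the job):

```shell
# Hypothetical helper showing Gerrit's change-ref layout:
# refs/changes/<last two digits of change>/<change number>/<patch set>
gerrit_ref() {
  change="$1"; patchset="$2"
  shard="${change#"${change%??}"}"  # last two characters, e.g. 37 for 113937
  printf 'refs/changes/%s/%s/%s\n' "$shard" "$change" "$patchset"
}

gerrit_ref 113937 8
```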
17:45:37 > git fetch --tags --force --progress -- git://devvexx.opendaylight.org/mirror/transportpce +refs/heads/*:refs/remotes/origin/* # timeout=10
17:45:39 > git config remote.origin.url git://devvexx.opendaylight.org/mirror/transportpce # timeout=10
17:45:39 > git config --add remote.origin.fetch +refs/heads/*:refs/remotes/origin/* # timeout=10
17:45:40 > git config remote.origin.url git://devvexx.opendaylight.org/mirror/transportpce # timeout=10
17:45:40 Fetching upstream changes from git://devvexx.opendaylight.org/mirror/transportpce
17:45:40 using GIT_SSH to set credentials jenkins-ssh
17:45:40 Verifying host key using known hosts file
17:45:40 You're using 'Known hosts file' strategy to verify ssh host keys, but your known_hosts file does not exist, please go to 'Manage Jenkins' -> 'Security' -> 'Git Host Key Verification Configuration' and configure host key verification.
17:45:40 > git fetch --tags --force --progress -- git://devvexx.opendaylight.org/mirror/transportpce refs/changes/37/113937/8 # timeout=10
17:45:40 > git rev-parse 8ae400c60a03aa6340efe0fce7091d53d9d4ef1b^{commit} # timeout=10
17:45:40 JENKINS-19022: warning: possible memory leak due to Git plugin usage; see: https://plugins.jenkins.io/git/#remove-git-plugin-buildsbybranch-builddata-script
17:45:40 Checking out Revision 8ae400c60a03aa6340efe0fce7091d53d9d4ef1b (refs/changes/37/113937/8)
17:45:40 > git config core.sparsecheckout # timeout=10
17:45:40 > git checkout -f 8ae400c60a03aa6340efe0fce7091d53d9d4ef1b # timeout=10
17:45:40 Commit message: "Add Func Test for Topology extension"
17:45:40 > git rev-parse FETCH_HEAD^{commit} # timeout=10
17:45:41 > git rev-list --no-walk 6cd2c3f7ccabd32092540720d8a4e05a634c84ea # timeout=10
17:45:41 > git remote # timeout=10
17:45:41 > git submodule init # timeout=10
17:45:41 > git submodule sync # timeout=10
17:45:41 > git config --get remote.origin.url # timeout=10
17:45:41 > git submodule init # timeout=10
17:45:41 > git config -f .gitmodules --get-regexp ^submodule\.(.+)\.url # timeout=10
17:45:41 ERROR: No submodules found.
17:45:44 provisioning config files...
17:45:44 copy managed file [npmrc] to file:/home/jenkins/.npmrc
17:45:44 copy managed file [pipconf] to file:/home/jenkins/.config/pip/pip.conf
17:45:44 [transportpce-tox-verify-transportpce-master] $ /bin/bash /tmp/jenkins2500088001157820362.sh
17:45:44 ---> python-tools-install.sh
17:45:44 Setup pyenv:
17:45:44 * system (set by /opt/pyenv/version)
17:45:44 * 3.8.13 (set by /opt/pyenv/version)
17:45:44 * 3.9.13 (set by /opt/pyenv/version)
17:45:44 * 3.10.13 (set by /opt/pyenv/version)
17:45:44 * 3.11.7 (set by /opt/pyenv/version)
17:45:49 lf-activate-venv(): INFO: Creating python3 venv at /tmp/venv-lNH7
17:45:49 lf-activate-venv(): INFO: Save venv in file: /tmp/.os_lf_venv
17:45:52 lf-activate-venv(): INFO: Installing: lftools
17:46:21 lf-activate-venv(): INFO: Adding /tmp/venv-lNH7/bin to PATH
17:46:21 Generating Requirements File
17:46:40 Python 3.11.7
17:46:41 pip 24.2 from /tmp/venv-lNH7/lib/python3.11/site-packages/pip (python 3.11)
17:46:41 appdirs==1.4.4
17:46:41 argcomplete==3.5.1
17:46:41 aspy.yaml==1.3.0
17:46:41 attrs==24.2.0
17:46:41 autopage==0.5.2
17:46:41 beautifulsoup4==4.12.3
17:46:41 boto3==1.35.41
17:46:41 botocore==1.35.41
17:46:41 bs4==0.0.2
17:46:41 cachetools==5.5.0
17:46:41 certifi==2024.8.30
17:46:41 cffi==1.17.1
17:46:41 cfgv==3.4.0
17:46:41 chardet==5.2.0
17:46:41 charset-normalizer==3.4.0
17:46:41 click==8.1.7
17:46:41 cliff==4.7.0
17:46:41 cmd2==2.4.3
17:46:41 cryptography==3.3.2
17:46:41 debtcollector==3.0.0
17:46:41 decorator==5.1.1
17:46:41 defusedxml==0.7.1
17:46:41 Deprecated==1.2.14
17:46:41 distlib==0.3.9
17:46:41 dnspython==2.7.0
17:46:41 docker==4.2.2
17:46:41 dogpile.cache==1.3.3
17:46:41 durationpy==0.9
17:46:41 email_validator==2.2.0
17:46:41 filelock==3.16.1
17:46:41 future==1.0.0
17:46:41 gitdb==4.0.11
17:46:41 GitPython==3.1.43
17:46:41 google-auth==2.35.0
17:46:41 httplib2==0.22.0
17:46:41 identify==2.6.1
17:46:41 idna==3.10
17:46:41 importlib-resources==1.5.0
17:46:41 iso8601==2.1.0
17:46:41 Jinja2==3.1.4
17:46:41 jmespath==1.0.1
17:46:41 jsonpatch==1.33
17:46:41 jsonpointer==3.0.0
17:46:41 jsonschema==4.23.0
17:46:41 jsonschema-specifications==2024.10.1
17:46:41 keystoneauth1==5.8.0
17:46:41 kubernetes==31.0.0
17:46:41 lftools==0.37.10
17:46:41 lxml==5.3.0
17:46:41 MarkupSafe==3.0.1
17:46:41 msgpack==1.1.0
17:46:41 multi_key_dict==2.0.3
17:46:41 munch==4.0.0
17:46:41 netaddr==1.3.0
17:46:41 netifaces==0.11.0
17:46:41 niet==1.4.2
17:46:41 nodeenv==1.9.1
17:46:41 oauth2client==4.1.3
17:46:41 oauthlib==3.2.2
17:46:41 openstacksdk==4.0.0
17:46:41 os-client-config==2.1.0
17:46:41 os-service-types==1.7.0
17:46:41 osc-lib==3.1.0
17:46:41 oslo.config==9.6.0
17:46:41 oslo.context==5.6.0
17:46:41 oslo.i18n==6.4.0
17:46:41 oslo.log==6.1.2
17:46:41 oslo.serialization==5.5.0
17:46:41 oslo.utils==7.3.0
17:46:41 packaging==24.1
17:46:41 pbr==6.1.0
17:46:41 platformdirs==4.3.6
17:46:41 prettytable==3.11.0
17:46:41 pyasn1==0.6.1
17:46:41 pyasn1_modules==0.4.1
17:46:41 pycparser==2.22
17:46:41 pygerrit2==2.0.15
17:46:41 PyGithub==2.4.0
17:46:41 PyJWT==2.9.0
17:46:41 PyNaCl==1.5.0
17:46:41 pyparsing==2.4.7
17:46:41 pyperclip==1.9.0
17:46:41 pyrsistent==0.20.0
17:46:41 python-cinderclient==9.6.0
17:46:41 python-dateutil==2.9.0.post0
17:46:41 python-heatclient==4.0.0
17:46:41 python-jenkins==1.8.2
17:46:41 python-keystoneclient==5.5.0
17:46:41 python-magnumclient==4.7.0
17:46:41 python-openstackclient==7.1.3
17:46:41 python-swiftclient==4.6.0
17:46:41 PyYAML==6.0.2
17:46:41 referencing==0.35.1
17:46:41 requests==2.32.3
17:46:41 requests-oauthlib==2.0.0
17:46:41 requestsexceptions==1.4.0
17:46:41 rfc3986==2.0.0
17:46:41 rpds-py==0.20.0
17:46:41 rsa==4.9
17:46:41 ruamel.yaml==0.18.6
17:46:41 ruamel.yaml.clib==0.2.8
17:46:41 s3transfer==0.10.3
17:46:41 simplejson==3.19.3
17:46:41 six==1.16.0
17:46:41 smmap==5.0.1
17:46:41 soupsieve==2.6
17:46:41 stevedore==5.3.0
17:46:41 tabulate==0.9.0
17:46:41 toml==0.10.2
17:46:41 tomlkit==0.13.2
17:46:41 tqdm==4.66.5
17:46:41 typing_extensions==4.12.2
17:46:41 tzdata==2024.2
17:46:41 urllib3==1.26.20
17:46:41 virtualenv==20.26.6
17:46:41 wcwidth==0.2.13
17:46:41 websocket-client==1.8.0
17:46:41 wrapt==1.16.0
17:46:41 xdg==6.0.0
17:46:41 xmltodict==0.14.2
17:46:41 yq==3.4.3
17:46:41 [EnvInject] - Injecting environment variables from a build step.
17:46:41 [EnvInject] - Injecting as environment variables the properties content
17:46:41 PYTHON=python3
17:46:41
17:46:41 [EnvInject] - Variables injected successfully.
17:46:41 [transportpce-tox-verify-transportpce-master] $ /bin/bash -l /tmp/jenkins12897823211719285706.sh
17:46:41 ---> tox-install.sh
17:46:41 + source /home/jenkins/lf-env.sh
17:46:41 + lf-activate-venv --venv-file /tmp/.toxenv tox virtualenv urllib3~=1.26.15
17:46:41 ++ mktemp -d /tmp/venv-XXXX
17:46:41 + lf_venv=/tmp/venv-eelT
17:46:41 + local venv_file=/tmp/.os_lf_venv
17:46:41 + local python=python3
17:46:41 + local options
17:46:41 + local set_path=true
17:46:41 + local install_args=
17:46:41 ++ getopt -o np:v: -l no-path,system-site-packages,python:,venv-file: -n lf-activate-venv -- --venv-file /tmp/.toxenv tox virtualenv urllib3~=1.26.15
17:46:41 + options=' --venv-file '\''/tmp/.toxenv'\'' -- '\''tox'\'' '\''virtualenv'\'' '\''urllib3~=1.26.15'\'''
17:46:41 + eval set -- ' --venv-file '\''/tmp/.toxenv'\'' -- '\''tox'\'' '\''virtualenv'\'' '\''urllib3~=1.26.15'\'''
17:46:41 ++ set -- --venv-file /tmp/.toxenv -- tox virtualenv urllib3~=1.26.15
17:46:41 + true
17:46:41 + case $1 in
17:46:41 + venv_file=/tmp/.toxenv
17:46:41 + shift 2
17:46:41 + true
17:46:41 + case $1 in
17:46:41 + shift
17:46:41 + break
17:46:41 + case $python in
17:46:41 + local pkg_list=
17:46:41 + [[ -d /opt/pyenv ]]
17:46:41 + echo 'Setup pyenv:'
17:46:41 Setup pyenv:
17:46:41 + export PYENV_ROOT=/opt/pyenv
17:46:41 + PYENV_ROOT=/opt/pyenv
17:46:41 + export PATH=/opt/pyenv/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/opt/puppetlabs/bin
17:46:41 + PATH=/opt/pyenv/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/opt/puppetlabs/bin
17:46:41 + pyenv versions
17:46:41 system
17:46:41 3.8.13
17:46:41 3.9.13
17:46:41 3.10.13
17:46:41 * 3.11.7 (set by /w/workspace/transportpce-tox-verify-transportpce-master/.python-version)
17:46:41 + command -v pyenv
17:46:41 ++ pyenv init - --no-rehash
17:46:41 + eval 'PATH="$(bash --norc -ec '\''IFS=:; paths=($PATH);
17:46:41 for i in ${!paths[@]}; do
17:46:41 if [[ ${paths[i]} == "'\'''\''/opt/pyenv/shims'\'''\''" ]]; then unset '\''\'\'''\''paths[i]'\''\'\'''\'';
17:46:41 fi; done;
17:46:41 echo "${paths[*]}"'\'')"
17:46:41 export PATH="/opt/pyenv/shims:${PATH}"
17:46:41 export PYENV_SHELL=bash
17:46:41 source '\''/opt/pyenv/libexec/../completions/pyenv.bash'\''
17:46:41 pyenv() {
17:46:41 local command
17:46:41 command="${1:-}"
17:46:41 if [ "$#" -gt 0 ]; then
17:46:41 shift
17:46:41 fi
17:46:41
17:46:41 case "$command" in
17:46:41 rehash|shell)
17:46:41 eval "$(pyenv "sh-$command" "$@")"
17:46:41 ;;
17:46:41 *)
17:46:41 command pyenv "$command" "$@"
17:46:41 ;;
17:46:41 esac
17:46:41 }'
17:46:41 +++ bash --norc -ec 'IFS=:; paths=($PATH);
17:46:41 for i in ${!paths[@]}; do
17:46:41 if [[ ${paths[i]} == "/opt/pyenv/shims" ]]; then unset '\''paths[i]'\'';
17:46:41 fi; done;
17:46:41 echo "${paths[*]}"'
17:46:41 ++ PATH=/opt/pyenv/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/opt/puppetlabs/bin
17:46:41 ++ export PATH=/opt/pyenv/shims:/opt/pyenv/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/opt/puppetlabs/bin
17:46:41 ++ PATH=/opt/pyenv/shims:/opt/pyenv/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/opt/puppetlabs/bin
17:46:41 ++ export PYENV_SHELL=bash
17:46:41 ++ PYENV_SHELL=bash
17:46:41 ++ source /opt/pyenv/libexec/../completions/pyenv.bash
17:46:41 +++ complete -F _pyenv pyenv
17:46:41 ++ lf-pyver python3
17:46:41 ++ local py_version_xy=python3
17:46:41 ++ local py_version_xyz=
17:46:41 ++ pyenv versions
17:46:41 ++ local command
17:46:41 ++ command=versions
17:46:41 ++ '[' 1 -gt 0 ']'
17:46:41 ++ sed 's/^[ *]* //'
17:46:41 ++ shift
17:46:41 ++ case "$command" in
17:46:41 ++ command pyenv versions
17:46:41 ++ pyenv versions
17:46:41 ++ awk '{ print $1 }'
17:46:41 ++ grep -E '^[0-9.]*[0-9]$'
17:46:41 ++ [[ ! -s /tmp/.pyenv_versions ]]
17:46:41 +++ grep '^3' /tmp/.pyenv_versions
17:46:41 +++ sort -V
17:46:41 +++ tail -n 1
17:46:41 ++ py_version_xyz=3.11.7
17:46:41 ++ [[ -z 3.11.7 ]]
17:46:41 ++ echo 3.11.7
17:46:41 ++ return 0
17:46:41 + pyenv local 3.11.7
17:46:41 + local command
17:46:41 + command=local
17:46:41 + '[' 2 -gt 0 ']'
17:46:41 + shift
17:46:41 + case "$command" in
17:46:41 + command pyenv local 3.11.7
17:46:41 + pyenv local 3.11.7
17:46:41 + for arg in "$@"
17:46:41 + case $arg in
17:46:41 + pkg_list+='tox '
17:46:41 + for arg in "$@"
17:46:41 + case $arg in
17:46:41 + pkg_list+='virtualenv '
17:46:41 + for arg in "$@"
17:46:41 + case $arg in
17:46:41 + pkg_list+='urllib3~=1.26.15 '
17:46:41 + [[ -f /tmp/.toxenv ]]
17:46:41 + [[ ! -f /tmp/.toxenv ]]
17:46:41 + [[ -n '' ]]
17:46:41 + python3 -m venv /tmp/venv-eelT
17:46:45 + echo 'lf-activate-venv(): INFO: Creating python3 venv at /tmp/venv-eelT'
17:46:45 lf-activate-venv(): INFO: Creating python3 venv at /tmp/venv-eelT
17:46:45 + echo /tmp/venv-eelT
17:46:45 + echo 'lf-activate-venv(): INFO: Save venv in file: /tmp/.toxenv'
17:46:45 lf-activate-venv(): INFO: Save venv in file: /tmp/.toxenv
17:46:45 + /tmp/venv-eelT/bin/python3 -m pip install --upgrade --quiet pip virtualenv
17:46:48 + [[ -z tox virtualenv urllib3~=1.26.15 ]]
17:46:48 + echo 'lf-activate-venv(): INFO: Installing: tox virtualenv urllib3~=1.26.15 '
17:46:48 lf-activate-venv(): INFO: Installing: tox virtualenv urllib3~=1.26.15
17:46:48 + /tmp/venv-eelT/bin/python3 -m pip install --upgrade --quiet --upgrade-strategy eager tox virtualenv urllib3~=1.26.15
17:46:50 + type python3
17:46:50 + true
17:46:50 + echo 'lf-activate-venv(): INFO: Adding /tmp/venv-eelT/bin to PATH'
17:46:50 lf-activate-venv(): INFO: Adding /tmp/venv-eelT/bin to PATH
17:46:50 + PATH=/tmp/venv-eelT/bin:/opt/pyenv/shims:/opt/pyenv/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/opt/puppetlabs/bin
17:46:50 + return 0
17:46:50 + python3 --version
17:46:50 Python 3.11.7
17:46:50 + python3 -m pip --version
17:46:50 pip 24.2 from /tmp/venv-eelT/lib/python3.11/site-packages/pip (python 3.11)
17:46:50 + python3 -m pip freeze
17:46:50 cachetools==5.5.0
17:46:50 chardet==5.2.0
17:46:50 colorama==0.4.6
17:46:50 distlib==0.3.9
17:46:50 filelock==3.16.1
17:46:50 packaging==24.1
17:46:50 platformdirs==4.3.6
17:46:50 pluggy==1.5.0
17:46:50 pyproject-api==1.8.0
17:46:50 tox==4.22.0
17:46:50 urllib3==1.26.20
17:46:50 virtualenv==20.26.6
17:46:50 [transportpce-tox-verify-transportpce-master] $ /bin/sh -xe /tmp/jenkins10348494177756983740.sh
17:46:50 [EnvInject] - Injecting environment variables from a build step.
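tox-install.sh above creates /tmp/venv-eelT and records it in /tmp/.toxenv; the tox-run.sh step that follows reuses it. A condensed sketch of that cache-or-create logic (assumption: simplified from lf-env.sh's lf-activate-venv, not the verbatim function):

```shell
# Cache-or-create a venv path in a marker file, as lf-activate-venv does:
# the first caller creates and records the venv, later callers reuse it.
lf_venv_for() {
  venv_file="$1"
  if [ -f "$venv_file" ]; then
    cat "$venv_file"                     # -> "Reuse venv:... from file:..."
  else
    lf_venv=$(mktemp -d /tmp/venv-XXXX)  # e.g. /tmp/venv-eelT
    echo "$lf_venv" > "$venv_file"       # -> "Save venv in file: ..."
    echo "$lf_venv"
  fi
}
```

Because both build steps pass the same --venv-file /tmp/.toxenv, the second lf-activate-venv call resolves to the venv the first one created (/tmp/venv-eelT in this log) instead of building a new one.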
17:46:50 [EnvInject] - Injecting as environment variables the properties content
17:46:50 PARALLEL=True
17:46:50
17:46:50 [EnvInject] - Variables injected successfully.
17:46:50 [transportpce-tox-verify-transportpce-master] $ /bin/bash -l /tmp/jenkins10681950074106354870.sh
17:46:50 ---> tox-run.sh
17:46:50 + PATH=/home/jenkins/.local/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/opt/puppetlabs/bin
17:46:50 + ARCHIVE_TOX_DIR=/w/workspace/transportpce-tox-verify-transportpce-master/archives/tox
17:46:50 + ARCHIVE_DOC_DIR=/w/workspace/transportpce-tox-verify-transportpce-master/archives/docs
17:46:50 + mkdir -p /w/workspace/transportpce-tox-verify-transportpce-master/archives/tox
17:46:50 + cd /w/workspace/transportpce-tox-verify-transportpce-master/.
17:46:50 + source /home/jenkins/lf-env.sh
17:46:50 + lf-activate-venv --venv-file /tmp/.toxenv tox virtualenv urllib3~=1.26.15
17:46:50 ++ mktemp -d /tmp/venv-XXXX
17:46:50 + lf_venv=/tmp/venv-amre
17:46:50 + local venv_file=/tmp/.os_lf_venv
17:46:50 + local python=python3
17:46:50 + local options
17:46:50 + local set_path=true
17:46:50 + local install_args=
17:46:50 ++ getopt -o np:v: -l no-path,system-site-packages,python:,venv-file: -n lf-activate-venv -- --venv-file /tmp/.toxenv tox virtualenv urllib3~=1.26.15
17:46:50 + options=' --venv-file '\''/tmp/.toxenv'\'' -- '\''tox'\'' '\''virtualenv'\'' '\''urllib3~=1.26.15'\'''
17:46:50 + eval set -- ' --venv-file '\''/tmp/.toxenv'\'' -- '\''tox'\'' '\''virtualenv'\'' '\''urllib3~=1.26.15'\'''
17:46:50 ++ set -- --venv-file /tmp/.toxenv -- tox virtualenv urllib3~=1.26.15
17:46:50 + true
17:46:50 + case $1 in
17:46:50 + venv_file=/tmp/.toxenv
17:46:50 + shift 2
17:46:50 + true
17:46:50 + case $1 in
17:46:50 + shift
17:46:50 + break
17:46:50 + case $python in
17:46:50 + local pkg_list=
17:46:50 + [[ -d /opt/pyenv ]]
17:46:50 + echo 'Setup pyenv:'
17:46:50 Setup pyenv:
17:46:50 + export PYENV_ROOT=/opt/pyenv
17:46:50 + PYENV_ROOT=/opt/pyenv
17:46:50 + export PATH=/opt/pyenv/bin:/home/jenkins/.local/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/opt/puppetlabs/bin
17:46:50 + PATH=/opt/pyenv/bin:/home/jenkins/.local/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/opt/puppetlabs/bin
17:46:50 + pyenv versions
17:46:51 system
17:46:51 3.8.13
17:46:51 3.9.13
17:46:51 3.10.13
17:46:51 * 3.11.7 (set by /w/workspace/transportpce-tox-verify-transportpce-master/.python-version)
17:46:51 + command -v pyenv
17:46:51 ++ pyenv init - --no-rehash
17:46:51 + eval 'PATH="$(bash --norc -ec '\''IFS=:; paths=($PATH);
17:46:51 for i in ${!paths[@]}; do
17:46:51 if [[ ${paths[i]} == "'\'''\''/opt/pyenv/shims'\'''\''" ]]; then unset '\''\'\'''\''paths[i]'\''\'\'''\'';
17:46:51 fi; done;
17:46:51 echo "${paths[*]}"'\'')"
17:46:51 export PATH="/opt/pyenv/shims:${PATH}"
17:46:51 export PYENV_SHELL=bash
17:46:51 source '\''/opt/pyenv/libexec/../completions/pyenv.bash'\''
17:46:51 pyenv() {
17:46:51 local command
17:46:51 command="${1:-}"
17:46:51 if [ "$#" -gt 0 ]; then
17:46:51 shift
17:46:51 fi
17:46:51
17:46:51 case "$command" in
17:46:51 rehash|shell)
17:46:51 eval "$(pyenv "sh-$command" "$@")"
17:46:51 ;;
17:46:51 *)
17:46:51 command pyenv "$command" "$@"
17:46:51 ;;
17:46:51 esac
17:46:51 }'
17:46:51 +++ bash --norc -ec 'IFS=:; paths=($PATH);
17:46:51 for i in ${!paths[@]}; do
17:46:51 if [[ ${paths[i]} == "/opt/pyenv/shims" ]]; then unset '\''paths[i]'\'';
17:46:51 fi; done;
17:46:51 echo "${paths[*]}"'
17:46:51 ++ PATH=/opt/pyenv/bin:/home/jenkins/.local/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/opt/puppetlabs/bin
17:46:51 ++ export PATH=/opt/pyenv/shims:/opt/pyenv/bin:/home/jenkins/.local/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/opt/puppetlabs/bin
17:46:51 ++ PATH=/opt/pyenv/shims:/opt/pyenv/bin:/home/jenkins/.local/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/opt/puppetlabs/bin
17:46:51 ++ export PYENV_SHELL=bash
17:46:51 ++ PYENV_SHELL=bash
17:46:51 ++ source /opt/pyenv/libexec/../completions/pyenv.bash
17:46:51 +++ complete -F _pyenv pyenv
17:46:51 ++ lf-pyver python3
17:46:51 ++ local py_version_xy=python3
17:46:51 ++ local py_version_xyz=
17:46:51 ++ pyenv versions
17:46:51 ++ local command
17:46:51 ++ sed 's/^[ *]* //'
17:46:51 ++ command=versions
17:46:51 ++ '[' 1 -gt 0 ']'
17:46:51 ++ awk '{ print $1 }'
17:46:51 ++ shift
17:46:51 ++ grep -E '^[0-9.]*[0-9]$'
17:46:51 ++ case "$command" in
17:46:51 ++ command pyenv versions
17:46:51 ++ pyenv versions
17:46:51 ++ [[ ! -s /tmp/.pyenv_versions ]]
17:46:51 +++ grep '^3' /tmp/.pyenv_versions
17:46:51 +++ sort -V
17:46:51 +++ tail -n 1
17:46:51 ++ py_version_xyz=3.11.7
17:46:51 ++ [[ -z 3.11.7 ]]
17:46:51 ++ echo 3.11.7
17:46:51 ++ return 0
17:46:51 + pyenv local 3.11.7
17:46:51 + local command
17:46:51 + command=local
17:46:51 + '[' 2 -gt 0 ']'
17:46:51 + shift
17:46:51 + case "$command" in
17:46:51 + command pyenv local 3.11.7
17:46:51 + pyenv local 3.11.7
17:46:51 + for arg in "$@"
17:46:51 + case $arg in
17:46:51 + pkg_list+='tox '
17:46:51 + for arg in "$@"
17:46:51 + case $arg in
17:46:51 + pkg_list+='virtualenv '
17:46:51 + for arg in "$@"
17:46:51 + case $arg in
17:46:51 + pkg_list+='urllib3~=1.26.15 '
17:46:51 + [[ -f /tmp/.toxenv ]]
17:46:51 ++ cat /tmp/.toxenv
17:46:51 + lf_venv=/tmp/venv-eelT
17:46:51 + echo 'lf-activate-venv(): INFO: Reuse venv:/tmp/venv-eelT from' file:/tmp/.toxenv
17:46:51 lf-activate-venv(): INFO: Reuse venv:/tmp/venv-eelT from file:/tmp/.toxenv
17:46:51 + /tmp/venv-eelT/bin/python3 -m pip install --upgrade --quiet pip virtualenv
17:46:52 + [[ -z tox virtualenv urllib3~=1.26.15 ]]
17:46:52 + echo 'lf-activate-venv(): INFO: Installing: tox virtualenv urllib3~=1.26.15 '
17:46:52 lf-activate-venv(): INFO: Installing: tox virtualenv urllib3~=1.26.15
17:46:52 + /tmp/venv-eelT/bin/python3 -m pip install --upgrade --quiet --upgrade-strategy eager tox virtualenv urllib3~=1.26.15
17:46:53 + type python3
17:46:53 + true
17:46:53 + echo 'lf-activate-venv(): INFO: Adding /tmp/venv-eelT/bin to PATH'
17:46:53 lf-activate-venv(): INFO: Adding /tmp/venv-eelT/bin to PATH
17:46:53 + PATH=/tmp/venv-eelT/bin:/opt/pyenv/shims:/opt/pyenv/bin:/home/jenkins/.local/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/opt/puppetlabs/bin
17:46:53 + return 0
17:46:53 + [[ -d /opt/pyenv ]]
17:46:53 + echo '---> Setting up pyenv'
17:46:53 ---> Setting up pyenv
17:46:53 + export PYENV_ROOT=/opt/pyenv
17:46:53 + PYENV_ROOT=/opt/pyenv
17:46:53 + export PATH=/opt/pyenv/bin:/tmp/venv-eelT/bin:/opt/pyenv/shims:/opt/pyenv/bin:/home/jenkins/.local/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/opt/puppetlabs/bin
17:46:53 + PATH=/opt/pyenv/bin:/tmp/venv-eelT/bin:/opt/pyenv/shims:/opt/pyenv/bin:/home/jenkins/.local/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/opt/puppetlabs/bin
17:46:53 ++ pwd
17:46:53 + PYTHONPATH=/w/workspace/transportpce-tox-verify-transportpce-master
17:46:53 + export PYTHONPATH
17:46:53 + export TOX_TESTENV_PASSENV=PYTHONPATH
17:46:53 + TOX_TESTENV_PASSENV=PYTHONPATH
17:46:53 + tox --version
17:46:54 4.22.0 from /tmp/venv-eelT/lib/python3.11/site-packages/tox/__init__.py
17:46:54 + PARALLEL=True
17:46:54 + TOX_OPTIONS_LIST=
17:46:54 + [[ -n '' ]]
17:46:54 + case ${PARALLEL,,} in
17:46:54 + TOX_OPTIONS_LIST=' --parallel auto --parallel-live'
17:46:54 + tox --parallel auto --parallel-live
17:46:54 + tee -a /w/workspace/transportpce-tox-verify-transportpce-master/archives/tox/tox.log
17:46:55 docs: install_deps> python -I -m pip install -r docs/requirements.txt
17:46:55 checkbashisms: freeze> python -m pip freeze --all
17:46:55 buildcontroller: install_deps> python -I -m pip install 'setuptools>=7.0' -r /w/workspace/transportpce-tox-verify-transportpce-master/tests/requirements.txt -r /w/workspace/transportpce-tox-verify-transportpce-master/tests/test-requirements.txt
17:46:55 docs-linkcheck: install_deps> python -I -m pip install -r docs/requirements.txt
17:46:56 checkbashisms: pip==24.2,setuptools==75.1.0,wheel==0.44.0
17:46:56 checkbashisms: commands[0] /w/workspace/transportpce-tox-verify-transportpce-master/tests> ./fixCIcentOS8reposMirrors.sh
17:46:56 checkbashisms: commands[1] /w/workspace/transportpce-tox-verify-transportpce-master/tests> sh -c 'command checkbashisms>/dev/null || sudo yum install -y devscripts-checkbashisms || sudo yum install -y devscripts-minimal || sudo yum install -y devscripts || sudo yum install -y https://archives.fedoraproject.org/pub/archive/fedora/linux/releases/31/Everything/x86_64/os/Packages/d/devscripts-checkbashisms-2.19.6-2.fc31.x86_64.rpm || (echo "checkbashisms command not found - please install it (e.g. sudo apt-get install devscripts | yum install devscripts-minimal )" >&2 && exit 1)'
17:46:56 checkbashisms: commands[2] /w/workspace/transportpce-tox-verify-transportpce-master/tests> find . -not -path '*/\.*' -name '*.sh' -exec checkbashisms -f '{}' +
17:46:57 script ./reflectwarn.sh does not appear to have a #! interpreter line;
17:46:57 you may get strange results
17:46:57 checkbashisms: OK ✔ in 3.12 seconds
17:46:57 pre-commit: install_deps> python -I -m pip install pre-commit
17:47:00 pre-commit: freeze> python -m pip freeze --all
17:47:00 pre-commit: cfgv==3.4.0,distlib==0.3.9,filelock==3.16.1,identify==2.6.1,nodeenv==1.9.1,pip==24.2,platformdirs==4.3.6,pre_commit==4.0.1,PyYAML==6.0.2,setuptools==75.1.0,virtualenv==20.26.6,wheel==0.44.0
17:47:00 pre-commit: commands[0] /w/workspace/transportpce-tox-verify-transportpce-master/tests> ./fixCIcentOS8reposMirrors.sh
17:47:00 pre-commit: commands[1] /w/workspace/transportpce-tox-verify-transportpce-master/tests> sh -c 'which cpan || sudo yum install -y perl-CPAN || (echo "cpan command not found - please install it (e.g. sudo apt-get install perl-modules | yum install perl-CPAN )" >&2 && exit 1)'
17:47:00 /usr/bin/cpan
17:47:00 pre-commit: commands[2] /w/workspace/transportpce-tox-verify-transportpce-master/tests> pre-commit run --all-files --show-diff-on-failure
17:47:01 [WARNING] hook id `remove-tabs` uses deprecated stage names (commit) which will be removed in a future version. run: `pre-commit migrate-config` to automatically fix this.
17:47:01 [WARNING] hook id `perltidy` uses deprecated stage names (commit) which will be removed in a future version. run: `pre-commit migrate-config` to automatically fix this.
17:47:01 [INFO] Initializing environment for https://github.com/pre-commit/pre-commit-hooks.
17:47:01 [WARNING] repo `https://github.com/pre-commit/pre-commit-hooks` uses deprecated stage names (commit, push) which will be removed in a future version. Hint: often `pre-commit autoupdate --repo https://github.com/pre-commit/pre-commit-hooks` will fix this. if it does not -- consider reporting an issue to that repo.
17:47:01 [INFO] Initializing environment for https://github.com/jorisroovers/gitlint.
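The tox invocation driving all of the environments above derived its ' --parallel auto --parallel-live' flags from the injected PARALLEL=True. A sketch of that mapping (a POSIX rewrite of the bash `case ${PARALLEL,,}` seen in the trace; treating anything but "true" as off is an assumption):

```shell
# Map the injected PARALLEL variable to tox options, as tox-run.sh does;
# lowercasing first makes True/TRUE/true equivalent.
tox_parallel_opts() {
  val=$(printf '%s' "${1:-}" | tr '[:upper:]' '[:lower:]')
  opts=""
  case "$val" in
    true) opts=" --parallel auto --parallel-live" ;;  # value seen in this log
  esac
  printf '%s\n' "$opts"
}
```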
17:47:01 buildcontroller: freeze> python -m pip freeze --all
17:47:01 [INFO] Initializing environment for https://github.com/jorisroovers/gitlint:./gitlint-core[trusted-deps].
17:47:02 buildcontroller: bcrypt==4.2.0,certifi==2024.8.30,cffi==1.17.1,charset-normalizer==3.4.0,cryptography==43.0.1,dict2xml==1.7.6,idna==3.10,iniconfig==2.0.0,lxml==5.3.0,netconf-client==3.1.1,packaging==24.1,paramiko==3.5.0,pip==24.2,pluggy==1.5.0,psutil==6.0.0,pycparser==2.22,PyNaCl==1.5.0,pytest==8.3.3,requests==2.32.3,setuptools==75.1.0,urllib3==2.2.3,wheel==0.44.0
17:47:02 buildcontroller: commands[0] /w/workspace/transportpce-tox-verify-transportpce-master/tests> ./build_controller.sh
17:47:02 + update-java-alternatives -l
17:47:02 java-1.11.0-openjdk-amd64 1111 /usr/lib/jvm/java-1.11.0-openjdk-amd64
17:47:02 java-1.12.0-openjdk-amd64 1211 /usr/lib/jvm/java-1.12.0-openjdk-amd64
17:47:02 java-1.17.0-openjdk-amd64 1711 /usr/lib/jvm/java-1.17.0-openjdk-amd64
17:47:02 java-1.21.0-openjdk-amd64 2111 /usr/lib/jvm/java-1.21.0-openjdk-amd64
17:47:02 java-1.8.0-openjdk-amd64 1081 /usr/lib/jvm/java-1.8.0-openjdk-amd64
17:47:02 + sudo update-java-alternatives -s java-1.21.0-openjdk-amd64
17:47:02 [INFO] Initializing environment for https://github.com/Lucas-C/pre-commit-hooks.
17:47:02 + java -version
17:47:02 + sed -n ;s/.* version "\(.*\)\.\(.*\)\..*".*$/\1/p;
17:47:02 [INFO] Initializing environment for https://github.com/pre-commit/mirrors-autopep8.
17:47:02 + JAVA_VER=21
17:47:02 + echo 21
17:47:02 21
17:47:02 + javac -version
17:47:02 + sed -n ;s/javac \(.*\)\.\(.*\)\..*.*$/\1/p;
17:47:02 [INFO] Initializing environment for https://github.com/perltidy/perltidy.
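build_controller.sh gates on the JDK by scraping the major version out of `java -version` / `javac -version` (the single quotes around the sed expressions were stripped by the xtrace output above, hence the stray semicolons). A re-creation of that check on a sample version string, with a simplified sed pattern:

```shell
# Extract the major version from 'java -version'-style output and require
# >= 21, mirroring the JAVA_VER / JAVAC_VER checks traced above.
java_major() {
  sed -n 's/.* version "\([0-9]*\)\..*".*$/\1/p'
}

JAVA_VER=$(echo 'openjdk version "21.0.4" 2024-07-16' | java_major)
[ "$JAVA_VER" -ge 21 ] && echo "ok, java is $JAVA_VER or newer"
```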
17:47:03 + JAVAC_VER=21
17:47:03 + echo 21
17:47:03 21
17:47:03 ok, java is 21 or newer
17:47:03 + [ 21 -ge 21 ]
17:47:03 + [ 21 -ge 21 ]
17:47:03 + echo ok, java is 21 or newer
17:47:03 + wget -nv https://dlcdn.apache.org/maven/maven-3/3.9.8/binaries/apache-maven-3.9.8-bin.tar.gz -P /tmp
17:47:03 2024-10-16 17:47:03 URL:https://dlcdn.apache.org/maven/maven-3/3.9.8/binaries/apache-maven-3.9.8-bin.tar.gz [9083702/9083702] -> "/tmp/apache-maven-3.9.8-bin.tar.gz" [1]
17:47:03 + sudo mkdir -p /opt
17:47:03 + sudo tar xf /tmp/apache-maven-3.9.8-bin.tar.gz -C /opt
17:47:03 + sudo ln -s /opt/apache-maven-3.9.8 /opt/maven
17:47:03 + sudo ln -s /opt/maven/bin/mvn /usr/bin/mvn
17:47:03 + mvn --version
17:47:03 Apache Maven 3.9.8 (36645f6c9b5079805ea5009217e36f2cffd34256)
17:47:03 Maven home: /opt/maven
17:47:03 Java version: 21.0.4, vendor: Ubuntu, runtime: /usr/lib/jvm/java-21-openjdk-amd64
17:47:03 Default locale: en, platform encoding: UTF-8
17:47:03 OS name: "linux", version: "5.4.0-190-generic", arch: "amd64", family: "unix"
17:47:03 [INFO] Installing environment for https://github.com/pre-commit/pre-commit-hooks.
17:47:03 [INFO] Once installed this environment will be reused.
17:47:03 [INFO] This may take a few minutes...
17:47:03 NOTE: Picked up JDK_JAVA_OPTIONS:
17:47:03 --add-opens=java.base/java.io=ALL-UNNAMED
17:47:03 --add-opens=java.base/java.lang=ALL-UNNAMED
17:47:03 --add-opens=java.base/java.lang.invoke=ALL-UNNAMED
17:47:03 --add-opens=java.base/java.lang.reflect=ALL-UNNAMED
17:47:03 --add-opens=java.base/java.net=ALL-UNNAMED
17:47:03 --add-opens=java.base/java.nio=ALL-UNNAMED
17:47:03 --add-opens=java.base/java.nio.charset=ALL-UNNAMED
17:47:03 --add-opens=java.base/java.nio.file=ALL-UNNAMED
17:47:03 --add-opens=java.base/java.util=ALL-UNNAMED
17:47:03 --add-opens=java.base/java.util.jar=ALL-UNNAMED
17:47:03 --add-opens=java.base/java.util.stream=ALL-UNNAMED
17:47:03 --add-opens=java.base/java.util.zip=ALL-UNNAMED
17:47:03 --add-opens java.base/sun.nio.ch=ALL-UNNAMED
17:47:03 --add-opens java.base/sun.nio.fs=ALL-UNNAMED
17:47:03 -Xlog:disable
17:47:08 [INFO] Installing environment for https://github.com/Lucas-C/pre-commit-hooks.
17:47:08 [INFO] Once installed this environment will be reused.
17:47:08 [INFO] This may take a few minutes...
17:47:15 [INFO] Installing environment for https://github.com/pre-commit/mirrors-autopep8.
17:47:15 [INFO] Once installed this environment will be reused.
17:47:15 [INFO] This may take a few minutes...
17:47:18 [INFO] Installing environment for https://github.com/perltidy/perltidy.
17:47:18 [INFO] Once installed this environment will be reused.
17:47:18 [INFO] This may take a few minutes...
17:47:25 docs: freeze> python -m pip freeze --all
17:47:25 docs-linkcheck: freeze> python -m pip freeze --all
17:47:25 docs: alabaster==1.0.0,attrs==24.2.0,babel==2.16.0,blockdiag==3.0.0,certifi==2024.8.30,charset-normalizer==3.4.0,contourpy==1.3.0,cycler==0.12.1,docutils==0.21.2,fonttools==4.54.1,funcparserlib==2.0.0a0,future==1.0.0,idna==3.10,imagesize==1.4.1,Jinja2==3.1.4,jsonschema==3.2.0,kiwisolver==1.4.7,lfdocs-conf==0.9.0,MarkupSafe==3.0.1,matplotlib==3.9.2,numpy==2.1.2,nwdiag==3.0.0,packaging==24.1,pillow==11.0.0,pip==24.2,Pygments==2.18.0,pyparsing==3.2.0,pyrsistent==0.20.0,python-dateutil==2.9.0.post0,PyYAML==6.0.2,requests==2.32.3,requests-file==1.5.1,seqdiag==3.0.0,setuptools==75.1.0,six==1.16.0,snowballstemmer==2.2.0,Sphinx==8.1.3,sphinx-bootstrap-theme==0.8.1,sphinx-data-viewer==0.1.5,sphinx-rtd-theme==3.0.1,sphinx-tabs==3.4.7,sphinxcontrib-applehelp==2.0.0,sphinxcontrib-blockdiag==3.0.0,sphinxcontrib-devhelp==2.0.0,sphinxcontrib-htmlhelp==2.1.0,sphinxcontrib-jquery==4.1,sphinxcontrib-jsmath==1.0.1,sphinxcontrib-needs==0.7.9,sphinxcontrib-nwdiag==2.0.0,sphinxcontrib-plantuml==0.30,sphinxcontrib-qthelp==2.0.0,sphinxcontrib-seqdiag==3.0.0,sphinxcontrib-serializinghtml==2.0.0,sphinxcontrib-swaggerdoc==0.1.7,urllib3==2.2.3,webcolors==24.8.0,wheel==0.44.0
17:47:25 docs: commands[0] /w/workspace/transportpce-tox-verify-transportpce-master/tests> sphinx-build -q -W --keep-going -b html -n -d /w/workspace/transportpce-tox-verify-transportpce-master/.tox/docs/tmp/doctrees ../docs/ /w/workspace/transportpce-tox-verify-transportpce-master/docs/_build/html
17:47:26 docs-linkcheck: alabaster==1.0.0,attrs==24.2.0,babel==2.16.0,blockdiag==3.0.0,certifi==2024.8.30,charset-normalizer==3.4.0,contourpy==1.3.0,cycler==0.12.1,docutils==0.21.2,fonttools==4.54.1,funcparserlib==2.0.0a0,future==1.0.0,idna==3.10,imagesize==1.4.1,Jinja2==3.1.4,jsonschema==3.2.0,kiwisolver==1.4.7,lfdocs-conf==0.9.0,MarkupSafe==3.0.1,matplotlib==3.9.2,numpy==2.1.2,nwdiag==3.0.0,packaging==24.1,pillow==11.0.0,pip==24.2,Pygments==2.18.0,pyparsing==3.2.0,pyrsistent==0.20.0,python-dateutil==2.9.0.post0,PyYAML==6.0.2,requests==2.32.3,requests-file==1.5.1,seqdiag==3.0.0,setuptools==75.1.0,six==1.16.0,snowballstemmer==2.2.0,Sphinx==8.1.3,sphinx-bootstrap-theme==0.8.1,sphinx-data-viewer==0.1.5,sphinx-rtd-theme==3.0.1,sphinx-tabs==3.4.7,sphinxcontrib-applehelp==2.0.0,sphinxcontrib-blockdiag==3.0.0,sphinxcontrib-devhelp==2.0.0,sphinxcontrib-htmlhelp==2.1.0,sphinxcontrib-jquery==4.1,sphinxcontrib-jsmath==1.0.1,sphinxcontrib-needs==0.7.9,sphinxcontrib-nwdiag==2.0.0,sphinxcontrib-plantuml==0.30,sphinxcontrib-qthelp==2.0.0,sphinxcontrib-seqdiag==3.0.0,sphinxcontrib-serializinghtml==2.0.0,sphinxcontrib-swaggerdoc==0.1.7,urllib3==2.2.3,webcolors==24.8.0,wheel==0.44.0
17:47:26 docs-linkcheck: commands[0] /w/workspace/transportpce-tox-verify-transportpce-master/tests> sphinx-build -q -b linkcheck -d /w/workspace/transportpce-tox-verify-transportpce-master/.tox/docs-linkcheck/tmp/doctrees ../docs/ /w/workspace/transportpce-tox-verify-transportpce-master/docs/_build/linkcheck
17:47:28 docs: OK ✔ in 33.87 seconds
17:47:28 pylint: install_deps> python -I -m pip install 'pylint>=2.6.0'
17:47:30 trim trailing whitespace.................................................Passed
17:47:30 Tabs remover.............................................................Passed
17:47:30 autopep8.................................................................docs-linkcheck: OK ✔ in 35.36 seconds
17:47:33 pylint: freeze> python -m pip freeze --all
17:47:33 pylint:
astroid==3.3.5,dill==0.3.9,isort==5.13.2,mccabe==0.7.0,pip==24.2,platformdirs==4.3.6,pylint==3.3.1,setuptools==75.1.0,tomlkit==0.13.2,wheel==0.44.0
17:47:33 pylint: commands[0] /w/workspace/transportpce-tox-verify-transportpce-master/tests> find transportpce_tests/ -name '*.py' -exec pylint --fail-under=10 --max-line-length=120 --disable=missing-docstring,import-error --disable=fixme --disable=duplicate-code '--module-rgx=([a-z0-9_]+$)|([0-9.]{1,30}$)' '--method-rgx=(([a-z_][a-zA-Z0-9_]{2,})|(_[a-z0-9_]*)|(__[a-zA-Z][a-zA-Z0-9_]+__))$' '--variable-rgx=[a-zA-Z_][a-zA-Z0-9_]{1,30}$' '{}' +
17:47:35 Failed
17:47:35 - hook id: autopep8
17:47:35 - files were modified by this hook
17:47:35 perltidy.................................................................Passed
17:47:35 pre-commit hook(s) made changes.
17:47:35 If you are seeing this message in CI, reproduce locally with: `pre-commit run --all-files`.
17:47:35 To run `pre-commit` as part of git workflow, use `pre-commit install`.
17:47:35 All changes made by hooks:
17:47:35 diff --git a/tests/transportpce_tests/1.2.1/test02_topo_portmapping.py b/tests/transportpce_tests/1.2.1/test02_topo_portmapping.py
17:47:35 index b0403c0b..9773dd08 100644
17:47:35 --- a/tests/transportpce_tests/1.2.1/test02_topo_portmapping.py
17:47:35 +++ b/tests/transportpce_tests/1.2.1/test02_topo_portmapping.py
17:47:35 @@ -56,7 +56,7 @@ class TransportPCEtesting(unittest.TestCase):
17:47:35 for node in resTopo['network'][0]['node']:
17:47:35 nodeId = node['node-id']
17:47:35 nodeMapId = nodeId.split("-")[0]
17:47:35 - if (nodeMapId == 'TAPI') :
17:47:35 + if (nodeMapId == 'TAPI'):
17:47:35 continue
17:47:35 response = test_utils.get_portmapping_node_attr(nodeMapId, "node-info", None)
17:47:35 self.assertEqual(response['status_code'], requests.codes.ok)
17:47:35 diff --git a/tests/transportpce_tests/1.2.1/test03_topology.py b/tests/transportpce_tests/1.2.1/test03_topology.py
17:47:35 index 4cac2581..df42f53e 100644
17:47:35 --- a/tests/transportpce_tests/1.2.1/test03_topology.py
17:47:35 +++ b/tests/transportpce_tests/1.2.1/test03_topology.py
17:47:35 @@ -165,7 +165,7 @@ class TransportPCETopologyTesting(unittest.TestCase):
17:47:35 for node in response['network'][0]['node']:
17:47:35 nodeId = node['node-id']
17:47:35 nodeMapId = nodeId.split("-")[0]
17:47:35 - if (nodeMapId == 'TAPI') :
17:47:35 + if (nodeMapId == 'TAPI'):
17:47:35 continue
17:47:35 nodeType = node['org-openroadm-common-network:node-type']
17:47:35 self.assertIn({'network-ref': 'openroadm-network', 'node-ref': 'ROADMA01'}, node['supporting-node'])
17:47:35 @@ -199,7 +199,7 @@ class TransportPCETopologyTesting(unittest.TestCase):
17:47:35 for node in response['network'][0]['node']:
17:47:35 nodeId = node['node-id']
17:47:35 nodeMapId = nodeId.split("-")[0]
17:47:35 - if (nodeMapId == 'TAPI') :
17:47:35 + if (nodeMapId == 'TAPI'):
17:47:35 continue
17:47:35 self.assertEqual(node['supporting-node'][0]['network-ref'], 'clli-network')
17:47:35 self.assertEqual(node['supporting-node'][0]['node-ref'], 'NodeA')
17:47:35 @@ -221,7 +221,7 @@ class TransportPCETopologyTesting(unittest.TestCase):
17:47:35 for node in response['network'][0]['node']:
17:47:35 nodeId = node['node-id']
17:47:35 nodeMapId = nodeId.split("-")[0]
17:47:35 - if (nodeMapId == 'TAPI') :
17:47:35 + if (nodeMapId == 'TAPI'):
17:47:35 continue
17:47:35 nodeType = node['org-openroadm-common-network:node-type']
17:47:35 # Tests related to XPDRA nodes
17:47:35 @@ -361,7 +361,7 @@ class TransportPCETopologyTesting(unittest.TestCase):
17:47:35 self.assertEqual(node['supporting-node'][0]['network-ref'], 'clli-network')
17:47:35 nodeId = node['node-id']
17:47:35 nodeMapId = nodeId.split("-")[0]
17:47:35 - if (nodeMapId == 'TAPI') :
17:47:35 + if (nodeMapId == 'TAPI'):
17:47:35 continue
17:47:35 self.assertIn(nodeId, CHECK_LIST)
17:47:35 self.assertEqual(node['supporting-node'][0]['node-ref'],
17:47:35 @@ -438,7 +438,7 @@ class TransportPCETopologyTesting(unittest.TestCase):
17:47:35 for node in response['network'][0]['node']:
17:47:35 nodeId = node['node-id']
17:47:35 nodeMapId = nodeId.split("-")[0]
17:47:35 - if (nodeMapId == 'TAPI') :
17:47:35 + if (nodeMapId == 'TAPI'):
17:47:35 continue
17:47:35 nodeType = node['org-openroadm-common-network:node-type']
17:47:35 # Tests related to XPDRA nodes
17:47:35 @@ -550,7 +550,7 @@ class TransportPCETopologyTesting(unittest.TestCase):
17:47:35 for node in response['network'][0]['node']:
17:47:35 nodeId = node['node-id']
17:47:35 nodeMapId = nodeId.split("-")[0]
17:47:35 - if (nodeMapId == 'TAPI') :
17:47:35 + if (nodeMapId == 'TAPI'):
17:47:35 continue
17:47:35 self.assertIn(nodeId, listNode)
17:47:35 self.assertEqual(node['org-openroadm-clli-network:clli'], nodeId)
17:47:35 @@ -643,7 +643,7 @@ class TransportPCETopologyTesting(unittest.TestCase):
17:47:35 for node in response['network'][0]['node']:
17:47:35 nodeId = node['node-id']
17:47:35 nodeMapId = nodeId.split("-")[0]
17:47:35 - if (nodeMapId == 'TAPI') :
17:47:35 + if (nodeMapId == 'TAPI'):
17:47:35 continue
17:47:35 nodeType = node['org-openroadm-common-network:node-type']
17:47:35 # Tests related to XPDRA nodes
17:47:35 @@ -716,7 +716,7 @@ class TransportPCETopologyTesting(unittest.TestCase):
17:47:35 for node in response['network'][0]['node']:
17:47:35 nodeId = node['node-id']
17:47:35 nodeMapId = nodeId.split("-")[0]
17:47:35 - if (nodeMapId == 'TAPI') :
17:47:35 + if (nodeMapId == 'TAPI'):
17:47:35 continue
17:47:35 self.assertIn({'network-ref': 'openroadm-network', 'node-ref': 'ROADMA01'}, node['supporting-node'])
17:47:35 nodeType = node['org-openroadm-common-network:node-type']
17:47:35 diff --git a/tests/transportpce_tests/2.2.1/test02_topo_portmapping.py b/tests/transportpce_tests/2.2.1/test02_topo_portmapping.py
17:47:35 index 68699b00..e0a1ffaf 100644
17:47:35 --- a/tests/transportpce_tests/2.2.1/test02_topo_portmapping.py
17:47:35 +++ b/tests/transportpce_tests/2.2.1/test02_topo_portmapping.py
17:47:35 @@ -59,7 +59,7 @@ class TransportPCEtesting(unittest.TestCase):
17:47:35 # pylint: disable=consider-using-f-string
17:47:35 print("nodeId={}".format(nodeId))
17:47:35 nodeMapId = nodeId.split("-")[0] + "-" + nodeId.split("-")[1]
17:47:35 - if (nodeMapId == 'TAPI-SBI') :
17:47:35 + if (nodeMapId == 'TAPI-SBI'):
17:47:35 continue
17:47:35 print("nodeMapId={}".format(nodeMapId))
17:47:35 response = test_utils.get_portmapping_node_attr(nodeMapId, "node-info", None)
17:47:35 diff --git a/tests/transportpce_tests/2.2.1/test03_topology.py b/tests/transportpce_tests/2.2.1/test03_topology.py
17:47:35 index 97c9a1fb..6bbf7c3e 100644
17:47:35 --- a/tests/transportpce_tests/2.2.1/test03_topology.py
17:47:35 +++ b/tests/transportpce_tests/2.2.1/test03_topology.py
17:47:35 @@ -168,7 +168,7 @@ class TransportPCEtesting(unittest.TestCase):
17:47:35 for node in response['network'][0]['node']:
17:47:35 nodeId = node['node-id']
17:47:35 nodeMapId = nodeId.split("-")[0]
17:47:35 - if (nodeMapId == 'TAPI') :
17:47:35 + if (nodeMapId == 'TAPI'):
17:47:35 continue
17:47:35 nodeType = node['org-openroadm-common-network:node-type']
17:47:35 self.assertIn({'network-ref': 'openroadm-network', 'node-ref': 'ROADM-A1'}, node['supporting-node'])
17:47:35 @@ -202,7 +202,7 @@ class TransportPCEtesting(unittest.TestCase):
17:47:35 for node in response['network'][0]['node']:
17:47:35 nodeId = node['node-id']
17:47:35 nodeMapId = nodeId.split("-")[0]
17:47:35 - if (nodeMapId == 'TAPI') :
17:47:35 + if (nodeMapId == 'TAPI'):
17:47:35 continue
17:47:35 self.assertEqual(node['supporting-node'][0]['network-ref'], 'clli-network')
17:47:35 self.assertEqual(node['supporting-node'][0]['node-ref'], 'NodeA')
17:47:35 @@ -224,7 +224,7 @@ class TransportPCEtesting(unittest.TestCase):
17:47:35 for node in response['network'][0]['node']:
17:47:35 nodeId = node['node-id']
17:47:35 nodeMapId = nodeId.split("-")[0]
17:47:35 - if (nodeMapId == 'TAPI') :
17:47:35 + if (nodeMapId == 'TAPI'):
17:47:35 continue
17:47:35 nodeType = node['org-openroadm-common-network:node-type']
17:47:35 # Tests related to XPDRA nodes
17:47:35 @@ -367,7 +367,7 @@ class TransportPCEtesting(unittest.TestCase):
17:47:35 self.assertEqual(node['supporting-node'][0]['network-ref'], 'clli-network')
17:47:35 nodeId = node['node-id']
17:47:35 nodeMapId = nodeId.split("-")[0]
17:47:35 - if (nodeMapId == 'TAPI') :
17:47:35 + if (nodeMapId == 'TAPI'):
17:47:35 continue
17:47:35 if nodeId in CHECK_LIST:
17:47:35 self.assertEqual(node['supporting-node'][0]['node-ref'], CHECK_LIST[nodeId]['node-ref'])
17:47:35 @@ -446,7 +446,7 @@ class TransportPCEtesting(unittest.TestCase):
17:47:35 for node in response['network'][0]['node']:
17:47:35 nodeId = node['node-id']
17:47:35 nodeMapId = nodeId.split("-")[0]
17:47:35 - if (nodeMapId == 'TAPI') :
17:47:35 + if (nodeMapId == 'TAPI'):
17:47:35 continue
17:47:35 nodeType = node['org-openroadm-common-network:node-type']
17:47:35 if nodeId == 'XPDR-A1-XPDR1':
17:47:35 @@ -563,7 +563,7 @@ class TransportPCEtesting(unittest.TestCase):
17:47:35 for node in response['network'][0]['node']:
17:47:35 nodeId = node['node-id']
17:47:35 nodeMapId = nodeId.split("-")[0]
17:47:35 - if (nodeMapId == 'TAPI') :
17:47:35 + if (nodeMapId == 'TAPI'):
17:47:35 continue
17:47:35 self.assertIn(nodeId, listNode)
17:47:35 self.assertEqual(node['org-openroadm-clli-network:clli'], nodeId)
17:47:35 @@ -657,7 +657,7 @@ class TransportPCEtesting(unittest.TestCase):
17:47:35 for node in response['network'][0]['node']:
17:47:35 nodeId = node['node-id']
17:47:35 nodeMapId = nodeId.split("-")[0]
17:47:35 - if (nodeMapId == 'TAPI') :
17:47:35 + if (nodeMapId == 'TAPI'):
17:47:35 continue
17:47:35 nodeType = node['org-openroadm-common-network:node-type']
17:47:35 # Tests related to XPDRA nodes
17:47:35 @@ -734,7 +734,7 @@ class TransportPCEtesting(unittest.TestCase):
17:47:35 for node in response['network'][0]['node']:
17:47:35 nodeId = node['node-id']
17:47:35 nodeMapId = nodeId.split("-")[0]
17:47:35 - if (nodeMapId == 'TAPI') :
17:47:35 + if (nodeMapId == 'TAPI'):
17:47:35 continue
17:47:35 self.assertIn({'network-ref': 'openroadm-network', 'node-ref': 'ROADM-A1'}, node['supporting-node'])
17:47:35 nodeType = node['org-openroadm-common-network:node-type']
17:47:35 diff --git a/tests/transportpce_tests/2.2.1/test04_otn_topology.py b/tests/transportpce_tests/2.2.1/test04_otn_topology.py
17:47:35 index f1d1ec77..3a34bf5f 100644
17:47:35 --- a/tests/transportpce_tests/2.2.1/test04_otn_topology.py
17:47:35 +++ b/tests/transportpce_tests/2.2.1/test04_otn_topology.py
17:47:35 @@ -86,7 +86,7 @@ class TransportPCEtesting(unittest.TestCase):
17:47:35 for node in response['network'][0]['node']:
17:47:35 nodeId = node['node-id']
17:47:35 nodeMapId = nodeId.split("-")[0]
17:47:35 - if (nodeMapId == 'TAPI') :
17:47:35 + if (nodeMapId == 'TAPI'):
17:47:35 continue
17:47:35 self.assertIn({'network-ref': 'openroadm-network', 'node-ref': 'SPDR-SA1'}, node['supporting-node'])
17:47:35 self.assertIn({'network-ref': 'clli-network', 'node-ref': 'NodeSA'}, node['supporting-node'])
17:47:35 @@ -150,7 +150,7 @@ class TransportPCEtesting(unittest.TestCase):
17:47:35 for node in response['network'][0]['node']:
17:47:35 nodeId = node['node-id']
17:47:35 nodeMapId = nodeId.split("-")[0]
17:47:35 - if (nodeMapId == 'TAPI') :
17:47:35 + if (nodeMapId == 'TAPI'):
17:47:35 continue
17:47:35 if nodeId in CHECK_LIST:
17:47:35 self.assertEqual(node['org-openroadm-common-network:node-type'], CHECK_LIST[nodeId]['node-type'])
17:47:35 diff --git a/tests/transportpce_tests/hybrid/test01_device_change_notifications.py b/tests/transportpce_tests/hybrid/test01_device_change_notifications.py
17:47:35 index ed8593da..96865e50 100644
17:47:35 --- a/tests/transportpce_tests/hybrid/test01_device_change_notifications.py
17:47:35 +++ b/tests/transportpce_tests/hybrid/test01_device_change_notifications.py
17:47:35 @@ -235,7 +235,7 @@ class TransportPCEFulltesting(unittest.TestCase):
17:47:35 self.assertEqual(node['org-openroadm-common-network:administrative-state'], 'inService')
17:47:35 nodeId = node['node-id']
17:47:35 nodeMapId = nodeId.split("-")[0]
17:47:35 - if (nodeMapId == 'TAPI') :
17:47:35 + if (nodeMapId == 'TAPI'):
17:47:35 continue
17:47:35 tp_list = node['ietf-network-topology:termination-point']
17:47:35 for tp in tp_list:
17:47:35 @@ -301,7 +301,7 @@ class TransportPCEFulltesting(unittest.TestCase):
17:47:35 self.assertEqual(node['org-openroadm-common-network:operational-state'], 'inService')
17:47:35 self.assertEqual(node['org-openroadm-common-network:administrative-state'], 'inService')
17:47:35 nodeMapId = node['node-id'].split("-")[0]
17:47:35 - if (nodeMapId == 'TAPI') :
17:47:35 + if (nodeMapId == 'TAPI'):
17:47:35 continue
17:47:35 tp_list = node['ietf-network-topology:termination-point']
17:47:35 for tp in tp_list:
17:47:35 @@ -356,7 +356,7 @@ class TransportPCEFulltesting(unittest.TestCase):
17:47:35 self.assertEqual(node['org-openroadm-common-network:administrative-state'], 'inService')
17:47:35 nodeId = node['node-id']
17:47:35 nodeMapId = nodeId.split("-")[0]
17:47:35 - if (nodeMapId == 'TAPI') :
17:47:35 + if (nodeMapId == 'TAPI'):
17:47:35 continue
17:47:35 tp_list = node['ietf-network-topology:termination-point']
17:47:35 for tp in tp_list:
17:47:35 @@ -444,7 +444,7 @@ class TransportPCEFulltesting(unittest.TestCase):
17:47:35 self.assertEqual(node['org-openroadm-common-network:administrative-state'], 'inService')
17:47:35 nodeId = node['node-id']
17:47:35 nodeMapId = nodeId.split("-")[0]
17:47:35 - if (nodeMapId == 'TAPI') :
17:47:35 + if (nodeMapId == 'TAPI'):
17:47:35 continue
17:47:35 tp_list = node['ietf-network-topology:termination-point']
17:47:35 for tp in tp_list:
17:47:35 @@ -530,7 +530,7 @@ class TransportPCEFulltesting(unittest.TestCase):
17:47:35 self.assertEqual(node['org-openroadm-common-network:administrative-state'], 'inService')
17:47:35 nodeId = node['node-id']
17:47:35 nodeMapId = nodeId.split("-")[0]
17:47:35 - if (nodeMapId == 'TAPI') :
17:47:35 + if (nodeMapId == 'TAPI'):
17:47:35 continue
17:47:35 tp_list = node['ietf-network-topology:termination-point']
17:47:35 for tp in tp_list:
17:47:35 @@ -616,7 +616,7 @@ class TransportPCEFulltesting(unittest.TestCase):
17:47:35 self.assertEqual(node['org-openroadm-common-network:administrative-state'], 'inService')
17:47:35 nodeId = node['node-id']
17:47:35 nodeMapId = nodeId.split("-")[0]
17:47:35 - if (nodeMapId == 'TAPI') :
17:47:35 + if (nodeMapId == 'TAPI'):
17:47:35 continue
17:47:35 tp_list = node['ietf-network-topology:termination-point']
17:47:35 for tp in tp_list:
17:47:35 diff --git a/tests/transportpce_tests/network/test01_topo_extension.py b/tests/transportpce_tests/network/test01_topo_extension.py
17:47:35 index 3b926d11..2141085d 100644
17:47:35 --- a/tests/transportpce_tests/network/test01_topo_extension.py
17:47:35 +++ b/tests/transportpce_tests/network/test01_topo_extension.py
17:47:35 @@ -198,7 +198,7 @@ class TransportPCEtesting(unittest.TestCase):
17:47:35 response = test_utils.transportpce_api_rpc_request(
17:47:35 'transportpce-networkutils', 'init-xpdr-rdm-links',
17:47:35 {'links-input': {'xpdr-node': 'SPDR-SA1', 'xpdr-num': '1', 'network-num': '1',
17:47:35 - 'rdm-node': 'ROADM-TA1', 'termination-point-num' : 'SRG1-PP1-TXRX',
17:47:35 + 'rdm-node': 'ROADM-TA1', 'termination-point-num': 'SRG1-PP1-TXRX',
17:47:35 'rdm-topology-uuid': 'a21e4756-4d70-3d40-95b6-f7f630b4a13b',
17:47:35 'rdm-nep-uuid': '3c3c3679-ccd7-3343-9f36-bdb7bea11a84',
17:47:35 'rdm-node-uuid': 'f929e2dc-3c08-32c3-985f-c126023efc43'}})
17:47:35 @@ -210,7 +210,7 @@ class TransportPCEtesting(unittest.TestCase):
17:47:35 response = test_utils.transportpce_api_rpc_request(
17:47:35 'transportpce-networkutils', 'init-rdm-xpdr-links',
17:47:35 {'links-input': {'xpdr-node': 'SPDR-SA1', 'xpdr-num': '1', 'network-num': '1',
17:47:35 - 'rdm-node': 'ROADM-TA1', 'termination-point-num' : 'SRG1-PP1-TXRX',
17:47:35 + 'rdm-node': 'ROADM-TA1', 'termination-point-num': 'SRG1-PP1-TXRX',
17:47:35 'rdm-topology-uuid': 'a21e4756-4d70-3d40-95b6-f7f630b4a13b',
17:47:35 'rdm-nep-uuid': '3c3c3679-ccd7-3343-9f36-bdb7bea11a84',
17:47:35 'rdm-node-uuid': 'f929e2dc-3c08-32c3-985f-c126023efc43'}})
17:47:35 @@ -222,7 +222,7 @@ class TransportPCEtesting(unittest.TestCase):
17:47:35 response = test_utils.transportpce_api_rpc_request(
17:47:35 'transportpce-networkutils', 'init-xpdr-rdm-links',
17:47:35 {'links-input': {'xpdr-node': 'SPDR-SC1', 'xpdr-num': '1', 'network-num': '1',
17:47:35 - 'rdm-node': 'ROADM-TC1', 'termination-point-num' : 'SRG1-PP1-TXRX',
17:47:35 + 'rdm-node': 'ROADM-TC1', 'termination-point-num': 'SRG1-PP1-TXRX',
17:47:35 'rdm-topology-uuid': 'a21e4756-4d70-3d40-95b6-f7f630b4a13b',
17:47:35 'rdm-nep-uuid': 'e5a9d17d-40cd-3733-b736-cc787a876195',
17:47:35 'rdm-node-uuid': '7a44ea23-90d1-357d-8754-6e88d404b670'}})
17:47:35 @@ -234,7 +234,7 @@ class TransportPCEtesting(unittest.TestCase):
17:47:35 response = test_utils.transportpce_api_rpc_request(
17:47:35 'transportpce-networkutils', 'init-rdm-xpdr-links',
17:47:35 {'links-input': {'xpdr-node': 'SPDR-SC1', 'xpdr-num': '1', 'network-num': '1',
17:47:35 - 'rdm-node': 'ROADM-TC1', 'termination-point-num' : 'SRG1-PP1-TXRX',
17:47:35 + 'rdm-node': 'ROADM-TC1', 'termination-point-num': 'SRG1-PP1-TXRX',
17:47:35 'rdm-topology-uuid': 'a21e4756-4d70-3d40-95b6-f7f630b4a13b',
17:47:35 'rdm-nep-uuid': 'e5a9d17d-40cd-3733-b736-cc787a876195',
17:47:35 'rdm-node-uuid': '7a44ea23-90d1-357d-8754-6e88d404b670'}})
17:47:35 @@ -262,9 +262,9 @@ class TransportPCEtesting(unittest.TestCase):
17:47:35 'transportpce-networkutils', 'init-inter-domain-links',
17:47:35 {'a-end': {'rdm-node': 'ROADM-A1', 'deg-num': '1', 'termination-point': 'DEG1-TTP-TXRX'},
17:47:35 'z-end': {'rdm-node': 'ROADM-TA1', 'deg-num': '2', 'termination-point': 'DEG2-TTP-TXRX',
17:47:35 - 'rdm-topology-uuid': 'a21e4756-4d70-3d40-95b6-f7f630b4a13b',
17:47:35 - 'rdm-nep-uuid': 'd42ed13c-d81f-3136-a7d8-b283681031d4',
17:47:35 - 'rdm-node-uuid': 'f929e2dc-3c08-32c3-985f-c126023efc43'}})
17:47:35 + 'rdm-topology-uuid': 'a21e4756-4d70-3d40-95b6-f7f630b4a13b',
17:47:35 + 'rdm-nep-uuid': 'd42ed13c-d81f-3136-a7d8-b283681031d4',
17:47:35 + 'rdm-node-uuid': 'f929e2dc-3c08-32c3-985f-c126023efc43'}})
17:47:35 self.assertEqual(response['status_code'], requests.codes.ok)
17:47:35 print(response['output']['result'])
17:47:35 time.sleep(2)
17:47:35 @@ -274,9 +274,9 @@ class TransportPCEtesting(unittest.TestCase):
17:47:35 'transportpce-networkutils', 'init-inter-domain-links',
17:47:35 {'a-end': {'rdm-node': 'ROADM-C1', 'deg-num': '2', 'termination-point': 'DEG2-TTP-TXRX'},
17:47:35 'z-end': {'rdm-node': 'ROADM-TC1', 'deg-num': '1', 'termination-point': 'DEG1-TTP-TXRX',
17:47:35 - 'rdm-topology-uuid': 'a21e4756-4d70-3d40-95b6-f7f630b4a13b',
17:47:35 - 'rdm-nep-uuid': 'fb3a00c1-342f-3cdc-b83d-2c257de298c1',
17:47:35 - 'rdm-node-uuid': '7a44ea23-90d1-357d-8754-6e88d404b670'}})
17:47:35 + 'rdm-topology-uuid': 'a21e4756-4d70-3d40-95b6-f7f630b4a13b',
17:47:35 + 'rdm-nep-uuid': 'fb3a00c1-342f-3cdc-b83d-2c257de298c1',
17:47:35 + 'rdm-node-uuid': '7a44ea23-90d1-357d-8754-6e88d404b670'}})
17:47:35 self.assertEqual(response['status_code'], requests.codes.ok)
17:47:35 print(response['output']['result'])
17:47:35 response = test_utils.get_ietf_network_request('openroadm-topology', 'config')
17:47:35 @@ -304,15 +304,15 @@ class TransportPCEtesting(unittest.TestCase):
17:47:35 linkType = link['org-openroadm-common-network:link-type']
17:47:35 if 'transportpce-or-network-augmentation:link-class' in link.keys():
17:47:35 linkClass = link['transportpce-or-network-augmentation:link-class']
17:47:35 - if (linkType == 'ROADM-TO-ROADM' and linkClass == 'inter-domain') :
17:47:35 + if (linkType == 'ROADM-TO-ROADM' and linkClass == 'inter-domain'):
17:47:35 find = linkId in check_list
17:47:35 self.assertEqual(find, True)
17:47:35 interDomainLinkNber += 1
17:47:35 - if (linkType == 'XPONDER-OUTPUT' and linkClass == 'alien-to-tapi') :
17:47:35 + if (linkType == 'XPONDER-OUTPUT' and linkClass == 'alien-to-tapi'):
17:47:35 find = linkId in check_list
17:47:35 self.assertEqual(find, True)
17:47:35 alienToTapiLinkNber += 1
17:47:35 - if (linkType == 'XPONDER-INPUT' and linkClass == 'alien-to-tapi') :
17:47:35 + if (linkType == 'XPONDER-INPUT' and linkClass == 'alien-to-tapi'):
17:47:35 find = linkId in check_list
17:47:35 self.assertEqual(find, True)
17:47:35 alienToTapiLinkNber += 1
17:47:35 pre-commit: exit 1 (35.02 seconds) /w/workspace/transportpce-tox-verify-transportpce-master/tests> pre-commit run --all-files --show-diff-on-failure pid=29148
17:47:52 ************* Module 1.2.1.test03_topology
17:47:52 transportpce_tests/1.2.1/test03_topology.py:168:0: C0325: Unnecessary parens after 'if' keyword (superfluous-parens)
17:47:52 transportpce_tests/1.2.1/test03_topology.py:202:0: C0325: Unnecessary parens after 'if' keyword (superfluous-parens)
17:47:52 transportpce_tests/1.2.1/test03_topology.py:224:0: C0325: Unnecessary parens after 'if' keyword (superfluous-parens)
17:47:52 transportpce_tests/1.2.1/test03_topology.py:364:0: C0325: Unnecessary parens after 'if' keyword (superfluous-parens)
17:47:52 transportpce_tests/1.2.1/test03_topology.py:441:0: C0325: Unnecessary parens after 'if' keyword (superfluous-parens)
17:47:52 transportpce_tests/1.2.1/test03_topology.py:553:0: C0325: Unnecessary parens after 'if' keyword (superfluous-parens)
17:47:52 transportpce_tests/1.2.1/test03_topology.py:646:0: C0325: Unnecessary parens after 'if' keyword (superfluous-parens)
17:47:52 transportpce_tests/1.2.1/test03_topology.py:719:0: C0325: Unnecessary parens after 'if' keyword (superfluous-parens)
17:47:52 transportpce_tests/1.2.1/test03_topology.py:430:4: R0912: Too many branches (13/12) (too-many-branches)
17:47:52 ************* Module 1.2.1.test02_topo_portmapping
17:47:52 transportpce_tests/1.2.1/test02_topo_portmapping.py:59:0: C0325: Unnecessary parens after 'if' keyword (superfluous-parens)
17:47:52 ************* Module hybrid.test01_device_change_notifications
17:47:52 transportpce_tests/hybrid/test01_device_change_notifications.py:238:0: C0325: Unnecessary parens after 'if' keyword (superfluous-parens)
17:47:52 transportpce_tests/hybrid/test01_device_change_notifications.py:304:0: C0325: Unnecessary parens after 'if' keyword (superfluous-parens)
17:47:52 transportpce_tests/hybrid/test01_device_change_notifications.py:359:0: C0325: Unnecessary parens after 'if' keyword (superfluous-parens)
17:47:52 transportpce_tests/hybrid/test01_device_change_notifications.py:447:0: C0325: Unnecessary parens after 'if' keyword (superfluous-parens)
17:47:52 transportpce_tests/hybrid/test01_device_change_notifications.py:533:0: C0325: Unnecessary parens after 'if' keyword (superfluous-parens)
17:47:53 transportpce_tests/hybrid/test01_device_change_notifications.py:619:0: C0325: Unnecessary parens after 'if' keyword (superfluous-parens)
17:47:53 ************* Module 2.2.1.test03_topology
17:47:53 transportpce_tests/2.2.1/test03_topology.py:171:0: C0325: Unnecessary parens after 'if' keyword (superfluous-parens)
17:47:53 transportpce_tests/2.2.1/test03_topology.py:205:0: C0325: Unnecessary parens after 'if' keyword (superfluous-parens)
17:47:53 transportpce_tests/2.2.1/test03_topology.py:227:0: C0325: Unnecessary parens after 'if' keyword (superfluous-parens)
17:47:53 transportpce_tests/2.2.1/test03_topology.py:370:0: C0325: Unnecessary parens after 'if' keyword (superfluous-parens)
17:47:53 transportpce_tests/2.2.1/test03_topology.py:449:0: C0325: Unnecessary parens after 'if' keyword (superfluous-parens)
17:47:53 transportpce_tests/2.2.1/test03_topology.py:566:0: C0325: Unnecessary parens after 'if' keyword (superfluous-parens)
17:47:53 transportpce_tests/2.2.1/test03_topology.py:660:0: C0325: Unnecessary parens after 'if' keyword (superfluous-parens)
17:47:53 transportpce_tests/2.2.1/test03_topology.py:737:0: C0325: Unnecessary parens after 'if' keyword (superfluous-parens)
17:47:53 ************* Module 2.2.1.test04_otn_topology
17:47:53 transportpce_tests/2.2.1/test04_otn_topology.py:89:0: C0325: Unnecessary parens after 'if' keyword (superfluous-parens)
17:47:53 transportpce_tests/2.2.1/test04_otn_topology.py:153:0: C0325: Unnecessary parens after 'if' keyword (superfluous-parens)
17:47:53 transportpce_tests/2.2.1/test04_otn_topology.py:120:4: R0912: Too many branches (13/12) (too-many-branches)
17:47:53 ************* Module 2.2.1.test02_topo_portmapping
17:47:53 transportpce_tests/2.2.1/test02_topo_portmapping.py:62:0: C0325: Unnecessary parens after 'if' keyword (superfluous-parens)
17:47:53 
17:47:53 -----------------------------------
17:47:53 Your code has been rated at 9.97/10
17:47:53 
17:47:54 pre-commit: FAIL ✖ in 38.4 seconds
17:47:54 pylint: exit 1 (20.90 seconds) /w/workspace/transportpce-tox-verify-transportpce-master/tests> find transportpce_tests/ -name '*.py' -exec pylint --fail-under=10 --max-line-length=120 --disable=missing-docstring,import-error --disable=fixme --disable=duplicate-code '--module-rgx=([a-z0-9_]+$)|([0-9.]{1,30}$)' '--method-rgx=(([a-z_][a-zA-Z0-9_]{2,})|(_[a-z0-9_]*)|(__[a-zA-Z][a-zA-Z0-9_]+__))$' '--variable-rgx=[a-zA-Z_][a-zA-Z0-9_]{1,30}$' '{}' + pid=30062
17:48:39 pylint: FAIL ✖ in 26.46 seconds
17:48:39 buildcontroller: OK ✔ in 1 minute 44.28 seconds
17:48:39 build_karaf_tests121: install_deps> python -I -m pip install 'setuptools>=7.0' -r /w/workspace/transportpce-tox-verify-transportpce-master/tests/requirements.txt -r /w/workspace/transportpce-tox-verify-transportpce-master/tests/test-requirements.txt
17:48:39 testsPCE: install_deps> python -I -m pip install gnpy4tpce==2.4.7 'setuptools>=7.0' -r /w/workspace/transportpce-tox-verify-transportpce-master/tests/requirements.txt -r /w/workspace/transportpce-tox-verify-transportpce-master/tests/test-requirements.txt
17:48:39 build_karaf_tests221: install_deps> python -I -m pip install 'setuptools>=7.0' -r /w/workspace/transportpce-tox-verify-transportpce-master/tests/requirements.txt -r /w/workspace/transportpce-tox-verify-transportpce-master/tests/test-requirements.txt
17:48:39 sims: install_deps> python -I -m pip install 'setuptools>=7.0' -r /w/workspace/transportpce-tox-verify-transportpce-master/tests/requirements.txt -r /w/workspace/transportpce-tox-verify-transportpce-master/tests/test-requirements.txt
17:48:45 sims: freeze> python -m pip freeze --all
17:48:45 build_karaf_tests221: freeze> python -m pip freeze --all
17:48:45 build_karaf_tests121: freeze> python -m pip freeze --all
17:48:45 sims: bcrypt==4.2.0,certifi==2024.8.30,cffi==1.17.1,charset-normalizer==3.4.0,cryptography==43.0.1,dict2xml==1.7.6,idna==3.10,iniconfig==2.0.0,lxml==5.3.0,netconf-client==3.1.1,packaging==24.1,paramiko==3.5.0,pip==24.2,pluggy==1.5.0,psutil==6.0.0,pycparser==2.22,PyNaCl==1.5.0,pytest==8.3.3,requests==2.32.3,setuptools==75.1.0,urllib3==2.2.3,wheel==0.44.0
17:48:45 sims: commands[0] /w/workspace/transportpce-tox-verify-transportpce-master/tests> ./install_lightynode.sh
17:48:45 Using lighynode version 20.1.0.2
17:48:45 Installing lightynode device to ./lightynode/lightynode-openroadm-device directory
17:48:45 build_karaf_tests121: bcrypt==4.2.0,certifi==2024.8.30,cffi==1.17.1,charset-normalizer==3.4.0,cryptography==43.0.1,dict2xml==1.7.6,idna==3.10,iniconfig==2.0.0,lxml==5.3.0,netconf-client==3.1.1,packaging==24.1,paramiko==3.5.0,pip==24.2,pluggy==1.5.0,psutil==6.0.0,pycparser==2.22,PyNaCl==1.5.0,pytest==8.3.3,requests==2.32.3,setuptools==75.1.0,urllib3==2.2.3,wheel==0.44.0
17:48:45 build_karaf_tests121: commands[0] /w/workspace/transportpce-tox-verify-transportpce-master/tests> ./build_karaf_for_tests.sh
17:48:45 build_karaf_tests221: bcrypt==4.2.0,certifi==2024.8.30,cffi==1.17.1,charset-normalizer==3.4.0,cryptography==43.0.1,dict2xml==1.7.6,idna==3.10,iniconfig==2.0.0,lxml==5.3.0,netconf-client==3.1.1,packaging==24.1,paramiko==3.5.0,pip==24.2,pluggy==1.5.0,psutil==6.0.0,pycparser==2.22,PyNaCl==1.5.0,pytest==8.3.3,requests==2.32.3,setuptools==75.1.0,urllib3==2.2.3,wheel==0.44.0
17:48:45 build_karaf_tests221: commands[0] /w/workspace/transportpce-tox-verify-transportpce-master/tests> ./build_karaf_for_tests.sh
17:48:45 NOTE: Picked up JDK_JAVA_OPTIONS:
17:48:45 --add-opens=java.base/java.io=ALL-UNNAMED
17:48:45 --add-opens=java.base/java.lang=ALL-UNNAMED
17:48:45 --add-opens=java.base/java.lang.invoke=ALL-UNNAMED
17:48:45 --add-opens=java.base/java.lang.reflect=ALL-UNNAMED
17:48:45 --add-opens=java.base/java.net=ALL-UNNAMED
17:48:45 --add-opens=java.base/java.nio=ALL-UNNAMED
17:48:45 --add-opens=java.base/java.nio.charset=ALL-UNNAMED
17:48:45 --add-opens=java.base/java.nio.file=ALL-UNNAMED
17:48:45 --add-opens=java.base/java.util=ALL-UNNAMED
17:48:45 --add-opens=java.base/java.util.jar=ALL-UNNAMED
17:48:45 --add-opens=java.base/java.util.stream=ALL-UNNAMED
17:48:45 --add-opens=java.base/java.util.zip=ALL-UNNAMED
17:48:45 --add-opens java.base/sun.nio.ch=ALL-UNNAMED
17:48:45 --add-opens java.base/sun.nio.fs=ALL-UNNAMED
17:48:45 -Xlog:disable
17:48:45 NOTE: Picked up JDK_JAVA_OPTIONS:
17:48:45 --add-opens=java.base/java.io=ALL-UNNAMED
17:48:45 --add-opens=java.base/java.lang=ALL-UNNAMED
17:48:45 --add-opens=java.base/java.lang.invoke=ALL-UNNAMED
17:48:45 --add-opens=java.base/java.lang.reflect=ALL-UNNAMED
17:48:45 --add-opens=java.base/java.net=ALL-UNNAMED
17:48:45 --add-opens=java.base/java.nio=ALL-UNNAMED
17:48:45 --add-opens=java.base/java.nio.charset=ALL-UNNAMED
17:48:45 --add-opens=java.base/java.nio.file=ALL-UNNAMED
17:48:45 --add-opens=java.base/java.util=ALL-UNNAMED
17:48:45 --add-opens=java.base/java.util.jar=ALL-UNNAMED
17:48:45 --add-opens=java.base/java.util.stream=ALL-UNNAMED
17:48:45 --add-opens=java.base/java.util.zip=ALL-UNNAMED
17:48:45 --add-opens java.base/sun.nio.ch=ALL-UNNAMED
17:48:45 --add-opens java.base/sun.nio.fs=ALL-UNNAMED
17:48:45 -Xlog:disable
17:48:48 sims: OK ✔ in 9.22 seconds
17:48:48 build_karaf_tests71: install_deps> python -I -m pip install 'setuptools>=7.0' -r /w/workspace/transportpce-tox-verify-transportpce-master/tests/requirements.txt -r /w/workspace/transportpce-tox-verify-transportpce-master/tests/test-requirements.txt
17:49:00 build_karaf_tests71: freeze> python -m pip freeze --all
17:49:01 build_karaf_tests71: bcrypt==4.2.0,certifi==2024.8.30,cffi==1.17.1,charset-normalizer==3.4.0,cryptography==43.0.1,dict2xml==1.7.6,idna==3.10,iniconfig==2.0.0,lxml==5.3.0,netconf-client==3.1.1,packaging==24.1,paramiko==3.5.0,pip==24.2,pluggy==1.5.0,psutil==6.0.0,pycparser==2.22,PyNaCl==1.5.0,pytest==8.3.3,requests==2.32.3,setuptools==75.1.0,urllib3==2.2.3,wheel==0.44.0
17:49:01 build_karaf_tests71: commands[0] /w/workspace/transportpce-tox-verify-transportpce-master/tests> ./build_karaf_for_tests.sh
17:49:01 NOTE: Picked up JDK_JAVA_OPTIONS:
17:49:01 --add-opens=java.base/java.io=ALL-UNNAMED
17:49:01 --add-opens=java.base/java.lang=ALL-UNNAMED
17:49:01 --add-opens=java.base/java.lang.invoke=ALL-UNNAMED
17:49:01 --add-opens=java.base/java.lang.reflect=ALL-UNNAMED
17:49:01 --add-opens=java.base/java.net=ALL-UNNAMED
17:49:01 --add-opens=java.base/java.nio=ALL-UNNAMED
17:49:01 --add-opens=java.base/java.nio.charset=ALL-UNNAMED
17:49:01 --add-opens=java.base/java.nio.file=ALL-UNNAMED
17:49:01 --add-opens=java.base/java.util=ALL-UNNAMED
17:49:01 --add-opens=java.base/java.util.jar=ALL-UNNAMED
17:49:01 --add-opens=java.base/java.util.stream=ALL-UNNAMED
17:49:01 --add-opens=java.base/java.util.zip=ALL-UNNAMED
17:49:01 --add-opens java.base/sun.nio.ch=ALL-UNNAMED
17:49:01 --add-opens java.base/sun.nio.fs=ALL-UNNAMED
17:49:01 -Xlog:disable
17:49:32 build_karaf_tests221: OK ✔ in 53.21 seconds
17:49:32 build_karaf_tests_hybrid: install_deps> python -I -m pip install 'setuptools>=7.0' -r /w/workspace/transportpce-tox-verify-transportpce-master/tests/requirements.txt -r /w/workspace/transportpce-tox-verify-transportpce-master/tests/test-requirements.txt
17:49:33 build_karaf_tests121: OK ✔ in 54.72 seconds
17:49:33 tests_tapi: install_deps> python -I -m pip install 'setuptools>=7.0' -r /w/workspace/transportpce-tox-verify-transportpce-master/tests/requirements.txt -r /w/workspace/transportpce-tox-verify-transportpce-master/tests/test-requirements.txt
17:49:39 tests_tapi: freeze> python -m pip freeze --all
17:49:39 build_karaf_tests_hybrid: freeze> python -m pip freeze --all
17:49:39 tests_tapi: bcrypt==4.2.0,certifi==2024.8.30,cffi==1.17.1,charset-normalizer==3.4.0,cryptography==43.0.1,dict2xml==1.7.6,idna==3.10,iniconfig==2.0.0,lxml==5.3.0,netconf-client==3.1.1,packaging==24.1,paramiko==3.5.0,pip==24.2,pluggy==1.5.0,psutil==6.0.0,pycparser==2.22,PyNaCl==1.5.0,pytest==8.3.3,requests==2.32.3,setuptools==75.1.0,urllib3==2.2.3,wheel==0.44.0
17:49:39 tests_tapi: commands[0] /w/workspace/transportpce-tox-verify-transportpce-master/tests> ./launch_tests.sh tapi
17:49:39 using environment variables from ./karaf221.env
17:49:39 pytest -q transportpce_tests/tapi/test01_abstracted_topology.py
17:49:39 build_karaf_tests_hybrid: bcrypt==4.2.0,certifi==2024.8.30,cffi==1.17.1,charset-normalizer==3.4.0,cryptography==43.0.1,dict2xml==1.7.6,idna==3.10,iniconfig==2.0.0,lxml==5.3.0,netconf-client==3.1.1,packaging==24.1,paramiko==3.5.0,pip==24.2,pluggy==1.5.0,psutil==6.0.0,pycparser==2.22,PyNaCl==1.5.0,pytest==8.3.3,requests==2.32.3,setuptools==75.1.0,urllib3==2.2.3,wheel==0.44.0
17:49:39 build_karaf_tests_hybrid: commands[0] /w/workspace/transportpce-tox-verify-transportpce-master/tests> ./build_karaf_for_tests.sh
17:49:39 NOTE: Picked up JDK_JAVA_OPTIONS:
17:49:39 --add-opens=java.base/java.io=ALL-UNNAMED
17:49:39 --add-opens=java.base/java.lang=ALL-UNNAMED
17:49:39 --add-opens=java.base/java.lang.invoke=ALL-UNNAMED
17:49:39 --add-opens=java.base/java.lang.reflect=ALL-UNNAMED
17:49:39 --add-opens=java.base/java.net=ALL-UNNAMED
17:49:39 --add-opens=java.base/java.nio=ALL-UNNAMED
17:49:39 --add-opens=java.base/java.nio.charset=ALL-UNNAMED
17:49:39 --add-opens=java.base/java.nio.file=ALL-UNNAMED
17:49:39 --add-opens=java.base/java.util=ALL-UNNAMED
17:49:39 --add-opens=java.base/java.util.jar=ALL-UNNAMED
17:49:39 --add-opens=java.base/java.util.stream=ALL-UNNAMED
17:49:39 --add-opens=java.base/java.util.zip=ALL-UNNAMED
17:49:39 --add-opens java.base/sun.nio.ch=ALL-UNNAMED
17:49:39 --add-opens java.base/sun.nio.fs=ALL-UNNAMED
17:49:39 -Xlog:disable
17:49:44 build_karaf_tests71: OK ✔ in 53.85 seconds
17:49:44 testsPCE: freeze> python -m pip freeze --all
17:49:45 testsPCE: bcrypt==4.2.0,certifi==2024.8.30,cffi==1.17.1,charset-normalizer==3.4.0,click==8.1.7,contourpy==1.3.0,cryptography==3.3.2,cycler==0.12.1,dict2xml==1.7.6,Flask==2.1.3,Flask-Injector==0.14.0,fonttools==4.54.1,gnpy4tpce==2.4.7,idna==3.10,iniconfig==2.0.0,injector==0.22.0,itsdangerous==2.2.0,Jinja2==3.1.4,kiwisolver==1.4.7,lxml==5.3.0,MarkupSafe==3.0.1,matplotlib==3.9.2,netconf-client==3.1.1,networkx==2.8.8,numpy==1.26.4,packaging==24.1,pandas==1.5.3,paramiko==3.5.0,pbr==5.11.1,pillow==11.0.0,pip==24.2,pluggy==1.5.0,psutil==6.0.0,pycparser==2.22,PyNaCl==1.5.0,pyparsing==3.2.0,pytest==8.3.3,python-dateutil==2.9.0.post0,pytz==2024.2,requests==2.32.3,scipy==1.14.1,setuptools==50.3.2,six==1.16.0,urllib3==2.2.3,Werkzeug==2.0.3,wheel==0.44.0,xlrd==1.2.0
17:49:45 testsPCE: commands[0] /w/workspace/transportpce-tox-verify-transportpce-master/tests> ./launch_tests.sh pce
17:49:45 pytest -q transportpce_tests/pce/test01_pce.py
17:50:50 ........................................... [100%]
17:51:54 20 passed in 128.41s (0:02:08)
17:51:54 pytest -q transportpce_tests/pce/test02_pce_400G.py
17:51:55 .................. [100%]
17:52:35 9 passed in 41.29s
17:52:35 pytest -q transportpce_tests/pce/test03_gnpy.py
17:52:41 ..............
[100%] 17:53:13 8 passed in 37.43s 17:53:13 pytest -q transportpce_tests/pce/test04_pce_bug_fix.py 17:53:26 ........... [100%] 17:53:49 3 passed in 35.64s 17:53:49 build_karaf_tests_hybrid: OK ✔ in 53.75 seconds 17:53:49 testsPCE: OK ✔ in 5 minutes 10.55 seconds 17:53:49 tests121: install_deps> python -I -m pip install 'setuptools>=7.0' -r /w/workspace/transportpce-tox-verify-transportpce-master/tests/requirements.txt -r /w/workspace/transportpce-tox-verify-transportpce-master/tests/test-requirements.txt 17:53:56 tests121: freeze> python -m pip freeze --all 17:53:56 tests121: bcrypt==4.2.0,certifi==2024.8.30,cffi==1.17.1,charset-normalizer==3.4.0,cryptography==43.0.1,dict2xml==1.7.6,idna==3.10,iniconfig==2.0.0,lxml==5.3.0,netconf-client==3.1.1,packaging==24.1,paramiko==3.5.0,pip==24.2,pluggy==1.5.0,psutil==6.0.0,pycparser==2.22,PyNaCl==1.5.0,pytest==8.3.3,requests==2.32.3,setuptools==75.1.0,urllib3==2.2.3,wheel==0.44.0 17:53:56 tests121: commands[0] /w/workspace/transportpce-tox-verify-transportpce-master/tests> ./launch_tests.sh 1.2.1 17:53:56 using environment variables from ./karaf121.env 17:53:56 pytest -q transportpce_tests/1.2.1/test01_portmapping.py 17:56:26 .... [100%] 17:56:30 50 passed in 411.07s (0:06:51) 17:56:30 pytest -q transportpce_tests/tapi/test02_full_topology.py 17:57:27 ............................F..F. [100%] 17:58:17 21 passed in 261.08s (0:04:21) 17:58:17 pytest -q transportpce_tests/1.2.1/test02_topo_portmapping.py 17:58:18 .................. 
[100%] 18:01:01 =================================== FAILURES =================================== 18:01:01 ________________ TransportPCEtesting.test_11_check_otn_topology ________________ 18:01:01 18:01:01 self = 18:01:01 18:01:01 def test_11_check_otn_topology(self): 18:01:01 response = test_utils.get_ietf_network_request('otn-topology', 'config') 18:01:01 self.assertEqual(response['status_code'], requests.codes.ok) 18:01:01 > self.assertEqual(len(response['network'][0]['node']), 6, 'There should be 6 otn nodes') 18:01:01 E AssertionError: 7 != 6 : There should be 6 otn nodes 18:01:01 18:01:01 transportpce_tests/tapi/test02_full_topology.py:266: AssertionError 18:01:01 _____________ TransportPCEtesting.test_12_check_openroadm_topology _____________ 18:01:01 18:01:01 self = 18:01:01 18:01:01 def test_12_check_openroadm_topology(self): 18:01:01 response = test_utils.get_ietf_network_request('openroadm-topology', 'config') 18:01:01 self.assertEqual(response['status_code'], requests.codes.ok) 18:01:01 > self.assertEqual(len(response['network'][0]['node']), 13, 'There should be 13 openroadm nodes') 18:01:01 E AssertionError: 14 != 13 : There should be 13 openroadm nodes 18:01:01 18:01:01 transportpce_tests/tapi/test02_full_topology.py:272: AssertionError 18:01:01 =========================== short test summary info ============================ 18:01:01 FAILED transportpce_tests/tapi/test02_full_topology.py::TransportPCEtesting::test_11_check_otn_topology 18:01:01 FAILED transportpce_tests/tapi/test02_full_topology.py::TransportPCEtesting::test_12_check_openroadm_topology 18:01:01 2 failed, 28 passed in 270.79s (0:04:30) 18:01:01 tests_tapi: exit 1 (682.29 seconds) /w/workspace/transportpce-tox-verify-transportpce-master/tests> ./launch_tests.sh tapi pid=30583 18:01:02 tests_tapi: FAIL ✖ in 11 minutes 28.45 seconds 18:01:02 tests71: install_deps> python -I -m pip install 'setuptools>=7.0' -r /w/workspace/transportpce-tox-verify-transportpce-master/tests/requirements.txt 
-r /w/workspace/transportpce-tox-verify-transportpce-master/tests/test-requirements.txt 18:01:07 tests71: freeze> python -m pip freeze --all 18:01:07 tests71: bcrypt==4.2.0,certifi==2024.8.30,cffi==1.17.1,charset-normalizer==3.4.0,cryptography==43.0.1,dict2xml==1.7.6,idna==3.10,iniconfig==2.0.0,lxml==5.3.0,netconf-client==3.1.1,packaging==24.1,paramiko==3.5.0,pip==24.2,pluggy==1.5.0,psutil==6.0.0,pycparser==2.22,PyNaCl==1.5.0,pytest==8.3.3,requests==2.32.3,setuptools==75.1.0,urllib3==2.2.3,wheel==0.44.0 18:01:07 tests71: commands[0] /w/workspace/transportpce-tox-verify-transportpce-master/tests> ./launch_tests.sh 7.1 18:01:07 using environment variables from ./karaf71.env 18:01:07 pytest -q transportpce_tests/7.1/test01_portmapping.py 18:01:36 ............. [100%] 18:01:49 12 passed in 42.18s 18:01:49 pytest -q transportpce_tests/7.1/test02_otn_renderer.py 18:01:50 ..... [100%] 18:02:02 6 passed in 224.42s (0:03:44) 18:02:02 pytest -q transportpce_tests/1.2.1/test03_topology.py 18:02:21 .............................................................. [100%] 18:04:31 62 passed in 161.27s (0:02:41) 18:04:31 pytest -q transportpce_tests/7.1/test03_renderer_or_modes.py 18:05:02 .....................F.F.F.F......................... [100%] 18:06:45 48 passed in 134.23s (0:02:14) 18:06:45 pytest -q transportpce_tests/7.1/test04_renderer_regen_mode.py 18:07:10 ...................... 
[100%] 18:07:57 22 passed in 71.92s (0:01:11) 18:07:58 tests71: OK ✔ in 6 minutes 56.2 seconds 18:07:58 tests_network: install_deps> python -I -m pip install 'setuptools>=7.0' -r /w/workspace/transportpce-tox-verify-transportpce-master/tests/requirements.txt -r /w/workspace/transportpce-tox-verify-transportpce-master/tests/test-requirements.txt 18:07:58 tests221: install_deps> python -I -m pip install 'setuptools>=7.0' -r /w/workspace/transportpce-tox-verify-transportpce-master/tests/requirements.txt -r /w/workspace/transportpce-tox-verify-transportpce-master/tests/test-requirements.txt 18:08:04 tests_network: freeze> python -m pip freeze --all 18:08:04 tests221: freeze> python -m pip freeze --all 18:08:04 tests_network: bcrypt==4.2.0,certifi==2024.8.30,cffi==1.17.1,charset-normalizer==3.4.0,cryptography==43.0.1,dict2xml==1.7.6,idna==3.10,iniconfig==2.0.0,lxml==5.3.0,netconf-client==3.1.1,packaging==24.1,paramiko==3.5.0,pip==24.2,pluggy==1.5.0,psutil==6.0.0,pycparser==2.22,PyNaCl==1.5.0,pytest==8.3.3,requests==2.32.3,setuptools==75.1.0,urllib3==2.2.3,wheel==0.44.0 18:08:04 tests_network: commands[0] /w/workspace/transportpce-tox-verify-transportpce-master/tests> ./launch_tests.sh network 18:08:04 using environment variables from ./karaf221.env 18:08:04 pytest -q transportpce_tests/network/test01_topo_extension.py 18:08:04 tests221: bcrypt==4.2.0,certifi==2024.8.30,cffi==1.17.1,charset-normalizer==3.4.0,cryptography==43.0.1,dict2xml==1.7.6,idna==3.10,iniconfig==2.0.0,lxml==5.3.0,netconf-client==3.1.1,packaging==24.1,paramiko==3.5.0,pip==24.2,pluggy==1.5.0,psutil==6.0.0,pycparser==2.22,PyNaCl==1.5.0,pytest==8.3.3,requests==2.32.3,setuptools==75.1.0,urllib3==2.2.3,wheel==0.44.0 18:08:04 tests221: commands[0] /w/workspace/transportpce-tox-verify-transportpce-master/tests> ./launch_tests.sh 2.2.1 18:08:04 using environment variables from ./karaf221.env 18:08:04 pytest -q transportpce_tests/2.2.1/test01_portmapping.py 18:08:52 .........F.F..F..F..F..................... 
[100%] 18:09:31 35 passed in 87.08s (0:01:27) 18:09:32 pytest -q transportpce_tests/2.2.1/test02_topo_portmapping.py 18:10:02 ...... [100%] 18:10:15 6 passed in 43.56s 18:10:15 pytest -q transportpce_tests/2.2.1/test03_topology.py 18:10:26 EEEEEEEEEEEEEEEEEE [100%] 18:10:38 ==================================== ERRORS ==================================== 18:10:38 _________ ERROR at setup of TransportPCEtesting.test_01_connect_spdrA __________ 18:10:38 18:10:38 cls = 18:10:38 18:10:38 @classmethod 18:10:38 def setUpClass(cls): 18:10:38 # pylint: disable=unsubscriptable-object 18:10:38 cls.init_failed = False 18:10:38 os.environ['JAVA_MIN_MEM'] = '1024M' 18:10:38 os.environ['JAVA_MAX_MEM'] = '4096M' 18:10:38 cls.processes = test_utils.start_tpce() 18:10:38 # TAPI feature is not installed by default in Karaf 18:10:38 if 'NO_ODL_STARTUP' not in os.environ or 'USE_LIGHTY' not in os.environ or os.environ['USE_LIGHTY'] != 'True': 18:10:38 print('installing tapi feature...') 18:10:38 result = test_utils.install_karaf_feature('odl-transportpce-tapi') 18:10:38 if result.returncode != 0: 18:10:38 cls.init_failed = True 18:10:38 print('Restarting OpenDaylight...') 18:10:38 test_utils.shutdown_process(cls.processes[0]) 18:10:38 cls.processes[0] = test_utils.start_karaf() 18:10:38 test_utils.process_list[0] = cls.processes[0] 18:10:38 cls.init_failed = not test_utils.wait_until_log_contains( 18:10:38 test_utils.KARAF_LOG, test_utils.KARAF_OK_START_MSG, time_to_wait=60) 18:10:38 if cls.init_failed: 18:10:38 print('tapi installation feature failed...') 18:10:38 test_utils.shutdown_process(cls.processes[0]) 18:10:38 sys.exit(2) 18:10:38 > cls.processes = test_utils.start_sims([('spdra', cls.NODE_VERSION), 18:10:38 ('roadma', cls.NODE_VERSION), 18:10:38 ('roadmc', cls.NODE_VERSION), 18:10:38 ('spdrc', cls.NODE_VERSION)]) 18:10:38 18:10:38 transportpce_tests/network/test01_topo_extension.py:158: 18:10:38 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
18:10:38 18:10:38 sims_list = [('spdra', '2.2.1'), ('roadma', '2.2.1'), ('roadmc', '2.2.1'), ('spdrc', '2.2.1')] 18:10:38 18:10:38 def start_sims(sims_list): 18:10:38 if SIMS_TO_USE == 'None': 18:10:38 return None 18:10:38 if SIMS_TO_USE == 'honeynode': 18:10:38 start_msg = HONEYNODE_OK_START_MSG 18:10:38 start_method = start_honeynode 18:10:38 else: 18:10:38 start_msg = LIGHTYNODE_OK_START_MSG 18:10:38 start_method = start_lightynode 18:10:38 for sim in sims_list: 18:10:38 print('starting simulator ' + sim[0] + ' in OpenROADM device version ' + sim[1] + '...') 18:10:38 log_file = os.path.join(SIM_LOG_DIRECTORY, SIMS[sim]['logfile']) 18:10:38 process = start_method(log_file, sim) 18:10:38 if wait_until_log_contains(log_file, start_msg, 100): 18:10:38 print('simulator for ' + sim[0] + ' started') 18:10:38 else: 18:10:38 print('simulator for ' + sim[0] + ' failed to start') 18:10:38 shutdown_process(process) 18:10:38 for pid in process_list: 18:10:38 shutdown_process(pid) 18:10:38 > sys.exit(3) 18:10:38 E SystemExit: 3 18:10:38 18:10:38 transportpce_tests/common/test_utils.py:206: SystemExit 18:10:38 ---------------------------- Captured stdout setup ----------------------------- 18:10:38 starting OpenDaylight... 18:10:38 starting KARAF TransportPCE build... 18:10:38 Searching for pattern 'Transportpce controller started' in karaf.log... Pattern found! OpenDaylight started ! 18:10:38 installing tapi feature... 18:10:38 installing feature odl-transportpce-tapi 18:10:38 client: JAVA_HOME not set; results may vary 18:10:38 odl-transportpce-tapi │ 10.0.0.SNAPSHOT │ x │ Started │ odl-transportpce-tapi │ OpenDaylight :: transportpce :: tapi 18:10:38 Restarting OpenDaylight... 18:10:38 starting KARAF TransportPCE build... 18:10:38 Searching for pattern 'Transportpce controller started' in karaf.log... Pattern found! starting simulator spdra in OpenROADM device version 2.2.1... 18:10:38 Searching for pattern 'Data tree change listeners registered' in spdra-221.log... 
Pattern found! simulator for spdra started 18:10:38 starting simulator roadma in OpenROADM device version 2.2.1... 18:10:38 Searching for pattern 'Data tree change listeners registered' in roadma-221.log... Pattern not found after 100 seconds! simulator for roadma failed to start 18:10:38 ---------------------------- Captured stderr setup ----------------------------- 18:10:38 SLF4J(W): No SLF4J providers were found. 18:10:38 SLF4J(W): Defaulting to no-operation (NOP) logger implementation 18:10:38 SLF4J(W): See https://www.slf4j.org/codes.html#noProviders for further details. 18:10:38 SLF4J(W): Class path contains SLF4J bindings targeting slf4j-api versions 1.7.x or earlier. 18:10:38 SLF4J(W): Ignoring binding found at [jar:file:/w/workspace/transportpce-tox-verify-transportpce-master/karaf221/target/assembly/system/org/apache/karaf/org.apache.karaf.client/4.4.6/org.apache.karaf.client-4.4.6.jar!/org/slf4j/impl/StaticLoggerBinder.class] 18:10:38 SLF4J(W): See https://www.slf4j.org/codes.html#ignoredBindings for an explanation. 
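The "Pattern not found after 100 seconds! simulator for roadma failed to start" message above comes from a log-polling helper. A minimal stdlib sketch of such a `wait_until_log_contains`-style poller (an illustrative reimplementation, not the project's actual code in `transportpce_tests/common/test_utils.py`, which may tail the file differently):

```python
import os
import tempfile
import time


def wait_until_log_contains(log_file, pattern, time_to_wait=100, poll_interval=0.2):
    """Poll log_file until pattern appears or the timeout expires.

    Returns True if the pattern was found, False on timeout. Re-reading the
    whole file each pass keeps the sketch simple; it tolerates the file not
    existing yet (the simulator may not have created it).
    """
    deadline = time.monotonic() + time_to_wait
    while time.monotonic() < deadline:
        if os.path.exists(log_file):
            with open(log_file, encoding='utf-8', errors='replace') as f:
                if pattern in f.read():
                    return True
        time.sleep(poll_interval)
    return False


if __name__ == '__main__':
    with tempfile.TemporaryDirectory() as tmp:
        log = os.path.join(tmp, 'roadma-221.log')
        with open(log, 'w', encoding='utf-8') as f:
            f.write('booting...\nData tree change listeners registered\n')
        print(wait_until_log_contains(log, 'Data tree change listeners registered', time_to_wait=5))
        print(wait_until_log_contains(log, 'no such message', time_to_wait=1))
```

With this sketch, a simulator that never writes the expected pattern simply exhausts the timeout and returns False, which is the path taken for roadma above.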
18:10:38 _________ ERROR at setup of TransportPCEtesting.test_02_connect_spdrC __________ 18:10:38 18:10:38 self = , args = () 18:10:38 kwargs = {} 18:10:38 18:10:38 @functools.wraps(fun) 18:10:38 def wrapper(self, *args, **kwargs): 18:10:38 try: 18:10:38 > return fun(self, *args, **kwargs) 18:10:38 18:10:38 ../.tox/tests_network/lib/python3.11/site-packages/psutil/_pslinux.py:1717: 18:10:38 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 18:10:38 ../.tox/tests_network/lib/python3.11/site-packages/psutil/_common.py:508: in wrapper 18:10:38 raise raise_from(err, None) 18:10:38 ../.tox/tests_network/lib/python3.11/site-packages/psutil/_common.py:506: in wrapper 18:10:38 return fun(self) 18:10:38 ../.tox/tests_network/lib/python3.11/site-packages/psutil/_pslinux.py:1780: in _parse_stat_file 18:10:38 data = bcat("%s/%s/stat" % (self._procfs_path, self.pid)) 18:10:38 ../.tox/tests_network/lib/python3.11/site-packages/psutil/_common.py:851: in bcat 18:10:38 return cat(fname, fallback=fallback, _open=open_binary) 18:10:38 ../.tox/tests_network/lib/python3.11/site-packages/psutil/_common.py:839: in cat 18:10:38 with _open(fname) as f: 18:10:38 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 18:10:38 18:10:38 fname = '/proc/42762/stat' 18:10:38 18:10:38 def open_binary(fname): 18:10:38 > return open(fname, "rb", buffering=FILE_READ_BUFFER_SIZE) 18:10:38 E FileNotFoundError: [Errno 2] No such file or directory: '/proc/42762/stat' 18:10:38 18:10:38 ../.tox/tests_network/lib/python3.11/site-packages/psutil/_common.py:799: FileNotFoundError 18:10:38 18:10:38 During handling of the above exception, another exception occurred: 18:10:38 18:10:38 self = psutil.Process(pid=42762, status='terminated'), pid = 42762 18:10:38 _ignore_nsp = False 18:10:38 18:10:38 def _init(self, pid, _ignore_nsp=False): 18:10:38 if pid is None: 18:10:38 pid = os.getpid() 18:10:38 else: 18:10:38 if not _PY3 and not isinstance(pid, (int, 
long)): 18:10:38 msg = "pid must be an integer (got %r)" % pid 18:10:38 raise TypeError(msg) 18:10:38 if pid < 0: 18:10:38 msg = "pid must be a positive integer (got %s)" % pid 18:10:38 raise ValueError(msg) 18:10:38 try: 18:10:38 _psplatform.cext.check_pid_range(pid) 18:10:38 except OverflowError: 18:10:38 msg = "process PID out of range (got %s)" % pid 18:10:38 raise NoSuchProcess(pid, msg=msg) 18:10:38 18:10:38 self._pid = pid 18:10:38 self._name = None 18:10:38 self._exe = None 18:10:38 self._create_time = None 18:10:38 self._gone = False 18:10:38 self._pid_reused = False 18:10:38 self._hash = None 18:10:38 self._lock = threading.RLock() 18:10:38 # used for caching on Windows only (on POSIX ppid may change) 18:10:38 self._ppid = None 18:10:38 # platform-specific modules define an _psplatform.Process 18:10:38 # implementation class 18:10:38 self._proc = _psplatform.Process(pid) 18:10:38 self._last_sys_cpu_times = None 18:10:38 self._last_proc_cpu_times = None 18:10:38 self._exitcode = _SENTINEL 18:10:38 # cache creation time for later use in is_running() method 18:10:38 try: 18:10:38 > self.create_time() 18:10:38 18:10:38 ../.tox/tests_network/lib/python3.11/site-packages/psutil/__init__.py:355: 18:10:38 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 18:10:38 ../.tox/tests_network/lib/python3.11/site-packages/psutil/__init__.py:757: in create_time 18:10:38 self._create_time = self._proc.create_time() 18:10:38 ../.tox/tests_network/lib/python3.11/site-packages/psutil/_pslinux.py:1717: in wrapper 18:10:38 return fun(self, *args, **kwargs) 18:10:38 ../.tox/tests_network/lib/python3.11/site-packages/psutil/_pslinux.py:1948: in create_time 18:10:38 ctime = float(self._parse_stat_file()['create_time']) 18:10:38 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 18:10:38 18:10:38 self = , args = () 18:10:38 kwargs = {} 18:10:38 18:10:38 @functools.wraps(fun) 18:10:38 def wrapper(self, *args, **kwargs): 
18:10:38 try: 18:10:38 return fun(self, *args, **kwargs) 18:10:38 except PermissionError: 18:10:38 raise AccessDenied(self.pid, self._name) 18:10:38 except ProcessLookupError: 18:10:38 self._raise_if_zombie() 18:10:38 raise NoSuchProcess(self.pid, self._name) 18:10:38 except FileNotFoundError: 18:10:38 self._raise_if_zombie() 18:10:38 if not os.path.exists("%s/%s" % (self._procfs_path, self.pid)): 18:10:38 > raise NoSuchProcess(self.pid, self._name) 18:10:38 E psutil.NoSuchProcess: process no longer exists (pid=42762) 18:10:38 18:10:38 ../.tox/tests_network/lib/python3.11/site-packages/psutil/_pslinux.py:1726: NoSuchProcess 18:10:38 18:10:38 During handling of the above exception, another exception occurred: 18:10:38 18:10:38 cls = 18:10:38 18:10:38 @classmethod 18:10:38 def setUpClass(cls): 18:10:38 # pylint: disable=unsubscriptable-object 18:10:38 cls.init_failed = False 18:10:38 os.environ['JAVA_MIN_MEM'] = '1024M' 18:10:38 os.environ['JAVA_MAX_MEM'] = '4096M' 18:10:38 cls.processes = test_utils.start_tpce() 18:10:38 # TAPI feature is not installed by default in Karaf 18:10:38 if 'NO_ODL_STARTUP' not in os.environ or 'USE_LIGHTY' not in os.environ or os.environ['USE_LIGHTY'] != 'True': 18:10:38 print('installing tapi feature...') 18:10:38 result = test_utils.install_karaf_feature('odl-transportpce-tapi') 18:10:38 if result.returncode != 0: 18:10:38 cls.init_failed = True 18:10:38 print('Restarting OpenDaylight...') 18:10:38 > test_utils.shutdown_process(cls.processes[0]) 18:10:38 18:10:38 transportpce_tests/network/test01_topo_extension.py:149: 18:10:38 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 18:10:38 transportpce_tests/common/test_utils.py:270: in shutdown_process 18:10:38 for child in psutil.Process(process.pid).children(): 18:10:38 ../.tox/tests_network/lib/python3.11/site-packages/psutil/__init__.py:319: in __init__ 18:10:38 self._init(pid) 18:10:38 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
_ _ _ _ _ 18:10:38 18:10:38 self = psutil.Process(pid=42762, status='terminated'), pid = 42762 18:10:38 _ignore_nsp = False 18:10:38 18:10:38 def _init(self, pid, _ignore_nsp=False): 18:10:38 if pid is None: 18:10:38 pid = os.getpid() 18:10:38 else: 18:10:38 if not _PY3 and not isinstance(pid, (int, long)): 18:10:38 msg = "pid must be an integer (got %r)" % pid 18:10:38 raise TypeError(msg) 18:10:38 if pid < 0: 18:10:38 msg = "pid must be a positive integer (got %s)" % pid 18:10:38 raise ValueError(msg) 18:10:38 try: 18:10:38 _psplatform.cext.check_pid_range(pid) 18:10:38 except OverflowError: 18:10:38 msg = "process PID out of range (got %s)" % pid 18:10:38 raise NoSuchProcess(pid, msg=msg) 18:10:38 18:10:38 self._pid = pid 18:10:38 self._name = None 18:10:38 self._exe = None 18:10:38 self._create_time = None 18:10:38 self._gone = False 18:10:38 self._pid_reused = False 18:10:38 self._hash = None 18:10:38 self._lock = threading.RLock() 18:10:38 # used for caching on Windows only (on POSIX ppid may change) 18:10:38 self._ppid = None 18:10:38 # platform-specific modules define an _psplatform.Process 18:10:38 # implementation class 18:10:38 self._proc = _psplatform.Process(pid) 18:10:38 self._last_sys_cpu_times = None 18:10:38 self._last_proc_cpu_times = None 18:10:38 self._exitcode = _SENTINEL 18:10:38 # cache creation time for later use in is_running() method 18:10:38 try: 18:10:38 self.create_time() 18:10:38 except AccessDenied: 18:10:38 # We should never get here as AFAIK we're able to get 18:10:38 # process creation time on all platforms even as a 18:10:38 # limited user. 18:10:38 pass 18:10:38 except ZombieProcess: 18:10:38 # Zombies can still be queried by this class (although 18:10:38 # not always) and pids() return them so just go on. 
18:10:38 pass 18:10:38 except NoSuchProcess: 18:10:38 if not _ignore_nsp: 18:10:38 msg = "process PID not found" 18:10:38 > raise NoSuchProcess(pid, msg=msg) 18:10:38 E psutil.NoSuchProcess: process PID not found (pid=42762) 18:10:38 18:10:38 ../.tox/tests_network/lib/python3.11/site-packages/psutil/__init__.py:368: NoSuchProcess 18:10:38 ---------------------------- Captured stdout setup ----------------------------- 18:10:38 starting OpenDaylight... 18:10:38 starting KARAF TransportPCE build... 18:10:38 Searching for pattern 'Transportpce controller started' in karaf.log... Pattern found! OpenDaylight started ! 18:10:38 installing tapi feature... 18:10:38 installing feature odl-transportpce-tapi 18:10:38 client: JAVA_HOME not set; results may vary 18:10:38 odl-transportpce-tapi │ 10.0.0.SNAPSHOT │ x │ Started │ odl-transportpce-tapi │ OpenDaylight :: transportpce :: tapi 18:10:38 Restarting OpenDaylight... 18:10:38 ---------------------------- Captured stderr setup ----------------------------- 18:10:38 SLF4J(W): No SLF4J providers were found. 18:10:38 SLF4J(W): Defaulting to no-operation (NOP) logger implementation 18:10:38 SLF4J(W): See https://www.slf4j.org/codes.html#noProviders for further details. 18:10:38 SLF4J(W): Class path contains SLF4J bindings targeting slf4j-api versions 1.7.x or earlier. 18:10:38 SLF4J(W): Ignoring binding found at [jar:file:/w/workspace/transportpce-tox-verify-transportpce-master/karaf221/target/assembly/system/org/apache/karaf/org.apache.karaf.client/4.4.6/org.apache.karaf.client-4.4.6.jar!/org/slf4j/impl/StaticLoggerBinder.class] 18:10:38 SLF4J(W): See https://www.slf4j.org/codes.html#ignoredBindings for an explanation. 
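The `psutil.NoSuchProcess` errors above arise from a race: `shutdown_process` calls `psutil.Process(process.pid).children()` after the Karaf process (pid 42762) has already exited, so its `/proc` entry is gone. A defensive stdlib sketch of the same race using `os.kill`, which raises `ProcessLookupError` for a dead pid (`safe_shutdown` is a hypothetical helper for illustration; the actual fix in `test_utils.shutdown_process` would instead catch `psutil.NoSuchProcess`):

```python
import os
import subprocess
import sys


def safe_shutdown(pid, sig=15):
    """Send sig (default SIGTERM) to pid, tolerating an already-exited process.

    Looking up a pid after the process died must be treated as
    'already stopped', not as a hard error. Returns True if the signal
    was delivered, False if the process was already gone.
    """
    try:
        os.kill(pid, sig)
        return True
    except ProcessLookupError:
        # The process exited between our bookkeeping and the signal:
        # exactly the race seen in the traceback above.
        return False


if __name__ == '__main__':
    # Spawn a short-lived child and reap it, so its pid is stale.
    child = subprocess.Popen([sys.executable, '-c', 'pass'])
    child.wait()
    print(safe_shutdown(child.pid))  # stale pid: already gone
```

Note that `psutil` narrows the same ambiguity further: it distinguishes zombies (`_raise_if_zombie`) from fully reaped processes before raising `NoSuchProcess`, as the traceback shows.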
18:10:38 __________ ERROR at setup of TransportPCEtesting.test_03_connect_rdmA __________
18:10:38 pass 18:10:38 except NoSuchProcess: 18:10:38 if not _ignore_nsp: 18:10:38 msg = "process PID not found" 18:10:38 > raise NoSuchProcess(pid, msg=msg) 18:10:38 E psutil.NoSuchProcess: process PID not found (pid=42762) 18:10:38 18:10:38 ../.tox/tests_network/lib/python3.11/site-packages/psutil/__init__.py:368: NoSuchProcess 18:10:38 __________ ERROR at setup of TransportPCEtesting.test_04_connect_rdmC __________ 18:10:38 18:10:38 self = , args = () 18:10:38 kwargs = {} 18:10:38 18:10:38 @functools.wraps(fun) 18:10:38 def wrapper(self, *args, **kwargs): 18:10:38 try: 18:10:38 > return fun(self, *args, **kwargs) 18:10:38 18:10:38 ../.tox/tests_network/lib/python3.11/site-packages/psutil/_pslinux.py:1717: 18:10:38 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 18:10:38 ../.tox/tests_network/lib/python3.11/site-packages/psutil/_common.py:508: in wrapper 18:10:38 raise raise_from(err, None) 18:10:38 ../.tox/tests_network/lib/python3.11/site-packages/psutil/_common.py:506: in wrapper 18:10:38 return fun(self) 18:10:38 ../.tox/tests_network/lib/python3.11/site-packages/psutil/_pslinux.py:1780: in _parse_stat_file 18:10:38 data = bcat("%s/%s/stat" % (self._procfs_path, self.pid)) 18:10:38 ../.tox/tests_network/lib/python3.11/site-packages/psutil/_common.py:851: in bcat 18:10:38 return cat(fname, fallback=fallback, _open=open_binary) 18:10:38 ../.tox/tests_network/lib/python3.11/site-packages/psutil/_common.py:839: in cat 18:10:38 with _open(fname) as f: 18:10:38 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 18:10:38 18:10:38 fname = '/proc/42762/stat' 18:10:38 18:10:38 def open_binary(fname): 18:10:38 > return open(fname, "rb", buffering=FILE_READ_BUFFER_SIZE) 18:10:38 E FileNotFoundError: [Errno 2] No such file or directory: '/proc/42762/stat' 18:10:38 18:10:38 ../.tox/tests_network/lib/python3.11/site-packages/psutil/_common.py:799: FileNotFoundError 18:10:38 18:10:38 During handling of the above 
exception, another exception occurred: 18:10:38 18:10:38 self = psutil.Process(pid=42762, status='terminated'), pid = 42762 18:10:38 _ignore_nsp = False 18:10:38 18:10:38 def _init(self, pid, _ignore_nsp=False): 18:10:38 if pid is None: 18:10:38 pid = os.getpid() 18:10:38 else: 18:10:38 if not _PY3 and not isinstance(pid, (int, long)): 18:10:38 msg = "pid must be an integer (got %r)" % pid 18:10:38 raise TypeError(msg) 18:10:38 if pid < 0: 18:10:38 msg = "pid must be a positive integer (got %s)" % pid 18:10:38 raise ValueError(msg) 18:10:38 try: 18:10:38 _psplatform.cext.check_pid_range(pid) 18:10:38 except OverflowError: 18:10:38 msg = "process PID out of range (got %s)" % pid 18:10:38 raise NoSuchProcess(pid, msg=msg) 18:10:38 18:10:38 self._pid = pid 18:10:38 self._name = None 18:10:38 self._exe = None 18:10:38 self._create_time = None 18:10:38 self._gone = False 18:10:38 self._pid_reused = False 18:10:38 self._hash = None 18:10:38 self._lock = threading.RLock() 18:10:38 # used for caching on Windows only (on POSIX ppid may change) 18:10:38 self._ppid = None 18:10:38 # platform-specific modules define an _psplatform.Process 18:10:38 # implementation class 18:10:38 self._proc = _psplatform.Process(pid) 18:10:38 self._last_sys_cpu_times = None 18:10:38 self._last_proc_cpu_times = None 18:10:38 self._exitcode = _SENTINEL 18:10:38 # cache creation time for later use in is_running() method 18:10:38 try: 18:10:38 > self.create_time() 18:10:38 18:10:38 ../.tox/tests_network/lib/python3.11/site-packages/psutil/__init__.py:355: 18:10:38 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 18:10:38 ../.tox/tests_network/lib/python3.11/site-packages/psutil/__init__.py:757: in create_time 18:10:38 self._create_time = self._proc.create_time() 18:10:38 ../.tox/tests_network/lib/python3.11/site-packages/psutil/_pslinux.py:1717: in wrapper 18:10:38 return fun(self, *args, **kwargs) 18:10:38 
../.tox/tests_network/lib/python3.11/site-packages/psutil/_pslinux.py:1948: in create_time 18:10:38 ctime = float(self._parse_stat_file()['create_time']) 18:10:38 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 18:10:38 18:10:38 self = , args = () 18:10:38 kwargs = {} 18:10:38 18:10:38 @functools.wraps(fun) 18:10:38 def wrapper(self, *args, **kwargs): 18:10:38 try: 18:10:38 return fun(self, *args, **kwargs) 18:10:38 except PermissionError: 18:10:38 raise AccessDenied(self.pid, self._name) 18:10:38 except ProcessLookupError: 18:10:38 self._raise_if_zombie() 18:10:38 raise NoSuchProcess(self.pid, self._name) 18:10:38 except FileNotFoundError: 18:10:38 self._raise_if_zombie() 18:10:38 if not os.path.exists("%s/%s" % (self._procfs_path, self.pid)): 18:10:38 > raise NoSuchProcess(self.pid, self._name) 18:10:38 E psutil.NoSuchProcess: process no longer exists (pid=42762) 18:10:38 18:10:38 ../.tox/tests_network/lib/python3.11/site-packages/psutil/_pslinux.py:1726: NoSuchProcess 18:10:38 18:10:38 During handling of the above exception, another exception occurred: 18:10:38 18:10:38 cls = 18:10:38 18:10:38 @classmethod 18:10:38 def setUpClass(cls): 18:10:38 # pylint: disable=unsubscriptable-object 18:10:38 cls.init_failed = False 18:10:38 os.environ['JAVA_MIN_MEM'] = '1024M' 18:10:38 os.environ['JAVA_MAX_MEM'] = '4096M' 18:10:38 cls.processes = test_utils.start_tpce() 18:10:38 # TAPI feature is not installed by default in Karaf 18:10:38 if 'NO_ODL_STARTUP' not in os.environ or 'USE_LIGHTY' not in os.environ or os.environ['USE_LIGHTY'] != 'True': 18:10:38 print('installing tapi feature...') 18:10:38 result = test_utils.install_karaf_feature('odl-transportpce-tapi') 18:10:38 if result.returncode != 0: 18:10:38 cls.init_failed = True 18:10:38 print('Restarting OpenDaylight...') 18:10:38 > test_utils.shutdown_process(cls.processes[0]) 18:10:38 18:10:38 transportpce_tests/network/test01_topo_extension.py:149: 18:10:38 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 18:10:38 transportpce_tests/common/test_utils.py:270: in shutdown_process 18:10:38 for child in psutil.Process(process.pid).children(): 18:10:38 ../.tox/tests_network/lib/python3.11/site-packages/psutil/__init__.py:319: in __init__ 18:10:38 self._init(pid) 18:10:38 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 18:10:38 18:10:38 self = psutil.Process(pid=42762, status='terminated'), pid = 42762 18:10:38 _ignore_nsp = False 18:10:38 18:10:38 def _init(self, pid, _ignore_nsp=False): 18:10:38 if pid is None: 18:10:38 pid = os.getpid() 18:10:38 else: 18:10:38 if not _PY3 and not isinstance(pid, (int, long)): 18:10:38 msg = "pid must be an integer (got %r)" % pid 18:10:38 raise TypeError(msg) 18:10:38 if pid < 0: 18:10:38 msg = "pid must be a positive integer (got %s)" % pid 18:10:38 raise ValueError(msg) 18:10:38 try: 18:10:38 _psplatform.cext.check_pid_range(pid) 18:10:38 except OverflowError: 18:10:38 msg = "process PID out of range (got %s)" % pid 18:10:38 raise NoSuchProcess(pid, msg=msg) 18:10:38 18:10:38 self._pid = pid 18:10:38 self._name = None 18:10:38 self._exe = None 18:10:38 self._create_time = None 18:10:38 self._gone = False 18:10:38 self._pid_reused = False 18:10:38 self._hash = None 18:10:38 self._lock = threading.RLock() 18:10:38 # used for caching on Windows only (on POSIX ppid may change) 18:10:38 self._ppid = None 18:10:38 # platform-specific modules define an _psplatform.Process 18:10:38 # implementation class 18:10:38 self._proc = _psplatform.Process(pid) 18:10:38 self._last_sys_cpu_times = None 18:10:38 self._last_proc_cpu_times = None 18:10:38 self._exitcode = _SENTINEL 18:10:38 # cache creation time for later use in is_running() method 18:10:38 try: 18:10:38 self.create_time() 18:10:38 except AccessDenied: 18:10:38 # We should never get here as AFAIK we're able to get 18:10:38 # process creation time on all platforms even as a 18:10:38 # limited user. 
18:10:38 pass 18:10:38 except ZombieProcess: 18:10:38 # Zombies can still be queried by this class (although 18:10:38 # not always) and pids() return them so just go on. 18:10:38 pass 18:10:38 except NoSuchProcess: 18:10:38 if not _ignore_nsp: 18:10:38 msg = "process PID not found" 18:10:38 > raise NoSuchProcess(pid, msg=msg) 18:10:38 E psutil.NoSuchProcess: process PID not found (pid=42762) 18:10:38 18:10:38 ../.tox/tests_network/lib/python3.11/site-packages/psutil/__init__.py:368: NoSuchProcess 18:10:38 _ ERROR at setup of TransportPCEtesting.test_05_connect_sprdA_1_N1_to_TAPI_EXT_roadmTA1_PP1 _ 18:10:38 18:10:38 self = , args = () 18:10:38 kwargs = {} 18:10:38 18:10:38 @functools.wraps(fun) 18:10:38 def wrapper(self, *args, **kwargs): 18:10:38 try: 18:10:38 > return fun(self, *args, **kwargs) 18:10:38 18:10:38 ../.tox/tests_network/lib/python3.11/site-packages/psutil/_pslinux.py:1717: 18:10:38 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 18:10:38 ../.tox/tests_network/lib/python3.11/site-packages/psutil/_common.py:508: in wrapper 18:10:38 raise raise_from(err, None) 18:10:38 ../.tox/tests_network/lib/python3.11/site-packages/psutil/_common.py:506: in wrapper 18:10:38 return fun(self) 18:10:38 ../.tox/tests_network/lib/python3.11/site-packages/psutil/_pslinux.py:1780: in _parse_stat_file 18:10:38 data = bcat("%s/%s/stat" % (self._procfs_path, self.pid)) 18:10:38 ../.tox/tests_network/lib/python3.11/site-packages/psutil/_common.py:851: in bcat 18:10:38 return cat(fname, fallback=fallback, _open=open_binary) 18:10:38 ../.tox/tests_network/lib/python3.11/site-packages/psutil/_common.py:839: in cat 18:10:38 with _open(fname) as f: 18:10:38 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 18:10:38 18:10:38 fname = '/proc/42762/stat' 18:10:38 18:10:38 def open_binary(fname): 18:10:38 > return open(fname, "rb", buffering=FILE_READ_BUFFER_SIZE) 18:10:38 E FileNotFoundError: [Errno 2] No such file or 
directory: '/proc/42762/stat' 18:10:38 18:10:38 ../.tox/tests_network/lib/python3.11/site-packages/psutil/_common.py:799: FileNotFoundError 18:10:38 18:10:38 During handling of the above exception, another exception occurred: 18:10:38 18:10:38 self = psutil.Process(pid=42762, status='terminated'), pid = 42762 18:10:38 _ignore_nsp = False 18:10:38 18:10:38 def _init(self, pid, _ignore_nsp=False): 18:10:38 if pid is None: 18:10:38 pid = os.getpid() 18:10:38 else: 18:10:38 if not _PY3 and not isinstance(pid, (int, long)): 18:10:38 msg = "pid must be an integer (got %r)" % pid 18:10:38 raise TypeError(msg) 18:10:38 if pid < 0: 18:10:38 msg = "pid must be a positive integer (got %s)" % pid 18:10:38 raise ValueError(msg) 18:10:38 try: 18:10:38 _psplatform.cext.check_pid_range(pid) 18:10:38 except OverflowError: 18:10:38 msg = "process PID out of range (got %s)" % pid 18:10:38 raise NoSuchProcess(pid, msg=msg) 18:10:38 18:10:38 self._pid = pid 18:10:38 self._name = None 18:10:38 self._exe = None 18:10:38 self._create_time = None 18:10:38 self._gone = False 18:10:38 self._pid_reused = False 18:10:38 self._hash = None 18:10:38 self._lock = threading.RLock() 18:10:38 # used for caching on Windows only (on POSIX ppid may change) 18:10:38 self._ppid = None 18:10:38 # platform-specific modules define an _psplatform.Process 18:10:38 # implementation class 18:10:38 self._proc = _psplatform.Process(pid) 18:10:38 self._last_sys_cpu_times = None 18:10:38 self._last_proc_cpu_times = None 18:10:38 self._exitcode = _SENTINEL 18:10:38 # cache creation time for later use in is_running() method 18:10:38 try: 18:10:38 > self.create_time() 18:10:38 18:10:38 ../.tox/tests_network/lib/python3.11/site-packages/psutil/__init__.py:355: 18:10:38 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 18:10:38 ../.tox/tests_network/lib/python3.11/site-packages/psutil/__init__.py:757: in create_time 18:10:38 self._create_time = self._proc.create_time() 18:10:38 
../.tox/tests_network/lib/python3.11/site-packages/psutil/_pslinux.py:1717: in wrapper 18:10:38 return fun(self, *args, **kwargs) 18:10:38 ../.tox/tests_network/lib/python3.11/site-packages/psutil/_pslinux.py:1948: in create_time 18:10:38 ctime = float(self._parse_stat_file()['create_time']) 18:10:38 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 18:10:38 18:10:38 self = , args = () 18:10:38 kwargs = {} 18:10:38 18:10:38 @functools.wraps(fun) 18:10:38 def wrapper(self, *args, **kwargs): 18:10:38 try: 18:10:38 return fun(self, *args, **kwargs) 18:10:38 except PermissionError: 18:10:38 raise AccessDenied(self.pid, self._name) 18:10:38 except ProcessLookupError: 18:10:38 self._raise_if_zombie() 18:10:38 raise NoSuchProcess(self.pid, self._name) 18:10:38 except FileNotFoundError: 18:10:38 self._raise_if_zombie() 18:10:38 if not os.path.exists("%s/%s" % (self._procfs_path, self.pid)): 18:10:38 > raise NoSuchProcess(self.pid, self._name) 18:10:38 E psutil.NoSuchProcess: process no longer exists (pid=42762) 18:10:38 18:10:38 ../.tox/tests_network/lib/python3.11/site-packages/psutil/_pslinux.py:1726: NoSuchProcess 18:10:38 18:10:38 During handling of the above exception, another exception occurred: 18:10:38 18:10:38 cls = 18:10:38 18:10:38 @classmethod 18:10:38 def setUpClass(cls): 18:10:38 # pylint: disable=unsubscriptable-object 18:10:38 cls.init_failed = False 18:10:38 os.environ['JAVA_MIN_MEM'] = '1024M' 18:10:38 os.environ['JAVA_MAX_MEM'] = '4096M' 18:10:38 cls.processes = test_utils.start_tpce() 18:10:38 # TAPI feature is not installed by default in Karaf 18:10:38 if 'NO_ODL_STARTUP' not in os.environ or 'USE_LIGHTY' not in os.environ or os.environ['USE_LIGHTY'] != 'True': 18:10:38 print('installing tapi feature...') 18:10:38 result = test_utils.install_karaf_feature('odl-transportpce-tapi') 18:10:38 if result.returncode != 0: 18:10:38 cls.init_failed = True 18:10:38 print('Restarting OpenDaylight...') 18:10:38 > 
test_utils.shutdown_process(cls.processes[0]) 18:10:38 18:10:38 transportpce_tests/network/test01_topo_extension.py:149: 18:10:38 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 18:10:38 transportpce_tests/common/test_utils.py:270: in shutdown_process 18:10:38 for child in psutil.Process(process.pid).children(): 18:10:38 ../.tox/tests_network/lib/python3.11/site-packages/psutil/__init__.py:319: in __init__ 18:10:38 self._init(pid) 18:10:38 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 18:10:38 18:10:38 self = psutil.Process(pid=42762, status='terminated'), pid = 42762 18:10:38 _ignore_nsp = False 18:10:38 18:10:38 def _init(self, pid, _ignore_nsp=False): 18:10:38 if pid is None: 18:10:38 pid = os.getpid() 18:10:38 else: 18:10:38 if not _PY3 and not isinstance(pid, (int, long)): 18:10:38 msg = "pid must be an integer (got %r)" % pid 18:10:38 raise TypeError(msg) 18:10:38 if pid < 0: 18:10:38 msg = "pid must be a positive integer (got %s)" % pid 18:10:38 raise ValueError(msg) 18:10:38 try: 18:10:38 _psplatform.cext.check_pid_range(pid) 18:10:38 except OverflowError: 18:10:38 msg = "process PID out of range (got %s)" % pid 18:10:38 raise NoSuchProcess(pid, msg=msg) 18:10:38 18:10:38 self._pid = pid 18:10:38 self._name = None 18:10:38 self._exe = None 18:10:38 self._create_time = None 18:10:38 self._gone = False 18:10:38 self._pid_reused = False 18:10:38 self._hash = None 18:10:38 self._lock = threading.RLock() 18:10:38 # used for caching on Windows only (on POSIX ppid may change) 18:10:38 self._ppid = None 18:10:38 # platform-specific modules define an _psplatform.Process 18:10:38 # implementation class 18:10:38 self._proc = _psplatform.Process(pid) 18:10:38 self._last_sys_cpu_times = None 18:10:38 self._last_proc_cpu_times = None 18:10:38 self._exitcode = _SENTINEL 18:10:38 # cache creation time for later use in is_running() method 18:10:38 try: 18:10:38 self.create_time() 18:10:38 except AccessDenied: 
18:10:38 # We should never get here as AFAIK we're able to get 18:10:38 # process creation time on all platforms even as a 18:10:38 # limited user. 18:10:38 pass 18:10:38 except ZombieProcess: 18:10:38 # Zombies can still be queried by this class (although 18:10:38 # not always) and pids() return them so just go on. 18:10:38 pass 18:10:38 except NoSuchProcess: 18:10:38 if not _ignore_nsp: 18:10:38 msg = "process PID not found" 18:10:38 > raise NoSuchProcess(pid, msg=msg) 18:10:38 E psutil.NoSuchProcess: process PID not found (pid=42762) 18:10:38 18:10:38 ../.tox/tests_network/lib/python3.11/site-packages/psutil/__init__.py:368: NoSuchProcess 18:10:38 _ ERROR at setup of TransportPCEtesting.test_06_connect_TAPI_EXT_roadmTA1_PP1_to_spdrA_1_N1 _ 18:10:38 18:10:38 self = , args = () 18:10:38 kwargs = {} 18:10:38 18:10:38 @functools.wraps(fun) 18:10:38 def wrapper(self, *args, **kwargs): 18:10:38 try: 18:10:38 > return fun(self, *args, **kwargs) 18:10:38 18:10:38 ../.tox/tests_network/lib/python3.11/site-packages/psutil/_pslinux.py:1717: 18:10:38 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 18:10:38 ../.tox/tests_network/lib/python3.11/site-packages/psutil/_common.py:508: in wrapper 18:10:38 raise raise_from(err, None) 18:10:38 ../.tox/tests_network/lib/python3.11/site-packages/psutil/_common.py:506: in wrapper 18:10:38 return fun(self) 18:10:38 ../.tox/tests_network/lib/python3.11/site-packages/psutil/_pslinux.py:1780: in _parse_stat_file 18:10:38 data = bcat("%s/%s/stat" % (self._procfs_path, self.pid)) 18:10:38 ../.tox/tests_network/lib/python3.11/site-packages/psutil/_common.py:851: in bcat 18:10:38 return cat(fname, fallback=fallback, _open=open_binary) 18:10:38 ../.tox/tests_network/lib/python3.11/site-packages/psutil/_common.py:839: in cat 18:10:38 with _open(fname) as f: 18:10:38 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 18:10:38 18:10:38 fname = '/proc/42762/stat' 18:10:38 18:10:38 def 
open_binary(fname): 18:10:38 > return open(fname, "rb", buffering=FILE_READ_BUFFER_SIZE) 18:10:38 E FileNotFoundError: [Errno 2] No such file or directory: '/proc/42762/stat' 18:10:38 18:10:38 ../.tox/tests_network/lib/python3.11/site-packages/psutil/_common.py:799: FileNotFoundError 18:10:38 18:10:38 During handling of the above exception, another exception occurred: 18:10:38 18:10:38 self = psutil.Process(pid=42762, status='terminated'), pid = 42762 18:10:38 _ignore_nsp = False 18:10:38 18:10:38 def _init(self, pid, _ignore_nsp=False): 18:10:38 if pid is None: 18:10:38 pid = os.getpid() 18:10:38 else: 18:10:38 if not _PY3 and not isinstance(pid, (int, long)): 18:10:38 msg = "pid must be an integer (got %r)" % pid 18:10:38 raise TypeError(msg) 18:10:38 if pid < 0: 18:10:38 msg = "pid must be a positive integer (got %s)" % pid 18:10:38 raise ValueError(msg) 18:10:38 try: 18:10:38 _psplatform.cext.check_pid_range(pid) 18:10:38 except OverflowError: 18:10:38 msg = "process PID out of range (got %s)" % pid 18:10:38 raise NoSuchProcess(pid, msg=msg) 18:10:38 18:10:38 self._pid = pid 18:10:38 self._name = None 18:10:38 self._exe = None 18:10:38 self._create_time = None 18:10:38 self._gone = False 18:10:38 self._pid_reused = False 18:10:38 self._hash = None 18:10:38 self._lock = threading.RLock() 18:10:38 # used for caching on Windows only (on POSIX ppid may change) 18:10:38 self._ppid = None 18:10:38 # platform-specific modules define an _psplatform.Process 18:10:38 # implementation class 18:10:38 self._proc = _psplatform.Process(pid) 18:10:38 self._last_sys_cpu_times = None 18:10:38 self._last_proc_cpu_times = None 18:10:38 self._exitcode = _SENTINEL 18:10:38 # cache creation time for later use in is_running() method 18:10:38 try: 18:10:38 > self.create_time() 18:10:38 18:10:38 ../.tox/tests_network/lib/python3.11/site-packages/psutil/__init__.py:355: 18:10:38 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 18:10:38 
../.tox/tests_network/lib/python3.11/site-packages/psutil/__init__.py:757: in create_time 18:10:38 self._create_time = self._proc.create_time() 18:10:38 ../.tox/tests_network/lib/python3.11/site-packages/psutil/_pslinux.py:1717: in wrapper 18:10:38 return fun(self, *args, **kwargs) 18:10:38 ../.tox/tests_network/lib/python3.11/site-packages/psutil/_pslinux.py:1948: in create_time 18:10:38 ctime = float(self._parse_stat_file()['create_time']) 18:10:38 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 18:10:38 18:10:38 self = , args = () 18:10:38 kwargs = {} 18:10:38 18:10:38 @functools.wraps(fun) 18:10:38 def wrapper(self, *args, **kwargs): 18:10:38 try: 18:10:38 return fun(self, *args, **kwargs) 18:10:38 except PermissionError: 18:10:38 raise AccessDenied(self.pid, self._name) 18:10:38 except ProcessLookupError: 18:10:38 self._raise_if_zombie() 18:10:38 raise NoSuchProcess(self.pid, self._name) 18:10:38 except FileNotFoundError: 18:10:38 self._raise_if_zombie() 18:10:38 if not os.path.exists("%s/%s" % (self._procfs_path, self.pid)): 18:10:38 > raise NoSuchProcess(self.pid, self._name) 18:10:38 E psutil.NoSuchProcess: process no longer exists (pid=42762) 18:10:38 18:10:38 ../.tox/tests_network/lib/python3.11/site-packages/psutil/_pslinux.py:1726: NoSuchProcess 18:10:38 18:10:38 During handling of the above exception, another exception occurred: 18:10:38 18:10:38 cls = 18:10:38 18:10:38 @classmethod 18:10:38 def setUpClass(cls): 18:10:38 # pylint: disable=unsubscriptable-object 18:10:38 cls.init_failed = False 18:10:38 os.environ['JAVA_MIN_MEM'] = '1024M' 18:10:38 os.environ['JAVA_MAX_MEM'] = '4096M' 18:10:38 cls.processes = test_utils.start_tpce() 18:10:38 # TAPI feature is not installed by default in Karaf 18:10:38 if 'NO_ODL_STARTUP' not in os.environ or 'USE_LIGHTY' not in os.environ or os.environ['USE_LIGHTY'] != 'True': 18:10:38 print('installing tapi feature...') 18:10:38 result = 
test_utils.install_karaf_feature('odl-transportpce-tapi') 18:10:38 if result.returncode != 0: 18:10:38 cls.init_failed = True 18:10:38 print('Restarting OpenDaylight...') 18:10:38 > test_utils.shutdown_process(cls.processes[0]) 18:10:38 18:10:38 transportpce_tests/network/test01_topo_extension.py:149: 18:10:38 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 18:10:38 transportpce_tests/common/test_utils.py:270: in shutdown_process 18:10:38 for child in psutil.Process(process.pid).children(): 18:10:38 ../.tox/tests_network/lib/python3.11/site-packages/psutil/__init__.py:319: in __init__ 18:10:38 self._init(pid) 18:10:38 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 18:10:38 18:10:38 self = psutil.Process(pid=42762, status='terminated'), pid = 42762 18:10:38 _ignore_nsp = False 18:10:38 18:10:38 def _init(self, pid, _ignore_nsp=False): 18:10:38 if pid is None: 18:10:38 pid = os.getpid() 18:10:38 else: 18:10:38 if not _PY3 and not isinstance(pid, (int, long)): 18:10:38 msg = "pid must be an integer (got %r)" % pid 18:10:38 raise TypeError(msg) 18:10:38 if pid < 0: 18:10:38 msg = "pid must be a positive integer (got %s)" % pid 18:10:38 raise ValueError(msg) 18:10:38 try: 18:10:38 _psplatform.cext.check_pid_range(pid) 18:10:38 except OverflowError: 18:10:38 msg = "process PID out of range (got %s)" % pid 18:10:38 raise NoSuchProcess(pid, msg=msg) 18:10:38 18:10:38 self._pid = pid 18:10:38 self._name = None 18:10:38 self._exe = None 18:10:38 self._create_time = None 18:10:38 self._gone = False 18:10:38 self._pid_reused = False 18:10:38 self._hash = None 18:10:38 self._lock = threading.RLock() 18:10:38 # used for caching on Windows only (on POSIX ppid may change) 18:10:38 self._ppid = None 18:10:38 # platform-specific modules define an _psplatform.Process 18:10:38 # implementation class 18:10:38 self._proc = _psplatform.Process(pid) 18:10:38 self._last_sys_cpu_times = None 18:10:38 self._last_proc_cpu_times 
= None 18:10:38 self._exitcode = _SENTINEL 18:10:38 # cache creation time for later use in is_running() method 18:10:38 try: 18:10:38 self.create_time() 18:10:38 except AccessDenied: 18:10:38 # We should never get here as AFAIK we're able to get 18:10:38 # process creation time on all platforms even as a 18:10:38 # limited user. 18:10:38 pass 18:10:38 except ZombieProcess: 18:10:38 # Zombies can still be queried by this class (although 18:10:38 # not always) and pids() return them so just go on. 18:10:38 pass 18:10:38 except NoSuchProcess: 18:10:38 if not _ignore_nsp: 18:10:38 msg = "process PID not found" 18:10:38 > raise NoSuchProcess(pid, msg=msg) 18:10:38 E psutil.NoSuchProcess: process PID not found (pid=42762) 18:10:38 18:10:38 ../.tox/tests_network/lib/python3.11/site-packages/psutil/__init__.py:368: NoSuchProcess 18:10:38 _ ERROR at setup of TransportPCEtesting.test_07_connect_sprdC_1_N1_to_TAPI_EXT_roadmTC1_PP1 _ 18:10:38 18:10:38 self = , args = () 18:10:38 kwargs = {} 18:10:38 18:10:38 @functools.wraps(fun) 18:10:38 def wrapper(self, *args, **kwargs): 18:10:38 try: 18:10:38 > return fun(self, *args, **kwargs) 18:10:38 18:10:38 ../.tox/tests_network/lib/python3.11/site-packages/psutil/_pslinux.py:1717: 18:10:38 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 18:10:38 ../.tox/tests_network/lib/python3.11/site-packages/psutil/_common.py:508: in wrapper 18:10:38 raise raise_from(err, None) 18:10:38 ../.tox/tests_network/lib/python3.11/site-packages/psutil/_common.py:506: in wrapper 18:10:38 return fun(self) 18:10:38 ../.tox/tests_network/lib/python3.11/site-packages/psutil/_pslinux.py:1780: in _parse_stat_file 18:10:38 data = bcat("%s/%s/stat" % (self._procfs_path, self.pid)) 18:10:38 ../.tox/tests_network/lib/python3.11/site-packages/psutil/_common.py:851: in bcat 18:10:38 return cat(fname, fallback=fallback, _open=open_binary) 18:10:38 ../.tox/tests_network/lib/python3.11/site-packages/psutil/_common.py:839: in cat 18:10:38 
with _open(fname) as f: 18:10:38 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 18:10:38 18:10:38 fname = '/proc/42762/stat' 18:10:38 18:10:38 def open_binary(fname): 18:10:38 > return open(fname, "rb", buffering=FILE_READ_BUFFER_SIZE) 18:10:38 E FileNotFoundError: [Errno 2] No such file or directory: '/proc/42762/stat' 18:10:38 18:10:38 ../.tox/tests_network/lib/python3.11/site-packages/psutil/_common.py:799: FileNotFoundError 18:10:38 18:10:38 During handling of the above exception, another exception occurred: 18:10:38 18:10:38 self = psutil.Process(pid=42762, status='terminated'), pid = 42762 18:10:38 _ignore_nsp = False 18:10:38 18:10:38 def _init(self, pid, _ignore_nsp=False): 18:10:38 if pid is None: 18:10:38 pid = os.getpid() 18:10:38 else: 18:10:38 if not _PY3 and not isinstance(pid, (int, long)): 18:10:38 msg = "pid must be an integer (got %r)" % pid 18:10:38 raise TypeError(msg) 18:10:38 if pid < 0: 18:10:38 msg = "pid must be a positive integer (got %s)" % pid 18:10:38 raise ValueError(msg) 18:10:38 try: 18:10:38 _psplatform.cext.check_pid_range(pid) 18:10:38 except OverflowError: 18:10:38 msg = "process PID out of range (got %s)" % pid 18:10:38 raise NoSuchProcess(pid, msg=msg) 18:10:38 18:10:38 self._pid = pid 18:10:38 self._name = None 18:10:38 self._exe = None 18:10:38 self._create_time = None 18:10:38 self._gone = False 18:10:38 self._pid_reused = False 18:10:38 self._hash = None 18:10:38 self._lock = threading.RLock() 18:10:38 # used for caching on Windows only (on POSIX ppid may change) 18:10:38 self._ppid = None 18:10:38 # platform-specific modules define an _psplatform.Process 18:10:38 # implementation class 18:10:38 self._proc = _psplatform.Process(pid) 18:10:38 self._last_sys_cpu_times = None 18:10:38 self._last_proc_cpu_times = None 18:10:38 self._exitcode = _SENTINEL 18:10:38 # cache creation time for later use in is_running() method 18:10:38 try: 18:10:38 > self.create_time() 18:10:38 18:10:38 
../.tox/tests_network/lib/python3.11/site-packages/psutil/__init__.py:355: 18:10:38 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 18:10:38 ../.tox/tests_network/lib/python3.11/site-packages/psutil/__init__.py:757: in create_time 18:10:38 self._create_time = self._proc.create_time() 18:10:38 ../.tox/tests_network/lib/python3.11/site-packages/psutil/_pslinux.py:1717: in wrapper 18:10:38 return fun(self, *args, **kwargs) 18:10:38 ../.tox/tests_network/lib/python3.11/site-packages/psutil/_pslinux.py:1948: in create_time 18:10:38 ctime = float(self._parse_stat_file()['create_time']) 18:10:38 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 18:10:38 18:10:38 self = , args = () 18:10:38 kwargs = {} 18:10:38 18:10:38 @functools.wraps(fun) 18:10:38 def wrapper(self, *args, **kwargs): 18:10:38 try: 18:10:38 return fun(self, *args, **kwargs) 18:10:38 except PermissionError: 18:10:38 raise AccessDenied(self.pid, self._name) 18:10:38 except ProcessLookupError: 18:10:38 self._raise_if_zombie() 18:10:38 raise NoSuchProcess(self.pid, self._name) 18:10:38 except FileNotFoundError: 18:10:38 self._raise_if_zombie() 18:10:38 if not os.path.exists("%s/%s" % (self._procfs_path, self.pid)): 18:10:38 > raise NoSuchProcess(self.pid, self._name) 18:10:38 E psutil.NoSuchProcess: process no longer exists (pid=42762) 18:10:38 18:10:38 ../.tox/tests_network/lib/python3.11/site-packages/psutil/_pslinux.py:1726: NoSuchProcess 18:10:38 18:10:38 During handling of the above exception, another exception occurred: 18:10:38 18:10:38 cls = 18:10:38 18:10:38 @classmethod 18:10:38 def setUpClass(cls): 18:10:38 # pylint: disable=unsubscriptable-object 18:10:38 cls.init_failed = False 18:10:38 os.environ['JAVA_MIN_MEM'] = '1024M' 18:10:38 os.environ['JAVA_MAX_MEM'] = '4096M' 18:10:38 cls.processes = test_utils.start_tpce() 18:10:38 # TAPI feature is not installed by default in Karaf 18:10:38 if 'NO_ODL_STARTUP' not in os.environ or 
'USE_LIGHTY' not in os.environ or os.environ['USE_LIGHTY'] != 'True':
18:10:38             print('installing tapi feature...')
18:10:38             result = test_utils.install_karaf_feature('odl-transportpce-tapi')
18:10:38             if result.returncode != 0:
18:10:38                 cls.init_failed = True
18:10:38             print('Restarting OpenDaylight...')
18:10:38 >           test_utils.shutdown_process(cls.processes[0])
18:10:38 
18:10:38 transportpce_tests/network/test01_topo_extension.py:149: 
18:10:38 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
18:10:38 transportpce_tests/common/test_utils.py:270: in shutdown_process
18:10:38     for child in psutil.Process(process.pid).children():
18:10:38 ../.tox/tests_network/lib/python3.11/site-packages/psutil/__init__.py:319: in __init__
18:10:38     self._init(pid)
18:10:38 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
18:10:38 
18:10:38 self = psutil.Process(pid=42762, status='terminated'), pid = 42762
18:10:38 _ignore_nsp = False
18:10:38 
18:10:38     def _init(self, pid, _ignore_nsp=False):
18:10:38         if pid is None:
18:10:38             pid = os.getpid()
18:10:38         else:
18:10:38             if not _PY3 and not isinstance(pid, (int, long)):
18:10:38                 msg = "pid must be an integer (got %r)" % pid
18:10:38                 raise TypeError(msg)
18:10:38             if pid < 0:
18:10:38                 msg = "pid must be a positive integer (got %s)" % pid
18:10:38                 raise ValueError(msg)
18:10:38         try:
18:10:38             _psplatform.cext.check_pid_range(pid)
18:10:38         except OverflowError:
18:10:38             msg = "process PID out of range (got %s)" % pid
18:10:38             raise NoSuchProcess(pid, msg=msg)
18:10:38 
18:10:38         self._pid = pid
18:10:38         self._name = None
18:10:38         self._exe = None
18:10:38         self._create_time = None
18:10:38         self._gone = False
18:10:38         self._pid_reused = False
18:10:38         self._hash = None
18:10:38         self._lock = threading.RLock()
18:10:38         # used for caching on Windows only (on POSIX ppid may change)
18:10:38         self._ppid = None
18:10:38         # platform-specific modules define an _psplatform.Process
18:10:38         # implementation class
18:10:38         self._proc = _psplatform.Process(pid)
18:10:38         self._last_sys_cpu_times = None
18:10:38         self._last_proc_cpu_times = None
18:10:38         self._exitcode = _SENTINEL
18:10:38         # cache creation time for later use in is_running() method
18:10:38         try:
18:10:38             self.create_time()
18:10:38         except AccessDenied:
18:10:38             # We should never get here as AFAIK we're able to get
18:10:38             # process creation time on all platforms even as a
18:10:38             # limited user.
18:10:38             pass
18:10:38         except ZombieProcess:
18:10:38             # Zombies can still be queried by this class (although
18:10:38             # not always) and pids() return them so just go on.
18:10:38             pass
18:10:38         except NoSuchProcess:
18:10:38             if not _ignore_nsp:
18:10:38                 msg = "process PID not found"
18:10:38 >               raise NoSuchProcess(pid, msg=msg)
18:10:38 E               psutil.NoSuchProcess: process PID not found (pid=42762)
18:10:38 
18:10:38 ../.tox/tests_network/lib/python3.11/site-packages/psutil/__init__.py:368: NoSuchProcess
18:10:38 _ ERROR at setup of TransportPCEtesting.test_08_connect_TAPI_EXT_roadmTC1_PP1_to_spdrC_1_N1 _
18:10:38 
18:10:38 self = , args = ()
18:10:38 kwargs = {}
18:10:38 
18:10:38     @functools.wraps(fun)
18:10:38     def wrapper(self, *args, **kwargs):
18:10:38         try:
18:10:38 >           return fun(self, *args, **kwargs)
18:10:38 
18:10:38 ../.tox/tests_network/lib/python3.11/site-packages/psutil/_pslinux.py:1717: 
18:10:38 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
18:10:38 ../.tox/tests_network/lib/python3.11/site-packages/psutil/_common.py:508: in wrapper
18:10:38     raise raise_from(err, None)
18:10:38 ../.tox/tests_network/lib/python3.11/site-packages/psutil/_common.py:506: in wrapper
18:10:38     return fun(self)
18:10:38 ../.tox/tests_network/lib/python3.11/site-packages/psutil/_pslinux.py:1780: in _parse_stat_file
18:10:38     data = bcat("%s/%s/stat" % (self._procfs_path, self.pid))
18:10:38 ../.tox/tests_network/lib/python3.11/site-packages/psutil/_common.py:851: in bcat
18:10:38     return cat(fname, fallback=fallback, _open=open_binary)
18:10:38 ../.tox/tests_network/lib/python3.11/site-packages/psutil/_common.py:839: in cat
18:10:38     with _open(fname) as f:
18:10:38 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
18:10:38 
18:10:38 fname = '/proc/42762/stat'
18:10:38 
18:10:38     def open_binary(fname):
18:10:38 >       return open(fname, "rb", buffering=FILE_READ_BUFFER_SIZE)
18:10:38 E       FileNotFoundError: [Errno 2] No such file or directory: '/proc/42762/stat'
18:10:38 
18:10:38 ../.tox/tests_network/lib/python3.11/site-packages/psutil/_common.py:799: FileNotFoundError
18:10:38 
18:10:38 During handling of the above exception, another exception occurred:
18:10:38 
18:10:38 self = psutil.Process(pid=42762, status='terminated'), pid = 42762
18:10:38 _ignore_nsp = False
18:10:38 
18:10:38     def _init(self, pid, _ignore_nsp=False):
18:10:38         if pid is None:
18:10:38             pid = os.getpid()
18:10:38         else:
18:10:38             if not _PY3 and not isinstance(pid, (int, long)):
18:10:38                 msg = "pid must be an integer (got %r)" % pid
18:10:38                 raise TypeError(msg)
18:10:38             if pid < 0:
18:10:38                 msg = "pid must be a positive integer (got %s)" % pid
18:10:38                 raise ValueError(msg)
18:10:38         try:
18:10:38             _psplatform.cext.check_pid_range(pid)
18:10:38         except OverflowError:
18:10:38             msg = "process PID out of range (got %s)" % pid
18:10:38             raise NoSuchProcess(pid, msg=msg)
18:10:38 
18:10:38         self._pid = pid
18:10:38         self._name = None
18:10:38         self._exe = None
18:10:38         self._create_time = None
18:10:38         self._gone = False
18:10:38         self._pid_reused = False
18:10:38         self._hash = None
18:10:38         self._lock = threading.RLock()
18:10:38         # used for caching on Windows only (on POSIX ppid may change)
18:10:38         self._ppid = None
18:10:38         # platform-specific modules define an _psplatform.Process
18:10:38         # implementation class
18:10:38         self._proc = _psplatform.Process(pid)
18:10:38         self._last_sys_cpu_times = None
18:10:38         self._last_proc_cpu_times = None
18:10:38         self._exitcode = _SENTINEL
18:10:38         # cache creation time for later use in is_running() method
18:10:38         try:
18:10:38 >           self.create_time()
18:10:38 
18:10:38 ../.tox/tests_network/lib/python3.11/site-packages/psutil/__init__.py:355: 
18:10:38 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
18:10:38 ../.tox/tests_network/lib/python3.11/site-packages/psutil/__init__.py:757: in create_time
18:10:38     self._create_time = self._proc.create_time()
18:10:38 ../.tox/tests_network/lib/python3.11/site-packages/psutil/_pslinux.py:1717: in wrapper
18:10:38     return fun(self, *args, **kwargs)
18:10:38 ../.tox/tests_network/lib/python3.11/site-packages/psutil/_pslinux.py:1948: in create_time
18:10:38     ctime = float(self._parse_stat_file()['create_time'])
18:10:38 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
18:10:38 
18:10:38 self = , args = ()
18:10:38 kwargs = {}
18:10:38 
18:10:38     @functools.wraps(fun)
18:10:38     def wrapper(self, *args, **kwargs):
18:10:38         try:
18:10:38             return fun(self, *args, **kwargs)
18:10:38         except PermissionError:
18:10:38             raise AccessDenied(self.pid, self._name)
18:10:38         except ProcessLookupError:
18:10:38             self._raise_if_zombie()
18:10:38             raise NoSuchProcess(self.pid, self._name)
18:10:38         except FileNotFoundError:
18:10:38             self._raise_if_zombie()
18:10:38             if not os.path.exists("%s/%s" % (self._procfs_path, self.pid)):
18:10:38 >               raise NoSuchProcess(self.pid, self._name)
18:10:38 E               psutil.NoSuchProcess: process no longer exists (pid=42762)
18:10:38 
18:10:38 ../.tox/tests_network/lib/python3.11/site-packages/psutil/_pslinux.py:1726: NoSuchProcess
18:10:38 
18:10:38 During handling of the above exception, another exception occurred:
18:10:38 
18:10:38 cls = 
18:10:38 
18:10:38     @classmethod
18:10:38     def setUpClass(cls):
18:10:38         # pylint: disable=unsubscriptable-object
18:10:38         cls.init_failed = False
18:10:38         os.environ['JAVA_MIN_MEM'] = '1024M'
18:10:38         os.environ['JAVA_MAX_MEM'] = '4096M'
18:10:38         cls.processes = test_utils.start_tpce()
18:10:38         # TAPI feature is not installed by default in Karaf
18:10:38         if 'NO_ODL_STARTUP' not in os.environ or 'USE_LIGHTY' not in os.environ or os.environ['USE_LIGHTY'] != 'True':
18:10:38             print('installing tapi feature...')
18:10:38             result = test_utils.install_karaf_feature('odl-transportpce-tapi')
18:10:38             if result.returncode != 0:
18:10:38                 cls.init_failed = True
18:10:38             print('Restarting OpenDaylight...')
18:10:38 >           test_utils.shutdown_process(cls.processes[0])
18:10:38 
18:10:38 transportpce_tests/network/test01_topo_extension.py:149: 
18:10:38 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
18:10:38 transportpce_tests/common/test_utils.py:270: in shutdown_process
18:10:38     for child in psutil.Process(process.pid).children():
18:10:38 ../.tox/tests_network/lib/python3.11/site-packages/psutil/__init__.py:319: in __init__
18:10:38     self._init(pid)
18:10:38 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
18:10:38 
18:10:38 self = psutil.Process(pid=42762, status='terminated'), pid = 42762
18:10:38 _ignore_nsp = False
18:10:38 
18:10:38     def _init(self, pid, _ignore_nsp=False):
18:10:38         if pid is None:
18:10:38             pid = os.getpid()
18:10:38         else:
18:10:38             if not _PY3 and not isinstance(pid, (int, long)):
18:10:38                 msg = "pid must be an integer (got %r)" % pid
18:10:38                 raise TypeError(msg)
18:10:38             if pid < 0:
18:10:38                 msg = "pid must be a positive integer (got %s)" % pid
18:10:38                 raise ValueError(msg)
18:10:38         try:
18:10:38             _psplatform.cext.check_pid_range(pid)
18:10:38         except OverflowError:
18:10:38             msg = "process PID out of range (got %s)" % pid
18:10:38             raise NoSuchProcess(pid, msg=msg)
18:10:38 
18:10:38         self._pid = pid
18:10:38         self._name = None
18:10:38         self._exe = None
18:10:38         self._create_time = None
18:10:38         self._gone = False
18:10:38         self._pid_reused = False
18:10:38         self._hash = None
18:10:38         self._lock = threading.RLock()
18:10:38         # used for caching on Windows only (on POSIX ppid may change)
18:10:38         self._ppid = None
18:10:38         # platform-specific modules define an _psplatform.Process
18:10:38         # implementation class
18:10:38         self._proc = _psplatform.Process(pid)
18:10:38         self._last_sys_cpu_times = None
18:10:38         self._last_proc_cpu_times = None
18:10:38         self._exitcode = _SENTINEL
18:10:38         # cache creation time for later use in is_running() method
18:10:38         try:
18:10:38             self.create_time()
18:10:38         except AccessDenied:
18:10:38             # We should never get here as AFAIK we're able to get
18:10:38             # process creation time on all platforms even as a
18:10:38             # limited user.
18:10:38             pass
18:10:38         except ZombieProcess:
18:10:38             # Zombies can still be queried by this class (although
18:10:38             # not always) and pids() return them so just go on.
18:10:38             pass
18:10:38         except NoSuchProcess:
18:10:38             if not _ignore_nsp:
18:10:38                 msg = "process PID not found"
18:10:38 >               raise NoSuchProcess(pid, msg=msg)
18:10:38 E               psutil.NoSuchProcess: process PID not found (pid=42762)
18:10:38 
18:10:38 ../.tox/tests_network/lib/python3.11/site-packages/psutil/__init__.py:368: NoSuchProcess
18:10:38 _______ ERROR at setup of TransportPCEtesting.test_09_check_otn_topology _______
18:10:38 
18:10:38 self = , args = ()
18:10:38 kwargs = {}
18:10:38 
18:10:38     @functools.wraps(fun)
18:10:38     def wrapper(self, *args, **kwargs):
18:10:38         try:
18:10:38 >           return fun(self, *args, **kwargs)
18:10:38 
18:10:38 ../.tox/tests_network/lib/python3.11/site-packages/psutil/_pslinux.py:1717: 
18:10:38 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
18:10:38 ../.tox/tests_network/lib/python3.11/site-packages/psutil/_common.py:508: in wrapper
18:10:38     raise raise_from(err, None)
18:10:38 ../.tox/tests_network/lib/python3.11/site-packages/psutil/_common.py:506: in wrapper
18:10:38     return fun(self)
18:10:38 ../.tox/tests_network/lib/python3.11/site-packages/psutil/_pslinux.py:1780: in _parse_stat_file
18:10:38     data = bcat("%s/%s/stat" % (self._procfs_path,
18:10:38     self.pid))
18:10:38 ../.tox/tests_network/lib/python3.11/site-packages/psutil/_common.py:851: in bcat
18:10:38     return cat(fname, fallback=fallback, _open=open_binary)
18:10:38 ../.tox/tests_network/lib/python3.11/site-packages/psutil/_common.py:839: in cat
18:10:38     with _open(fname) as f:
18:10:38 E       FileNotFoundError: [Errno 2] No such file or directory: '/proc/42762/stat'
18:10:38 ../.tox/tests_network/lib/python3.11/site-packages/psutil/_common.py:799: FileNotFoundError
18:10:38 E       psutil.NoSuchProcess: process no longer exists (pid=42762)
18:10:38 ../.tox/tests_network/lib/python3.11/site-packages/psutil/_pslinux.py:1726: NoSuchProcess
18:10:38 transportpce_tests/network/test01_topo_extension.py:149: 
18:10:38 transportpce_tests/common/test_utils.py:270: in shutdown_process
18:10:38     for child in psutil.Process(process.pid).children():
18:10:38 E       psutil.NoSuchProcess: process PID not found (pid=42762)
18:10:38 ../.tox/tests_network/lib/python3.11/site-packages/psutil/__init__.py:368: NoSuchProcess
18:10:38 ____ ERROR at setup of TransportPCEtesting.test_10_check_openroadm_topology ____
18:10:38 E       FileNotFoundError: [Errno 2] No such file or directory: '/proc/42762/stat'
18:10:38 E       psutil.NoSuchProcess: process no longer exists (pid=42762)
18:10:38 transportpce_tests/common/test_utils.py:270: in shutdown_process
18:10:38     for child in psutil.Process(process.pid).children():
18:10:38 E       psutil.NoSuchProcess: process PID not found (pid=42762)
18:10:38 ../.tox/tests_network/lib/python3.11/site-packages/psutil/__init__.py:368: NoSuchProcess
18:10:38 _ ERROR at setup of TransportPCEtesting.test_11_connect_RDMA1_to_TAPI_EXT_roadmTA1 _
18:10:38 E       FileNotFoundError: [Errno 2] No such file or directory: '/proc/42762/stat'
18:10:38 E       psutil.NoSuchProcess: process no longer exists (pid=42762)
18:10:38 transportpce_tests/common/test_utils.py:270: in shutdown_process
18:10:38     for child in psutil.Process(process.pid).children():
18:10:38 E       psutil.NoSuchProcess: process PID not found (pid=42762)
18:10:38 ../.tox/tests_network/lib/python3.11/site-packages/psutil/__init__.py:368: NoSuchProcess
18:10:38 _ ERROR at setup of TransportPCEtesting.test_12_connect_RDMC1_to_TAPI_EXT_roadmTC1 _
18:10:38 E       FileNotFoundError: [Errno 2] No such file or directory: '/proc/42762/stat'
18:10:38 E       psutil.NoSuchProcess: process no longer exists (pid=42762)
18:10:38 > 
test_utils.shutdown_process(cls.processes[0]) 18:10:38 18:10:38 transportpce_tests/network/test01_topo_extension.py:149: 18:10:38 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 18:10:38 transportpce_tests/common/test_utils.py:270: in shutdown_process 18:10:38 for child in psutil.Process(process.pid).children(): 18:10:38 ../.tox/tests_network/lib/python3.11/site-packages/psutil/__init__.py:319: in __init__ 18:10:38 self._init(pid) 18:10:38 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 18:10:38 18:10:38 self = psutil.Process(pid=42762, status='terminated'), pid = 42762 18:10:38 _ignore_nsp = False 18:10:38 18:10:38 def _init(self, pid, _ignore_nsp=False): 18:10:38 if pid is None: 18:10:38 pid = os.getpid() 18:10:38 else: 18:10:38 if not _PY3 and not isinstance(pid, (int, long)): 18:10:38 msg = "pid must be an integer (got %r)" % pid 18:10:38 raise TypeError(msg) 18:10:38 if pid < 0: 18:10:38 msg = "pid must be a positive integer (got %s)" % pid 18:10:38 raise ValueError(msg) 18:10:38 try: 18:10:38 _psplatform.cext.check_pid_range(pid) 18:10:38 except OverflowError: 18:10:38 msg = "process PID out of range (got %s)" % pid 18:10:38 raise NoSuchProcess(pid, msg=msg) 18:10:38 18:10:38 self._pid = pid 18:10:38 self._name = None 18:10:38 self._exe = None 18:10:38 self._create_time = None 18:10:38 self._gone = False 18:10:38 self._pid_reused = False 18:10:38 self._hash = None 18:10:38 self._lock = threading.RLock() 18:10:38 # used for caching on Windows only (on POSIX ppid may change) 18:10:38 self._ppid = None 18:10:38 # platform-specific modules define an _psplatform.Process 18:10:38 # implementation class 18:10:38 self._proc = _psplatform.Process(pid) 18:10:38 self._last_sys_cpu_times = None 18:10:38 self._last_proc_cpu_times = None 18:10:38 self._exitcode = _SENTINEL 18:10:38 # cache creation time for later use in is_running() method 18:10:38 try: 18:10:38 self.create_time() 18:10:38 except AccessDenied: 
18:10:38             # We should never get here as AFAIK we're able to get
18:10:38             # process creation time on all platforms even as a
18:10:38             # limited user.
18:10:38             pass
18:10:38         except ZombieProcess:
18:10:38             # Zombies can still be queried by this class (although
18:10:38             # not always) and pids() return them so just go on.
18:10:38             pass
18:10:38         except NoSuchProcess:
18:10:38             if not _ignore_nsp:
18:10:38                 msg = "process PID not found"
18:10:38 >               raise NoSuchProcess(pid, msg=msg)
18:10:38 E               psutil.NoSuchProcess: process PID not found (pid=42762)
18:10:38 
18:10:38 ../.tox/tests_network/lib/python3.11/site-packages/psutil/__init__.py:368: NoSuchProcess
18:10:38 ___ ERROR at setup of TransportPCEtesting.test_13_getLinks_OpenroadmTopology ___
[traceback identical to test_12 above: setUpClass -> test_utils.shutdown_process -> psutil.NoSuchProcess: process PID not found (pid=42762)]
18:10:38 ___ ERROR at setup of TransportPCEtesting.test_14_getNodes_OpenRoadmTopology ___
[traceback identical to test_12 above: psutil.NoSuchProcess: process PID not found (pid=42762)]
18:10:38 ________ ERROR at setup of TransportPCEtesting.test_15_disconnect_spdrA ________
[traceback identical to test_12 above: psutil.NoSuchProcess: process PID not found (pid=42762)]
18:10:38 ________ ERROR at setup of TransportPCEtesting.test_16_disconnect_spdrC ________
[traceback identical to test_12 above: psutil.NoSuchProcess: process PID not found (pid=42762)]
self._pid_reused = False 18:10:38 self._hash = None 18:10:38 self._lock = threading.RLock() 18:10:38 # used for caching on Windows only (on POSIX ppid may change) 18:10:38 self._ppid = None 18:10:38 # platform-specific modules define an _psplatform.Process 18:10:38 # implementation class 18:10:38 self._proc = _psplatform.Process(pid) 18:10:38 self._last_sys_cpu_times = None 18:10:38 self._last_proc_cpu_times = None 18:10:38 self._exitcode = _SENTINEL 18:10:38 # cache creation time for later use in is_running() method 18:10:38 try: 18:10:38 self.create_time() 18:10:38 except AccessDenied: 18:10:38 # We should never get here as AFAIK we're able to get 18:10:38 # process creation time on all platforms even as a 18:10:38 # limited user. 18:10:38 pass 18:10:38 except ZombieProcess: 18:10:38 # Zombies can still be queried by this class (although 18:10:38 # not always) and pids() return them so just go on. 18:10:38 pass 18:10:38 except NoSuchProcess: 18:10:38 if not _ignore_nsp: 18:10:38 msg = "process PID not found" 18:10:38 > raise NoSuchProcess(pid, msg=msg) 18:10:38 E psutil.NoSuchProcess: process PID not found (pid=42762) 18:10:38 18:10:38 ../.tox/tests_network/lib/python3.11/site-packages/psutil/__init__.py:368: NoSuchProcess 18:10:38
_______ ERROR at setup of TransportPCEtesting.test_17_disconnect_roadmA ________ 18:10:38 transportpce_tests/common/test_utils.py:270: in shutdown_process 18:10:38 for child in psutil.Process(process.pid).children(): 18:10:38 E psutil.NoSuchProcess: process PID not found (pid=42762) 18:10:38 ../.tox/tests_network/lib/python3.11/site-packages/psutil/__init__.py:368: NoSuchProcess 18:10:38
_______ ERROR at setup of TransportPCEtesting.test_18_disconnect_roadmC ________ 18:10:38 transportpce_tests/common/test_utils.py:270: in shutdown_process 18:10:38 for child in psutil.Process(process.pid).children(): 18:10:38 E psutil.NoSuchProcess: process PID not found (pid=42762) 18:10:38 ../.tox/tests_network/lib/python3.11/site-packages/psutil/__init__.py:368: NoSuchProcess 18:10:38
=========================== short test summary info ============================ 18:10:38 ERROR transportpce_tests/network/test01_topo_extension.py::TransportPCEtesting::test_01_connect_spdrA 18:10:38 ERROR transportpce_tests/network/test01_topo_extension.py::TransportPCEtesting::test_02_connect_spdrC 18:10:38 ERROR transportpce_tests/network/test01_topo_extension.py::TransportPCEtesting::test_03_connect_rdmA 18:10:38 ERROR transportpce_tests/network/test01_topo_extension.py::TransportPCEtesting::test_04_connect_rdmC 18:10:38 ERROR transportpce_tests/network/test01_topo_extension.py::TransportPCEtesting::test_05_connect_sprdA_1_N1_to_TAPI_EXT_roadmTA1_PP1 18:10:38 ERROR transportpce_tests/network/test01_topo_extension.py::TransportPCEtesting::test_06_connect_TAPI_EXT_roadmTA1_PP1_to_spdrA_1_N1 18:10:38 ERROR transportpce_tests/network/test01_topo_extension.py::TransportPCEtesting::test_07_connect_sprdC_1_N1_to_TAPI_EXT_roadmTC1_PP1 18:10:38 ERROR transportpce_tests/network/test01_topo_extension.py::TransportPCEtesting::test_08_connect_TAPI_EXT_roadmTC1_PP1_to_spdrC_1_N1 18:10:38 ERROR transportpce_tests/network/test01_topo_extension.py::TransportPCEtesting::test_09_check_otn_topology 18:10:38 ERROR transportpce_tests/network/test01_topo_extension.py::TransportPCEtesting::test_10_check_openroadm_topology 18:10:38 ERROR transportpce_tests/network/test01_topo_extension.py::TransportPCEtesting::test_11_connect_RDMA1_to_TAPI_EXT_roadmTA1 18:10:38 ERROR 
transportpce_tests/network/test01_topo_extension.py::TransportPCEtesting::test_12_connect_RDMC1_to_TAPI_EXT_roadmTC1 18:10:38 ERROR transportpce_tests/network/test01_topo_extension.py::TransportPCEtesting::test_13_getLinks_OpenroadmTopology 18:10:38 ERROR transportpce_tests/network/test01_topo_extension.py::TransportPCEtesting::test_14_getNodes_OpenRoadmTopology 18:10:38 ERROR transportpce_tests/network/test01_topo_extension.py::TransportPCEtesting::test_15_disconnect_spdrA 18:10:38 ERROR transportpce_tests/network/test01_topo_extension.py::TransportPCEtesting::test_16_disconnect_spdrC 18:10:38 ERROR transportpce_tests/network/test01_topo_extension.py::TransportPCEtesting::test_17_disconnect_roadmA 18:10:38 ERROR transportpce_tests/network/test01_topo_extension.py::TransportPCEtesting::test_18_disconnect_roadmC 18:10:38 18 errors in 153.98s (0:02:33) 18:10:38 tests_network: exit 1 (154.28 seconds) /w/workspace/transportpce-tox-verify-transportpce-master/tests> ./launch_tests.sh network pid=41997 18:11:00 .......................................F.F.F.FFF..FF [100%] 18:12:34 =================================== FAILURES =================================== 18:12:34 ____________ TransportPCEtesting.test_40_getLinks_OpenRoadmTopology ____________ 18:12:34 18:12:34 self = 18:12:34 18:12:34 def test_40_getLinks_OpenRoadmTopology(self): 18:12:34 response = test_utils.get_ietf_network_request('openroadm-topology', 'config') 18:12:34 self.assertEqual(response['status_code'], requests.codes.ok) 18:12:34 > self.assertEqual(len(response['network'][0]['ietf-network-topology:link']), 16) 18:12:34 E AssertionError: 18 != 16 18:12:34 18:12:34 transportpce_tests/2.2.1/test03_topology.py:769: AssertionError 18:12:34 _______________ TransportPCEtesting.test_43_getOpenRoadmNetwork ________________ 18:12:34 18:12:34 self = 18:12:34 18:12:34 def test_43_getOpenRoadmNetwork(self): 18:12:34 response = test_utils.get_ietf_network_request('openroadm-network', 'config') 18:12:34 
self.assertEqual(response['status_code'], requests.codes.ok) 18:12:34 # Only TAPI-SBI-ABS-NODE created at initialization shall remain in the topology 18:12:34 > self.assertEqual(len(response['network'][0]['node']), 1) 18:12:34 E AssertionError: 2 != 1 18:12:34 18:12:34 transportpce_tests/2.2.1/test03_topology.py:815: AssertionError 18:12:34 ________ TransportPCEtesting.test_44_check_roadm2roadm_link_persistence ________ 18:12:34 18:12:34 self = 18:12:34 18:12:34 def test_44_check_roadm2roadm_link_persistence(self): 18:12:34 response = test_utils.get_ietf_network_request('openroadm-topology', 'config') 18:12:34 self.assertEqual(response['status_code'], requests.codes.ok) 18:12:34 # Only TAPI-SBI-ABS-NODE created at initialization shall remain in the topology 18:12:34 > self.assertEqual(len(response['network'][0]['node']), 1) 18:12:34 E AssertionError: 5 != 1 18:12:34 18:12:34 transportpce_tests/2.2.1/test03_topology.py:821: AssertionError 18:12:34 --------------------------- Captured stdout teardown --------------------------- 18:12:34 all processes killed 18:12:34 =========================== short test summary info ============================ 18:12:34 FAILED transportpce_tests/2.2.1/test03_topology.py::TransportPCEtesting::test_40_getLinks_OpenRoadmTopology 18:12:34 FAILED transportpce_tests/2.2.1/test03_topology.py::TransportPCEtesting::test_43_getOpenRoadmNetwork 18:12:34 FAILED transportpce_tests/2.2.1/test03_topology.py::TransportPCEtesting::test_44_check_roadm2roadm_link_persistence 18:12:34 3 failed, 41 passed in 138.39s (0:02:18) 18:12:34 tests_network: FAIL ✖ in 2 minutes 40.94 seconds 18:12:34 tests221: exit 1 (269.72 seconds) /w/workspace/transportpce-tox-verify-transportpce-master/tests> ./launch_tests.sh 2.2.1 pid=42008 18:15:25 .FFFFFFFFFFFFFFFFFFFFFFF [100%] 18:16:19 =================================== FAILURES =================================== 18:16:19 ______________ TransportPCETopologyTesting.test_02_getClliNetwork ______________ 18:16:19 
18:16:19 self = 18:16:19 18:16:19 def test_02_getClliNetwork(self): 18:16:19 response = test_utils.get_ietf_network_request('clli-network', 'config') 18:16:19 self.assertEqual(response['status_code'], requests.codes.ok) 18:16:19 > self.assertEqual(response['network'][0]['node'][0]['node-id'], 'NodeA') 18:16:19 E AssertionError: 'NodeC' != 'NodeA' 18:16:19 E - NodeC 18:16:19 E ? ^ 18:16:19 E + NodeA 18:16:19 E ? ^ 18:16:19 18:16:19 transportpce_tests/1.2.1/test03_topology.py:122: AssertionError 18:16:19 ___________ TransportPCETopologyTesting.test_03_getOpenRoadmNetwork ____________ 18:16:19 18:16:19 self = 18:16:19 18:16:19 def test_03_getOpenRoadmNetwork(self): 18:16:19 response = test_utils.get_ietf_network_request('openroadm-network', 'config') 18:16:19 self.assertEqual(response['status_code'], requests.codes.ok) 18:16:19 > self.assertEqual(response['network'][0]['node'][0]['node-id'], 'ROADMA01') 18:16:19 E AssertionError: 'XPDR-C2' != 'ROADMA01' 18:16:19 E - XPDR-C2 18:16:19 E + ROADMA01 18:16:19 18:16:19 transportpce_tests/1.2.1/test03_topology.py:128: AssertionError 18:16:19 ________ TransportPCETopologyTesting.test_04_getLinks_OpenroadmTopology ________ 18:16:19 18:16:19 self = 18:16:19 18:16:19 def test_04_getLinks_OpenroadmTopology(self): 18:16:19 response = test_utils.get_ietf_network_request('openroadm-topology', 'config') 18:16:19 self.assertEqual(response['status_code'], requests.codes.ok) 18:16:19 # Tests related to links 18:16:19 > self.assertEqual(len(response['network'][0]['ietf-network-topology:link']), 10) 18:16:19 E KeyError: 'ietf-network-topology:link' 18:16:19 18:16:19 transportpce_tests/1.2.1/test03_topology.py:138: KeyError 18:16:19 ________ TransportPCETopologyTesting.test_05_getNodes_OpenRoadmTopology ________ 18:16:19 18:16:19 self = 18:16:19 18:16:19 def test_05_getNodes_OpenRoadmTopology(self): 18:16:19 response = test_utils.get_ietf_network_request('openroadm-topology', 'config') 18:16:19 self.assertEqual(response['status_code'], 
requests.codes.ok)
18:16:19 > self.assertEqual(len(response['network'][0]['node']), 5)
18:16:19 E AssertionError: 8 != 5
18:16:19
18:16:19 transportpce_tests/1.2.1/test03_topology.py:163: AssertionError
18:16:19 ___________ TransportPCETopologyTesting.test_08_getOpenRoadmNetwork ____________
18:16:19
18:16:19 self =
18:16:19
18:16:19 def test_08_getOpenRoadmNetwork(self):
18:16:19 # pylint: disable=redundant-unittest-assert
18:16:19 response = test_utils.get_ietf_network_request('openroadm-network', 'config')
18:16:19 self.assertEqual(response['status_code'], requests.codes.ok)
18:16:19 > self.assertEqual(len(response['network'][0]['node']), 3)
18:16:19 E AssertionError: 2 != 3
18:16:19
18:16:19 transportpce_tests/1.2.1/test03_topology.py:198: AssertionError
18:16:19 ________ TransportPCETopologyTesting.test_09_getNodes_OpenRoadmTopology ________
18:16:19
18:16:19 self =
18:16:19
18:16:19 def test_09_getNodes_OpenRoadmTopology(self):
18:16:19 # pylint: disable=redundant-unittest-assert
18:16:19 response = test_utils.get_ietf_network_request('openroadm-topology', 'config')
18:16:19 self.assertEqual(response['status_code'], requests.codes.ok)
18:16:19 self.assertEqual(len(response['network'][0]['node']), 6)
18:16:19 listNode = ['XPDRA01-XPDR1', 'ROADMA01-SRG1', 'ROADMA01-SRG3', 'ROADMA01-DEG1', 'ROADMA01-DEG2']
18:16:19 for node in response['network'][0]['node']:
18:16:19 nodeId = node['node-id']
18:16:19 nodeMapId = nodeId.split("-")[0]
18:16:19 if (nodeMapId == 'TAPI'):
18:16:19 continue
18:16:19 nodeType = node['org-openroadm-common-network:node-type']
18:16:19 # Tests related to XPDRA nodes
18:16:19 if nodeId == 'XPDRA01-XPDR1':
18:16:19 self.assertIn({'network-ref': 'openroadm-network', 'node-ref': 'XPDRA01'}, node['supporting-node'])
18:16:19 self.assertIn({'network-ref': 'clli-network', 'node-ref': 'NodeA'}, node['supporting-node'])
18:16:19 self.assertEqual(nodeType, 'XPONDER')
18:16:19 client = 0
18:16:19 network = 0
18:16:19 for tp in node['ietf-network-topology:termination-point']:
18:16:19 tpType = tp['org-openroadm-common-network:tp-type']
18:16:19 tpId = tp['tp-id']
18:16:19 if tpType == 'XPONDER-CLIENT':
18:16:19 client += 1
18:16:19 elif tpType == 'XPONDER-NETWORK':
18:16:19 network += 1
18:16:19 if tpId == 'XPDR1-NETWORK2':
18:16:19 self.assertEqual(
18:16:19 tp['org-openroadm-common-network:associated-connection-map-tp'], ['XPDR1-CLIENT3'])
18:16:19 elif tpId == 'XPDR1-CLIENT3':
18:16:19 self.assertEqual(
18:16:19 tp['org-openroadm-common-network:associated-connection-map-tp'], ['XPDR1-NETWORK2'])
18:16:19 self.assertTrue(client == 4)
18:16:19 self.assertTrue(network == 2)
18:16:19 listNode.remove(nodeId)
18:16:19 # Tests related to ROADMA nodes
18:16:19 elif nodeId in self.CHECK_DICT1:
18:16:19 self.assertEqual(nodeType, self.CHECK_DICT1[nodeId]['node_type'])
18:16:19 if self.CHECK_DICT1[nodeId]['node_type'] == 'SRG':
18:16:19 self.assertEqual(len(node['ietf-network-topology:termination-point']), 17)
18:16:19 for tp in self.CHECK_DICT1[nodeId]['checks_tp']:
18:16:19 self.assertIn(tp, node['ietf-network-topology:termination-point'])
18:16:19 self.assertIn({'network-ref': 'clli-network', 'node-ref': 'NodeA'}, node['supporting-node'])
18:16:19 self.assertIn({'network-ref': 'openroadm-network', 'node-ref': 'ROADMA01'}, node['supporting-node'])
18:16:19 listNode.remove(nodeId)
18:16:19 else:
18:16:19 > self.assertFalse(True)
18:16:19 E AssertionError: True is not false
18:16:19
18:16:19 transportpce_tests/1.2.1/test03_topology.py:261: AssertionError
18:16:19 __________ TransportPCETopologyTesting.test_10_connect_tail_xpdr_rdm ___________
18:16:19
18:16:19 self =
18:16:19
18:16:19 def test_10_connect_tail_xpdr_rdm(self):
18:16:19 # Connect the tail: XPDRA to ROADMA
18:16:19 response = test_utils.transportpce_api_rpc_request(
18:16:19 'transportpce-networkutils', 'init-xpdr-rdm-links',
18:16:19 {'links-input': {'xpdr-node': 'XPDRA01', 'xpdr-num': '1', 'network-num': '1',
18:16:19 'rdm-node': 'ROADMA01', 'srg-num': '1', 'termination-point-num': 'SRG1-PP1-TXRX'}})
18:16:19 > self.assertEqual(response['status_code'], requests.codes.ok)
18:16:19 E AssertionError: 204 != 200
18:16:19
18:16:19 transportpce_tests/1.2.1/test03_topology.py:271: AssertionError
18:16:19 __________ TransportPCETopologyTesting.test_11_connect_tail_rdm_xpdr ___________
18:16:19
18:16:19 self =
18:16:19
18:16:19 def test_11_connect_tail_rdm_xpdr(self):
18:16:19 # Connect the tail: ROADMA to XPDRA
18:16:19 response = test_utils.transportpce_api_rpc_request(
18:16:19 'transportpce-networkutils', 'init-rdm-xpdr-links',
18:16:19 {'links-input': {'xpdr-node': 'XPDRA01', 'xpdr-num': '1', 'network-num': '1',
18:16:19 'rdm-node': 'ROADMA01', 'srg-num': '1', 'termination-point-num': 'SRG1-PP1-TXRX'}})
18:16:19 > self.assertEqual(response['status_code'], requests.codes.ok)
18:16:19 E AssertionError: 204 != 200
18:16:19
18:16:19 transportpce_tests/1.2.1/test03_topology.py:279: AssertionError
18:16:19 ________ TransportPCETopologyTesting.test_12_getLinks_OpenRoadmTopology ________
18:16:19
18:16:19 self =
18:16:19
18:16:19 def test_12_getLinks_OpenRoadmTopology(self):
18:16:19 response = test_utils.get_ietf_network_request('openroadm-topology', 'config')
18:16:19 self.assertEqual(response['status_code'], requests.codes.ok)
18:16:19 > self.assertEqual(len(response['network'][0]['ietf-network-topology:link']), 12)
18:16:19 E AssertionError: 10 != 12
18:16:19
18:16:19 transportpce_tests/1.2.1/test03_topology.py:284: AssertionError
18:16:19 ______________ TransportPCETopologyTesting.test_16_getClliNetwork ______________
18:16:19
18:16:19 self =
18:16:19
18:16:19 def test_16_getClliNetwork(self):
18:16:19 response = test_utils.get_ietf_network_request('clli-network', 'config')
18:16:19 self.assertEqual(response['status_code'], requests.codes.ok)
18:16:19 listNode = ['NodeA', 'NodeC', 'TAPI-SBI-ABS-NODE']
18:16:19 for node in response['network'][0]['node']:
18:16:19 nodeId = node['node-id']
18:16:19 self.assertIn(nodeId, listNode)
18:16:19 self.assertEqual(node['org-openroadm-clli-network:clli'], nodeId)
18:16:19 listNode.remove(nodeId)
18:16:19 > self.assertEqual(len(listNode), 0)
18:16:19 E AssertionError: 1 != 0
18:16:19
18:16:19 transportpce_tests/1.2.1/test03_topology.py:349: AssertionError
18:16:19 ___________ TransportPCETopologyTesting.test_17_getOpenRoadmNetwork ____________
18:16:19
18:16:19 self =
18:16:19
18:16:19 def test_17_getOpenRoadmNetwork(self):
18:16:19 response = test_utils.get_ietf_network_request('openroadm-network', 'config')
18:16:19 self.assertEqual(response['status_code'], requests.codes.ok)
18:16:19 > self.assertEqual(len(response['network'][0]['node']), 4)
18:16:19 E AssertionError: 2 != 4
18:16:19
18:16:19 transportpce_tests/1.2.1/test03_topology.py:354: AssertionError
18:16:19 ______ TransportPCETopologyTesting.test_18_getROADMLinkOpenRoadmTopology _______
18:16:19
18:16:19 self =
18:16:19
18:16:19 def test_18_getROADMLinkOpenRoadmTopology(self):
18:16:19 response = test_utils.get_ietf_network_request('openroadm-topology', 'config')
18:16:19 self.assertEqual(response['status_code'], requests.codes.ok)
18:16:19 self.assertEqual(len(response['network'][0]['ietf-network-topology:link']), 20)
18:16:19 check_list = {'EXPRESS-LINK': ['ROADMA01-DEG2-DEG2-CTP-TXRXtoROADMA01-DEG1-DEG1-CTP-TXRX',
18:16:19 'ROADMA01-DEG1-DEG1-CTP-TXRXtoROADMA01-DEG2-DEG2-CTP-TXRX',
18:16:19 'ROADMC01-DEG2-DEG2-CTP-TXRXtoROADMC01-DEG1-DEG1-CTP-TXRX',
18:16:19 'ROADMC01-DEG1-DEG1-CTP-TXRXtoROADMC01-DEG2-DEG2-CTP-TXRX'],
18:16:19 'ADD-LINK': ['ROADMA01-SRG1-SRG1-CP-TXRXtoROADMA01-DEG2-DEG2-CTP-TXRX',
18:16:19 'ROADMA01-SRG1-SRG1-CP-TXRXtoROADMA01-DEG1-DEG1-CTP-TXRX',
18:16:19 'ROADMA01-SRG3-SRG3-CP-TXRXtoROADMA01-DEG2-DEG2-CTP-TXRX',
18:16:19 'ROADMA01-SRG3-SRG3-CP-TXRXtoROADMA01-DEG1-DEG1-CTP-TXRX',
18:16:19 'ROADMC01-SRG1-SRG1-CP-TXRXtoROADMC01-DEG2-DEG2-CTP-TXRX',
18:16:19 'ROADMC01-SRG1-SRG1-CP-TXRXtoROADMC01-DEG1-DEG1-CTP-TXRX'],
18:16:19 'DROP-LINK': ['ROADMA01-DEG1-DEG1-CTP-TXRXtoROADMA01-SRG1-SRG1-CP-TXRX',
18:16:19 'ROADMA01-DEG2-DEG2-CTP-TXRXtoROADMA01-SRG1-SRG1-CP-TXRX',
18:16:19 'ROADMA01-DEG1-DEG1-CTP-TXRXtoROADMA01-SRG3-SRG3-CP-TXRX',
18:16:19 'ROADMA01-DEG2-DEG2-CTP-TXRXtoROADMA01-SRG3-SRG3-CP-TXRX',
18:16:19 'ROADMC01-DEG1-DEG1-CTP-TXRXtoROADMC01-SRG1-SRG1-CP-TXRX',
18:16:19 'ROADMC01-DEG2-DEG2-CTP-TXRXtoROADMC01-SRG1-SRG1-CP-TXRX'],
18:16:19 'ROADM-TO-ROADM': ['ROADMA01-DEG1-DEG1-TTP-TXRXtoROADMC01-DEG2-DEG2-TTP-TXRX',
18:16:19 'ROADMC01-DEG2-DEG2-TTP-TXRXtoROADMA01-DEG1-DEG1-TTP-TXRX'],
18:16:19 'XPONDER-INPUT': ['ROADMA01-SRG1-SRG1-PP1-TXRXtoXPDRA01-XPDR1-XPDR1-NETWORK1'],
18:16:19 'XPONDER-OUTPUT': ['XPDRA01-XPDR1-XPDR1-NETWORK1toROADMA01-SRG1-SRG1-PP1-TXRX']
18:16:19 }
18:16:19 for link in response['network'][0]['ietf-network-topology:link']:
18:16:19 linkId = link['link-id']
18:16:19 linkType = link['org-openroadm-common-network:link-type']
18:16:19 self.assertIn(linkType, check_list)
18:16:19 find = linkId in check_list[linkType]
18:16:19 > self.assertEqual(find, True)
18:16:19 E AssertionError: False != True
18:16:19
18:16:19 transportpce_tests/1.2.1/test03_topology.py:406: AssertionError
18:16:19 __ TransportPCETopologyTesting.test_19_getLinkOmsAttributesOpenRoadmTopology ___
18:16:19
18:16:19 self =
18:16:19
18:16:19 def test_19_getLinkOmsAttributesOpenRoadmTopology(self):
18:16:19 response = test_utils.get_ietf_network_request('openroadm-topology', 'config')
18:16:19 self.assertEqual(response['status_code'], requests.codes.ok)
18:16:19 > self.assertEqual(len(response['network'][0]['ietf-network-topology:link']), 20)
18:16:19 E AssertionError: 18 != 20
18:16:19
18:16:19 transportpce_tests/1.2.1/test03_topology.py:414: AssertionError
18:16:19 ________ TransportPCETopologyTesting.test_20_getNodes_OpenRoadmTopology ________
18:16:19
18:16:19 self =
18:16:19
18:16:19 def test_20_getNodes_OpenRoadmTopology(self):
18:16:19 # pylint: disable=redundant-unittest-assert
18:16:19 response = test_utils.get_ietf_network_request('openroadm-topology', 'config')
18:16:19 self.assertEqual(response['status_code'], requests.codes.ok)
18:16:19 > self.assertEqual(len(response['network'][0]['node']), 9)
18:16:19 E AssertionError: 5 != 9
18:16:19
18:16:19 transportpce_tests/1.2.1/test03_topology.py:434: AssertionError
18:16:19 _______ TransportPCETopologyTesting.test_22_omsAttributes_ROADMA_ROADMB ________
18:16:19
18:16:19 self =
18:16:19
18:16:19 def _new_conn(self) -> socket.socket:
18:16:19 """Establish a socket connection and set nodelay settings on it.
18:16:19
18:16:19 :return: New socket connection.
18:16:19 """
18:16:19 try:
18:16:19 > sock = connection.create_connection(
18:16:19 (self._dns_host, self.port),
18:16:19 self.timeout,
18:16:19 source_address=self.source_address,
18:16:19 socket_options=self.socket_options,
18:16:19 )
18:16:19
18:16:19 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:199:
18:16:19 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
18:16:19 ../.tox/tests121/lib/python3.11/site-packages/urllib3/util/connection.py:85: in create_connection
18:16:19 raise err
18:16:19 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
18:16:19
18:16:19 address = ('localhost', 8182), timeout = 10, source_address = None
18:16:19 socket_options = [(6, 1, 1)]
18:16:19
18:16:19 def create_connection(
18:16:19 address: tuple[str, int],
18:16:19 timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT,
18:16:19 source_address: tuple[str, int] | None = None,
18:16:19 socket_options: _TYPE_SOCKET_OPTIONS | None = None,
18:16:19 ) -> socket.socket:
18:16:19 """Connect to *address* and return the socket object.
18:16:19
18:16:19 Convenience function. Connect to *address* (a 2-tuple ``(host,
18:16:19 port)``) and return the socket object. Passing the optional
18:16:19 *timeout* parameter will set the timeout on the socket instance
18:16:19 before attempting to connect. If no *timeout* is supplied, the
18:16:19 global default timeout setting returned by :func:`socket.getdefaulttimeout`
18:16:19 is used. If *source_address* is set it must be a tuple of (host, port)
18:16:19 for the socket to bind as a source address before making the connection.
18:16:19 An host of '' or port 0 tells the OS to use the default.
18:16:19 """
18:16:19
18:16:19 host, port = address
18:16:19 if host.startswith("["):
18:16:19 host = host.strip("[]")
18:16:19 err = None
18:16:19
18:16:19 # Using the value from allowed_gai_family() in the context of getaddrinfo lets
18:16:19 # us select whether to work with IPv4 DNS records, IPv6 records, or both.
18:16:19 # The original create_connection function always returns all records.
18:16:19 family = allowed_gai_family()
18:16:19
18:16:19 try:
18:16:19 host.encode("idna")
18:16:19 except UnicodeError:
18:16:19 raise LocationParseError(f"'{host}', label empty or too long") from None
18:16:19
18:16:19 for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM):
18:16:19 af, socktype, proto, canonname, sa = res
18:16:19 sock = None
18:16:19 try:
18:16:19 sock = socket.socket(af, socktype, proto)
18:16:19
18:16:19 # If provided, set socket level options before connecting.
18:16:19 _set_socket_options(sock, socket_options)
18:16:19
18:16:19 if timeout is not _DEFAULT_TIMEOUT:
18:16:19 sock.settimeout(timeout)
18:16:19 if source_address:
18:16:19 sock.bind(source_address)
18:16:19 > sock.connect(sa)
18:16:19 E ConnectionRefusedError: [Errno 111] Connection refused
18:16:19
18:16:19 ../.tox/tests121/lib/python3.11/site-packages/urllib3/util/connection.py:73: ConnectionRefusedError
18:16:19
18:16:19 The above exception was the direct cause of the following exception:
18:16:19
18:16:19 self =
18:16:19 method = 'PUT'
18:16:19 url = '/rests/data/ietf-network:networks/network=openroadm-topology/ietf-network-topology:link=ROADMA01-DEG2-DEG2-TTP-TXRXtoROADMB01-DEG1-DEG1-TTP-TXRX/org-openroadm-network-topology:OMS-attributes/span'
18:16:19 body = '{"span": {"auto-spanloss": "true", "engineered-spanloss": 12.2, "link-concatenation": [{"SRLG-Id": 0, "fiber-type": "smf", "SRLG-length": 100000, "pmd": 0.5}]}}'
18:16:19 headers = {'User-Agent': 'python-requests/2.32.3', 'Accept-Encoding': 'gzip, deflate', 'Accept': 'application/json', 'Connection': 'keep-alive', 'Content-Type': 'application/json', 'Content-Length': '160', 'Authorization': 'Basic YWRtaW46YWRtaW4='}
18:16:19 retries = Retry(total=0, connect=None, read=False, redirect=None, status=None)
18:16:19 redirect = False, assert_same_host = False
18:16:19 timeout = Timeout(connect=10, read=10, total=None), pool_timeout = None
18:16:19 release_conn = False, chunked = False, body_pos = None, preload_content = False
18:16:19 decode_content = False, response_kw = {}
18:16:19 parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/rests/data/ietf-network:networks/network=openroadm-topology/i...2-TTP-TXRXtoROADMB01-DEG1-DEG1-TTP-TXRX/org-openroadm-network-topology:OMS-attributes/span', query=None, fragment=None)
18:16:19 destination_scheme = None, conn = None, release_this_conn = True
18:16:19 http_tunnel_required = False, err = None, clean_exit = False
18:16:19
18:16:19 def urlopen( # type: ignore[override]
18:16:19 self,
18:16:19 method: str,
18:16:19 url: str,
18:16:19 body: _TYPE_BODY | None = None,
18:16:19 headers: typing.Mapping[str, str] | None = None,
18:16:19 retries: Retry | bool | int | None = None,
18:16:19 redirect: bool = True,
18:16:19 assert_same_host: bool = True,
18:16:19 timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT,
18:16:19 pool_timeout: int | None = None,
18:16:19 release_conn: bool | None = None,
18:16:19 chunked: bool = False,
18:16:19 body_pos: _TYPE_BODY_POSITION | None = None,
18:16:19 preload_content: bool = True,
18:16:19 decode_content: bool = True,
18:16:19 **response_kw: typing.Any,
18:16:19 ) -> BaseHTTPResponse:
18:16:19 """
18:16:19 Get a connection from the pool and perform an HTTP request. This is the
18:16:19 lowest level call for making a request, so you'll need to specify all
18:16:19 the raw details.
18:16:19
18:16:19 .. note::
18:16:19
18:16:19 More commonly, it's appropriate to use a convenience method
18:16:19 such as :meth:`request`.
18:16:19
18:16:19 .. note::
18:16:19
18:16:19 `release_conn` will only behave as expected if
18:16:19 `preload_content=False` because we want to make
18:16:19 `preload_content=False` the default behaviour someday soon without
18:16:19 breaking backwards compatibility.
18:16:19
18:16:19 :param method:
18:16:19 HTTP request method (such as GET, POST, PUT, etc.)
18:16:19
18:16:19 :param url:
18:16:19 The URL to perform the request on.
18:16:19
18:16:19 :param body:
18:16:19 Data to send in the request body, either :class:`str`, :class:`bytes`,
18:16:19 an iterable of :class:`str`/:class:`bytes`, or a file-like object.
18:16:19
18:16:19 :param headers:
18:16:19 Dictionary of custom headers to send, such as User-Agent,
18:16:19 If-None-Match, etc. If None, pool headers are used. If provided,
18:16:19 these headers completely replace any pool-specific headers.
18:16:19
18:16:19 :param retries:
18:16:19 Configure the number of retries to allow before raising a
18:16:19 :class:`~urllib3.exceptions.MaxRetryError` exception.
18:16:19
18:16:19 If ``None`` (default) will retry 3 times, see ``Retry.DEFAULT``. Pass a
18:16:19 :class:`~urllib3.util.retry.Retry` object for fine-grained control
18:16:19 over different types of retries.
18:16:19 Pass an integer number to retry connection errors that many times,
18:16:19 but no other types of errors. Pass zero to never retry.
18:16:19
18:16:19 If ``False``, then retries are disabled and any exception is raised
18:16:19 immediately. Also, instead of raising a MaxRetryError on redirects,
18:16:19 the redirect response will be returned.
18:16:19
18:16:19 :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int.
18:16:19
18:16:19 :param redirect:
18:16:19 If True, automatically handle redirects (status codes 301, 302,
18:16:19 303, 307, 308). Each redirect counts as a retry. Disabling retries
18:16:19 will disable redirect, too.
18:16:19
18:16:19 :param assert_same_host:
18:16:19 If ``True``, will make sure that the host of the pool requests is
18:16:19 consistent else will raise HostChangedError. When ``False``, you can
18:16:19 use the pool on an HTTP proxy and request foreign hosts.
18:16:19
18:16:19 :param timeout:
18:16:19 If specified, overrides the default timeout for this one
18:16:19 request. It may be a float (in seconds) or an instance of
18:16:19 :class:`urllib3.util.Timeout`.
18:16:19
18:16:19 :param pool_timeout:
18:16:19 If set and the pool is set to block=True, then this method will
18:16:19 block for ``pool_timeout`` seconds and raise EmptyPoolError if no
18:16:19 connection is available within the time period.
18:16:19
18:16:19 :param bool preload_content:
18:16:19 If True, the response's body will be preloaded into memory.
18:16:19
18:16:19 :param bool decode_content:
18:16:19 If True, will attempt to decode the body based on the
18:16:19 'content-encoding' header.
18:16:19
18:16:19 :param release_conn:
18:16:19 If False, then the urlopen call will not release the connection
18:16:19 back into the pool once a response is received (but will release if
18:16:19 you read the entire contents of the response such as when
18:16:19 `preload_content=True`). This is useful if you're not preloading
18:16:19 the response's content immediately. You will need to call
18:16:19 ``r.release_conn()`` on the response ``r`` to return the connection
18:16:19 back into the pool. If None, it takes the value of ``preload_content``
18:16:19 which defaults to ``True``.
18:16:19
18:16:19 :param bool chunked:
18:16:19 If True, urllib3 will send the body using chunked transfer
18:16:19 encoding. Otherwise, urllib3 will send the body using the standard
18:16:19 content-length form. Defaults to False.
18:16:19
18:16:19 :param int body_pos:
18:16:19 Position to seek to in file-like body in the event of a retry or
18:16:19 redirect. Typically this won't need to be set because urllib3 will
18:16:19 auto-populate the value when needed.
18:16:19 """
18:16:19 parsed_url = parse_url(url)
18:16:19 destination_scheme = parsed_url.scheme
18:16:19
18:16:19 if headers is None:
18:16:19 headers = self.headers
18:16:19
18:16:19 if not isinstance(retries, Retry):
18:16:19 retries = Retry.from_int(retries, redirect=redirect, default=self.retries)
18:16:19
18:16:19 if release_conn is None:
18:16:19 release_conn = preload_content
18:16:19
18:16:19 # Check host
18:16:19 if assert_same_host and not self.is_same_host(url):
18:16:19 raise HostChangedError(self, url, retries)
18:16:19
18:16:19 # Ensure that the URL we're connecting to is properly encoded
18:16:19 if url.startswith("/"):
18:16:19 url = to_str(_encode_target(url))
18:16:19 else:
18:16:19 url = to_str(parsed_url.url)
18:16:19
18:16:19 conn = None
18:16:19
18:16:19 # Track whether `conn` needs to be released before
18:16:19 # returning/raising/recursing. Update this variable if necessary, and
18:16:19 # leave `release_conn` constant throughout the function. That way, if
18:16:19 # the function recurses, the original value of `release_conn` will be
18:16:19 # passed down into the recursive call, and its value will be respected.
18:16:19 #
18:16:19 # See issue #651 [1] for details.
18:16:19 #
18:16:19 # [1]
18:16:19 release_this_conn = release_conn
18:16:19
18:16:19 http_tunnel_required = connection_requires_http_tunnel(
18:16:19 self.proxy, self.proxy_config, destination_scheme
18:16:19 )
18:16:19
18:16:19 # Merge the proxy headers. Only done when not using HTTP CONNECT. We
18:16:19 # have to copy the headers dict so we can safely change it without those
18:16:19 # changes being reflected in anyone else's copy.
18:16:19 if not http_tunnel_required:
18:16:19 headers = headers.copy() # type: ignore[attr-defined]
18:16:19 headers.update(self.proxy_headers) # type: ignore[union-attr]
18:16:19
18:16:19 # Must keep the exception bound to a separate variable or else Python 3
18:16:19 # complains about UnboundLocalError.
18:16:19 err = None
18:16:19
18:16:19 # Keep track of whether we cleanly exited the except block. This
18:16:19 # ensures we do proper cleanup in finally.
18:16:19 clean_exit = False
18:16:19
18:16:19 # Rewind body position, if needed. Record current position
18:16:19 # for future rewinds in the event of a redirect/retry.
18:16:19 body_pos = set_file_position(body, body_pos)
18:16:19
18:16:19 try:
18:16:19 # Request a connection from the queue.
18:16:19 timeout_obj = self._get_timeout(timeout)
18:16:19 conn = self._get_conn(timeout=pool_timeout)
18:16:19
18:16:19 conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment]
18:16:19
18:16:19 # Is this a closed/new connection that requires CONNECT tunnelling?
18:16:19 if self.proxy is not None and http_tunnel_required and conn.is_closed:
18:16:19 try:
18:16:19 self._prepare_proxy(conn)
18:16:19 except (BaseSSLError, OSError, SocketTimeout) as e:
18:16:19 self._raise_timeout(
18:16:19 err=e, url=self.proxy.url, timeout_value=conn.timeout
18:16:19 )
18:16:19 raise
18:16:19
18:16:19 # If we're going to release the connection in ``finally:``, then
18:16:19 # the response doesn't need to know about the connection. Otherwise
18:16:19 # it will also try to release it and we'll have a double-release
18:16:19 # mess.
18:16:19 response_conn = conn if not release_conn else None
18:16:19
18:16:19 # Make the request on the HTTPConnection object
18:16:19 > response = self._make_request(
18:16:19 conn,
18:16:19 method,
18:16:19 url,
18:16:19 timeout=timeout_obj,
18:16:19 body=body,
18:16:19 headers=headers,
18:16:19 chunked=chunked,
18:16:19 retries=retries,
18:16:19 response_conn=response_conn,
18:16:19 preload_content=preload_content,
18:16:19 decode_content=decode_content,
18:16:19 **response_kw,
18:16:19 )
18:16:19
18:16:19 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connectionpool.py:789:
18:16:19 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
18:16:19 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connectionpool.py:495: in _make_request
18:16:19 conn.request(
18:16:19 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:441: in request
18:16:19 self.endheaders()
18:16:19 /opt/pyenv/versions/3.11.7/lib/python3.11/http/client.py:1289: in endheaders
18:16:19 self._send_output(message_body, encode_chunked=encode_chunked)
18:16:19 /opt/pyenv/versions/3.11.7/lib/python3.11/http/client.py:1048: in _send_output
18:16:19 self.send(msg)
18:16:19 /opt/pyenv/versions/3.11.7/lib/python3.11/http/client.py:986: in send
18:16:19 self.connect()
18:16:19 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:279: in connect
18:16:19 self.sock = self._new_conn()
18:16:19 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
18:16:19
18:16:19 self =
18:16:19
18:16:19 def _new_conn(self) -> socket.socket:
18:16:19 """Establish a socket connection and set nodelay settings on it.
18:16:19
18:16:19 :return: New socket connection.
18:16:19 """
18:16:19 try:
18:16:19 sock = connection.create_connection(
18:16:19 (self._dns_host, self.port),
18:16:19 self.timeout,
18:16:19 source_address=self.source_address,
18:16:19 socket_options=self.socket_options,
18:16:19 )
18:16:19 except socket.gaierror as e:
18:16:19 raise NameResolutionError(self.host, self, e) from e
18:16:19 except SocketTimeout as e:
18:16:19 raise ConnectTimeoutError(
18:16:19 self,
18:16:19 f"Connection to {self.host} timed out. (connect timeout={self.timeout})",
18:16:19 ) from e
18:16:19
18:16:19 except OSError as e:
18:16:19 > raise NewConnectionError(
18:16:19 self, f"Failed to establish a new connection: {e}"
18:16:19 ) from e
18:16:19 E urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused
18:16:19
18:16:19 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:214: NewConnectionError
18:16:19
18:16:19 The above exception was the direct cause of the following exception:
18:16:19
18:16:19 self =
18:16:19 request = , stream = False
18:16:19 timeout = Timeout(connect=10, read=10, total=None), verify = True, cert = None
18:16:19 proxies = OrderedDict()
18:16:19
18:16:19 def send(
18:16:19 self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None
18:16:19 ):
18:16:19 """Sends PreparedRequest object. Returns Response object.
18:16:19
18:16:19 :param request: The :class:`PreparedRequest ` being sent.
18:16:19 :param stream: (optional) Whether to stream the request content.
18:16:19 :param timeout: (optional) How long to wait for the server to send
18:16:19 data before giving up, as a float, or a :ref:`(connect timeout,
18:16:19 read timeout) ` tuple.
18:16:19 :type timeout: float or tuple or urllib3 Timeout object
18:16:19 :param verify: (optional) Either a boolean, in which case it controls whether
18:16:19 we verify the server's TLS certificate, or a string, in which case it
18:16:19 must be a path to a CA bundle to use
18:16:19 :param cert: (optional) Any user-provided SSL certificate to be trusted.
18:16:19 :param proxies: (optional) The proxies dictionary to apply to the request.
18:16:19 :rtype: requests.Response
18:16:19 """
18:16:19
18:16:19 try:
18:16:19 conn = self.get_connection_with_tls_context(
18:16:19 request, verify, proxies=proxies, cert=cert
18:16:19 )
18:16:19 except LocationValueError as e:
18:16:19 raise InvalidURL(e, request=request)
18:16:19
18:16:19 self.cert_verify(conn, request.url, verify, cert)
18:16:19 url = self.request_url(request, proxies)
18:16:19 self.add_headers(
18:16:19 request,
18:16:19 stream=stream,
18:16:19 timeout=timeout,
18:16:19 verify=verify,
18:16:19 cert=cert,
18:16:19 proxies=proxies,
18:16:19 )
18:16:19
18:16:19 chunked = not (request.body is None or "Content-Length" in request.headers)
18:16:19
18:16:19 if isinstance(timeout, tuple):
18:16:19 try:
18:16:19 connect, read = timeout
18:16:19 timeout = TimeoutSauce(connect=connect, read=read)
18:16:19 except ValueError:
18:16:19 raise ValueError(
18:16:19 f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, "
18:16:19 f"or a single float to set both timeouts to the same value."
18:16:19 )
18:16:19 elif isinstance(timeout, TimeoutSauce):
18:16:19 pass
18:16:19 else:
18:16:19 timeout = TimeoutSauce(connect=timeout, read=timeout)
18:16:19
18:16:19 try:
18:16:19 > resp = conn.urlopen(
18:16:19 method=request.method,
18:16:19 url=url,
18:16:19 body=request.body,
18:16:19 headers=request.headers,
18:16:19 redirect=False,
18:16:19 assert_same_host=False,
18:16:19 preload_content=False,
18:16:19 decode_content=False,
18:16:19 retries=self.max_retries,
18:16:19 timeout=timeout,
18:16:19 chunked=chunked,
18:16:19 )
18:16:19
18:16:19 ../.tox/tests121/lib/python3.11/site-packages/requests/adapters.py:667:
18:16:19 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
18:16:19 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connectionpool.py:843: in urlopen
18:16:19 retries = retries.increment(
18:16:19 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
18:16:19
18:16:19 self = Retry(total=0, connect=None, read=False, redirect=None, status=None)
18:16:19 method = 'PUT'
18:16:19 url = '/rests/data/ietf-network:networks/network=openroadm-topology/ietf-network-topology:link=ROADMA01-DEG2-DEG2-TTP-TXRXtoROADMB01-DEG1-DEG1-TTP-TXRX/org-openroadm-network-topology:OMS-attributes/span'
18:16:19 response = None
18:16:19 error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')
18:16:19 _pool =
18:16:19 _stacktrace =
18:16:19
18:16:19 def increment(
18:16:19 self,
18:16:19 method: str | None = None,
18:16:19 url: str | None = None,
18:16:19 response: BaseHTTPResponse | None = None,
18:16:19 error: Exception | None = None,
18:16:19 _pool: ConnectionPool | None = None,
18:16:19 _stacktrace: TracebackType | None = None,
18:16:19 ) -> Self:
18:16:19 """Return a new Retry object with incremented retry counters.
18:16:19
18:16:19 :param response: A response object, or None, if the server did not
18:16:19 return a response.
18:16:19 :type response: :class:`~urllib3.response.BaseHTTPResponse`
18:16:19 :param Exception error: An error encountered during the request, or
18:16:19 None if the response was received successfully.
18:16:19
18:16:19 :return: A new ``Retry`` object.
18:16:19 """
18:16:19 if self.total is False and error:
18:16:19 # Disabled, indicate to re-raise the error.
18:16:19 raise reraise(type(error), error, _stacktrace)
18:16:19
18:16:19 total = self.total
18:16:19 if total is not None:
18:16:19 total -= 1
18:16:19
18:16:19 connect = self.connect
18:16:19 read = self.read
18:16:19 redirect = self.redirect
18:16:19 status_count = self.status
18:16:19 other = self.other
18:16:19 cause = "unknown"
18:16:19 status = None
18:16:19 redirect_location = None
18:16:19
18:16:19 if error and self._is_connection_error(error):
18:16:19 # Connect retry?
18:16:19 if connect is False:
18:16:19 raise reraise(type(error), error, _stacktrace)
18:16:19 elif connect is not None:
18:16:19 connect -= 1
18:16:19
18:16:19 elif error and self._is_read_error(error):
18:16:19 # Read retry?
18:16:19 if read is False or method is None or not self._is_method_retryable(method):
18:16:19 raise reraise(type(error), error, _stacktrace)
18:16:19 elif read is not None:
18:16:19 read -= 1
18:16:19
18:16:19 elif error:
18:16:19 # Other retry?
18:16:19 if other is not None:
18:16:19 other -= 1
18:16:19
18:16:19 elif response and response.get_redirect_location():
18:16:19 # Redirect retry?
18:16:19 if redirect is not None:
18:16:19 redirect -= 1
18:16:19 cause = "too many redirects"
18:16:19 response_redirect_location = response.get_redirect_location()
18:16:19 if response_redirect_location:
18:16:19 redirect_location = response_redirect_location
18:16:19 status = response.status
18:16:19
18:16:19 else:
18:16:19 # Incrementing because of a server error like a 500 in
18:16:19 # status_forcelist and the given method is in the allowed_methods
18:16:19 cause = ResponseError.GENERIC_ERROR
18:16:19 if response and response.status:
18:16:19 if status_count is not None:
18:16:19 status_count -= 1
18:16:19 cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status)
18:16:19 status = response.status
18:16:19
18:16:19 history = self.history + (
18:16:19 RequestHistory(method, url, error, status, redirect_location),
18:16:19 )
18:16:19
18:16:19 new_retry = self.new(
18:16:19 total=total,
18:16:19 connect=connect,
18:16:19 read=read,
18:16:19 redirect=redirect,
18:16:19 status=status_count,
18:16:19 other=other,
18:16:19 history=history,
18:16:19 )
18:16:19
18:16:19 if new_retry.is_exhausted():
18:16:19 reason = error or ResponseError(cause)
18:16:19 > raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type]
18:16:19 E urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=8182): Max retries exceeded with url: /rests/data/ietf-network:networks/network=openroadm-topology/ietf-network-topology:link=ROADMA01-DEG2-DEG2-TTP-TXRXtoROADMB01-DEG1-DEG1-TTP-TXRX/org-openroadm-network-topology:OMS-attributes/span (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused'))
18:16:19
18:16:19 ../.tox/tests121/lib/python3.11/site-packages/urllib3/util/retry.py:519: MaxRetryError
18:16:19
18:16:19 During handling of the above exception, another exception occurred:
18:16:19
18:16:19 self =
18:16:19
18:16:19 def test_22_omsAttributes_ROADMA_ROADMB(self):
18:16:19 # Config ROADMA01-ROADMB01 oms-attributes
18:16:19 data = {"span": {
18:16:19 "auto-spanloss": "true",
18:16:19 "engineered-spanloss": 12.2,
18:16:19 "link-concatenation": [{
18:16:19 "SRLG-Id": 0,
18:16:19 "fiber-type": "smf",
18:16:19 "SRLG-length": 100000,
18:16:19 "pmd": 0.5}]}}
18:16:19 > response = test_utils.add_oms_attr_request(
18:16:19 "ROADMA01-DEG2-DEG2-TTP-TXRXtoROADMB01-DEG1-DEG1-TTP-TXRX", data)
18:16:19
18:16:19 transportpce_tests/1.2.1/test03_topology.py:500:
18:16:19 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
18:16:19 transportpce_tests/common/test_utils.py:558: in add_oms_attr_request
18:16:19 response = put_request(url2.format('{}', network, link), oms_attr)
18:16:19 transportpce_tests/common/test_utils.py:124: in put_request
18:16:19 return requests.request(
18:16:19 ../.tox/tests121/lib/python3.11/site-packages/requests/api.py:59: in request
18:16:19 return session.request(method=method, url=url, **kwargs)
18:16:19 ../.tox/tests121/lib/python3.11/site-packages/requests/sessions.py:589: in request
18:16:19 resp = self.send(prep, **send_kwargs)
18:16:19 ../.tox/tests121/lib/python3.11/site-packages/requests/sessions.py:703: in send
18:16:19 r = adapter.send(request, **kwargs)
18:16:19 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
18:16:19
18:16:19 self =
18:16:19 request = , stream = False
18:16:19 timeout = Timeout(connect=10, read=10, total=None), verify = True, cert = None
18:16:19 proxies = OrderedDict()
18:16:19
18:16:19 def send(
18:16:19 self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None
18:16:19 ):
18:16:19 """Sends PreparedRequest object. Returns Response object.
18:16:19
18:16:19 :param request: The :class:`PreparedRequest ` being sent.
18:16:19 :param stream: (optional) Whether to stream the request content.
18:16:19 :param timeout: (optional) How long to wait for the server to send
18:16:19 data before giving up, as a float, or a :ref:`(connect timeout,
18:16:19 read timeout) ` tuple.
18:16:19 :type timeout: float or tuple or urllib3 Timeout object
18:16:19 :param verify: (optional) Either a boolean, in which case it controls whether
18:16:19 we verify the server's TLS certificate, or a string, in which case it
18:16:19 must be a path to a CA bundle to use
18:16:19 :param cert: (optional) Any user-provided SSL certificate to be trusted.
18:16:19 :param proxies: (optional) The proxies dictionary to apply to the request.
18:16:19 :rtype: requests.Response
18:16:19 """
18:16:19
18:16:19 try:
18:16:19 conn = self.get_connection_with_tls_context(
18:16:19 request, verify, proxies=proxies, cert=cert
18:16:19 )
18:16:19 except LocationValueError as e:
18:16:19 raise InvalidURL(e, request=request)
18:16:19
18:16:19 self.cert_verify(conn, request.url, verify, cert)
18:16:19 url = self.request_url(request, proxies)
18:16:19 self.add_headers(
18:16:19 request,
18:16:19 stream=stream,
18:16:19 timeout=timeout,
18:16:19 verify=verify,
18:16:19 cert=cert,
18:16:19 proxies=proxies,
18:16:19 )
18:16:19
18:16:19 chunked = not (request.body is None or "Content-Length" in request.headers)
18:16:19
18:16:19 if isinstance(timeout, tuple):
18:16:19 try:
18:16:19 connect, read = timeout
18:16:19 timeout = TimeoutSauce(connect=connect, read=read)
18:16:19 except ValueError:
18:16:19 raise ValueError(
18:16:19 f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, "
18:16:19 f"or a single float to set both timeouts to the same value."
18:16:19 ) 18:16:19 elif isinstance(timeout, TimeoutSauce): 18:16:19 pass 18:16:19 else: 18:16:19 timeout = TimeoutSauce(connect=timeout, read=timeout) 18:16:19 18:16:19 try: 18:16:19 resp = conn.urlopen( 18:16:19 method=request.method, 18:16:19 url=url, 18:16:19 body=request.body, 18:16:19 headers=request.headers, 18:16:19 redirect=False, 18:16:19 assert_same_host=False, 18:16:19 preload_content=False, 18:16:19 decode_content=False, 18:16:19 retries=self.max_retries, 18:16:19 timeout=timeout, 18:16:19 chunked=chunked, 18:16:19 ) 18:16:19 18:16:19 except (ProtocolError, OSError) as err: 18:16:19 raise ConnectionError(err, request=request) 18:16:19 18:16:19 except MaxRetryError as e: 18:16:19 if isinstance(e.reason, ConnectTimeoutError): 18:16:19 # TODO: Remove this in 3.0.0: see #2811 18:16:19 if not isinstance(e.reason, NewConnectionError): 18:16:19 raise ConnectTimeout(e, request=request) 18:16:19 18:16:19 if isinstance(e.reason, ResponseError): 18:16:19 raise RetryError(e, request=request) 18:16:19 18:16:19 if isinstance(e.reason, _ProxyError): 18:16:19 raise ProxyError(e, request=request) 18:16:19 18:16:19 if isinstance(e.reason, _SSLError): 18:16:19 # This branch is for urllib3 v1.22 and later. 
18:16:19 raise SSLError(e, request=request) 18:16:19 18:16:19 > raise ConnectionError(e, request=request) 18:16:19 E requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=8182): Max retries exceeded with url: /rests/data/ietf-network:networks/network=openroadm-topology/ietf-network-topology:link=ROADMA01-DEG2-DEG2-TTP-TXRXtoROADMB01-DEG1-DEG1-TTP-TXRX/org-openroadm-network-topology:OMS-attributes/span (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 18:16:19 18:16:19 ../.tox/tests121/lib/python3.11/site-packages/requests/adapters.py:700: ConnectionError 18:16:19 _______ TransportPCETopologyTesting.test_23_omsAttributes_ROADMB_ROADMA ________ 18:16:19 18:16:19 self = 18:16:19 18:16:19 def _new_conn(self) -> socket.socket: 18:16:19 """Establish a socket connection and set nodelay settings on it. 18:16:19 18:16:19 :return: New socket connection. 18:16:19 """ 18:16:19 try: 18:16:19 > sock = connection.create_connection( 18:16:19 (self._dns_host, self.port), 18:16:19 self.timeout, 18:16:19 source_address=self.source_address, 18:16:19 socket_options=self.socket_options, 18:16:19 ) 18:16:19 18:16:19 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:199: 18:16:19 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 18:16:19 ../.tox/tests121/lib/python3.11/site-packages/urllib3/util/connection.py:85: in create_connection 18:16:19 raise err 18:16:19 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 18:16:19 18:16:19 address = ('localhost', 8182), timeout = 10, source_address = None 18:16:19 socket_options = [(6, 1, 1)] 18:16:19 18:16:19 def create_connection( 18:16:19 address: tuple[str, int], 18:16:19 timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 18:16:19 source_address: tuple[str, int] | None = None, 18:16:19 socket_options: _TYPE_SOCKET_OPTIONS | None = None, 18:16:19 ) -> socket.socket: 18:16:19 """Connect to *address* 
and return the socket object. 18:16:19 18:16:19 Convenience function. Connect to *address* (a 2-tuple ``(host, 18:16:19 port)``) and return the socket object. Passing the optional 18:16:19 *timeout* parameter will set the timeout on the socket instance 18:16:19 before attempting to connect. If no *timeout* is supplied, the 18:16:19 global default timeout setting returned by :func:`socket.getdefaulttimeout` 18:16:19 is used. If *source_address* is set it must be a tuple of (host, port) 18:16:19 for the socket to bind as a source address before making the connection. 18:16:19 An host of '' or port 0 tells the OS to use the default. 18:16:19 """ 18:16:19 18:16:19 host, port = address 18:16:19 if host.startswith("["): 18:16:19 host = host.strip("[]") 18:16:19 err = None 18:16:19 18:16:19 # Using the value from allowed_gai_family() in the context of getaddrinfo lets 18:16:19 # us select whether to work with IPv4 DNS records, IPv6 records, or both. 18:16:19 # The original create_connection function always returns all records. 18:16:19 family = allowed_gai_family() 18:16:19 18:16:19 try: 18:16:19 host.encode("idna") 18:16:19 except UnicodeError: 18:16:19 raise LocationParseError(f"'{host}', label empty or too long") from None 18:16:19 18:16:19 for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM): 18:16:19 af, socktype, proto, canonname, sa = res 18:16:19 sock = None 18:16:19 try: 18:16:19 sock = socket.socket(af, socktype, proto) 18:16:19 18:16:19 # If provided, set socket level options before connecting. 
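Before resolving the host, the `create_connection` source above strips IPv6 brackets and validates the name through Python's `idna` codec, mapping a codec failure to `LocationParseError`. A stdlib-only sketch of those two preliminary steps (the sample host strings are hypothetical, not taken from this run):

```python
# Bracketed IPv6 literals lose their brackets before resolution,
# mirroring the host.strip("[]") step shown above.
host = "[::1]"
if host.startswith("["):
    host = host.strip("[]")
print(host)  # ::1

# The idna codec accepts ordinary hostnames unchanged...
print("localhost".encode("idna"))  # b'localhost'

# ...and raises UnicodeError for an empty label, which
# create_connection reports as "label empty or too long".
try:
    "a..b".encode("idna")
except UnicodeError as exc:
    print("invalid label:", exc)
```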
18:16:19 _set_socket_options(sock, socket_options) 18:16:19 18:16:19 if timeout is not _DEFAULT_TIMEOUT: 18:16:19 sock.settimeout(timeout) 18:16:19 if source_address: 18:16:19 sock.bind(source_address) 18:16:19 > sock.connect(sa) 18:16:19 E ConnectionRefusedError: [Errno 111] Connection refused 18:16:19 18:16:19 ../.tox/tests121/lib/python3.11/site-packages/urllib3/util/connection.py:73: ConnectionRefusedError 18:16:19 18:16:19 The above exception was the direct cause of the following exception: 18:16:19 18:16:19 self = 18:16:19 method = 'PUT' 18:16:19 url = '/rests/data/ietf-network:networks/network=openroadm-topology/ietf-network-topology:link=ROADMB01-DEG1-DEG1-TTP-TXRXtoROADMA01-DEG2-DEG2-TTP-TXRX/org-openroadm-network-topology:OMS-attributes/span' 18:16:19 body = '{"span": {"auto-spanloss": "true", "engineered-spanloss": 12.2, "link-concatenation": [{"SRLG-Id": 0, "fiber-type": "smf", "SRLG-length": 100000, "pmd": 0.5}]}}' 18:16:19 headers = {'User-Agent': 'python-requests/2.32.3', 'Accept-Encoding': 'gzip, deflate', 'Accept': 'application/json', 'Connection': 'keep-alive', 'Content-Type': 'application/json', 'Content-Length': '160', 'Authorization': 'Basic YWRtaW46YWRtaW4='} 18:16:19 retries = Retry(total=0, connect=None, read=False, redirect=None, status=None) 18:16:19 redirect = False, assert_same_host = False 18:16:19 timeout = Timeout(connect=10, read=10, total=None), pool_timeout = None 18:16:19 release_conn = False, chunked = False, body_pos = None, preload_content = False 18:16:19 decode_content = False, response_kw = {} 18:16:19 parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/rests/data/ietf-network:networks/network=openroadm-topology/i...1-TTP-TXRXtoROADMA01-DEG2-DEG2-TTP-TXRX/org-openroadm-network-topology:OMS-attributes/span', query=None, fragment=None) 18:16:19 destination_scheme = None, conn = None, release_this_conn = True 18:16:19 http_tunnel_required = False, err = None, clean_exit = False 18:16:19 18:16:19 def urlopen( # 
type: ignore[override] 18:16:19 self, 18:16:19 method: str, 18:16:19 url: str, 18:16:19 body: _TYPE_BODY | None = None, 18:16:19 headers: typing.Mapping[str, str] | None = None, 18:16:19 retries: Retry | bool | int | None = None, 18:16:19 redirect: bool = True, 18:16:19 assert_same_host: bool = True, 18:16:19 timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 18:16:19 pool_timeout: int | None = None, 18:16:19 release_conn: bool | None = None, 18:16:19 chunked: bool = False, 18:16:19 body_pos: _TYPE_BODY_POSITION | None = None, 18:16:19 preload_content: bool = True, 18:16:19 decode_content: bool = True, 18:16:19 **response_kw: typing.Any, 18:16:19 ) -> BaseHTTPResponse: 18:16:19 """ 18:16:19 Get a connection from the pool and perform an HTTP request. This is the 18:16:19 lowest level call for making a request, so you'll need to specify all 18:16:19 the raw details. 18:16:19 18:16:19 .. note:: 18:16:19 18:16:19 More commonly, it's appropriate to use a convenience method 18:16:19 such as :meth:`request`. 18:16:19 18:16:19 .. note:: 18:16:19 18:16:19 `release_conn` will only behave as expected if 18:16:19 `preload_content=False` because we want to make 18:16:19 `preload_content=False` the default behaviour someday soon without 18:16:19 breaking backwards compatibility. 18:16:19 18:16:19 :param method: 18:16:19 HTTP request method (such as GET, POST, PUT, etc.) 18:16:19 18:16:19 :param url: 18:16:19 The URL to perform the request on. 18:16:19 18:16:19 :param body: 18:16:19 Data to send in the request body, either :class:`str`, :class:`bytes`, 18:16:19 an iterable of :class:`str`/:class:`bytes`, or a file-like object. 18:16:19 18:16:19 :param headers: 18:16:19 Dictionary of custom headers to send, such as User-Agent, 18:16:19 If-None-Match, etc. If None, pool headers are used. If provided, 18:16:19 these headers completely replace any pool-specific headers. 
18:16:19 18:16:19 :param retries: 18:16:19 Configure the number of retries to allow before raising a 18:16:19 :class:`~urllib3.exceptions.MaxRetryError` exception. 18:16:19 18:16:19 If ``None`` (default) will retry 3 times, see ``Retry.DEFAULT``. Pass a 18:16:19 :class:`~urllib3.util.retry.Retry` object for fine-grained control 18:16:19 over different types of retries. 18:16:19 Pass an integer number to retry connection errors that many times, 18:16:19 but no other types of errors. Pass zero to never retry. 18:16:19 18:16:19 If ``False``, then retries are disabled and any exception is raised 18:16:19 immediately. Also, instead of raising a MaxRetryError on redirects, 18:16:19 the redirect response will be returned. 18:16:19 18:16:19 :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. 18:16:19 18:16:19 :param redirect: 18:16:19 If True, automatically handle redirects (status codes 301, 302, 18:16:19 303, 307, 308). Each redirect counts as a retry. Disabling retries 18:16:19 will disable redirect, too. 18:16:19 18:16:19 :param assert_same_host: 18:16:19 If ``True``, will make sure that the host of the pool requests is 18:16:19 consistent else will raise HostChangedError. When ``False``, you can 18:16:19 use the pool on an HTTP proxy and request foreign hosts. 18:16:19 18:16:19 :param timeout: 18:16:19 If specified, overrides the default timeout for this one 18:16:19 request. It may be a float (in seconds) or an instance of 18:16:19 :class:`urllib3.util.Timeout`. 18:16:19 18:16:19 :param pool_timeout: 18:16:19 If set and the pool is set to block=True, then this method will 18:16:19 block for ``pool_timeout`` seconds and raise EmptyPoolError if no 18:16:19 connection is available within the time period. 18:16:19 18:16:19 :param bool preload_content: 18:16:19 If True, the response's body will be preloaded into memory. 
18:16:19 18:16:19 :param bool decode_content: 18:16:19 If True, will attempt to decode the body based on the 18:16:19 'content-encoding' header. 18:16:19 18:16:19 :param release_conn: 18:16:19 If False, then the urlopen call will not release the connection 18:16:19 back into the pool once a response is received (but will release if 18:16:19 you read the entire contents of the response such as when 18:16:19 `preload_content=True`). This is useful if you're not preloading 18:16:19 the response's content immediately. You will need to call 18:16:19 ``r.release_conn()`` on the response ``r`` to return the connection 18:16:19 back into the pool. If None, it takes the value of ``preload_content`` 18:16:19 which defaults to ``True``. 18:16:19 18:16:19 :param bool chunked: 18:16:19 If True, urllib3 will send the body using chunked transfer 18:16:19 encoding. Otherwise, urllib3 will send the body using the standard 18:16:19 content-length form. Defaults to False. 18:16:19 18:16:19 :param int body_pos: 18:16:19 Position to seek to in file-like body in the event of a retry or 18:16:19 redirect. Typically this won't need to be set because urllib3 will 18:16:19 auto-populate the value when needed. 
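The `chunked` flag documented above is chosen earlier in this traceback by the requests adapter with `chunked = not (request.body is None or "Content-Length" in request.headers)`. A stdlib sketch of that rule (the function name is made up for illustration):

```python
def uses_chunked_encoding(body, headers):
    # Mirrors the adapter logic quoted in the traceback: chunk only
    # when there is a body whose length is not already declared.
    return not (body is None or "Content-Length" in headers)

print(uses_chunked_encoding(None, {}))                       # False: no body
print(uses_chunked_encoding(b"x", {"Content-Length": "1"}))  # False: length known
print(uses_chunked_encoding(iter([b"x"]), {}))               # True: streaming body
```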
18:16:19 """ 18:16:19 parsed_url = parse_url(url) 18:16:19 destination_scheme = parsed_url.scheme 18:16:19 18:16:19 if headers is None: 18:16:19 headers = self.headers 18:16:19 18:16:19 if not isinstance(retries, Retry): 18:16:19 retries = Retry.from_int(retries, redirect=redirect, default=self.retries) 18:16:19 18:16:19 if release_conn is None: 18:16:19 release_conn = preload_content 18:16:19 18:16:19 # Check host 18:16:19 if assert_same_host and not self.is_same_host(url): 18:16:19 raise HostChangedError(self, url, retries) 18:16:19 18:16:19 # Ensure that the URL we're connecting to is properly encoded 18:16:19 if url.startswith("/"): 18:16:19 url = to_str(_encode_target(url)) 18:16:19 else: 18:16:19 url = to_str(parsed_url.url) 18:16:19 18:16:19 conn = None 18:16:19 18:16:19 # Track whether `conn` needs to be released before 18:16:19 # returning/raising/recursing. Update this variable if necessary, and 18:16:19 # leave `release_conn` constant throughout the function. That way, if 18:16:19 # the function recurses, the original value of `release_conn` will be 18:16:19 # passed down into the recursive call, and its value will be respected. 18:16:19 # 18:16:19 # See issue #651 [1] for details. 18:16:19 # 18:16:19 # [1] 18:16:19 release_this_conn = release_conn 18:16:19 18:16:19 http_tunnel_required = connection_requires_http_tunnel( 18:16:19 self.proxy, self.proxy_config, destination_scheme 18:16:19 ) 18:16:19 18:16:19 # Merge the proxy headers. Only done when not using HTTP CONNECT. We 18:16:19 # have to copy the headers dict so we can safely change it without those 18:16:19 # changes being reflected in anyone else's copy. 18:16:19 if not http_tunnel_required: 18:16:19 headers = headers.copy() # type: ignore[attr-defined] 18:16:19 headers.update(self.proxy_headers) # type: ignore[union-attr] 18:16:19 18:16:19 # Must keep the exception bound to a separate variable or else Python 3 18:16:19 # complains about UnboundLocalError. 
18:16:19 err = None 18:16:19 18:16:19 # Keep track of whether we cleanly exited the except block. This 18:16:19 # ensures we do proper cleanup in finally. 18:16:19 clean_exit = False 18:16:19 18:16:19 # Rewind body position, if needed. Record current position 18:16:19 # for future rewinds in the event of a redirect/retry. 18:16:19 body_pos = set_file_position(body, body_pos) 18:16:19 18:16:19 try: 18:16:19 # Request a connection from the queue. 18:16:19 timeout_obj = self._get_timeout(timeout) 18:16:19 conn = self._get_conn(timeout=pool_timeout) 18:16:19 18:16:19 conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] 18:16:19 18:16:19 # Is this a closed/new connection that requires CONNECT tunnelling? 18:16:19 if self.proxy is not None and http_tunnel_required and conn.is_closed: 18:16:19 try: 18:16:19 self._prepare_proxy(conn) 18:16:19 except (BaseSSLError, OSError, SocketTimeout) as e: 18:16:19 self._raise_timeout( 18:16:19 err=e, url=self.proxy.url, timeout_value=conn.timeout 18:16:19 ) 18:16:19 raise 18:16:19 18:16:19 # If we're going to release the connection in ``finally:``, then 18:16:19 # the response doesn't need to know about the connection. Otherwise 18:16:19 # it will also try to release it and we'll have a double-release 18:16:19 # mess. 
18:16:19 response_conn = conn if not release_conn else None 18:16:19 18:16:19 # Make the request on the HTTPConnection object 18:16:19 > response = self._make_request( 18:16:19 conn, 18:16:19 method, 18:16:19 url, 18:16:19 timeout=timeout_obj, 18:16:19 body=body, 18:16:19 headers=headers, 18:16:19 chunked=chunked, 18:16:19 retries=retries, 18:16:19 response_conn=response_conn, 18:16:19 preload_content=preload_content, 18:16:19 decode_content=decode_content, 18:16:19 **response_kw, 18:16:19 ) 18:16:19 18:16:19 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connectionpool.py:789: 18:16:19 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 18:16:19 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connectionpool.py:495: in _make_request 18:16:19 conn.request( 18:16:19 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:441: in request 18:16:19 self.endheaders() 18:16:19 /opt/pyenv/versions/3.11.7/lib/python3.11/http/client.py:1289: in endheaders 18:16:19 self._send_output(message_body, encode_chunked=encode_chunked) 18:16:19 /opt/pyenv/versions/3.11.7/lib/python3.11/http/client.py:1048: in _send_output 18:16:19 self.send(msg) 18:16:19 /opt/pyenv/versions/3.11.7/lib/python3.11/http/client.py:986: in send 18:16:19 self.connect() 18:16:19 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:279: in connect 18:16:19 self.sock = self._new_conn() 18:16:19 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 18:16:19 18:16:19 self = 18:16:19 18:16:19 def _new_conn(self) -> socket.socket: 18:16:19 """Establish a socket connection and set nodelay settings on it. 18:16:19 18:16:19 :return: New socket connection. 
18:16:19 """ 18:16:19 try: 18:16:19 sock = connection.create_connection( 18:16:19 (self._dns_host, self.port), 18:16:19 self.timeout, 18:16:19 source_address=self.source_address, 18:16:19 socket_options=self.socket_options, 18:16:19 ) 18:16:19 except socket.gaierror as e: 18:16:19 raise NameResolutionError(self.host, self, e) from e 18:16:19 except SocketTimeout as e: 18:16:19 raise ConnectTimeoutError( 18:16:19 self, 18:16:19 f"Connection to {self.host} timed out. (connect timeout={self.timeout})", 18:16:19 ) from e 18:16:19 18:16:19 except OSError as e: 18:16:19 > raise NewConnectionError( 18:16:19 self, f"Failed to establish a new connection: {e}" 18:16:19 ) from e 18:16:19 E urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused 18:16:19 18:16:19 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:214: NewConnectionError 18:16:19 18:16:19 The above exception was the direct cause of the following exception: 18:16:19 18:16:19 self = 18:16:19 request = , stream = False 18:16:19 timeout = Timeout(connect=10, read=10, total=None), verify = True, cert = None 18:16:19 proxies = OrderedDict() 18:16:19 18:16:19 def send( 18:16:19 self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 18:16:19 ): 18:16:19 """Sends PreparedRequest object. Returns Response object. 18:16:19 18:16:19 :param request: The :class:`PreparedRequest ` being sent. 18:16:19 :param stream: (optional) Whether to stream the request content. 18:16:19 :param timeout: (optional) How long to wait for the server to send 18:16:19 data before giving up, as a float, or a :ref:`(connect timeout, 18:16:19 read timeout) ` tuple. 
18:16:19 :type timeout: float or tuple or urllib3 Timeout object 18:16:19 :param verify: (optional) Either a boolean, in which case it controls whether 18:16:19 we verify the server's TLS certificate, or a string, in which case it 18:16:19 must be a path to a CA bundle to use 18:16:19 :param cert: (optional) Any user-provided SSL certificate to be trusted. 18:16:19 :param proxies: (optional) The proxies dictionary to apply to the request. 18:16:19 :rtype: requests.Response 18:16:19 """ 18:16:19 18:16:19 try: 18:16:19 conn = self.get_connection_with_tls_context( 18:16:19 request, verify, proxies=proxies, cert=cert 18:16:19 ) 18:16:19 except LocationValueError as e: 18:16:19 raise InvalidURL(e, request=request) 18:16:19 18:16:19 self.cert_verify(conn, request.url, verify, cert) 18:16:19 url = self.request_url(request, proxies) 18:16:19 self.add_headers( 18:16:19 request, 18:16:19 stream=stream, 18:16:19 timeout=timeout, 18:16:19 verify=verify, 18:16:19 cert=cert, 18:16:19 proxies=proxies, 18:16:19 ) 18:16:19 18:16:19 chunked = not (request.body is None or "Content-Length" in request.headers) 18:16:19 18:16:19 if isinstance(timeout, tuple): 18:16:19 try: 18:16:19 connect, read = timeout 18:16:19 timeout = TimeoutSauce(connect=connect, read=read) 18:16:19 except ValueError: 18:16:19 raise ValueError( 18:16:19 f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 18:16:19 f"or a single float to set both timeouts to the same value." 
18:16:19 ) 18:16:19 elif isinstance(timeout, TimeoutSauce): 18:16:19 pass 18:16:19 else: 18:16:19 timeout = TimeoutSauce(connect=timeout, read=timeout) 18:16:19 18:16:19 try: 18:16:19 > resp = conn.urlopen( 18:16:19 method=request.method, 18:16:19 url=url, 18:16:19 body=request.body, 18:16:19 headers=request.headers, 18:16:19 redirect=False, 18:16:19 assert_same_host=False, 18:16:19 preload_content=False, 18:16:19 decode_content=False, 18:16:19 retries=self.max_retries, 18:16:19 timeout=timeout, 18:16:19 chunked=chunked, 18:16:19 ) 18:16:19 18:16:19 ../.tox/tests121/lib/python3.11/site-packages/requests/adapters.py:667: 18:16:19 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 18:16:19 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connectionpool.py:843: in urlopen 18:16:19 retries = retries.increment( 18:16:19 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 18:16:19 18:16:19 self = Retry(total=0, connect=None, read=False, redirect=None, status=None) 18:16:19 method = 'PUT' 18:16:19 url = '/rests/data/ietf-network:networks/network=openroadm-topology/ietf-network-topology:link=ROADMB01-DEG1-DEG1-TTP-TXRXtoROADMA01-DEG2-DEG2-TTP-TXRX/org-openroadm-network-topology:OMS-attributes/span' 18:16:19 response = None 18:16:19 error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused') 18:16:19 _pool = 18:16:19 _stacktrace = 18:16:19 18:16:19 def increment( 18:16:19 self, 18:16:19 method: str | None = None, 18:16:19 url: str | None = None, 18:16:19 response: BaseHTTPResponse | None = None, 18:16:19 error: Exception | None = None, 18:16:19 _pool: ConnectionPool | None = None, 18:16:19 _stacktrace: TracebackType | None = None, 18:16:19 ) -> Self: 18:16:19 """Return a new Retry object with incremented retry counters. 18:16:19 18:16:19 :param response: A response object, or None, if the server did not 18:16:19 return a response. 
18:16:19 :type response: :class:`~urllib3.response.BaseHTTPResponse` 18:16:19 :param Exception error: An error encountered during the request, or 18:16:19 None if the response was received successfully. 18:16:19 18:16:19 :return: A new ``Retry`` object. 18:16:19 """ 18:16:19 if self.total is False and error: 18:16:19 # Disabled, indicate to re-raise the error. 18:16:19 raise reraise(type(error), error, _stacktrace) 18:16:19 18:16:19 total = self.total 18:16:19 if total is not None: 18:16:19 total -= 1 18:16:19 18:16:19 connect = self.connect 18:16:19 read = self.read 18:16:19 redirect = self.redirect 18:16:19 status_count = self.status 18:16:19 other = self.other 18:16:19 cause = "unknown" 18:16:19 status = None 18:16:19 redirect_location = None 18:16:19 18:16:19 if error and self._is_connection_error(error): 18:16:19 # Connect retry? 18:16:19 if connect is False: 18:16:19 raise reraise(type(error), error, _stacktrace) 18:16:19 elif connect is not None: 18:16:19 connect -= 1 18:16:19 18:16:19 elif error and self._is_read_error(error): 18:16:19 # Read retry? 18:16:19 if read is False or method is None or not self._is_method_retryable(method): 18:16:19 raise reraise(type(error), error, _stacktrace) 18:16:19 elif read is not None: 18:16:19 read -= 1 18:16:19 18:16:19 elif error: 18:16:19 # Other retry? 18:16:19 if other is not None: 18:16:19 other -= 1 18:16:19 18:16:19 elif response and response.get_redirect_location(): 18:16:19 # Redirect retry? 
18:16:19 if redirect is not None: 18:16:19 redirect -= 1 18:16:19 cause = "too many redirects" 18:16:19 response_redirect_location = response.get_redirect_location() 18:16:19 if response_redirect_location: 18:16:19 redirect_location = response_redirect_location 18:16:19 status = response.status 18:16:19 18:16:19 else: 18:16:19 # Incrementing because of a server error like a 500 in 18:16:19 # status_forcelist and the given method is in the allowed_methods 18:16:19 cause = ResponseError.GENERIC_ERROR 18:16:19 if response and response.status: 18:16:19 if status_count is not None: 18:16:19 status_count -= 1 18:16:19 cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status) 18:16:19 status = response.status 18:16:19 18:16:19 history = self.history + ( 18:16:19 RequestHistory(method, url, error, status, redirect_location), 18:16:19 ) 18:16:19 18:16:19 new_retry = self.new( 18:16:19 total=total, 18:16:19 connect=connect, 18:16:19 read=read, 18:16:19 redirect=redirect, 18:16:19 status=status_count, 18:16:19 other=other, 18:16:19 history=history, 18:16:19 ) 18:16:19 18:16:19 if new_retry.is_exhausted(): 18:16:19 reason = error or ResponseError(cause) 18:16:19 > raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type] 18:16:19 E urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=8182): Max retries exceeded with url: /rests/data/ietf-network:networks/network=openroadm-topology/ietf-network-topology:link=ROADMB01-DEG1-DEG1-TTP-TXRXtoROADMA01-DEG2-DEG2-TTP-TXRX/org-openroadm-network-topology:OMS-attributes/span (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 18:16:19 18:16:19 ../.tox/tests121/lib/python3.11/site-packages/urllib3/util/retry.py:519: MaxRetryError 18:16:19 18:16:19 During handling of the above exception, another exception occurred: 18:16:19 18:16:19 self = 18:16:19 18:16:19 def test_23_omsAttributes_ROADMB_ROADMA(self): 18:16:19 # Config 
ROADMB01-ROADMA01 oms-attributes 18:16:19 data = {"span": { 18:16:19 "auto-spanloss": "true", 18:16:19 "engineered-spanloss": 12.2, 18:16:19 "link-concatenation": [{ 18:16:19 "SRLG-Id": 0, 18:16:19 "fiber-type": "smf", 18:16:19 "SRLG-length": 100000, 18:16:19 "pmd": 0.5}]}} 18:16:19 > response = test_utils.add_oms_attr_request( 18:16:19 "ROADMB01-DEG1-DEG1-TTP-TXRXtoROADMA01-DEG2-DEG2-TTP-TXRX", data) 18:16:19 18:16:19 transportpce_tests/1.2.1/test03_topology.py:514: 18:16:19 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 18:16:19 transportpce_tests/common/test_utils.py:558: in add_oms_attr_request 18:16:19 response = put_request(url2.format('{}', network, link), oms_attr) 18:16:19 transportpce_tests/common/test_utils.py:124: in put_request 18:16:19 return requests.request( 18:16:19 ../.tox/tests121/lib/python3.11/site-packages/requests/api.py:59: in request 18:16:19 return session.request(method=method, url=url, **kwargs) 18:16:19 ../.tox/tests121/lib/python3.11/site-packages/requests/sessions.py:589: in request 18:16:19 resp = self.send(prep, **send_kwargs) 18:16:19 ../.tox/tests121/lib/python3.11/site-packages/requests/sessions.py:703: in send 18:16:19 r = adapter.send(request, **kwargs) 18:16:19 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 18:16:19 18:16:19 self = 18:16:19 request = , stream = False 18:16:19 timeout = Timeout(connect=10, read=10, total=None), verify = True, cert = None 18:16:19 proxies = OrderedDict() 18:16:19 18:16:19 def send( 18:16:19 self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 18:16:19 ): 18:16:19 """Sends PreparedRequest object. Returns Response object. 18:16:19 18:16:19 :param request: The :class:`PreparedRequest ` being sent. 18:16:19 :param stream: (optional) Whether to stream the request content. 
18:16:19 :param timeout: (optional) How long to wait for the server to send 18:16:19 data before giving up, as a float, or a :ref:`(connect timeout, 18:16:19 read timeout) ` tuple. 18:16:19 :type timeout: float or tuple or urllib3 Timeout object 18:16:19 :param verify: (optional) Either a boolean, in which case it controls whether 18:16:19 we verify the server's TLS certificate, or a string, in which case it 18:16:19 must be a path to a CA bundle to use 18:16:19 :param cert: (optional) Any user-provided SSL certificate to be trusted. 18:16:19 :param proxies: (optional) The proxies dictionary to apply to the request. 18:16:19 :rtype: requests.Response 18:16:19 """ 18:16:19 18:16:19 try: 18:16:19 conn = self.get_connection_with_tls_context( 18:16:19 request, verify, proxies=proxies, cert=cert 18:16:19 ) 18:16:19 except LocationValueError as e: 18:16:19 raise InvalidURL(e, request=request) 18:16:19 18:16:19 self.cert_verify(conn, request.url, verify, cert) 18:16:19 url = self.request_url(request, proxies) 18:16:19 self.add_headers( 18:16:19 request, 18:16:19 stream=stream, 18:16:19 timeout=timeout, 18:16:19 verify=verify, 18:16:19 cert=cert, 18:16:19 proxies=proxies, 18:16:19 ) 18:16:19 18:16:19 chunked = not (request.body is None or "Content-Length" in request.headers) 18:16:19 18:16:19 if isinstance(timeout, tuple): 18:16:19 try: 18:16:19 connect, read = timeout 18:16:19 timeout = TimeoutSauce(connect=connect, read=read) 18:16:19 except ValueError: 18:16:19 raise ValueError( 18:16:19 f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 18:16:19 f"or a single float to set both timeouts to the same value." 
18:16:19 ) 18:16:19 elif isinstance(timeout, TimeoutSauce): 18:16:19 pass 18:16:19 else: 18:16:19 timeout = TimeoutSauce(connect=timeout, read=timeout) 18:16:19 18:16:19 try: 18:16:19 resp = conn.urlopen( 18:16:19 method=request.method, 18:16:19 url=url, 18:16:19 body=request.body, 18:16:19 headers=request.headers, 18:16:19 redirect=False, 18:16:19 assert_same_host=False, 18:16:19 preload_content=False, 18:16:19 decode_content=False, 18:16:19 retries=self.max_retries, 18:16:19 timeout=timeout, 18:16:19 chunked=chunked, 18:16:19 ) 18:16:19 18:16:19 except (ProtocolError, OSError) as err: 18:16:19 raise ConnectionError(err, request=request) 18:16:19 18:16:19 except MaxRetryError as e: 18:16:19 if isinstance(e.reason, ConnectTimeoutError): 18:16:19 # TODO: Remove this in 3.0.0: see #2811 18:16:19 if not isinstance(e.reason, NewConnectionError): 18:16:19 raise ConnectTimeout(e, request=request) 18:16:19 18:16:19 if isinstance(e.reason, ResponseError): 18:16:19 raise RetryError(e, request=request) 18:16:19 18:16:19 if isinstance(e.reason, _ProxyError): 18:16:19 raise ProxyError(e, request=request) 18:16:19 18:16:19 if isinstance(e.reason, _SSLError): 18:16:19 # This branch is for urllib3 v1.22 and later. 
18:16:19 raise SSLError(e, request=request) 18:16:19 18:16:19 > raise ConnectionError(e, request=request) 18:16:19 E requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=8182): Max retries exceeded with url: /rests/data/ietf-network:networks/network=openroadm-topology/ietf-network-topology:link=ROADMB01-DEG1-DEG1-TTP-TXRXtoROADMA01-DEG2-DEG2-TTP-TXRX/org-openroadm-network-topology:OMS-attributes/span (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 18:16:19 18:16:19 ../.tox/tests121/lib/python3.11/site-packages/requests/adapters.py:700: ConnectionError 18:16:19 _______ TransportPCETopologyTesting.test_24_omsAttributes_ROADMB_ROADMC ________ 18:16:19 18:16:19 self = 18:16:19 18:16:19 def _new_conn(self) -> socket.socket: 18:16:19 """Establish a socket connection and set nodelay settings on it. 18:16:19 18:16:19 :return: New socket connection. 18:16:19 """ 18:16:19 try: 18:16:19 > sock = connection.create_connection( 18:16:19 (self._dns_host, self.port), 18:16:19 self.timeout, 18:16:19 source_address=self.source_address, 18:16:19 socket_options=self.socket_options, 18:16:19 ) 18:16:19 18:16:19 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:199: 18:16:19 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 18:16:19 ../.tox/tests121/lib/python3.11/site-packages/urllib3/util/connection.py:85: in create_connection 18:16:19 raise err 18:16:19 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 18:16:19 18:16:19 address = ('localhost', 8182), timeout = 10, source_address = None 18:16:19 socket_options = [(6, 1, 1)] 18:16:19 18:16:19 def create_connection( 18:16:19 address: tuple[str, int], 18:16:19 timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 18:16:19 source_address: tuple[str, int] | None = None, 18:16:19 socket_options: _TYPE_SOCKET_OPTIONS | None = None, 18:16:19 ) -> socket.socket: 18:16:19 """Connect to *address* 
and return the socket object. 18:16:19 18:16:19 Convenience function. Connect to *address* (a 2-tuple ``(host, 18:16:19 port)``) and return the socket object. Passing the optional 18:16:19 *timeout* parameter will set the timeout on the socket instance 18:16:19 before attempting to connect. If no *timeout* is supplied, the 18:16:19 global default timeout setting returned by :func:`socket.getdefaulttimeout` 18:16:19 is used. If *source_address* is set it must be a tuple of (host, port) 18:16:19 for the socket to bind as a source address before making the connection. 18:16:19 An host of '' or port 0 tells the OS to use the default. 18:16:19 """ 18:16:19 18:16:19 host, port = address 18:16:19 if host.startswith("["): 18:16:19 host = host.strip("[]") 18:16:19 err = None 18:16:19 18:16:19 # Using the value from allowed_gai_family() in the context of getaddrinfo lets 18:16:19 # us select whether to work with IPv4 DNS records, IPv6 records, or both. 18:16:19 # The original create_connection function always returns all records. 18:16:19 family = allowed_gai_family() 18:16:19 18:16:19 try: 18:16:19 host.encode("idna") 18:16:19 except UnicodeError: 18:16:19 raise LocationParseError(f"'{host}', label empty or too long") from None 18:16:19 18:16:19 for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM): 18:16:19 af, socktype, proto, canonname, sa = res 18:16:19 sock = None 18:16:19 try: 18:16:19 sock = socket.socket(af, socktype, proto) 18:16:19 18:16:19 # If provided, set socket level options before connecting. 
18:16:19 _set_socket_options(sock, socket_options) 18:16:19 18:16:19 if timeout is not _DEFAULT_TIMEOUT: 18:16:19 sock.settimeout(timeout) 18:16:19 if source_address: 18:16:19 sock.bind(source_address) 18:16:19 > sock.connect(sa) 18:16:19 E ConnectionRefusedError: [Errno 111] Connection refused 18:16:19 18:16:19 ../.tox/tests121/lib/python3.11/site-packages/urllib3/util/connection.py:73: ConnectionRefusedError 18:16:19 18:16:19 The above exception was the direct cause of the following exception: 18:16:19 18:16:19 self = 18:16:19 method = 'PUT' 18:16:19 url = '/rests/data/ietf-network:networks/network=openroadm-topology/ietf-network-topology:link=ROADMB01-DEG2-DEG2-TTP-TXRXtoROADMC01-DEG1-DEG1-TTP-TXRX/org-openroadm-network-topology:OMS-attributes/span' 18:16:19 body = '{"span": {"auto-spanloss": "true", "engineered-spanloss": 12.2, "link-concatenation": [{"SRLG-Id": 0, "fiber-type": "smf", "SRLG-length": 100000, "pmd": 0.5}]}}' 18:16:19 headers = {'User-Agent': 'python-requests/2.32.3', 'Accept-Encoding': 'gzip, deflate', 'Accept': 'application/json', 'Connection': 'keep-alive', 'Content-Type': 'application/json', 'Content-Length': '160', 'Authorization': 'Basic YWRtaW46YWRtaW4='} 18:16:19 retries = Retry(total=0, connect=None, read=False, redirect=None, status=None) 18:16:19 redirect = False, assert_same_host = False 18:16:19 timeout = Timeout(connect=10, read=10, total=None), pool_timeout = None 18:16:19 release_conn = False, chunked = False, body_pos = None, preload_content = False 18:16:19 decode_content = False, response_kw = {} 18:16:19 parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/rests/data/ietf-network:networks/network=openroadm-topology/i...2-TTP-TXRXtoROADMC01-DEG1-DEG1-TTP-TXRX/org-openroadm-network-topology:OMS-attributes/span', query=None, fragment=None) 18:16:19 destination_scheme = None, conn = None, release_this_conn = True 18:16:19 http_tunnel_required = False, err = None, clean_exit = False 18:16:19 18:16:19 def urlopen( # 
type: ignore[override] 18:16:19 self, 18:16:19 method: str, 18:16:19 url: str, 18:16:19 body: _TYPE_BODY | None = None, 18:16:19 headers: typing.Mapping[str, str] | None = None, 18:16:19 retries: Retry | bool | int | None = None, 18:16:19 redirect: bool = True, 18:16:19 assert_same_host: bool = True, 18:16:19 timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 18:16:19 pool_timeout: int | None = None, 18:16:19 release_conn: bool | None = None, 18:16:19 chunked: bool = False, 18:16:19 body_pos: _TYPE_BODY_POSITION | None = None, 18:16:19 preload_content: bool = True, 18:16:19 decode_content: bool = True, 18:16:19 **response_kw: typing.Any, 18:16:19 ) -> BaseHTTPResponse: 18:16:19 """ 18:16:19 Get a connection from the pool and perform an HTTP request. This is the 18:16:19 lowest level call for making a request, so you'll need to specify all 18:16:19 the raw details. 18:16:19 18:16:19 .. note:: 18:16:19 18:16:19 More commonly, it's appropriate to use a convenience method 18:16:19 such as :meth:`request`. 18:16:19 18:16:19 .. note:: 18:16:19 18:16:19 `release_conn` will only behave as expected if 18:16:19 `preload_content=False` because we want to make 18:16:19 `preload_content=False` the default behaviour someday soon without 18:16:19 breaking backwards compatibility. 18:16:19 18:16:19 :param method: 18:16:19 HTTP request method (such as GET, POST, PUT, etc.) 18:16:19 18:16:19 :param url: 18:16:19 The URL to perform the request on. 18:16:19 18:16:19 :param body: 18:16:19 Data to send in the request body, either :class:`str`, :class:`bytes`, 18:16:19 an iterable of :class:`str`/:class:`bytes`, or a file-like object. 18:16:19 18:16:19 :param headers: 18:16:19 Dictionary of custom headers to send, such as User-Agent, 18:16:19 If-None-Match, etc. If None, pool headers are used. If provided, 18:16:19 these headers completely replace any pool-specific headers. 
18:16:19 18:16:19 :param retries: 18:16:19 Configure the number of retries to allow before raising a 18:16:19 :class:`~urllib3.exceptions.MaxRetryError` exception. 18:16:19 18:16:19 If ``None`` (default) will retry 3 times, see ``Retry.DEFAULT``. Pass a 18:16:19 :class:`~urllib3.util.retry.Retry` object for fine-grained control 18:16:19 over different types of retries. 18:16:19 Pass an integer number to retry connection errors that many times, 18:16:19 but no other types of errors. Pass zero to never retry. 18:16:19 18:16:19 If ``False``, then retries are disabled and any exception is raised 18:16:19 immediately. Also, instead of raising a MaxRetryError on redirects, 18:16:19 the redirect response will be returned. 18:16:19 18:16:19 :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. 18:16:19 18:16:19 :param redirect: 18:16:19 If True, automatically handle redirects (status codes 301, 302, 18:16:19 303, 307, 308). Each redirect counts as a retry. Disabling retries 18:16:19 will disable redirect, too. 18:16:19 18:16:19 :param assert_same_host: 18:16:19 If ``True``, will make sure that the host of the pool requests is 18:16:19 consistent else will raise HostChangedError. When ``False``, you can 18:16:19 use the pool on an HTTP proxy and request foreign hosts. 18:16:19 18:16:19 :param timeout: 18:16:19 If specified, overrides the default timeout for this one 18:16:19 request. It may be a float (in seconds) or an instance of 18:16:19 :class:`urllib3.util.Timeout`. 18:16:19 18:16:19 :param pool_timeout: 18:16:19 If set and the pool is set to block=True, then this method will 18:16:19 block for ``pool_timeout`` seconds and raise EmptyPoolError if no 18:16:19 connection is available within the time period. 18:16:19 18:16:19 :param bool preload_content: 18:16:19 If True, the response's body will be preloaded into memory. 
18:16:19 18:16:19 :param bool decode_content: 18:16:19 If True, will attempt to decode the body based on the 18:16:19 'content-encoding' header. 18:16:19 18:16:19 :param release_conn: 18:16:19 If False, then the urlopen call will not release the connection 18:16:19 back into the pool once a response is received (but will release if 18:16:19 you read the entire contents of the response such as when 18:16:19 `preload_content=True`). This is useful if you're not preloading 18:16:19 the response's content immediately. You will need to call 18:16:19 ``r.release_conn()`` on the response ``r`` to return the connection 18:16:19 back into the pool. If None, it takes the value of ``preload_content`` 18:16:19 which defaults to ``True``. 18:16:19 18:16:19 :param bool chunked: 18:16:19 If True, urllib3 will send the body using chunked transfer 18:16:19 encoding. Otherwise, urllib3 will send the body using the standard 18:16:19 content-length form. Defaults to False. 18:16:19 18:16:19 :param int body_pos: 18:16:19 Position to seek to in file-like body in the event of a retry or 18:16:19 redirect. Typically this won't need to be set because urllib3 will 18:16:19 auto-populate the value when needed. 
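Every failure in this run is the same PUT against the controller's RESTCONF endpoint. As a sketch, the URL and JSON body from the frame locals above can be rebuilt offline with the standard library alone, without opening a connection; the `http://localhost:8182` base is an assumption taken from `HTTPConnectionPool(host='localhost', port=8182)` in the error, and `build_oms_span_request` is a hypothetical helper, not part of `test_utils`:

```python
import json

# Base URL is an assumption reconstructed from the pool parameters in the
# traceback (host='localhost', port=8182); the link id is copied verbatim.
BASE = "http://localhost:8182"
LINK = "ROADMB01-DEG2-DEG2-TTP-TXRXtoROADMC01-DEG1-DEG1-TTP-TXRX"

def build_oms_span_request(link):
    """Rebuild the PUT target and JSON payload shown in the traceback."""
    url = (f"{BASE}/rests/data/ietf-network:networks/"
           f"network=openroadm-topology/ietf-network-topology:link={link}/"
           f"org-openroadm-network-topology:OMS-attributes/span")
    payload = {"span": {
        "auto-spanloss": "true",
        "engineered-spanloss": 12.2,
        "link-concatenation": [{
            "SRLG-Id": 0,
            "fiber-type": "smf",
            "SRLG-length": 100000,
            "pmd": 0.5}]}}
    # json.dumps with default separators reproduces the body string
    # logged in the frame locals above.
    return url, json.dumps(payload)

url, body = build_oms_span_request(LINK)
```

The serialized body comes out at 160 bytes, matching the `Content-Length: 160` header captured in the log.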
18:16:19 """ 18:16:19 parsed_url = parse_url(url) 18:16:19 destination_scheme = parsed_url.scheme 18:16:19 18:16:19 if headers is None: 18:16:19 headers = self.headers 18:16:19 18:16:19 if not isinstance(retries, Retry): 18:16:19 retries = Retry.from_int(retries, redirect=redirect, default=self.retries) 18:16:19 18:16:19 if release_conn is None: 18:16:19 release_conn = preload_content 18:16:19 18:16:19 # Check host 18:16:19 if assert_same_host and not self.is_same_host(url): 18:16:19 raise HostChangedError(self, url, retries) 18:16:19 18:16:19 # Ensure that the URL we're connecting to is properly encoded 18:16:19 if url.startswith("/"): 18:16:19 url = to_str(_encode_target(url)) 18:16:19 else: 18:16:19 url = to_str(parsed_url.url) 18:16:19 18:16:19 conn = None 18:16:19 18:16:19 # Track whether `conn` needs to be released before 18:16:19 # returning/raising/recursing. Update this variable if necessary, and 18:16:19 # leave `release_conn` constant throughout the function. That way, if 18:16:19 # the function recurses, the original value of `release_conn` will be 18:16:19 # passed down into the recursive call, and its value will be respected. 18:16:19 # 18:16:19 # See issue #651 [1] for details. 18:16:19 # 18:16:19 # [1] 18:16:19 release_this_conn = release_conn 18:16:19 18:16:19 http_tunnel_required = connection_requires_http_tunnel( 18:16:19 self.proxy, self.proxy_config, destination_scheme 18:16:19 ) 18:16:19 18:16:19 # Merge the proxy headers. Only done when not using HTTP CONNECT. We 18:16:19 # have to copy the headers dict so we can safely change it without those 18:16:19 # changes being reflected in anyone else's copy. 18:16:19 if not http_tunnel_required: 18:16:19 headers = headers.copy() # type: ignore[attr-defined] 18:16:19 headers.update(self.proxy_headers) # type: ignore[union-attr] 18:16:19 18:16:19 # Must keep the exception bound to a separate variable or else Python 3 18:16:19 # complains about UnboundLocalError. 
18:16:19 err = None 18:16:19 18:16:19 # Keep track of whether we cleanly exited the except block. This 18:16:19 # ensures we do proper cleanup in finally. 18:16:19 clean_exit = False 18:16:19 18:16:19 # Rewind body position, if needed. Record current position 18:16:19 # for future rewinds in the event of a redirect/retry. 18:16:19 body_pos = set_file_position(body, body_pos) 18:16:19 18:16:19 try: 18:16:19 # Request a connection from the queue. 18:16:19 timeout_obj = self._get_timeout(timeout) 18:16:19 conn = self._get_conn(timeout=pool_timeout) 18:16:19 18:16:19 conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] 18:16:19 18:16:19 # Is this a closed/new connection that requires CONNECT tunnelling? 18:16:19 if self.proxy is not None and http_tunnel_required and conn.is_closed: 18:16:19 try: 18:16:19 self._prepare_proxy(conn) 18:16:19 except (BaseSSLError, OSError, SocketTimeout) as e: 18:16:19 self._raise_timeout( 18:16:19 err=e, url=self.proxy.url, timeout_value=conn.timeout 18:16:19 ) 18:16:19 raise 18:16:19 18:16:19 # If we're going to release the connection in ``finally:``, then 18:16:19 # the response doesn't need to know about the connection. Otherwise 18:16:19 # it will also try to release it and we'll have a double-release 18:16:19 # mess. 
18:16:19 response_conn = conn if not release_conn else None 18:16:19 18:16:19 # Make the request on the HTTPConnection object 18:16:19 > response = self._make_request( 18:16:19 conn, 18:16:19 method, 18:16:19 url, 18:16:19 timeout=timeout_obj, 18:16:19 body=body, 18:16:19 headers=headers, 18:16:19 chunked=chunked, 18:16:19 retries=retries, 18:16:19 response_conn=response_conn, 18:16:19 preload_content=preload_content, 18:16:19 decode_content=decode_content, 18:16:19 **response_kw, 18:16:19 ) 18:16:19 18:16:19 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connectionpool.py:789: 18:16:19 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 18:16:19 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connectionpool.py:495: in _make_request 18:16:19 conn.request( 18:16:19 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:441: in request 18:16:19 self.endheaders() 18:16:19 /opt/pyenv/versions/3.11.7/lib/python3.11/http/client.py:1289: in endheaders 18:16:19 self._send_output(message_body, encode_chunked=encode_chunked) 18:16:19 /opt/pyenv/versions/3.11.7/lib/python3.11/http/client.py:1048: in _send_output 18:16:19 self.send(msg) 18:16:19 /opt/pyenv/versions/3.11.7/lib/python3.11/http/client.py:986: in send 18:16:19 self.connect() 18:16:19 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:279: in connect 18:16:19 self.sock = self._new_conn() 18:16:19 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 18:16:19 18:16:19 self = 18:16:19 18:16:19 def _new_conn(self) -> socket.socket: 18:16:19 """Establish a socket connection and set nodelay settings on it. 18:16:19 18:16:19 :return: New socket connection. 
18:16:19 """ 18:16:19 try: 18:16:19 sock = connection.create_connection( 18:16:19 (self._dns_host, self.port), 18:16:19 self.timeout, 18:16:19 source_address=self.source_address, 18:16:19 socket_options=self.socket_options, 18:16:19 ) 18:16:19 except socket.gaierror as e: 18:16:19 raise NameResolutionError(self.host, self, e) from e 18:16:19 except SocketTimeout as e: 18:16:19 raise ConnectTimeoutError( 18:16:19 self, 18:16:19 f"Connection to {self.host} timed out. (connect timeout={self.timeout})", 18:16:19 ) from e 18:16:19 18:16:19 except OSError as e: 18:16:19 > raise NewConnectionError( 18:16:19 self, f"Failed to establish a new connection: {e}" 18:16:19 ) from e 18:16:19 E urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused 18:16:19 18:16:19 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:214: NewConnectionError 18:16:19 18:16:19 The above exception was the direct cause of the following exception: 18:16:19 18:16:19 self = 18:16:19 request = , stream = False 18:16:19 timeout = Timeout(connect=10, read=10, total=None), verify = True, cert = None 18:16:19 proxies = OrderedDict() 18:16:19 18:16:19 def send( 18:16:19 self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 18:16:19 ): 18:16:19 """Sends PreparedRequest object. Returns Response object. 18:16:19 18:16:19 :param request: The :class:`PreparedRequest ` being sent. 18:16:19 :param stream: (optional) Whether to stream the request content. 18:16:19 :param timeout: (optional) How long to wait for the server to send 18:16:19 data before giving up, as a float, or a :ref:`(connect timeout, 18:16:19 read timeout) ` tuple. 
18:16:19 :type timeout: float or tuple or urllib3 Timeout object 18:16:19 :param verify: (optional) Either a boolean, in which case it controls whether 18:16:19 we verify the server's TLS certificate, or a string, in which case it 18:16:19 must be a path to a CA bundle to use 18:16:19 :param cert: (optional) Any user-provided SSL certificate to be trusted. 18:16:19 :param proxies: (optional) The proxies dictionary to apply to the request. 18:16:19 :rtype: requests.Response 18:16:19 """ 18:16:19 18:16:19 try: 18:16:19 conn = self.get_connection_with_tls_context( 18:16:19 request, verify, proxies=proxies, cert=cert 18:16:19 ) 18:16:19 except LocationValueError as e: 18:16:19 raise InvalidURL(e, request=request) 18:16:19 18:16:19 self.cert_verify(conn, request.url, verify, cert) 18:16:19 url = self.request_url(request, proxies) 18:16:19 self.add_headers( 18:16:19 request, 18:16:19 stream=stream, 18:16:19 timeout=timeout, 18:16:19 verify=verify, 18:16:19 cert=cert, 18:16:19 proxies=proxies, 18:16:19 ) 18:16:19 18:16:19 chunked = not (request.body is None or "Content-Length" in request.headers) 18:16:19 18:16:19 if isinstance(timeout, tuple): 18:16:19 try: 18:16:19 connect, read = timeout 18:16:19 timeout = TimeoutSauce(connect=connect, read=read) 18:16:19 except ValueError: 18:16:19 raise ValueError( 18:16:19 f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 18:16:19 f"or a single float to set both timeouts to the same value." 
18:16:19 ) 18:16:19 elif isinstance(timeout, TimeoutSauce): 18:16:19 pass 18:16:19 else: 18:16:19 timeout = TimeoutSauce(connect=timeout, read=timeout) 18:16:19 18:16:19 try: 18:16:19 > resp = conn.urlopen( 18:16:19 method=request.method, 18:16:19 url=url, 18:16:19 body=request.body, 18:16:19 headers=request.headers, 18:16:19 redirect=False, 18:16:19 assert_same_host=False, 18:16:19 preload_content=False, 18:16:19 decode_content=False, 18:16:19 retries=self.max_retries, 18:16:19 timeout=timeout, 18:16:19 chunked=chunked, 18:16:19 ) 18:16:19 18:16:19 ../.tox/tests121/lib/python3.11/site-packages/requests/adapters.py:667: 18:16:19 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 18:16:19 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connectionpool.py:843: in urlopen 18:16:19 retries = retries.increment( 18:16:19 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 18:16:19 18:16:19 self = Retry(total=0, connect=None, read=False, redirect=None, status=None) 18:16:19 method = 'PUT' 18:16:19 url = '/rests/data/ietf-network:networks/network=openroadm-topology/ietf-network-topology:link=ROADMB01-DEG2-DEG2-TTP-TXRXtoROADMC01-DEG1-DEG1-TTP-TXRX/org-openroadm-network-topology:OMS-attributes/span' 18:16:19 response = None 18:16:19 error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused') 18:16:19 _pool = 18:16:19 _stacktrace = 18:16:19 18:16:19 def increment( 18:16:19 self, 18:16:19 method: str | None = None, 18:16:19 url: str | None = None, 18:16:19 response: BaseHTTPResponse | None = None, 18:16:19 error: Exception | None = None, 18:16:19 _pool: ConnectionPool | None = None, 18:16:19 _stacktrace: TracebackType | None = None, 18:16:19 ) -> Self: 18:16:19 """Return a new Retry object with incremented retry counters. 18:16:19 18:16:19 :param response: A response object, or None, if the server did not 18:16:19 return a response. 
18:16:19 :type response: :class:`~urllib3.response.BaseHTTPResponse` 18:16:19 :param Exception error: An error encountered during the request, or 18:16:19 None if the response was received successfully. 18:16:19 18:16:19 :return: A new ``Retry`` object. 18:16:19 """ 18:16:19 if self.total is False and error: 18:16:19 # Disabled, indicate to re-raise the error. 18:16:19 raise reraise(type(error), error, _stacktrace) 18:16:19 18:16:19 total = self.total 18:16:19 if total is not None: 18:16:19 total -= 1 18:16:19 18:16:19 connect = self.connect 18:16:19 read = self.read 18:16:19 redirect = self.redirect 18:16:19 status_count = self.status 18:16:19 other = self.other 18:16:19 cause = "unknown" 18:16:19 status = None 18:16:19 redirect_location = None 18:16:19 18:16:19 if error and self._is_connection_error(error): 18:16:19 # Connect retry? 18:16:19 if connect is False: 18:16:19 raise reraise(type(error), error, _stacktrace) 18:16:19 elif connect is not None: 18:16:19 connect -= 1 18:16:19 18:16:19 elif error and self._is_read_error(error): 18:16:19 # Read retry? 18:16:19 if read is False or method is None or not self._is_method_retryable(method): 18:16:19 raise reraise(type(error), error, _stacktrace) 18:16:19 elif read is not None: 18:16:19 read -= 1 18:16:19 18:16:19 elif error: 18:16:19 # Other retry? 18:16:19 if other is not None: 18:16:19 other -= 1 18:16:19 18:16:19 elif response and response.get_redirect_location(): 18:16:19 # Redirect retry? 
18:16:19                 if redirect is not None:
18:16:19                     redirect -= 1
18:16:19                 cause = "too many redirects"
18:16:19                 response_redirect_location = response.get_redirect_location()
18:16:19                 if response_redirect_location:
18:16:19                     redirect_location = response_redirect_location
18:16:19                 status = response.status
18:16:19 
18:16:19             else:
18:16:19                 # Incrementing because of a server error like a 500 in
18:16:19                 # status_forcelist and the given method is in the allowed_methods
18:16:19                 cause = ResponseError.GENERIC_ERROR
18:16:19                 if response and response.status:
18:16:19                     if status_count is not None:
18:16:19                         status_count -= 1
18:16:19                     cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status)
18:16:19                 status = response.status
18:16:19 
18:16:19             history = self.history + (
18:16:19                 RequestHistory(method, url, error, status, redirect_location),
18:16:19             )
18:16:19 
18:16:19             new_retry = self.new(
18:16:19                 total=total,
18:16:19                 connect=connect,
18:16:19                 read=read,
18:16:19                 redirect=redirect,
18:16:19                 status=status_count,
18:16:19                 other=other,
18:16:19                 history=history,
18:16:19             )
18:16:19 
18:16:19             if new_retry.is_exhausted():
18:16:19                 reason = error or ResponseError(cause)
18:16:19 >               raise MaxRetryError(_pool, url, reason) from reason  # type: ignore[arg-type]
18:16:19 E               urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=8182): Max retries exceeded with url: /rests/data/ietf-network:networks/network=openroadm-topology/ietf-network-topology:link=ROADMB01-DEG2-DEG2-TTP-TXRXtoROADMC01-DEG1-DEG1-TTP-TXRX/org-openroadm-network-topology:OMS-attributes/span (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused'))
18:16:19 
18:16:19 ../.tox/tests121/lib/python3.11/site-packages/urllib3/util/retry.py:519: MaxRetryError
18:16:19 
18:16:19 During handling of the above exception, another exception occurred:
18:16:19 
18:16:19 self = 
18:16:19 
18:16:19     def test_24_omsAttributes_ROADMB_ROADMC(self):
18:16:19         # Config ROADMB01-ROADMC01 oms-attributes
18:16:19         data = {"span": {
18:16:19             "auto-spanloss": "true",
18:16:19             "engineered-spanloss": 12.2,
18:16:19             "link-concatenation": [{
18:16:19                 "SRLG-Id": 0,
18:16:19                 "fiber-type": "smf",
18:16:19                 "SRLG-length": 100000,
18:16:19                 "pmd": 0.5}]}}
18:16:19 >       response = test_utils.add_oms_attr_request(
18:16:19             "ROADMB01-DEG2-DEG2-TTP-TXRXtoROADMC01-DEG1-DEG1-TTP-TXRX", data)
18:16:19 
18:16:19 transportpce_tests/1.2.1/test03_topology.py:528: 
18:16:19 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
18:16:19 transportpce_tests/common/test_utils.py:558: in add_oms_attr_request
18:16:19     response = put_request(url2.format('{}', network, link), oms_attr)
18:16:19 transportpce_tests/common/test_utils.py:124: in put_request
18:16:19     return requests.request(
18:16:19 ../.tox/tests121/lib/python3.11/site-packages/requests/api.py:59: in request
18:16:19     return session.request(method=method, url=url, **kwargs)
18:16:19 ../.tox/tests121/lib/python3.11/site-packages/requests/sessions.py:589: in request
18:16:19     resp = self.send(prep, **send_kwargs)
18:16:19 ../.tox/tests121/lib/python3.11/site-packages/requests/sessions.py:703: in send
18:16:19     r = adapter.send(request, **kwargs)
18:16:19 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
18:16:19 
18:16:19 self = 
18:16:19 request = , stream = False
18:16:19 timeout = Timeout(connect=10, read=10, total=None), verify = True, cert = None
18:16:19 proxies = OrderedDict()
18:16:19 
18:16:19     def send(
18:16:19         self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None
18:16:19     ):
18:16:19         """Sends PreparedRequest object. Returns Response object.
18:16:19 
18:16:19         :param request: The :class:`PreparedRequest ` being sent.
18:16:19         :param stream: (optional) Whether to stream the request content.
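For context on why there is no second attempt: requests hands urllib3 a `Retry(total=0, ...)` (visible in the frame locals above), so the very first connection error already exhausts the budget inside `increment`. A simplified, illustrative model of that exhaustion check, reduced to the `total` counter only and not urllib3's actual code, behaves like this:

```python
class RetryExhausted(Exception):
    """Stands in for urllib3's MaxRetryError in this sketch."""

def increment_total(total, error=None):
    # Mirror of the budget arithmetic seen in Retry.increment above,
    # reduced to the 'total' counter (illustrative, not library code).
    if total is False and error is not None:
        raise error  # retries disabled entirely: re-raise immediately
    if total is not None:
        total -= 1
        if total < 0:  # budget exhausted -> surface one final error
            raise RetryExhausted(f"max retries exceeded ({error})")
    return total
```

With `total=0`, a single refused connection goes straight to the "max retries exceeded" outcome seen in this log; a positive budget just ticks down.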
18:16:19 :param timeout: (optional) How long to wait for the server to send 18:16:19 data before giving up, as a float, or a :ref:`(connect timeout, 18:16:19 read timeout) ` tuple. 18:16:19 :type timeout: float or tuple or urllib3 Timeout object 18:16:19 :param verify: (optional) Either a boolean, in which case it controls whether 18:16:19 we verify the server's TLS certificate, or a string, in which case it 18:16:19 must be a path to a CA bundle to use 18:16:19 :param cert: (optional) Any user-provided SSL certificate to be trusted. 18:16:19 :param proxies: (optional) The proxies dictionary to apply to the request. 18:16:19 :rtype: requests.Response 18:16:19 """ 18:16:19 18:16:19 try: 18:16:19 conn = self.get_connection_with_tls_context( 18:16:19 request, verify, proxies=proxies, cert=cert 18:16:19 ) 18:16:19 except LocationValueError as e: 18:16:19 raise InvalidURL(e, request=request) 18:16:19 18:16:19 self.cert_verify(conn, request.url, verify, cert) 18:16:19 url = self.request_url(request, proxies) 18:16:19 self.add_headers( 18:16:19 request, 18:16:19 stream=stream, 18:16:19 timeout=timeout, 18:16:19 verify=verify, 18:16:19 cert=cert, 18:16:19 proxies=proxies, 18:16:19 ) 18:16:19 18:16:19 chunked = not (request.body is None or "Content-Length" in request.headers) 18:16:19 18:16:19 if isinstance(timeout, tuple): 18:16:19 try: 18:16:19 connect, read = timeout 18:16:19 timeout = TimeoutSauce(connect=connect, read=read) 18:16:19 except ValueError: 18:16:19 raise ValueError( 18:16:19 f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 18:16:19 f"or a single float to set both timeouts to the same value." 
18:16:19 ) 18:16:19 elif isinstance(timeout, TimeoutSauce): 18:16:19 pass 18:16:19 else: 18:16:19 timeout = TimeoutSauce(connect=timeout, read=timeout) 18:16:19 18:16:19 try: 18:16:19 resp = conn.urlopen( 18:16:19 method=request.method, 18:16:19 url=url, 18:16:19 body=request.body, 18:16:19 headers=request.headers, 18:16:19 redirect=False, 18:16:19 assert_same_host=False, 18:16:19 preload_content=False, 18:16:19 decode_content=False, 18:16:19 retries=self.max_retries, 18:16:19 timeout=timeout, 18:16:19 chunked=chunked, 18:16:19 ) 18:16:19 18:16:19 except (ProtocolError, OSError) as err: 18:16:19 raise ConnectionError(err, request=request) 18:16:19 18:16:19 except MaxRetryError as e: 18:16:19 if isinstance(e.reason, ConnectTimeoutError): 18:16:19 # TODO: Remove this in 3.0.0: see #2811 18:16:19 if not isinstance(e.reason, NewConnectionError): 18:16:19 raise ConnectTimeout(e, request=request) 18:16:19 18:16:19 if isinstance(e.reason, ResponseError): 18:16:19 raise RetryError(e, request=request) 18:16:19 18:16:19 if isinstance(e.reason, _ProxyError): 18:16:19 raise ProxyError(e, request=request) 18:16:19 18:16:19 if isinstance(e.reason, _SSLError): 18:16:19 # This branch is for urllib3 v1.22 and later. 
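The adapter frames above show why the tests surface `requests.exceptions.ConnectionError` rather than the raw socket error. Since requests' `ConnectionError` ultimately derives from `OSError`, a harness can contain such failures without importing requests' exception hierarchy at all; `guard_unreachable` below is a hypothetical helper sketched for illustration, not part of the test suite:

```python
def guard_unreachable(fn, *args, **kwargs):
    """Call fn(); return None if the target is unreachable.

    Catching OSError covers the ConnectionRefusedError seen in this log
    and also requests.exceptions.ConnectionError, which subclasses it
    (RequestException inherits from IOError/OSError).
    """
    try:
        return fn(*args, **kwargs)
    except OSError as err:
        print(f"controller unreachable: {err}")
        return None
```

This turns a dead controller into one clear message per call instead of a page-long traceback per test.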
18:16:19 raise SSLError(e, request=request) 18:16:19 18:16:19 > raise ConnectionError(e, request=request) 18:16:19 E requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=8182): Max retries exceeded with url: /rests/data/ietf-network:networks/network=openroadm-topology/ietf-network-topology:link=ROADMB01-DEG2-DEG2-TTP-TXRXtoROADMC01-DEG1-DEG1-TTP-TXRX/org-openroadm-network-topology:OMS-attributes/span (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 18:16:19 18:16:19 ../.tox/tests121/lib/python3.11/site-packages/requests/adapters.py:700: ConnectionError 18:16:19 _______ TransportPCETopologyTesting.test_25_omsAttributes_ROADMC_ROADMB ________ 18:16:19 18:16:19 self = 18:16:19 18:16:19 def _new_conn(self) -> socket.socket: 18:16:19 """Establish a socket connection and set nodelay settings on it. 18:16:19 18:16:19 :return: New socket connection. 18:16:19 """ 18:16:19 try: 18:16:19 > sock = connection.create_connection( 18:16:19 (self._dns_host, self.port), 18:16:19 self.timeout, 18:16:19 source_address=self.source_address, 18:16:19 socket_options=self.socket_options, 18:16:19 ) 18:16:19 18:16:19 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:199: 18:16:19 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 18:16:19 ../.tox/tests121/lib/python3.11/site-packages/urllib3/util/connection.py:85: in create_connection 18:16:19 raise err 18:16:19 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 18:16:19 18:16:19 address = ('localhost', 8182), timeout = 10, source_address = None 18:16:19 socket_options = [(6, 1, 1)] 18:16:19 18:16:19 def create_connection( 18:16:19 address: tuple[str, int], 18:16:19 timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 18:16:19 source_address: tuple[str, int] | None = None, 18:16:19 socket_options: _TYPE_SOCKET_OPTIONS | None = None, 18:16:19 ) -> socket.socket: 18:16:19 """Connect to *address* 
and return the socket object. 18:16:19 18:16:19 Convenience function. Connect to *address* (a 2-tuple ``(host, 18:16:19 port)``) and return the socket object. Passing the optional 18:16:19 *timeout* parameter will set the timeout on the socket instance 18:16:19 before attempting to connect. If no *timeout* is supplied, the 18:16:19 global default timeout setting returned by :func:`socket.getdefaulttimeout` 18:16:19 is used. If *source_address* is set it must be a tuple of (host, port) 18:16:19 for the socket to bind as a source address before making the connection. 18:16:19 An host of '' or port 0 tells the OS to use the default. 18:16:19 """ 18:16:19 18:16:19 host, port = address 18:16:19 if host.startswith("["): 18:16:19 host = host.strip("[]") 18:16:19 err = None 18:16:19 18:16:19 # Using the value from allowed_gai_family() in the context of getaddrinfo lets 18:16:19 # us select whether to work with IPv4 DNS records, IPv6 records, or both. 18:16:19 # The original create_connection function always returns all records. 18:16:19 family = allowed_gai_family() 18:16:19 18:16:19 try: 18:16:19 host.encode("idna") 18:16:19 except UnicodeError: 18:16:19 raise LocationParseError(f"'{host}', label empty or too long") from None 18:16:19 18:16:19 for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM): 18:16:19 af, socktype, proto, canonname, sa = res 18:16:19 sock = None 18:16:19 try: 18:16:19 sock = socket.socket(af, socktype, proto) 18:16:19 18:16:19 # If provided, set socket level options before connecting. 
18:16:19 _set_socket_options(sock, socket_options) 18:16:19 18:16:19 if timeout is not _DEFAULT_TIMEOUT: 18:16:19 sock.settimeout(timeout) 18:16:19 if source_address: 18:16:19 sock.bind(source_address) 18:16:19 > sock.connect(sa) 18:16:19 E ConnectionRefusedError: [Errno 111] Connection refused 18:16:19 18:16:19 ../.tox/tests121/lib/python3.11/site-packages/urllib3/util/connection.py:73: ConnectionRefusedError 18:16:19 18:16:19 The above exception was the direct cause of the following exception: 18:16:19 18:16:19 self = 18:16:19 method = 'PUT' 18:16:19 url = '/rests/data/ietf-network:networks/network=openroadm-topology/ietf-network-topology:link=ROADMC01-DEG1-DEG1-TTP-TXRXtoROADMB01-DEG2-DEG2-TTP-TXRX/org-openroadm-network-topology:OMS-attributes/span' 18:16:19 body = '{"span": {"auto-spanloss": "true", "engineered-spanloss": 12.2, "link-concatenation": [{"SRLG-Id": 0, "fiber-type": "smf", "SRLG-length": 100000, "pmd": 0.5}]}}' 18:16:19 headers = {'User-Agent': 'python-requests/2.32.3', 'Accept-Encoding': 'gzip, deflate', 'Accept': 'application/json', 'Connection': 'keep-alive', 'Content-Type': 'application/json', 'Content-Length': '160', 'Authorization': 'Basic YWRtaW46YWRtaW4='} 18:16:19 retries = Retry(total=0, connect=None, read=False, redirect=None, status=None) 18:16:19 redirect = False, assert_same_host = False 18:16:19 timeout = Timeout(connect=10, read=10, total=None), pool_timeout = None 18:16:19 release_conn = False, chunked = False, body_pos = None, preload_content = False 18:16:19 decode_content = False, response_kw = {} 18:16:19 parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/rests/data/ietf-network:networks/network=openroadm-topology/i...1-TTP-TXRXtoROADMB01-DEG2-DEG2-TTP-TXRX/org-openroadm-network-topology:OMS-attributes/span', query=None, fragment=None) 18:16:19 destination_scheme = None, conn = None, release_this_conn = True 18:16:19 http_tunnel_required = False, err = None, clean_exit = False 18:16:19 18:16:19 def urlopen( # 
type: ignore[override] 18:16:19 self, 18:16:19 method: str, 18:16:19 url: str, 18:16:19 body: _TYPE_BODY | None = None, 18:16:19 headers: typing.Mapping[str, str] | None = None, 18:16:19 retries: Retry | bool | int | None = None, 18:16:19 redirect: bool = True, 18:16:19 assert_same_host: bool = True, 18:16:19 timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 18:16:19 pool_timeout: int | None = None, 18:16:19 release_conn: bool | None = None, 18:16:19 chunked: bool = False, 18:16:19 body_pos: _TYPE_BODY_POSITION | None = None, 18:16:19 preload_content: bool = True, 18:16:19 decode_content: bool = True, 18:16:19 **response_kw: typing.Any, 18:16:19 ) -> BaseHTTPResponse: 18:16:19 """ 18:16:19 Get a connection from the pool and perform an HTTP request. This is the 18:16:19 lowest level call for making a request, so you'll need to specify all 18:16:19 the raw details. 18:16:19 18:16:19 .. note:: 18:16:19 18:16:19 More commonly, it's appropriate to use a convenience method 18:16:19 such as :meth:`request`. 18:16:19 18:16:19 .. note:: 18:16:19 18:16:19 `release_conn` will only behave as expected if 18:16:19 `preload_content=False` because we want to make 18:16:19 `preload_content=False` the default behaviour someday soon without 18:16:19 breaking backwards compatibility. 18:16:19 18:16:19 :param method: 18:16:19 HTTP request method (such as GET, POST, PUT, etc.) 18:16:19 18:16:19 :param url: 18:16:19 The URL to perform the request on. 18:16:19 18:16:19 :param body: 18:16:19 Data to send in the request body, either :class:`str`, :class:`bytes`, 18:16:19 an iterable of :class:`str`/:class:`bytes`, or a file-like object. 18:16:19 18:16:19 :param headers: 18:16:19 Dictionary of custom headers to send, such as User-Agent, 18:16:19 If-None-Match, etc. If None, pool headers are used. If provided, 18:16:19 these headers completely replace any pool-specific headers. 
18:16:19 
18:16:19         :param retries:
18:16:19             Configure the number of retries to allow before raising a
18:16:19             :class:`~urllib3.exceptions.MaxRetryError` exception.
18:16:19 
18:16:19             If ``None`` (default) will retry 3 times, see ``Retry.DEFAULT``. Pass a
18:16:19             :class:`~urllib3.util.retry.Retry` object for fine-grained control
18:16:19             over different types of retries.
18:16:19             Pass an integer number to retry connection errors that many times,
18:16:19             but no other types of errors. Pass zero to never retry.
18:16:19 
18:16:19             If ``False``, then retries are disabled and any exception is raised
18:16:19             immediately. Also, instead of raising a MaxRetryError on redirects,
18:16:19             the redirect response will be returned.
18:16:19 
18:16:19         :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int.
18:16:19 
18:16:19         :param redirect:
18:16:19             If True, automatically handle redirects (status codes 301, 302,
18:16:19             303, 307, 308). Each redirect counts as a retry. Disabling retries
18:16:19             will disable redirect, too.
18:16:19 
18:16:19         :param assert_same_host:
18:16:19             If ``True``, will make sure that the host of the pool requests is
18:16:19             consistent else will raise HostChangedError. When ``False``, you can
18:16:19             use the pool on an HTTP proxy and request foreign hosts.
18:16:19 
18:16:19         :param timeout:
18:16:19             If specified, overrides the default timeout for this one
18:16:19             request. It may be a float (in seconds) or an instance of
18:16:19             :class:`urllib3.util.Timeout`.
18:16:19 
18:16:19         :param pool_timeout:
18:16:19             If set and the pool is set to block=True, then this method will
18:16:19             block for ``pool_timeout`` seconds and raise EmptyPoolError if no
18:16:19             connection is available within the time period.
18:16:19 
18:16:19         :param bool preload_content:
18:16:19             If True, the response's body will be preloaded into memory.
18:16:19 
18:16:19         :param bool decode_content:
18:16:19             If True, will attempt to decode the body based on the
18:16:19             'content-encoding' header.
18:16:19 
18:16:19         :param release_conn:
18:16:19             If False, then the urlopen call will not release the connection
18:16:19             back into the pool once a response is received (but will release if
18:16:19             you read the entire contents of the response such as when
18:16:19             `preload_content=True`). This is useful if you're not preloading
18:16:19             the response's content immediately. You will need to call
18:16:19             ``r.release_conn()`` on the response ``r`` to return the connection
18:16:19             back into the pool. If None, it takes the value of ``preload_content``
18:16:19             which defaults to ``True``.
18:16:19 
18:16:19         :param bool chunked:
18:16:19             If True, urllib3 will send the body using chunked transfer
18:16:19             encoding. Otherwise, urllib3 will send the body using the standard
18:16:19             content-length form. Defaults to False.
18:16:19 
18:16:19         :param int body_pos:
18:16:19             Position to seek to in file-like body in the event of a retry or
18:16:19             redirect. Typically this won't need to be set because urllib3 will
18:16:19             auto-populate the value when needed.
18:16:19 """ 18:16:19 parsed_url = parse_url(url) 18:16:19 destination_scheme = parsed_url.scheme 18:16:19 18:16:19 if headers is None: 18:16:19 headers = self.headers 18:16:19 18:16:19 if not isinstance(retries, Retry): 18:16:19 retries = Retry.from_int(retries, redirect=redirect, default=self.retries) 18:16:19 18:16:19 if release_conn is None: 18:16:19 release_conn = preload_content 18:16:19 18:16:19 # Check host 18:16:19 if assert_same_host and not self.is_same_host(url): 18:16:19 raise HostChangedError(self, url, retries) 18:16:19 18:16:19 # Ensure that the URL we're connecting to is properly encoded 18:16:19 if url.startswith("/"): 18:16:19 url = to_str(_encode_target(url)) 18:16:19 else: 18:16:19 url = to_str(parsed_url.url) 18:16:19 18:16:19 conn = None 18:16:19 18:16:19 # Track whether `conn` needs to be released before 18:16:19 # returning/raising/recursing. Update this variable if necessary, and 18:16:19 # leave `release_conn` constant throughout the function. That way, if 18:16:19 # the function recurses, the original value of `release_conn` will be 18:16:19 # passed down into the recursive call, and its value will be respected. 18:16:19 # 18:16:19 # See issue #651 [1] for details. 18:16:19 # 18:16:19 # [1] 18:16:19 release_this_conn = release_conn 18:16:19 18:16:19 http_tunnel_required = connection_requires_http_tunnel( 18:16:19 self.proxy, self.proxy_config, destination_scheme 18:16:19 ) 18:16:19 18:16:19 # Merge the proxy headers. Only done when not using HTTP CONNECT. We 18:16:19 # have to copy the headers dict so we can safely change it without those 18:16:19 # changes being reflected in anyone else's copy. 18:16:19 if not http_tunnel_required: 18:16:19 headers = headers.copy() # type: ignore[attr-defined] 18:16:19 headers.update(self.proxy_headers) # type: ignore[union-attr] 18:16:19 18:16:19 # Must keep the exception bound to a separate variable or else Python 3 18:16:19 # complains about UnboundLocalError. 
18:16:19         err = None
18:16:19 
18:16:19         # Keep track of whether we cleanly exited the except block. This
18:16:19         # ensures we do proper cleanup in finally.
18:16:19         clean_exit = False
18:16:19 
18:16:19         # Rewind body position, if needed. Record current position
18:16:19         # for future rewinds in the event of a redirect/retry.
18:16:19         body_pos = set_file_position(body, body_pos)
18:16:19 
18:16:19         try:
18:16:19             # Request a connection from the queue.
18:16:19             timeout_obj = self._get_timeout(timeout)
18:16:19             conn = self._get_conn(timeout=pool_timeout)
18:16:19 
18:16:19             conn.timeout = timeout_obj.connect_timeout  # type: ignore[assignment]
18:16:19 
18:16:19             # Is this a closed/new connection that requires CONNECT tunnelling?
18:16:19             if self.proxy is not None and http_tunnel_required and conn.is_closed:
18:16:19                 try:
18:16:19                     self._prepare_proxy(conn)
18:16:19                 except (BaseSSLError, OSError, SocketTimeout) as e:
18:16:19                     self._raise_timeout(
18:16:19                         err=e, url=self.proxy.url, timeout_value=conn.timeout
18:16:19                     )
18:16:19                     raise
18:16:19 
18:16:19             # If we're going to release the connection in ``finally:``, then
18:16:19             # the response doesn't need to know about the connection. Otherwise
18:16:19             # it will also try to release it and we'll have a double-release
18:16:19             # mess.
18:16:19             response_conn = conn if not release_conn else None
18:16:19 
18:16:19             # Make the request on the HTTPConnection object
18:16:19 >           response = self._make_request(
18:16:19                 conn,
18:16:19                 method,
18:16:19                 url,
18:16:19                 timeout=timeout_obj,
18:16:19                 body=body,
18:16:19                 headers=headers,
18:16:19                 chunked=chunked,
18:16:19                 retries=retries,
18:16:19                 response_conn=response_conn,
18:16:19                 preload_content=preload_content,
18:16:19                 decode_content=decode_content,
18:16:19                 **response_kw,
18:16:19             )
18:16:19 
18:16:19 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connectionpool.py:789: 
18:16:19 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
18:16:19 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connectionpool.py:495: in _make_request
18:16:19     conn.request(
18:16:19 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:441: in request
18:16:19     self.endheaders()
18:16:19 /opt/pyenv/versions/3.11.7/lib/python3.11/http/client.py:1289: in endheaders
18:16:19     self._send_output(message_body, encode_chunked=encode_chunked)
18:16:19 /opt/pyenv/versions/3.11.7/lib/python3.11/http/client.py:1048: in _send_output
18:16:19     self.send(msg)
18:16:19 /opt/pyenv/versions/3.11.7/lib/python3.11/http/client.py:986: in send
18:16:19     self.connect()
18:16:19 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:279: in connect
18:16:19     self.sock = self._new_conn()
18:16:19 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
18:16:19 
18:16:19 self = 
18:16:19 
18:16:19     def _new_conn(self) -> socket.socket:
18:16:19         """Establish a socket connection and set nodelay settings on it.
18:16:19 
18:16:19         :return: New socket connection.
18:16:19 """ 18:16:19 try: 18:16:19 sock = connection.create_connection( 18:16:19 (self._dns_host, self.port), 18:16:19 self.timeout, 18:16:19 source_address=self.source_address, 18:16:19 socket_options=self.socket_options, 18:16:19 ) 18:16:19 except socket.gaierror as e: 18:16:19 raise NameResolutionError(self.host, self, e) from e 18:16:19 except SocketTimeout as e: 18:16:19 raise ConnectTimeoutError( 18:16:19 self, 18:16:19 f"Connection to {self.host} timed out. (connect timeout={self.timeout})", 18:16:19 ) from e 18:16:19 18:16:19 except OSError as e: 18:16:19 > raise NewConnectionError( 18:16:19 self, f"Failed to establish a new connection: {e}" 18:16:19 ) from e 18:16:19 E urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused 18:16:19 18:16:19 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:214: NewConnectionError 18:16:19 18:16:19 The above exception was the direct cause of the following exception: 18:16:19 18:16:19 self = 18:16:19 request = , stream = False 18:16:19 timeout = Timeout(connect=10, read=10, total=None), verify = True, cert = None 18:16:19 proxies = OrderedDict() 18:16:19 18:16:19 def send( 18:16:19 self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 18:16:19 ): 18:16:19 """Sends PreparedRequest object. Returns Response object. 18:16:19 18:16:19 :param request: The :class:`PreparedRequest ` being sent. 18:16:19 :param stream: (optional) Whether to stream the request content. 18:16:19 :param timeout: (optional) How long to wait for the server to send 18:16:19 data before giving up, as a float, or a :ref:`(connect timeout, 18:16:19 read timeout) ` tuple. 
18:16:19         :type timeout: float or tuple or urllib3 Timeout object
18:16:19         :param verify: (optional) Either a boolean, in which case it controls whether
18:16:19             we verify the server's TLS certificate, or a string, in which case it
18:16:19             must be a path to a CA bundle to use
18:16:19         :param cert: (optional) Any user-provided SSL certificate to be trusted.
18:16:19         :param proxies: (optional) The proxies dictionary to apply to the request.
18:16:19         :rtype: requests.Response
18:16:19         """
18:16:19 
18:16:19         try:
18:16:19             conn = self.get_connection_with_tls_context(
18:16:19                 request, verify, proxies=proxies, cert=cert
18:16:19             )
18:16:19         except LocationValueError as e:
18:16:19             raise InvalidURL(e, request=request)
18:16:19 
18:16:19         self.cert_verify(conn, request.url, verify, cert)
18:16:19         url = self.request_url(request, proxies)
18:16:19         self.add_headers(
18:16:19             request,
18:16:19             stream=stream,
18:16:19             timeout=timeout,
18:16:19             verify=verify,
18:16:19             cert=cert,
18:16:19             proxies=proxies,
18:16:19         )
18:16:19 
18:16:19         chunked = not (request.body is None or "Content-Length" in request.headers)
18:16:19 
18:16:19         if isinstance(timeout, tuple):
18:16:19             try:
18:16:19                 connect, read = timeout
18:16:19                 timeout = TimeoutSauce(connect=connect, read=read)
18:16:19             except ValueError:
18:16:19                 raise ValueError(
18:16:19                     f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, "
18:16:19                     f"or a single float to set both timeouts to the same value."
18:16:19                 )
18:16:19         elif isinstance(timeout, TimeoutSauce):
18:16:19             pass
18:16:19         else:
18:16:19             timeout = TimeoutSauce(connect=timeout, read=timeout)
18:16:19 
18:16:19         try:
18:16:19 >           resp = conn.urlopen(
18:16:19                 method=request.method,
18:16:19                 url=url,
18:16:19                 body=request.body,
18:16:19                 headers=request.headers,
18:16:19                 redirect=False,
18:16:19                 assert_same_host=False,
18:16:19                 preload_content=False,
18:16:19                 decode_content=False,
18:16:19                 retries=self.max_retries,
18:16:19                 timeout=timeout,
18:16:19                 chunked=chunked,
18:16:19             )
18:16:19 
18:16:19 ../.tox/tests121/lib/python3.11/site-packages/requests/adapters.py:667: 
18:16:19 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
18:16:19 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connectionpool.py:843: in urlopen
18:16:19     retries = retries.increment(
18:16:19 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
18:16:19 
18:16:19 self = Retry(total=0, connect=None, read=False, redirect=None, status=None)
18:16:19 method = 'PUT'
18:16:19 url = '/rests/data/ietf-network:networks/network=openroadm-topology/ietf-network-topology:link=ROADMC01-DEG1-DEG1-TTP-TXRXtoROADMB01-DEG2-DEG2-TTP-TXRX/org-openroadm-network-topology:OMS-attributes/span'
18:16:19 response = None
18:16:19 error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')
18:16:19 _pool = 
18:16:19 _stacktrace = 
18:16:19 
18:16:19     def increment(
18:16:19         self,
18:16:19         method: str | None = None,
18:16:19         url: str | None = None,
18:16:19         response: BaseHTTPResponse | None = None,
18:16:19         error: Exception | None = None,
18:16:19         _pool: ConnectionPool | None = None,
18:16:19         _stacktrace: TracebackType | None = None,
18:16:19     ) -> Self:
18:16:19         """Return a new Retry object with incremented retry counters.
18:16:19 
18:16:19         :param response: A response object, or None, if the server did not
18:16:19             return a response.
18:16:19         :type response: :class:`~urllib3.response.BaseHTTPResponse`
18:16:19         :param Exception error: An error encountered during the request, or
18:16:19             None if the response was received successfully.
18:16:19 
18:16:19         :return: A new ``Retry`` object.
18:16:19         """
18:16:19         if self.total is False and error:
18:16:19             # Disabled, indicate to re-raise the error.
18:16:19             raise reraise(type(error), error, _stacktrace)
18:16:19 
18:16:19         total = self.total
18:16:19         if total is not None:
18:16:19             total -= 1
18:16:19 
18:16:19         connect = self.connect
18:16:19         read = self.read
18:16:19         redirect = self.redirect
18:16:19         status_count = self.status
18:16:19         other = self.other
18:16:19         cause = "unknown"
18:16:19         status = None
18:16:19         redirect_location = None
18:16:19 
18:16:19         if error and self._is_connection_error(error):
18:16:19             # Connect retry?
18:16:19             if connect is False:
18:16:19                 raise reraise(type(error), error, _stacktrace)
18:16:19             elif connect is not None:
18:16:19                 connect -= 1
18:16:19 
18:16:19         elif error and self._is_read_error(error):
18:16:19             # Read retry?
18:16:19             if read is False or method is None or not self._is_method_retryable(method):
18:16:19                 raise reraise(type(error), error, _stacktrace)
18:16:19             elif read is not None:
18:16:19                 read -= 1
18:16:19 
18:16:19         elif error:
18:16:19             # Other retry?
18:16:19             if other is not None:
18:16:19                 other -= 1
18:16:19 
18:16:19         elif response and response.get_redirect_location():
18:16:19             # Redirect retry?
18:16:19             if redirect is not None:
18:16:19                 redirect -= 1
18:16:19             cause = "too many redirects"
18:16:19             response_redirect_location = response.get_redirect_location()
18:16:19             if response_redirect_location:
18:16:19                 redirect_location = response_redirect_location
18:16:19             status = response.status
18:16:19 
18:16:19         else:
18:16:19             # Incrementing because of a server error like a 500 in
18:16:19             # status_forcelist and the given method is in the allowed_methods
18:16:19             cause = ResponseError.GENERIC_ERROR
18:16:19             if response and response.status:
18:16:19                 if status_count is not None:
18:16:19                     status_count -= 1
18:16:19                 cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status)
18:16:19                 status = response.status
18:16:19 
18:16:19         history = self.history + (
18:16:19             RequestHistory(method, url, error, status, redirect_location),
18:16:19         )
18:16:19 
18:16:19         new_retry = self.new(
18:16:19             total=total,
18:16:19             connect=connect,
18:16:19             read=read,
18:16:19             redirect=redirect,
18:16:19             status=status_count,
18:16:19             other=other,
18:16:19             history=history,
18:16:19         )
18:16:19 
18:16:19         if new_retry.is_exhausted():
18:16:19             reason = error or ResponseError(cause)
18:16:19 >           raise MaxRetryError(_pool, url, reason) from reason  # type: ignore[arg-type]
18:16:19 E           urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=8182): Max retries exceeded with url: /rests/data/ietf-network:networks/network=openroadm-topology/ietf-network-topology:link=ROADMC01-DEG1-DEG1-TTP-TXRXtoROADMB01-DEG2-DEG2-TTP-TXRX/org-openroadm-network-topology:OMS-attributes/span (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused'))
18:16:19 
18:16:19 ../.tox/tests121/lib/python3.11/site-packages/urllib3/util/retry.py:519: MaxRetryError
18:16:19 
18:16:19 During handling of the above exception, another exception occurred:
18:16:19 
18:16:19 self = 
18:16:19 
18:16:19     def test_25_omsAttributes_ROADMC_ROADMB(self):
18:16:19         # Config ROADMC01-ROADMB01 oms-attributes
18:16:19         data = {"span": {
18:16:19             "auto-spanloss": "true",
18:16:19             "engineered-spanloss": 12.2,
18:16:19             "link-concatenation": [{
18:16:19                 "SRLG-Id": 0,
18:16:19                 "fiber-type": "smf",
18:16:19                 "SRLG-length": 100000,
18:16:19                 "pmd": 0.5}]}}
18:16:19 >       response = test_utils.add_oms_attr_request(
18:16:19             "ROADMC01-DEG1-DEG1-TTP-TXRXtoROADMB01-DEG2-DEG2-TTP-TXRX", data)
18:16:19 
18:16:19 transportpce_tests/1.2.1/test03_topology.py:542: 
18:16:19 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
18:16:19 transportpce_tests/common/test_utils.py:558: in add_oms_attr_request
18:16:19     response = put_request(url2.format('{}', network, link), oms_attr)
18:16:19 transportpce_tests/common/test_utils.py:124: in put_request
18:16:19     return requests.request(
18:16:19 ../.tox/tests121/lib/python3.11/site-packages/requests/api.py:59: in request
18:16:19     return session.request(method=method, url=url, **kwargs)
18:16:19 ../.tox/tests121/lib/python3.11/site-packages/requests/sessions.py:589: in request
18:16:19     resp = self.send(prep, **send_kwargs)
18:16:19 ../.tox/tests121/lib/python3.11/site-packages/requests/sessions.py:703: in send
18:16:19     r = adapter.send(request, **kwargs)
18:16:19 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
18:16:19 
18:16:19 self = 
18:16:19 request = , stream = False
18:16:19 timeout = Timeout(connect=10, read=10, total=None), verify = True, cert = None
18:16:19 proxies = OrderedDict()
18:16:19 
18:16:19     def send(
18:16:19         self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None
18:16:19     ):
18:16:19         """Sends PreparedRequest object. Returns Response object.
18:16:19 
18:16:19         :param request: The :class:`PreparedRequest ` being sent.
18:16:19         :param stream: (optional) Whether to stream the request content.
18:16:19         :param timeout: (optional) How long to wait for the server to send
18:16:19             data before giving up, as a float, or a :ref:`(connect timeout,
18:16:19             read timeout) ` tuple.
18:16:19         :type timeout: float or tuple or urllib3 Timeout object
18:16:19         :param verify: (optional) Either a boolean, in which case it controls whether
18:16:19             we verify the server's TLS certificate, or a string, in which case it
18:16:19             must be a path to a CA bundle to use
18:16:19         :param cert: (optional) Any user-provided SSL certificate to be trusted.
18:16:19         :param proxies: (optional) The proxies dictionary to apply to the request.
18:16:19         :rtype: requests.Response
18:16:19         """
18:16:19 
18:16:19         try:
18:16:19             conn = self.get_connection_with_tls_context(
18:16:19                 request, verify, proxies=proxies, cert=cert
18:16:19             )
18:16:19         except LocationValueError as e:
18:16:19             raise InvalidURL(e, request=request)
18:16:19 
18:16:19         self.cert_verify(conn, request.url, verify, cert)
18:16:19         url = self.request_url(request, proxies)
18:16:19         self.add_headers(
18:16:19             request,
18:16:19             stream=stream,
18:16:19             timeout=timeout,
18:16:19             verify=verify,
18:16:19             cert=cert,
18:16:19             proxies=proxies,
18:16:19         )
18:16:19 
18:16:19         chunked = not (request.body is None or "Content-Length" in request.headers)
18:16:19 
18:16:19         if isinstance(timeout, tuple):
18:16:19             try:
18:16:19                 connect, read = timeout
18:16:19                 timeout = TimeoutSauce(connect=connect, read=read)
18:16:19             except ValueError:
18:16:19                 raise ValueError(
18:16:19                     f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, "
18:16:19                     f"or a single float to set both timeouts to the same value."
18:16:19                 )
18:16:19         elif isinstance(timeout, TimeoutSauce):
18:16:19             pass
18:16:19         else:
18:16:19             timeout = TimeoutSauce(connect=timeout, read=timeout)
18:16:19 
18:16:19         try:
18:16:19             resp = conn.urlopen(
18:16:19                 method=request.method,
18:16:19                 url=url,
18:16:19                 body=request.body,
18:16:19                 headers=request.headers,
18:16:19                 redirect=False,
18:16:19                 assert_same_host=False,
18:16:19                 preload_content=False,
18:16:19                 decode_content=False,
18:16:19                 retries=self.max_retries,
18:16:19                 timeout=timeout,
18:16:19                 chunked=chunked,
18:16:19             )
18:16:19 
18:16:19         except (ProtocolError, OSError) as err:
18:16:19             raise ConnectionError(err, request=request)
18:16:19 
18:16:19         except MaxRetryError as e:
18:16:19             if isinstance(e.reason, ConnectTimeoutError):
18:16:19                 # TODO: Remove this in 3.0.0: see #2811
18:16:19                 if not isinstance(e.reason, NewConnectionError):
18:16:19                     raise ConnectTimeout(e, request=request)
18:16:19 
18:16:19             if isinstance(e.reason, ResponseError):
18:16:19                 raise RetryError(e, request=request)
18:16:19 
18:16:19             if isinstance(e.reason, _ProxyError):
18:16:19                 raise ProxyError(e, request=request)
18:16:19 
18:16:19             if isinstance(e.reason, _SSLError):
18:16:19                 # This branch is for urllib3 v1.22 and later.
18:16:19                 raise SSLError(e, request=request)
18:16:19 
18:16:19 >           raise ConnectionError(e, request=request)
18:16:19 E           requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=8182): Max retries exceeded with url: /rests/data/ietf-network:networks/network=openroadm-topology/ietf-network-topology:link=ROADMC01-DEG1-DEG1-TTP-TXRXtoROADMB01-DEG2-DEG2-TTP-TXRX/org-openroadm-network-topology:OMS-attributes/span (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused'))
18:16:19 
18:16:19 ../.tox/tests121/lib/python3.11/site-packages/requests/adapters.py:700: ConnectionError
18:16:19 ______________ TransportPCETopologyTesting.test_26_getClliNetwork ______________
18:16:19 
18:16:19 self = 
18:16:19 
18:16:19     def _new_conn(self) -> socket.socket:
18:16:19         """Establish a socket connection and set nodelay settings on it.
18:16:19 
18:16:19         :return: New socket connection.
18:16:19         """
18:16:19         try:
18:16:19 >           sock = connection.create_connection(
18:16:19                 (self._dns_host, self.port),
18:16:19                 self.timeout,
18:16:19                 source_address=self.source_address,
18:16:19                 socket_options=self.socket_options,
18:16:19             )
18:16:19 
18:16:19 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:199: 
18:16:19 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
18:16:19 ../.tox/tests121/lib/python3.11/site-packages/urllib3/util/connection.py:85: in create_connection
18:16:19     raise err
18:16:19 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
18:16:19 
18:16:19 address = ('localhost', 8182), timeout = 10, source_address = None
18:16:19 socket_options = [(6, 1, 1)]
18:16:19 
18:16:19 def create_connection(
18:16:19     address: tuple[str, int],
18:16:19     timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT,
18:16:19     source_address: tuple[str, int] | None = None,
18:16:19     socket_options: _TYPE_SOCKET_OPTIONS | None = None,
18:16:19 ) -> socket.socket:
18:16:19     """Connect to *address* and return the socket object.
18:16:19 
18:16:19     Convenience function. Connect to *address* (a 2-tuple ``(host,
18:16:19     port)``) and return the socket object. Passing the optional
18:16:19     *timeout* parameter will set the timeout on the socket instance
18:16:19     before attempting to connect. If no *timeout* is supplied, the
18:16:19     global default timeout setting returned by :func:`socket.getdefaulttimeout`
18:16:19     is used. If *source_address* is set it must be a tuple of (host, port)
18:16:19     for the socket to bind as a source address before making the connection.
18:16:19     An host of '' or port 0 tells the OS to use the default.
18:16:19     """
18:16:19 
18:16:19     host, port = address
18:16:19     if host.startswith("["):
18:16:19         host = host.strip("[]")
18:16:19     err = None
18:16:19 
18:16:19     # Using the value from allowed_gai_family() in the context of getaddrinfo lets
18:16:19     # us select whether to work with IPv4 DNS records, IPv6 records, or both.
18:16:19     # The original create_connection function always returns all records.
18:16:19     family = allowed_gai_family()
18:16:19 
18:16:19     try:
18:16:19         host.encode("idna")
18:16:19     except UnicodeError:
18:16:19         raise LocationParseError(f"'{host}', label empty or too long") from None
18:16:19 
18:16:19     for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM):
18:16:19         af, socktype, proto, canonname, sa = res
18:16:19         sock = None
18:16:19         try:
18:16:19             sock = socket.socket(af, socktype, proto)
18:16:19 
18:16:19             # If provided, set socket level options before connecting.
18:16:19             _set_socket_options(sock, socket_options)
18:16:19 
18:16:19             if timeout is not _DEFAULT_TIMEOUT:
18:16:19                 sock.settimeout(timeout)
18:16:19             if source_address:
18:16:19                 sock.bind(source_address)
18:16:19 >           sock.connect(sa)
18:16:19 E           ConnectionRefusedError: [Errno 111] Connection refused
18:16:19 
18:16:19 ../.tox/tests121/lib/python3.11/site-packages/urllib3/util/connection.py:73: ConnectionRefusedError
18:16:19 
18:16:19 The above exception was the direct cause of the following exception:
18:16:19 
18:16:19 self = 
18:16:19 method = 'GET'
18:16:19 url = '/rests/data/ietf-network:networks/network=clli-network?content=config'
18:16:19 body = None
18:16:19 headers = {'User-Agent': 'python-requests/2.32.3', 'Accept-Encoding': 'gzip, deflate', 'Accept': 'application/json', 'Connection': 'keep-alive', 'Content-Type': 'application/json', 'Authorization': 'Basic YWRtaW46YWRtaW4='}
18:16:19 retries = Retry(total=0, connect=None, read=False, redirect=None, status=None)
18:16:19 redirect = False, assert_same_host = False
18:16:19 timeout = Timeout(connect=10, read=10, total=None), pool_timeout = None
18:16:19 release_conn = False, chunked = False, body_pos = None, preload_content = False
18:16:19 decode_content = False, response_kw = {}
18:16:19 parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/rests/data/ietf-network:networks/network=clli-network', query='content=config', fragment=None)
18:16:19 destination_scheme = None, conn = None, release_this_conn = True
18:16:19 http_tunnel_required = False, err = None, clean_exit = False
18:16:19 
18:16:19     def urlopen(  # type: ignore[override]
18:16:19         self,
18:16:19         method: str,
18:16:19         url: str,
18:16:19         body: _TYPE_BODY | None = None,
18:16:19         headers: typing.Mapping[str, str] | None = None,
18:16:19         retries: Retry | bool | int | None = None,
18:16:19         redirect: bool = True,
18:16:19         assert_same_host: bool = True,
18:16:19         timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT,
18:16:19         pool_timeout: int | None = None,
18:16:19         release_conn: bool | None = None,
18:16:19         chunked: bool = False,
18:16:19         body_pos: _TYPE_BODY_POSITION | None = None,
18:16:19         preload_content: bool = True,
18:16:19         decode_content: bool = True,
18:16:19         **response_kw: typing.Any,
18:16:19     ) -> BaseHTTPResponse:
18:16:19         """
18:16:19         Get a connection from the pool and perform an HTTP request. This is the
18:16:19         lowest level call for making a request, so you'll need to specify all
18:16:19         the raw details.
18:16:19 
18:16:19         .. note::
18:16:19 
18:16:19            More commonly, it's appropriate to use a convenience method
18:16:19            such as :meth:`request`.
18:16:19 
18:16:19         .. note::
18:16:19 
18:16:19            `release_conn` will only behave as expected if
18:16:19            `preload_content=False` because we want to make
18:16:19            `preload_content=False` the default behaviour someday soon without
18:16:19            breaking backwards compatibility.
18:16:19 
18:16:19         :param method:
18:16:19             HTTP request method (such as GET, POST, PUT, etc.)
18:16:19 
18:16:19         :param url:
18:16:19             The URL to perform the request on.
18:16:19 
18:16:19         :param body:
18:16:19             Data to send in the request body, either :class:`str`, :class:`bytes`,
18:16:19             an iterable of :class:`str`/:class:`bytes`, or a file-like object.
18:16:19 
18:16:19         :param headers:
18:16:19             Dictionary of custom headers to send, such as User-Agent,
18:16:19             If-None-Match, etc. If None, pool headers are used. If provided,
18:16:19             these headers completely replace any pool-specific headers.
18:16:19 
18:16:19         :param retries:
18:16:19             Configure the number of retries to allow before raising a
18:16:19             :class:`~urllib3.exceptions.MaxRetryError` exception.
18:16:19 
18:16:19             If ``None`` (default) will retry 3 times, see ``Retry.DEFAULT``. Pass a
18:16:19             :class:`~urllib3.util.retry.Retry` object for fine-grained control
18:16:19             over different types of retries.
18:16:19             Pass an integer number to retry connection errors that many times,
18:16:19             but no other types of errors. Pass zero to never retry.
18:16:19 
18:16:19             If ``False``, then retries are disabled and any exception is raised
18:16:19             immediately. Also, instead of raising a MaxRetryError on redirects,
18:16:19             the redirect response will be returned.
18:16:19 
18:16:19         :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int.
18:16:19 
18:16:19         :param redirect:
18:16:19             If True, automatically handle redirects (status codes 301, 302,
18:16:19             303, 307, 308). Each redirect counts as a retry. Disabling retries
18:16:19             will disable redirect, too.
18:16:19 
18:16:19         :param assert_same_host:
18:16:19             If ``True``, will make sure that the host of the pool requests is
18:16:19             consistent else will raise HostChangedError. When ``False``, you can
18:16:19             use the pool on an HTTP proxy and request foreign hosts.
18:16:19 
18:16:19         :param timeout:
18:16:19             If specified, overrides the default timeout for this one
18:16:19             request. It may be a float (in seconds) or an instance of
18:16:19             :class:`urllib3.util.Timeout`.
18:16:19 
18:16:19         :param pool_timeout:
18:16:19             If set and the pool is set to block=True, then this method will
18:16:19             block for ``pool_timeout`` seconds and raise EmptyPoolError if no
18:16:19             connection is available within the time period.
18:16:19 
18:16:19         :param bool preload_content:
18:16:19             If True, the response's body will be preloaded into memory.
18:16:19 
18:16:19         :param bool decode_content:
18:16:19             If True, will attempt to decode the body based on the
18:16:19             'content-encoding' header.
18:16:19 
18:16:19         :param release_conn:
18:16:19             If False, then the urlopen call will not release the connection
18:16:19             back into the pool once a response is received (but will release if
18:16:19             you read the entire contents of the response such as when
18:16:19             `preload_content=True`). This is useful if you're not preloading
18:16:19             the response's content immediately. You will need to call
18:16:19             ``r.release_conn()`` on the response ``r`` to return the connection
18:16:19             back into the pool. If None, it takes the value of ``preload_content``
18:16:19             which defaults to ``True``.
18:16:19 
18:16:19         :param bool chunked:
18:16:19             If True, urllib3 will send the body using chunked transfer
18:16:19             encoding. Otherwise, urllib3 will send the body using the standard
18:16:19             content-length form. Defaults to False.
18:16:19 
18:16:19         :param int body_pos:
18:16:19             Position to seek to in file-like body in the event of a retry or
18:16:19             redirect. Typically this won't need to be set because urllib3 will
18:16:19             auto-populate the value when needed.
18:16:19         """
18:16:19         parsed_url = parse_url(url)
18:16:19         destination_scheme = parsed_url.scheme
18:16:19 
18:16:19         if headers is None:
18:16:19             headers = self.headers
18:16:19 
18:16:19         if not isinstance(retries, Retry):
18:16:19             retries = Retry.from_int(retries, redirect=redirect, default=self.retries)
18:16:19 
18:16:19         if release_conn is None:
18:16:19             release_conn = preload_content
18:16:19 
18:16:19         # Check host
18:16:19         if assert_same_host and not self.is_same_host(url):
18:16:19             raise HostChangedError(self, url, retries)
18:16:19 
18:16:19         # Ensure that the URL we're connecting to is properly encoded
18:16:19         if url.startswith("/"):
18:16:19             url = to_str(_encode_target(url))
18:16:19         else:
18:16:19             url = to_str(parsed_url.url)
18:16:19 
18:16:19         conn = None
18:16:19 
18:16:19         # Track whether `conn` needs to be released before
18:16:19         # returning/raising/recursing. Update this variable if necessary, and
18:16:19         # leave `release_conn` constant throughout the function. That way, if
18:16:19         # the function recurses, the original value of `release_conn` will be
18:16:19         # passed down into the recursive call, and its value will be respected.
18:16:19         #
18:16:19         # See issue #651 [1] for details.
18:16:19 # 18:16:19 # [1] 18:16:19 release_this_conn = release_conn 18:16:19 18:16:19 http_tunnel_required = connection_requires_http_tunnel( 18:16:19 self.proxy, self.proxy_config, destination_scheme 18:16:19 ) 18:16:19 18:16:19 # Merge the proxy headers. Only done when not using HTTP CONNECT. We 18:16:19 # have to copy the headers dict so we can safely change it without those 18:16:19 # changes being reflected in anyone else's copy. 18:16:19 if not http_tunnel_required: 18:16:19 headers = headers.copy() # type: ignore[attr-defined] 18:16:19 headers.update(self.proxy_headers) # type: ignore[union-attr] 18:16:19 18:16:19 # Must keep the exception bound to a separate variable or else Python 3 18:16:19 # complains about UnboundLocalError. 18:16:19 err = None 18:16:19 18:16:19 # Keep track of whether we cleanly exited the except block. This 18:16:19 # ensures we do proper cleanup in finally. 18:16:19 clean_exit = False 18:16:19 18:16:19 # Rewind body position, if needed. Record current position 18:16:19 # for future rewinds in the event of a redirect/retry. 18:16:19 body_pos = set_file_position(body, body_pos) 18:16:19 18:16:19 try: 18:16:19 # Request a connection from the queue. 18:16:19 timeout_obj = self._get_timeout(timeout) 18:16:19 conn = self._get_conn(timeout=pool_timeout) 18:16:19 18:16:19 conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] 18:16:19 18:16:19 # Is this a closed/new connection that requires CONNECT tunnelling? 18:16:19 if self.proxy is not None and http_tunnel_required and conn.is_closed: 18:16:19 try: 18:16:19 self._prepare_proxy(conn) 18:16:19 except (BaseSSLError, OSError, SocketTimeout) as e: 18:16:19 self._raise_timeout( 18:16:19 err=e, url=self.proxy.url, timeout_value=conn.timeout 18:16:19 ) 18:16:19 raise 18:16:19 18:16:19 # If we're going to release the connection in ``finally:``, then 18:16:19 # the response doesn't need to know about the connection. 
Otherwise 18:16:19 # it will also try to release it and we'll have a double-release 18:16:19 # mess. 18:16:19 response_conn = conn if not release_conn else None 18:16:19 18:16:19 # Make the request on the HTTPConnection object 18:16:19 > response = self._make_request( 18:16:19 conn, 18:16:19 method, 18:16:19 url, 18:16:19 timeout=timeout_obj, 18:16:19 body=body, 18:16:19 headers=headers, 18:16:19 chunked=chunked, 18:16:19 retries=retries, 18:16:19 response_conn=response_conn, 18:16:19 preload_content=preload_content, 18:16:19 decode_content=decode_content, 18:16:19 **response_kw, 18:16:19 ) 18:16:19 18:16:19 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connectionpool.py:789: 18:16:19 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 18:16:19 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connectionpool.py:495: in _make_request 18:16:19 conn.request( 18:16:19 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:441: in request 18:16:19 self.endheaders() 18:16:19 /opt/pyenv/versions/3.11.7/lib/python3.11/http/client.py:1289: in endheaders 18:16:19 self._send_output(message_body, encode_chunked=encode_chunked) 18:16:19 /opt/pyenv/versions/3.11.7/lib/python3.11/http/client.py:1048: in _send_output 18:16:19 self.send(msg) 18:16:19 /opt/pyenv/versions/3.11.7/lib/python3.11/http/client.py:986: in send 18:16:19 self.connect() 18:16:19 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:279: in connect 18:16:19 self.sock = self._new_conn() 18:16:19 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 18:16:19 18:16:19 self = 18:16:19 18:16:19 def _new_conn(self) -> socket.socket: 18:16:19 """Establish a socket connection and set nodelay settings on it. 18:16:19 18:16:19 :return: New socket connection. 
        """
        try:
            sock = connection.create_connection(
                (self._dns_host, self.port),
                self.timeout,
                source_address=self.source_address,
                socket_options=self.socket_options,
            )
        except socket.gaierror as e:
            raise NameResolutionError(self.host, self, e) from e
        except SocketTimeout as e:
            raise ConnectTimeoutError(
                self,
                f"Connection to {self.host} timed out. (connect timeout={self.timeout})",
            ) from e

        except OSError as e:
>           raise NewConnectionError(
                self, f"Failed to establish a new connection: {e}"
            ) from e
E           urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused

../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:214: NewConnectionError

The above exception was the direct cause of the following exception:

self = 
request = , stream = False
timeout = Timeout(connect=10, read=10, total=None), verify = True, cert = None
proxies = OrderedDict()

    def send(
        self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None
    ):
        """Sends PreparedRequest object. Returns Response object.

        :param request: The :class:`PreparedRequest ` being sent.
        :param stream: (optional) Whether to stream the request content.
        :param timeout: (optional) How long to wait for the server to send
            data before giving up, as a float, or a :ref:`(connect timeout,
            read timeout) ` tuple.
        :type timeout: float or tuple or urllib3 Timeout object
        :param verify: (optional) Either a boolean, in which case it controls whether
            we verify the server's TLS certificate, or a string, in which case it
            must be a path to a CA bundle to use
        :param cert: (optional) Any user-provided SSL certificate to be trusted.
        :param proxies: (optional) The proxies dictionary to apply to the request.
        :rtype: requests.Response
        """

        try:
            conn = self.get_connection_with_tls_context(
                request, verify, proxies=proxies, cert=cert
            )
        except LocationValueError as e:
            raise InvalidURL(e, request=request)

        self.cert_verify(conn, request.url, verify, cert)
        url = self.request_url(request, proxies)
        self.add_headers(
            request,
            stream=stream,
            timeout=timeout,
            verify=verify,
            cert=cert,
            proxies=proxies,
        )

        chunked = not (request.body is None or "Content-Length" in request.headers)

        if isinstance(timeout, tuple):
            try:
                connect, read = timeout
                timeout = TimeoutSauce(connect=connect, read=read)
            except ValueError:
                raise ValueError(
                    f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, "
                    f"or a single float to set both timeouts to the same value."
                )
        elif isinstance(timeout, TimeoutSauce):
            pass
        else:
            timeout = TimeoutSauce(connect=timeout, read=timeout)

        try:
>           resp = conn.urlopen(
                method=request.method,
                url=url,
                body=request.body,
                headers=request.headers,
                redirect=False,
                assert_same_host=False,
                preload_content=False,
                decode_content=False,
                retries=self.max_retries,
                timeout=timeout,
                chunked=chunked,
            )

../.tox/tests121/lib/python3.11/site-packages/requests/adapters.py:667:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
../.tox/tests121/lib/python3.11/site-packages/urllib3/connectionpool.py:843: in urlopen
    retries = retries.increment(
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self = Retry(total=0, connect=None, read=False, redirect=None, status=None)
method = 'GET'
url = '/rests/data/ietf-network:networks/network=clli-network?content=config'
response = None
error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')
_pool = 
_stacktrace = 

    def increment(
        self,
        method: str | None = None,
        url: str | None = None,
        response: BaseHTTPResponse | None = None,
        error: Exception | None = None,
        _pool: ConnectionPool | None = None,
        _stacktrace: TracebackType | None = None,
    ) -> Self:
        """Return a new Retry object with incremented retry counters.

        :param response: A response object, or None, if the server did not
            return a response.
        :type response: :class:`~urllib3.response.BaseHTTPResponse`
        :param Exception error: An error encountered during the request, or
            None if the response was received successfully.

        :return: A new ``Retry`` object.
        """
        if self.total is False and error:
            # Disabled, indicate to re-raise the error.
            raise reraise(type(error), error, _stacktrace)

        total = self.total
        if total is not None:
            total -= 1

        connect = self.connect
        read = self.read
        redirect = self.redirect
        status_count = self.status
        other = self.other
        cause = "unknown"
        status = None
        redirect_location = None

        if error and self._is_connection_error(error):
            # Connect retry?
            if connect is False:
                raise reraise(type(error), error, _stacktrace)
            elif connect is not None:
                connect -= 1

        elif error and self._is_read_error(error):
            # Read retry?
            if read is False or method is None or not self._is_method_retryable(method):
                raise reraise(type(error), error, _stacktrace)
            elif read is not None:
                read -= 1

        elif error:
            # Other retry?
            if other is not None:
                other -= 1

        elif response and response.get_redirect_location():
            # Redirect retry?
            if redirect is not None:
                redirect -= 1
            cause = "too many redirects"
            response_redirect_location = response.get_redirect_location()
            if response_redirect_location:
                redirect_location = response_redirect_location
            status = response.status

        else:
            # Incrementing because of a server error like a 500 in
            # status_forcelist and the given method is in the allowed_methods
            cause = ResponseError.GENERIC_ERROR
            if response and response.status:
                if status_count is not None:
                    status_count -= 1
                cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status)
                status = response.status

        history = self.history + (
            RequestHistory(method, url, error, status, redirect_location),
        )

        new_retry = self.new(
            total=total,
            connect=connect,
            read=read,
            redirect=redirect,
            status=status_count,
            other=other,
            history=history,
        )

        if new_retry.is_exhausted():
            reason = error or ResponseError(cause)
>           raise MaxRetryError(_pool, url, reason) from reason  # type: ignore[arg-type]
E           urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=8182): Max retries exceeded with url: /rests/data/ietf-network:networks/network=clli-network?content=config (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused'))

../.tox/tests121/lib/python3.11/site-packages/urllib3/util/retry.py:519: MaxRetryError

During handling of the above exception, another exception occurred:

self = 

    def test_26_getClliNetwork(self):
>       response = test_utils.get_ietf_network_request('clli-network', 'config')

transportpce_tests/1.2.1/test03_topology.py:547:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
transportpce_tests/common/test_utils.py:495: in get_ietf_network_request
    response = get_request(url[RESTCONF_VERSION].format(*format_args))
transportpce_tests/common/test_utils.py:116: in get_request
    return requests.request(
../.tox/tests121/lib/python3.11/site-packages/requests/api.py:59: in request
    return session.request(method=method, url=url, **kwargs)
../.tox/tests121/lib/python3.11/site-packages/requests/sessions.py:589: in request
    resp = self.send(prep, **send_kwargs)
../.tox/tests121/lib/python3.11/site-packages/requests/sessions.py:703: in send
    r = adapter.send(request, **kwargs)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self = 
request = , stream = False
timeout = Timeout(connect=10, read=10, total=None), verify = True, cert = None
proxies = OrderedDict()

    def send(
        self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None
    ):
        """Sends PreparedRequest object. Returns Response object.

        :param request: The :class:`PreparedRequest ` being sent.
        :param stream: (optional) Whether to stream the request content.
        :param timeout: (optional) How long to wait for the server to send
            data before giving up, as a float, or a :ref:`(connect timeout,
            read timeout) ` tuple.
        :type timeout: float or tuple or urllib3 Timeout object
        :param verify: (optional) Either a boolean, in which case it controls whether
            we verify the server's TLS certificate, or a string, in which case it
            must be a path to a CA bundle to use
        :param cert: (optional) Any user-provided SSL certificate to be trusted.
        :param proxies: (optional) The proxies dictionary to apply to the request.
        :rtype: requests.Response
        """

        try:
            conn = self.get_connection_with_tls_context(
                request, verify, proxies=proxies, cert=cert
            )
        except LocationValueError as e:
            raise InvalidURL(e, request=request)

        self.cert_verify(conn, request.url, verify, cert)
        url = self.request_url(request, proxies)
        self.add_headers(
            request,
            stream=stream,
            timeout=timeout,
            verify=verify,
            cert=cert,
            proxies=proxies,
        )

        chunked = not (request.body is None or "Content-Length" in request.headers)

        if isinstance(timeout, tuple):
            try:
                connect, read = timeout
                timeout = TimeoutSauce(connect=connect, read=read)
            except ValueError:
                raise ValueError(
                    f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, "
                    f"or a single float to set both timeouts to the same value."
                )
        elif isinstance(timeout, TimeoutSauce):
            pass
        else:
            timeout = TimeoutSauce(connect=timeout, read=timeout)

        try:
            resp = conn.urlopen(
                method=request.method,
                url=url,
                body=request.body,
                headers=request.headers,
                redirect=False,
                assert_same_host=False,
                preload_content=False,
                decode_content=False,
                retries=self.max_retries,
                timeout=timeout,
                chunked=chunked,
            )

        except (ProtocolError, OSError) as err:
            raise ConnectionError(err, request=request)

        except MaxRetryError as e:
            if isinstance(e.reason, ConnectTimeoutError):
                # TODO: Remove this in 3.0.0: see #2811
                if not isinstance(e.reason, NewConnectionError):
                    raise ConnectTimeout(e, request=request)

            if isinstance(e.reason, ResponseError):
                raise RetryError(e, request=request)

            if isinstance(e.reason, _ProxyError):
                raise ProxyError(e, request=request)

            if isinstance(e.reason, _SSLError):
                # This branch is for urllib3 v1.22 and later.
                raise SSLError(e, request=request)

>           raise ConnectionError(e, request=request)
E           requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=8182): Max retries exceeded with url: /rests/data/ietf-network:networks/network=clli-network?content=config (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused'))

../.tox/tests121/lib/python3.11/site-packages/requests/adapters.py:700: ConnectionError
_______________ TransportPCETopologyTesting.test_27_verifyDegree _______________

self = 

    def _new_conn(self) -> socket.socket:
        """Establish a socket connection and set nodelay settings on it.

        :return: New socket connection.
        """
        try:
>           sock = connection.create_connection(
                (self._dns_host, self.port),
                self.timeout,
                source_address=self.source_address,
                socket_options=self.socket_options,
            )

../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:199:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
../.tox/tests121/lib/python3.11/site-packages/urllib3/util/connection.py:85: in create_connection
    raise err
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

address = ('localhost', 8182), timeout = 10, source_address = None
socket_options = [(6, 1, 1)]

    def create_connection(
        address: tuple[str, int],
        timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT,
        source_address: tuple[str, int] | None = None,
        socket_options: _TYPE_SOCKET_OPTIONS | None = None,
    ) -> socket.socket:
        """Connect to *address* and return the socket object.

        Convenience function. Connect to *address* (a 2-tuple ``(host,
        port)``) and return the socket object. Passing the optional
        *timeout* parameter will set the timeout on the socket instance
        before attempting to connect. If no *timeout* is supplied, the
        global default timeout setting returned by :func:`socket.getdefaulttimeout`
        is used. If *source_address* is set it must be a tuple of (host, port)
        for the socket to bind as a source address before making the connection.
        An host of '' or port 0 tells the OS to use the default.
        """

        host, port = address
        if host.startswith("["):
            host = host.strip("[]")
        err = None

        # Using the value from allowed_gai_family() in the context of getaddrinfo lets
        # us select whether to work with IPv4 DNS records, IPv6 records, or both.
        # The original create_connection function always returns all records.
        family = allowed_gai_family()

        try:
            host.encode("idna")
        except UnicodeError:
            raise LocationParseError(f"'{host}', label empty or too long") from None

        for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM):
            af, socktype, proto, canonname, sa = res
            sock = None
            try:
                sock = socket.socket(af, socktype, proto)

                # If provided, set socket level options before connecting.
                _set_socket_options(sock, socket_options)

                if timeout is not _DEFAULT_TIMEOUT:
                    sock.settimeout(timeout)
                if source_address:
                    sock.bind(source_address)
>               sock.connect(sa)
E               ConnectionRefusedError: [Errno 111] Connection refused

../.tox/tests121/lib/python3.11/site-packages/urllib3/util/connection.py:73: ConnectionRefusedError

The above exception was the direct cause of the following exception:

self = 
method = 'GET'
url = '/rests/data/ietf-network:networks/network=openroadm-topology?content=config'
body = None
headers = {'User-Agent': 'python-requests/2.32.3', 'Accept-Encoding': 'gzip, deflate', 'Accept': 'application/json', 'Connection': 'keep-alive', 'Content-Type': 'application/json', 'Authorization': 'Basic YWRtaW46YWRtaW4='}
retries = Retry(total=0, connect=None, read=False, redirect=None, status=None)
redirect = False, assert_same_host = False
timeout = Timeout(connect=10, read=10, total=None), pool_timeout = None
release_conn = False, chunked = False, body_pos = None, preload_content = False
decode_content = False, response_kw = {}
parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/rests/data/ietf-network:networks/network=openroadm-topology', query='content=config', fragment=None)
destination_scheme = None, conn = None, release_this_conn = True
http_tunnel_required = False, err = None, clean_exit = False

    def urlopen(  # type: ignore[override]
        self,
        method: str,
        url: str,
        body: _TYPE_BODY | None = None,
        headers: typing.Mapping[str, str] | None = None,
        retries: Retry | bool | int | None = None,
        redirect: bool = True,
        assert_same_host: bool = True,
        timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT,
        pool_timeout: int | None = None,
        release_conn: bool | None = None,
        chunked: bool = False,
        body_pos: _TYPE_BODY_POSITION | None = None,
        preload_content: bool = True,
        decode_content: bool = True,
        **response_kw: typing.Any,
    ) -> BaseHTTPResponse:
        """
        Get a connection from the pool and perform an HTTP request. This is the
        lowest level call for making a request, so you'll need to specify all
        the raw details.

        .. note::

            More commonly, it's appropriate to use a convenience method
            such as :meth:`request`.

        .. note::

            `release_conn` will only behave as expected if
            `preload_content=False` because we want to make
            `preload_content=False` the default behaviour someday soon without
            breaking backwards compatibility.

        :param method:
            HTTP request method (such as GET, POST, PUT, etc.)

        :param url:
            The URL to perform the request on.

        :param body:
            Data to send in the request body, either :class:`str`, :class:`bytes`,
            an iterable of :class:`str`/:class:`bytes`, or a file-like object.

        :param headers:
            Dictionary of custom headers to send, such as User-Agent,
            If-None-Match, etc. If None, pool headers are used. If provided,
            these headers completely replace any pool-specific headers.

        :param retries:
            Configure the number of retries to allow before raising a
            :class:`~urllib3.exceptions.MaxRetryError` exception.

            If ``None`` (default) will retry 3 times, see ``Retry.DEFAULT``. Pass a
            :class:`~urllib3.util.retry.Retry` object for fine-grained control
            over different types of retries.
            Pass an integer number to retry connection errors that many times,
            but no other types of errors. Pass zero to never retry.

            If ``False``, then retries are disabled and any exception is raised
            immediately. Also, instead of raising a MaxRetryError on redirects,
            the redirect response will be returned.

        :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int.

        :param redirect:
            If True, automatically handle redirects (status codes 301, 302,
            303, 307, 308). Each redirect counts as a retry. Disabling retries
            will disable redirect, too.

        :param assert_same_host:
            If ``True``, will make sure that the host of the pool requests is
            consistent else will raise HostChangedError. When ``False``, you can
            use the pool on an HTTP proxy and request foreign hosts.

        :param timeout:
            If specified, overrides the default timeout for this one
            request. It may be a float (in seconds) or an instance of
            :class:`urllib3.util.Timeout`.

        :param pool_timeout:
            If set and the pool is set to block=True, then this method will
            block for ``pool_timeout`` seconds and raise EmptyPoolError if no
            connection is available within the time period.

        :param bool preload_content:
            If True, the response's body will be preloaded into memory.

        :param bool decode_content:
            If True, will attempt to decode the body based on the
            'content-encoding' header.

        :param release_conn:
            If False, then the urlopen call will not release the connection
            back into the pool once a response is received (but will release if
            you read the entire contents of the response such as when
            `preload_content=True`). This is useful if you're not preloading
            the response's content immediately. You will need to call
            ``r.release_conn()`` on the response ``r`` to return the connection
            back into the pool. If None, it takes the value of ``preload_content``
            which defaults to ``True``.

        :param bool chunked:
            If True, urllib3 will send the body using chunked transfer
            encoding. Otherwise, urllib3 will send the body using the standard
            content-length form. Defaults to False.

        :param int body_pos:
            Position to seek to in file-like body in the event of a retry or
            redirect. Typically this won't need to be set because urllib3 will
            auto-populate the value when needed.
        """
        parsed_url = parse_url(url)
        destination_scheme = parsed_url.scheme

        if headers is None:
            headers = self.headers

        if not isinstance(retries, Retry):
            retries = Retry.from_int(retries, redirect=redirect, default=self.retries)

        if release_conn is None:
            release_conn = preload_content

        # Check host
        if assert_same_host and not self.is_same_host(url):
            raise HostChangedError(self, url, retries)

        # Ensure that the URL we're connecting to is properly encoded
        if url.startswith("/"):
            url = to_str(_encode_target(url))
        else:
            url = to_str(parsed_url.url)

        conn = None

        # Track whether `conn` needs to be released before
        # returning/raising/recursing. Update this variable if necessary, and
        # leave `release_conn` constant throughout the function. That way, if
        # the function recurses, the original value of `release_conn` will be
        # passed down into the recursive call, and its value will be respected.
        #
        # See issue #651 [1] for details.
        #
        # [1]
        release_this_conn = release_conn

        http_tunnel_required = connection_requires_http_tunnel(
            self.proxy, self.proxy_config, destination_scheme
        )

        # Merge the proxy headers. Only done when not using HTTP CONNECT. We
        # have to copy the headers dict so we can safely change it without those
        # changes being reflected in anyone else's copy.
        if not http_tunnel_required:
            headers = headers.copy()  # type: ignore[attr-defined]
            headers.update(self.proxy_headers)  # type: ignore[union-attr]

        # Must keep the exception bound to a separate variable or else Python 3
        # complains about UnboundLocalError.
        err = None

        # Keep track of whether we cleanly exited the except block. This
        # ensures we do proper cleanup in finally.
        clean_exit = False

        # Rewind body position, if needed. Record current position
        # for future rewinds in the event of a redirect/retry.
        body_pos = set_file_position(body, body_pos)

        try:
            # Request a connection from the queue.
            timeout_obj = self._get_timeout(timeout)
            conn = self._get_conn(timeout=pool_timeout)

            conn.timeout = timeout_obj.connect_timeout  # type: ignore[assignment]

            # Is this a closed/new connection that requires CONNECT tunnelling?
            if self.proxy is not None and http_tunnel_required and conn.is_closed:
                try:
                    self._prepare_proxy(conn)
                except (BaseSSLError, OSError, SocketTimeout) as e:
                    self._raise_timeout(
                        err=e, url=self.proxy.url, timeout_value=conn.timeout
                    )
                    raise

            # If we're going to release the connection in ``finally:``, then
            # the response doesn't need to know about the connection. Otherwise
            # it will also try to release it and we'll have a double-release
            # mess.
            response_conn = conn if not release_conn else None

            # Make the request on the HTTPConnection object
>           response = self._make_request(
                conn,
                method,
                url,
                timeout=timeout_obj,
                body=body,
                headers=headers,
                chunked=chunked,
                retries=retries,
                response_conn=response_conn,
                preload_content=preload_content,
                decode_content=decode_content,
                **response_kw,
            )

../.tox/tests121/lib/python3.11/site-packages/urllib3/connectionpool.py:789:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
../.tox/tests121/lib/python3.11/site-packages/urllib3/connectionpool.py:495: in _make_request
    conn.request(
../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:441: in request
    self.endheaders()
/opt/pyenv/versions/3.11.7/lib/python3.11/http/client.py:1289: in endheaders
    self._send_output(message_body, encode_chunked=encode_chunked)
/opt/pyenv/versions/3.11.7/lib/python3.11/http/client.py:1048: in _send_output
    self.send(msg)
/opt/pyenv/versions/3.11.7/lib/python3.11/http/client.py:986: in send
    self.connect()
../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:279: in connect
    self.sock = self._new_conn()
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self = 

    def _new_conn(self) -> socket.socket:
        """Establish a socket connection and set nodelay settings on it.

        :return: New socket connection.
        """
        try:
            sock = connection.create_connection(
                (self._dns_host, self.port),
                self.timeout,
                source_address=self.source_address,
                socket_options=self.socket_options,
            )
        except socket.gaierror as e:
            raise NameResolutionError(self.host, self, e) from e
        except SocketTimeout as e:
            raise ConnectTimeoutError(
                self,
                f"Connection to {self.host} timed out. (connect timeout={self.timeout})",
            ) from e

        except OSError as e:
>           raise NewConnectionError(
                self, f"Failed to establish a new connection: {e}"
            ) from e
E           urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused

../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:214: NewConnectionError

The above exception was the direct cause of the following exception:

self = 
request = , stream = False
timeout = Timeout(connect=10, read=10, total=None), verify = True, cert = None
proxies = OrderedDict()

    def send(
        self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None
    ):
        """Sends PreparedRequest object. Returns Response object.

        :param request: The :class:`PreparedRequest ` being sent.
        :param stream: (optional) Whether to stream the request content.
        :param timeout: (optional) How long to wait for the server to send
            data before giving up, as a float, or a :ref:`(connect timeout,
            read timeout) ` tuple.
        :type timeout: float or tuple or urllib3 Timeout object
        :param verify: (optional) Either a boolean, in which case it controls whether
            we verify the server's TLS certificate, or a string, in which case it
            must be a path to a CA bundle to use
        :param cert: (optional) Any user-provided SSL certificate to be trusted.
        :param proxies: (optional) The proxies dictionary to apply to the request.
        :rtype: requests.Response
        """

        try:
            conn = self.get_connection_with_tls_context(
                request, verify, proxies=proxies, cert=cert
            )
        except LocationValueError as e:
            raise InvalidURL(e, request=request)

        self.cert_verify(conn, request.url, verify, cert)
        url = self.request_url(request, proxies)
        self.add_headers(
            request,
            stream=stream,
            timeout=timeout,
            verify=verify,
            cert=cert,
            proxies=proxies,
        )

        chunked = not (request.body is None or "Content-Length" in request.headers)

        if isinstance(timeout, tuple):
            try:
                connect, read = timeout
                timeout = TimeoutSauce(connect=connect, read=read)
            except ValueError:
                raise ValueError(
                    f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, "
                    f"or a single float to set both timeouts to the same value."
                )
        elif isinstance(timeout, TimeoutSauce):
            pass
        else:
            timeout = TimeoutSauce(connect=timeout, read=timeout)

        try:
>           resp = conn.urlopen(
                method=request.method,
                url=url,
                body=request.body,
                headers=request.headers,
                redirect=False,
                assert_same_host=False,
                preload_content=False,
                decode_content=False,
                retries=self.max_retries,
                timeout=timeout,
                chunked=chunked,
            )

../.tox/tests121/lib/python3.11/site-packages/requests/adapters.py:667:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
../.tox/tests121/lib/python3.11/site-packages/urllib3/connectionpool.py:843: in urlopen
    retries = retries.increment(
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self = Retry(total=0, connect=None, read=False, redirect=None, status=None)
method = 'GET'
url = '/rests/data/ietf-network:networks/network=openroadm-topology?content=config'
response = None
error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')
_pool = 
_stacktrace = 

    def increment(
        self,
        method: str | None = None,
        url: str | None = None,
        response: BaseHTTPResponse | None = None,
        error: Exception | None = None,
        _pool: ConnectionPool | None = None,
        _stacktrace: TracebackType | None = None,
    ) -> Self:
        """Return a new Retry object with incremented retry counters.

        :param response: A response object, or None, if the server did not
            return a response.
18:16:19 :type response: :class:`~urllib3.response.BaseHTTPResponse` 18:16:19 :param Exception error: An error encountered during the request, or 18:16:19 None if the response was received successfully. 18:16:19 18:16:19 :return: A new ``Retry`` object. 18:16:19 """ 18:16:19 if self.total is False and error: 18:16:19 # Disabled, indicate to re-raise the error. 18:16:19 raise reraise(type(error), error, _stacktrace) 18:16:19 18:16:19 total = self.total 18:16:19 if total is not None: 18:16:19 total -= 1 18:16:19 18:16:19 connect = self.connect 18:16:19 read = self.read 18:16:19 redirect = self.redirect 18:16:19 status_count = self.status 18:16:19 other = self.other 18:16:19 cause = "unknown" 18:16:19 status = None 18:16:19 redirect_location = None 18:16:19 18:16:19 if error and self._is_connection_error(error): 18:16:19 # Connect retry? 18:16:19 if connect is False: 18:16:19 raise reraise(type(error), error, _stacktrace) 18:16:19 elif connect is not None: 18:16:19 connect -= 1 18:16:19 18:16:19 elif error and self._is_read_error(error): 18:16:19 # Read retry? 18:16:19 if read is False or method is None or not self._is_method_retryable(method): 18:16:19 raise reraise(type(error), error, _stacktrace) 18:16:19 elif read is not None: 18:16:19 read -= 1 18:16:19 18:16:19 elif error: 18:16:19 # Other retry? 18:16:19 if other is not None: 18:16:19 other -= 1 18:16:19 18:16:19 elif response and response.get_redirect_location(): 18:16:19 # Redirect retry? 
18:16:19 if redirect is not None: 18:16:19 redirect -= 1 18:16:19 cause = "too many redirects" 18:16:19 response_redirect_location = response.get_redirect_location() 18:16:19 if response_redirect_location: 18:16:19 redirect_location = response_redirect_location 18:16:19 status = response.status 18:16:19 18:16:19 else: 18:16:19 # Incrementing because of a server error like a 500 in 18:16:19 # status_forcelist and the given method is in the allowed_methods 18:16:19 cause = ResponseError.GENERIC_ERROR 18:16:19 if response and response.status: 18:16:19 if status_count is not None: 18:16:19 status_count -= 1 18:16:19 cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status) 18:16:19 status = response.status 18:16:19 18:16:19 history = self.history + ( 18:16:19 RequestHistory(method, url, error, status, redirect_location), 18:16:19 ) 18:16:19 18:16:19 new_retry = self.new( 18:16:19 total=total, 18:16:19 connect=connect, 18:16:19 read=read, 18:16:19 redirect=redirect, 18:16:19 status=status_count, 18:16:19 other=other, 18:16:19 history=history, 18:16:19 ) 18:16:19 18:16:19 if new_retry.is_exhausted(): 18:16:19 reason = error or ResponseError(cause) 18:16:19 > raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type] 18:16:19 E urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=8182): Max retries exceeded with url: /rests/data/ietf-network:networks/network=openroadm-topology?content=config (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 18:16:19 18:16:19 ../.tox/tests121/lib/python3.11/site-packages/urllib3/util/retry.py:519: MaxRetryError 18:16:19 18:16:19 During handling of the above exception, another exception occurred: 18:16:19 18:16:19 self = 18:16:19 18:16:19 def test_27_verifyDegree(self): 18:16:19 > response = test_utils.get_ietf_network_request('openroadm-topology', 'config') 18:16:19 18:16:19 transportpce_tests/1.2.1/test03_topology.py:561: 
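The `Retry(total=0, connect=None, read=False, ...)` object in the frames above exhausts on the very first connection error: `increment()` decrements `total` to -1, `is_exhausted()` becomes true, and `MaxRetryError` is raised instead of retrying. A toy sketch of just that budget arithmetic (not the real urllib3 class) shows why one refused connection is immediately fatal here:

```python
# Toy re-implementation of the total-budget bookkeeping from the
# Retry.increment() frames above -- not the real urllib3 Retry class.
def increment_total(total):
    """Decrement the retry budget; report whether it is now exhausted."""
    if total is not None:
        total -= 1
    exhausted = total is not None and total < 0
    return total, exhausted

# With total=0 (the setting shown in the log), a single failure exhausts
# the budget, so urllib3 raises MaxRetryError rather than retrying.
print(increment_total(0))  # (-1, True)
print(increment_total(3))  # (2, False)
```

A `total=None` budget is never decremented and never exhausts on its own, which is why the other counters (`connect`, `read`, ...) exist as independent limits.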
18:16:19 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
18:16:19 transportpce_tests/common/test_utils.py:495: in get_ietf_network_request
18:16:19     response = get_request(url[RESTCONF_VERSION].format(*format_args))
18:16:19 transportpce_tests/common/test_utils.py:116: in get_request
18:16:19     return requests.request(
18:16:19 ../.tox/tests121/lib/python3.11/site-packages/requests/api.py:59: in request
18:16:19     return session.request(method=method, url=url, **kwargs)
18:16:19 ../.tox/tests121/lib/python3.11/site-packages/requests/sessions.py:589: in request
18:16:19     resp = self.send(prep, **send_kwargs)
18:16:19 ../.tox/tests121/lib/python3.11/site-packages/requests/sessions.py:703: in send
18:16:19     r = adapter.send(request, **kwargs)
18:16:19 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
18:16:19
18:16:19 self =
18:16:19 request = , stream = False
18:16:19 timeout = Timeout(connect=10, read=10, total=None), verify = True, cert = None
18:16:19 proxies = OrderedDict()
18:16:19
18:16:19     def send(
18:16:19         self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None
18:16:19     ):
18:16:19         """Sends PreparedRequest object. Returns Response object.
18:16:19
18:16:19         :param request: The :class:`PreparedRequest ` being sent.
18:16:19         :param stream: (optional) Whether to stream the request content.
18:16:19         :param timeout: (optional) How long to wait for the server to send
18:16:19             data before giving up, as a float, or a :ref:`(connect timeout,
18:16:19             read timeout) ` tuple.
18:16:19         :type timeout: float or tuple or urllib3 Timeout object
18:16:19         :param verify: (optional) Either a boolean, in which case it controls whether
18:16:19             we verify the server's TLS certificate, or a string, in which case it
18:16:19             must be a path to a CA bundle to use
18:16:19         :param cert: (optional) Any user-provided SSL certificate to be trusted.
18:16:19         :param proxies: (optional) The proxies dictionary to apply to the request.
18:16:19         :rtype: requests.Response
18:16:19         """
18:16:19
18:16:19         try:
18:16:19             conn = self.get_connection_with_tls_context(
18:16:19                 request, verify, proxies=proxies, cert=cert
18:16:19             )
18:16:19         except LocationValueError as e:
18:16:19             raise InvalidURL(e, request=request)
18:16:19
18:16:19         self.cert_verify(conn, request.url, verify, cert)
18:16:19         url = self.request_url(request, proxies)
18:16:19         self.add_headers(
18:16:19             request,
18:16:19             stream=stream,
18:16:19             timeout=timeout,
18:16:19             verify=verify,
18:16:19             cert=cert,
18:16:19             proxies=proxies,
18:16:19         )
18:16:19
18:16:19         chunked = not (request.body is None or "Content-Length" in request.headers)
18:16:19
18:16:19         if isinstance(timeout, tuple):
18:16:19             try:
18:16:19                 connect, read = timeout
18:16:19                 timeout = TimeoutSauce(connect=connect, read=read)
18:16:19             except ValueError:
18:16:19                 raise ValueError(
18:16:19                     f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, "
18:16:19                     f"or a single float to set both timeouts to the same value."
18:16:19                 )
18:16:19         elif isinstance(timeout, TimeoutSauce):
18:16:19             pass
18:16:19         else:
18:16:19             timeout = TimeoutSauce(connect=timeout, read=timeout)
18:16:19
18:16:19         try:
18:16:19             resp = conn.urlopen(
18:16:19                 method=request.method,
18:16:19                 url=url,
18:16:19                 body=request.body,
18:16:19                 headers=request.headers,
18:16:19                 redirect=False,
18:16:19                 assert_same_host=False,
18:16:19                 preload_content=False,
18:16:19                 decode_content=False,
18:16:19                 retries=self.max_retries,
18:16:19                 timeout=timeout,
18:16:19                 chunked=chunked,
18:16:19             )
18:16:19
18:16:19         except (ProtocolError, OSError) as err:
18:16:19             raise ConnectionError(err, request=request)
18:16:19
18:16:19         except MaxRetryError as e:
18:16:19             if isinstance(e.reason, ConnectTimeoutError):
18:16:19                 # TODO: Remove this in 3.0.0: see #2811
18:16:19                 if not isinstance(e.reason, NewConnectionError):
18:16:19                     raise ConnectTimeout(e, request=request)
18:16:19
18:16:19             if isinstance(e.reason, ResponseError):
18:16:19                 raise RetryError(e, request=request)
18:16:19
18:16:19             if isinstance(e.reason, _ProxyError):
18:16:19                 raise ProxyError(e, request=request)
18:16:19
18:16:19             if isinstance(e.reason, _SSLError):
18:16:19                 # This branch is for urllib3 v1.22 and later.
18:16:19                 raise SSLError(e, request=request)
18:16:19
18:16:19 >           raise ConnectionError(e, request=request)
18:16:19 E           requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=8182): Max retries exceeded with url: /rests/data/ietf-network:networks/network=openroadm-topology?content=config (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused'))
18:16:19
18:16:19 ../.tox/tests121/lib/python3.11/site-packages/requests/adapters.py:700: ConnectionError
18:16:19 ________ TransportPCETopologyTesting.test_28_verifyOppositeLinkTopology ________
18:16:19
18:16:19 self =
18:16:19
18:16:19     def _new_conn(self) -> socket.socket:
18:16:19         """Establish a socket connection and set nodelay settings on it.
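The "The above exception was the direct cause of the following exception" markers in this log come from Python's explicit exception chaining: each layer re-raises with `raise ... from e`, so the original `[Errno 111]` remains reachable through `__cause__`. A minimal sketch with stand-in exception classes (not the real urllib3/requests types) reproduces the chain shape seen above:

```python
import errno

# Stand-ins for the urllib3 exception types that appear in the log.
class NewConnectionError(Exception):
    pass

class MaxRetryError(Exception):
    pass

try:
    try:
        try:
            # CPython maps ECONNREFUSED onto the ConnectionRefusedError subclass.
            raise OSError(errno.ECONNREFUSED, "Connection refused")
        except OSError as e:
            raise NewConnectionError("Failed to establish a new connection") from e
    except NewConnectionError as e:
        raise MaxRetryError("Max retries exceeded") from e
except MaxRetryError as exc:
    # Walking __cause__ recovers the whole chain the traceback prints.
    print(type(exc.__cause__).__name__)            # NewConnectionError
    print(type(exc.__cause__.__cause__).__name__)  # ConnectionRefusedError
```

requests then catches the `MaxRetryError` at the top of this chain and wraps it in its own `ConnectionError`, which is the last exception the test report shows.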
18:16:19
18:16:19         :return: New socket connection.
18:16:19         """
18:16:19         try:
18:16:19 >           sock = connection.create_connection(
18:16:19                 (self._dns_host, self.port),
18:16:19                 self.timeout,
18:16:19                 source_address=self.source_address,
18:16:19                 socket_options=self.socket_options,
18:16:19             )
18:16:19
18:16:19 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:199:
18:16:19 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
18:16:19 ../.tox/tests121/lib/python3.11/site-packages/urllib3/util/connection.py:85: in create_connection
18:16:19     raise err
18:16:19 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
18:16:19
18:16:19 address = ('localhost', 8182), timeout = 10, source_address = None
18:16:19 socket_options = [(6, 1, 1)]
18:16:19
18:16:19     def create_connection(
18:16:19         address: tuple[str, int],
18:16:19         timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT,
18:16:19         source_address: tuple[str, int] | None = None,
18:16:19         socket_options: _TYPE_SOCKET_OPTIONS | None = None,
18:16:19     ) -> socket.socket:
18:16:19         """Connect to *address* and return the socket object.
18:16:19
18:16:19         Convenience function.  Connect to *address* (a 2-tuple ``(host,
18:16:19         port)``) and return the socket object.  Passing the optional
18:16:19         *timeout* parameter will set the timeout on the socket instance
18:16:19         before attempting to connect.  If no *timeout* is supplied, the
18:16:19         global default timeout setting returned by :func:`socket.getdefaulttimeout`
18:16:19         is used.  If *source_address* is set it must be a tuple of (host, port)
18:16:19         for the socket to bind as a source address before making the connection.
18:16:19         An host of '' or port 0 tells the OS to use the default.
18:16:19         """
18:16:19
18:16:19         host, port = address
18:16:19         if host.startswith("["):
18:16:19             host = host.strip("[]")
18:16:19         err = None
18:16:19
18:16:19         # Using the value from allowed_gai_family() in the context of getaddrinfo lets
18:16:19         # us select whether to work with IPv4 DNS records, IPv6 records, or both.
18:16:19         # The original create_connection function always returns all records.
18:16:19         family = allowed_gai_family()
18:16:19
18:16:19         try:
18:16:19             host.encode("idna")
18:16:19         except UnicodeError:
18:16:19             raise LocationParseError(f"'{host}', label empty or too long") from None
18:16:19
18:16:19         for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM):
18:16:19             af, socktype, proto, canonname, sa = res
18:16:19             sock = None
18:16:19             try:
18:16:19                 sock = socket.socket(af, socktype, proto)
18:16:19
18:16:19                 # If provided, set socket level options before connecting.
18:16:19                 _set_socket_options(sock, socket_options)
18:16:19
18:16:19                 if timeout is not _DEFAULT_TIMEOUT:
18:16:19                     sock.settimeout(timeout)
18:16:19                 if source_address:
18:16:19                     sock.bind(source_address)
18:16:19 >               sock.connect(sa)
18:16:19 E               ConnectionRefusedError: [Errno 111] Connection refused
18:16:19
18:16:19 ../.tox/tests121/lib/python3.11/site-packages/urllib3/util/connection.py:73: ConnectionRefusedError
18:16:19
18:16:19 The above exception was the direct cause of the following exception:
18:16:19
18:16:19 self =
18:16:19 method = 'GET'
18:16:19 url = '/rests/data/ietf-network:networks/network=openroadm-topology?content=config'
18:16:19 body = None
18:16:19 headers = {'User-Agent': 'python-requests/2.32.3', 'Accept-Encoding': 'gzip, deflate', 'Accept': 'application/json', 'Connection': 'keep-alive', 'Content-Type': 'application/json', 'Authorization': 'Basic YWRtaW46YWRtaW4='}
18:16:19 retries = Retry(total=0, connect=None, read=False, redirect=None, status=None)
18:16:19 redirect = False, assert_same_host = False
18:16:19 timeout = Timeout(connect=10, read=10, total=None), pool_timeout = None
18:16:19 release_conn = False, chunked = False, body_pos = None, preload_content = False
18:16:19 decode_content = False, response_kw = {}
18:16:19 parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/rests/data/ietf-network:networks/network=openroadm-topology', query='content=config', fragment=None)
18:16:19 destination_scheme = None, conn = None, release_this_conn = True
18:16:19 http_tunnel_required = False, err = None, clean_exit = False
18:16:19
18:16:19     def urlopen(  # type: ignore[override]
18:16:19         self,
18:16:19         method: str,
18:16:19         url: str,
18:16:19         body: _TYPE_BODY | None = None,
18:16:19         headers: typing.Mapping[str, str] | None = None,
18:16:19         retries: Retry | bool | int | None = None,
18:16:19         redirect: bool = True,
18:16:19         assert_same_host: bool = True,
18:16:19         timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT,
18:16:19         pool_timeout: int | None = None,
18:16:19         release_conn: bool | None = None,
18:16:19         chunked: bool = False,
18:16:19         body_pos: _TYPE_BODY_POSITION | None = None,
18:16:19         preload_content: bool = True,
18:16:19         decode_content: bool = True,
18:16:19         **response_kw: typing.Any,
18:16:19     ) -> BaseHTTPResponse:
18:16:19         """
18:16:19         Get a connection from the pool and perform an HTTP request. This is the
18:16:19         lowest level call for making a request, so you'll need to specify all
18:16:19         the raw details.
18:16:19
18:16:19         .. note::
18:16:19
18:16:19            More commonly, it's appropriate to use a convenience method
18:16:19            such as :meth:`request`.
18:16:19
18:16:19         .. note::
18:16:19
18:16:19            `release_conn` will only behave as expected if
18:16:19            `preload_content=False` because we want to make
18:16:19            `preload_content=False` the default behaviour someday soon without
18:16:19            breaking backwards compatibility.
18:16:19
18:16:19         :param method:
18:16:19             HTTP request method (such as GET, POST, PUT, etc.)
18:16:19
18:16:19         :param url:
18:16:19             The URL to perform the request on.
18:16:19
18:16:19         :param body:
18:16:19             Data to send in the request body, either :class:`str`, :class:`bytes`,
18:16:19             an iterable of :class:`str`/:class:`bytes`, or a file-like object.
18:16:19
18:16:19         :param headers:
18:16:19             Dictionary of custom headers to send, such as User-Agent,
18:16:19             If-None-Match, etc. If None, pool headers are used. If provided,
18:16:19             these headers completely replace any pool-specific headers.
18:16:19
18:16:19         :param retries:
18:16:19             Configure the number of retries to allow before raising a
18:16:19             :class:`~urllib3.exceptions.MaxRetryError` exception.
18:16:19
18:16:19             If ``None`` (default) will retry 3 times, see ``Retry.DEFAULT``. Pass a
18:16:19             :class:`~urllib3.util.retry.Retry` object for fine-grained control
18:16:19             over different types of retries.
18:16:19             Pass an integer number to retry connection errors that many times,
18:16:19             but no other types of errors. Pass zero to never retry.
18:16:19
18:16:19             If ``False``, then retries are disabled and any exception is raised
18:16:19             immediately. Also, instead of raising a MaxRetryError on redirects,
18:16:19             the redirect response will be returned.
18:16:19
18:16:19         :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int.
18:16:19
18:16:19         :param redirect:
18:16:19             If True, automatically handle redirects (status codes 301, 302,
18:16:19             303, 307, 308). Each redirect counts as a retry. Disabling retries
18:16:19             will disable redirect, too.
18:16:19
18:16:19         :param assert_same_host:
18:16:19             If ``True``, will make sure that the host of the pool requests is
18:16:19             consistent else will raise HostChangedError. When ``False``, you can
18:16:19             use the pool on an HTTP proxy and request foreign hosts.
18:16:19
18:16:19         :param timeout:
18:16:19             If specified, overrides the default timeout for this one
18:16:19             request. It may be a float (in seconds) or an instance of
18:16:19             :class:`urllib3.util.Timeout`.
18:16:19
18:16:19         :param pool_timeout:
18:16:19             If set and the pool is set to block=True, then this method will
18:16:19             block for ``pool_timeout`` seconds and raise EmptyPoolError if no
18:16:19             connection is available within the time period.
18:16:19
18:16:19         :param bool preload_content:
18:16:19             If True, the response's body will be preloaded into memory.
18:16:19
18:16:19         :param bool decode_content:
18:16:19             If True, will attempt to decode the body based on the
18:16:19             'content-encoding' header.
18:16:19
18:16:19         :param release_conn:
18:16:19             If False, then the urlopen call will not release the connection
18:16:19             back into the pool once a response is received (but will release if
18:16:19             you read the entire contents of the response such as when
18:16:19             `preload_content=True`). This is useful if you're not preloading
18:16:19             the response's content immediately. You will need to call
18:16:19             ``r.release_conn()`` on the response ``r`` to return the connection
18:16:19             back into the pool. If None, it takes the value of ``preload_content``
18:16:19             which defaults to ``True``.
18:16:19
18:16:19         :param bool chunked:
18:16:19             If True, urllib3 will send the body using chunked transfer
18:16:19             encoding. Otherwise, urllib3 will send the body using the standard
18:16:19             content-length form. Defaults to False.
18:16:19
18:16:19         :param int body_pos:
18:16:19             Position to seek to in file-like body in the event of a retry or
18:16:19             redirect. Typically this won't need to be set because urllib3 will
18:16:19             auto-populate the value when needed.
18:16:19         """
18:16:19         parsed_url = parse_url(url)
18:16:19         destination_scheme = parsed_url.scheme
18:16:19
18:16:19         if headers is None:
18:16:19             headers = self.headers
18:16:19
18:16:19         if not isinstance(retries, Retry):
18:16:19             retries = Retry.from_int(retries, redirect=redirect, default=self.retries)
18:16:19
18:16:19         if release_conn is None:
18:16:19             release_conn = preload_content
18:16:19
18:16:19         # Check host
18:16:19         if assert_same_host and not self.is_same_host(url):
18:16:19             raise HostChangedError(self, url, retries)
18:16:19
18:16:19         # Ensure that the URL we're connecting to is properly encoded
18:16:19         if url.startswith("/"):
18:16:19             url = to_str(_encode_target(url))
18:16:19         else:
18:16:19             url = to_str(parsed_url.url)
18:16:19
18:16:19         conn = None
18:16:19
18:16:19         # Track whether `conn` needs to be released before
18:16:19         # returning/raising/recursing. Update this variable if necessary, and
18:16:19         # leave `release_conn` constant throughout the function. That way, if
18:16:19         # the function recurses, the original value of `release_conn` will be
18:16:19         # passed down into the recursive call, and its value will be respected.
18:16:19         #
18:16:19         # See issue #651 [1] for details.
18:16:19         #
18:16:19         # [1]
18:16:19         release_this_conn = release_conn
18:16:19
18:16:19         http_tunnel_required = connection_requires_http_tunnel(
18:16:19             self.proxy, self.proxy_config, destination_scheme
18:16:19         )
18:16:19
18:16:19         # Merge the proxy headers. Only done when not using HTTP CONNECT. We
18:16:19         # have to copy the headers dict so we can safely change it without those
18:16:19         # changes being reflected in anyone else's copy.
18:16:19         if not http_tunnel_required:
18:16:19             headers = headers.copy()  # type: ignore[attr-defined]
18:16:19             headers.update(self.proxy_headers)  # type: ignore[union-attr]
18:16:19
18:16:19         # Must keep the exception bound to a separate variable or else Python 3
18:16:19         # complains about UnboundLocalError.
18:16:19         err = None
18:16:19
18:16:19         # Keep track of whether we cleanly exited the except block. This
18:16:19         # ensures we do proper cleanup in finally.
18:16:19         clean_exit = False
18:16:19
18:16:19         # Rewind body position, if needed. Record current position
18:16:19         # for future rewinds in the event of a redirect/retry.
18:16:19         body_pos = set_file_position(body, body_pos)
18:16:19
18:16:19         try:
18:16:19             # Request a connection from the queue.
18:16:19             timeout_obj = self._get_timeout(timeout)
18:16:19             conn = self._get_conn(timeout=pool_timeout)
18:16:19
18:16:19             conn.timeout = timeout_obj.connect_timeout  # type: ignore[assignment]
18:16:19
18:16:19             # Is this a closed/new connection that requires CONNECT tunnelling?
18:16:19             if self.proxy is not None and http_tunnel_required and conn.is_closed:
18:16:19                 try:
18:16:19                     self._prepare_proxy(conn)
18:16:19                 except (BaseSSLError, OSError, SocketTimeout) as e:
18:16:19                     self._raise_timeout(
18:16:19                         err=e, url=self.proxy.url, timeout_value=conn.timeout
18:16:19                     )
18:16:19                     raise
18:16:19
18:16:19             # If we're going to release the connection in ``finally:``, then
18:16:19             # the response doesn't need to know about the connection. Otherwise
18:16:19             # it will also try to release it and we'll have a double-release
18:16:19             # mess.
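The `urlopen` docstring above notes that `release_conn`, when left as `None`, takes the value of `preload_content`, and the body applies exactly that fallback. The rule can be sketched as a one-liner (a toy stand-in, not the urllib3 API):

```python
# Toy sketch of the release_conn default described above: an explicit
# value wins; None falls back to preload_content (which defaults to True).
def resolve_release_conn(release_conn, preload_content=True):
    return preload_content if release_conn is None else release_conn

print(resolve_release_conn(None))         # True  (follows preload_content)
print(resolve_release_conn(False))        # False (explicit value wins)
print(resolve_release_conn(None, False))  # False
```

In the frames above, the test suite's adapter passed `preload_content=False` and `release_conn` resolved to `False` accordingly, which matches the `release_conn = False` shown in the local-variables dump.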
18:16:19 response_conn = conn if not release_conn else None 18:16:19 18:16:19 # Make the request on the HTTPConnection object 18:16:19 > response = self._make_request( 18:16:19 conn, 18:16:19 method, 18:16:19 url, 18:16:19 timeout=timeout_obj, 18:16:19 body=body, 18:16:19 headers=headers, 18:16:19 chunked=chunked, 18:16:19 retries=retries, 18:16:19 response_conn=response_conn, 18:16:19 preload_content=preload_content, 18:16:19 decode_content=decode_content, 18:16:19 **response_kw, 18:16:19 ) 18:16:19 18:16:19 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connectionpool.py:789: 18:16:19 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 18:16:19 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connectionpool.py:495: in _make_request 18:16:19 conn.request( 18:16:19 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:441: in request 18:16:19 self.endheaders() 18:16:19 /opt/pyenv/versions/3.11.7/lib/python3.11/http/client.py:1289: in endheaders 18:16:19 self._send_output(message_body, encode_chunked=encode_chunked) 18:16:19 /opt/pyenv/versions/3.11.7/lib/python3.11/http/client.py:1048: in _send_output 18:16:19 self.send(msg) 18:16:19 /opt/pyenv/versions/3.11.7/lib/python3.11/http/client.py:986: in send 18:16:19 self.connect() 18:16:19 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:279: in connect 18:16:19 self.sock = self._new_conn() 18:16:19 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 18:16:19 18:16:19 self = 18:16:19 18:16:19 def _new_conn(self) -> socket.socket: 18:16:19 """Establish a socket connection and set nodelay settings on it. 18:16:19 18:16:19 :return: New socket connection. 
18:16:19 """ 18:16:19 try: 18:16:19 sock = connection.create_connection( 18:16:19 (self._dns_host, self.port), 18:16:19 self.timeout, 18:16:19 source_address=self.source_address, 18:16:19 socket_options=self.socket_options, 18:16:19 ) 18:16:19 except socket.gaierror as e: 18:16:19 raise NameResolutionError(self.host, self, e) from e 18:16:19 except SocketTimeout as e: 18:16:19 raise ConnectTimeoutError( 18:16:19 self, 18:16:19 f"Connection to {self.host} timed out. (connect timeout={self.timeout})", 18:16:19 ) from e 18:16:19 18:16:19 except OSError as e: 18:16:19 > raise NewConnectionError( 18:16:19 self, f"Failed to establish a new connection: {e}" 18:16:19 ) from e 18:16:19 E urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused 18:16:19 18:16:19 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:214: NewConnectionError 18:16:19 18:16:19 The above exception was the direct cause of the following exception: 18:16:19 18:16:19 self = 18:16:19 request = , stream = False 18:16:19 timeout = Timeout(connect=10, read=10, total=None), verify = True, cert = None 18:16:19 proxies = OrderedDict() 18:16:19 18:16:19 def send( 18:16:19 self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 18:16:19 ): 18:16:19 """Sends PreparedRequest object. Returns Response object. 18:16:19 18:16:19 :param request: The :class:`PreparedRequest ` being sent. 18:16:19 :param stream: (optional) Whether to stream the request content. 18:16:19 :param timeout: (optional) How long to wait for the server to send 18:16:19 data before giving up, as a float, or a :ref:`(connect timeout, 18:16:19 read timeout) ` tuple. 
18:16:19 :type timeout: float or tuple or urllib3 Timeout object 18:16:19 :param verify: (optional) Either a boolean, in which case it controls whether 18:16:19 we verify the server's TLS certificate, or a string, in which case it 18:16:19 must be a path to a CA bundle to use 18:16:19 :param cert: (optional) Any user-provided SSL certificate to be trusted. 18:16:19 :param proxies: (optional) The proxies dictionary to apply to the request. 18:16:19 :rtype: requests.Response 18:16:19 """ 18:16:19 18:16:19 try: 18:16:19 conn = self.get_connection_with_tls_context( 18:16:19 request, verify, proxies=proxies, cert=cert 18:16:19 ) 18:16:19 except LocationValueError as e: 18:16:19 raise InvalidURL(e, request=request) 18:16:19 18:16:19 self.cert_verify(conn, request.url, verify, cert) 18:16:19 url = self.request_url(request, proxies) 18:16:19 self.add_headers( 18:16:19 request, 18:16:19 stream=stream, 18:16:19 timeout=timeout, 18:16:19 verify=verify, 18:16:19 cert=cert, 18:16:19 proxies=proxies, 18:16:19 ) 18:16:19 18:16:19 chunked = not (request.body is None or "Content-Length" in request.headers) 18:16:19 18:16:19 if isinstance(timeout, tuple): 18:16:19 try: 18:16:19 connect, read = timeout 18:16:19 timeout = TimeoutSauce(connect=connect, read=read) 18:16:19 except ValueError: 18:16:19 raise ValueError( 18:16:19 f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 18:16:19 f"or a single float to set both timeouts to the same value." 
18:16:19 ) 18:16:19 elif isinstance(timeout, TimeoutSauce): 18:16:19 pass 18:16:19 else: 18:16:19 timeout = TimeoutSauce(connect=timeout, read=timeout) 18:16:19 18:16:19 try: 18:16:19 > resp = conn.urlopen( 18:16:19 method=request.method, 18:16:19 url=url, 18:16:19 body=request.body, 18:16:19 headers=request.headers, 18:16:19 redirect=False, 18:16:19 assert_same_host=False, 18:16:19 preload_content=False, 18:16:19 decode_content=False, 18:16:19 retries=self.max_retries, 18:16:19 timeout=timeout, 18:16:19 chunked=chunked, 18:16:19 ) 18:16:19 18:16:19 ../.tox/tests121/lib/python3.11/site-packages/requests/adapters.py:667: 18:16:19 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 18:16:19 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connectionpool.py:843: in urlopen 18:16:19 retries = retries.increment( 18:16:19 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 18:16:19 18:16:19 self = Retry(total=0, connect=None, read=False, redirect=None, status=None) 18:16:19 method = 'GET' 18:16:19 url = '/rests/data/ietf-network:networks/network=openroadm-topology?content=config' 18:16:19 response = None 18:16:19 error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused') 18:16:19 _pool = 18:16:19 _stacktrace = 18:16:19 18:16:19 def increment( 18:16:19 self, 18:16:19 method: str | None = None, 18:16:19 url: str | None = None, 18:16:19 response: BaseHTTPResponse | None = None, 18:16:19 error: Exception | None = None, 18:16:19 _pool: ConnectionPool | None = None, 18:16:19 _stacktrace: TracebackType | None = None, 18:16:19 ) -> Self: 18:16:19 """Return a new Retry object with incremented retry counters. 18:16:19 18:16:19 :param response: A response object, or None, if the server did not 18:16:19 return a response. 
18:16:19 :type response: :class:`~urllib3.response.BaseHTTPResponse` 18:16:19 :param Exception error: An error encountered during the request, or 18:16:19 None if the response was received successfully. 18:16:19 18:16:19 :return: A new ``Retry`` object. 18:16:19 """ 18:16:19 if self.total is False and error: 18:16:19 # Disabled, indicate to re-raise the error. 18:16:19 raise reraise(type(error), error, _stacktrace) 18:16:19 18:16:19 total = self.total 18:16:19 if total is not None: 18:16:19 total -= 1 18:16:19 18:16:19 connect = self.connect 18:16:19 read = self.read 18:16:19 redirect = self.redirect 18:16:19 status_count = self.status 18:16:19 other = self.other 18:16:19 cause = "unknown" 18:16:19 status = None 18:16:19 redirect_location = None 18:16:19 18:16:19 if error and self._is_connection_error(error): 18:16:19 # Connect retry? 18:16:19 if connect is False: 18:16:19 raise reraise(type(error), error, _stacktrace) 18:16:19 elif connect is not None: 18:16:19 connect -= 1 18:16:19 18:16:19 elif error and self._is_read_error(error): 18:16:19 # Read retry? 18:16:19 if read is False or method is None or not self._is_method_retryable(method): 18:16:19 raise reraise(type(error), error, _stacktrace) 18:16:19 elif read is not None: 18:16:19 read -= 1 18:16:19 18:16:19 elif error: 18:16:19 # Other retry? 18:16:19 if other is not None: 18:16:19 other -= 1 18:16:19 18:16:19 elif response and response.get_redirect_location(): 18:16:19 # Redirect retry? 
18:16:19 if redirect is not None: 18:16:19 redirect -= 1 18:16:19 cause = "too many redirects" 18:16:19 response_redirect_location = response.get_redirect_location() 18:16:19 if response_redirect_location: 18:16:19 redirect_location = response_redirect_location 18:16:19 status = response.status 18:16:19 18:16:19 else: 18:16:19 # Incrementing because of a server error like a 500 in 18:16:19 # status_forcelist and the given method is in the allowed_methods 18:16:19 cause = ResponseError.GENERIC_ERROR 18:16:19 if response and response.status: 18:16:19 if status_count is not None: 18:16:19 status_count -= 1 18:16:19 cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status) 18:16:19 status = response.status 18:16:19 18:16:19 history = self.history + ( 18:16:19 RequestHistory(method, url, error, status, redirect_location), 18:16:19 ) 18:16:19 18:16:19 new_retry = self.new( 18:16:19 total=total, 18:16:19 connect=connect, 18:16:19 read=read, 18:16:19 redirect=redirect, 18:16:19 status=status_count, 18:16:19 other=other, 18:16:19 history=history, 18:16:19 ) 18:16:19 18:16:19 if new_retry.is_exhausted(): 18:16:19 reason = error or ResponseError(cause) 18:16:19 > raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type] 18:16:19 E urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=8182): Max retries exceeded with url: /rests/data/ietf-network:networks/network=openroadm-topology?content=config (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 18:16:19 18:16:19 ../.tox/tests121/lib/python3.11/site-packages/urllib3/util/retry.py:519: MaxRetryError 18:16:19 18:16:19 During handling of the above exception, another exception occurred: 18:16:19 18:16:19 self = 18:16:19 18:16:19 def test_28_verifyOppositeLinkTopology(self): 18:16:19 > response = test_utils.get_ietf_network_request('openroadm-topology', 'config') 18:16:19 18:16:19 
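The `Retry(total=0, ...)` object shown in the traceback exhausts its budget on the very first connection error, which is why `increment` raises `MaxRetryError` immediately instead of retrying. A minimal sketch of that decrement-and-check behaviour (a toy model for illustration, not urllib3's real `Retry` API):

```python
class MaxRetryError(Exception):
    """Stand-in for urllib3.exceptions.MaxRetryError."""


class MiniRetry:
    """Toy model of the budget logic in Retry.increment above."""

    def __init__(self, total):
        self.total = total

    def is_exhausted(self):
        return self.total < 0

    def increment(self, error=None):
        # Each failure yields a new object with one fewer retry left,
        # mirroring `total -= 1` followed by `new_retry.is_exhausted()`.
        new_retry = MiniRetry(self.total - 1)
        if new_retry.is_exhausted():
            raise MaxRetryError(f"Max retries exceeded (caused by {error!r})")
        return new_retry


# total=0 means no retries remain, so the first error is fatal --
# matching the Retry(total=0, ...) printed in this log.
retry = MiniRetry(total=0)
try:
    retry.increment(error="Connection refused")
    exhausted = False
except MaxRetryError:
    exhausted = True
```

With `total=1` the first failure would instead return a fresh object with `total == 0`, and only the second failure would raise.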
transportpce_tests/1.2.1/test03_topology.py:578: 18:16:19 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 18:16:19 transportpce_tests/common/test_utils.py:495: in get_ietf_network_request 18:16:19 response = get_request(url[RESTCONF_VERSION].format(*format_args)) 18:16:19 transportpce_tests/common/test_utils.py:116: in get_request 18:16:19 return requests.request( 18:16:19 ../.tox/tests121/lib/python3.11/site-packages/requests/api.py:59: in request 18:16:19 return session.request(method=method, url=url, **kwargs) 18:16:19 ../.tox/tests121/lib/python3.11/site-packages/requests/sessions.py:589: in request 18:16:19 resp = self.send(prep, **send_kwargs) 18:16:19 ../.tox/tests121/lib/python3.11/site-packages/requests/sessions.py:703: in send 18:16:19 r = adapter.send(request, **kwargs) 18:16:19 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 18:16:19 18:16:19 self = 18:16:19 request = , stream = False 18:16:19 timeout = Timeout(connect=10, read=10, total=None), verify = True, cert = None 18:16:19 proxies = OrderedDict() 18:16:19 18:16:19 def send( 18:16:19 self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 18:16:19 ): 18:16:19 """Sends PreparedRequest object. Returns Response object. 18:16:19 18:16:19 :param request: The :class:`PreparedRequest ` being sent. 18:16:19 :param stream: (optional) Whether to stream the request content. 18:16:19 :param timeout: (optional) How long to wait for the server to send 18:16:19 data before giving up, as a float, or a :ref:`(connect timeout, 18:16:19 read timeout) ` tuple. 18:16:19 :type timeout: float or tuple or urllib3 Timeout object 18:16:19 :param verify: (optional) Either a boolean, in which case it controls whether 18:16:19 we verify the server's TLS certificate, or a string, in which case it 18:16:19 must be a path to a CA bundle to use 18:16:19 :param cert: (optional) Any user-provided SSL certificate to be trusted. 
18:16:19 :param proxies: (optional) The proxies dictionary to apply to the request. 18:16:19 :rtype: requests.Response 18:16:19 """ 18:16:19 18:16:19 try: 18:16:19 conn = self.get_connection_with_tls_context( 18:16:19 request, verify, proxies=proxies, cert=cert 18:16:19 ) 18:16:19 except LocationValueError as e: 18:16:19 raise InvalidURL(e, request=request) 18:16:19 18:16:19 self.cert_verify(conn, request.url, verify, cert) 18:16:19 url = self.request_url(request, proxies) 18:16:19 self.add_headers( 18:16:19 request, 18:16:19 stream=stream, 18:16:19 timeout=timeout, 18:16:19 verify=verify, 18:16:19 cert=cert, 18:16:19 proxies=proxies, 18:16:19 ) 18:16:19 18:16:19 chunked = not (request.body is None or "Content-Length" in request.headers) 18:16:19 18:16:19 if isinstance(timeout, tuple): 18:16:19 try: 18:16:19 connect, read = timeout 18:16:19 timeout = TimeoutSauce(connect=connect, read=read) 18:16:19 except ValueError: 18:16:19 raise ValueError( 18:16:19 f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 18:16:19 f"or a single float to set both timeouts to the same value." 
18:16:19 ) 18:16:19 elif isinstance(timeout, TimeoutSauce): 18:16:19 pass 18:16:19 else: 18:16:19 timeout = TimeoutSauce(connect=timeout, read=timeout) 18:16:19 18:16:19 try: 18:16:19 resp = conn.urlopen( 18:16:19 method=request.method, 18:16:19 url=url, 18:16:19 body=request.body, 18:16:19 headers=request.headers, 18:16:19 redirect=False, 18:16:19 assert_same_host=False, 18:16:19 preload_content=False, 18:16:19 decode_content=False, 18:16:19 retries=self.max_retries, 18:16:19 timeout=timeout, 18:16:19 chunked=chunked, 18:16:19 ) 18:16:19 18:16:19 except (ProtocolError, OSError) as err: 18:16:19 raise ConnectionError(err, request=request) 18:16:19 18:16:19 except MaxRetryError as e: 18:16:19 if isinstance(e.reason, ConnectTimeoutError): 18:16:19 # TODO: Remove this in 3.0.0: see #2811 18:16:19 if not isinstance(e.reason, NewConnectionError): 18:16:19 raise ConnectTimeout(e, request=request) 18:16:19 18:16:19 if isinstance(e.reason, ResponseError): 18:16:19 raise RetryError(e, request=request) 18:16:19 18:16:19 if isinstance(e.reason, _ProxyError): 18:16:19 raise ProxyError(e, request=request) 18:16:19 18:16:19 if isinstance(e.reason, _SSLError): 18:16:19 # This branch is for urllib3 v1.22 and later. 18:16:19 raise SSLError(e, request=request) 18:16:19 18:16:19 > raise ConnectionError(e, request=request) 18:16:19 E requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=8182): Max retries exceeded with url: /rests/data/ietf-network:networks/network=openroadm-topology?content=config (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 18:16:19 18:16:19 ../.tox/tests121/lib/python3.11/site-packages/requests/adapters.py:700: ConnectionError 18:16:19 __ TransportPCETopologyTesting.test_29_getLinkOmsAttributesOpenRoadmTopology ___ 18:16:19 18:16:19 self = 18:16:19 18:16:19 def _new_conn(self) -> socket.socket: 18:16:19 """Establish a socket connection and set nodelay settings on it. 
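At the bottom of `HTTPAdapter.send`, requests inspects `MaxRetryError.reason` to decide which of its own exceptions to surface; in this log the reason is a `NewConnectionError`, so every branch is skipped and the generic `requests.exceptions.ConnectionError` is raised. A rough sketch of that dispatch using stand-in classes (the real ones live in `urllib3.exceptions` and `requests.exceptions`; note that `NewConnectionError` subclasses `ConnectTimeoutError` in urllib3, which is what the `TODO ... #2811` guard above works around):

```python
# Stand-in reason classes modelled on urllib3.exceptions.
class ConnectTimeoutError(Exception): ...
class NewConnectionError(ConnectTimeoutError): ...
class ResponseError(Exception): ...
class _ProxyError(Exception): ...
class _SSLError(Exception): ...


def classify_reason(reason):
    """Mirror the branch order in adapter.send above, returning the name
    of the requests-level exception that would be raised."""
    if isinstance(reason, ConnectTimeoutError) and not isinstance(
        reason, NewConnectionError
    ):
        return "ConnectTimeout"
    if isinstance(reason, ResponseError):
        return "RetryError"
    if isinstance(reason, _ProxyError):
        return "ProxyError"
    if isinstance(reason, _SSLError):
        return "SSLError"
    # Anything else -- including NewConnectionError, as in this log --
    # falls through to requests.exceptions.ConnectionError.
    return "ConnectionError"
```

`classify_reason(NewConnectionError("refused"))` takes the fall-through branch, reproducing the `ConnectionError` that ends each failed test here.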
18:16:19 18:16:19 :return: New socket connection. 18:16:19 """ 18:16:19 try: 18:16:19 > sock = connection.create_connection( 18:16:19 (self._dns_host, self.port), 18:16:19 self.timeout, 18:16:19 source_address=self.source_address, 18:16:19 socket_options=self.socket_options, 18:16:19 ) 18:16:19 18:16:19 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:199: 18:16:19 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 18:16:19 ../.tox/tests121/lib/python3.11/site-packages/urllib3/util/connection.py:85: in create_connection 18:16:19 raise err 18:16:19 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 18:16:19 18:16:19 address = ('localhost', 8182), timeout = 10, source_address = None 18:16:19 socket_options = [(6, 1, 1)] 18:16:19 18:16:19 def create_connection( 18:16:19 address: tuple[str, int], 18:16:19 timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 18:16:19 source_address: tuple[str, int] | None = None, 18:16:19 socket_options: _TYPE_SOCKET_OPTIONS | None = None, 18:16:19 ) -> socket.socket: 18:16:19 """Connect to *address* and return the socket object. 18:16:19 18:16:19 Convenience function. Connect to *address* (a 2-tuple ``(host, 18:16:19 port)``) and return the socket object. Passing the optional 18:16:19 *timeout* parameter will set the timeout on the socket instance 18:16:19 before attempting to connect. If no *timeout* is supplied, the 18:16:19 global default timeout setting returned by :func:`socket.getdefaulttimeout` 18:16:19 is used. If *source_address* is set it must be a tuple of (host, port) 18:16:19 for the socket to bind as a source address before making the connection. 18:16:19 An host of '' or port 0 tells the OS to use the default. 
18:16:19 """ 18:16:19 18:16:19 host, port = address 18:16:19 if host.startswith("["): 18:16:19 host = host.strip("[]") 18:16:19 err = None 18:16:19 18:16:19 # Using the value from allowed_gai_family() in the context of getaddrinfo lets 18:16:19 # us select whether to work with IPv4 DNS records, IPv6 records, or both. 18:16:19 # The original create_connection function always returns all records. 18:16:19 family = allowed_gai_family() 18:16:19 18:16:19 try: 18:16:19 host.encode("idna") 18:16:19 except UnicodeError: 18:16:19 raise LocationParseError(f"'{host}', label empty or too long") from None 18:16:19 18:16:19 for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM): 18:16:19 af, socktype, proto, canonname, sa = res 18:16:19 sock = None 18:16:19 try: 18:16:19 sock = socket.socket(af, socktype, proto) 18:16:19 18:16:19 # If provided, set socket level options before connecting. 18:16:19 _set_socket_options(sock, socket_options) 18:16:19 18:16:19 if timeout is not _DEFAULT_TIMEOUT: 18:16:19 sock.settimeout(timeout) 18:16:19 if source_address: 18:16:19 sock.bind(source_address) 18:16:19 > sock.connect(sa) 18:16:19 E ConnectionRefusedError: [Errno 111] Connection refused 18:16:19 18:16:19 ../.tox/tests121/lib/python3.11/site-packages/urllib3/util/connection.py:73: ConnectionRefusedError 18:16:19 18:16:19 The above exception was the direct cause of the following exception: 18:16:19 18:16:19 self = 18:16:19 method = 'GET' 18:16:19 url = '/rests/data/ietf-network:networks/network=openroadm-topology?content=config' 18:16:19 body = None 18:16:19 headers = {'User-Agent': 'python-requests/2.32.3', 'Accept-Encoding': 'gzip, deflate', 'Accept': 'application/json', 'Connection': 'keep-alive', 'Content-Type': 'application/json', 'Authorization': 'Basic YWRtaW46YWRtaW4='} 18:16:19 retries = Retry(total=0, connect=None, read=False, redirect=None, status=None) 18:16:19 redirect = False, assert_same_host = False 18:16:19 timeout = Timeout(connect=10, read=10, 
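The root cause at the bottom of the chain is `[Errno 111] Connection refused`: nothing is listening on localhost:8182 (the RESTCONF endpoint), so `sock.connect(sa)` fails. The same error is reproducible with only the standard library; the port-probing trick below assumes no other process grabs the freed port between `close()` and `create_connection()`:

```python
import errno
import socket

# Ask the OS for a free port, then close the listener so nothing is
# accepting on it -- just like the absent controller on port 8182.
probe = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
probe.bind(("127.0.0.1", 0))
port = probe.getsockname()[1]
probe.close()

try:
    # socket.create_connection is the stdlib helper that urllib3's
    # create_connection (shown above) wraps with its own option handling.
    socket.create_connection(("127.0.0.1", port), timeout=2)
    refused_errno = None
except OSError as exc:
    refused_errno = exc.errno

# On Linux, refused_errno == errno.ECONNREFUSED == 111,
# the same errno reported throughout this log.
```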
total=None), pool_timeout = None 18:16:19 release_conn = False, chunked = False, body_pos = None, preload_content = False 18:16:19 decode_content = False, response_kw = {} 18:16:19 parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/rests/data/ietf-network:networks/network=openroadm-topology', query='content=config', fragment=None) 18:16:19 destination_scheme = None, conn = None, release_this_conn = True 18:16:19 http_tunnel_required = False, err = None, clean_exit = False 18:16:19 18:16:19 def urlopen( # type: ignore[override] 18:16:19 self, 18:16:19 method: str, 18:16:19 url: str, 18:16:19 body: _TYPE_BODY | None = None, 18:16:19 headers: typing.Mapping[str, str] | None = None, 18:16:19 retries: Retry | bool | int | None = None, 18:16:19 redirect: bool = True, 18:16:19 assert_same_host: bool = True, 18:16:19 timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 18:16:19 pool_timeout: int | None = None, 18:16:19 release_conn: bool | None = None, 18:16:19 chunked: bool = False, 18:16:19 body_pos: _TYPE_BODY_POSITION | None = None, 18:16:19 preload_content: bool = True, 18:16:19 decode_content: bool = True, 18:16:19 **response_kw: typing.Any, 18:16:19 ) -> BaseHTTPResponse: 18:16:19 """ 18:16:19 Get a connection from the pool and perform an HTTP request. This is the 18:16:19 lowest level call for making a request, so you'll need to specify all 18:16:19 the raw details. 18:16:19 18:16:19 .. note:: 18:16:19 18:16:19 More commonly, it's appropriate to use a convenience method 18:16:19 such as :meth:`request`. 18:16:19 18:16:19 .. note:: 18:16:19 18:16:19 `release_conn` will only behave as expected if 18:16:19 `preload_content=False` because we want to make 18:16:19 `preload_content=False` the default behaviour someday soon without 18:16:19 breaking backwards compatibility. 18:16:19 18:16:19 :param method: 18:16:19 HTTP request method (such as GET, POST, PUT, etc.) 18:16:19 18:16:19 :param url: 18:16:19 The URL to perform the request on. 
18:16:19 18:16:19 :param body: 18:16:19 Data to send in the request body, either :class:`str`, :class:`bytes`, 18:16:19 an iterable of :class:`str`/:class:`bytes`, or a file-like object. 18:16:19 18:16:19 :param headers: 18:16:19 Dictionary of custom headers to send, such as User-Agent, 18:16:19 If-None-Match, etc. If None, pool headers are used. If provided, 18:16:19 these headers completely replace any pool-specific headers. 18:16:19 18:16:19 :param retries: 18:16:19 Configure the number of retries to allow before raising a 18:16:19 :class:`~urllib3.exceptions.MaxRetryError` exception. 18:16:19 18:16:19 If ``None`` (default) will retry 3 times, see ``Retry.DEFAULT``. Pass a 18:16:19 :class:`~urllib3.util.retry.Retry` object for fine-grained control 18:16:19 over different types of retries. 18:16:19 Pass an integer number to retry connection errors that many times, 18:16:19 but no other types of errors. Pass zero to never retry. 18:16:19 18:16:19 If ``False``, then retries are disabled and any exception is raised 18:16:19 immediately. Also, instead of raising a MaxRetryError on redirects, 18:16:19 the redirect response will be returned. 18:16:19 18:16:19 :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. 18:16:19 18:16:19 :param redirect: 18:16:19 If True, automatically handle redirects (status codes 301, 302, 18:16:19 303, 307, 308). Each redirect counts as a retry. Disabling retries 18:16:19 will disable redirect, too. 18:16:19 18:16:19 :param assert_same_host: 18:16:19 If ``True``, will make sure that the host of the pool requests is 18:16:19 consistent else will raise HostChangedError. When ``False``, you can 18:16:19 use the pool on an HTTP proxy and request foreign hosts. 18:16:19 18:16:19 :param timeout: 18:16:19 If specified, overrides the default timeout for this one 18:16:19 request. It may be a float (in seconds) or an instance of 18:16:19 :class:`urllib3.util.Timeout`. 
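The `retries` docstring above accepts three shapes of value. A toy reading of those semantics as a plain retry count (an illustration of the documented behaviour, not urllib3's actual `Retry.from_int`; the `False` case additionally returns redirect responses instead of raising, which is omitted here):

```python
def retry_budget(retries, default=3):
    """Map the documented None/False/int values of `retries`
    to a plain retry count."""
    if retries is None:
        return default   # library default: retry 3 times
    if retries is False:
        return 0         # disabled: re-raise the first error immediately
    return int(retries)  # retry connection errors this many times
```

So the `Retry(total=0, ...)` used by these tests behaves like `retry_budget(0)`: zero retries, fail on the first refused connection.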
18:16:19 18:16:19 :param pool_timeout: 18:16:19 If set and the pool is set to block=True, then this method will 18:16:19 block for ``pool_timeout`` seconds and raise EmptyPoolError if no 18:16:19 connection is available within the time period. 18:16:19 18:16:19 :param bool preload_content: 18:16:19 If True, the response's body will be preloaded into memory. 18:16:19 18:16:19 :param bool decode_content: 18:16:19 If True, will attempt to decode the body based on the 18:16:19 'content-encoding' header. 18:16:19 18:16:19 :param release_conn: 18:16:19 If False, then the urlopen call will not release the connection 18:16:19 back into the pool once a response is received (but will release if 18:16:19 you read the entire contents of the response such as when 18:16:19 `preload_content=True`). This is useful if you're not preloading 18:16:19 the response's content immediately. You will need to call 18:16:19 ``r.release_conn()`` on the response ``r`` to return the connection 18:16:19 back into the pool. If None, it takes the value of ``preload_content`` 18:16:19 which defaults to ``True``. 18:16:19 18:16:19 :param bool chunked: 18:16:19 If True, urllib3 will send the body using chunked transfer 18:16:19 encoding. Otherwise, urllib3 will send the body using the standard 18:16:19 content-length form. Defaults to False. 18:16:19 18:16:19 :param int body_pos: 18:16:19 Position to seek to in file-like body in the event of a retry or 18:16:19 redirect. Typically this won't need to be set because urllib3 will 18:16:19 auto-populate the value when needed. 
18:16:19 """ 18:16:19 parsed_url = parse_url(url) 18:16:19 destination_scheme = parsed_url.scheme 18:16:19 18:16:19 if headers is None: 18:16:19 headers = self.headers 18:16:19 18:16:19 if not isinstance(retries, Retry): 18:16:19 retries = Retry.from_int(retries, redirect=redirect, default=self.retries) 18:16:19 18:16:19 if release_conn is None: 18:16:19 release_conn = preload_content 18:16:19 18:16:19 # Check host 18:16:19 if assert_same_host and not self.is_same_host(url): 18:16:19 raise HostChangedError(self, url, retries) 18:16:19 18:16:19 # Ensure that the URL we're connecting to is properly encoded 18:16:19 if url.startswith("/"): 18:16:19 url = to_str(_encode_target(url)) 18:16:19 else: 18:16:19 url = to_str(parsed_url.url) 18:16:19 18:16:19 conn = None 18:16:19 18:16:19 # Track whether `conn` needs to be released before 18:16:19 # returning/raising/recursing. Update this variable if necessary, and 18:16:19 # leave `release_conn` constant throughout the function. That way, if 18:16:19 # the function recurses, the original value of `release_conn` will be 18:16:19 # passed down into the recursive call, and its value will be respected. 18:16:19 # 18:16:19 # See issue #651 [1] for details. 18:16:19 # 18:16:19 # [1] 18:16:19 release_this_conn = release_conn 18:16:19 18:16:19 http_tunnel_required = connection_requires_http_tunnel( 18:16:19 self.proxy, self.proxy_config, destination_scheme 18:16:19 ) 18:16:19 18:16:19 # Merge the proxy headers. Only done when not using HTTP CONNECT. We 18:16:19 # have to copy the headers dict so we can safely change it without those 18:16:19 # changes being reflected in anyone else's copy. 18:16:19 if not http_tunnel_required: 18:16:19 headers = headers.copy() # type: ignore[attr-defined] 18:16:19 headers.update(self.proxy_headers) # type: ignore[union-attr] 18:16:19 18:16:19 # Must keep the exception bound to a separate variable or else Python 3 18:16:19 # complains about UnboundLocalError. 
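Before dispatching, `urlopen` re-encodes the target path (`_encode_target`) so the wire format is valid; the RESTCONF path in this log keeps its `:` and `=` characters intact. A stdlib illustration of the idea with `urllib.parse.quote` (urllib3's real `_encode_target` uses its own safe-character set, so this is only an analogy):

```python
from urllib.parse import quote

path = "/rests/data/ietf-network:networks/network=openroadm-topology"

# ':' and '=' are allowed through, so the RESTCONF path
# survives percent-encoding unchanged...
assert quote(path, safe="/:=") == path

# ...while characters invalid in a URL path are escaped.
assert quote("/rests/data/a b", safe="/") == "/rests/data/a%20b"
```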
18:16:19 err = None 18:16:19 18:16:19 # Keep track of whether we cleanly exited the except block. This 18:16:19 # ensures we do proper cleanup in finally. 18:16:19 clean_exit = False 18:16:19 18:16:19 # Rewind body position, if needed. Record current position 18:16:19 # for future rewinds in the event of a redirect/retry. 18:16:19 body_pos = set_file_position(body, body_pos) 18:16:19 18:16:19 try: 18:16:19 # Request a connection from the queue. 18:16:19 timeout_obj = self._get_timeout(timeout) 18:16:19 conn = self._get_conn(timeout=pool_timeout) 18:16:19 18:16:19 conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] 18:16:19 18:16:19 # Is this a closed/new connection that requires CONNECT tunnelling? 18:16:19 if self.proxy is not None and http_tunnel_required and conn.is_closed: 18:16:19 try: 18:16:19 self._prepare_proxy(conn) 18:16:19 except (BaseSSLError, OSError, SocketTimeout) as e: 18:16:19 self._raise_timeout( 18:16:19 err=e, url=self.proxy.url, timeout_value=conn.timeout 18:16:19 ) 18:16:19 raise 18:16:19 18:16:19 # If we're going to release the connection in ``finally:``, then 18:16:19 # the response doesn't need to know about the connection. Otherwise 18:16:19 # it will also try to release it and we'll have a double-release 18:16:19 # mess. 
18:16:19 response_conn = conn if not release_conn else None 18:16:19 18:16:19 # Make the request on the HTTPConnection object 18:16:19 > response = self._make_request( 18:16:19 conn, 18:16:19 method, 18:16:19 url, 18:16:19 timeout=timeout_obj, 18:16:19 body=body, 18:16:19 headers=headers, 18:16:19 chunked=chunked, 18:16:19 retries=retries, 18:16:19 response_conn=response_conn, 18:16:19 preload_content=preload_content, 18:16:19 decode_content=decode_content, 18:16:19 **response_kw, 18:16:19 ) 18:16:19 18:16:19 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connectionpool.py:789: 18:16:19 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 18:16:19 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connectionpool.py:495: in _make_request 18:16:19 conn.request( 18:16:19 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:441: in request 18:16:19 self.endheaders() 18:16:19 /opt/pyenv/versions/3.11.7/lib/python3.11/http/client.py:1289: in endheaders 18:16:19 self._send_output(message_body, encode_chunked=encode_chunked) 18:16:19 /opt/pyenv/versions/3.11.7/lib/python3.11/http/client.py:1048: in _send_output 18:16:19 self.send(msg) 18:16:19 /opt/pyenv/versions/3.11.7/lib/python3.11/http/client.py:986: in send 18:16:19 self.connect() 18:16:19 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:279: in connect 18:16:19 self.sock = self._new_conn() 18:16:19 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 18:16:19 18:16:19 self = 18:16:19 18:16:19 def _new_conn(self) -> socket.socket: 18:16:19 """Establish a socket connection and set nodelay settings on it. 18:16:19 18:16:19 :return: New socket connection. 
18:16:19 """ 18:16:19 try: 18:16:19 sock = connection.create_connection( 18:16:19 (self._dns_host, self.port), 18:16:19 self.timeout, 18:16:19 source_address=self.source_address, 18:16:19 socket_options=self.socket_options, 18:16:19 ) 18:16:19 except socket.gaierror as e: 18:16:19 raise NameResolutionError(self.host, self, e) from e 18:16:19 except SocketTimeout as e: 18:16:19 raise ConnectTimeoutError( 18:16:19 self, 18:16:19 f"Connection to {self.host} timed out. (connect timeout={self.timeout})", 18:16:19 ) from e 18:16:19 18:16:19 except OSError as e: 18:16:19 > raise NewConnectionError( 18:16:19 self, f"Failed to establish a new connection: {e}" 18:16:19 ) from e 18:16:19 E urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused 18:16:19 18:16:19 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:214: NewConnectionError 18:16:19 18:16:19 The above exception was the direct cause of the following exception: 18:16:19 18:16:19 self = 18:16:19 request = , stream = False 18:16:19 timeout = Timeout(connect=10, read=10, total=None), verify = True, cert = None 18:16:19 proxies = OrderedDict() 18:16:19 18:16:19 def send( 18:16:19 self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 18:16:19 ): 18:16:19 """Sends PreparedRequest object. Returns Response object. 18:16:19 18:16:19 :param request: The :class:`PreparedRequest ` being sent. 18:16:19 :param stream: (optional) Whether to stream the request content. 18:16:19 :param timeout: (optional) How long to wait for the server to send 18:16:19 data before giving up, as a float, or a :ref:`(connect timeout, 18:16:19 read timeout) ` tuple. 
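The two stitching phrases in this traceback ("The above exception was the direct cause of the following exception" versus "During handling of the above exception, another exception occurred") come from Python's exception chaining: `raise X from e`, as in the `NewConnectionError` raise above, sets `__cause__`, while raising inside an `except` block without `from` only sets `__context__`. A minimal demonstration:

```python
def explicit_chain():
    # `raise ... from e` -> "was the direct cause of"
    try:
        raise OSError(111, "Connection refused")
    except OSError as e:
        raise RuntimeError("Failed to establish a new connection") from e


def implicit_chain():
    # plain raise inside except -> "During handling of the above
    # exception, another exception occurred"
    try:
        raise OSError(111, "Connection refused")
    except OSError:
        raise RuntimeError("request failed")


try:
    explicit_chain()
except RuntimeError as e:
    cause_is_oserror = isinstance(e.__cause__, OSError)

try:
    implicit_chain()
except RuntimeError as e:
    has_cause = e.__cause__ is not None           # False: no `from`
    context_is_oserror = isinstance(e.__context__, OSError)
```

Both forms appear in this log: urllib3 chains explicitly with `from`, while the pytest frames for `get_ietf_network_request` are implicit context.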
18:16:19 :type timeout: float or tuple or urllib3 Timeout object 18:16:19 :param verify: (optional) Either a boolean, in which case it controls whether 18:16:19 we verify the server's TLS certificate, or a string, in which case it 18:16:19 must be a path to a CA bundle to use 18:16:19 :param cert: (optional) Any user-provided SSL certificate to be trusted. 18:16:19 :param proxies: (optional) The proxies dictionary to apply to the request. 18:16:19 :rtype: requests.Response 18:16:19 """ 18:16:19 18:16:19 try: 18:16:19 conn = self.get_connection_with_tls_context( 18:16:19 request, verify, proxies=proxies, cert=cert 18:16:19 ) 18:16:19 except LocationValueError as e: 18:16:19 raise InvalidURL(e, request=request) 18:16:19 18:16:19 self.cert_verify(conn, request.url, verify, cert) 18:16:19 url = self.request_url(request, proxies) 18:16:19 self.add_headers( 18:16:19 request, 18:16:19 stream=stream, 18:16:19 timeout=timeout, 18:16:19 verify=verify, 18:16:19 cert=cert, 18:16:19 proxies=proxies, 18:16:19 ) 18:16:19 18:16:19 chunked = not (request.body is None or "Content-Length" in request.headers) 18:16:19 18:16:19 if isinstance(timeout, tuple): 18:16:19 try: 18:16:19 connect, read = timeout 18:16:19 timeout = TimeoutSauce(connect=connect, read=read) 18:16:19 except ValueError: 18:16:19 raise ValueError( 18:16:19 f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 18:16:19 f"or a single float to set both timeouts to the same value." 
18:16:19 ) 18:16:19 elif isinstance(timeout, TimeoutSauce): 18:16:19 pass 18:16:19 else: 18:16:19 timeout = TimeoutSauce(connect=timeout, read=timeout) 18:16:19 18:16:19 try: 18:16:19 > resp = conn.urlopen( 18:16:19 method=request.method, 18:16:19 url=url, 18:16:19 body=request.body, 18:16:19 headers=request.headers, 18:16:19 redirect=False, 18:16:19 assert_same_host=False, 18:16:19 preload_content=False, 18:16:19 decode_content=False, 18:16:19 retries=self.max_retries, 18:16:19 timeout=timeout, 18:16:19 chunked=chunked, 18:16:19 ) 18:16:19 18:16:19 ../.tox/tests121/lib/python3.11/site-packages/requests/adapters.py:667: 18:16:19 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 18:16:19 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connectionpool.py:843: in urlopen 18:16:19 retries = retries.increment( 18:16:19 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 18:16:19 18:16:19 self = Retry(total=0, connect=None, read=False, redirect=None, status=None) 18:16:19 method = 'GET' 18:16:19 url = '/rests/data/ietf-network:networks/network=openroadm-topology?content=config' 18:16:19 response = None 18:16:19 error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused') 18:16:19 _pool = 18:16:19 _stacktrace = 18:16:19 18:16:19 def increment( 18:16:19 self, 18:16:19 method: str | None = None, 18:16:19 url: str | None = None, 18:16:19 response: BaseHTTPResponse | None = None, 18:16:19 error: Exception | None = None, 18:16:19 _pool: ConnectionPool | None = None, 18:16:19 _stacktrace: TracebackType | None = None, 18:16:19 ) -> Self: 18:16:19 """Return a new Retry object with incremented retry counters. 18:16:19 18:16:19 :param response: A response object, or None, if the server did not 18:16:19 return a response. 
18:16:19 :type response: :class:`~urllib3.response.BaseHTTPResponse` 18:16:19 :param Exception error: An error encountered during the request, or 18:16:19 None if the response was received successfully. 18:16:19 18:16:19 :return: A new ``Retry`` object. 18:16:19 """ 18:16:19 if self.total is False and error: 18:16:19 # Disabled, indicate to re-raise the error. 18:16:19 raise reraise(type(error), error, _stacktrace) 18:16:19 18:16:19 total = self.total 18:16:19 if total is not None: 18:16:19 total -= 1 18:16:19 18:16:19 connect = self.connect 18:16:19 read = self.read 18:16:19 redirect = self.redirect 18:16:19 status_count = self.status 18:16:19 other = self.other 18:16:19 cause = "unknown" 18:16:19 status = None 18:16:19 redirect_location = None 18:16:19 18:16:19 if error and self._is_connection_error(error): 18:16:19 # Connect retry? 18:16:19 if connect is False: 18:16:19 raise reraise(type(error), error, _stacktrace) 18:16:19 elif connect is not None: 18:16:19 connect -= 1 18:16:19 18:16:19 elif error and self._is_read_error(error): 18:16:19 # Read retry? 18:16:19 if read is False or method is None or not self._is_method_retryable(method): 18:16:19 raise reraise(type(error), error, _stacktrace) 18:16:19 elif read is not None: 18:16:19 read -= 1 18:16:19 18:16:19 elif error: 18:16:19 # Other retry? 18:16:19 if other is not None: 18:16:19 other -= 1 18:16:19 18:16:19 elif response and response.get_redirect_location(): 18:16:19 # Redirect retry? 
18:16:19 if redirect is not None: 18:16:19 redirect -= 1 18:16:19 cause = "too many redirects" 18:16:19 response_redirect_location = response.get_redirect_location() 18:16:19 if response_redirect_location: 18:16:19 redirect_location = response_redirect_location 18:16:19 status = response.status 18:16:19 18:16:19 else: 18:16:19 # Incrementing because of a server error like a 500 in 18:16:19 # status_forcelist and the given method is in the allowed_methods 18:16:19 cause = ResponseError.GENERIC_ERROR 18:16:19 if response and response.status: 18:16:19 if status_count is not None: 18:16:19 status_count -= 1 18:16:19 cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status) 18:16:19 status = response.status 18:16:19 18:16:19 history = self.history + ( 18:16:19 RequestHistory(method, url, error, status, redirect_location), 18:16:19 ) 18:16:19 18:16:19 new_retry = self.new( 18:16:19 total=total, 18:16:19 connect=connect, 18:16:19 read=read, 18:16:19 redirect=redirect, 18:16:19 status=status_count, 18:16:19 other=other, 18:16:19 history=history, 18:16:19 ) 18:16:19 18:16:19 if new_retry.is_exhausted(): 18:16:19 reason = error or ResponseError(cause) 18:16:19 > raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type] 18:16:19 E urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=8182): Max retries exceeded with url: /rests/data/ietf-network:networks/network=openroadm-topology?content=config (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 18:16:19 18:16:19 ../.tox/tests121/lib/python3.11/site-packages/urllib3/util/retry.py:519: MaxRetryError 18:16:19 18:16:19 During handling of the above exception, another exception occurred: 18:16:19 18:16:19 self = 18:16:19 18:16:19 def test_29_getLinkOmsAttributesOpenRoadmTopology(self): 18:16:19 > response = test_utils.get_ietf_network_request('openroadm-topology', 'config') 18:16:19 18:16:19 
transportpce_tests/1.2.1/test03_topology.py:601: 18:16:19 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 18:16:19 transportpce_tests/common/test_utils.py:495: in get_ietf_network_request 18:16:19 response = get_request(url[RESTCONF_VERSION].format(*format_args)) 18:16:19 transportpce_tests/common/test_utils.py:116: in get_request 18:16:19 return requests.request( 18:16:19 ../.tox/tests121/lib/python3.11/site-packages/requests/api.py:59: in request 18:16:19 return session.request(method=method, url=url, **kwargs) 18:16:19 ../.tox/tests121/lib/python3.11/site-packages/requests/sessions.py:589: in request 18:16:19 resp = self.send(prep, **send_kwargs) 18:16:19 ../.tox/tests121/lib/python3.11/site-packages/requests/sessions.py:703: in send 18:16:19 r = adapter.send(request, **kwargs) 18:16:19 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 18:16:19 18:16:19 self = 18:16:19 request = , stream = False 18:16:19 timeout = Timeout(connect=10, read=10, total=None), verify = True, cert = None 18:16:19 proxies = OrderedDict() 18:16:19 18:16:19 def send( 18:16:19 self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 18:16:19 ): 18:16:19 """Sends PreparedRequest object. Returns Response object. 18:16:19 18:16:19 :param request: The :class:`PreparedRequest ` being sent. 18:16:19 :param stream: (optional) Whether to stream the request content. 18:16:19 :param timeout: (optional) How long to wait for the server to send 18:16:19 data before giving up, as a float, or a :ref:`(connect timeout, 18:16:19 read timeout) ` tuple. 18:16:19 :type timeout: float or tuple or urllib3 Timeout object 18:16:19 :param verify: (optional) Either a boolean, in which case it controls whether 18:16:19 we verify the server's TLS certificate, or a string, in which case it 18:16:19 must be a path to a CA bundle to use 18:16:19 :param cert: (optional) Any user-provided SSL certificate to be trusted. 
18:16:19 :param proxies: (optional) The proxies dictionary to apply to the request. 18:16:19 :rtype: requests.Response 18:16:19 """ 18:16:19 18:16:19 try: 18:16:19 conn = self.get_connection_with_tls_context( 18:16:19 request, verify, proxies=proxies, cert=cert 18:16:19 ) 18:16:19 except LocationValueError as e: 18:16:19 raise InvalidURL(e, request=request) 18:16:19 18:16:19 self.cert_verify(conn, request.url, verify, cert) 18:16:19 url = self.request_url(request, proxies) 18:16:19 self.add_headers( 18:16:19 request, 18:16:19 stream=stream, 18:16:19 timeout=timeout, 18:16:19 verify=verify, 18:16:19 cert=cert, 18:16:19 proxies=proxies, 18:16:19 ) 18:16:19 18:16:19 chunked = not (request.body is None or "Content-Length" in request.headers) 18:16:19 18:16:19 if isinstance(timeout, tuple): 18:16:19 try: 18:16:19 connect, read = timeout 18:16:19 timeout = TimeoutSauce(connect=connect, read=read) 18:16:19 except ValueError: 18:16:19 raise ValueError( 18:16:19 f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 18:16:19 f"or a single float to set both timeouts to the same value." 
18:16:19                 )
18:16:19         elif isinstance(timeout, TimeoutSauce):
18:16:19             pass
18:16:19         else:
18:16:19             timeout = TimeoutSauce(connect=timeout, read=timeout)
18:16:19
18:16:19         try:
18:16:19             resp = conn.urlopen(
18:16:19                 method=request.method,
18:16:19                 url=url,
18:16:19                 body=request.body,
18:16:19                 headers=request.headers,
18:16:19                 redirect=False,
18:16:19                 assert_same_host=False,
18:16:19                 preload_content=False,
18:16:19                 decode_content=False,
18:16:19                 retries=self.max_retries,
18:16:19                 timeout=timeout,
18:16:19                 chunked=chunked,
18:16:19             )
18:16:19
18:16:19         except (ProtocolError, OSError) as err:
18:16:19             raise ConnectionError(err, request=request)
18:16:19
18:16:19         except MaxRetryError as e:
18:16:19             if isinstance(e.reason, ConnectTimeoutError):
18:16:19                 # TODO: Remove this in 3.0.0: see #2811
18:16:19                 if not isinstance(e.reason, NewConnectionError):
18:16:19                     raise ConnectTimeout(e, request=request)
18:16:19
18:16:19             if isinstance(e.reason, ResponseError):
18:16:19                 raise RetryError(e, request=request)
18:16:19
18:16:19             if isinstance(e.reason, _ProxyError):
18:16:19                 raise ProxyError(e, request=request)
18:16:19
18:16:19             if isinstance(e.reason, _SSLError):
18:16:19                 # This branch is for urllib3 v1.22 and later.
18:16:19                 raise SSLError(e, request=request)
18:16:19
18:16:19 >           raise ConnectionError(e, request=request)
18:16:19 E           requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=8182): Max retries exceeded with url: /rests/data/ietf-network:networks/network=openroadm-topology?content=config (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused'))
18:16:19
18:16:19 ../.tox/tests121/lib/python3.11/site-packages/requests/adapters.py:700: ConnectionError
18:16:19 ____________ TransportPCETopologyTesting.test_30_disconnect_ROADMB _____________
18:16:19
18:16:19 self =
18:16:19
18:16:19     def _new_conn(self) -> socket.socket:
18:16:19         """Establish a socket connection and set nodelay settings on it.
18:16:19
18:16:19         :return: New socket connection.
18:16:19         """
18:16:19         try:
18:16:19 >           sock = connection.create_connection(
18:16:19                 (self._dns_host, self.port),
18:16:19                 self.timeout,
18:16:19                 source_address=self.source_address,
18:16:19                 socket_options=self.socket_options,
18:16:19             )
18:16:19
18:16:19 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:199:
18:16:19 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
18:16:19 ../.tox/tests121/lib/python3.11/site-packages/urllib3/util/connection.py:85: in create_connection
18:16:19     raise err
18:16:19 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
18:16:19
18:16:19 address = ('localhost', 8182), timeout = 10, source_address = None
18:16:19 socket_options = [(6, 1, 1)]
18:16:19
18:16:19 def create_connection(
18:16:19     address: tuple[str, int],
18:16:19     timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT,
18:16:19     source_address: tuple[str, int] | None = None,
18:16:19     socket_options: _TYPE_SOCKET_OPTIONS | None = None,
18:16:19 ) -> socket.socket:
18:16:19     """Connect to *address* and return the socket object.
18:16:19
18:16:19     Convenience function. Connect to *address* (a 2-tuple ``(host,
18:16:19     port)``) and return the socket object. Passing the optional
18:16:19     *timeout* parameter will set the timeout on the socket instance
18:16:19     before attempting to connect. If no *timeout* is supplied, the
18:16:19     global default timeout setting returned by :func:`socket.getdefaulttimeout`
18:16:19     is used. If *source_address* is set it must be a tuple of (host, port)
18:16:19     for the socket to bind as a source address before making the connection.
18:16:19     An host of '' or port 0 tells the OS to use the default.
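[editor's note] The `create_connection()` helper quoted in this traceback walks every `getaddrinfo()` result, tries to connect to each candidate address, and re-raises the last error when all of them fail — which is how the `[Errno 111] Connection refused` from the dead controller on port 8182 surfaces. A minimal stdlib sketch of that try-each-address loop (the name `connect_first_resolved` is ours, not urllib3's):

```python
import socket


def connect_first_resolved(host: str, port: int, timeout: float = 10.0) -> socket.socket:
    """Try each getaddrinfo() result in turn, the way urllib3's
    create_connection() does, and re-raise the last error if every
    candidate address fails."""
    err = None
    for af, socktype, proto, _canon, sa in socket.getaddrinfo(
            host, port, socket.AF_UNSPEC, socket.SOCK_STREAM):
        sock = None
        try:
            sock = socket.socket(af, socktype, proto)
            sock.settimeout(timeout)
            sock.connect(sa)  # ConnectionRefusedError surfaces here
            return sock
        except OSError as e:
            err = e
            if sock is not None:
                sock.close()
    # All resolved addresses failed: re-raise the last error,
    # e.g. ConnectionRefusedError: [Errno 111] Connection refused
    raise err
```

Connecting to a localhost port with no listener (as the tests do once the controller has died) raises `ConnectionRefusedError` straight out of this loop.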
18:16:19 """ 18:16:19 18:16:19 host, port = address 18:16:19 if host.startswith("["): 18:16:19 host = host.strip("[]") 18:16:19 err = None 18:16:19 18:16:19 # Using the value from allowed_gai_family() in the context of getaddrinfo lets 18:16:19 # us select whether to work with IPv4 DNS records, IPv6 records, or both. 18:16:19 # The original create_connection function always returns all records. 18:16:19 family = allowed_gai_family() 18:16:19 18:16:19 try: 18:16:19 host.encode("idna") 18:16:19 except UnicodeError: 18:16:19 raise LocationParseError(f"'{host}', label empty or too long") from None 18:16:19 18:16:19 for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM): 18:16:19 af, socktype, proto, canonname, sa = res 18:16:19 sock = None 18:16:19 try: 18:16:19 sock = socket.socket(af, socktype, proto) 18:16:19 18:16:19 # If provided, set socket level options before connecting. 18:16:19 _set_socket_options(sock, socket_options) 18:16:19 18:16:19 if timeout is not _DEFAULT_TIMEOUT: 18:16:19 sock.settimeout(timeout) 18:16:19 if source_address: 18:16:19 sock.bind(source_address) 18:16:19 > sock.connect(sa) 18:16:19 E ConnectionRefusedError: [Errno 111] Connection refused 18:16:19 18:16:19 ../.tox/tests121/lib/python3.11/site-packages/urllib3/util/connection.py:73: ConnectionRefusedError 18:16:19 18:16:19 The above exception was the direct cause of the following exception: 18:16:19 18:16:19 self = 18:16:19 method = 'DELETE' 18:16:19 url = '/rests/data/network-topology:network-topology/topology=topology-netconf/node=ROADMB01' 18:16:19 body = None 18:16:19 headers = {'User-Agent': 'python-requests/2.32.3', 'Accept-Encoding': 'gzip, deflate', 'Accept': 'application/json', 'Connection': 'keep-alive', 'Content-Type': 'application/json', 'Content-Length': '0', 'Authorization': 'Basic YWRtaW46YWRtaW4='} 18:16:19 retries = Retry(total=0, connect=None, read=False, redirect=None, status=None) 18:16:19 redirect = False, assert_same_host = False 18:16:19 timeout = 
Timeout(connect=10, read=10, total=None), pool_timeout = None
18:16:19 release_conn = False, chunked = False, body_pos = None, preload_content = False
18:16:19 decode_content = False, response_kw = {}
18:16:19 parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/rests/data/network-topology:network-topology/topology=topology-netconf/node=ROADMB01', query=None, fragment=None)
18:16:19 destination_scheme = None, conn = None, release_this_conn = True
18:16:19 http_tunnel_required = False, err = None, clean_exit = False
18:16:19
18:16:19     def urlopen(  # type: ignore[override]
18:16:19         self,
18:16:19         method: str,
18:16:19         url: str,
18:16:19         body: _TYPE_BODY | None = None,
18:16:19         headers: typing.Mapping[str, str] | None = None,
18:16:19         retries: Retry | bool | int | None = None,
18:16:19         redirect: bool = True,
18:16:19         assert_same_host: bool = True,
18:16:19         timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT,
18:16:19         pool_timeout: int | None = None,
18:16:19         release_conn: bool | None = None,
18:16:19         chunked: bool = False,
18:16:19         body_pos: _TYPE_BODY_POSITION | None = None,
18:16:19         preload_content: bool = True,
18:16:19         decode_content: bool = True,
18:16:19         **response_kw: typing.Any,
18:16:19     ) -> BaseHTTPResponse:
18:16:19         """
18:16:19         Get a connection from the pool and perform an HTTP request. This is the
18:16:19         lowest level call for making a request, so you'll need to specify all
18:16:19         the raw details.
18:16:19
18:16:19         .. note::
18:16:19
18:16:19             More commonly, it's appropriate to use a convenience method
18:16:19             such as :meth:`request`.
18:16:19
18:16:19         .. note::
18:16:19
18:16:19             `release_conn` will only behave as expected if
18:16:19             `preload_content=False` because we want to make
18:16:19             `preload_content=False` the default behaviour someday soon without
18:16:19             breaking backwards compatibility.
18:16:19
18:16:19         :param method:
18:16:19             HTTP request method (such as GET, POST, PUT, etc.)
18:16:19
18:16:19         :param url:
18:16:19             The URL to perform the request on.
18:16:19
18:16:19         :param body:
18:16:19             Data to send in the request body, either :class:`str`, :class:`bytes`,
18:16:19             an iterable of :class:`str`/:class:`bytes`, or a file-like object.
18:16:19
18:16:19         :param headers:
18:16:19             Dictionary of custom headers to send, such as User-Agent,
18:16:19             If-None-Match, etc. If None, pool headers are used. If provided,
18:16:19             these headers completely replace any pool-specific headers.
18:16:19
18:16:19         :param retries:
18:16:19             Configure the number of retries to allow before raising a
18:16:19             :class:`~urllib3.exceptions.MaxRetryError` exception.
18:16:19
18:16:19             If ``None`` (default) will retry 3 times, see ``Retry.DEFAULT``. Pass a
18:16:19             :class:`~urllib3.util.retry.Retry` object for fine-grained control
18:16:19             over different types of retries.
18:16:19             Pass an integer number to retry connection errors that many times,
18:16:19             but no other types of errors. Pass zero to never retry.
18:16:19
18:16:19             If ``False``, then retries are disabled and any exception is raised
18:16:19             immediately. Also, instead of raising a MaxRetryError on redirects,
18:16:19             the redirect response will be returned.
18:16:19
18:16:19         :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int.
18:16:19
18:16:19         :param redirect:
18:16:19             If True, automatically handle redirects (status codes 301, 302,
18:16:19             303, 307, 308). Each redirect counts as a retry. Disabling retries
18:16:19             will disable redirect, too.
18:16:19
18:16:19         :param assert_same_host:
18:16:19             If ``True``, will make sure that the host of the pool requests is
18:16:19             consistent else will raise HostChangedError. When ``False``, you can
18:16:19             use the pool on an HTTP proxy and request foreign hosts.
18:16:19
18:16:19         :param timeout:
18:16:19             If specified, overrides the default timeout for this one
18:16:19             request.
It may be a float (in seconds) or an instance of
18:16:19             :class:`urllib3.util.Timeout`.
18:16:19
18:16:19         :param pool_timeout:
18:16:19             If set and the pool is set to block=True, then this method will
18:16:19             block for ``pool_timeout`` seconds and raise EmptyPoolError if no
18:16:19             connection is available within the time period.
18:16:19
18:16:19         :param bool preload_content:
18:16:19             If True, the response's body will be preloaded into memory.
18:16:19
18:16:19         :param bool decode_content:
18:16:19             If True, will attempt to decode the body based on the
18:16:19             'content-encoding' header.
18:16:19
18:16:19         :param release_conn:
18:16:19             If False, then the urlopen call will not release the connection
18:16:19             back into the pool once a response is received (but will release if
18:16:19             you read the entire contents of the response such as when
18:16:19             `preload_content=True`). This is useful if you're not preloading
18:16:19             the response's content immediately. You will need to call
18:16:19             ``r.release_conn()`` on the response ``r`` to return the connection
18:16:19             back into the pool. If None, it takes the value of ``preload_content``
18:16:19             which defaults to ``True``.
18:16:19
18:16:19         :param bool chunked:
18:16:19             If True, urllib3 will send the body using chunked transfer
18:16:19             encoding. Otherwise, urllib3 will send the body using the standard
18:16:19             content-length form. Defaults to False.
18:16:19
18:16:19         :param int body_pos:
18:16:19             Position to seek to in file-like body in the event of a retry or
18:16:19             redirect. Typically this won't need to be set because urllib3 will
18:16:19             auto-populate the value when needed.
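[editor's note] The locals shown for this `urlopen` call include `retries = Retry(total=0, ...)`: requests hands urllib3 a zero retry budget, so the very first connection error exhausts the counter and is wrapped in `MaxRetryError`. A toy model of that bookkeeping, assuming only what the log shows (`SimpleRetry` and `MaxRetriesExceeded` are illustrative stand-ins, not the real urllib3 classes):

```python
class MaxRetriesExceeded(Exception):
    """Stand-in for urllib3.exceptions.MaxRetryError."""


class SimpleRetry:
    """Toy model of urllib3's Retry counter: each failure returns a NEW
    object with ``total`` decremented (Retry objects are immutable), and
    once the budget goes negative the triggering error is raised wrapped."""

    def __init__(self, total: int):
        self.total = total

    def increment(self, error: Exception) -> "SimpleRetry":
        new = SimpleRetry(self.total - 1)
        if new.total < 0:  # exhausted, mirror raise MaxRetryError(...) from reason
            raise MaxRetriesExceeded(error) from error
        return new
```

With `total=0`, as in this run, a single `Connection refused` is enough to trip the wrapped error; a positive budget would simply return a decremented copy and let the caller retry.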
18:16:19 """ 18:16:19 parsed_url = parse_url(url) 18:16:19 destination_scheme = parsed_url.scheme 18:16:19 18:16:19 if headers is None: 18:16:19 headers = self.headers 18:16:19 18:16:19 if not isinstance(retries, Retry): 18:16:19 retries = Retry.from_int(retries, redirect=redirect, default=self.retries) 18:16:19 18:16:19 if release_conn is None: 18:16:19 release_conn = preload_content 18:16:19 18:16:19 # Check host 18:16:19 if assert_same_host and not self.is_same_host(url): 18:16:19 raise HostChangedError(self, url, retries) 18:16:19 18:16:19 # Ensure that the URL we're connecting to is properly encoded 18:16:19 if url.startswith("/"): 18:16:19 url = to_str(_encode_target(url)) 18:16:19 else: 18:16:19 url = to_str(parsed_url.url) 18:16:19 18:16:19 conn = None 18:16:19 18:16:19 # Track whether `conn` needs to be released before 18:16:19 # returning/raising/recursing. Update this variable if necessary, and 18:16:19 # leave `release_conn` constant throughout the function. That way, if 18:16:19 # the function recurses, the original value of `release_conn` will be 18:16:19 # passed down into the recursive call, and its value will be respected. 18:16:19 # 18:16:19 # See issue #651 [1] for details. 18:16:19 # 18:16:19 # [1] 18:16:19 release_this_conn = release_conn 18:16:19 18:16:19 http_tunnel_required = connection_requires_http_tunnel( 18:16:19 self.proxy, self.proxy_config, destination_scheme 18:16:19 ) 18:16:19 18:16:19 # Merge the proxy headers. Only done when not using HTTP CONNECT. We 18:16:19 # have to copy the headers dict so we can safely change it without those 18:16:19 # changes being reflected in anyone else's copy. 18:16:19 if not http_tunnel_required: 18:16:19 headers = headers.copy() # type: ignore[attr-defined] 18:16:19 headers.update(self.proxy_headers) # type: ignore[union-attr] 18:16:19 18:16:19 # Must keep the exception bound to a separate variable or else Python 3 18:16:19 # complains about UnboundLocalError. 
18:16:19         err = None
18:16:19
18:16:19         # Keep track of whether we cleanly exited the except block. This
18:16:19         # ensures we do proper cleanup in finally.
18:16:19         clean_exit = False
18:16:19
18:16:19         # Rewind body position, if needed. Record current position
18:16:19         # for future rewinds in the event of a redirect/retry.
18:16:19         body_pos = set_file_position(body, body_pos)
18:16:19
18:16:19         try:
18:16:19             # Request a connection from the queue.
18:16:19             timeout_obj = self._get_timeout(timeout)
18:16:19             conn = self._get_conn(timeout=pool_timeout)
18:16:19
18:16:19             conn.timeout = timeout_obj.connect_timeout  # type: ignore[assignment]
18:16:19
18:16:19             # Is this a closed/new connection that requires CONNECT tunnelling?
18:16:19             if self.proxy is not None and http_tunnel_required and conn.is_closed:
18:16:19                 try:
18:16:19                     self._prepare_proxy(conn)
18:16:19                 except (BaseSSLError, OSError, SocketTimeout) as e:
18:16:19                     self._raise_timeout(
18:16:19                         err=e, url=self.proxy.url, timeout_value=conn.timeout
18:16:19                     )
18:16:19                     raise
18:16:19
18:16:19             # If we're going to release the connection in ``finally:``, then
18:16:19             # the response doesn't need to know about the connection. Otherwise
18:16:19             # it will also try to release it and we'll have a double-release
18:16:19             # mess.
18:16:19             response_conn = conn if not release_conn else None
18:16:19
18:16:19             # Make the request on the HTTPConnection object
18:16:19 >           response = self._make_request(
18:16:19                 conn,
18:16:19                 method,
18:16:19                 url,
18:16:19                 timeout=timeout_obj,
18:16:19                 body=body,
18:16:19                 headers=headers,
18:16:19                 chunked=chunked,
18:16:19                 retries=retries,
18:16:19                 response_conn=response_conn,
18:16:19                 preload_content=preload_content,
18:16:19                 decode_content=decode_content,
18:16:19                 **response_kw,
18:16:19             )
18:16:19
18:16:19 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connectionpool.py:789:
18:16:19 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
18:16:19 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connectionpool.py:495: in _make_request
18:16:19     conn.request(
18:16:19 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:441: in request
18:16:19     self.endheaders()
18:16:19 /opt/pyenv/versions/3.11.7/lib/python3.11/http/client.py:1289: in endheaders
18:16:19     self._send_output(message_body, encode_chunked=encode_chunked)
18:16:19 /opt/pyenv/versions/3.11.7/lib/python3.11/http/client.py:1048: in _send_output
18:16:19     self.send(msg)
18:16:19 /opt/pyenv/versions/3.11.7/lib/python3.11/http/client.py:986: in send
18:16:19     self.connect()
18:16:19 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:279: in connect
18:16:19     self.sock = self._new_conn()
18:16:19 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
18:16:19
18:16:19 self =
18:16:19
18:16:19     def _new_conn(self) -> socket.socket:
18:16:19         """Establish a socket connection and set nodelay settings on it.
18:16:19
18:16:19         :return: New socket connection.
18:16:19 """ 18:16:19 try: 18:16:19 sock = connection.create_connection( 18:16:19 (self._dns_host, self.port), 18:16:19 self.timeout, 18:16:19 source_address=self.source_address, 18:16:19 socket_options=self.socket_options, 18:16:19 ) 18:16:19 except socket.gaierror as e: 18:16:19 raise NameResolutionError(self.host, self, e) from e 18:16:19 except SocketTimeout as e: 18:16:19 raise ConnectTimeoutError( 18:16:19 self, 18:16:19 f"Connection to {self.host} timed out. (connect timeout={self.timeout})", 18:16:19 ) from e 18:16:19 18:16:19 except OSError as e: 18:16:19 > raise NewConnectionError( 18:16:19 self, f"Failed to establish a new connection: {e}" 18:16:19 ) from e 18:16:19 E urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused 18:16:19 18:16:19 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:214: NewConnectionError 18:16:19 18:16:19 The above exception was the direct cause of the following exception: 18:16:19 18:16:19 self = 18:16:19 request = , stream = False 18:16:19 timeout = Timeout(connect=10, read=10, total=None), verify = True, cert = None 18:16:19 proxies = OrderedDict() 18:16:19 18:16:19 def send( 18:16:19 self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 18:16:19 ): 18:16:19 """Sends PreparedRequest object. Returns Response object. 18:16:19 18:16:19 :param request: The :class:`PreparedRequest ` being sent. 18:16:19 :param stream: (optional) Whether to stream the request content. 18:16:19 :param timeout: (optional) How long to wait for the server to send 18:16:19 data before giving up, as a float, or a :ref:`(connect timeout, 18:16:19 read timeout) ` tuple. 
18:16:19         :type timeout: float or tuple or urllib3 Timeout object
18:16:19         :param verify: (optional) Either a boolean, in which case it controls whether
18:16:19             we verify the server's TLS certificate, or a string, in which case it
18:16:19             must be a path to a CA bundle to use
18:16:19         :param cert: (optional) Any user-provided SSL certificate to be trusted.
18:16:19         :param proxies: (optional) The proxies dictionary to apply to the request.
18:16:19         :rtype: requests.Response
18:16:19         """
18:16:19
18:16:19         try:
18:16:19             conn = self.get_connection_with_tls_context(
18:16:19                 request, verify, proxies=proxies, cert=cert
18:16:19             )
18:16:19         except LocationValueError as e:
18:16:19             raise InvalidURL(e, request=request)
18:16:19
18:16:19         self.cert_verify(conn, request.url, verify, cert)
18:16:19         url = self.request_url(request, proxies)
18:16:19         self.add_headers(
18:16:19             request,
18:16:19             stream=stream,
18:16:19             timeout=timeout,
18:16:19             verify=verify,
18:16:19             cert=cert,
18:16:19             proxies=proxies,
18:16:19         )
18:16:19
18:16:19         chunked = not (request.body is None or "Content-Length" in request.headers)
18:16:19
18:16:19         if isinstance(timeout, tuple):
18:16:19             try:
18:16:19                 connect, read = timeout
18:16:19                 timeout = TimeoutSauce(connect=connect, read=read)
18:16:19             except ValueError:
18:16:19                 raise ValueError(
18:16:19                     f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, "
18:16:19                     f"or a single float to set both timeouts to the same value."
18:16:19                 )
18:16:19         elif isinstance(timeout, TimeoutSauce):
18:16:19             pass
18:16:19         else:
18:16:19             timeout = TimeoutSauce(connect=timeout, read=timeout)
18:16:19
18:16:19         try:
18:16:19 >           resp = conn.urlopen(
18:16:19                 method=request.method,
18:16:19                 url=url,
18:16:19                 body=request.body,
18:16:19                 headers=request.headers,
18:16:19                 redirect=False,
18:16:19                 assert_same_host=False,
18:16:19                 preload_content=False,
18:16:19                 decode_content=False,
18:16:19                 retries=self.max_retries,
18:16:19                 timeout=timeout,
18:16:19                 chunked=chunked,
18:16:19             )
18:16:19
18:16:19 ../.tox/tests121/lib/python3.11/site-packages/requests/adapters.py:667:
18:16:19 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
18:16:19 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connectionpool.py:843: in urlopen
18:16:19     retries = retries.increment(
18:16:19 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
18:16:19
18:16:19 self = Retry(total=0, connect=None, read=False, redirect=None, status=None)
18:16:19 method = 'DELETE'
18:16:19 url = '/rests/data/network-topology:network-topology/topology=topology-netconf/node=ROADMB01'
18:16:19 response = None
18:16:19 error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')
18:16:19 _pool =
18:16:19 _stacktrace =
18:16:19
18:16:19     def increment(
18:16:19         self,
18:16:19         method: str | None = None,
18:16:19         url: str | None = None,
18:16:19         response: BaseHTTPResponse | None = None,
18:16:19         error: Exception | None = None,
18:16:19         _pool: ConnectionPool | None = None,
18:16:19         _stacktrace: TracebackType | None = None,
18:16:19     ) -> Self:
18:16:19         """Return a new Retry object with incremented retry counters.
18:16:19
18:16:19         :param response: A response object, or None, if the server did not
18:16:19             return a response.
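[editor's note] The timeout-normalization branch in `HTTPAdapter.send()` shown above splits a `(connect, read)` tuple and duplicates a scalar into both slots, which is why the locals show `Timeout(connect=10, read=10, total=None)` for the tests' single 10-second value. A sketch of just that branch (`normalize_timeout` is our name; it returns a plain tuple instead of wrapping in `TimeoutSauce`, requests' alias for `urllib3.util.Timeout`):

```python
def normalize_timeout(timeout):
    """Mirror the timeout branch of requests' HTTPAdapter.send():
    a (connect, read) tuple is split, any other value is applied to
    both the connect and the read phase, and a malformed tuple raises
    ValueError with the same guidance as the real code."""
    if isinstance(timeout, tuple):
        try:
            connect, read = timeout  # ValueError if not exactly two items
        except ValueError:
            raise ValueError(
                f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, "
                f"or a single float to set both timeouts to the same value."
            )
        return connect, read
    # A single number (or None) sets both phases, as in the log's
    # Timeout(connect=10, read=10, total=None).
    return timeout, timeout
```

So `timeout=10` in `test_utils` becomes connect=10, read=10, while `timeout=(3, 30)` would allow a slow response without loosening the connect deadline.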
18:16:19         :type response: :class:`~urllib3.response.BaseHTTPResponse`
18:16:19         :param Exception error: An error encountered during the request, or
18:16:19             None if the response was received successfully.
18:16:19
18:16:19         :return: A new ``Retry`` object.
18:16:19         """
18:16:19         if self.total is False and error:
18:16:19             # Disabled, indicate to re-raise the error.
18:16:19             raise reraise(type(error), error, _stacktrace)
18:16:19
18:16:19         total = self.total
18:16:19         if total is not None:
18:16:19             total -= 1
18:16:19
18:16:19         connect = self.connect
18:16:19         read = self.read
18:16:19         redirect = self.redirect
18:16:19         status_count = self.status
18:16:19         other = self.other
18:16:19         cause = "unknown"
18:16:19         status = None
18:16:19         redirect_location = None
18:16:19
18:16:19         if error and self._is_connection_error(error):
18:16:19             # Connect retry?
18:16:19             if connect is False:
18:16:19                 raise reraise(type(error), error, _stacktrace)
18:16:19             elif connect is not None:
18:16:19                 connect -= 1
18:16:19
18:16:19         elif error and self._is_read_error(error):
18:16:19             # Read retry?
18:16:19             if read is False or method is None or not self._is_method_retryable(method):
18:16:19                 raise reraise(type(error), error, _stacktrace)
18:16:19             elif read is not None:
18:16:19                 read -= 1
18:16:19
18:16:19         elif error:
18:16:19             # Other retry?
18:16:19             if other is not None:
18:16:19                 other -= 1
18:16:19
18:16:19         elif response and response.get_redirect_location():
18:16:19             # Redirect retry?
18:16:19             if redirect is not None:
18:16:19                 redirect -= 1
18:16:19             cause = "too many redirects"
18:16:19             response_redirect_location = response.get_redirect_location()
18:16:19             if response_redirect_location:
18:16:19                 redirect_location = response_redirect_location
18:16:19             status = response.status
18:16:19
18:16:19         else:
18:16:19             # Incrementing because of a server error like a 500 in
18:16:19             # status_forcelist and the given method is in the allowed_methods
18:16:19             cause = ResponseError.GENERIC_ERROR
18:16:19             if response and response.status:
18:16:19                 if status_count is not None:
18:16:19                     status_count -= 1
18:16:19                 cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status)
18:16:19                 status = response.status
18:16:19
18:16:19         history = self.history + (
18:16:19             RequestHistory(method, url, error, status, redirect_location),
18:16:19         )
18:16:19
18:16:19         new_retry = self.new(
18:16:19             total=total,
18:16:19             connect=connect,
18:16:19             read=read,
18:16:19             redirect=redirect,
18:16:19             status=status_count,
18:16:19             other=other,
18:16:19             history=history,
18:16:19         )
18:16:19
18:16:19         if new_retry.is_exhausted():
18:16:19             reason = error or ResponseError(cause)
18:16:19 >           raise MaxRetryError(_pool, url, reason) from reason  # type: ignore[arg-type]
18:16:19 E           urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=8182): Max retries exceeded with url: /rests/data/network-topology:network-topology/topology=topology-netconf/node=ROADMB01 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused'))
18:16:19
18:16:19 ../.tox/tests121/lib/python3.11/site-packages/urllib3/util/retry.py:519: MaxRetryError
18:16:19
18:16:19 During handling of the above exception, another exception occurred:
18:16:19
18:16:19 self =
18:16:19
18:16:19     def test_30_disconnect_ROADMB(self):
18:16:19         # Delete in the topology-netconf
18:16:19 >       response = test_utils.unmount_device("ROADMB01")
18:16:19
18:16:19 transportpce_tests/1.2.1/test03_topology.py:624:
18:16:19 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
18:16:19 transportpce_tests/common/test_utils.py:358: in unmount_device
18:16:19     response = delete_request(url[RESTCONF_VERSION].format('{}', node))
18:16:19 transportpce_tests/common/test_utils.py:133: in delete_request
18:16:19     return requests.request(
18:16:19 ../.tox/tests121/lib/python3.11/site-packages/requests/api.py:59: in request
18:16:19     return session.request(method=method, url=url, **kwargs)
18:16:19 ../.tox/tests121/lib/python3.11/site-packages/requests/sessions.py:589: in request
18:16:19     resp = self.send(prep, **send_kwargs)
18:16:19 ../.tox/tests121/lib/python3.11/site-packages/requests/sessions.py:703: in send
18:16:19     r = adapter.send(request, **kwargs)
18:16:19 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
18:16:19
18:16:19 self =
18:16:19 request = , stream = False
18:16:19 timeout = Timeout(connect=10, read=10, total=None), verify = True, cert = None
18:16:19 proxies = OrderedDict()
18:16:19
18:16:19     def send(
18:16:19         self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None
18:16:19     ):
18:16:19         """Sends PreparedRequest object. Returns Response object.
18:16:19
18:16:19         :param request: The :class:`PreparedRequest ` being sent.
18:16:19         :param stream: (optional) Whether to stream the request content.
18:16:19         :param timeout: (optional) How long to wait for the server to send
18:16:19             data before giving up, as a float, or a :ref:`(connect timeout,
18:16:19             read timeout) ` tuple.
18:16:19         :type timeout: float or tuple or urllib3 Timeout object
18:16:19         :param verify: (optional) Either a boolean, in which case it controls whether
18:16:19             we verify the server's TLS certificate, or a string, in which case it
18:16:19             must be a path to a CA bundle to use
18:16:19         :param cert: (optional) Any user-provided SSL certificate to be trusted.
18:16:19         :param proxies: (optional) The proxies dictionary to apply to the request.
18:16:19         :rtype: requests.Response
18:16:19         """
18:16:19
18:16:19         try:
18:16:19             conn = self.get_connection_with_tls_context(
18:16:19                 request, verify, proxies=proxies, cert=cert
18:16:19             )
18:16:19         except LocationValueError as e:
18:16:19             raise InvalidURL(e, request=request)
18:16:19
18:16:19         self.cert_verify(conn, request.url, verify, cert)
18:16:19         url = self.request_url(request, proxies)
18:16:19         self.add_headers(
18:16:19             request,
18:16:19             stream=stream,
18:16:19             timeout=timeout,
18:16:19             verify=verify,
18:16:19             cert=cert,
18:16:19             proxies=proxies,
18:16:19         )
18:16:19
18:16:19         chunked = not (request.body is None or "Content-Length" in request.headers)
18:16:19
18:16:19         if isinstance(timeout, tuple):
18:16:19             try:
18:16:19                 connect, read = timeout
18:16:19                 timeout = TimeoutSauce(connect=connect, read=read)
18:16:19             except ValueError:
18:16:19                 raise ValueError(
18:16:19                     f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, "
18:16:19                     f"or a single float to set both timeouts to the same value."
18:16:19                 )
18:16:19         elif isinstance(timeout, TimeoutSauce):
18:16:19             pass
18:16:19         else:
18:16:19             timeout = TimeoutSauce(connect=timeout, read=timeout)
18:16:19
18:16:19         try:
18:16:19             resp = conn.urlopen(
18:16:19                 method=request.method,
18:16:19                 url=url,
18:16:19                 body=request.body,
18:16:19                 headers=request.headers,
18:16:19                 redirect=False,
18:16:19                 assert_same_host=False,
18:16:19                 preload_content=False,
18:16:19                 decode_content=False,
18:16:19                 retries=self.max_retries,
18:16:19                 timeout=timeout,
18:16:19                 chunked=chunked,
18:16:19             )
18:16:19
18:16:19         except (ProtocolError, OSError) as err:
18:16:19             raise ConnectionError(err, request=request)
18:16:19
18:16:19         except MaxRetryError as e:
18:16:19             if isinstance(e.reason, ConnectTimeoutError):
18:16:19                 # TODO: Remove this in 3.0.0: see #2811
18:16:19                 if not isinstance(e.reason, NewConnectionError):
18:16:19                     raise ConnectTimeout(e, request=request)
18:16:19
18:16:19             if isinstance(e.reason, ResponseError):
18:16:19                 raise RetryError(e, request=request)
18:16:19
18:16:19             if isinstance(e.reason, _ProxyError):
18:16:19                 raise ProxyError(e, request=request)
18:16:19
18:16:19             if isinstance(e.reason, _SSLError):
18:16:19                 # This branch is for urllib3 v1.22 and later.
18:16:19                 raise SSLError(e, request=request)
18:16:19
18:16:19 >           raise ConnectionError(e, request=request)
18:16:19 E           requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=8182): Max retries exceeded with url: /rests/data/network-topology:network-topology/topology=topology-netconf/node=ROADMB01 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused'))
18:16:19
18:16:19 ../.tox/tests121/lib/python3.11/site-packages/requests/adapters.py:700: ConnectionError
18:16:19 ____________ TransportPCETopologyTesting.test_31_disconnect_ROADMC _____________
18:16:19
18:16:19 self =
18:16:19
18:16:19     def _new_conn(self) -> socket.socket:
18:16:19         """Establish a socket connection and set nodelay settings on it.
18:16:19
18:16:19         :return: New socket connection.
18:16:19         """
18:16:19         try:
18:16:19 >           sock = connection.create_connection(
18:16:19                 (self._dns_host, self.port),
18:16:19                 self.timeout,
18:16:19                 source_address=self.source_address,
18:16:19                 socket_options=self.socket_options,
18:16:19             )
18:16:19
18:16:19 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:199:
18:16:19 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
18:16:19 ../.tox/tests121/lib/python3.11/site-packages/urllib3/util/connection.py:85: in create_connection
18:16:19     raise err
18:16:19 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
18:16:19
18:16:19 address = ('localhost', 8182), timeout = 10, source_address = None
18:16:19 socket_options = [(6, 1, 1)]
18:16:19
18:16:19 def create_connection(
18:16:19     address: tuple[str, int],
18:16:19     timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT,
18:16:19     source_address: tuple[str, int] | None = None,
18:16:19     socket_options: _TYPE_SOCKET_OPTIONS | None = None,
18:16:19 ) -> socket.socket:
18:16:19     """Connect to *address* and return the socket object.
18:16:19
18:16:19     Convenience function. Connect to *address* (a 2-tuple ``(host,
18:16:19     port)``) and return the socket object. Passing the optional
18:16:19     *timeout* parameter will set the timeout on the socket instance
18:16:19     before attempting to connect. If no *timeout* is supplied, the
18:16:19     global default timeout setting returned by :func:`socket.getdefaulttimeout`
18:16:19     is used. If *source_address* is set it must be a tuple of (host, port)
18:16:19     for the socket to bind as a source address before making the connection.
18:16:19     An host of '' or port 0 tells the OS to use the default.
18:16:19 """ 18:16:19 18:16:19 host, port = address 18:16:19 if host.startswith("["): 18:16:19 host = host.strip("[]") 18:16:19 err = None 18:16:19 18:16:19 # Using the value from allowed_gai_family() in the context of getaddrinfo lets 18:16:19 # us select whether to work with IPv4 DNS records, IPv6 records, or both. 18:16:19 # The original create_connection function always returns all records. 18:16:19 family = allowed_gai_family() 18:16:19 18:16:19 try: 18:16:19 host.encode("idna") 18:16:19 except UnicodeError: 18:16:19 raise LocationParseError(f"'{host}', label empty or too long") from None 18:16:19 18:16:19 for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM): 18:16:19 af, socktype, proto, canonname, sa = res 18:16:19 sock = None 18:16:19 try: 18:16:19 sock = socket.socket(af, socktype, proto) 18:16:19 18:16:19 # If provided, set socket level options before connecting. 18:16:19 _set_socket_options(sock, socket_options) 18:16:19 18:16:19 if timeout is not _DEFAULT_TIMEOUT: 18:16:19 sock.settimeout(timeout) 18:16:19 if source_address: 18:16:19 sock.bind(source_address) 18:16:19 > sock.connect(sa) 18:16:19 E ConnectionRefusedError: [Errno 111] Connection refused 18:16:19 18:16:19 ../.tox/tests121/lib/python3.11/site-packages/urllib3/util/connection.py:73: ConnectionRefusedError 18:16:19 18:16:19 The above exception was the direct cause of the following exception: 18:16:19 18:16:19 self = 18:16:19 method = 'DELETE' 18:16:19 url = '/rests/data/network-topology:network-topology/topology=topology-netconf/node=ROADMC01' 18:16:19 body = None 18:16:19 headers = {'User-Agent': 'python-requests/2.32.3', 'Accept-Encoding': 'gzip, deflate', 'Accept': 'application/json', 'Connection': 'keep-alive', 'Content-Type': 'application/json', 'Content-Length': '0', 'Authorization': 'Basic YWRtaW46YWRtaW4='} 18:16:19 retries = Retry(total=0, connect=None, read=False, redirect=None, status=None) 18:16:19 redirect = False, assert_same_host = False 18:16:19 timeout = 
Timeout(connect=10, read=10, total=None), pool_timeout = None 18:16:19 release_conn = False, chunked = False, body_pos = None, preload_content = False 18:16:19 decode_content = False, response_kw = {} 18:16:19 parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/rests/data/network-topology:network-topology/topology=topology-netconf/node=ROADMC01', query=None, fragment=None) 18:16:19 destination_scheme = None, conn = None, release_this_conn = True 18:16:19 http_tunnel_required = False, err = None, clean_exit = False 18:16:19 18:16:19 def urlopen( # type: ignore[override] 18:16:19 self, 18:16:19 method: str, 18:16:19 url: str, 18:16:19 body: _TYPE_BODY | None = None, 18:16:19 headers: typing.Mapping[str, str] | None = None, 18:16:19 retries: Retry | bool | int | None = None, 18:16:19 redirect: bool = True, 18:16:19 assert_same_host: bool = True, 18:16:19 timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 18:16:19 pool_timeout: int | None = None, 18:16:19 release_conn: bool | None = None, 18:16:19 chunked: bool = False, 18:16:19 body_pos: _TYPE_BODY_POSITION | None = None, 18:16:19 preload_content: bool = True, 18:16:19 decode_content: bool = True, 18:16:19 **response_kw: typing.Any, 18:16:19 ) -> BaseHTTPResponse: 18:16:19 """ 18:16:19 Get a connection from the pool and perform an HTTP request. This is the 18:16:19 lowest level call for making a request, so you'll need to specify all 18:16:19 the raw details. 18:16:19 18:16:19 .. note:: 18:16:19 18:16:19 More commonly, it's appropriate to use a convenience method 18:16:19 such as :meth:`request`. 18:16:19 18:16:19 .. note:: 18:16:19 18:16:19 `release_conn` will only behave as expected if 18:16:19 `preload_content=False` because we want to make 18:16:19 `preload_content=False` the default behaviour someday soon without 18:16:19 breaking backwards compatibility. 18:16:19 18:16:19 :param method: 18:16:19 HTTP request method (such as GET, POST, PUT, etc.) 
18:16:19 18:16:19 :param url: 18:16:19 The URL to perform the request on. 18:16:19 18:16:19 :param body: 18:16:19 Data to send in the request body, either :class:`str`, :class:`bytes`, 18:16:19 an iterable of :class:`str`/:class:`bytes`, or a file-like object. 18:16:19 18:16:19 :param headers: 18:16:19 Dictionary of custom headers to send, such as User-Agent, 18:16:19 If-None-Match, etc. If None, pool headers are used. If provided, 18:16:19 these headers completely replace any pool-specific headers. 18:16:19 18:16:19 :param retries: 18:16:19 Configure the number of retries to allow before raising a 18:16:19 :class:`~urllib3.exceptions.MaxRetryError` exception. 18:16:19 18:16:19 If ``None`` (default) will retry 3 times, see ``Retry.DEFAULT``. Pass a 18:16:19 :class:`~urllib3.util.retry.Retry` object for fine-grained control 18:16:19 over different types of retries. 18:16:19 Pass an integer number to retry connection errors that many times, 18:16:19 but no other types of errors. Pass zero to never retry. 18:16:19 18:16:19 If ``False``, then retries are disabled and any exception is raised 18:16:19 immediately. Also, instead of raising a MaxRetryError on redirects, 18:16:19 the redirect response will be returned. 18:16:19 18:16:19 :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. 18:16:19 18:16:19 :param redirect: 18:16:19 If True, automatically handle redirects (status codes 301, 302, 18:16:19 303, 307, 308). Each redirect counts as a retry. Disabling retries 18:16:19 will disable redirect, too. 18:16:19 18:16:19 :param assert_same_host: 18:16:19 If ``True``, will make sure that the host of the pool requests is 18:16:19 consistent else will raise HostChangedError. When ``False``, you can 18:16:19 use the pool on an HTTP proxy and request foreign hosts. 18:16:19 18:16:19 :param timeout: 18:16:19 If specified, overrides the default timeout for this one 18:16:19 request. 
It may be a float (in seconds) or an instance of 18:16:19 :class:`urllib3.util.Timeout`. 18:16:19 18:16:19 :param pool_timeout: 18:16:19 If set and the pool is set to block=True, then this method will 18:16:19 block for ``pool_timeout`` seconds and raise EmptyPoolError if no 18:16:19 connection is available within the time period. 18:16:19 18:16:19 :param bool preload_content: 18:16:19 If True, the response's body will be preloaded into memory. 18:16:19 18:16:19 :param bool decode_content: 18:16:19 If True, will attempt to decode the body based on the 18:16:19 'content-encoding' header. 18:16:19 18:16:19 :param release_conn: 18:16:19 If False, then the urlopen call will not release the connection 18:16:19 back into the pool once a response is received (but will release if 18:16:19 you read the entire contents of the response such as when 18:16:19 `preload_content=True`). This is useful if you're not preloading 18:16:19 the response's content immediately. You will need to call 18:16:19 ``r.release_conn()`` on the response ``r`` to return the connection 18:16:19 back into the pool. If None, it takes the value of ``preload_content`` 18:16:19 which defaults to ``True``. 18:16:19 18:16:19 :param bool chunked: 18:16:19 If True, urllib3 will send the body using chunked transfer 18:16:19 encoding. Otherwise, urllib3 will send the body using the standard 18:16:19 content-length form. Defaults to False. 18:16:19 18:16:19 :param int body_pos: 18:16:19 Position to seek to in file-like body in the event of a retry or 18:16:19 redirect. Typically this won't need to be set because urllib3 will 18:16:19 auto-populate the value when needed. 
18:16:19 """ 18:16:19 parsed_url = parse_url(url) 18:16:19 destination_scheme = parsed_url.scheme 18:16:19 18:16:19 if headers is None: 18:16:19 headers = self.headers 18:16:19 18:16:19 if not isinstance(retries, Retry): 18:16:19 retries = Retry.from_int(retries, redirect=redirect, default=self.retries) 18:16:19 18:16:19 if release_conn is None: 18:16:19 release_conn = preload_content 18:16:19 18:16:19 # Check host 18:16:19 if assert_same_host and not self.is_same_host(url): 18:16:19 raise HostChangedError(self, url, retries) 18:16:19 18:16:19 # Ensure that the URL we're connecting to is properly encoded 18:16:19 if url.startswith("/"): 18:16:19 url = to_str(_encode_target(url)) 18:16:19 else: 18:16:19 url = to_str(parsed_url.url) 18:16:19 18:16:19 conn = None 18:16:19 18:16:19 # Track whether `conn` needs to be released before 18:16:19 # returning/raising/recursing. Update this variable if necessary, and 18:16:19 # leave `release_conn` constant throughout the function. That way, if 18:16:19 # the function recurses, the original value of `release_conn` will be 18:16:19 # passed down into the recursive call, and its value will be respected. 18:16:19 # 18:16:19 # See issue #651 [1] for details. 18:16:19 # 18:16:19 # [1] 18:16:19 release_this_conn = release_conn 18:16:19 18:16:19 http_tunnel_required = connection_requires_http_tunnel( 18:16:19 self.proxy, self.proxy_config, destination_scheme 18:16:19 ) 18:16:19 18:16:19 # Merge the proxy headers. Only done when not using HTTP CONNECT. We 18:16:19 # have to copy the headers dict so we can safely change it without those 18:16:19 # changes being reflected in anyone else's copy. 18:16:19 if not http_tunnel_required: 18:16:19 headers = headers.copy() # type: ignore[attr-defined] 18:16:19 headers.update(self.proxy_headers) # type: ignore[union-attr] 18:16:19 18:16:19 # Must keep the exception bound to a separate variable or else Python 3 18:16:19 # complains about UnboundLocalError. 
18:16:19 err = None 18:16:19 18:16:19 # Keep track of whether we cleanly exited the except block. This 18:16:19 # ensures we do proper cleanup in finally. 18:16:19 clean_exit = False 18:16:19 18:16:19 # Rewind body position, if needed. Record current position 18:16:19 # for future rewinds in the event of a redirect/retry. 18:16:19 body_pos = set_file_position(body, body_pos) 18:16:19 18:16:19 try: 18:16:19 # Request a connection from the queue. 18:16:19 timeout_obj = self._get_timeout(timeout) 18:16:19 conn = self._get_conn(timeout=pool_timeout) 18:16:19 18:16:19 conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] 18:16:19 18:16:19 # Is this a closed/new connection that requires CONNECT tunnelling? 18:16:19 if self.proxy is not None and http_tunnel_required and conn.is_closed: 18:16:19 try: 18:16:19 self._prepare_proxy(conn) 18:16:19 except (BaseSSLError, OSError, SocketTimeout) as e: 18:16:19 self._raise_timeout( 18:16:19 err=e, url=self.proxy.url, timeout_value=conn.timeout 18:16:19 ) 18:16:19 raise 18:16:19 18:16:19 # If we're going to release the connection in ``finally:``, then 18:16:19 # the response doesn't need to know about the connection. Otherwise 18:16:19 # it will also try to release it and we'll have a double-release 18:16:19 # mess. 
18:16:19 response_conn = conn if not release_conn else None 18:16:19 18:16:19 # Make the request on the HTTPConnection object 18:16:19 > response = self._make_request( 18:16:19 conn, 18:16:19 method, 18:16:19 url, 18:16:19 timeout=timeout_obj, 18:16:19 body=body, 18:16:19 headers=headers, 18:16:19 chunked=chunked, 18:16:19 retries=retries, 18:16:19 response_conn=response_conn, 18:16:19 preload_content=preload_content, 18:16:19 decode_content=decode_content, 18:16:19 **response_kw, 18:16:19 ) 18:16:19 18:16:19 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connectionpool.py:789: 18:16:19 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 18:16:19 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connectionpool.py:495: in _make_request 18:16:19 conn.request( 18:16:19 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:441: in request 18:16:19 self.endheaders() 18:16:19 /opt/pyenv/versions/3.11.7/lib/python3.11/http/client.py:1289: in endheaders 18:16:19 self._send_output(message_body, encode_chunked=encode_chunked) 18:16:19 /opt/pyenv/versions/3.11.7/lib/python3.11/http/client.py:1048: in _send_output 18:16:19 self.send(msg) 18:16:19 /opt/pyenv/versions/3.11.7/lib/python3.11/http/client.py:986: in send 18:16:19 self.connect() 18:16:19 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:279: in connect 18:16:19 self.sock = self._new_conn() 18:16:19 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 18:16:19 18:16:19 self = 18:16:19 18:16:19 def _new_conn(self) -> socket.socket: 18:16:19 """Establish a socket connection and set nodelay settings on it. 18:16:19 18:16:19 :return: New socket connection. 
18:16:19 """ 18:16:19 try: 18:16:19 sock = connection.create_connection( 18:16:19 (self._dns_host, self.port), 18:16:19 self.timeout, 18:16:19 source_address=self.source_address, 18:16:19 socket_options=self.socket_options, 18:16:19 ) 18:16:19 except socket.gaierror as e: 18:16:19 raise NameResolutionError(self.host, self, e) from e 18:16:19 except SocketTimeout as e: 18:16:19 raise ConnectTimeoutError( 18:16:19 self, 18:16:19 f"Connection to {self.host} timed out. (connect timeout={self.timeout})", 18:16:19 ) from e 18:16:19 18:16:19 except OSError as e: 18:16:19 > raise NewConnectionError( 18:16:19 self, f"Failed to establish a new connection: {e}" 18:16:19 ) from e 18:16:19 E urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused 18:16:19 18:16:19 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:214: NewConnectionError 18:16:19 18:16:19 The above exception was the direct cause of the following exception: 18:16:19 18:16:19 self = 18:16:19 request = , stream = False 18:16:19 timeout = Timeout(connect=10, read=10, total=None), verify = True, cert = None 18:16:19 proxies = OrderedDict() 18:16:19 18:16:19 def send( 18:16:19 self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 18:16:19 ): 18:16:19 """Sends PreparedRequest object. Returns Response object. 18:16:19 18:16:19 :param request: The :class:`PreparedRequest ` being sent. 18:16:19 :param stream: (optional) Whether to stream the request content. 18:16:19 :param timeout: (optional) How long to wait for the server to send 18:16:19 data before giving up, as a float, or a :ref:`(connect timeout, 18:16:19 read timeout) ` tuple. 
18:16:19 :type timeout: float or tuple or urllib3 Timeout object 18:16:19 :param verify: (optional) Either a boolean, in which case it controls whether 18:16:19 we verify the server's TLS certificate, or a string, in which case it 18:16:19 must be a path to a CA bundle to use 18:16:19 :param cert: (optional) Any user-provided SSL certificate to be trusted. 18:16:19 :param proxies: (optional) The proxies dictionary to apply to the request. 18:16:19 :rtype: requests.Response 18:16:19 """ 18:16:19 18:16:19 try: 18:16:19 conn = self.get_connection_with_tls_context( 18:16:19 request, verify, proxies=proxies, cert=cert 18:16:19 ) 18:16:19 except LocationValueError as e: 18:16:19 raise InvalidURL(e, request=request) 18:16:19 18:16:19 self.cert_verify(conn, request.url, verify, cert) 18:16:19 url = self.request_url(request, proxies) 18:16:19 self.add_headers( 18:16:19 request, 18:16:19 stream=stream, 18:16:19 timeout=timeout, 18:16:19 verify=verify, 18:16:19 cert=cert, 18:16:19 proxies=proxies, 18:16:19 ) 18:16:19 18:16:19 chunked = not (request.body is None or "Content-Length" in request.headers) 18:16:19 18:16:19 if isinstance(timeout, tuple): 18:16:19 try: 18:16:19 connect, read = timeout 18:16:19 timeout = TimeoutSauce(connect=connect, read=read) 18:16:19 except ValueError: 18:16:19 raise ValueError( 18:16:19 f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 18:16:19 f"or a single float to set both timeouts to the same value." 
18:16:19 ) 18:16:19 elif isinstance(timeout, TimeoutSauce): 18:16:19 pass 18:16:19 else: 18:16:19 timeout = TimeoutSauce(connect=timeout, read=timeout) 18:16:19 18:16:19 try: 18:16:19 > resp = conn.urlopen( 18:16:19 method=request.method, 18:16:19 url=url, 18:16:19 body=request.body, 18:16:19 headers=request.headers, 18:16:19 redirect=False, 18:16:19 assert_same_host=False, 18:16:19 preload_content=False, 18:16:19 decode_content=False, 18:16:19 retries=self.max_retries, 18:16:19 timeout=timeout, 18:16:19 chunked=chunked, 18:16:19 ) 18:16:19 18:16:19 ../.tox/tests121/lib/python3.11/site-packages/requests/adapters.py:667: 18:16:19 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 18:16:19 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connectionpool.py:843: in urlopen 18:16:19 retries = retries.increment( 18:16:19 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 18:16:19 18:16:19 self = Retry(total=0, connect=None, read=False, redirect=None, status=None) 18:16:19 method = 'DELETE' 18:16:19 url = '/rests/data/network-topology:network-topology/topology=topology-netconf/node=ROADMC01' 18:16:19 response = None 18:16:19 error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused') 18:16:19 _pool = 18:16:19 _stacktrace = 18:16:19 18:16:19 def increment( 18:16:19 self, 18:16:19 method: str | None = None, 18:16:19 url: str | None = None, 18:16:19 response: BaseHTTPResponse | None = None, 18:16:19 error: Exception | None = None, 18:16:19 _pool: ConnectionPool | None = None, 18:16:19 _stacktrace: TracebackType | None = None, 18:16:19 ) -> Self: 18:16:19 """Return a new Retry object with incremented retry counters. 18:16:19 18:16:19 :param response: A response object, or None, if the server did not 18:16:19 return a response. 
18:16:19 :type response: :class:`~urllib3.response.BaseHTTPResponse` 18:16:19 :param Exception error: An error encountered during the request, or 18:16:19 None if the response was received successfully. 18:16:19 18:16:19 :return: A new ``Retry`` object. 18:16:19 """ 18:16:19 if self.total is False and error: 18:16:19 # Disabled, indicate to re-raise the error. 18:16:19 raise reraise(type(error), error, _stacktrace) 18:16:19 18:16:19 total = self.total 18:16:19 if total is not None: 18:16:19 total -= 1 18:16:19 18:16:19 connect = self.connect 18:16:19 read = self.read 18:16:19 redirect = self.redirect 18:16:19 status_count = self.status 18:16:19 other = self.other 18:16:19 cause = "unknown" 18:16:19 status = None 18:16:19 redirect_location = None 18:16:19 18:16:19 if error and self._is_connection_error(error): 18:16:19 # Connect retry? 18:16:19 if connect is False: 18:16:19 raise reraise(type(error), error, _stacktrace) 18:16:19 elif connect is not None: 18:16:19 connect -= 1 18:16:19 18:16:19 elif error and self._is_read_error(error): 18:16:19 # Read retry? 18:16:19 if read is False or method is None or not self._is_method_retryable(method): 18:16:19 raise reraise(type(error), error, _stacktrace) 18:16:19 elif read is not None: 18:16:19 read -= 1 18:16:19 18:16:19 elif error: 18:16:19 # Other retry? 18:16:19 if other is not None: 18:16:19 other -= 1 18:16:19 18:16:19 elif response and response.get_redirect_location(): 18:16:19 # Redirect retry? 
18:16:19 if redirect is not None: 18:16:19 redirect -= 1 18:16:19 cause = "too many redirects" 18:16:19 response_redirect_location = response.get_redirect_location() 18:16:19 if response_redirect_location: 18:16:19 redirect_location = response_redirect_location 18:16:19 status = response.status 18:16:19 18:16:19 else: 18:16:19 # Incrementing because of a server error like a 500 in 18:16:19 # status_forcelist and the given method is in the allowed_methods 18:16:19 cause = ResponseError.GENERIC_ERROR 18:16:19 if response and response.status: 18:16:19 if status_count is not None: 18:16:19 status_count -= 1 18:16:19 cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status) 18:16:19 status = response.status 18:16:19 18:16:19 history = self.history + ( 18:16:19 RequestHistory(method, url, error, status, redirect_location), 18:16:19 ) 18:16:19 18:16:19 new_retry = self.new( 18:16:19 total=total, 18:16:19 connect=connect, 18:16:19 read=read, 18:16:19 redirect=redirect, 18:16:19 status=status_count, 18:16:19 other=other, 18:16:19 history=history, 18:16:19 ) 18:16:19 18:16:19 if new_retry.is_exhausted(): 18:16:19 reason = error or ResponseError(cause) 18:16:19 > raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type] 18:16:19 E urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=8182): Max retries exceeded with url: /rests/data/network-topology:network-topology/topology=topology-netconf/node=ROADMC01 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 18:16:19 18:16:19 ../.tox/tests121/lib/python3.11/site-packages/urllib3/util/retry.py:519: MaxRetryError 18:16:19 18:16:19 During handling of the above exception, another exception occurred: 18:16:19 18:16:19 self = 18:16:19 18:16:19 def test_31_disconnect_ROADMC(self): 18:16:19 > response = test_utils.unmount_device("ROADMC01") 18:16:19 18:16:19 transportpce_tests/1.2.1/test03_topology.py:631: 18:16:19 _ _ _ _ _ _ _ 
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 18:16:19 transportpce_tests/common/test_utils.py:358: in unmount_device 18:16:19 response = delete_request(url[RESTCONF_VERSION].format('{}', node)) 18:16:19 transportpce_tests/common/test_utils.py:133: in delete_request 18:16:19 return requests.request( 18:16:19 ../.tox/tests121/lib/python3.11/site-packages/requests/api.py:59: in request 18:16:19 return session.request(method=method, url=url, **kwargs) 18:16:19 ../.tox/tests121/lib/python3.11/site-packages/requests/sessions.py:589: in request 18:16:19 resp = self.send(prep, **send_kwargs) 18:16:19 ../.tox/tests121/lib/python3.11/site-packages/requests/sessions.py:703: in send 18:16:19 r = adapter.send(request, **kwargs) 18:16:19 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 18:16:19 18:16:19 self = 18:16:19 request = , stream = False 18:16:19 timeout = Timeout(connect=10, read=10, total=None), verify = True, cert = None 18:16:19 proxies = OrderedDict() 18:16:19 18:16:19 def send( 18:16:19 self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 18:16:19 ): 18:16:19 """Sends PreparedRequest object. Returns Response object. 18:16:19 18:16:19 :param request: The :class:`PreparedRequest ` being sent. 18:16:19 :param stream: (optional) Whether to stream the request content. 18:16:19 :param timeout: (optional) How long to wait for the server to send 18:16:19 data before giving up, as a float, or a :ref:`(connect timeout, 18:16:19 read timeout) ` tuple. 18:16:19 :type timeout: float or tuple or urllib3 Timeout object 18:16:19 :param verify: (optional) Either a boolean, in which case it controls whether 18:16:19 we verify the server's TLS certificate, or a string, in which case it 18:16:19 must be a path to a CA bundle to use 18:16:19 :param cert: (optional) Any user-provided SSL certificate to be trusted. 18:16:19 :param proxies: (optional) The proxies dictionary to apply to the request. 
18:16:19 :rtype: requests.Response 18:16:19 """ 18:16:19 18:16:19 try: 18:16:19 conn = self.get_connection_with_tls_context( 18:16:19 request, verify, proxies=proxies, cert=cert 18:16:19 ) 18:16:19 except LocationValueError as e: 18:16:19 raise InvalidURL(e, request=request) 18:16:19 18:16:19 self.cert_verify(conn, request.url, verify, cert) 18:16:19 url = self.request_url(request, proxies) 18:16:19 self.add_headers( 18:16:19 request, 18:16:19 stream=stream, 18:16:19 timeout=timeout, 18:16:19 verify=verify, 18:16:19 cert=cert, 18:16:19 proxies=proxies, 18:16:19 ) 18:16:19 18:16:19 chunked = not (request.body is None or "Content-Length" in request.headers) 18:16:19 18:16:19 if isinstance(timeout, tuple): 18:16:19 try: 18:16:19 connect, read = timeout 18:16:19 timeout = TimeoutSauce(connect=connect, read=read) 18:16:19 except ValueError: 18:16:19 raise ValueError( 18:16:19 f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 18:16:19 f"or a single float to set both timeouts to the same value." 
18:16:19 ) 18:16:19 elif isinstance(timeout, TimeoutSauce): 18:16:19 pass 18:16:19 else: 18:16:19 timeout = TimeoutSauce(connect=timeout, read=timeout) 18:16:19 18:16:19 try: 18:16:19 resp = conn.urlopen( 18:16:19 method=request.method, 18:16:19 url=url, 18:16:19 body=request.body, 18:16:19 headers=request.headers, 18:16:19 redirect=False, 18:16:19 assert_same_host=False, 18:16:19 preload_content=False, 18:16:19 decode_content=False, 18:16:19 retries=self.max_retries, 18:16:19 timeout=timeout, 18:16:19 chunked=chunked, 18:16:19 ) 18:16:19 18:16:19 except (ProtocolError, OSError) as err: 18:16:19 raise ConnectionError(err, request=request) 18:16:19 18:16:19 except MaxRetryError as e: 18:16:19 if isinstance(e.reason, ConnectTimeoutError): 18:16:19 # TODO: Remove this in 3.0.0: see #2811 18:16:19 if not isinstance(e.reason, NewConnectionError): 18:16:19 raise ConnectTimeout(e, request=request) 18:16:19 18:16:19 if isinstance(e.reason, ResponseError): 18:16:19 raise RetryError(e, request=request) 18:16:19 18:16:19 if isinstance(e.reason, _ProxyError): 18:16:19 raise ProxyError(e, request=request) 18:16:19 18:16:19 if isinstance(e.reason, _SSLError): 18:16:19 # This branch is for urllib3 v1.22 and later. 18:16:19 raise SSLError(e, request=request) 18:16:19 18:16:19 > raise ConnectionError(e, request=request) 18:16:19 E requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=8182): Max retries exceeded with url: /rests/data/network-topology:network-topology/topology=topology-netconf/node=ROADMC01 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 18:16:19 18:16:19 ../.tox/tests121/lib/python3.11/site-packages/requests/adapters.py:700: ConnectionError 18:16:19 ________ TransportPCETopologyTesting.test_32_getNodes_OpenRoadmTopology ________ 18:16:19 18:16:19 self = 18:16:19 18:16:19 def _new_conn(self) -> socket.socket: 18:16:19 """Establish a socket connection and set nodelay settings on it. 
18:16:19 18:16:19 :return: New socket connection. 18:16:19 """ 18:16:19 try: 18:16:19 > sock = connection.create_connection( 18:16:19 (self._dns_host, self.port), 18:16:19 self.timeout, 18:16:19 source_address=self.source_address, 18:16:19 socket_options=self.socket_options, 18:16:19 ) 18:16:19 18:16:19 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:199: 18:16:19 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 18:16:19 ../.tox/tests121/lib/python3.11/site-packages/urllib3/util/connection.py:85: in create_connection 18:16:19 raise err 18:16:19 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 18:16:19 18:16:19 address = ('localhost', 8182), timeout = 10, source_address = None 18:16:19 socket_options = [(6, 1, 1)] 18:16:19 18:16:19 def create_connection( 18:16:19 address: tuple[str, int], 18:16:19 timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 18:16:19 source_address: tuple[str, int] | None = None, 18:16:19 socket_options: _TYPE_SOCKET_OPTIONS | None = None, 18:16:19 ) -> socket.socket: 18:16:19 """Connect to *address* and return the socket object. 18:16:19 18:16:19 Convenience function. Connect to *address* (a 2-tuple ``(host, 18:16:19 port)``) and return the socket object. Passing the optional 18:16:19 *timeout* parameter will set the timeout on the socket instance 18:16:19 before attempting to connect. If no *timeout* is supplied, the 18:16:19 global default timeout setting returned by :func:`socket.getdefaulttimeout` 18:16:19 is used. If *source_address* is set it must be a tuple of (host, port) 18:16:19 for the socket to bind as a source address before making the connection. 18:16:19 An host of '' or port 0 tells the OS to use the default. 
18:16:19 """ 18:16:19 18:16:19 host, port = address 18:16:19 if host.startswith("["): 18:16:19 host = host.strip("[]") 18:16:19 err = None 18:16:19 18:16:19 # Using the value from allowed_gai_family() in the context of getaddrinfo lets 18:16:19 # us select whether to work with IPv4 DNS records, IPv6 records, or both. 18:16:19 # The original create_connection function always returns all records. 18:16:19 family = allowed_gai_family() 18:16:19 18:16:19 try: 18:16:19 host.encode("idna") 18:16:19 except UnicodeError: 18:16:19 raise LocationParseError(f"'{host}', label empty or too long") from None 18:16:19 18:16:19 for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM): 18:16:19 af, socktype, proto, canonname, sa = res 18:16:19 sock = None 18:16:19 try: 18:16:19 sock = socket.socket(af, socktype, proto) 18:16:19 18:16:19 # If provided, set socket level options before connecting. 18:16:19 _set_socket_options(sock, socket_options) 18:16:19 18:16:19 if timeout is not _DEFAULT_TIMEOUT: 18:16:19 sock.settimeout(timeout) 18:16:19 if source_address: 18:16:19 sock.bind(source_address) 18:16:19 > sock.connect(sa) 18:16:19 E ConnectionRefusedError: [Errno 111] Connection refused 18:16:19 18:16:19 ../.tox/tests121/lib/python3.11/site-packages/urllib3/util/connection.py:73: ConnectionRefusedError 18:16:19 18:16:19 The above exception was the direct cause of the following exception: 18:16:19 18:16:19 self = 18:16:19 method = 'GET' 18:16:19 url = '/rests/data/ietf-network:networks/network=openroadm-topology?content=config' 18:16:19 body = None 18:16:19 headers = {'User-Agent': 'python-requests/2.32.3', 'Accept-Encoding': 'gzip, deflate', 'Accept': 'application/json', 'Connection': 'keep-alive', 'Content-Type': 'application/json', 'Authorization': 'Basic YWRtaW46YWRtaW4='} 18:16:19 retries = Retry(total=0, connect=None, read=False, redirect=None, status=None) 18:16:19 redirect = False, assert_same_host = False 18:16:19 timeout = Timeout(connect=10, read=10, 
total=None), pool_timeout = None 18:16:19 release_conn = False, chunked = False, body_pos = None, preload_content = False 18:16:19 decode_content = False, response_kw = {} 18:16:19 parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/rests/data/ietf-network:networks/network=openroadm-topology', query='content=config', fragment=None) 18:16:19 destination_scheme = None, conn = None, release_this_conn = True 18:16:19 http_tunnel_required = False, err = None, clean_exit = False 18:16:19 18:16:19 def urlopen( # type: ignore[override] 18:16:19 self, 18:16:19 method: str, 18:16:19 url: str, 18:16:19 body: _TYPE_BODY | None = None, 18:16:19 headers: typing.Mapping[str, str] | None = None, 18:16:19 retries: Retry | bool | int | None = None, 18:16:19 redirect: bool = True, 18:16:19 assert_same_host: bool = True, 18:16:19 timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 18:16:19 pool_timeout: int | None = None, 18:16:19 release_conn: bool | None = None, 18:16:19 chunked: bool = False, 18:16:19 body_pos: _TYPE_BODY_POSITION | None = None, 18:16:19 preload_content: bool = True, 18:16:19 decode_content: bool = True, 18:16:19 **response_kw: typing.Any, 18:16:19 ) -> BaseHTTPResponse: 18:16:19 """ 18:16:19 Get a connection from the pool and perform an HTTP request. This is the 18:16:19 lowest level call for making a request, so you'll need to specify all 18:16:19 the raw details. 18:16:19 18:16:19 .. note:: 18:16:19 18:16:19 More commonly, it's appropriate to use a convenience method 18:16:19 such as :meth:`request`. 18:16:19 18:16:19 .. note:: 18:16:19 18:16:19 `release_conn` will only behave as expected if 18:16:19 `preload_content=False` because we want to make 18:16:19 `preload_content=False` the default behaviour someday soon without 18:16:19 breaking backwards compatibility. 18:16:19 18:16:19 :param method: 18:16:19 HTTP request method (such as GET, POST, PUT, etc.) 18:16:19 18:16:19 :param url: 18:16:19 The URL to perform the request on. 
18:16:19 18:16:19 :param body: 18:16:19 Data to send in the request body, either :class:`str`, :class:`bytes`, 18:16:19 an iterable of :class:`str`/:class:`bytes`, or a file-like object. 18:16:19 18:16:19 :param headers: 18:16:19 Dictionary of custom headers to send, such as User-Agent, 18:16:19 If-None-Match, etc. If None, pool headers are used. If provided, 18:16:19 these headers completely replace any pool-specific headers. 18:16:19 18:16:19 :param retries: 18:16:19 Configure the number of retries to allow before raising a 18:16:19 :class:`~urllib3.exceptions.MaxRetryError` exception. 18:16:19 18:16:19 If ``None`` (default) will retry 3 times, see ``Retry.DEFAULT``. Pass a 18:16:19 :class:`~urllib3.util.retry.Retry` object for fine-grained control 18:16:19 over different types of retries. 18:16:19 Pass an integer number to retry connection errors that many times, 18:16:19 but no other types of errors. Pass zero to never retry. 18:16:19 18:16:19 If ``False``, then retries are disabled and any exception is raised 18:16:19 immediately. Also, instead of raising a MaxRetryError on redirects, 18:16:19 the redirect response will be returned. 18:16:19 18:16:19 :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. 18:16:19 18:16:19 :param redirect: 18:16:19 If True, automatically handle redirects (status codes 301, 302, 18:16:19 303, 307, 308). Each redirect counts as a retry. Disabling retries 18:16:19 will disable redirect, too. 18:16:19 18:16:19 :param assert_same_host: 18:16:19 If ``True``, will make sure that the host of the pool requests is 18:16:19 consistent else will raise HostChangedError. When ``False``, you can 18:16:19 use the pool on an HTTP proxy and request foreign hosts. 18:16:19 18:16:19 :param timeout: 18:16:19 If specified, overrides the default timeout for this one 18:16:19 request. It may be a float (in seconds) or an instance of 18:16:19 :class:`urllib3.util.Timeout`. 
18:16:19 18:16:19 :param pool_timeout: 18:16:19 If set and the pool is set to block=True, then this method will 18:16:19 block for ``pool_timeout`` seconds and raise EmptyPoolError if no 18:16:19 connection is available within the time period. 18:16:19 18:16:19 :param bool preload_content: 18:16:19 If True, the response's body will be preloaded into memory. 18:16:19 18:16:19 :param bool decode_content: 18:16:19 If True, will attempt to decode the body based on the 18:16:19 'content-encoding' header. 18:16:19 18:16:19 :param release_conn: 18:16:19 If False, then the urlopen call will not release the connection 18:16:19 back into the pool once a response is received (but will release if 18:16:19 you read the entire contents of the response such as when 18:16:19 `preload_content=True`). This is useful if you're not preloading 18:16:19 the response's content immediately. You will need to call 18:16:19 ``r.release_conn()`` on the response ``r`` to return the connection 18:16:19 back into the pool. If None, it takes the value of ``preload_content`` 18:16:19 which defaults to ``True``. 18:16:19 18:16:19 :param bool chunked: 18:16:19 If True, urllib3 will send the body using chunked transfer 18:16:19 encoding. Otherwise, urllib3 will send the body using the standard 18:16:19 content-length form. Defaults to False. 18:16:19 18:16:19 :param int body_pos: 18:16:19 Position to seek to in file-like body in the event of a retry or 18:16:19 redirect. Typically this won't need to be set because urllib3 will 18:16:19 auto-populate the value when needed. 
18:16:19     """
18:16:19     parsed_url = parse_url(url)
18:16:19     destination_scheme = parsed_url.scheme
18:16:19
18:16:19     if headers is None:
18:16:19         headers = self.headers
18:16:19
18:16:19     if not isinstance(retries, Retry):
18:16:19         retries = Retry.from_int(retries, redirect=redirect, default=self.retries)
18:16:19
18:16:19     if release_conn is None:
18:16:19         release_conn = preload_content
18:16:19
18:16:19     # Check host
18:16:19     if assert_same_host and not self.is_same_host(url):
18:16:19         raise HostChangedError(self, url, retries)
18:16:19
18:16:19     # Ensure that the URL we're connecting to is properly encoded
18:16:19     if url.startswith("/"):
18:16:19         url = to_str(_encode_target(url))
18:16:19     else:
18:16:19         url = to_str(parsed_url.url)
18:16:19
18:16:19     conn = None
18:16:19
18:16:19     # Track whether `conn` needs to be released before
18:16:19     # returning/raising/recursing. Update this variable if necessary, and
18:16:19     # leave `release_conn` constant throughout the function. That way, if
18:16:19     # the function recurses, the original value of `release_conn` will be
18:16:19     # passed down into the recursive call, and its value will be respected.
18:16:19     #
18:16:19     # See issue #651 [1] for details.
18:16:19     #
18:16:19     # [1]
18:16:19     release_this_conn = release_conn
18:16:19
18:16:19     http_tunnel_required = connection_requires_http_tunnel(
18:16:19         self.proxy, self.proxy_config, destination_scheme
18:16:19     )
18:16:19
18:16:19     # Merge the proxy headers. Only done when not using HTTP CONNECT. We
18:16:19     # have to copy the headers dict so we can safely change it without those
18:16:19     # changes being reflected in anyone else's copy.
18:16:19     if not http_tunnel_required:
18:16:19         headers = headers.copy()  # type: ignore[attr-defined]
18:16:19         headers.update(self.proxy_headers)  # type: ignore[union-attr]
18:16:19
18:16:19     # Must keep the exception bound to a separate variable or else Python 3
18:16:19     # complains about UnboundLocalError.
18:16:19     err = None
18:16:19
18:16:19     # Keep track of whether we cleanly exited the except block. This
18:16:19     # ensures we do proper cleanup in finally.
18:16:19     clean_exit = False
18:16:19
18:16:19     # Rewind body position, if needed. Record current position
18:16:19     # for future rewinds in the event of a redirect/retry.
18:16:19     body_pos = set_file_position(body, body_pos)
18:16:19
18:16:19     try:
18:16:19         # Request a connection from the queue.
18:16:19         timeout_obj = self._get_timeout(timeout)
18:16:19         conn = self._get_conn(timeout=pool_timeout)
18:16:19
18:16:19         conn.timeout = timeout_obj.connect_timeout  # type: ignore[assignment]
18:16:19
18:16:19         # Is this a closed/new connection that requires CONNECT tunnelling?
18:16:19         if self.proxy is not None and http_tunnel_required and conn.is_closed:
18:16:19             try:
18:16:19                 self._prepare_proxy(conn)
18:16:19             except (BaseSSLError, OSError, SocketTimeout) as e:
18:16:19                 self._raise_timeout(
18:16:19                     err=e, url=self.proxy.url, timeout_value=conn.timeout
18:16:19                 )
18:16:19                 raise
18:16:19
18:16:19         # If we're going to release the connection in ``finally:``, then
18:16:19         # the response doesn't need to know about the connection. Otherwise
18:16:19         # it will also try to release it and we'll have a double-release
18:16:19         # mess.
18:16:19         response_conn = conn if not release_conn else None
18:16:19
18:16:19         # Make the request on the HTTPConnection object
18:16:19 >       response = self._make_request(
18:16:19             conn,
18:16:19             method,
18:16:19             url,
18:16:19             timeout=timeout_obj,
18:16:19             body=body,
18:16:19             headers=headers,
18:16:19             chunked=chunked,
18:16:19             retries=retries,
18:16:19             response_conn=response_conn,
18:16:19             preload_content=preload_content,
18:16:19             decode_content=decode_content,
18:16:19             **response_kw,
18:16:19         )
18:16:19
18:16:19 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connectionpool.py:789:
18:16:19 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
18:16:19 ../.tox/tests121/lib/python3.11/site-packages/urll3/connectionpool.py:495: in _make_request
18:16:19     conn.request(
18:16:19 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:441: in request
18:16:19     self.endheaders()
18:16:19 /opt/pyenv/versions/3.11.7/lib/python3.11/http/client.py:1289: in endheaders
18:16:19     self._send_output(message_body, encode_chunked=encode_chunked)
18:16:19 /opt/pyenv/versions/3.11.7/lib/python3.11/http/client.py:1048: in _send_output
18:16:19     self.send(msg)
18:16:19 /opt/pyenv/versions/3.11.7/lib/python3.11/http/client.py:986: in send
18:16:19     self.connect()
18:16:19 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:279: in connect
18:16:19     self.sock = self._new_conn()
18:16:19 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
18:16:19
18:16:19 self =
18:16:19
18:16:19 def _new_conn(self) -> socket.socket:
18:16:19     """Establish a socket connection and set nodelay settings on it.
18:16:19
18:16:19     :return: New socket connection.
18:16:19     """
18:16:19     try:
18:16:19         sock = connection.create_connection(
18:16:19             (self._dns_host, self.port),
18:16:19             self.timeout,
18:16:19             source_address=self.source_address,
18:16:19             socket_options=self.socket_options,
18:16:19         )
18:16:19     except socket.gaierror as e:
18:16:19         raise NameResolutionError(self.host, self, e) from e
18:16:19     except SocketTimeout as e:
18:16:19         raise ConnectTimeoutError(
18:16:19             self,
18:16:19             f"Connection to {self.host} timed out. (connect timeout={self.timeout})",
18:16:19         ) from e
18:16:19
18:16:19     except OSError as e:
18:16:19 >       raise NewConnectionError(
18:16:19             self, f"Failed to establish a new connection: {e}"
18:16:19         ) from e
18:16:19 E       urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused
18:16:19
18:16:19 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:214: NewConnectionError
18:16:19
18:16:19 The above exception was the direct cause of the following exception:
18:16:19
18:16:19 self =
18:16:19 request = , stream = False
18:16:19 timeout = Timeout(connect=10, read=10, total=None), verify = True, cert = None
18:16:19 proxies = OrderedDict()
18:16:19
18:16:19 def send(
18:16:19     self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None
18:16:19 ):
18:16:19     """Sends PreparedRequest object. Returns Response object.
18:16:19
18:16:19     :param request: The :class:`PreparedRequest ` being sent.
18:16:19     :param stream: (optional) Whether to stream the request content.
18:16:19     :param timeout: (optional) How long to wait for the server to send
18:16:19         data before giving up, as a float, or a :ref:`(connect timeout,
18:16:19         read timeout) ` tuple.
18:16:19     :type timeout: float or tuple or urllib3 Timeout object
18:16:19     :param verify: (optional) Either a boolean, in which case it controls whether
18:16:19         we verify the server's TLS certificate, or a string, in which case it
18:16:19         must be a path to a CA bundle to use
18:16:19     :param cert: (optional) Any user-provided SSL certificate to be trusted.
18:16:19     :param proxies: (optional) The proxies dictionary to apply to the request.
18:16:19     :rtype: requests.Response
18:16:19     """
18:16:19
18:16:19     try:
18:16:19         conn = self.get_connection_with_tls_context(
18:16:19             request, verify, proxies=proxies, cert=cert
18:16:19         )
18:16:19     except LocationValueError as e:
18:16:19         raise InvalidURL(e, request=request)
18:16:19
18:16:19     self.cert_verify(conn, request.url, verify, cert)
18:16:19     url = self.request_url(request, proxies)
18:16:19     self.add_headers(
18:16:19         request,
18:16:19         stream=stream,
18:16:19         timeout=timeout,
18:16:19         verify=verify,
18:16:19         cert=cert,
18:16:19         proxies=proxies,
18:16:19     )
18:16:19
18:16:19     chunked = not (request.body is None or "Content-Length" in request.headers)
18:16:19
18:16:19     if isinstance(timeout, tuple):
18:16:19         try:
18:16:19             connect, read = timeout
18:16:19             timeout = TimeoutSauce(connect=connect, read=read)
18:16:19         except ValueError:
18:16:19             raise ValueError(
18:16:19                 f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, "
18:16:19                 f"or a single float to set both timeouts to the same value."
18:16:19             )
18:16:19     elif isinstance(timeout, TimeoutSauce):
18:16:19         pass
18:16:19     else:
18:16:19         timeout = TimeoutSauce(connect=timeout, read=timeout)
18:16:19
18:16:19     try:
18:16:19 >       resp = conn.urlopen(
18:16:19             method=request.method,
18:16:19             url=url,
18:16:19             body=request.body,
18:16:19             headers=request.headers,
18:16:19             redirect=False,
18:16:19             assert_same_host=False,
18:16:19             preload_content=False,
18:16:19             decode_content=False,
18:16:19             retries=self.max_retries,
18:16:19             timeout=timeout,
18:16:19             chunked=chunked,
18:16:19         )
18:16:19
18:16:19 ../.tox/tests121/lib/python3.11/site-packages/requests/adapters.py:667:
18:16:19 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
18:16:19 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connectionpool.py:843: in urlopen
18:16:19     retries = retries.increment(
18:16:19 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
18:16:19
18:16:19 self = Retry(total=0, connect=None, read=False, redirect=None, status=None)
18:16:19 method = 'GET'
18:16:19 url = '/rests/data/ietf-network:networks/network=openroadm-topology?content=config'
18:16:19 response = None
18:16:19 error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')
18:16:19 _pool =
18:16:19 _stacktrace =
18:16:19
18:16:19 def increment(
18:16:19     self,
18:16:19     method: str | None = None,
18:16:19     url: str | None = None,
18:16:19     response: BaseHTTPResponse | None = None,
18:16:19     error: Exception | None = None,
18:16:19     _pool: ConnectionPool | None = None,
18:16:19     _stacktrace: TracebackType | None = None,
18:16:19 ) -> Self:
18:16:19     """Return a new Retry object with incremented retry counters.
18:16:19
18:16:19     :param response: A response object, or None, if the server did not
18:16:19         return a response.
18:16:19     :type response: :class:`~urllib3.response.BaseHTTPResponse`
18:16:19     :param Exception error: An error encountered during the request, or
18:16:19         None if the response was received successfully.
18:16:19
18:16:19     :return: A new ``Retry`` object.
18:16:19     """
18:16:19     if self.total is False and error:
18:16:19         # Disabled, indicate to re-raise the error.
18:16:19         raise reraise(type(error), error, _stacktrace)
18:16:19
18:16:19     total = self.total
18:16:19     if total is not None:
18:16:19         total -= 1
18:16:19
18:16:19     connect = self.connect
18:16:19     read = self.read
18:16:19     redirect = self.redirect
18:16:19     status_count = self.status
18:16:19     other = self.other
18:16:19     cause = "unknown"
18:16:19     status = None
18:16:19     redirect_location = None
18:16:19
18:16:19     if error and self._is_connection_error(error):
18:16:19         # Connect retry?
18:16:19         if connect is False:
18:16:19             raise reraise(type(error), error, _stacktrace)
18:16:19         elif connect is not None:
18:16:19             connect -= 1
18:16:19
18:16:19     elif error and self._is_read_error(error):
18:16:19         # Read retry?
18:16:19         if read is False or method is None or not self._is_method_retryable(method):
18:16:19             raise reraise(type(error), error, _stacktrace)
18:16:19         elif read is not None:
18:16:19             read -= 1
18:16:19
18:16:19     elif error:
18:16:19         # Other retry?
18:16:19         if other is not None:
18:16:19             other -= 1
18:16:19
18:16:19     elif response and response.get_redirect_location():
18:16:19         # Redirect retry?
18:16:19         if redirect is not None:
18:16:19             redirect -= 1
18:16:19         cause = "too many redirects"
18:16:19         response_redirect_location = response.get_redirect_location()
18:16:19         if response_redirect_location:
18:16:19             redirect_location = response_redirect_location
18:16:19         status = response.status
18:16:19
18:16:19     else:
18:16:19         # Incrementing because of a server error like a 500 in
18:16:19         # status_forcelist and the given method is in the allowed_methods
18:16:19         cause = ResponseError.GENERIC_ERROR
18:16:19         if response and response.status:
18:16:19             if status_count is not None:
18:16:19                 status_count -= 1
18:16:19             cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status)
18:16:19             status = response.status
18:16:19
18:16:19     history = self.history + (
18:16:19         RequestHistory(method, url, error, status, redirect_location),
18:16:19     )
18:16:19
18:16:19     new_retry = self.new(
18:16:19         total=total,
18:16:19         connect=connect,
18:16:19         read=read,
18:16:19         redirect=redirect,
18:16:19         status=status_count,
18:16:19         other=other,
18:16:19         history=history,
18:16:19     )
18:16:19
18:16:19     if new_retry.is_exhausted():
18:16:19         reason = error or ResponseError(cause)
18:16:19 >       raise MaxRetryError(_pool, url, reason) from reason  # type: ignore[arg-type]
18:16:19 E       urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=8182): Max retries exceeded with url: /rests/data/ietf-network:networks/network=openroadm-topology?content=config (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused'))
18:16:19
18:16:19 ../.tox/tests121/lib/python3.11/site-packages/urllib3/util/retry.py:519: MaxRetryError
18:16:19
18:16:19 During handling of the above exception, another exception occurred:
18:16:19
18:16:19 self =
18:16:19
18:16:19 def test_32_getNodes_OpenRoadmTopology(self):
18:16:19     # pylint: disable=redundant-unittest-assert
18:16:19 >   response = test_utils.get_ietf_network_request('openroadm-topology', 'config')
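Editor's note: the failing call above is, in effect, a single RESTCONF GET against the controller that never came up. A minimal sketch of an equivalent standalone probe, assuming the endpoint, credentials, and timeouts captured in the traceback locals (localhost:8182, Basic admin/admin, connect/read timeouts of 10 s) — the helper names here are hypothetical, not part of test_utils:

```python
import requests
from requests.exceptions import ConnectionError as RequestsConnectionError

# Assumed base URL, mirroring the host/port shown in the traceback.
BASE = "http://localhost:8182"


def topology_url(network: str, content: str = "config") -> str:
    """Build the RFC 8040 RESTCONF URL the test requests."""
    return f"{BASE}/rests/data/ietf-network:networks/network={network}?content={content}"


def probe(network: str) -> str:
    """Return 'up: <status>' if the controller answers, 'down' if the TCP connect is refused."""
    try:
        resp = requests.get(
            topology_url(network),
            auth=("admin", "admin"),   # matches the Basic YWRtaW46YWRtaW4= header in the log
            timeout=(10, 10),          # (connect, read), as in Timeout(connect=10, read=10)
        )
        return f"up: {resp.status_code}"
    except RequestsConnectionError:
        # [Errno 111] Connection refused surfaces here, exactly as in test_32.
        return "down"
```

With the controller down, `probe("openroadm-topology")` returns `"down"` immediately, which is the condition this traceback records.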
18:16:19
18:16:19 transportpce_tests/1.2.1/test03_topology.py:639:
18:16:19 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
18:16:19 transportpce_tests/common/test_utils.py:495: in get_ietf_network_request
18:16:19     response = get_request(url[RESTCONF_VERSION].format(*format_args))
18:16:19 transportpce_tests/common/test_utils.py:116: in get_request
18:16:19     return requests.request(
18:16:19 ../.tox/tests121/lib/python3.11/site-packages/requests/api.py:59: in request
18:16:19     return session.request(method=method, url=url, **kwargs)
18:16:19 ../.tox/tests121/lib/python3.11/site-packages/requests/sessions.py:589: in request
18:16:19     resp = self.send(prep, **send_kwargs)
18:16:19 ../.tox/tests121/lib/python3.11/site-packages/requests/sessions.py:703: in send
18:16:19     r = adapter.send(request, **kwargs)
18:16:19 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
18:16:19
18:16:19 self =
18:16:19 request = , stream = False
18:16:19 timeout = Timeout(connect=10, read=10, total=None), verify = True, cert = None
18:16:19 proxies = OrderedDict()
18:16:19
18:16:19 def send(
18:16:19     self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None
18:16:19 ):
18:16:19     """Sends PreparedRequest object. Returns Response object.
18:16:19
18:16:19     :param request: The :class:`PreparedRequest ` being sent.
18:16:19     :param stream: (optional) Whether to stream the request content.
18:16:19     :param timeout: (optional) How long to wait for the server to send
18:16:19         data before giving up, as a float, or a :ref:`(connect timeout,
18:16:19         read timeout) ` tuple.
18:16:19     :type timeout: float or tuple or urllib3 Timeout object
18:16:19     :param verify: (optional) Either a boolean, in which case it controls whether
18:16:19         we verify the server's TLS certificate, or a string, in which case it
18:16:19         must be a path to a CA bundle to use
18:16:19     :param cert: (optional) Any user-provided SSL certificate to be trusted.
18:16:19     :param proxies: (optional) The proxies dictionary to apply to the request.
18:16:19     :rtype: requests.Response
18:16:19     """
18:16:19
18:16:19     try:
18:16:19         conn = self.get_connection_with_tls_context(
18:16:19             request, verify, proxies=proxies, cert=cert
18:16:19         )
18:16:19     except LocationValueError as e:
18:16:19         raise InvalidURL(e, request=request)
18:16:19
18:16:19     self.cert_verify(conn, request.url, verify, cert)
18:16:19     url = self.request_url(request, proxies)
18:16:19     self.add_headers(
18:16:19         request,
18:16:19         stream=stream,
18:16:19         timeout=timeout,
18:16:19         verify=verify,
18:16:19         cert=cert,
18:16:19         proxies=proxies,
18:16:19     )
18:16:19
18:16:19     chunked = not (request.body is None or "Content-Length" in request.headers)
18:16:19
18:16:19     if isinstance(timeout, tuple):
18:16:19         try:
18:16:19             connect, read = timeout
18:16:19             timeout = TimeoutSauce(connect=connect, read=read)
18:16:19         except ValueError:
18:16:19             raise ValueError(
18:16:19                 f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, "
18:16:19                 f"or a single float to set both timeouts to the same value."
18:16:19             )
18:16:19     elif isinstance(timeout, TimeoutSauce):
18:16:19         pass
18:16:19     else:
18:16:19         timeout = TimeoutSauce(connect=timeout, read=timeout)
18:16:19
18:16:19     try:
18:16:19         resp = conn.urlopen(
18:16:19             method=request.method,
18:16:19             url=url,
18:16:19             body=request.body,
18:16:19             headers=request.headers,
18:16:19             redirect=False,
18:16:19             assert_same_host=False,
18:16:19             preload_content=False,
18:16:19             decode_content=False,
18:16:19             retries=self.max_retries,
18:16:19             timeout=timeout,
18:16:19             chunked=chunked,
18:16:19         )
18:16:19
18:16:19     except (ProtocolError, OSError) as err:
18:16:19         raise ConnectionError(err, request=request)
18:16:19
18:16:19     except MaxRetryError as e:
18:16:19         if isinstance(e.reason, ConnectTimeoutError):
18:16:19             # TODO: Remove this in 3.0.0: see #2811
18:16:19             if not isinstance(e.reason, NewConnectionError):
18:16:19                 raise ConnectTimeout(e, request=request)
18:16:19
18:16:19         if isinstance(e.reason, ResponseError):
18:16:19             raise RetryError(e, request=request)
18:16:19
18:16:19         if isinstance(e.reason, _ProxyError):
18:16:19             raise ProxyError(e, request=request)
18:16:19
18:16:19         if isinstance(e.reason, _SSLError):
18:16:19             # This branch is for urllib3 v1.22 and later.
18:16:19             raise SSLError(e, request=request)
18:16:19
18:16:19 >       raise ConnectionError(e, request=request)
18:16:19 E       requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=8182): Max retries exceeded with url: /rests/data/ietf-network:networks/network=openroadm-topology?content=config (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused'))
18:16:19
18:16:19 ../.tox/tests121/lib/python3.11/site-packages/requests/adapters.py:700: ConnectionError
18:16:19 ___________ TransportPCETopologyTesting.test_33_getOpenRoadmNetwork ____________
18:16:19
18:16:19 self =
18:16:19
18:16:19 def _new_conn(self) -> socket.socket:
18:16:19     """Establish a socket connection and set nodelay settings on it.
18:16:19
18:16:19     :return: New socket connection.
18:16:19     """
18:16:19     try:
18:16:19 >       sock = connection.create_connection(
18:16:19             (self._dns_host, self.port),
18:16:19             self.timeout,
18:16:19             source_address=self.source_address,
18:16:19             socket_options=self.socket_options,
18:16:19         )
18:16:19
18:16:19 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:199:
18:16:19 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
18:16:19 ../.tox/tests121/lib/python3.11/site-packages/urllib3/util/connection.py:85: in create_connection
18:16:19     raise err
18:16:19 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
18:16:19
18:16:19 address = ('localhost', 8182), timeout = 10, source_address = None
18:16:19 socket_options = [(6, 1, 1)]
18:16:19
18:16:19 def create_connection(
18:16:19     address: tuple[str, int],
18:16:19     timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT,
18:16:19     source_address: tuple[str, int] | None = None,
18:16:19     socket_options: _TYPE_SOCKET_OPTIONS | None = None,
18:16:19 ) -> socket.socket:
18:16:19     """Connect to *address* and return the socket object.
18:16:19
18:16:19     Convenience function. Connect to *address* (a 2-tuple ``(host,
18:16:19     port)``) and return the socket object. Passing the optional
18:16:19     *timeout* parameter will set the timeout on the socket instance
18:16:19     before attempting to connect. If no *timeout* is supplied, the
18:16:19     global default timeout setting returned by :func:`socket.getdefaulttimeout`
18:16:19     is used. If *source_address* is set it must be a tuple of (host, port)
18:16:19     for the socket to bind as a source address before making the connection.
18:16:19     An host of '' or port 0 tells the OS to use the default.
18:16:19     """
18:16:19
18:16:19     host, port = address
18:16:19     if host.startswith("["):
18:16:19         host = host.strip("[]")
18:16:19     err = None
18:16:19
18:16:19     # Using the value from allowed_gai_family() in the context of getaddrinfo lets
18:16:19     # us select whether to work with IPv4 DNS records, IPv6 records, or both.
18:16:19     # The original create_connection function always returns all records.
18:16:19     family = allowed_gai_family()
18:16:19
18:16:19     try:
18:16:19         host.encode("idna")
18:16:19     except UnicodeError:
18:16:19         raise LocationParseError(f"'{host}', label empty or too long") from None
18:16:19
18:16:19     for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM):
18:16:19         af, socktype, proto, canonname, sa = res
18:16:19         sock = None
18:16:19         try:
18:16:19             sock = socket.socket(af, socktype, proto)
18:16:19
18:16:19             # If provided, set socket level options before connecting.
18:16:19             _set_socket_options(sock, socket_options)
18:16:19
18:16:19             if timeout is not _DEFAULT_TIMEOUT:
18:16:19                 sock.settimeout(timeout)
18:16:19             if source_address:
18:16:19                 sock.bind(source_address)
18:16:19 >           sock.connect(sa)
18:16:19 E           ConnectionRefusedError: [Errno 111] Connection refused
18:16:19
18:16:19 ../.tox/tests121/lib/python3.11/site-packages/urllib3/util/connection.py:73: ConnectionRefusedError
18:16:19
18:16:19 The above exception was the direct cause of the following exception:
18:16:19
18:16:19 self =
18:16:19 method = 'GET'
18:16:19 url = '/rests/data/ietf-network:networks/network=openroadm-network?content=config'
18:16:19 body = None
18:16:19 headers = {'User-Agent': 'python-requests/2.32.3', 'Accept-Encoding': 'gzip, deflate', 'Accept': 'application/json', 'Connection': 'keep-alive', 'Content-Type': 'application/json', 'Authorization': 'Basic YWRtaW46YWRtaW4='}
18:16:19 retries = Retry(total=0, connect=None, read=False, redirect=None, status=None)
18:16:19 redirect = False, assert_same_host = False
18:16:19 timeout = Timeout(connect=10, read=10, total=None),
pool_timeout = None
18:16:19 release_conn = False, chunked = False, body_pos = None, preload_content = False
18:16:19 decode_content = False, response_kw = {}
18:16:19 parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/rests/data/ietf-network:networks/network=openroadm-network', query='content=config', fragment=None)
18:16:19 destination_scheme = None, conn = None, release_this_conn = True
18:16:19 http_tunnel_required = False, err = None, clean_exit = False
18:16:19
18:16:19 def urlopen(  # type: ignore[override]
18:16:19     self,
18:16:19     method: str,
18:16:19     url: str,
18:16:19     body: _TYPE_BODY | None = None,
18:16:19     headers: typing.Mapping[str, str] | None = None,
18:16:19     retries: Retry | bool | int | None = None,
18:16:19     redirect: bool = True,
18:16:19     assert_same_host: bool = True,
18:16:19     timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT,
18:16:19     pool_timeout: int | None = None,
18:16:19     release_conn: bool | None = None,
18:16:19     chunked: bool = False,
18:16:19     body_pos: _TYPE_BODY_POSITION | None = None,
18:16:19     preload_content: bool = True,
18:16:19     decode_content: bool = True,
18:16:19     **response_kw: typing.Any,
18:16:19 ) -> BaseHTTPResponse:
18:16:19     """
18:16:19     Get a connection from the pool and perform an HTTP request. This is the
18:16:19     lowest level call for making a request, so you'll need to specify all
18:16:19     the raw details.
18:16:19
18:16:19     .. note::
18:16:19
18:16:19         More commonly, it's appropriate to use a convenience method
18:16:19         such as :meth:`request`.
18:16:19
18:16:19     .. note::
18:16:19
18:16:19         `release_conn` will only behave as expected if
18:16:19         `preload_content=False` because we want to make
18:16:19         `preload_content=False` the default behaviour someday soon without
18:16:19         breaking backwards compatibility.
18:16:19
18:16:19     :param method:
18:16:19         HTTP request method (such as GET, POST, PUT, etc.)
18:16:19
18:16:19     :param url:
18:16:19         The URL to perform the request on.
18:16:19
18:16:19     :param body:
18:16:19         Data to send in the request body, either :class:`str`, :class:`bytes`,
18:16:19         an iterable of :class:`str`/:class:`bytes`, or a file-like object.
18:16:19
18:16:19     :param headers:
18:16:19         Dictionary of custom headers to send, such as User-Agent,
18:16:19         If-None-Match, etc. If None, pool headers are used. If provided,
18:16:19         these headers completely replace any pool-specific headers.
18:16:19
18:16:19     :param retries:
18:16:19         Configure the number of retries to allow before raising a
18:16:19         :class:`~urllib3.exceptions.MaxRetryError` exception.
18:16:19
18:16:19         If ``None`` (default) will retry 3 times, see ``Retry.DEFAULT``. Pass a
18:16:19         :class:`~urllib3.util.retry.Retry` object for fine-grained control
18:16:19         over different types of retries.
18:16:19         Pass an integer number to retry connection errors that many times,
18:16:19         but no other types of errors. Pass zero to never retry.
18:16:19
18:16:19         If ``False``, then retries are disabled and any exception is raised
18:16:19         immediately. Also, instead of raising a MaxRetryError on redirects,
18:16:19         the redirect response will be returned.
18:16:19
18:16:19     :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int.
18:16:19
18:16:19     :param redirect:
18:16:19         If True, automatically handle redirects (status codes 301, 302,
18:16:19         303, 307, 308). Each redirect counts as a retry. Disabling retries
18:16:19         will disable redirect, too.
18:16:19
18:16:19     :param assert_same_host:
18:16:19         If ``True``, will make sure that the host of the pool requests is
18:16:19         consistent else will raise HostChangedError. When ``False``, you can
18:16:19         use the pool on an HTTP proxy and request foreign hosts.
18:16:19
18:16:19     :param timeout:
18:16:19         If specified, overrides the default timeout for this one
18:16:19         request. It may be a float (in seconds) or an instance of
18:16:19         :class:`urllib3.util.Timeout`.
18:16:19
18:16:19     :param pool_timeout:
18:16:19         If set and the pool is set to block=True, then this method will
18:16:19         block for ``pool_timeout`` seconds and raise EmptyPoolError if no
18:16:19         connection is available within the time period.
18:16:19
18:16:19     :param bool preload_content:
18:16:19         If True, the response's body will be preloaded into memory.
18:16:19
18:16:19     :param bool decode_content:
18:16:19         If True, will attempt to decode the body based on the
18:16:19         'content-encoding' header.
18:16:19
18:16:19     :param release_conn:
18:16:19         If False, then the urlopen call will not release the connection
18:16:19         back into the pool once a response is received (but will release if
18:16:19         you read the entire contents of the response such as when
18:16:19         `preload_content=True`). This is useful if you're not preloading
18:16:19         the response's content immediately. You will need to call
18:16:19         ``r.release_conn()`` on the response ``r`` to return the connection
18:16:19         back into the pool. If None, it takes the value of ``preload_content``
18:16:19         which defaults to ``True``.
18:16:19
18:16:19     :param bool chunked:
18:16:19         If True, urllib3 will send the body using chunked transfer
18:16:19         encoding. Otherwise, urllib3 will send the body using the standard
18:16:19         content-length form. Defaults to False.
18:16:19
18:16:19     :param int body_pos:
18:16:19         Position to seek to in file-like body in the event of a retry or
18:16:19         redirect. Typically this won't need to be set because urllib3 will
18:16:19         auto-populate the value when needed.
18:16:19     """
18:16:19     parsed_url = parse_url(url)
18:16:19     destination_scheme = parsed_url.scheme
18:16:19
18:16:19     if headers is None:
18:16:19         headers = self.headers
18:16:19
18:16:19     if not isinstance(retries, Retry):
18:16:19         retries = Retry.from_int(retries, redirect=redirect, default=self.retries)
18:16:19
18:16:19     if release_conn is None:
18:16:19         release_conn = preload_content
18:16:19
18:16:19     # Check host
18:16:19     if assert_same_host and not self.is_same_host(url):
18:16:19         raise HostChangedError(self, url, retries)
18:16:19
18:16:19     # Ensure that the URL we're connecting to is properly encoded
18:16:19     if url.startswith("/"):
18:16:19         url = to_str(_encode_target(url))
18:16:19     else:
18:16:19         url = to_str(parsed_url.url)
18:16:19
18:16:19     conn = None
18:16:19
18:16:19     # Track whether `conn` needs to be released before
18:16:19     # returning/raising/recursing. Update this variable if necessary, and
18:16:19     # leave `release_conn` constant throughout the function. That way, if
18:16:19     # the function recurses, the original value of `release_conn` will be
18:16:19     # passed down into the recursive call, and its value will be respected.
18:16:19     #
18:16:19     # See issue #651 [1] for details.
18:16:19     #
18:16:19     # [1]
18:16:19     release_this_conn = release_conn
18:16:19
18:16:19     http_tunnel_required = connection_requires_http_tunnel(
18:16:19         self.proxy, self.proxy_config, destination_scheme
18:16:19     )
18:16:19
18:16:19     # Merge the proxy headers. Only done when not using HTTP CONNECT. We
18:16:19     # have to copy the headers dict so we can safely change it without those
18:16:19     # changes being reflected in anyone else's copy.
18:16:19     if not http_tunnel_required:
18:16:19         headers = headers.copy()  # type: ignore[attr-defined]
18:16:19         headers.update(self.proxy_headers)  # type: ignore[union-attr]
18:16:19
18:16:19     # Must keep the exception bound to a separate variable or else Python 3
18:16:19     # complains about UnboundLocalError.
18:16:19     err = None
18:16:19
18:16:19     # Keep track of whether we cleanly exited the except block. This
18:16:19     # ensures we do proper cleanup in finally.
18:16:19     clean_exit = False
18:16:19
18:16:19     # Rewind body position, if needed. Record current position
18:16:19     # for future rewinds in the event of a redirect/retry.
18:16:19     body_pos = set_file_position(body, body_pos)
18:16:19
18:16:19     try:
18:16:19         # Request a connection from the queue.
18:16:19         timeout_obj = self._get_timeout(timeout)
18:16:19         conn = self._get_conn(timeout=pool_timeout)
18:16:19
18:16:19         conn.timeout = timeout_obj.connect_timeout  # type: ignore[assignment]
18:16:19
18:16:19         # Is this a closed/new connection that requires CONNECT tunnelling?
18:16:19         if self.proxy is not None and http_tunnel_required and conn.is_closed:
18:16:19             try:
18:16:19                 self._prepare_proxy(conn)
18:16:19             except (BaseSSLError, OSError, SocketTimeout) as e:
18:16:19                 self._raise_timeout(
18:16:19                     err=e, url=self.proxy.url, timeout_value=conn.timeout
18:16:19                 )
18:16:19                 raise
18:16:19
18:16:19         # If we're going to release the connection in ``finally:``, then
18:16:19         # the response doesn't need to know about the connection. Otherwise
18:16:19         # it will also try to release it and we'll have a double-release
18:16:19         # mess.
18:16:19         response_conn = conn if not release_conn else None
18:16:19
18:16:19         # Make the request on the HTTPConnection object
18:16:19 >       response = self._make_request(
18:16:19             conn,
18:16:19             method,
18:16:19             url,
18:16:19             timeout=timeout_obj,
18:16:19             body=body,
18:16:19             headers=headers,
18:16:19             chunked=chunked,
18:16:19             retries=retries,
18:16:19             response_conn=response_conn,
18:16:19             preload_content=preload_content,
18:16:19             decode_content=decode_content,
18:16:19             **response_kw,
18:16:19         )
18:16:19
18:16:19 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connectionpool.py:789:
18:16:19 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
18:16:19 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connectionpool.py:495: in _make_request
18:16:19     conn.request(
18:16:19 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:441: in request
18:16:19     self.endheaders()
18:16:19 /opt/pyenv/versions/3.11.7/lib/python3.11/http/client.py:1289: in endheaders
18:16:19     self._send_output(message_body, encode_chunked=encode_chunked)
18:16:19 /opt/pyenv/versions/3.11.7/lib/python3.11/http/client.py:1048: in _send_output
18:16:19     self.send(msg)
18:16:19 /opt/pyenv/versions/3.11.7/lib/python3.11/http/client.py:986: in send
18:16:19     self.connect()
18:16:19 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:279: in connect
18:16:19     self.sock = self._new_conn()
18:16:19 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
18:16:19
18:16:19 self =
18:16:19
18:16:19 def _new_conn(self) -> socket.socket:
18:16:19     """Establish a socket connection and set nodelay settings on it.
18:16:19
18:16:19     :return: New socket connection.
18:16:19         """
18:16:19         try:
18:16:19             sock = connection.create_connection(
18:16:19                 (self._dns_host, self.port),
18:16:19                 self.timeout,
18:16:19                 source_address=self.source_address,
18:16:19                 socket_options=self.socket_options,
18:16:19             )
18:16:19         except socket.gaierror as e:
18:16:19             raise NameResolutionError(self.host, self, e) from e
18:16:19         except SocketTimeout as e:
18:16:19             raise ConnectTimeoutError(
18:16:19                 self,
18:16:19                 f"Connection to {self.host} timed out. (connect timeout={self.timeout})",
18:16:19             ) from e
18:16:19
18:16:19         except OSError as e:
18:16:19 >           raise NewConnectionError(
18:16:19                 self, f"Failed to establish a new connection: {e}"
18:16:19             ) from e
18:16:19 E           urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused
18:16:19
18:16:19 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:214: NewConnectionError
18:16:19
18:16:19 The above exception was the direct cause of the following exception:
18:16:19
18:16:19 self =
18:16:19 request = , stream = False
18:16:19 timeout = Timeout(connect=10, read=10, total=None), verify = True, cert = None
18:16:19 proxies = OrderedDict()
18:16:19
18:16:19     def send(
18:16:19         self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None
18:16:19     ):
18:16:19         """Sends PreparedRequest object. Returns Response object.
18:16:19
18:16:19         :param request: The :class:`PreparedRequest ` being sent.
18:16:19         :param stream: (optional) Whether to stream the request content.
18:16:19         :param timeout: (optional) How long to wait for the server to send
18:16:19             data before giving up, as a float, or a :ref:`(connect timeout,
18:16:19             read timeout) ` tuple.
18:16:19         :type timeout: float or tuple or urllib3 Timeout object
18:16:19         :param verify: (optional) Either a boolean, in which case it controls whether
18:16:19             we verify the server's TLS certificate, or a string, in which case it
18:16:19             must be a path to a CA bundle to use
18:16:19         :param cert: (optional) Any user-provided SSL certificate to be trusted.
18:16:19         :param proxies: (optional) The proxies dictionary to apply to the request.
18:16:19         :rtype: requests.Response
18:16:19         """
18:16:19
18:16:19         try:
18:16:19             conn = self.get_connection_with_tls_context(
18:16:19                 request, verify, proxies=proxies, cert=cert
18:16:19             )
18:16:19         except LocationValueError as e:
18:16:19             raise InvalidURL(e, request=request)
18:16:19
18:16:19         self.cert_verify(conn, request.url, verify, cert)
18:16:19         url = self.request_url(request, proxies)
18:16:19         self.add_headers(
18:16:19             request,
18:16:19             stream=stream,
18:16:19             timeout=timeout,
18:16:19             verify=verify,
18:16:19             cert=cert,
18:16:19             proxies=proxies,
18:16:19         )
18:16:19
18:16:19         chunked = not (request.body is None or "Content-Length" in request.headers)
18:16:19
18:16:19         if isinstance(timeout, tuple):
18:16:19             try:
18:16:19                 connect, read = timeout
18:16:19                 timeout = TimeoutSauce(connect=connect, read=read)
18:16:19             except ValueError:
18:16:19                 raise ValueError(
18:16:19                     f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, "
18:16:19                     f"or a single float to set both timeouts to the same value."
18:16:19                 )
18:16:19         elif isinstance(timeout, TimeoutSauce):
18:16:19             pass
18:16:19         else:
18:16:19             timeout = TimeoutSauce(connect=timeout, read=timeout)
18:16:19
18:16:19         try:
18:16:19 >           resp = conn.urlopen(
18:16:19                 method=request.method,
18:16:19                 url=url,
18:16:19                 body=request.body,
18:16:19                 headers=request.headers,
18:16:19                 redirect=False,
18:16:19                 assert_same_host=False,
18:16:19                 preload_content=False,
18:16:19                 decode_content=False,
18:16:19                 retries=self.max_retries,
18:16:19                 timeout=timeout,
18:16:19                 chunked=chunked,
18:16:19             )
18:16:19
18:16:19 ../.tox/tests121/lib/python3.11/site-packages/requests/adapters.py:667:
18:16:19 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
18:16:19 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connectionpool.py:843: in urlopen
18:16:19     retries = retries.increment(
18:16:19 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
18:16:19
18:16:19 self = Retry(total=0, connect=None, read=False, redirect=None, status=None)
18:16:19 method = 'GET'
18:16:19 url = '/rests/data/ietf-network:networks/network=openroadm-network?content=config'
18:16:19 response = None
18:16:19 error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')
18:16:19 _pool =
18:16:19 _stacktrace =
18:16:19
18:16:19     def increment(
18:16:19         self,
18:16:19         method: str | None = None,
18:16:19         url: str | None = None,
18:16:19         response: BaseHTTPResponse | None = None,
18:16:19         error: Exception | None = None,
18:16:19         _pool: ConnectionPool | None = None,
18:16:19         _stacktrace: TracebackType | None = None,
18:16:19     ) -> Self:
18:16:19         """Return a new Retry object with incremented retry counters.
18:16:19
18:16:19         :param response: A response object, or None, if the server did not
18:16:19             return a response.
18:16:19         :type response: :class:`~urllib3.response.BaseHTTPResponse`
18:16:19         :param Exception error: An error encountered during the request, or
18:16:19             None if the response was received successfully.
18:16:19
18:16:19         :return: A new ``Retry`` object.
18:16:19         """
18:16:19         if self.total is False and error:
18:16:19             # Disabled, indicate to re-raise the error.
18:16:19             raise reraise(type(error), error, _stacktrace)
18:16:19
18:16:19         total = self.total
18:16:19         if total is not None:
18:16:19             total -= 1
18:16:19
18:16:19         connect = self.connect
18:16:19         read = self.read
18:16:19         redirect = self.redirect
18:16:19         status_count = self.status
18:16:19         other = self.other
18:16:19         cause = "unknown"
18:16:19         status = None
18:16:19         redirect_location = None
18:16:19
18:16:19         if error and self._is_connection_error(error):
18:16:19             # Connect retry?
18:16:19             if connect is False:
18:16:19                 raise reraise(type(error), error, _stacktrace)
18:16:19             elif connect is not None:
18:16:19                 connect -= 1
18:16:19
18:16:19         elif error and self._is_read_error(error):
18:16:19             # Read retry?
18:16:19             if read is False or method is None or not self._is_method_retryable(method):
18:16:19                 raise reraise(type(error), error, _stacktrace)
18:16:19             elif read is not None:
18:16:19                 read -= 1
18:16:19
18:16:19         elif error:
18:16:19             # Other retry?
18:16:19             if other is not None:
18:16:19                 other -= 1
18:16:19
18:16:19         elif response and response.get_redirect_location():
18:16:19             # Redirect retry?
18:16:19             if redirect is not None:
18:16:19                 redirect -= 1
18:16:19             cause = "too many redirects"
18:16:19             response_redirect_location = response.get_redirect_location()
18:16:19             if response_redirect_location:
18:16:19                 redirect_location = response_redirect_location
18:16:19             status = response.status
18:16:19
18:16:19         else:
18:16:19             # Incrementing because of a server error like a 500 in
18:16:19             # status_forcelist and the given method is in the allowed_methods
18:16:19             cause = ResponseError.GENERIC_ERROR
18:16:19             if response and response.status:
18:16:19                 if status_count is not None:
18:16:19                     status_count -= 1
18:16:19                 cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status)
18:16:19                 status = response.status
18:16:19
18:16:19         history = self.history + (
18:16:19             RequestHistory(method, url, error, status, redirect_location),
18:16:19         )
18:16:19
18:16:19         new_retry = self.new(
18:16:19             total=total,
18:16:19             connect=connect,
18:16:19             read=read,
18:16:19             redirect=redirect,
18:16:19             status=status_count,
18:16:19             other=other,
18:16:19             history=history,
18:16:19         )
18:16:19
18:16:19         if new_retry.is_exhausted():
18:16:19             reason = error or ResponseError(cause)
18:16:19 >           raise MaxRetryError(_pool, url, reason) from reason  # type: ignore[arg-type]
18:16:19 E           urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=8182): Max retries exceeded with url: /rests/data/ietf-network:networks/network=openroadm-network?content=config (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused'))
18:16:19
18:16:19 ../.tox/tests121/lib/python3.11/site-packages/urllib3/util/retry.py:519: MaxRetryError
18:16:19
18:16:19 During handling of the above exception, another exception occurred:
18:16:19
18:16:19 self =
18:16:19
18:16:19     def test_33_getOpenRoadmNetwork(self):
18:16:19 >       response = test_utils.get_ietf_network_request('openroadm-network', 'config')
18:16:19
18:16:19 transportpce_tests/1.2.1/test03_topology.py:682:
18:16:19 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
18:16:19 transportpce_tests/common/test_utils.py:495: in get_ietf_network_request
18:16:19     response = get_request(url[RESTCONF_VERSION].format(*format_args))
18:16:19 transportpce_tests/common/test_utils.py:116: in get_request
18:16:19     return requests.request(
18:16:19 ../.tox/tests121/lib/python3.11/site-packages/requests/api.py:59: in request
18:16:19     return session.request(method=method, url=url, **kwargs)
18:16:19 ../.tox/tests121/lib/python3.11/site-packages/requests/sessions.py:589: in request
18:16:19     resp = self.send(prep, **send_kwargs)
18:16:19 ../.tox/tests121/lib/python3.11/site-packages/requests/sessions.py:703: in send
18:16:19     r = adapter.send(request, **kwargs)
18:16:19 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
18:16:19
18:16:19 self =
18:16:19 request = , stream = False
18:16:19 timeout = Timeout(connect=10, read=10, total=None), verify = True, cert = None
18:16:19 proxies = OrderedDict()
18:16:19
18:16:19     def send(
18:16:19         self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None
18:16:19     ):
18:16:19         """Sends PreparedRequest object. Returns Response object.
18:16:19
18:16:19         :param request: The :class:`PreparedRequest ` being sent.
18:16:19         :param stream: (optional) Whether to stream the request content.
18:16:19         :param timeout: (optional) How long to wait for the server to send
18:16:19             data before giving up, as a float, or a :ref:`(connect timeout,
18:16:19             read timeout) ` tuple.
18:16:19         :type timeout: float or tuple or urllib3 Timeout object
18:16:19         :param verify: (optional) Either a boolean, in which case it controls whether
18:16:19             we verify the server's TLS certificate, or a string, in which case it
18:16:19             must be a path to a CA bundle to use
18:16:19         :param cert: (optional) Any user-provided SSL certificate to be trusted.
18:16:19         :param proxies: (optional) The proxies dictionary to apply to the request.
18:16:19         :rtype: requests.Response
18:16:19         """
18:16:19
18:16:19         try:
18:16:19             conn = self.get_connection_with_tls_context(
18:16:19                 request, verify, proxies=proxies, cert=cert
18:16:19             )
18:16:19         except LocationValueError as e:
18:16:19             raise InvalidURL(e, request=request)
18:16:19
18:16:19         self.cert_verify(conn, request.url, verify, cert)
18:16:19         url = self.request_url(request, proxies)
18:16:19         self.add_headers(
18:16:19             request,
18:16:19             stream=stream,
18:16:19             timeout=timeout,
18:16:19             verify=verify,
18:16:19             cert=cert,
18:16:19             proxies=proxies,
18:16:19         )
18:16:19
18:16:19         chunked = not (request.body is None or "Content-Length" in request.headers)
18:16:19
18:16:19         if isinstance(timeout, tuple):
18:16:19             try:
18:16:19                 connect, read = timeout
18:16:19                 timeout = TimeoutSauce(connect=connect, read=read)
18:16:19             except ValueError:
18:16:19                 raise ValueError(
18:16:19                     f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, "
18:16:19                     f"or a single float to set both timeouts to the same value."
18:16:19                 )
18:16:19         elif isinstance(timeout, TimeoutSauce):
18:16:19             pass
18:16:19         else:
18:16:19             timeout = TimeoutSauce(connect=timeout, read=timeout)
18:16:19
18:16:19         try:
18:16:19             resp = conn.urlopen(
18:16:19                 method=request.method,
18:16:19                 url=url,
18:16:19                 body=request.body,
18:16:19                 headers=request.headers,
18:16:19                 redirect=False,
18:16:19                 assert_same_host=False,
18:16:19                 preload_content=False,
18:16:19                 decode_content=False,
18:16:19                 retries=self.max_retries,
18:16:19                 timeout=timeout,
18:16:19                 chunked=chunked,
18:16:19             )
18:16:19
18:16:19         except (ProtocolError, OSError) as err:
18:16:19             raise ConnectionError(err, request=request)
18:16:19
18:16:19         except MaxRetryError as e:
18:16:19             if isinstance(e.reason, ConnectTimeoutError):
18:16:19                 # TODO: Remove this in 3.0.0: see #2811
18:16:19                 if not isinstance(e.reason, NewConnectionError):
18:16:19                     raise ConnectTimeout(e, request=request)
18:16:19
18:16:19             if isinstance(e.reason, ResponseError):
18:16:19                 raise RetryError(e, request=request)
18:16:19
18:16:19             if isinstance(e.reason, _ProxyError):
18:16:19                 raise ProxyError(e, request=request)
18:16:19
18:16:19             if isinstance(e.reason, _SSLError):
18:16:19                 # This branch is for urllib3 v1.22 and later.
18:16:19                 raise SSLError(e, request=request)
18:16:19
18:16:19 >           raise ConnectionError(e, request=request)
18:16:19 E           requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=8182): Max retries exceeded with url: /rests/data/ietf-network:networks/network=openroadm-network?content=config (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused'))
18:16:19
18:16:19 ../.tox/tests121/lib/python3.11/site-packages/requests/adapters.py:700: ConnectionError
18:16:19 ______________ TransportPCETopologyTesting.test_34_getClliNetwork ______________
18:16:19
18:16:19 self =
18:16:19
18:16:19     def _new_conn(self) -> socket.socket:
18:16:19         """Establish a socket connection and set nodelay settings on it.
18:16:19
18:16:19         :return: New socket connection.
18:16:19         """
18:16:19         try:
18:16:19 >           sock = connection.create_connection(
18:16:19                 (self._dns_host, self.port),
18:16:19                 self.timeout,
18:16:19                 source_address=self.source_address,
18:16:19                 socket_options=self.socket_options,
18:16:19             )
18:16:19
18:16:19 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:199:
18:16:19 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
18:16:19 ../.tox/tests121/lib/python3.11/site-packages/urllib3/util/connection.py:85: in create_connection
18:16:19     raise err
18:16:19 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
18:16:19
18:16:19 address = ('localhost', 8182), timeout = 10, source_address = None
18:16:19 socket_options = [(6, 1, 1)]
18:16:19
18:16:19     def create_connection(
18:16:19         address: tuple[str, int],
18:16:19         timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT,
18:16:19         source_address: tuple[str, int] | None = None,
18:16:19         socket_options: _TYPE_SOCKET_OPTIONS | None = None,
18:16:19     ) -> socket.socket:
18:16:19         """Connect to *address* and return the socket object.
18:16:19
18:16:19         Convenience function. Connect to *address* (a 2-tuple ``(host,
18:16:19         port)``) and return the socket object. Passing the optional
18:16:19         *timeout* parameter will set the timeout on the socket instance
18:16:19         before attempting to connect. If no *timeout* is supplied, the
18:16:19         global default timeout setting returned by :func:`socket.getdefaulttimeout`
18:16:19         is used. If *source_address* is set it must be a tuple of (host, port)
18:16:19         for the socket to bind as a source address before making the connection.
18:16:19         An host of '' or port 0 tells the OS to use the default.
18:16:19         """
18:16:19
18:16:19         host, port = address
18:16:19         if host.startswith("["):
18:16:19             host = host.strip("[]")
18:16:19         err = None
18:16:19
18:16:19         # Using the value from allowed_gai_family() in the context of getaddrinfo lets
18:16:19         # us select whether to work with IPv4 DNS records, IPv6 records, or both.
18:16:19         # The original create_connection function always returns all records.
18:16:19         family = allowed_gai_family()
18:16:19
18:16:19         try:
18:16:19             host.encode("idna")
18:16:19         except UnicodeError:
18:16:19             raise LocationParseError(f"'{host}', label empty or too long") from None
18:16:19
18:16:19         for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM):
18:16:19             af, socktype, proto, canonname, sa = res
18:16:19             sock = None
18:16:19             try:
18:16:19                 sock = socket.socket(af, socktype, proto)
18:16:19
18:16:19                 # If provided, set socket level options before connecting.
18:16:19                 _set_socket_options(sock, socket_options)
18:16:19
18:16:19                 if timeout is not _DEFAULT_TIMEOUT:
18:16:19                     sock.settimeout(timeout)
18:16:19                 if source_address:
18:16:19                     sock.bind(source_address)
18:16:19 >               sock.connect(sa)
18:16:19 E               ConnectionRefusedError: [Errno 111] Connection refused
18:16:19
18:16:19 ../.tox/tests121/lib/python3.11/site-packages/urllib3/util/connection.py:73: ConnectionRefusedError
18:16:19
18:16:19 The above exception was the direct cause of the following exception:
18:16:19
18:16:19 self =
18:16:19 method = 'GET'
18:16:19 url = '/rests/data/ietf-network:networks/network=clli-network?content=config'
18:16:19 body = None
18:16:19 headers = {'User-Agent': 'python-requests/2.32.3', 'Accept-Encoding': 'gzip, deflate', 'Accept': 'application/json', 'Connection': 'keep-alive', 'Content-Type': 'application/json', 'Authorization': 'Basic YWRtaW46YWRtaW4='}
18:16:19 retries = Retry(total=0, connect=None, read=False, redirect=None, status=None)
18:16:19 redirect = False, assert_same_host = False
18:16:19 timeout = Timeout(connect=10, read=10, total=None),
pool_timeout = None
18:16:19 release_conn = False, chunked = False, body_pos = None, preload_content = False
18:16:19 decode_content = False, response_kw = {}
18:16:19 parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/rests/data/ietf-network:networks/network=clli-network', query='content=config', fragment=None)
18:16:19 destination_scheme = None, conn = None, release_this_conn = True
18:16:19 http_tunnel_required = False, err = None, clean_exit = False
18:16:19
18:16:19     def urlopen(  # type: ignore[override]
18:16:19         self,
18:16:19         method: str,
18:16:19         url: str,
18:16:19         body: _TYPE_BODY | None = None,
18:16:19         headers: typing.Mapping[str, str] | None = None,
18:16:19         retries: Retry | bool | int | None = None,
18:16:19         redirect: bool = True,
18:16:19         assert_same_host: bool = True,
18:16:19         timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT,
18:16:19         pool_timeout: int | None = None,
18:16:19         release_conn: bool | None = None,
18:16:19         chunked: bool = False,
18:16:19         body_pos: _TYPE_BODY_POSITION | None = None,
18:16:19         preload_content: bool = True,
18:16:19         decode_content: bool = True,
18:16:19         **response_kw: typing.Any,
18:16:19     ) -> BaseHTTPResponse:
18:16:19         """
18:16:19         Get a connection from the pool and perform an HTTP request. This is the
18:16:19         lowest level call for making a request, so you'll need to specify all
18:16:19         the raw details.
18:16:19
18:16:19         .. note::
18:16:19
18:16:19            More commonly, it's appropriate to use a convenience method
18:16:19            such as :meth:`request`.
18:16:19
18:16:19         .. note::
18:16:19
18:16:19            `release_conn` will only behave as expected if
18:16:19            `preload_content=False` because we want to make
18:16:19            `preload_content=False` the default behaviour someday soon without
18:16:19            breaking backwards compatibility.
18:16:19
18:16:19         :param method:
18:16:19             HTTP request method (such as GET, POST, PUT, etc.)
18:16:19
18:16:19         :param url:
18:16:19             The URL to perform the request on.
18:16:19
18:16:19         :param body:
18:16:19             Data to send in the request body, either :class:`str`, :class:`bytes`,
18:16:19             an iterable of :class:`str`/:class:`bytes`, or a file-like object.
18:16:19
18:16:19         :param headers:
18:16:19             Dictionary of custom headers to send, such as User-Agent,
18:16:19             If-None-Match, etc. If None, pool headers are used. If provided,
18:16:19             these headers completely replace any pool-specific headers.
18:16:19
18:16:19         :param retries:
18:16:19             Configure the number of retries to allow before raising a
18:16:19             :class:`~urllib3.exceptions.MaxRetryError` exception.
18:16:19
18:16:19             If ``None`` (default) will retry 3 times, see ``Retry.DEFAULT``. Pass a
18:16:19             :class:`~urllib3.util.retry.Retry` object for fine-grained control
18:16:19             over different types of retries.
18:16:19             Pass an integer number to retry connection errors that many times,
18:16:19             but no other types of errors. Pass zero to never retry.
18:16:19
18:16:19             If ``False``, then retries are disabled and any exception is raised
18:16:19             immediately. Also, instead of raising a MaxRetryError on redirects,
18:16:19             the redirect response will be returned.
18:16:19
18:16:19         :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int.
18:16:19
18:16:19         :param redirect:
18:16:19             If True, automatically handle redirects (status codes 301, 302,
18:16:19             303, 307, 308). Each redirect counts as a retry. Disabling retries
18:16:19             will disable redirect, too.
18:16:19
18:16:19         :param assert_same_host:
18:16:19             If ``True``, will make sure that the host of the pool requests is
18:16:19             consistent else will raise HostChangedError. When ``False``, you can
18:16:19             use the pool on an HTTP proxy and request foreign hosts.
18:16:19
18:16:19         :param timeout:
18:16:19             If specified, overrides the default timeout for this one
18:16:19             request. It may be a float (in seconds) or an instance of
18:16:19             :class:`urllib3.util.Timeout`.
18:16:19
18:16:19         :param pool_timeout:
18:16:19             If set and the pool is set to block=True, then this method will
18:16:19             block for ``pool_timeout`` seconds and raise EmptyPoolError if no
18:16:19             connection is available within the time period.
18:16:19
18:16:19         :param bool preload_content:
18:16:19             If True, the response's body will be preloaded into memory.
18:16:19
18:16:19         :param bool decode_content:
18:16:19             If True, will attempt to decode the body based on the
18:16:19             'content-encoding' header.
18:16:19
18:16:19         :param release_conn:
18:16:19             If False, then the urlopen call will not release the connection
18:16:19             back into the pool once a response is received (but will release if
18:16:19             you read the entire contents of the response such as when
18:16:19             `preload_content=True`). This is useful if you're not preloading
18:16:19             the response's content immediately. You will need to call
18:16:19             ``r.release_conn()`` on the response ``r`` to return the connection
18:16:19             back into the pool. If None, it takes the value of ``preload_content``
18:16:19             which defaults to ``True``.
18:16:19
18:16:19         :param bool chunked:
18:16:19             If True, urllib3 will send the body using chunked transfer
18:16:19             encoding. Otherwise, urllib3 will send the body using the standard
18:16:19             content-length form. Defaults to False.
18:16:19
18:16:19         :param int body_pos:
18:16:19             Position to seek to in file-like body in the event of a retry or
18:16:19             redirect. Typically this won't need to be set because urllib3 will
18:16:19             auto-populate the value when needed.
18:16:19         """
18:16:19         parsed_url = parse_url(url)
18:16:19         destination_scheme = parsed_url.scheme
18:16:19
18:16:19         if headers is None:
18:16:19             headers = self.headers
18:16:19
18:16:19         if not isinstance(retries, Retry):
18:16:19             retries = Retry.from_int(retries, redirect=redirect, default=self.retries)
18:16:19
18:16:19         if release_conn is None:
18:16:19             release_conn = preload_content
18:16:19
18:16:19         # Check host
18:16:19         if assert_same_host and not self.is_same_host(url):
18:16:19             raise HostChangedError(self, url, retries)
18:16:19
18:16:19         # Ensure that the URL we're connecting to is properly encoded
18:16:19         if url.startswith("/"):
18:16:19             url = to_str(_encode_target(url))
18:16:19         else:
18:16:19             url = to_str(parsed_url.url)
18:16:19
18:16:19         conn = None
18:16:19
18:16:19         # Track whether `conn` needs to be released before
18:16:19         # returning/raising/recursing. Update this variable if necessary, and
18:16:19         # leave `release_conn` constant throughout the function. That way, if
18:16:19         # the function recurses, the original value of `release_conn` will be
18:16:19         # passed down into the recursive call, and its value will be respected.
18:16:19         #
18:16:19         # See issue #651 [1] for details.
18:16:19         #
18:16:19         # [1]
18:16:19         release_this_conn = release_conn
18:16:19
18:16:19         http_tunnel_required = connection_requires_http_tunnel(
18:16:19             self.proxy, self.proxy_config, destination_scheme
18:16:19         )
18:16:19
18:16:19         # Merge the proxy headers. Only done when not using HTTP CONNECT. We
18:16:19         # have to copy the headers dict so we can safely change it without those
18:16:19         # changes being reflected in anyone else's copy.
18:16:19         if not http_tunnel_required:
18:16:19             headers = headers.copy()  # type: ignore[attr-defined]
18:16:19             headers.update(self.proxy_headers)  # type: ignore[union-attr]
18:16:19
18:16:19         # Must keep the exception bound to a separate variable or else Python 3
18:16:19         # complains about UnboundLocalError.
18:16:19         err = None
18:16:19
18:16:19         # Keep track of whether we cleanly exited the except block. This
18:16:19         # ensures we do proper cleanup in finally.
18:16:19         clean_exit = False
18:16:19
18:16:19         # Rewind body position, if needed. Record current position
18:16:19         # for future rewinds in the event of a redirect/retry.
18:16:19         body_pos = set_file_position(body, body_pos)
18:16:19
18:16:19         try:
18:16:19             # Request a connection from the queue.
18:16:19             timeout_obj = self._get_timeout(timeout)
18:16:19             conn = self._get_conn(timeout=pool_timeout)
18:16:19
18:16:19             conn.timeout = timeout_obj.connect_timeout  # type: ignore[assignment]
18:16:19
18:16:19             # Is this a closed/new connection that requires CONNECT tunnelling?
18:16:19             if self.proxy is not None and http_tunnel_required and conn.is_closed:
18:16:19                 try:
18:16:19                     self._prepare_proxy(conn)
18:16:19                 except (BaseSSLError, OSError, SocketTimeout) as e:
18:16:19                     self._raise_timeout(
18:16:19                         err=e, url=self.proxy.url, timeout_value=conn.timeout
18:16:19                     )
18:16:19                     raise
18:16:19
18:16:19             # If we're going to release the connection in ``finally:``, then
18:16:19             # the response doesn't need to know about the connection. Otherwise
18:16:19             # it will also try to release it and we'll have a double-release
18:16:19             # mess.
18:16:19             response_conn = conn if not release_conn else None
18:16:19
18:16:19             # Make the request on the HTTPConnection object
18:16:19 >           response = self._make_request(
18:16:19                 conn,
18:16:19                 method,
18:16:19                 url,
18:16:19                 timeout=timeout_obj,
18:16:19                 body=body,
18:16:19                 headers=headers,
18:16:19                 chunked=chunked,
18:16:19                 retries=retries,
18:16:19                 response_conn=response_conn,
18:16:19                 preload_content=preload_content,
18:16:19                 decode_content=decode_content,
18:16:19                 **response_kw,
18:16:19             )
18:16:19
18:16:19 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connectionpool.py:789:
18:16:19 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
18:16:19 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connectionpool.py:495: in _make_request
18:16:19     conn.request(
18:16:19 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:441: in request
18:16:19     self.endheaders()
18:16:19 /opt/pyenv/versions/3.11.7/lib/python3.11/http/client.py:1289: in endheaders
18:16:19     self._send_output(message_body, encode_chunked=encode_chunked)
18:16:19 /opt/pyenv/versions/3.11.7/lib/python3.11/http/client.py:1048: in _send_output
18:16:19     self.send(msg)
18:16:19 /opt/pyenv/versions/3.11.7/lib/python3.11/http/client.py:986: in send
18:16:19     self.connect()
18:16:19 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:279: in connect
18:16:19     self.sock = self._new_conn()
18:16:19 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
18:16:19
18:16:19 self =
18:16:19
18:16:19     def _new_conn(self) -> socket.socket:
18:16:19         """Establish a socket connection and set nodelay settings on it.
18:16:19
18:16:19         :return: New socket connection.
18:16:19         """
18:16:19         try:
18:16:19             sock = connection.create_connection(
18:16:19                 (self._dns_host, self.port),
18:16:19                 self.timeout,
18:16:19                 source_address=self.source_address,
18:16:19                 socket_options=self.socket_options,
18:16:19             )
18:16:19         except socket.gaierror as e:
18:16:19             raise NameResolutionError(self.host, self, e) from e
18:16:19         except SocketTimeout as e:
18:16:19             raise ConnectTimeoutError(
18:16:19                 self,
18:16:19                 f"Connection to {self.host} timed out. (connect timeout={self.timeout})",
18:16:19             ) from e
18:16:19
18:16:19         except OSError as e:
18:16:19 >           raise NewConnectionError(
18:16:19                 self, f"Failed to establish a new connection: {e}"
18:16:19             ) from e
18:16:19 E           urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused
18:16:19
18:16:19 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:214: NewConnectionError
18:16:19
18:16:19 The above exception was the direct cause of the following exception:
18:16:19
18:16:19 self =
18:16:19 request = , stream = False
18:16:19 timeout = Timeout(connect=10, read=10, total=None), verify = True, cert = None
18:16:19 proxies = OrderedDict()
18:16:19
18:16:19     def send(
18:16:19         self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None
18:16:19     ):
18:16:19         """Sends PreparedRequest object. Returns Response object.
18:16:19
18:16:19         :param request: The :class:`PreparedRequest ` being sent.
18:16:19         :param stream: (optional) Whether to stream the request content.
18:16:19         :param timeout: (optional) How long to wait for the server to send
18:16:19             data before giving up, as a float, or a :ref:`(connect timeout,
18:16:19             read timeout) ` tuple.
18:16:19         :type timeout: float or tuple or urllib3 Timeout object
18:16:19         :param verify: (optional) Either a boolean, in which case it controls whether
18:16:19             we verify the server's TLS certificate, or a string, in which case it
18:16:19             must be a path to a CA bundle to use
18:16:19         :param cert: (optional) Any user-provided SSL certificate to be trusted.
18:16:19         :param proxies: (optional) The proxies dictionary to apply to the request.
18:16:19         :rtype: requests.Response
18:16:19         """
18:16:19
18:16:19         try:
18:16:19             conn = self.get_connection_with_tls_context(
18:16:19                 request, verify, proxies=proxies, cert=cert
18:16:19             )
18:16:19         except LocationValueError as e:
18:16:19             raise InvalidURL(e, request=request)
18:16:19
18:16:19         self.cert_verify(conn, request.url, verify, cert)
18:16:19         url = self.request_url(request, proxies)
18:16:19         self.add_headers(
18:16:19             request,
18:16:19             stream=stream,
18:16:19             timeout=timeout,
18:16:19             verify=verify,
18:16:19             cert=cert,
18:16:19             proxies=proxies,
18:16:19         )
18:16:19
18:16:19         chunked = not (request.body is None or "Content-Length" in request.headers)
18:16:19
18:16:19         if isinstance(timeout, tuple):
18:16:19             try:
18:16:19                 connect, read = timeout
18:16:19                 timeout = TimeoutSauce(connect=connect, read=read)
18:16:19             except ValueError:
18:16:19                 raise ValueError(
18:16:19                     f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, "
18:16:19                     f"or a single float to set both timeouts to the same value."
18:16:19                 )
18:16:19         elif isinstance(timeout, TimeoutSauce):
18:16:19             pass
18:16:19         else:
18:16:19             timeout = TimeoutSauce(connect=timeout, read=timeout)
18:16:19
18:16:19         try:
18:16:19 >           resp = conn.urlopen(
18:16:19                 method=request.method,
18:16:19                 url=url,
18:16:19                 body=request.body,
18:16:19                 headers=request.headers,
18:16:19                 redirect=False,
18:16:19                 assert_same_host=False,
18:16:19                 preload_content=False,
18:16:19                 decode_content=False,
18:16:19                 retries=self.max_retries,
18:16:19                 timeout=timeout,
18:16:19                 chunked=chunked,
18:16:19             )
18:16:19
18:16:19 ../.tox/tests121/lib/python3.11/site-packages/requests/adapters.py:667:
18:16:19 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
18:16:19 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connectionpool.py:843: in urlopen
18:16:19     retries = retries.increment(
18:16:19 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
18:16:19
18:16:19 self = Retry(total=0, connect=None, read=False, redirect=None, status=None)
18:16:19 method = 'GET'
18:16:19 url = '/rests/data/ietf-network:networks/network=clli-network?content=config'
18:16:19 response = None
18:16:19 error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')
18:16:19 _pool =
18:16:19 _stacktrace =
18:16:19
18:16:19     def increment(
18:16:19         self,
18:16:19         method: str | None = None,
18:16:19         url: str | None = None,
18:16:19         response: BaseHTTPResponse | None = None,
18:16:19         error: Exception | None = None,
18:16:19         _pool: ConnectionPool | None = None,
18:16:19         _stacktrace: TracebackType | None = None,
18:16:19     ) -> Self:
18:16:19         """Return a new Retry object with incremented retry counters.
18:16:19
18:16:19         :param response: A response object, or None, if the server did not
18:16:19             return a response.
18:16:19         :type response: :class:`~urllib3.response.BaseHTTPResponse`
18:16:19         :param Exception error: An error encountered during the request, or
18:16:19             None if the response was received successfully.
18:16:19 
18:16:19         :return: A new ``Retry`` object.
18:16:19         """
18:16:19         if self.total is False and error:
18:16:19             # Disabled, indicate to re-raise the error.
18:16:19             raise reraise(type(error), error, _stacktrace)
18:16:19 
18:16:19         total = self.total
18:16:19         if total is not None:
18:16:19             total -= 1
18:16:19 
18:16:19         connect = self.connect
18:16:19         read = self.read
18:16:19         redirect = self.redirect
18:16:19         status_count = self.status
18:16:19         other = self.other
18:16:19         cause = "unknown"
18:16:19         status = None
18:16:19         redirect_location = None
18:16:19 
18:16:19         if error and self._is_connection_error(error):
18:16:19             # Connect retry?
18:16:19             if connect is False:
18:16:19                 raise reraise(type(error), error, _stacktrace)
18:16:19             elif connect is not None:
18:16:19                 connect -= 1
18:16:19 
18:16:19         elif error and self._is_read_error(error):
18:16:19             # Read retry?
18:16:19             if read is False or method is None or not self._is_method_retryable(method):
18:16:19                 raise reraise(type(error), error, _stacktrace)
18:16:19             elif read is not None:
18:16:19                 read -= 1
18:16:19 
18:16:19         elif error:
18:16:19             # Other retry?
18:16:19             if other is not None:
18:16:19                 other -= 1
18:16:19 
18:16:19         elif response and response.get_redirect_location():
18:16:19             # Redirect retry?
18:16:19             if redirect is not None:
18:16:19                 redirect -= 1
18:16:19             cause = "too many redirects"
18:16:19             response_redirect_location = response.get_redirect_location()
18:16:19             if response_redirect_location:
18:16:19                 redirect_location = response_redirect_location
18:16:19             status = response.status
18:16:19 
18:16:19         else:
18:16:19             # Incrementing because of a server error like a 500 in
18:16:19             # status_forcelist and the given method is in the allowed_methods
18:16:19             cause = ResponseError.GENERIC_ERROR
18:16:19             if response and response.status:
18:16:19                 if status_count is not None:
18:16:19                     status_count -= 1
18:16:19                 cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status)
18:16:19                 status = response.status
18:16:19 
18:16:19         history = self.history + (
18:16:19             RequestHistory(method, url, error, status, redirect_location),
18:16:19         )
18:16:19 
18:16:19         new_retry = self.new(
18:16:19             total=total,
18:16:19             connect=connect,
18:16:19             read=read,
18:16:19             redirect=redirect,
18:16:19             status=status_count,
18:16:19             other=other,
18:16:19             history=history,
18:16:19         )
18:16:19 
18:16:19         if new_retry.is_exhausted():
18:16:19             reason = error or ResponseError(cause)
18:16:19 >           raise MaxRetryError(_pool, url, reason) from reason  # type: ignore[arg-type]
18:16:19 E           urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=8182): Max retries exceeded with url: /rests/data/ietf-network:networks/network=clli-network?content=config (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused'))
18:16:19 
18:16:19 ../.tox/tests121/lib/python3.11/site-packages/urllib3/util/retry.py:519: MaxRetryError
18:16:19 
18:16:19 During handling of the above exception, another exception occurred:
18:16:19 
18:16:19 self = 
18:16:19 
18:16:19     def test_34_getClliNetwork(self):
18:16:19 >       response = test_utils.get_ietf_network_request('clli-network', 'config')
18:16:19 
18:16:19 transportpce_tests/1.2.1/test03_topology.py:690: 
18:16:19 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
18:16:19 transportpce_tests/common/test_utils.py:495: in get_ietf_network_request
18:16:19     response = get_request(url[RESTCONF_VERSION].format(*format_args))
18:16:19 transportpce_tests/common/test_utils.py:116: in get_request
18:16:19     return requests.request(
18:16:19 ../.tox/tests121/lib/python3.11/site-packages/requests/api.py:59: in request
18:16:19     return session.request(method=method, url=url, **kwargs)
18:16:19 ../.tox/tests121/lib/python3.11/site-packages/requests/sessions.py:589: in request
18:16:19     resp = self.send(prep, **send_kwargs)
18:16:19 ../.tox/tests121/lib/python3.11/site-packages/requests/sessions.py:703: in send
18:16:19     r = adapter.send(request, **kwargs)
18:16:19 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
18:16:19 
18:16:19 self = 
18:16:19 request = , stream = False
18:16:19 timeout = Timeout(connect=10, read=10, total=None), verify = True, cert = None
18:16:19 proxies = OrderedDict()
18:16:19 
18:16:19     def send(
18:16:19         self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None
18:16:19     ):
18:16:19         """Sends PreparedRequest object. Returns Response object.
18:16:19 
18:16:19         :param request: The :class:`PreparedRequest ` being sent.
18:16:19         :param stream: (optional) Whether to stream the request content.
18:16:19         :param timeout: (optional) How long to wait for the server to send
18:16:19             data before giving up, as a float, or a :ref:`(connect timeout,
18:16:19             read timeout) ` tuple.
18:16:19         :type timeout: float or tuple or urllib3 Timeout object
18:16:19         :param verify: (optional) Either a boolean, in which case it controls whether
18:16:19             we verify the server's TLS certificate, or a string, in which case it
18:16:19             must be a path to a CA bundle to use
18:16:19         :param cert: (optional) Any user-provided SSL certificate to be trusted.
18:16:19         :param proxies: (optional) The proxies dictionary to apply to the request.
18:16:19         :rtype: requests.Response
18:16:19         """
18:16:19 
18:16:19         try:
18:16:19             conn = self.get_connection_with_tls_context(
18:16:19                 request, verify, proxies=proxies, cert=cert
18:16:19             )
18:16:19         except LocationValueError as e:
18:16:19             raise InvalidURL(e, request=request)
18:16:19 
18:16:19         self.cert_verify(conn, request.url, verify, cert)
18:16:19         url = self.request_url(request, proxies)
18:16:19         self.add_headers(
18:16:19             request,
18:16:19             stream=stream,
18:16:19             timeout=timeout,
18:16:19             verify=verify,
18:16:19             cert=cert,
18:16:19             proxies=proxies,
18:16:19         )
18:16:19 
18:16:19         chunked = not (request.body is None or "Content-Length" in request.headers)
18:16:19 
18:16:19         if isinstance(timeout, tuple):
18:16:19             try:
18:16:19                 connect, read = timeout
18:16:19                 timeout = TimeoutSauce(connect=connect, read=read)
18:16:19             except ValueError:
18:16:19                 raise ValueError(
18:16:19                     f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, "
18:16:19                     f"or a single float to set both timeouts to the same value."
18:16:19                 )
18:16:19         elif isinstance(timeout, TimeoutSauce):
18:16:19             pass
18:16:19         else:
18:16:19             timeout = TimeoutSauce(connect=timeout, read=timeout)
18:16:19 
18:16:19         try:
18:16:19             resp = conn.urlopen(
18:16:19                 method=request.method,
18:16:19                 url=url,
18:16:19                 body=request.body,
18:16:19                 headers=request.headers,
18:16:19                 redirect=False,
18:16:19                 assert_same_host=False,
18:16:19                 preload_content=False,
18:16:19                 decode_content=False,
18:16:19                 retries=self.max_retries,
18:16:19                 timeout=timeout,
18:16:19                 chunked=chunked,
18:16:19             )
18:16:19 
18:16:19         except (ProtocolError, OSError) as err:
18:16:19             raise ConnectionError(err, request=request)
18:16:19 
18:16:19         except MaxRetryError as e:
18:16:19             if isinstance(e.reason, ConnectTimeoutError):
18:16:19                 # TODO: Remove this in 3.0.0: see #2811
18:16:19                 if not isinstance(e.reason, NewConnectionError):
18:16:19                     raise ConnectTimeout(e, request=request)
18:16:19 
18:16:19             if isinstance(e.reason, ResponseError):
18:16:19                 raise RetryError(e, request=request)
18:16:19 
18:16:19             if isinstance(e.reason, _ProxyError):
18:16:19                 raise ProxyError(e, request=request)
18:16:19 
18:16:19             if isinstance(e.reason, _SSLError):
18:16:19                 # This branch is for urllib3 v1.22 and later.
18:16:19                 raise SSLError(e, request=request)
18:16:19 
18:16:19 >           raise ConnectionError(e, request=request)
18:16:19 E           requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=8182): Max retries exceeded with url: /rests/data/ietf-network:networks/network=clli-network?content=config (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused'))
18:16:19 
18:16:19 ../.tox/tests121/lib/python3.11/site-packages/requests/adapters.py:700: ConnectionError
18:16:19 _____________ TransportPCETopologyTesting.test_35_disconnect_XPDRA _____________
18:16:19 
18:16:19 self = 
18:16:19 
18:16:19     def _new_conn(self) -> socket.socket:
18:16:19         """Establish a socket connection and set nodelay settings on it.
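Every failure in this stretch of the run bottoms out in the same root cause: `[Errno 111] Connection refused` against `localhost:8182`, meaning no RESTCONF server was listening when the tests ran. A minimal stdlib sketch (the helper names here are illustrative, not part of the test suite) reproduces that errno by dialing a port with no listener:

```python
import errno
import socket
from typing import Optional

def find_unused_port() -> int:
    # Bind to port 0 so the OS picks a free ephemeral port, then release it.
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.bind(("127.0.0.1", 0))
        return s.getsockname()[1]

def probe(host: str, port: int, timeout: float = 2.0) -> Optional[int]:
    # Return the OS error number on failure (errno.ECONNREFUSED == 111 on
    # Linux, as seen in the log above), or None if the connect succeeded.
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return None
    except OSError as err:
        return err.errno
```

With nothing listening on the chosen port, `probe()` returns `errno.ECONNREFUSED`; urllib3 wraps that OSError in `NewConnectionError`, and requests re-wraps it as `requests.exceptions.ConnectionError`, which is exactly the chain shown in these tracebacks.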
18:16:19 
18:16:19         :return: New socket connection.
18:16:19         """
18:16:19         try:
18:16:19 >           sock = connection.create_connection(
18:16:19                 (self._dns_host, self.port),
18:16:19                 self.timeout,
18:16:19                 source_address=self.source_address,
18:16:19                 socket_options=self.socket_options,
18:16:19             )
18:16:19 
18:16:19 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:199: 
18:16:19 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
18:16:19 ../.tox/tests121/lib/python3.11/site-packages/urllib3/util/connection.py:85: in create_connection
18:16:19     raise err
18:16:19 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
18:16:19 
18:16:19 address = ('localhost', 8182), timeout = 10, source_address = None
18:16:19 socket_options = [(6, 1, 1)]
18:16:19 
18:16:19 def create_connection(
18:16:19     address: tuple[str, int],
18:16:19     timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT,
18:16:19     source_address: tuple[str, int] | None = None,
18:16:19     socket_options: _TYPE_SOCKET_OPTIONS | None = None,
18:16:19 ) -> socket.socket:
18:16:19     """Connect to *address* and return the socket object.
18:16:19 
18:16:19     Convenience function. Connect to *address* (a 2-tuple ``(host,
18:16:19     port)``) and return the socket object. Passing the optional
18:16:19     *timeout* parameter will set the timeout on the socket instance
18:16:19     before attempting to connect. If no *timeout* is supplied, the
18:16:19     global default timeout setting returned by :func:`socket.getdefaulttimeout`
18:16:19     is used. If *source_address* is set it must be a tuple of (host, port)
18:16:19     for the socket to bind as a source address before making the connection.
18:16:19     An host of '' or port 0 tells the OS to use the default.
18:16:19     """
18:16:19 
18:16:19     host, port = address
18:16:19     if host.startswith("["):
18:16:19         host = host.strip("[]")
18:16:19     err = None
18:16:19 
18:16:19     # Using the value from allowed_gai_family() in the context of getaddrinfo lets
18:16:19     # us select whether to work with IPv4 DNS records, IPv6 records, or both.
18:16:19     # The original create_connection function always returns all records.
18:16:19     family = allowed_gai_family()
18:16:19 
18:16:19     try:
18:16:19         host.encode("idna")
18:16:19     except UnicodeError:
18:16:19         raise LocationParseError(f"'{host}', label empty or too long") from None
18:16:19 
18:16:19     for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM):
18:16:19         af, socktype, proto, canonname, sa = res
18:16:19         sock = None
18:16:19         try:
18:16:19             sock = socket.socket(af, socktype, proto)
18:16:19 
18:16:19             # If provided, set socket level options before connecting.
18:16:19             _set_socket_options(sock, socket_options)
18:16:19 
18:16:19             if timeout is not _DEFAULT_TIMEOUT:
18:16:19                 sock.settimeout(timeout)
18:16:19             if source_address:
18:16:19                 sock.bind(source_address)
18:16:19 >           sock.connect(sa)
18:16:19 E           ConnectionRefusedError: [Errno 111] Connection refused
18:16:19 
18:16:19 ../.tox/tests121/lib/python3.11/site-packages/urllib3/util/connection.py:73: ConnectionRefusedError
18:16:19 
18:16:19 The above exception was the direct cause of the following exception:
18:16:19 
18:16:19 self = 
18:16:19 method = 'DELETE'
18:16:19 url = '/rests/data/network-topology:network-topology/topology=topology-netconf/node=XPDRA01'
18:16:19 body = None
18:16:19 headers = {'User-Agent': 'python-requests/2.32.3', 'Accept-Encoding': 'gzip, deflate', 'Accept': 'application/json', 'Connection': 'keep-alive', 'Content-Type': 'application/json', 'Content-Length': '0', 'Authorization': 'Basic YWRtaW46YWRtaW4='}
18:16:19 retries = Retry(total=0, connect=None, read=False, redirect=None, status=None)
18:16:19 redirect = False, assert_same_host = False
18:16:19 timeout = Timeout(connect=10, read=10, total=None), pool_timeout = None
18:16:19 release_conn = False, chunked = False, body_pos = None, preload_content = False
18:16:19 decode_content = False, response_kw = {}
18:16:19 parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/rests/data/network-topology:network-topology/topology=topology-netconf/node=XPDRA01', query=None, fragment=None)
18:16:19 destination_scheme = None, conn = None, release_this_conn = True
18:16:19 http_tunnel_required = False, err = None, clean_exit = False
18:16:19 
18:16:19     def urlopen(  # type: ignore[override]
18:16:19         self,
18:16:19         method: str,
18:16:19         url: str,
18:16:19         body: _TYPE_BODY | None = None,
18:16:19         headers: typing.Mapping[str, str] | None = None,
18:16:19         retries: Retry | bool | int | None = None,
18:16:19         redirect: bool = True,
18:16:19         assert_same_host: bool = True,
18:16:19         timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT,
18:16:19         pool_timeout: int | None = None,
18:16:19         release_conn: bool | None = None,
18:16:19         chunked: bool = False,
18:16:19         body_pos: _TYPE_BODY_POSITION | None = None,
18:16:19         preload_content: bool = True,
18:16:19         decode_content: bool = True,
18:16:19         **response_kw: typing.Any,
18:16:19     ) -> BaseHTTPResponse:
18:16:19         """
18:16:19         Get a connection from the pool and perform an HTTP request. This is the
18:16:19         lowest level call for making a request, so you'll need to specify all
18:16:19         the raw details.
18:16:19 
18:16:19         .. note::
18:16:19 
18:16:19            More commonly, it's appropriate to use a convenience method
18:16:19            such as :meth:`request`.
18:16:19 
18:16:19         .. note::
18:16:19 
18:16:19            `release_conn` will only behave as expected if
18:16:19            `preload_content=False` because we want to make
18:16:19            `preload_content=False` the default behaviour someday soon without
18:16:19            breaking backwards compatibility.
18:16:19 
18:16:19         :param method:
18:16:19             HTTP request method (such as GET, POST, PUT, etc.)
18:16:19 
18:16:19         :param url:
18:16:19             The URL to perform the request on.
18:16:19 
18:16:19         :param body:
18:16:19             Data to send in the request body, either :class:`str`, :class:`bytes`,
18:16:19             an iterable of :class:`str`/:class:`bytes`, or a file-like object.
18:16:19 
18:16:19         :param headers:
18:16:19             Dictionary of custom headers to send, such as User-Agent,
18:16:19             If-None-Match, etc. If None, pool headers are used. If provided,
18:16:19             these headers completely replace any pool-specific headers.
18:16:19 
18:16:19         :param retries:
18:16:19             Configure the number of retries to allow before raising a
18:16:19             :class:`~urllib3.exceptions.MaxRetryError` exception.
18:16:19 
18:16:19             If ``None`` (default) will retry 3 times, see ``Retry.DEFAULT``. Pass a
18:16:19             :class:`~urllib3.util.retry.Retry` object for fine-grained control
18:16:19             over different types of retries.
18:16:19             Pass an integer number to retry connection errors that many times,
18:16:19             but no other types of errors. Pass zero to never retry.
18:16:19 
18:16:19             If ``False``, then retries are disabled and any exception is raised
18:16:19             immediately. Also, instead of raising a MaxRetryError on redirects,
18:16:19             the redirect response will be returned.
18:16:19 
18:16:19         :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int.
18:16:19 
18:16:19         :param redirect:
18:16:19             If True, automatically handle redirects (status codes 301, 302,
18:16:19             303, 307, 308). Each redirect counts as a retry. Disabling retries
18:16:19             will disable redirect, too.
18:16:19 
18:16:19         :param assert_same_host:
18:16:19             If ``True``, will make sure that the host of the pool requests is
18:16:19             consistent else will raise HostChangedError. When ``False``, you can
18:16:19             use the pool on an HTTP proxy and request foreign hosts.
18:16:19 
18:16:19         :param timeout:
18:16:19             If specified, overrides the default timeout for this one
18:16:19             request. It may be a float (in seconds) or an instance of
18:16:19             :class:`urllib3.util.Timeout`.
18:16:19 
18:16:19         :param pool_timeout:
18:16:19             If set and the pool is set to block=True, then this method will
18:16:19             block for ``pool_timeout`` seconds and raise EmptyPoolError if no
18:16:19             connection is available within the time period.
18:16:19 
18:16:19         :param bool preload_content:
18:16:19             If True, the response's body will be preloaded into memory.
18:16:19 
18:16:19         :param bool decode_content:
18:16:19             If True, will attempt to decode the body based on the
18:16:19             'content-encoding' header.
18:16:19 
18:16:19         :param release_conn:
18:16:19             If False, then the urlopen call will not release the connection
18:16:19             back into the pool once a response is received (but will release if
18:16:19             you read the entire contents of the response such as when
18:16:19             `preload_content=True`). This is useful if you're not preloading
18:16:19             the response's content immediately. You will need to call
18:16:19             ``r.release_conn()`` on the response ``r`` to return the connection
18:16:19             back into the pool. If None, it takes the value of ``preload_content``
18:16:19             which defaults to ``True``.
18:16:19 
18:16:19         :param bool chunked:
18:16:19             If True, urllib3 will send the body using chunked transfer
18:16:19             encoding. Otherwise, urllib3 will send the body using the standard
18:16:19             content-length form. Defaults to False.
18:16:19 
18:16:19         :param int body_pos:
18:16:19             Position to seek to in file-like body in the event of a retry or
18:16:19             redirect. Typically this won't need to be set because urllib3 will
18:16:19             auto-populate the value when needed.
18:16:19         """
18:16:19         parsed_url = parse_url(url)
18:16:19         destination_scheme = parsed_url.scheme
18:16:19 
18:16:19         if headers is None:
18:16:19             headers = self.headers
18:16:19 
18:16:19         if not isinstance(retries, Retry):
18:16:19             retries = Retry.from_int(retries, redirect=redirect, default=self.retries)
18:16:19 
18:16:19         if release_conn is None:
18:16:19             release_conn = preload_content
18:16:19 
18:16:19         # Check host
18:16:19         if assert_same_host and not self.is_same_host(url):
18:16:19             raise HostChangedError(self, url, retries)
18:16:19 
18:16:19         # Ensure that the URL we're connecting to is properly encoded
18:16:19         if url.startswith("/"):
18:16:19             url = to_str(_encode_target(url))
18:16:19         else:
18:16:19             url = to_str(parsed_url.url)
18:16:19 
18:16:19         conn = None
18:16:19 
18:16:19         # Track whether `conn` needs to be released before
18:16:19         # returning/raising/recursing. Update this variable if necessary, and
18:16:19         # leave `release_conn` constant throughout the function. That way, if
18:16:19         # the function recurses, the original value of `release_conn` will be
18:16:19         # passed down into the recursive call, and its value will be respected.
18:16:19         #
18:16:19         # See issue #651 [1] for details.
18:16:19         #
18:16:19         # [1]
18:16:19         release_this_conn = release_conn
18:16:19 
18:16:19         http_tunnel_required = connection_requires_http_tunnel(
18:16:19             self.proxy, self.proxy_config, destination_scheme
18:16:19         )
18:16:19 
18:16:19         # Merge the proxy headers. Only done when not using HTTP CONNECT. We
18:16:19         # have to copy the headers dict so we can safely change it without those
18:16:19         # changes being reflected in anyone else's copy.
18:16:19         if not http_tunnel_required:
18:16:19             headers = headers.copy()  # type: ignore[attr-defined]
18:16:19             headers.update(self.proxy_headers)  # type: ignore[union-attr]
18:16:19 
18:16:19         # Must keep the exception bound to a separate variable or else Python 3
18:16:19         # complains about UnboundLocalError.
18:16:19         err = None
18:16:19 
18:16:19         # Keep track of whether we cleanly exited the except block. This
18:16:19         # ensures we do proper cleanup in finally.
18:16:19         clean_exit = False
18:16:19 
18:16:19         # Rewind body position, if needed. Record current position
18:16:19         # for future rewinds in the event of a redirect/retry.
18:16:19         body_pos = set_file_position(body, body_pos)
18:16:19 
18:16:19         try:
18:16:19             # Request a connection from the queue.
18:16:19             timeout_obj = self._get_timeout(timeout)
18:16:19             conn = self._get_conn(timeout=pool_timeout)
18:16:19 
18:16:19             conn.timeout = timeout_obj.connect_timeout  # type: ignore[assignment]
18:16:19 
18:16:19             # Is this a closed/new connection that requires CONNECT tunnelling?
18:16:19             if self.proxy is not None and http_tunnel_required and conn.is_closed:
18:16:19                 try:
18:16:19                     self._prepare_proxy(conn)
18:16:19                 except (BaseSSLError, OSError, SocketTimeout) as e:
18:16:19                     self._raise_timeout(
18:16:19                         err=e, url=self.proxy.url, timeout_value=conn.timeout
18:16:19                     )
18:16:19                     raise
18:16:19 
18:16:19             # If we're going to release the connection in ``finally:``, then
18:16:19             # the response doesn't need to know about the connection. Otherwise
18:16:19             # it will also try to release it and we'll have a double-release
18:16:19             # mess.
18:16:19             response_conn = conn if not release_conn else None
18:16:19 
18:16:19             # Make the request on the HTTPConnection object
18:16:19 >           response = self._make_request(
18:16:19                 conn,
18:16:19                 method,
18:16:19                 url,
18:16:19                 timeout=timeout_obj,
18:16:19                 body=body,
18:16:19                 headers=headers,
18:16:19                 chunked=chunked,
18:16:19                 retries=retries,
18:16:19                 response_conn=response_conn,
18:16:19                 preload_content=preload_content,
18:16:19                 decode_content=decode_content,
18:16:19                 **response_kw,
18:16:19             )
18:16:19 
18:16:19 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connectionpool.py:789: 
18:16:19 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
18:16:19 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connectionpool.py:495: in _make_request
18:16:19     conn.request(
18:16:19 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:441: in request
18:16:19     self.endheaders()
18:16:19 /opt/pyenv/versions/3.11.7/lib/python3.11/http/client.py:1289: in endheaders
18:16:19     self._send_output(message_body, encode_chunked=encode_chunked)
18:16:19 /opt/pyenv/versions/3.11.7/lib/python3.11/http/client.py:1048: in _send_output
18:16:19     self.send(msg)
18:16:19 /opt/pyenv/versions/3.11.7/lib/python3.11/http/client.py:986: in send
18:16:19     self.connect()
18:16:19 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:279: in connect
18:16:19     self.sock = self._new_conn()
18:16:19 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
18:16:19 
18:16:19 self = 
18:16:19 
18:16:19     def _new_conn(self) -> socket.socket:
18:16:19         """Establish a socket connection and set nodelay settings on it.
18:16:19         """
18:16:19         try:
18:16:19             sock = connection.create_connection(
18:16:19                 (self._dns_host, self.port),
18:16:19                 self.timeout,
18:16:19                 source_address=self.source_address,
18:16:19                 socket_options=self.socket_options,
18:16:19             )
18:16:19         except socket.gaierror as e:
18:16:19             raise NameResolutionError(self.host, self, e) from e
18:16:19         except SocketTimeout as e:
18:16:19             raise ConnectTimeoutError(
18:16:19                 self,
18:16:19                 f"Connection to {self.host} timed out. (connect timeout={self.timeout})",
18:16:19             ) from e
18:16:19 
18:16:19         except OSError as e:
18:16:19 >           raise NewConnectionError(
18:16:19                 self, f"Failed to establish a new connection: {e}"
18:16:19             ) from e
18:16:19 E           urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused
18:16:19 
18:16:19 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:214: NewConnectionError
18:16:19 
18:16:19 The above exception was the direct cause of the following exception:
18:16:19 
18:16:19 self = 
18:16:19 request = , stream = False
18:16:19 timeout = Timeout(connect=10, read=10, total=None), verify = True, cert = None
18:16:19 proxies = OrderedDict()
18:16:19 
18:16:19     def send(
18:16:19         self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None
18:16:19     ):
18:16:19         """Sends PreparedRequest object. Returns Response object.
18:16:19 
18:16:19         :param request: The :class:`PreparedRequest ` being sent.
18:16:19         :param stream: (optional) Whether to stream the request content.
18:16:19         :param timeout: (optional) How long to wait for the server to send
18:16:19             data before giving up, as a float, or a :ref:`(connect timeout,
18:16:19             read timeout) ` tuple.
18:16:19         :type timeout: float or tuple or urllib3 Timeout object
18:16:19         :param verify: (optional) Either a boolean, in which case it controls whether
18:16:19             we verify the server's TLS certificate, or a string, in which case it
18:16:19             must be a path to a CA bundle to use
18:16:19         :param cert: (optional) Any user-provided SSL certificate to be trusted.
18:16:19         :param proxies: (optional) The proxies dictionary to apply to the request.
18:16:19         :rtype: requests.Response
18:16:19         """
18:16:19 
18:16:19         try:
18:16:19             conn = self.get_connection_with_tls_context(
18:16:19                 request, verify, proxies=proxies, cert=cert
18:16:19             )
18:16:19         except LocationValueError as e:
18:16:19             raise InvalidURL(e, request=request)
18:16:19 
18:16:19         self.cert_verify(conn, request.url, verify, cert)
18:16:19         url = self.request_url(request, proxies)
18:16:19         self.add_headers(
18:16:19             request,
18:16:19             stream=stream,
18:16:19             timeout=timeout,
18:16:19             verify=verify,
18:16:19             cert=cert,
18:16:19             proxies=proxies,
18:16:19         )
18:16:19 
18:16:19         chunked = not (request.body is None or "Content-Length" in request.headers)
18:16:19 
18:16:19         if isinstance(timeout, tuple):
18:16:19             try:
18:16:19                 connect, read = timeout
18:16:19                 timeout = TimeoutSauce(connect=connect, read=read)
18:16:19             except ValueError:
18:16:19                 raise ValueError(
18:16:19                     f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, "
18:16:19                     f"or a single float to set both timeouts to the same value."
18:16:19                 )
18:16:19         elif isinstance(timeout, TimeoutSauce):
18:16:19             pass
18:16:19         else:
18:16:19             timeout = TimeoutSauce(connect=timeout, read=timeout)
18:16:19 
18:16:19         try:
18:16:19 >           resp = conn.urlopen(
18:16:19                 method=request.method,
18:16:19                 url=url,
18:16:19                 body=request.body,
18:16:19                 headers=request.headers,
18:16:19                 redirect=False,
18:16:19                 assert_same_host=False,
18:16:19                 preload_content=False,
18:16:19                 decode_content=False,
18:16:19                 retries=self.max_retries,
18:16:19                 timeout=timeout,
18:16:19                 chunked=chunked,
18:16:19             )
18:16:19 
18:16:19 ../.tox/tests121/lib/python3.11/site-packages/requests/adapters.py:667: 
18:16:19 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
18:16:19 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connectionpool.py:843: in urlopen
18:16:19     retries = retries.increment(
18:16:19 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
18:16:19 
18:16:19 self = Retry(total=0, connect=None, read=False, redirect=None, status=None)
18:16:19 method = 'DELETE'
18:16:19 url = '/rests/data/network-topology:network-topology/topology=topology-netconf/node=XPDRA01'
18:16:19 response = None
18:16:19 error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')
18:16:19 _pool = 
18:16:19 _stacktrace = 
18:16:19 
18:16:19     def increment(
18:16:19         self,
18:16:19         method: str | None = None,
18:16:19         url: str | None = None,
18:16:19         response: BaseHTTPResponse | None = None,
18:16:19         error: Exception | None = None,
18:16:19         _pool: ConnectionPool | None = None,
18:16:19         _stacktrace: TracebackType | None = None,
18:16:19     ) -> Self:
18:16:19         """Return a new Retry object with incremented retry counters.
18:16:19 
18:16:19         :param response: A response object, or None, if the server did not
18:16:19             return a response.
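The `Retry(total=0, connect=None, read=False, ...)` visible in these frames has no retry budget at all: the first `increment()` pushes `total` below zero, `is_exhausted()` becomes true, and `MaxRetryError` is raised straight away. A simplified model of that bookkeeping (an illustration only, not urllib3's actual `Retry` class):

```python
from dataclasses import dataclass, replace

@dataclass(frozen=True)
class MiniRetry:
    # Simplified stand-in for urllib3's Retry: only the `total` counter.
    total: int = 3

    def increment(self) -> "MiniRetry":
        # Each failed attempt returns a new object with one less retry left.
        return replace(self, total=self.total - 1)

    def is_exhausted(self) -> bool:
        # urllib3 treats a counter below zero as "budget spent" and raises
        # MaxRetryError at that point.
        return self.total < 0
```

With `total=0`, as configured for these functional tests, a single connection failure exhausts the budget immediately, which is why the `ConnectionRefusedError` surfaces after one attempt instead of being retried.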
18:16:19         :type response: :class:`~urllib3.response.BaseHTTPResponse`
18:16:19         :param Exception error: An error encountered during the request, or
18:16:19             None if the response was received successfully.
18:16:19 
18:16:19         :return: A new ``Retry`` object.
18:16:19         """
18:16:19         if self.total is False and error:
18:16:19             # Disabled, indicate to re-raise the error.
18:16:19             raise reraise(type(error), error, _stacktrace)
18:16:19 
18:16:19         total = self.total
18:16:19         if total is not None:
18:16:19             total -= 1
18:16:19 
18:16:19         connect = self.connect
18:16:19         read = self.read
18:16:19         redirect = self.redirect
18:16:19         status_count = self.status
18:16:19         other = self.other
18:16:19         cause = "unknown"
18:16:19         status = None
18:16:19         redirect_location = None
18:16:19 
18:16:19         if error and self._is_connection_error(error):
18:16:19             # Connect retry?
18:16:19             if connect is False:
18:16:19                 raise reraise(type(error), error, _stacktrace)
18:16:19             elif connect is not None:
18:16:19                 connect -= 1
18:16:19 
18:16:19         elif error and self._is_read_error(error):
18:16:19             # Read retry?
18:16:19             if read is False or method is None or not self._is_method_retryable(method):
18:16:19                 raise reraise(type(error), error, _stacktrace)
18:16:19             elif read is not None:
18:16:19                 read -= 1
18:16:19 
18:16:19         elif error:
18:16:19             # Other retry?
18:16:19             if other is not None:
18:16:19                 other -= 1
18:16:19 
18:16:19         elif response and response.get_redirect_location():
18:16:19             # Redirect retry?
18:16:19             if redirect is not None:
18:16:19                 redirect -= 1
18:16:19             cause = "too many redirects"
18:16:19             response_redirect_location = response.get_redirect_location()
18:16:19             if response_redirect_location:
18:16:19                 redirect_location = response_redirect_location
18:16:19             status = response.status
18:16:19 
18:16:19         else:
18:16:19             # Incrementing because of a server error like a 500 in
18:16:19             # status_forcelist and the given method is in the allowed_methods
18:16:19             cause = ResponseError.GENERIC_ERROR
18:16:19             if response and response.status:
18:16:19                 if status_count is not None:
18:16:19                     status_count -= 1
18:16:19                 cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status)
18:16:19                 status = response.status
18:16:19 
18:16:19         history = self.history + (
18:16:19             RequestHistory(method, url, error, status, redirect_location),
18:16:19         )
18:16:19 
18:16:19         new_retry = self.new(
18:16:19             total=total,
18:16:19             connect=connect,
18:16:19             read=read,
18:16:19             redirect=redirect,
18:16:19             status=status_count,
18:16:19             other=other,
18:16:19             history=history,
18:16:19         )
18:16:19 
18:16:19         if new_retry.is_exhausted():
18:16:19             reason = error or ResponseError(cause)
18:16:19 >           raise MaxRetryError(_pool, url, reason) from reason  # type: ignore[arg-type]
18:16:19 E           urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=8182): Max retries exceeded with url: /rests/data/network-topology:network-topology/topology=topology-netconf/node=XPDRA01 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused'))
18:16:19 
18:16:19 ../.tox/tests121/lib/python3.11/site-packages/urllib3/util/retry.py:519: MaxRetryError
18:16:19 
18:16:19 During handling of the above exception, another exception occurred:
18:16:19 
18:16:19 self = 
18:16:19 
18:16:19     def test_35_disconnect_XPDRA(self):
18:16:19 >       response = test_utils.unmount_device("XPDRA01")
18:16:19 
18:16:19 transportpce_tests/1.2.1/test03_topology.py:696: 
18:16:19 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
18:16:19 transportpce_tests/common/test_utils.py:358: in unmount_device
18:16:19     response = delete_request(url[RESTCONF_VERSION].format('{}', node))
18:16:19 transportpce_tests/common/test_utils.py:133: in delete_request
18:16:19     return requests.request(
18:16:19 ../.tox/tests121/lib/python3.11/site-packages/requests/api.py:59: in request
18:16:19     return session.request(method=method, url=url, **kwargs)
18:16:19 ../.tox/tests121/lib/python3.11/site-packages/requests/sessions.py:589: in request
18:16:19     resp = self.send(prep, **send_kwargs)
18:16:19 ../.tox/tests121/lib/python3.11/site-packages/requests/sessions.py:703: in send
18:16:19     r = adapter.send(request, **kwargs)
18:16:19 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
18:16:19 
18:16:19 self = 
18:16:19 request = , stream = False
18:16:19 timeout = Timeout(connect=10, read=10, total=None), verify = True, cert = None
18:16:19 proxies = OrderedDict()
18:16:19 
18:16:19     def send(
18:16:19         self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None
18:16:19     ):
18:16:19         """Sends PreparedRequest object. Returns Response object.
18:16:19 
18:16:19         :param request: The :class:`PreparedRequest ` being sent.
18:16:19         :param stream: (optional) Whether to stream the request content.
18:16:19         :param timeout: (optional) How long to wait for the server to send
18:16:19             data before giving up, as a float, or a :ref:`(connect timeout,
18:16:19             read timeout) ` tuple.
18:16:19         :type timeout: float or tuple or urllib3 Timeout object
18:16:19         :param verify: (optional) Either a boolean, in which case it controls whether
18:16:19             we verify the server's TLS certificate, or a string, in which case it
18:16:19             must be a path to a CA bundle to use
18:16:19         :param cert: (optional) Any user-provided SSL certificate to be trusted.
18:16:19         :param proxies: (optional) The proxies dictionary to apply to the request.
18:16:19         :rtype: requests.Response
18:16:19         """
18:16:19 
18:16:19         try:
18:16:19             conn = self.get_connection_with_tls_context(
18:16:19                 request, verify, proxies=proxies, cert=cert
18:16:19             )
18:16:19         except LocationValueError as e:
18:16:19             raise InvalidURL(e, request=request)
18:16:19 
18:16:19         self.cert_verify(conn, request.url, verify, cert)
18:16:19         url = self.request_url(request, proxies)
18:16:19         self.add_headers(
18:16:19             request,
18:16:19             stream=stream,
18:16:19             timeout=timeout,
18:16:19             verify=verify,
18:16:19             cert=cert,
18:16:19             proxies=proxies,
18:16:19         )
18:16:19 
18:16:19         chunked = not (request.body is None or "Content-Length" in request.headers)
18:16:19 
18:16:19         if isinstance(timeout, tuple):
18:16:19             try:
18:16:19                 connect, read = timeout
18:16:19                 timeout = TimeoutSauce(connect=connect, read=read)
18:16:19             except ValueError:
18:16:19                 raise ValueError(
18:16:19                     f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, "
18:16:19                     f"or a single float to set both timeouts to the same value."
18:16:19                 )
18:16:19         elif isinstance(timeout, TimeoutSauce):
18:16:19             pass
18:16:19         else:
18:16:19             timeout = TimeoutSauce(connect=timeout, read=timeout)
18:16:19 
18:16:19         try:
18:16:19             resp = conn.urlopen(
18:16:19                 method=request.method,
18:16:19                 url=url,
18:16:19                 body=request.body,
18:16:19                 headers=request.headers,
18:16:19                 redirect=False,
18:16:19                 assert_same_host=False,
18:16:19                 preload_content=False,
18:16:19                 decode_content=False,
18:16:19                 retries=self.max_retries,
18:16:19                 timeout=timeout,
18:16:19                 chunked=chunked,
18:16:19             )
18:16:19 
18:16:19         except (ProtocolError, OSError) as err:
18:16:19             raise ConnectionError(err, request=request)
18:16:19 
18:16:19         except MaxRetryError as e:
18:16:19             if isinstance(e.reason, ConnectTimeoutError):
18:16:19                 # TODO: Remove this in 3.0.0: see #2811
18:16:19                 if not isinstance(e.reason, NewConnectionError):
18:16:19                     raise ConnectTimeout(e, request=request)
18:16:19 
18:16:19             if isinstance(e.reason, ResponseError):
18:16:19                 raise RetryError(e, request=request)
18:16:19 
18:16:19             if isinstance(e.reason, _ProxyError):
18:16:19                 raise ProxyError(e, request=request)
18:16:19 
18:16:19             if isinstance(e.reason, _SSLError):
18:16:19                 # This branch is for urllib3 v1.22 and later.
18:16:19                 raise SSLError(e, request=request)
18:16:19 
18:16:19 >           raise ConnectionError(e, request=request)
18:16:19 E           requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=8182): Max retries exceeded with url: /rests/data/network-topology:network-topology/topology=topology-netconf/node=XPDRA01 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused'))
18:16:19 
18:16:19 ../.tox/tests121/lib/python3.11/site-packages/requests/adapters.py:700: ConnectionError
18:16:19 ______________ TransportPCETopologyTesting.test_36_getClliNetwork ______________
18:16:19 
18:16:19 self = 
18:16:19 
18:16:19     def _new_conn(self) -> socket.socket:
18:16:19         """Establish a socket connection and set nodelay settings on it.
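An aside on the failure mechanics: the traceback above ends with requests translating urllib3's MaxRetryError into requests.exceptions.ConnectionError, because nothing is listening on localhost:8182 once the controller is down. A minimal sketch that reproduces the same translation; the free-port probe and the /rests/data path are illustrative, not taken from the test suite:

```python
import socket

import requests

# Probe for a local TCP port with no listener: bind to port 0 so the OS
# picks a free port, record it, then close the socket again.
# (Assumption: nothing else grabs the port before the request below runs.)
probe = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
probe.bind(("127.0.0.1", 0))
unused_port = probe.getsockname()[1]
probe.close()

caught = None
try:
    # The OS refuses the TCP handshake, urllib3 wraps that in MaxRetryError,
    # and HTTPAdapter.send() re-raises it as requests' ConnectionError.
    requests.get(f"http://127.0.0.1:{unused_port}/rests/data", timeout=(10, 10))
except requests.exceptions.ConnectionError as exc:
    caught = exc

print(type(caught).__name__)
```

With the default HTTPAdapter retry budget of zero, the refused connection surfaces immediately rather than after repeated attempts, which is why each failing test above reports its error within the same second.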
18:16:19 
18:16:19         :return: New socket connection.
18:16:19         """
18:16:19         try:
18:16:19 >           sock = connection.create_connection(
18:16:19                 (self._dns_host, self.port),
18:16:19                 self.timeout,
18:16:19                 source_address=self.source_address,
18:16:19                 socket_options=self.socket_options,
18:16:19             )
18:16:19 
18:16:19 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:199: 
18:16:19 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
18:16:19 ../.tox/tests121/lib/python3.11/site-packages/urllib3/util/connection.py:85: in create_connection
18:16:19     raise err
18:16:19 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
18:16:19 
18:16:19 address = ('localhost', 8182), timeout = 10, source_address = None
18:16:19 socket_options = [(6, 1, 1)]
18:16:19 
18:16:19 def create_connection(
18:16:19     address: tuple[str, int],
18:16:19     timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT,
18:16:19     source_address: tuple[str, int] | None = None,
18:16:19     socket_options: _TYPE_SOCKET_OPTIONS | None = None,
18:16:19 ) -> socket.socket:
18:16:19     """Connect to *address* and return the socket object.
18:16:19 
18:16:19     Convenience function. Connect to *address* (a 2-tuple ``(host,
18:16:19     port)``) and return the socket object. Passing the optional
18:16:19     *timeout* parameter will set the timeout on the socket instance
18:16:19     before attempting to connect. If no *timeout* is supplied, the
18:16:19     global default timeout setting returned by :func:`socket.getdefaulttimeout`
18:16:19     is used. If *source_address* is set it must be a tuple of (host, port)
18:16:19     for the socket to bind as a source address before making the connection.
18:16:19     An host of '' or port 0 tells the OS to use the default.
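At the lowest level, the [Errno 111] seen throughout this log is simply the OS refusing the TCP handshake. A socket-level sketch of that refusal, assuming a Linux host (where errno 111 is ECONNREFUSED) and assuming the probed port stays free between the probe and the connect:

```python
import errno
import socket

# Reserve a port number with no listener behind it.
probe = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
probe.bind(("127.0.0.1", 0))
port = probe.getsockname()[1]
probe.close()

sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock.settimeout(10)
refused_errno = None
try:
    sock.connect(("127.0.0.1", port))  # no server: the kernel answers with RST
except ConnectionRefusedError as exc:  # a subclass of OSError
    refused_errno = exc.errno
finally:
    sock.close()

print(refused_errno == errno.ECONNREFUSED)
```

urllib3's create_connection() shown above catches exactly this OSError per resolved address, keeps the last one in err, and re-raises it when no address succeeds.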
18:16:19     """
18:16:19 
18:16:19     host, port = address
18:16:19     if host.startswith("["):
18:16:19         host = host.strip("[]")
18:16:19     err = None
18:16:19 
18:16:19     # Using the value from allowed_gai_family() in the context of getaddrinfo lets
18:16:19     # us select whether to work with IPv4 DNS records, IPv6 records, or both.
18:16:19     # The original create_connection function always returns all records.
18:16:19     family = allowed_gai_family()
18:16:19 
18:16:19     try:
18:16:19         host.encode("idna")
18:16:19     except UnicodeError:
18:16:19         raise LocationParseError(f"'{host}', label empty or too long") from None
18:16:19 
18:16:19     for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM):
18:16:19         af, socktype, proto, canonname, sa = res
18:16:19         sock = None
18:16:19         try:
18:16:19             sock = socket.socket(af, socktype, proto)
18:16:19 
18:16:19             # If provided, set socket level options before connecting.
18:16:19             _set_socket_options(sock, socket_options)
18:16:19 
18:16:19             if timeout is not _DEFAULT_TIMEOUT:
18:16:19                 sock.settimeout(timeout)
18:16:19             if source_address:
18:16:19                 sock.bind(source_address)
18:16:19 >           sock.connect(sa)
18:16:19 E           ConnectionRefusedError: [Errno 111] Connection refused
18:16:19 
18:16:19 ../.tox/tests121/lib/python3.11/site-packages/urllib3/util/connection.py:73: ConnectionRefusedError
18:16:19 
18:16:19 The above exception was the direct cause of the following exception:
18:16:19 
18:16:19 self = 
18:16:19 method = 'GET'
18:16:19 url = '/rests/data/ietf-network:networks/network=clli-network?content=config'
18:16:19 body = None
18:16:19 headers = {'User-Agent': 'python-requests/2.32.3', 'Accept-Encoding': 'gzip, deflate', 'Accept': 'application/json', 'Connection': 'keep-alive', 'Content-Type': 'application/json', 'Authorization': 'Basic YWRtaW46YWRtaW4='}
18:16:19 retries = Retry(total=0, connect=None, read=False, redirect=None, status=None)
18:16:19 redirect = False, assert_same_host = False
18:16:19 timeout = Timeout(connect=10, read=10, total=None), pool_timeout = None
18:16:19 release_conn = False, chunked = False, body_pos = None, preload_content = False
18:16:19 decode_content = False, response_kw = {}
18:16:19 parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/rests/data/ietf-network:networks/network=clli-network', query='content=config', fragment=None)
18:16:19 destination_scheme = None, conn = None, release_this_conn = True
18:16:19 http_tunnel_required = False, err = None, clean_exit = False
18:16:19 
18:16:19     def urlopen(  # type: ignore[override]
18:16:19         self,
18:16:19         method: str,
18:16:19         url: str,
18:16:19         body: _TYPE_BODY | None = None,
18:16:19         headers: typing.Mapping[str, str] | None = None,
18:16:19         retries: Retry | bool | int | None = None,
18:16:19         redirect: bool = True,
18:16:19         assert_same_host: bool = True,
18:16:19         timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT,
18:16:19         pool_timeout: int | None = None,
18:16:19         release_conn: bool | None = None,
18:16:19         chunked: bool = False,
18:16:19         body_pos: _TYPE_BODY_POSITION | None = None,
18:16:19         preload_content: bool = True,
18:16:19         decode_content: bool = True,
18:16:19         **response_kw: typing.Any,
18:16:19     ) -> BaseHTTPResponse:
18:16:19         """
18:16:19         Get a connection from the pool and perform an HTTP request. This is the
18:16:19         lowest level call for making a request, so you'll need to specify all
18:16:19         the raw details.
18:16:19 
18:16:19         .. note::
18:16:19 
18:16:19             More commonly, it's appropriate to use a convenience method
18:16:19             such as :meth:`request`.
18:16:19 
18:16:19         .. note::
18:16:19 
18:16:19             `release_conn` will only behave as expected if
18:16:19             `preload_content=False` because we want to make
18:16:19             `preload_content=False` the default behaviour someday soon without
18:16:19             breaking backwards compatibility.
18:16:19 
18:16:19         :param method:
18:16:19             HTTP request method (such as GET, POST, PUT, etc.)
18:16:19 
18:16:19         :param url:
18:16:19             The URL to perform the request on.
18:16:19 
18:16:19         :param body:
18:16:19             Data to send in the request body, either :class:`str`, :class:`bytes`,
18:16:19             an iterable of :class:`str`/:class:`bytes`, or a file-like object.
18:16:19 
18:16:19         :param headers:
18:16:19             Dictionary of custom headers to send, such as User-Agent,
18:16:19             If-None-Match, etc. If None, pool headers are used. If provided,
18:16:19             these headers completely replace any pool-specific headers.
18:16:19 
18:16:19         :param retries:
18:16:19             Configure the number of retries to allow before raising a
18:16:19             :class:`~urllib3.exceptions.MaxRetryError` exception.
18:16:19 
18:16:19             If ``None`` (default) will retry 3 times, see ``Retry.DEFAULT``. Pass a
18:16:19             :class:`~urllib3.util.retry.Retry` object for fine-grained control
18:16:19             over different types of retries.
18:16:19             Pass an integer number to retry connection errors that many times,
18:16:19             but no other types of errors. Pass zero to never retry.
18:16:19 
18:16:19             If ``False``, then retries are disabled and any exception is raised
18:16:19             immediately. Also, instead of raising a MaxRetryError on redirects,
18:16:19             the redirect response will be returned.
18:16:19 
18:16:19         :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int.
18:16:19 
18:16:19         :param redirect:
18:16:19             If True, automatically handle redirects (status codes 301, 302,
18:16:19             303, 307, 308). Each redirect counts as a retry. Disabling retries
18:16:19             will disable redirect, too.
18:16:19 
18:16:19         :param assert_same_host:
18:16:19             If ``True``, will make sure that the host of the pool requests is
18:16:19             consistent else will raise HostChangedError. When ``False``, you can
18:16:19             use the pool on an HTTP proxy and request foreign hosts.
18:16:19 
18:16:19         :param timeout:
18:16:19             If specified, overrides the default timeout for this one
18:16:19             request. It may be a float (in seconds) or an instance of
18:16:19             :class:`urllib3.util.Timeout`.
18:16:19 
18:16:19         :param pool_timeout:
18:16:19             If set and the pool is set to block=True, then this method will
18:16:19             block for ``pool_timeout`` seconds and raise EmptyPoolError if no
18:16:19             connection is available within the time period.
18:16:19 
18:16:19         :param bool preload_content:
18:16:19             If True, the response's body will be preloaded into memory.
18:16:19 
18:16:19         :param bool decode_content:
18:16:19             If True, will attempt to decode the body based on the
18:16:19             'content-encoding' header.
18:16:19 
18:16:19         :param release_conn:
18:16:19             If False, then the urlopen call will not release the connection
18:16:19             back into the pool once a response is received (but will release if
18:16:19             you read the entire contents of the response such as when
18:16:19             `preload_content=True`). This is useful if you're not preloading
18:16:19             the response's content immediately. You will need to call
18:16:19             ``r.release_conn()`` on the response ``r`` to return the connection
18:16:19             back into the pool. If None, it takes the value of ``preload_content``
18:16:19             which defaults to ``True``.
18:16:19 
18:16:19         :param bool chunked:
18:16:19             If True, urllib3 will send the body using chunked transfer
18:16:19             encoding. Otherwise, urllib3 will send the body using the standard
18:16:19             content-length form. Defaults to False.
18:16:19 
18:16:19         :param int body_pos:
18:16:19             Position to seek to in file-like body in the event of a retry or
18:16:19             redirect. Typically this won't need to be set because urllib3 will
18:16:19             auto-populate the value when needed.
18:16:19         """
18:16:19         parsed_url = parse_url(url)
18:16:19         destination_scheme = parsed_url.scheme
18:16:19 
18:16:19         if headers is None:
18:16:19             headers = self.headers
18:16:19 
18:16:19         if not isinstance(retries, Retry):
18:16:19             retries = Retry.from_int(retries, redirect=redirect, default=self.retries)
18:16:19 
18:16:19         if release_conn is None:
18:16:19             release_conn = preload_content
18:16:19 
18:16:19         # Check host
18:16:19         if assert_same_host and not self.is_same_host(url):
18:16:19             raise HostChangedError(self, url, retries)
18:16:19 
18:16:19         # Ensure that the URL we're connecting to is properly encoded
18:16:19         if url.startswith("/"):
18:16:19             url = to_str(_encode_target(url))
18:16:19         else:
18:16:19             url = to_str(parsed_url.url)
18:16:19 
18:16:19         conn = None
18:16:19 
18:16:19         # Track whether `conn` needs to be released before
18:16:19         # returning/raising/recursing. Update this variable if necessary, and
18:16:19         # leave `release_conn` constant throughout the function. That way, if
18:16:19         # the function recurses, the original value of `release_conn` will be
18:16:19         # passed down into the recursive call, and its value will be respected.
18:16:19         #
18:16:19         # See issue #651 [1] for details.
18:16:19         #
18:16:19         # [1] 
18:16:19         release_this_conn = release_conn
18:16:19 
18:16:19         http_tunnel_required = connection_requires_http_tunnel(
18:16:19             self.proxy, self.proxy_config, destination_scheme
18:16:19         )
18:16:19 
18:16:19         # Merge the proxy headers. Only done when not using HTTP CONNECT. We
18:16:19         # have to copy the headers dict so we can safely change it without those
18:16:19         # changes being reflected in anyone else's copy.
18:16:19         if not http_tunnel_required:
18:16:19             headers = headers.copy()  # type: ignore[attr-defined]
18:16:19             headers.update(self.proxy_headers)  # type: ignore[union-attr]
18:16:19 
18:16:19         # Must keep the exception bound to a separate variable or else Python 3
18:16:19         # complains about UnboundLocalError.
18:16:19         err = None
18:16:19 
18:16:19         # Keep track of whether we cleanly exited the except block. This
18:16:19         # ensures we do proper cleanup in finally.
18:16:19         clean_exit = False
18:16:19 
18:16:19         # Rewind body position, if needed. Record current position
18:16:19         # for future rewinds in the event of a redirect/retry.
18:16:19         body_pos = set_file_position(body, body_pos)
18:16:19 
18:16:19         try:
18:16:19             # Request a connection from the queue.
18:16:19             timeout_obj = self._get_timeout(timeout)
18:16:19             conn = self._get_conn(timeout=pool_timeout)
18:16:19 
18:16:19             conn.timeout = timeout_obj.connect_timeout  # type: ignore[assignment]
18:16:19 
18:16:19             # Is this a closed/new connection that requires CONNECT tunnelling?
18:16:19             if self.proxy is not None and http_tunnel_required and conn.is_closed:
18:16:19                 try:
18:16:19                     self._prepare_proxy(conn)
18:16:19                 except (BaseSSLError, OSError, SocketTimeout) as e:
18:16:19                     self._raise_timeout(
18:16:19                         err=e, url=self.proxy.url, timeout_value=conn.timeout
18:16:19                     )
18:16:19                     raise
18:16:19 
18:16:19             # If we're going to release the connection in ``finally:``, then
18:16:19             # the response doesn't need to know about the connection. Otherwise
18:16:19             # it will also try to release it and we'll have a double-release
18:16:19             # mess.
18:16:19             response_conn = conn if not release_conn else None
18:16:19 
18:16:19             # Make the request on the HTTPConnection object
18:16:19 >           response = self._make_request(
18:16:19                 conn,
18:16:19                 method,
18:16:19                 url,
18:16:19                 timeout=timeout_obj,
18:16:19                 body=body,
18:16:19                 headers=headers,
18:16:19                 chunked=chunked,
18:16:19                 retries=retries,
18:16:19                 response_conn=response_conn,
18:16:19                 preload_content=preload_content,
18:16:19                 decode_content=decode_content,
18:16:19                 **response_kw,
18:16:19             )
18:16:19 
18:16:19 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connectionpool.py:789: 
18:16:19 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
18:16:19 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connectionpool.py:495: in _make_request
18:16:19     conn.request(
18:16:19 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:441: in request
18:16:19     self.endheaders()
18:16:19 /opt/pyenv/versions/3.11.7/lib/python3.11/http/client.py:1289: in endheaders
18:16:19     self._send_output(message_body, encode_chunked=encode_chunked)
18:16:19 /opt/pyenv/versions/3.11.7/lib/python3.11/http/client.py:1048: in _send_output
18:16:19     self.send(msg)
18:16:19 /opt/pyenv/versions/3.11.7/lib/python3.11/http/client.py:986: in send
18:16:19     self.connect()
18:16:19 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:279: in connect
18:16:19     self.sock = self._new_conn()
18:16:19 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
18:16:19 
18:16:19 self = 
18:16:19 
18:16:19     def _new_conn(self) -> socket.socket:
18:16:19         """Establish a socket connection and set nodelay settings on it.
18:16:19 
18:16:19         :return: New socket connection.
18:16:19         """
18:16:19         try:
18:16:19             sock = connection.create_connection(
18:16:19                 (self._dns_host, self.port),
18:16:19                 self.timeout,
18:16:19                 source_address=self.source_address,
18:16:19                 socket_options=self.socket_options,
18:16:19             )
18:16:19         except socket.gaierror as e:
18:16:19             raise NameResolutionError(self.host, self, e) from e
18:16:19         except SocketTimeout as e:
18:16:19             raise ConnectTimeoutError(
18:16:19                 self,
18:16:19                 f"Connection to {self.host} timed out. (connect timeout={self.timeout})",
18:16:19             ) from e
18:16:19 
18:16:19         except OSError as e:
18:16:19 >           raise NewConnectionError(
18:16:19                 self, f"Failed to establish a new connection: {e}"
18:16:19             ) from e
18:16:19 E           urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused
18:16:19 
18:16:19 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:214: NewConnectionError
18:16:19 
18:16:19 The above exception was the direct cause of the following exception:
18:16:19 
18:16:19 self = 
18:16:19 request = , stream = False
18:16:19 timeout = Timeout(connect=10, read=10, total=None), verify = True, cert = None
18:16:19 proxies = OrderedDict()
18:16:19 
18:16:19     def send(
18:16:19         self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None
18:16:19     ):
18:16:19         """Sends PreparedRequest object. Returns Response object.
18:16:19 
18:16:19         :param request: The :class:`PreparedRequest ` being sent.
18:16:19         :param stream: (optional) Whether to stream the request content.
18:16:19         :param timeout: (optional) How long to wait for the server to send
18:16:19             data before giving up, as a float, or a :ref:`(connect timeout,
18:16:19             read timeout) ` tuple.
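The exception chain above (ConnectionRefusedError, wrapped as NewConnectionError, wrapped as MaxRetryError) can be reproduced directly against a urllib3 connection pool. With Retry(total=0), as in the Retry(total=0, connect=None, read=False, ...) shown in these frames, the first connect failure already exhausts the retry budget. The free-port probe is again an assumption:

```python
import socket

from urllib3.connectionpool import HTTPConnectionPool
from urllib3.exceptions import MaxRetryError, NewConnectionError
from urllib3.util.retry import Retry

# Find a local port with no listener (assumption: it stays free briefly).
probe = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
probe.bind(("127.0.0.1", 0))
port = probe.getsockname()[1]
probe.close()

pool = HTTPConnectionPool("127.0.0.1", port, timeout=10)
caught = None
try:
    # total=0: the first connection error already exhausts the budget.
    pool.urlopen("GET", "/", retries=Retry(total=0), redirect=False)
except MaxRetryError as exc:
    caught = exc

print(type(caught.reason).__name__)
```

The MaxRetryError's .reason attribute carries the underlying NewConnectionError, which is what HTTPAdapter.send() above inspects before deciding which requests exception to raise.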
18:16:19         :type timeout: float or tuple or urllib3 Timeout object
18:16:19         :param verify: (optional) Either a boolean, in which case it controls whether
18:16:19             we verify the server's TLS certificate, or a string, in which case it
18:16:19             must be a path to a CA bundle to use
18:16:19         :param cert: (optional) Any user-provided SSL certificate to be trusted.
18:16:19         :param proxies: (optional) The proxies dictionary to apply to the request.
18:16:19         :rtype: requests.Response
18:16:19         """
18:16:19 
18:16:19         try:
18:16:19             conn = self.get_connection_with_tls_context(
18:16:19                 request, verify, proxies=proxies, cert=cert
18:16:19             )
18:16:19         except LocationValueError as e:
18:16:19             raise InvalidURL(e, request=request)
18:16:19 
18:16:19         self.cert_verify(conn, request.url, verify, cert)
18:16:19         url = self.request_url(request, proxies)
18:16:19         self.add_headers(
18:16:19             request,
18:16:19             stream=stream,
18:16:19             timeout=timeout,
18:16:19             verify=verify,
18:16:19             cert=cert,
18:16:19             proxies=proxies,
18:16:19         )
18:16:19 
18:16:19         chunked = not (request.body is None or "Content-Length" in request.headers)
18:16:19 
18:16:19         if isinstance(timeout, tuple):
18:16:19             try:
18:16:19                 connect, read = timeout
18:16:19                 timeout = TimeoutSauce(connect=connect, read=read)
18:16:19             except ValueError:
18:16:19                 raise ValueError(
18:16:19                     f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, "
18:16:19                     f"or a single float to set both timeouts to the same value."
18:16:19                 )
18:16:19         elif isinstance(timeout, TimeoutSauce):
18:16:19             pass
18:16:19         else:
18:16:19             timeout = TimeoutSauce(connect=timeout, read=timeout)
18:16:19 
18:16:19         try:
18:16:19 >           resp = conn.urlopen(
18:16:19                 method=request.method,
18:16:19                 url=url,
18:16:19                 body=request.body,
18:16:19                 headers=request.headers,
18:16:19                 redirect=False,
18:16:19                 assert_same_host=False,
18:16:19                 preload_content=False,
18:16:19                 decode_content=False,
18:16:19                 retries=self.max_retries,
18:16:19                 timeout=timeout,
18:16:19                 chunked=chunked,
18:16:19             )
18:16:19 
18:16:19 ../.tox/tests121/lib/python3.11/site-packages/requests/adapters.py:667: 
18:16:19 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
18:16:19 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connectionpool.py:843: in urlopen
18:16:19     retries = retries.increment(
18:16:19 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
18:16:19 
18:16:19 self = Retry(total=0, connect=None, read=False, redirect=None, status=None)
18:16:19 method = 'GET'
18:16:19 url = '/rests/data/ietf-network:networks/network=clli-network?content=config'
18:16:19 response = None
18:16:19 error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')
18:16:19 _pool = 
18:16:19 _stacktrace = 
18:16:19 
18:16:19     def increment(
18:16:19         self,
18:16:19         method: str | None = None,
18:16:19         url: str | None = None,
18:16:19         response: BaseHTTPResponse | None = None,
18:16:19         error: Exception | None = None,
18:16:19         _pool: ConnectionPool | None = None,
18:16:19         _stacktrace: TracebackType | None = None,
18:16:19     ) -> Self:
18:16:19         """Return a new Retry object with incremented retry counters.
18:16:19 
18:16:19         :param response: A response object, or None, if the server did not
18:16:19             return a response.
18:16:19         :type response: :class:`~urllib3.response.BaseHTTPResponse`
18:16:19         :param Exception error: An error encountered during the request, or
18:16:19             None if the response was received successfully.
18:16:19 
18:16:19         :return: A new ``Retry`` object.
18:16:19         """
18:16:19         if self.total is False and error:
18:16:19             # Disabled, indicate to re-raise the error.
18:16:19             raise reraise(type(error), error, _stacktrace)
18:16:19 
18:16:19         total = self.total
18:16:19         if total is not None:
18:16:19             total -= 1
18:16:19 
18:16:19         connect = self.connect
18:16:19         read = self.read
18:16:19         redirect = self.redirect
18:16:19         status_count = self.status
18:16:19         other = self.other
18:16:19         cause = "unknown"
18:16:19         status = None
18:16:19         redirect_location = None
18:16:19 
18:16:19         if error and self._is_connection_error(error):
18:16:19             # Connect retry?
18:16:19             if connect is False:
18:16:19                 raise reraise(type(error), error, _stacktrace)
18:16:19             elif connect is not None:
18:16:19                 connect -= 1
18:16:19 
18:16:19         elif error and self._is_read_error(error):
18:16:19             # Read retry?
18:16:19             if read is False or method is None or not self._is_method_retryable(method):
18:16:19                 raise reraise(type(error), error, _stacktrace)
18:16:19             elif read is not None:
18:16:19                 read -= 1
18:16:19 
18:16:19         elif error:
18:16:19             # Other retry?
18:16:19             if other is not None:
18:16:19                 other -= 1
18:16:19 
18:16:19         elif response and response.get_redirect_location():
18:16:19             # Redirect retry?
18:16:19             if redirect is not None:
18:16:19                 redirect -= 1
18:16:19             cause = "too many redirects"
18:16:19             response_redirect_location = response.get_redirect_location()
18:16:19             if response_redirect_location:
18:16:19                 redirect_location = response_redirect_location
18:16:19             status = response.status
18:16:19 
18:16:19         else:
18:16:19             # Incrementing because of a server error like a 500 in
18:16:19             # status_forcelist and the given method is in the allowed_methods
18:16:19             cause = ResponseError.GENERIC_ERROR
18:16:19             if response and response.status:
18:16:19                 if status_count is not None:
18:16:19                     status_count -= 1
18:16:19                 cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status)
18:16:19             status = response.status
18:16:19 
18:16:19         history = self.history + (
18:16:19             RequestHistory(method, url, error, status, redirect_location),
18:16:19         )
18:16:19 
18:16:19         new_retry = self.new(
18:16:19             total=total,
18:16:19             connect=connect,
18:16:19             read=read,
18:16:19             redirect=redirect,
18:16:19             status=status_count,
18:16:19             other=other,
18:16:19             history=history,
18:16:19         )
18:16:19 
18:16:19         if new_retry.is_exhausted():
18:16:19             reason = error or ResponseError(cause)
18:16:19 >           raise MaxRetryError(_pool, url, reason) from reason  # type: ignore[arg-type]
18:16:19 E           urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=8182): Max retries exceeded with url: /rests/data/ietf-network:networks/network=clli-network?content=config (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused'))
18:16:19 
18:16:19 ../.tox/tests121/lib/python3.11/site-packages/urllib3/util/retry.py:519: MaxRetryError
18:16:19 
18:16:19 During handling of the above exception, another exception occurred:
18:16:19 
18:16:19 self = 
18:16:19 
18:16:19     def test_36_getClliNetwork(self):
18:16:19 >       response = test_utils.get_ietf_network_request('clli-network', 'config')
18:16:19 
18:16:19 transportpce_tests/1.2.1/test03_topology.py:700: 
18:16:19 _ _ _ _
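Retry.increment() itself can be exercised in isolation: with total=0 the decremented counter goes negative, is_exhausted() returns True, and increment() raises MaxRetryError carrying the original error as its .reason, which is exactly the frame shown above. A sketch, using a plain OSError as an illustrative stand-in for the NewConnectionError from the log:

```python
from urllib3.exceptions import MaxRetryError
from urllib3.util.retry import Retry

# Same counters as the frame above:
# Retry(total=0, connect=None, read=False, redirect=None, status=None).
retry = Retry(total=0, connect=None, read=False, redirect=None, status=None)

caught = None
try:
    # total drops from 0 to -1, new_retry.is_exhausted() is true, so
    # increment() raises MaxRetryError instead of returning a new Retry.
    retry.increment(method="GET", url="/rests/data",
                    error=OSError(111, "Connection refused"))
except MaxRetryError as exc:
    caught = exc

print(type(caught.reason))
```

This is why the log never shows a second connection attempt: the very first refusal exhausts the budget and propagates straight up to the test.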
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
18:16:19 transportpce_tests/common/test_utils.py:495: in get_ietf_network_request
18:16:19     response = get_request(url[RESTCONF_VERSION].format(*format_args))
18:16:19 transportpce_tests/common/test_utils.py:116: in get_request
18:16:19     return requests.request(
18:16:19 ../.tox/tests121/lib/python3.11/site-packages/requests/api.py:59: in request
18:16:19     return session.request(method=method, url=url, **kwargs)
18:16:19 ../.tox/tests121/lib/python3.11/site-packages/requests/sessions.py:589: in request
18:16:19     resp = self.send(prep, **send_kwargs)
18:16:19 ../.tox/tests121/lib/python3.11/site-packages/requests/sessions.py:703: in send
18:16:19     r = adapter.send(request, **kwargs)
18:16:19 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
18:16:19 
18:16:19 self = 
18:16:19 request = , stream = False
18:16:19 timeout = Timeout(connect=10, read=10, total=None), verify = True, cert = None
18:16:19 proxies = OrderedDict()
18:16:19 
18:16:19     def send(
18:16:19         self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None
18:16:19     ):
18:16:19         """Sends PreparedRequest object. Returns Response object.
18:16:19 
18:16:19         :param request: The :class:`PreparedRequest ` being sent.
18:16:19         :param stream: (optional) Whether to stream the request content.
18:16:19         :param timeout: (optional) How long to wait for the server to send
18:16:19             data before giving up, as a float, or a :ref:`(connect timeout,
18:16:19             read timeout) ` tuple.
18:16:19         :type timeout: float or tuple or urllib3 Timeout object
18:16:19         :param verify: (optional) Either a boolean, in which case it controls whether
18:16:19             we verify the server's TLS certificate, or a string, in which case it
18:16:19             must be a path to a CA bundle to use
18:16:19         :param cert: (optional) Any user-provided SSL certificate to be trusted.
18:16:19         :param proxies: (optional) The proxies dictionary to apply to the request.
18:16:19         :rtype: requests.Response
18:16:19         """
18:16:19 
18:16:19         try:
18:16:19             conn = self.get_connection_with_tls_context(
18:16:19                 request, verify, proxies=proxies, cert=cert
18:16:19             )
18:16:19         except LocationValueError as e:
18:16:19             raise InvalidURL(e, request=request)
18:16:19 
18:16:19         self.cert_verify(conn, request.url, verify, cert)
18:16:19         url = self.request_url(request, proxies)
18:16:19         self.add_headers(
18:16:19             request,
18:16:19             stream=stream,
18:16:19             timeout=timeout,
18:16:19             verify=verify,
18:16:19             cert=cert,
18:16:19             proxies=proxies,
18:16:19         )
18:16:19 
18:16:19         chunked = not (request.body is None or "Content-Length" in request.headers)
18:16:19 
18:16:19         if isinstance(timeout, tuple):
18:16:19             try:
18:16:19                 connect, read = timeout
18:16:19                 timeout = TimeoutSauce(connect=connect, read=read)
18:16:19             except ValueError:
18:16:19                 raise ValueError(
18:16:19                     f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, "
18:16:19                     f"or a single float to set both timeouts to the same value."
18:16:19                 )
18:16:19         elif isinstance(timeout, TimeoutSauce):
18:16:19             pass
18:16:19         else:
18:16:19             timeout = TimeoutSauce(connect=timeout, read=timeout)
18:16:19 
18:16:19         try:
18:16:19             resp = conn.urlopen(
18:16:19                 method=request.method,
18:16:19                 url=url,
18:16:19                 body=request.body,
18:16:19                 headers=request.headers,
18:16:19                 redirect=False,
18:16:19                 assert_same_host=False,
18:16:19                 preload_content=False,
18:16:19                 decode_content=False,
18:16:19                 retries=self.max_retries,
18:16:19                 timeout=timeout,
18:16:19                 chunked=chunked,
18:16:19             )
18:16:19 
18:16:19         except (ProtocolError, OSError) as err:
18:16:19             raise ConnectionError(err, request=request)
18:16:19 
18:16:19         except MaxRetryError as e:
18:16:19             if isinstance(e.reason, ConnectTimeoutError):
18:16:19                 # TODO: Remove this in 3.0.0: see #2811
18:16:19                 if not isinstance(e.reason, NewConnectionError):
18:16:19                     raise ConnectTimeout(e, request=request)
18:16:19 
18:16:19             if isinstance(e.reason, ResponseError):
18:16:19                 raise RetryError(e, request=request)
18:16:19 
18:16:19             if isinstance(e.reason, _ProxyError):
18:16:19                 raise ProxyError(e, request=request)
18:16:19 
18:16:19             if isinstance(e.reason, _SSLError):
18:16:19                 # This branch is for urllib3 v1.22 and later.
18:16:19                 raise SSLError(e, request=request)
18:16:19 
18:16:19 >           raise ConnectionError(e, request=request)
18:16:19 E           requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=8182): Max retries exceeded with url: /rests/data/ietf-network:networks/network=clli-network?content=config (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused'))
18:16:19 
18:16:19 ../.tox/tests121/lib/python3.11/site-packages/requests/adapters.py:700: ConnectionError
18:16:19 ___________ TransportPCETopologyTesting.test_37_getOpenRoadmNetwork ____________
18:16:19 
18:16:19 self = 
18:16:19 
18:16:19     def _new_conn(self) -> socket.socket:
18:16:19         """Establish a socket connection and set nodelay settings on it.
18:16:19 
18:16:19         :return: New socket connection.
18:16:19         """
18:16:19         try:
18:16:19 >           sock = connection.create_connection(
18:16:19                 (self._dns_host, self.port),
18:16:19                 self.timeout,
18:16:19                 source_address=self.source_address,
18:16:19                 socket_options=self.socket_options,
18:16:19             )
18:16:19 
18:16:19 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:199: 
18:16:19 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
18:16:19 ../.tox/tests121/lib/python3.11/site-packages/urllib3/util/connection.py:85: in create_connection
18:16:19     raise err
18:16:19 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
18:16:19 
18:16:19 address = ('localhost', 8182), timeout = 10, source_address = None
18:16:19 socket_options = [(6, 1, 1)]
18:16:19 
18:16:19 def create_connection(
18:16:19     address: tuple[str, int],
18:16:19     timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT,
18:16:19     source_address: tuple[str, int] | None = None,
18:16:19     socket_options: _TYPE_SOCKET_OPTIONS | None = None,
18:16:19 ) -> socket.socket:
18:16:19     """Connect to *address* and return the socket object.
18:16:19 
18:16:19     Convenience function. Connect to *address* (a 2-tuple ``(host,
18:16:19     port)``) and return the socket object. Passing the optional
18:16:19     *timeout* parameter will set the timeout on the socket instance
18:16:19     before attempting to connect. If no *timeout* is supplied, the
18:16:19     global default timeout setting returned by :func:`socket.getdefaulttimeout`
18:16:19     is used. If *source_address* is set it must be a tuple of (host, port)
18:16:19     for the socket to bind as a source address before making the connection.
18:16:19     An host of '' or port 0 tells the OS to use the default.
18:16:19     """
18:16:19 
18:16:19     host, port = address
18:16:19     if host.startswith("["):
18:16:19         host = host.strip("[]")
18:16:19     err = None
18:16:19 
18:16:19     # Using the value from allowed_gai_family() in the context of getaddrinfo lets
18:16:19     # us select whether to work with IPv4 DNS records, IPv6 records, or both.
18:16:19     # The original create_connection function always returns all records.
18:16:19     family = allowed_gai_family()
18:16:19 
18:16:19     try:
18:16:19         host.encode("idna")
18:16:19     except UnicodeError:
18:16:19         raise LocationParseError(f"'{host}', label empty or too long") from None
18:16:19 
18:16:19     for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM):
18:16:19         af, socktype, proto, canonname, sa = res
18:16:19         sock = None
18:16:19         try:
18:16:19             sock = socket.socket(af, socktype, proto)
18:16:19 
18:16:19             # If provided, set socket level options before connecting.
18:16:19             _set_socket_options(sock, socket_options)
18:16:19 
18:16:19             if timeout is not _DEFAULT_TIMEOUT:
18:16:19                 sock.settimeout(timeout)
18:16:19             if source_address:
18:16:19                 sock.bind(source_address)
18:16:19 >           sock.connect(sa)
18:16:19 E           ConnectionRefusedError: [Errno 111] Connection refused
18:16:19 
18:16:19 ../.tox/tests121/lib/python3.11/site-packages/urllib3/util/connection.py:73: ConnectionRefusedError
18:16:19 
18:16:19 The above exception was the direct cause of the following exception:
18:16:19 
18:16:19 self = 
18:16:19 method = 'GET'
18:16:19 url = '/rests/data/ietf-network:networks/network=openroadm-network?content=config'
18:16:19 body = None
18:16:19 headers = {'User-Agent': 'python-requests/2.32.3', 'Accept-Encoding': 'gzip, deflate', 'Accept': 'application/json', 'Connection': 'keep-alive', 'Content-Type': 'application/json', 'Authorization': 'Basic YWRtaW46YWRtaW4='}
18:16:19 retries = Retry(total=0, connect=None, read=False, redirect=None, status=None)
18:16:19 redirect = False, assert_same_host = False
18:16:19 timeout = Timeout(connect=10, read=10, total=None),
pool_timeout = None 18:16:19 release_conn = False, chunked = False, body_pos = None, preload_content = False 18:16:19 decode_content = False, response_kw = {} 18:16:19 parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/rests/data/ietf-network:networks/network=openroadm-network', query='content=config', fragment=None) 18:16:19 destination_scheme = None, conn = None, release_this_conn = True 18:16:19 http_tunnel_required = False, err = None, clean_exit = False 18:16:19 18:16:19 def urlopen( # type: ignore[override] 18:16:19 self, 18:16:19 method: str, 18:16:19 url: str, 18:16:19 body: _TYPE_BODY | None = None, 18:16:19 headers: typing.Mapping[str, str] | None = None, 18:16:19 retries: Retry | bool | int | None = None, 18:16:19 redirect: bool = True, 18:16:19 assert_same_host: bool = True, 18:16:19 timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 18:16:19 pool_timeout: int | None = None, 18:16:19 release_conn: bool | None = None, 18:16:19 chunked: bool = False, 18:16:19 body_pos: _TYPE_BODY_POSITION | None = None, 18:16:19 preload_content: bool = True, 18:16:19 decode_content: bool = True, 18:16:19 **response_kw: typing.Any, 18:16:19 ) -> BaseHTTPResponse: 18:16:19 """ 18:16:19 Get a connection from the pool and perform an HTTP request. This is the 18:16:19 lowest level call for making a request, so you'll need to specify all 18:16:19 the raw details. 18:16:19 18:16:19 .. note:: 18:16:19 18:16:19 More commonly, it's appropriate to use a convenience method 18:16:19 such as :meth:`request`. 18:16:19 18:16:19 .. note:: 18:16:19 18:16:19 `release_conn` will only behave as expected if 18:16:19 `preload_content=False` because we want to make 18:16:19 `preload_content=False` the default behaviour someday soon without 18:16:19 breaking backwards compatibility. 18:16:19 18:16:19 :param method: 18:16:19 HTTP request method (such as GET, POST, PUT, etc.) 18:16:19 18:16:19 :param url: 18:16:19 The URL to perform the request on. 
18:16:19 18:16:19 :param body: 18:16:19 Data to send in the request body, either :class:`str`, :class:`bytes`, 18:16:19 an iterable of :class:`str`/:class:`bytes`, or a file-like object. 18:16:19 18:16:19 :param headers: 18:16:19 Dictionary of custom headers to send, such as User-Agent, 18:16:19 If-None-Match, etc. If None, pool headers are used. If provided, 18:16:19 these headers completely replace any pool-specific headers. 18:16:19 18:16:19 :param retries: 18:16:19 Configure the number of retries to allow before raising a 18:16:19 :class:`~urllib3.exceptions.MaxRetryError` exception. 18:16:19 18:16:19 If ``None`` (default) will retry 3 times, see ``Retry.DEFAULT``. Pass a 18:16:19 :class:`~urllib3.util.retry.Retry` object for fine-grained control 18:16:19 over different types of retries. 18:16:19 Pass an integer number to retry connection errors that many times, 18:16:19 but no other types of errors. Pass zero to never retry. 18:16:19 18:16:19 If ``False``, then retries are disabled and any exception is raised 18:16:19 immediately. Also, instead of raising a MaxRetryError on redirects, 18:16:19 the redirect response will be returned. 18:16:19 18:16:19 :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. 18:16:19 18:16:19 :param redirect: 18:16:19 If True, automatically handle redirects (status codes 301, 302, 18:16:19 303, 307, 308). Each redirect counts as a retry. Disabling retries 18:16:19 will disable redirect, too. 18:16:19 18:16:19 :param assert_same_host: 18:16:19 If ``True``, will make sure that the host of the pool requests is 18:16:19 consistent else will raise HostChangedError. When ``False``, you can 18:16:19 use the pool on an HTTP proxy and request foreign hosts. 18:16:19 18:16:19 :param timeout: 18:16:19 If specified, overrides the default timeout for this one 18:16:19 request. It may be a float (in seconds) or an instance of 18:16:19 :class:`urllib3.util.Timeout`. 
18:16:19 18:16:19 :param pool_timeout: 18:16:19 If set and the pool is set to block=True, then this method will 18:16:19 block for ``pool_timeout`` seconds and raise EmptyPoolError if no 18:16:19 connection is available within the time period. 18:16:19 18:16:19 :param bool preload_content: 18:16:19 If True, the response's body will be preloaded into memory. 18:16:19 18:16:19 :param bool decode_content: 18:16:19 If True, will attempt to decode the body based on the 18:16:19 'content-encoding' header. 18:16:19 18:16:19 :param release_conn: 18:16:19 If False, then the urlopen call will not release the connection 18:16:19 back into the pool once a response is received (but will release if 18:16:19 you read the entire contents of the response such as when 18:16:19 `preload_content=True`). This is useful if you're not preloading 18:16:19 the response's content immediately. You will need to call 18:16:19 ``r.release_conn()`` on the response ``r`` to return the connection 18:16:19 back into the pool. If None, it takes the value of ``preload_content`` 18:16:19 which defaults to ``True``. 18:16:19 18:16:19 :param bool chunked: 18:16:19 If True, urllib3 will send the body using chunked transfer 18:16:19 encoding. Otherwise, urllib3 will send the body using the standard 18:16:19 content-length form. Defaults to False. 18:16:19 18:16:19 :param int body_pos: 18:16:19 Position to seek to in file-like body in the event of a retry or 18:16:19 redirect. Typically this won't need to be set because urllib3 will 18:16:19 auto-populate the value when needed. 
18:16:19 """ 18:16:19 parsed_url = parse_url(url) 18:16:19 destination_scheme = parsed_url.scheme 18:16:19 18:16:19 if headers is None: 18:16:19 headers = self.headers 18:16:19 18:16:19 if not isinstance(retries, Retry): 18:16:19 retries = Retry.from_int(retries, redirect=redirect, default=self.retries) 18:16:19 18:16:19 if release_conn is None: 18:16:19 release_conn = preload_content 18:16:19 18:16:19 # Check host 18:16:19 if assert_same_host and not self.is_same_host(url): 18:16:19 raise HostChangedError(self, url, retries) 18:16:19 18:16:19 # Ensure that the URL we're connecting to is properly encoded 18:16:19 if url.startswith("/"): 18:16:19 url = to_str(_encode_target(url)) 18:16:19 else: 18:16:19 url = to_str(parsed_url.url) 18:16:19 18:16:19 conn = None 18:16:19 18:16:19 # Track whether `conn` needs to be released before 18:16:19 # returning/raising/recursing. Update this variable if necessary, and 18:16:19 # leave `release_conn` constant throughout the function. That way, if 18:16:19 # the function recurses, the original value of `release_conn` will be 18:16:19 # passed down into the recursive call, and its value will be respected. 18:16:19 # 18:16:19 # See issue #651 [1] for details. 18:16:19 # 18:16:19 # [1] 18:16:19 release_this_conn = release_conn 18:16:19 18:16:19 http_tunnel_required = connection_requires_http_tunnel( 18:16:19 self.proxy, self.proxy_config, destination_scheme 18:16:19 ) 18:16:19 18:16:19 # Merge the proxy headers. Only done when not using HTTP CONNECT. We 18:16:19 # have to copy the headers dict so we can safely change it without those 18:16:19 # changes being reflected in anyone else's copy. 18:16:19 if not http_tunnel_required: 18:16:19 headers = headers.copy() # type: ignore[attr-defined] 18:16:19 headers.update(self.proxy_headers) # type: ignore[union-attr] 18:16:19 18:16:19 # Must keep the exception bound to a separate variable or else Python 3 18:16:19 # complains about UnboundLocalError. 
18:16:19 err = None 18:16:19 18:16:19 # Keep track of whether we cleanly exited the except block. This 18:16:19 # ensures we do proper cleanup in finally. 18:16:19 clean_exit = False 18:16:19 18:16:19 # Rewind body position, if needed. Record current position 18:16:19 # for future rewinds in the event of a redirect/retry. 18:16:19 body_pos = set_file_position(body, body_pos) 18:16:19 18:16:19 try: 18:16:19 # Request a connection from the queue. 18:16:19 timeout_obj = self._get_timeout(timeout) 18:16:19 conn = self._get_conn(timeout=pool_timeout) 18:16:19 18:16:19 conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] 18:16:19 18:16:19 # Is this a closed/new connection that requires CONNECT tunnelling? 18:16:19 if self.proxy is not None and http_tunnel_required and conn.is_closed: 18:16:19 try: 18:16:19 self._prepare_proxy(conn) 18:16:19 except (BaseSSLError, OSError, SocketTimeout) as e: 18:16:19 self._raise_timeout( 18:16:19 err=e, url=self.proxy.url, timeout_value=conn.timeout 18:16:19 ) 18:16:19 raise 18:16:19 18:16:19 # If we're going to release the connection in ``finally:``, then 18:16:19 # the response doesn't need to know about the connection. Otherwise 18:16:19 # it will also try to release it and we'll have a double-release 18:16:19 # mess. 
18:16:19 response_conn = conn if not release_conn else None 18:16:19 18:16:19 # Make the request on the HTTPConnection object 18:16:19 > response = self._make_request( 18:16:19 conn, 18:16:19 method, 18:16:19 url, 18:16:19 timeout=timeout_obj, 18:16:19 body=body, 18:16:19 headers=headers, 18:16:19 chunked=chunked, 18:16:19 retries=retries, 18:16:19 response_conn=response_conn, 18:16:19 preload_content=preload_content, 18:16:19 decode_content=decode_content, 18:16:19 **response_kw, 18:16:19 ) 18:16:19 18:16:19 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connectionpool.py:789: 18:16:19 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 18:16:19 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connectionpool.py:495: in _make_request 18:16:19 conn.request( 18:16:19 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:441: in request 18:16:19 self.endheaders() 18:16:19 /opt/pyenv/versions/3.11.7/lib/python3.11/http/client.py:1289: in endheaders 18:16:19 self._send_output(message_body, encode_chunked=encode_chunked) 18:16:19 /opt/pyenv/versions/3.11.7/lib/python3.11/http/client.py:1048: in _send_output 18:16:19 self.send(msg) 18:16:19 /opt/pyenv/versions/3.11.7/lib/python3.11/http/client.py:986: in send 18:16:19 self.connect() 18:16:19 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:279: in connect 18:16:19 self.sock = self._new_conn() 18:16:19 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 18:16:19 18:16:19 self = 18:16:19 18:16:19 def _new_conn(self) -> socket.socket: 18:16:19 """Establish a socket connection and set nodelay settings on it. 18:16:19 18:16:19 :return: New socket connection. 
18:16:19 """ 18:16:19 try: 18:16:19 sock = connection.create_connection( 18:16:19 (self._dns_host, self.port), 18:16:19 self.timeout, 18:16:19 source_address=self.source_address, 18:16:19 socket_options=self.socket_options, 18:16:19 ) 18:16:19 except socket.gaierror as e: 18:16:19 raise NameResolutionError(self.host, self, e) from e 18:16:19 except SocketTimeout as e: 18:16:19 raise ConnectTimeoutError( 18:16:19 self, 18:16:19 f"Connection to {self.host} timed out. (connect timeout={self.timeout})", 18:16:19 ) from e 18:16:19 18:16:19 except OSError as e: 18:16:19 > raise NewConnectionError( 18:16:19 self, f"Failed to establish a new connection: {e}" 18:16:19 ) from e 18:16:19 E urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused 18:16:19 18:16:19 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:214: NewConnectionError 18:16:19 18:16:19 The above exception was the direct cause of the following exception: 18:16:19 18:16:19 self = 18:16:19 request = , stream = False 18:16:19 timeout = Timeout(connect=10, read=10, total=None), verify = True, cert = None 18:16:19 proxies = OrderedDict() 18:16:19 18:16:19 def send( 18:16:19 self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 18:16:19 ): 18:16:19 """Sends PreparedRequest object. Returns Response object. 18:16:19 18:16:19 :param request: The :class:`PreparedRequest ` being sent. 18:16:19 :param stream: (optional) Whether to stream the request content. 18:16:19 :param timeout: (optional) How long to wait for the server to send 18:16:19 data before giving up, as a float, or a :ref:`(connect timeout, 18:16:19 read timeout) ` tuple. 
18:16:19 :type timeout: float or tuple or urllib3 Timeout object 18:16:19 :param verify: (optional) Either a boolean, in which case it controls whether 18:16:19 we verify the server's TLS certificate, or a string, in which case it 18:16:19 must be a path to a CA bundle to use 18:16:19 :param cert: (optional) Any user-provided SSL certificate to be trusted. 18:16:19 :param proxies: (optional) The proxies dictionary to apply to the request. 18:16:19 :rtype: requests.Response 18:16:19 """ 18:16:19 18:16:19 try: 18:16:19 conn = self.get_connection_with_tls_context( 18:16:19 request, verify, proxies=proxies, cert=cert 18:16:19 ) 18:16:19 except LocationValueError as e: 18:16:19 raise InvalidURL(e, request=request) 18:16:19 18:16:19 self.cert_verify(conn, request.url, verify, cert) 18:16:19 url = self.request_url(request, proxies) 18:16:19 self.add_headers( 18:16:19 request, 18:16:19 stream=stream, 18:16:19 timeout=timeout, 18:16:19 verify=verify, 18:16:19 cert=cert, 18:16:19 proxies=proxies, 18:16:19 ) 18:16:19 18:16:19 chunked = not (request.body is None or "Content-Length" in request.headers) 18:16:19 18:16:19 if isinstance(timeout, tuple): 18:16:19 try: 18:16:19 connect, read = timeout 18:16:19 timeout = TimeoutSauce(connect=connect, read=read) 18:16:19 except ValueError: 18:16:19 raise ValueError( 18:16:19 f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 18:16:19 f"or a single float to set both timeouts to the same value." 
18:16:19 ) 18:16:19 elif isinstance(timeout, TimeoutSauce): 18:16:19 pass 18:16:19 else: 18:16:19 timeout = TimeoutSauce(connect=timeout, read=timeout) 18:16:19 18:16:19 try: 18:16:19 > resp = conn.urlopen( 18:16:19 method=request.method, 18:16:19 url=url, 18:16:19 body=request.body, 18:16:19 headers=request.headers, 18:16:19 redirect=False, 18:16:19 assert_same_host=False, 18:16:19 preload_content=False, 18:16:19 decode_content=False, 18:16:19 retries=self.max_retries, 18:16:19 timeout=timeout, 18:16:19 chunked=chunked, 18:16:19 ) 18:16:19 18:16:19 ../.tox/tests121/lib/python3.11/site-packages/requests/adapters.py:667: 18:16:19 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 18:16:19 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connectionpool.py:843: in urlopen 18:16:19 retries = retries.increment( 18:16:19 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 18:16:19 18:16:19 self = Retry(total=0, connect=None, read=False, redirect=None, status=None) 18:16:19 method = 'GET' 18:16:19 url = '/rests/data/ietf-network:networks/network=openroadm-network?content=config' 18:16:19 response = None 18:16:19 error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused') 18:16:19 _pool = 18:16:19 _stacktrace = 18:16:19 18:16:19 def increment( 18:16:19 self, 18:16:19 method: str | None = None, 18:16:19 url: str | None = None, 18:16:19 response: BaseHTTPResponse | None = None, 18:16:19 error: Exception | None = None, 18:16:19 _pool: ConnectionPool | None = None, 18:16:19 _stacktrace: TracebackType | None = None, 18:16:19 ) -> Self: 18:16:19 """Return a new Retry object with incremented retry counters. 18:16:19 18:16:19 :param response: A response object, or None, if the server did not 18:16:19 return a response. 
18:16:19 :type response: :class:`~urllib3.response.BaseHTTPResponse` 18:16:19 :param Exception error: An error encountered during the request, or 18:16:19 None if the response was received successfully. 18:16:19 18:16:19 :return: A new ``Retry`` object. 18:16:19 """ 18:16:19 if self.total is False and error: 18:16:19 # Disabled, indicate to re-raise the error. 18:16:19 raise reraise(type(error), error, _stacktrace) 18:16:19 18:16:19 total = self.total 18:16:19 if total is not None: 18:16:19 total -= 1 18:16:19 18:16:19 connect = self.connect 18:16:19 read = self.read 18:16:19 redirect = self.redirect 18:16:19 status_count = self.status 18:16:19 other = self.other 18:16:19 cause = "unknown" 18:16:19 status = None 18:16:19 redirect_location = None 18:16:19 18:16:19 if error and self._is_connection_error(error): 18:16:19 # Connect retry? 18:16:19 if connect is False: 18:16:19 raise reraise(type(error), error, _stacktrace) 18:16:19 elif connect is not None: 18:16:19 connect -= 1 18:16:19 18:16:19 elif error and self._is_read_error(error): 18:16:19 # Read retry? 18:16:19 if read is False or method is None or not self._is_method_retryable(method): 18:16:19 raise reraise(type(error), error, _stacktrace) 18:16:19 elif read is not None: 18:16:19 read -= 1 18:16:19 18:16:19 elif error: 18:16:19 # Other retry? 18:16:19 if other is not None: 18:16:19 other -= 1 18:16:19 18:16:19 elif response and response.get_redirect_location(): 18:16:19 # Redirect retry? 
18:16:19 if redirect is not None: 18:16:19 redirect -= 1 18:16:19 cause = "too many redirects" 18:16:19 response_redirect_location = response.get_redirect_location() 18:16:19 if response_redirect_location: 18:16:19 redirect_location = response_redirect_location 18:16:19 status = response.status 18:16:19 18:16:19 else: 18:16:19 # Incrementing because of a server error like a 500 in 18:16:19 # status_forcelist and the given method is in the allowed_methods 18:16:19 cause = ResponseError.GENERIC_ERROR 18:16:19 if response and response.status: 18:16:19 if status_count is not None: 18:16:19 status_count -= 1 18:16:19 cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status) 18:16:19 status = response.status 18:16:19 18:16:19 history = self.history + ( 18:16:19 RequestHistory(method, url, error, status, redirect_location), 18:16:19 ) 18:16:19 18:16:19 new_retry = self.new( 18:16:19 total=total, 18:16:19 connect=connect, 18:16:19 read=read, 18:16:19 redirect=redirect, 18:16:19 status=status_count, 18:16:19 other=other, 18:16:19 history=history, 18:16:19 ) 18:16:19 18:16:19 if new_retry.is_exhausted(): 18:16:19 reason = error or ResponseError(cause) 18:16:19 > raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type] 18:16:19 E urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=8182): Max retries exceeded with url: /rests/data/ietf-network:networks/network=openroadm-network?content=config (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 18:16:19 18:16:19 ../.tox/tests121/lib/python3.11/site-packages/urllib3/util/retry.py:519: MaxRetryError 18:16:19 18:16:19 During handling of the above exception, another exception occurred: 18:16:19 18:16:19 self = 18:16:19 18:16:19 def test_37_getOpenRoadmNetwork(self): 18:16:19 > response = test_utils.get_ietf_network_request('openroadm-network', 'config') 18:16:19 18:16:19 transportpce_tests/1.2.1/test03_topology.py:706: 
18:16:19 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 18:16:19 transportpce_tests/common/test_utils.py:495: in get_ietf_network_request 18:16:19 response = get_request(url[RESTCONF_VERSION].format(*format_args)) 18:16:19 transportpce_tests/common/test_utils.py:116: in get_request 18:16:19 return requests.request( 18:16:19 ../.tox/tests121/lib/python3.11/site-packages/requests/api.py:59: in request 18:16:19 return session.request(method=method, url=url, **kwargs) 18:16:19 ../.tox/tests121/lib/python3.11/site-packages/requests/sessions.py:589: in request 18:16:19 resp = self.send(prep, **send_kwargs) 18:16:19 ../.tox/tests121/lib/python3.11/site-packages/requests/sessions.py:703: in send 18:16:19 r = adapter.send(request, **kwargs) 18:16:19 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 18:16:19 18:16:19 self = 18:16:19 request = , stream = False 18:16:19 timeout = Timeout(connect=10, read=10, total=None), verify = True, cert = None 18:16:19 proxies = OrderedDict() 18:16:19 18:16:19 def send( 18:16:19 self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 18:16:19 ): 18:16:19 """Sends PreparedRequest object. Returns Response object. 18:16:19 18:16:19 :param request: The :class:`PreparedRequest ` being sent. 18:16:19 :param stream: (optional) Whether to stream the request content. 18:16:19 :param timeout: (optional) How long to wait for the server to send 18:16:19 data before giving up, as a float, or a :ref:`(connect timeout, 18:16:19 read timeout) ` tuple. 18:16:19 :type timeout: float or tuple or urllib3 Timeout object 18:16:19 :param verify: (optional) Either a boolean, in which case it controls whether 18:16:19 we verify the server's TLS certificate, or a string, in which case it 18:16:19 must be a path to a CA bundle to use 18:16:19 :param cert: (optional) Any user-provided SSL certificate to be trusted. 
18:16:19 :param proxies: (optional) The proxies dictionary to apply to the request. 18:16:19 :rtype: requests.Response 18:16:19 """ 18:16:19 18:16:19 try: 18:16:19 conn = self.get_connection_with_tls_context( 18:16:19 request, verify, proxies=proxies, cert=cert 18:16:19 ) 18:16:19 except LocationValueError as e: 18:16:19 raise InvalidURL(e, request=request) 18:16:19 18:16:19 self.cert_verify(conn, request.url, verify, cert) 18:16:19 url = self.request_url(request, proxies) 18:16:19 self.add_headers( 18:16:19 request, 18:16:19 stream=stream, 18:16:19 timeout=timeout, 18:16:19 verify=verify, 18:16:19 cert=cert, 18:16:19 proxies=proxies, 18:16:19 ) 18:16:19 18:16:19 chunked = not (request.body is None or "Content-Length" in request.headers) 18:16:19 18:16:19 if isinstance(timeout, tuple): 18:16:19 try: 18:16:19 connect, read = timeout 18:16:19 timeout = TimeoutSauce(connect=connect, read=read) 18:16:19 except ValueError: 18:16:19 raise ValueError( 18:16:19 f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 18:16:19 f"or a single float to set both timeouts to the same value." 
18:16:19 ) 18:16:19 elif isinstance(timeout, TimeoutSauce): 18:16:19 pass 18:16:19 else: 18:16:19 timeout = TimeoutSauce(connect=timeout, read=timeout) 18:16:19 18:16:19 try: 18:16:19 resp = conn.urlopen( 18:16:19 method=request.method, 18:16:19 url=url, 18:16:19 body=request.body, 18:16:19 headers=request.headers, 18:16:19 redirect=False, 18:16:19 assert_same_host=False, 18:16:19 preload_content=False, 18:16:19 decode_content=False, 18:16:19 retries=self.max_retries, 18:16:19 timeout=timeout, 18:16:19 chunked=chunked, 18:16:19 ) 18:16:19 18:16:19 except (ProtocolError, OSError) as err: 18:16:19 raise ConnectionError(err, request=request) 18:16:19 18:16:19 except MaxRetryError as e: 18:16:19 if isinstance(e.reason, ConnectTimeoutError): 18:16:19 # TODO: Remove this in 3.0.0: see #2811 18:16:19 if not isinstance(e.reason, NewConnectionError): 18:16:19 raise ConnectTimeout(e, request=request) 18:16:19 18:16:19 if isinstance(e.reason, ResponseError): 18:16:19 raise RetryError(e, request=request) 18:16:19 18:16:19 if isinstance(e.reason, _ProxyError): 18:16:19 raise ProxyError(e, request=request) 18:16:19 18:16:19 if isinstance(e.reason, _SSLError): 18:16:19 # This branch is for urllib3 v1.22 and later. 18:16:19 raise SSLError(e, request=request) 18:16:19 18:16:19 > raise ConnectionError(e, request=request) 18:16:19 E requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=8182): Max retries exceeded with url: /rests/data/ietf-network:networks/network=openroadm-network?content=config (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 18:16:19 18:16:19 ../.tox/tests121/lib/python3.11/site-packages/requests/adapters.py:700: ConnectionError 18:16:19 ________ TransportPCETopologyTesting.test_38_getNodes_OpenRoadmTopology ________ 18:16:19 18:16:19 self = 18:16:19 18:16:19 def _new_conn(self) -> socket.socket: 18:16:19 """Establish a socket connection and set nodelay settings on it. 
18:16:19 18:16:19 :return: New socket connection. 18:16:19 """ 18:16:19 try: 18:16:19 > sock = connection.create_connection( 18:16:19 (self._dns_host, self.port), 18:16:19 self.timeout, 18:16:19 source_address=self.source_address, 18:16:19 socket_options=self.socket_options, 18:16:19 ) 18:16:19 18:16:19 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:199: 18:16:19 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 18:16:19 ../.tox/tests121/lib/python3.11/site-packages/urllib3/util/connection.py:85: in create_connection 18:16:19 raise err 18:16:19 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 18:16:19 18:16:19 address = ('localhost', 8182), timeout = 10, source_address = None 18:16:19 socket_options = [(6, 1, 1)] 18:16:19 18:16:19 def create_connection( 18:16:19 address: tuple[str, int], 18:16:19 timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 18:16:19 source_address: tuple[str, int] | None = None, 18:16:19 socket_options: _TYPE_SOCKET_OPTIONS | None = None, 18:16:19 ) -> socket.socket: 18:16:19 """Connect to *address* and return the socket object. 18:16:19 18:16:19 Convenience function. Connect to *address* (a 2-tuple ``(host, 18:16:19 port)``) and return the socket object. Passing the optional 18:16:19 *timeout* parameter will set the timeout on the socket instance 18:16:19 before attempting to connect. If no *timeout* is supplied, the 18:16:19 global default timeout setting returned by :func:`socket.getdefaulttimeout` 18:16:19 is used. If *source_address* is set it must be a tuple of (host, port) 18:16:19 for the socket to bind as a source address before making the connection. 18:16:19 An host of '' or port 0 tells the OS to use the default. 
18:16:19 """ 18:16:19 18:16:19 host, port = address 18:16:19 if host.startswith("["): 18:16:19 host = host.strip("[]") 18:16:19 err = None 18:16:19 18:16:19 # Using the value from allowed_gai_family() in the context of getaddrinfo lets 18:16:19 # us select whether to work with IPv4 DNS records, IPv6 records, or both. 18:16:19 # The original create_connection function always returns all records. 18:16:19 family = allowed_gai_family() 18:16:19 18:16:19 try: 18:16:19 host.encode("idna") 18:16:19 except UnicodeError: 18:16:19 raise LocationParseError(f"'{host}', label empty or too long") from None 18:16:19 18:16:19 for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM): 18:16:19 af, socktype, proto, canonname, sa = res 18:16:19 sock = None 18:16:19 try: 18:16:19 sock = socket.socket(af, socktype, proto) 18:16:19 18:16:19 # If provided, set socket level options before connecting. 18:16:19 _set_socket_options(sock, socket_options) 18:16:19 18:16:19 if timeout is not _DEFAULT_TIMEOUT: 18:16:19 sock.settimeout(timeout) 18:16:19 if source_address: 18:16:19 sock.bind(source_address) 18:16:19 > sock.connect(sa) 18:16:19 E ConnectionRefusedError: [Errno 111] Connection refused 18:16:19 18:16:19 ../.tox/tests121/lib/python3.11/site-packages/urllib3/util/connection.py:73: ConnectionRefusedError 18:16:19 18:16:19 The above exception was the direct cause of the following exception: 18:16:19 18:16:19 self = 18:16:19 method = 'GET' 18:16:19 url = '/rests/data/ietf-network:networks/network=openroadm-topology?content=config' 18:16:19 body = None 18:16:19 headers = {'User-Agent': 'python-requests/2.32.3', 'Accept-Encoding': 'gzip, deflate', 'Accept': 'application/json', 'Connection': 'keep-alive', 'Content-Type': 'application/json', 'Authorization': 'Basic YWRtaW46YWRtaW4='} 18:16:19 retries = Retry(total=0, connect=None, read=False, redirect=None, status=None) 18:16:19 redirect = False, assert_same_host = False 18:16:19 timeout = Timeout(connect=10, read=10, 
total=None), pool_timeout = None 18:16:19 release_conn = False, chunked = False, body_pos = None, preload_content = False 18:16:19 decode_content = False, response_kw = {} 18:16:19 parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/rests/data/ietf-network:networks/network=openroadm-topology', query='content=config', fragment=None) 18:16:19 destination_scheme = None, conn = None, release_this_conn = True 18:16:19 http_tunnel_required = False, err = None, clean_exit = False 18:16:19 18:16:19 def urlopen( # type: ignore[override] 18:16:19 self, 18:16:19 method: str, 18:16:19 url: str, 18:16:19 body: _TYPE_BODY | None = None, 18:16:19 headers: typing.Mapping[str, str] | None = None, 18:16:19 retries: Retry | bool | int | None = None, 18:16:19 redirect: bool = True, 18:16:19 assert_same_host: bool = True, 18:16:19 timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 18:16:19 pool_timeout: int | None = None, 18:16:19 release_conn: bool | None = None, 18:16:19 chunked: bool = False, 18:16:19 body_pos: _TYPE_BODY_POSITION | None = None, 18:16:19 preload_content: bool = True, 18:16:19 decode_content: bool = True, 18:16:19 **response_kw: typing.Any, 18:16:19 ) -> BaseHTTPResponse: 18:16:19 """ 18:16:19 Get a connection from the pool and perform an HTTP request. This is the 18:16:19 lowest level call for making a request, so you'll need to specify all 18:16:19 the raw details. 18:16:19 18:16:19 .. note:: 18:16:19 18:16:19 More commonly, it's appropriate to use a convenience method 18:16:19 such as :meth:`request`. 18:16:19 18:16:19 .. note:: 18:16:19 18:16:19 `release_conn` will only behave as expected if 18:16:19 `preload_content=False` because we want to make 18:16:19 `preload_content=False` the default behaviour someday soon without 18:16:19 breaking backwards compatibility. 18:16:19 18:16:19 :param method: 18:16:19 HTTP request method (such as GET, POST, PUT, etc.) 18:16:19 18:16:19 :param url: 18:16:19 The URL to perform the request on. 
18:16:19 18:16:19 :param body: 18:16:19 Data to send in the request body, either :class:`str`, :class:`bytes`, 18:16:19 an iterable of :class:`str`/:class:`bytes`, or a file-like object. 18:16:19 18:16:19 :param headers: 18:16:19 Dictionary of custom headers to send, such as User-Agent, 18:16:19 If-None-Match, etc. If None, pool headers are used. If provided, 18:16:19 these headers completely replace any pool-specific headers. 18:16:19 18:16:19 :param retries: 18:16:19 Configure the number of retries to allow before raising a 18:16:19 :class:`~urllib3.exceptions.MaxRetryError` exception. 18:16:19 18:16:19 If ``None`` (default) will retry 3 times, see ``Retry.DEFAULT``. Pass a 18:16:19 :class:`~urllib3.util.retry.Retry` object for fine-grained control 18:16:19 over different types of retries. 18:16:19 Pass an integer number to retry connection errors that many times, 18:16:19 but no other types of errors. Pass zero to never retry. 18:16:19 18:16:19 If ``False``, then retries are disabled and any exception is raised 18:16:19 immediately. Also, instead of raising a MaxRetryError on redirects, 18:16:19 the redirect response will be returned. 18:16:19 18:16:19 :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. 18:16:19 18:16:19 :param redirect: 18:16:19 If True, automatically handle redirects (status codes 301, 302, 18:16:19 303, 307, 308). Each redirect counts as a retry. Disabling retries 18:16:19 will disable redirect, too. 18:16:19 18:16:19 :param assert_same_host: 18:16:19 If ``True``, will make sure that the host of the pool requests is 18:16:19 consistent else will raise HostChangedError. When ``False``, you can 18:16:19 use the pool on an HTTP proxy and request foreign hosts. 18:16:19 18:16:19 :param timeout: 18:16:19 If specified, overrides the default timeout for this one 18:16:19 request. It may be a float (in seconds) or an instance of 18:16:19 :class:`urllib3.util.Timeout`. 
18:16:19 
18:16:19         :param pool_timeout:
18:16:19             If set and the pool is set to block=True, then this method will
18:16:19             block for ``pool_timeout`` seconds and raise EmptyPoolError if no
18:16:19             connection is available within the time period.
18:16:19 
18:16:19         :param bool preload_content:
18:16:19             If True, the response's body will be preloaded into memory.
18:16:19 
18:16:19         :param bool decode_content:
18:16:19             If True, will attempt to decode the body based on the
18:16:19             'content-encoding' header.
18:16:19 
18:16:19         :param release_conn:
18:16:19             If False, then the urlopen call will not release the connection
18:16:19             back into the pool once a response is received (but will release if
18:16:19             you read the entire contents of the response such as when
18:16:19             `preload_content=True`). This is useful if you're not preloading
18:16:19             the response's content immediately. You will need to call
18:16:19             ``r.release_conn()`` on the response ``r`` to return the connection
18:16:19             back into the pool. If None, it takes the value of ``preload_content``
18:16:19             which defaults to ``True``.
18:16:19 
18:16:19         :param bool chunked:
18:16:19             If True, urllib3 will send the body using chunked transfer
18:16:19             encoding. Otherwise, urllib3 will send the body using the standard
18:16:19             content-length form. Defaults to False.
18:16:19 
18:16:19         :param int body_pos:
18:16:19             Position to seek to in file-like body in the event of a retry or
18:16:19             redirect. Typically this won't need to be set because urllib3 will
18:16:19             auto-populate the value when needed.
18:16:19         """
18:16:19         parsed_url = parse_url(url)
18:16:19         destination_scheme = parsed_url.scheme
18:16:19 
18:16:19         if headers is None:
18:16:19             headers = self.headers
18:16:19 
18:16:19         if not isinstance(retries, Retry):
18:16:19             retries = Retry.from_int(retries, redirect=redirect, default=self.retries)
18:16:19 
18:16:19         if release_conn is None:
18:16:19             release_conn = preload_content
18:16:19 
18:16:19         # Check host
18:16:19         if assert_same_host and not self.is_same_host(url):
18:16:19             raise HostChangedError(self, url, retries)
18:16:19 
18:16:19         # Ensure that the URL we're connecting to is properly encoded
18:16:19         if url.startswith("/"):
18:16:19             url = to_str(_encode_target(url))
18:16:19         else:
18:16:19             url = to_str(parsed_url.url)
18:16:19 
18:16:19         conn = None
18:16:19 
18:16:19         # Track whether `conn` needs to be released before
18:16:19         # returning/raising/recursing. Update this variable if necessary, and
18:16:19         # leave `release_conn` constant throughout the function. That way, if
18:16:19         # the function recurses, the original value of `release_conn` will be
18:16:19         # passed down into the recursive call, and its value will be respected.
18:16:19         #
18:16:19         # See issue #651 [1] for details.
18:16:19         #
18:16:19         # [1] 
18:16:19         release_this_conn = release_conn
18:16:19 
18:16:19         http_tunnel_required = connection_requires_http_tunnel(
18:16:19             self.proxy, self.proxy_config, destination_scheme
18:16:19         )
18:16:19 
18:16:19         # Merge the proxy headers. Only done when not using HTTP CONNECT. We
18:16:19         # have to copy the headers dict so we can safely change it without those
18:16:19         # changes being reflected in anyone else's copy.
18:16:19         if not http_tunnel_required:
18:16:19             headers = headers.copy()  # type: ignore[attr-defined]
18:16:19             headers.update(self.proxy_headers)  # type: ignore[union-attr]
18:16:19 
18:16:19         # Must keep the exception bound to a separate variable or else Python 3
18:16:19         # complains about UnboundLocalError.
18:16:19         err = None
18:16:19 
18:16:19         # Keep track of whether we cleanly exited the except block. This
18:16:19         # ensures we do proper cleanup in finally.
18:16:19         clean_exit = False
18:16:19 
18:16:19         # Rewind body position, if needed. Record current position
18:16:19         # for future rewinds in the event of a redirect/retry.
18:16:19         body_pos = set_file_position(body, body_pos)
18:16:19 
18:16:19         try:
18:16:19             # Request a connection from the queue.
18:16:19             timeout_obj = self._get_timeout(timeout)
18:16:19             conn = self._get_conn(timeout=pool_timeout)
18:16:19 
18:16:19             conn.timeout = timeout_obj.connect_timeout  # type: ignore[assignment]
18:16:19 
18:16:19             # Is this a closed/new connection that requires CONNECT tunnelling?
18:16:19             if self.proxy is not None and http_tunnel_required and conn.is_closed:
18:16:19                 try:
18:16:19                     self._prepare_proxy(conn)
18:16:19                 except (BaseSSLError, OSError, SocketTimeout) as e:
18:16:19                     self._raise_timeout(
18:16:19                         err=e, url=self.proxy.url, timeout_value=conn.timeout
18:16:19                     )
18:16:19                     raise
18:16:19 
18:16:19             # If we're going to release the connection in ``finally:``, then
18:16:19             # the response doesn't need to know about the connection. Otherwise
18:16:19             # it will also try to release it and we'll have a double-release
18:16:19             # mess.
18:16:19             response_conn = conn if not release_conn else None
18:16:19 
18:16:19             # Make the request on the HTTPConnection object
18:16:19 >           response = self._make_request(
18:16:19                 conn,
18:16:19                 method,
18:16:19                 url,
18:16:19                 timeout=timeout_obj,
18:16:19                 body=body,
18:16:19                 headers=headers,
18:16:19                 chunked=chunked,
18:16:19                 retries=retries,
18:16:19                 response_conn=response_conn,
18:16:19                 preload_content=preload_content,
18:16:19                 decode_content=decode_content,
18:16:19                 **response_kw,
18:16:19             )
18:16:19 
18:16:19 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connectionpool.py:789: 
18:16:19 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
18:16:19 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connectionpool.py:495: in _make_request
18:16:19     conn.request(
18:16:19 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:441: in request
18:16:19     self.endheaders()
18:16:19 /opt/pyenv/versions/3.11.7/lib/python3.11/http/client.py:1289: in endheaders
18:16:19     self._send_output(message_body, encode_chunked=encode_chunked)
18:16:19 /opt/pyenv/versions/3.11.7/lib/python3.11/http/client.py:1048: in _send_output
18:16:19     self.send(msg)
18:16:19 /opt/pyenv/versions/3.11.7/lib/python3.11/http/client.py:986: in send
18:16:19     self.connect()
18:16:19 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:279: in connect
18:16:19     self.sock = self._new_conn()
18:16:19 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
18:16:19 
18:16:19 self = 
18:16:19 
18:16:19     def _new_conn(self) -> socket.socket:
18:16:19         """Establish a socket connection and set nodelay settings on it.
18:16:19 
18:16:19         :return: New socket connection.
18:16:19         """
18:16:19         try:
18:16:19             sock = connection.create_connection(
18:16:19                 (self._dns_host, self.port),
18:16:19                 self.timeout,
18:16:19                 source_address=self.source_address,
18:16:19                 socket_options=self.socket_options,
18:16:19             )
18:16:19         except socket.gaierror as e:
18:16:19             raise NameResolutionError(self.host, self, e) from e
18:16:19         except SocketTimeout as e:
18:16:19             raise ConnectTimeoutError(
18:16:19                 self,
18:16:19                 f"Connection to {self.host} timed out. (connect timeout={self.timeout})",
18:16:19             ) from e
18:16:19 
18:16:19         except OSError as e:
18:16:19 >           raise NewConnectionError(
18:16:19                 self, f"Failed to establish a new connection: {e}"
18:16:19             ) from e
18:16:19 E           urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused
18:16:19 
18:16:19 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:214: NewConnectionError
18:16:19 
18:16:19 The above exception was the direct cause of the following exception:
18:16:19 
18:16:19 self = 
18:16:19 request = , stream = False
18:16:19 timeout = Timeout(connect=10, read=10, total=None), verify = True, cert = None
18:16:19 proxies = OrderedDict()
18:16:19 
18:16:19     def send(
18:16:19         self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None
18:16:19     ):
18:16:19         """Sends PreparedRequest object. Returns Response object.
18:16:19 
18:16:19         :param request: The :class:`PreparedRequest ` being sent.
18:16:19         :param stream: (optional) Whether to stream the request content.
18:16:19         :param timeout: (optional) How long to wait for the server to send
18:16:19             data before giving up, as a float, or a :ref:`(connect timeout,
18:16:19             read timeout) ` tuple.
18:16:19         :type timeout: float or tuple or urllib3 Timeout object
18:16:19         :param verify: (optional) Either a boolean, in which case it controls whether
18:16:19             we verify the server's TLS certificate, or a string, in which case it
18:16:19             must be a path to a CA bundle to use
18:16:19         :param cert: (optional) Any user-provided SSL certificate to be trusted.
18:16:19         :param proxies: (optional) The proxies dictionary to apply to the request.
18:16:19         :rtype: requests.Response
18:16:19         """
18:16:19 
18:16:19         try:
18:16:19             conn = self.get_connection_with_tls_context(
18:16:19                 request, verify, proxies=proxies, cert=cert
18:16:19             )
18:16:19         except LocationValueError as e:
18:16:19             raise InvalidURL(e, request=request)
18:16:19 
18:16:19         self.cert_verify(conn, request.url, verify, cert)
18:16:19         url = self.request_url(request, proxies)
18:16:19         self.add_headers(
18:16:19             request,
18:16:19             stream=stream,
18:16:19             timeout=timeout,
18:16:19             verify=verify,
18:16:19             cert=cert,
18:16:19             proxies=proxies,
18:16:19         )
18:16:19 
18:16:19         chunked = not (request.body is None or "Content-Length" in request.headers)
18:16:19 
18:16:19         if isinstance(timeout, tuple):
18:16:19             try:
18:16:19                 connect, read = timeout
18:16:19                 timeout = TimeoutSauce(connect=connect, read=read)
18:16:19             except ValueError:
18:16:19                 raise ValueError(
18:16:19                     f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, "
18:16:19                     f"or a single float to set both timeouts to the same value."
18:16:19                 )
18:16:19         elif isinstance(timeout, TimeoutSauce):
18:16:19             pass
18:16:19         else:
18:16:19             timeout = TimeoutSauce(connect=timeout, read=timeout)
18:16:19 
18:16:19         try:
18:16:19 >           resp = conn.urlopen(
18:16:19                 method=request.method,
18:16:19                 url=url,
18:16:19                 body=request.body,
18:16:19                 headers=request.headers,
18:16:19                 redirect=False,
18:16:19                 assert_same_host=False,
18:16:19                 preload_content=False,
18:16:19                 decode_content=False,
18:16:19                 retries=self.max_retries,
18:16:19                 timeout=timeout,
18:16:19                 chunked=chunked,
18:16:19             )
18:16:19 
18:16:19 ../.tox/tests121/lib/python3.11/site-packages/requests/adapters.py:667: 
18:16:19 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
18:16:19 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connectionpool.py:843: in urlopen
18:16:19     retries = retries.increment(
18:16:19 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
18:16:19 
18:16:19 self = Retry(total=0, connect=None, read=False, redirect=None, status=None)
18:16:19 method = 'GET'
18:16:19 url = '/rests/data/ietf-network:networks/network=openroadm-topology?content=config'
18:16:19 response = None
18:16:19 error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')
18:16:19 _pool = 
18:16:19 _stacktrace = 
18:16:19 
18:16:19     def increment(
18:16:19         self,
18:16:19         method: str | None = None,
18:16:19         url: str | None = None,
18:16:19         response: BaseHTTPResponse | None = None,
18:16:19         error: Exception | None = None,
18:16:19         _pool: ConnectionPool | None = None,
18:16:19         _stacktrace: TracebackType | None = None,
18:16:19     ) -> Self:
18:16:19         """Return a new Retry object with incremented retry counters.
18:16:19 
18:16:19         :param response: A response object, or None, if the server did not
18:16:19             return a response.
18:16:19         :type response: :class:`~urllib3.response.BaseHTTPResponse`
18:16:19         :param Exception error: An error encountered during the request, or
18:16:19             None if the response was received successfully.
18:16:19 
18:16:19         :return: A new ``Retry`` object.
18:16:19         """
18:16:19         if self.total is False and error:
18:16:19             # Disabled, indicate to re-raise the error.
18:16:19             raise reraise(type(error), error, _stacktrace)
18:16:19 
18:16:19         total = self.total
18:16:19         if total is not None:
18:16:19             total -= 1
18:16:19 
18:16:19         connect = self.connect
18:16:19         read = self.read
18:16:19         redirect = self.redirect
18:16:19         status_count = self.status
18:16:19         other = self.other
18:16:19         cause = "unknown"
18:16:19         status = None
18:16:19         redirect_location = None
18:16:19 
18:16:19         if error and self._is_connection_error(error):
18:16:19             # Connect retry?
18:16:19             if connect is False:
18:16:19                 raise reraise(type(error), error, _stacktrace)
18:16:19             elif connect is not None:
18:16:19                 connect -= 1
18:16:19 
18:16:19         elif error and self._is_read_error(error):
18:16:19             # Read retry?
18:16:19             if read is False or method is None or not self._is_method_retryable(method):
18:16:19                 raise reraise(type(error), error, _stacktrace)
18:16:19             elif read is not None:
18:16:19                 read -= 1
18:16:19 
18:16:19         elif error:
18:16:19             # Other retry?
18:16:19             if other is not None:
18:16:19                 other -= 1
18:16:19 
18:16:19         elif response and response.get_redirect_location():
18:16:19             # Redirect retry?
18:16:19             if redirect is not None:
18:16:19                 redirect -= 1
18:16:19             cause = "too many redirects"
18:16:19             response_redirect_location = response.get_redirect_location()
18:16:19             if response_redirect_location:
18:16:19                 redirect_location = response_redirect_location
18:16:19             status = response.status
18:16:19 
18:16:19         else:
18:16:19             # Incrementing because of a server error like a 500 in
18:16:19             # status_forcelist and the given method is in the allowed_methods
18:16:19             cause = ResponseError.GENERIC_ERROR
18:16:19             if response and response.status:
18:16:19                 if status_count is not None:
18:16:19                     status_count -= 1
18:16:19                 cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status)
18:16:19                 status = response.status
18:16:19 
18:16:19         history = self.history + (
18:16:19             RequestHistory(method, url, error, status, redirect_location),
18:16:19         )
18:16:19 
18:16:19         new_retry = self.new(
18:16:19             total=total,
18:16:19             connect=connect,
18:16:19             read=read,
18:16:19             redirect=redirect,
18:16:19             status=status_count,
18:16:19             other=other,
18:16:19             history=history,
18:16:19         )
18:16:19 
18:16:19         if new_retry.is_exhausted():
18:16:19             reason = error or ResponseError(cause)
18:16:19 >           raise MaxRetryError(_pool, url, reason) from reason  # type: ignore[arg-type]
18:16:19 E           urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=8182): Max retries exceeded with url: /rests/data/ietf-network:networks/network=openroadm-topology?content=config (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused'))
18:16:19 
18:16:19 ../.tox/tests121/lib/python3.11/site-packages/urllib3/util/retry.py:519: MaxRetryError
18:16:19 
18:16:19 During handling of the above exception, another exception occurred:
18:16:19 
18:16:19 self = 
18:16:19 
18:16:19     def test_38_getNodes_OpenRoadmTopology(self):
18:16:19 >       response = test_utils.get_ietf_network_request('openroadm-topology', 'config')
18:16:19 
18:16:19 
transportpce_tests/1.2.1/test03_topology.py:712: 
18:16:19 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
18:16:19 transportpce_tests/common/test_utils.py:495: in get_ietf_network_request
18:16:19     response = get_request(url[RESTCONF_VERSION].format(*format_args))
18:16:19 transportpce_tests/common/test_utils.py:116: in get_request
18:16:19     return requests.request(
18:16:19 ../.tox/tests121/lib/python3.11/site-packages/requests/api.py:59: in request
18:16:19     return session.request(method=method, url=url, **kwargs)
18:16:19 ../.tox/tests121/lib/python3.11/site-packages/requests/sessions.py:589: in request
18:16:19     resp = self.send(prep, **send_kwargs)
18:16:19 ../.tox/tests121/lib/python3.11/site-packages/requests/sessions.py:703: in send
18:16:19     r = adapter.send(request, **kwargs)
18:16:19 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
18:16:19 
18:16:19 self = 
18:16:19 request = , stream = False
18:16:19 timeout = Timeout(connect=10, read=10, total=None), verify = True, cert = None
18:16:19 proxies = OrderedDict()
18:16:19 
18:16:19     def send(
18:16:19         self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None
18:16:19     ):
18:16:19         """Sends PreparedRequest object. Returns Response object.
18:16:19 
18:16:19         :param request: The :class:`PreparedRequest ` being sent.
18:16:19         :param stream: (optional) Whether to stream the request content.
18:16:19         :param timeout: (optional) How long to wait for the server to send
18:16:19             data before giving up, as a float, or a :ref:`(connect timeout,
18:16:19             read timeout) ` tuple.
18:16:19         :type timeout: float or tuple or urllib3 Timeout object
18:16:19         :param verify: (optional) Either a boolean, in which case it controls whether
18:16:19             we verify the server's TLS certificate, or a string, in which case it
18:16:19             must be a path to a CA bundle to use
18:16:19         :param cert: (optional) Any user-provided SSL certificate to be trusted.
18:16:19         :param proxies: (optional) The proxies dictionary to apply to the request.
18:16:19         :rtype: requests.Response
18:16:19         """
18:16:19 
18:16:19         try:
18:16:19             conn = self.get_connection_with_tls_context(
18:16:19                 request, verify, proxies=proxies, cert=cert
18:16:19             )
18:16:19         except LocationValueError as e:
18:16:19             raise InvalidURL(e, request=request)
18:16:19 
18:16:19         self.cert_verify(conn, request.url, verify, cert)
18:16:19         url = self.request_url(request, proxies)
18:16:19         self.add_headers(
18:16:19             request,
18:16:19             stream=stream,
18:16:19             timeout=timeout,
18:16:19             verify=verify,
18:16:19             cert=cert,
18:16:19             proxies=proxies,
18:16:19         )
18:16:19 
18:16:19         chunked = not (request.body is None or "Content-Length" in request.headers)
18:16:19 
18:16:19         if isinstance(timeout, tuple):
18:16:19             try:
18:16:19                 connect, read = timeout
18:16:19                 timeout = TimeoutSauce(connect=connect, read=read)
18:16:19             except ValueError:
18:16:19                 raise ValueError(
18:16:19                     f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, "
18:16:19                     f"or a single float to set both timeouts to the same value."
18:16:19                 )
18:16:19         elif isinstance(timeout, TimeoutSauce):
18:16:19             pass
18:16:19         else:
18:16:19             timeout = TimeoutSauce(connect=timeout, read=timeout)
18:16:19 
18:16:19         try:
18:16:19             resp = conn.urlopen(
18:16:19                 method=request.method,
18:16:19                 url=url,
18:16:19                 body=request.body,
18:16:19                 headers=request.headers,
18:16:19                 redirect=False,
18:16:19                 assert_same_host=False,
18:16:19                 preload_content=False,
18:16:19                 decode_content=False,
18:16:19                 retries=self.max_retries,
18:16:19                 timeout=timeout,
18:16:19                 chunked=chunked,
18:16:19             )
18:16:19 
18:16:19         except (ProtocolError, OSError) as err:
18:16:19             raise ConnectionError(err, request=request)
18:16:19 
18:16:19         except MaxRetryError as e:
18:16:19             if isinstance(e.reason, ConnectTimeoutError):
18:16:19                 # TODO: Remove this in 3.0.0: see #2811
18:16:19                 if not isinstance(e.reason, NewConnectionError):
18:16:19                     raise ConnectTimeout(e, request=request)
18:16:19 
18:16:19             if isinstance(e.reason, ResponseError):
18:16:19                 raise RetryError(e, request=request)
18:16:19 
18:16:19             if isinstance(e.reason, _ProxyError):
18:16:19                 raise ProxyError(e, request=request)
18:16:19 
18:16:19             if isinstance(e.reason, _SSLError):
18:16:19                 # This branch is for urllib3 v1.22 and later.
18:16:19                 raise SSLError(e, request=request)
18:16:19 
18:16:19 >           raise ConnectionError(e, request=request)
18:16:19 E           requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=8182): Max retries exceeded with url: /rests/data/ietf-network:networks/network=openroadm-topology?content=config (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused'))
18:16:19 
18:16:19 ../.tox/tests121/lib/python3.11/site-packages/requests/adapters.py:700: ConnectionError
18:16:19 _______ TransportPCETopologyTesting.test_39_disconnect_ROADM_XPDRA_link ________
18:16:19 
18:16:19 self = 
18:16:19 
18:16:19     def _new_conn(self) -> socket.socket:
18:16:19         """Establish a socket connection and set nodelay settings on it.
18:16:19 
18:16:19         :return: New socket connection.
18:16:19         """
18:16:19         try:
18:16:19 >           sock = connection.create_connection(
18:16:19                 (self._dns_host, self.port),
18:16:19                 self.timeout,
18:16:19                 source_address=self.source_address,
18:16:19                 socket_options=self.socket_options,
18:16:19             )
18:16:19 
18:16:19 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:199: 
18:16:19 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
18:16:19 ../.tox/tests121/lib/python3.11/site-packages/urllib3/util/connection.py:85: in create_connection
18:16:19     raise err
18:16:19 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
18:16:19 
18:16:19 address = ('localhost', 8182), timeout = 10, source_address = None
18:16:19 socket_options = [(6, 1, 1)]
18:16:19 
18:16:19 def create_connection(
18:16:19     address: tuple[str, int],
18:16:19     timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT,
18:16:19     source_address: tuple[str, int] | None = None,
18:16:19     socket_options: _TYPE_SOCKET_OPTIONS | None = None,
18:16:19 ) -> socket.socket:
18:16:19     """Connect to *address* and return the socket object.
18:16:19 
18:16:19     Convenience function. Connect to *address* (a 2-tuple ``(host,
18:16:19     port)``) and return the socket object. Passing the optional
18:16:19     *timeout* parameter will set the timeout on the socket instance
18:16:19     before attempting to connect. If no *timeout* is supplied, the
18:16:19     global default timeout setting returned by :func:`socket.getdefaulttimeout`
18:16:19     is used. If *source_address* is set it must be a tuple of (host, port)
18:16:19     for the socket to bind as a source address before making the connection.
18:16:19     An host of '' or port 0 tells the OS to use the default.
18:16:19     """
18:16:19 
18:16:19     host, port = address
18:16:19     if host.startswith("["):
18:16:19         host = host.strip("[]")
18:16:19     err = None
18:16:19 
18:16:19     # Using the value from allowed_gai_family() in the context of getaddrinfo lets
18:16:19     # us select whether to work with IPv4 DNS records, IPv6 records, or both.
18:16:19     # The original create_connection function always returns all records.
18:16:19     family = allowed_gai_family()
18:16:19 
18:16:19     try:
18:16:19         host.encode("idna")
18:16:19     except UnicodeError:
18:16:19         raise LocationParseError(f"'{host}', label empty or too long") from None
18:16:19 
18:16:19     for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM):
18:16:19         af, socktype, proto, canonname, sa = res
18:16:19         sock = None
18:16:19         try:
18:16:19             sock = socket.socket(af, socktype, proto)
18:16:19 
18:16:19             # If provided, set socket level options before connecting.
18:16:19             _set_socket_options(sock, socket_options)
18:16:19 
18:16:19             if timeout is not _DEFAULT_TIMEOUT:
18:16:19                 sock.settimeout(timeout)
18:16:19             if source_address:
18:16:19                 sock.bind(source_address)
18:16:19 >           sock.connect(sa)
18:16:19 E           ConnectionRefusedError: [Errno 111] Connection refused
18:16:19 
18:16:19 ../.tox/tests121/lib/python3.11/site-packages/urllib3/util/connection.py:73: ConnectionRefusedError
18:16:19 
18:16:19 The above exception was the direct cause of the following exception:
18:16:19 
18:16:19 self = 
18:16:19 method = 'DELETE'
18:16:19 url = '/rests/data/ietf-network:networks/network=openroadm-topology/ietf-network-topology:link=XPDRA01-XPDR1-XPDR1-NETWORK1toROADMA01-SRG1-SRG1-PP1-TXRX?content=config'
18:16:19 body = None
18:16:19 headers = {'User-Agent': 'python-requests/2.32.3', 'Accept-Encoding': 'gzip, deflate', 'Accept': 'application/json', 'Connection': 'keep-alive', 'Content-Type': 'application/json', 'Content-Length': '0', 'Authorization': 'Basic YWRtaW46YWRtaW4='}
18:16:19 retries = Retry(total=0, connect=None, read=False, redirect=None, status=None)
18:16:19 redirect = False, assert_same_host = False
18:16:19 timeout = Timeout(connect=10, read=10, total=None), pool_timeout = None
18:16:19 release_conn = False, chunked = False, body_pos = None, preload_content = False
18:16:19 decode_content = False, response_kw = {}
18:16:19 parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/rests/data/ietf-network:networks/network=openroadm-topology/i...etwork-topology:link=XPDRA01-XPDR1-XPDR1-NETWORK1toROADMA01-SRG1-SRG1-PP1-TXRX', query='content=config', fragment=None)
18:16:19 destination_scheme = None, conn = None, release_this_conn = True
18:16:19 http_tunnel_required = False, err = None, clean_exit = False
18:16:19 
18:16:19     def urlopen(  # type: ignore[override]
18:16:19         self,
18:16:19         method: str,
18:16:19         url: str,
18:16:19         body: _TYPE_BODY | None = None,
18:16:19         headers: typing.Mapping[str, str] | None = None,
18:16:19         retries: Retry | bool | int | None = None,
18:16:19         redirect: bool = True,
18:16:19         assert_same_host: bool = True,
18:16:19         timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT,
18:16:19         pool_timeout: int | None = None,
18:16:19         release_conn: bool | None = None,
18:16:19         chunked: bool = False,
18:16:19         body_pos: _TYPE_BODY_POSITION | None = None,
18:16:19         preload_content: bool = True,
18:16:19         decode_content: bool = True,
18:16:19         **response_kw: typing.Any,
18:16:19     ) -> BaseHTTPResponse:
18:16:19         """
18:16:19         Get a connection from the pool and perform an HTTP request. This is the
18:16:19         lowest level call for making a request, so you'll need to specify all
18:16:19         the raw details.
18:16:19 
18:16:19         .. note::
18:16:19 
18:16:19            More commonly, it's appropriate to use a convenience method
18:16:19            such as :meth:`request`.
18:16:19 
18:16:19         .. note::
18:16:19 
18:16:19            `release_conn` will only behave as expected if
18:16:19            `preload_content=False` because we want to make
18:16:19            `preload_content=False` the default behaviour someday soon without
18:16:19            breaking backwards compatibility.
18:16:19 
18:16:19         :param method:
18:16:19             HTTP request method (such as GET, POST, PUT, etc.)
18:16:19 
18:16:19         :param url:
18:16:19             The URL to perform the request on.
18:16:19 
18:16:19         :param body:
18:16:19             Data to send in the request body, either :class:`str`, :class:`bytes`,
18:16:19             an iterable of :class:`str`/:class:`bytes`, or a file-like object.
18:16:19 
18:16:19         :param headers:
18:16:19             Dictionary of custom headers to send, such as User-Agent,
18:16:19             If-None-Match, etc. If None, pool headers are used. If provided,
18:16:19             these headers completely replace any pool-specific headers.
18:16:19 
18:16:19         :param retries:
18:16:19             Configure the number of retries to allow before raising a
18:16:19             :class:`~urllib3.exceptions.MaxRetryError` exception.
18:16:19 
18:16:19             If ``None`` (default) will retry 3 times, see ``Retry.DEFAULT``. Pass a
18:16:19             :class:`~urllib3.util.retry.Retry` object for fine-grained control
18:16:19             over different types of retries.
18:16:19             Pass an integer number to retry connection errors that many times,
18:16:19             but no other types of errors. Pass zero to never retry.
18:16:19 
18:16:19             If ``False``, then retries are disabled and any exception is raised
18:16:19             immediately. Also, instead of raising a MaxRetryError on redirects,
18:16:19             the redirect response will be returned.
18:16:19 
18:16:19         :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int.
18:16:19 
18:16:19         :param redirect:
18:16:19             If True, automatically handle redirects (status codes 301, 302,
18:16:19             303, 307, 308). Each redirect counts as a retry. Disabling retries
18:16:19             will disable redirect, too.
18:16:19 
18:16:19         :param assert_same_host:
18:16:19             If ``True``, will make sure that the host of the pool requests is
18:16:19             consistent else will raise HostChangedError. When ``False``, you can
18:16:19             use the pool on an HTTP proxy and request foreign hosts.
18:16:19 
18:16:19         :param timeout:
18:16:19             If specified, overrides the default timeout for this one
18:16:19             request. It may be a float (in seconds) or an instance of
18:16:19             :class:`urllib3.util.Timeout`.
18:16:19 
18:16:19         :param pool_timeout:
18:16:19             If set and the pool is set to block=True, then this method will
18:16:19             block for ``pool_timeout`` seconds and raise EmptyPoolError if no
18:16:19             connection is available within the time period.
18:16:19 
18:16:19         :param bool preload_content:
18:16:19             If True, the response's body will be preloaded into memory.
18:16:19 
18:16:19         :param bool decode_content:
18:16:19             If True, will attempt to decode the body based on the
18:16:19             'content-encoding' header.
18:16:19 
18:16:19         :param release_conn:
18:16:19             If False, then the urlopen call will not release the connection
18:16:19             back into the pool once a response is received (but will release if
18:16:19             you read the entire contents of the response such as when
18:16:19             `preload_content=True`). This is useful if you're not preloading
18:16:19             the response's content immediately. You will need to call
18:16:19             ``r.release_conn()`` on the response ``r`` to return the connection
18:16:19             back into the pool. If None, it takes the value of ``preload_content``
18:16:19             which defaults to ``True``.
18:16:19 
18:16:19         :param bool chunked:
18:16:19             If True, urllib3 will send the body using chunked transfer
18:16:19             encoding. Otherwise, urllib3 will send the body using the standard
18:16:19             content-length form. Defaults to False.
18:16:19 
18:16:19         :param int body_pos:
18:16:19             Position to seek to in file-like body in the event of a retry or
18:16:19             redirect. Typically this won't need to be set because urllib3 will
18:16:19             auto-populate the value when needed.
18:16:19         """
18:16:19         parsed_url = parse_url(url)
18:16:19         destination_scheme = parsed_url.scheme
18:16:19 
18:16:19         if headers is None:
18:16:19             headers = self.headers
18:16:19 
18:16:19         if not isinstance(retries, Retry):
18:16:19             retries = Retry.from_int(retries, redirect=redirect, default=self.retries)
18:16:19 
18:16:19         if release_conn is None:
18:16:19             release_conn = preload_content
18:16:19 
18:16:19         # Check host
18:16:19         if assert_same_host and not self.is_same_host(url):
18:16:19             raise HostChangedError(self, url, retries)
18:16:19 
18:16:19         # Ensure that the URL we're connecting to is properly encoded
18:16:19         if url.startswith("/"):
18:16:19             url = to_str(_encode_target(url))
18:16:19         else:
18:16:19             url = to_str(parsed_url.url)
18:16:19 
18:16:19         conn = None
18:16:19 
18:16:19         # Track whether `conn` needs to be released before
18:16:19         # returning/raising/recursing. Update this variable if necessary, and
18:16:19         # leave `release_conn` constant throughout the function. That way, if
18:16:19         # the function recurses, the original value of `release_conn` will be
18:16:19         # passed down into the recursive call, and its value will be respected.
18:16:19         #
18:16:19         # See issue #651 [1] for details.
18:16:19         #
18:16:19         # [1] 
18:16:19         release_this_conn = release_conn
18:16:19 
18:16:19         http_tunnel_required = connection_requires_http_tunnel(
18:16:19             self.proxy, self.proxy_config, destination_scheme
18:16:19         )
18:16:19 
18:16:19         # Merge the proxy headers. Only done when not using HTTP CONNECT. We
18:16:19         # have to copy the headers dict so we can safely change it without those
18:16:19         # changes being reflected in anyone else's copy.
18:16:19         if not http_tunnel_required:
18:16:19             headers = headers.copy()  # type: ignore[attr-defined]
18:16:19             headers.update(self.proxy_headers)  # type: ignore[union-attr]
18:16:19 
18:16:19         # Must keep the exception bound to a separate variable or else Python 3
18:16:19         # complains about UnboundLocalError.
18:16:19         err = None
18:16:19 
18:16:19         # Keep track of whether we cleanly exited the except block. This
18:16:19         # ensures we do proper cleanup in finally.
18:16:19         clean_exit = False
18:16:19 
18:16:19         # Rewind body position, if needed. Record current position
18:16:19         # for future rewinds in the event of a redirect/retry.
18:16:19         body_pos = set_file_position(body, body_pos)
18:16:19 
18:16:19         try:
18:16:19             # Request a connection from the queue.
18:16:19             timeout_obj = self._get_timeout(timeout)
18:16:19             conn = self._get_conn(timeout=pool_timeout)
18:16:19 
18:16:19             conn.timeout = timeout_obj.connect_timeout  # type: ignore[assignment]
18:16:19 
18:16:19             # Is this a closed/new connection that requires CONNECT tunnelling?
18:16:19             if self.proxy is not None and http_tunnel_required and conn.is_closed:
18:16:19                 try:
18:16:19                     self._prepare_proxy(conn)
18:16:19                 except (BaseSSLError, OSError, SocketTimeout) as e:
18:16:19                     self._raise_timeout(
18:16:19                         err=e, url=self.proxy.url, timeout_value=conn.timeout
18:16:19                     )
18:16:19                     raise
18:16:19 
18:16:19             # If we're going to release the connection in ``finally:``, then
18:16:19             # the response doesn't need to know about the connection. Otherwise
18:16:19             # it will also try to release it and we'll have a double-release
18:16:19             # mess.
18:16:19 response_conn = conn if not release_conn else None 18:16:19 18:16:19 # Make the request on the HTTPConnection object 18:16:19 > response = self._make_request( 18:16:19 conn, 18:16:19 method, 18:16:19 url, 18:16:19 timeout=timeout_obj, 18:16:19 body=body, 18:16:19 headers=headers, 18:16:19 chunked=chunked, 18:16:19 retries=retries, 18:16:19 response_conn=response_conn, 18:16:19 preload_content=preload_content, 18:16:19 decode_content=decode_content, 18:16:19 **response_kw, 18:16:19 ) 18:16:19 18:16:19 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connectionpool.py:789: 18:16:19 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 18:16:19 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connectionpool.py:495: in _make_request 18:16:19 conn.request( 18:16:19 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:441: in request 18:16:19 self.endheaders() 18:16:19 /opt/pyenv/versions/3.11.7/lib/python3.11/http/client.py:1289: in endheaders 18:16:19 self._send_output(message_body, encode_chunked=encode_chunked) 18:16:19 /opt/pyenv/versions/3.11.7/lib/python3.11/http/client.py:1048: in _send_output 18:16:19 self.send(msg) 18:16:19 /opt/pyenv/versions/3.11.7/lib/python3.11/http/client.py:986: in send 18:16:19 self.connect() 18:16:19 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:279: in connect 18:16:19 self.sock = self._new_conn() 18:16:19 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 18:16:19 18:16:19 self = 18:16:19 18:16:19 def _new_conn(self) -> socket.socket: 18:16:19 """Establish a socket connection and set nodelay settings on it. 18:16:19 18:16:19 :return: New socket connection. 
18:16:19 """ 18:16:19 try: 18:16:19 sock = connection.create_connection( 18:16:19 (self._dns_host, self.port), 18:16:19 self.timeout, 18:16:19 source_address=self.source_address, 18:16:19 socket_options=self.socket_options, 18:16:19 ) 18:16:19 except socket.gaierror as e: 18:16:19 raise NameResolutionError(self.host, self, e) from e 18:16:19 except SocketTimeout as e: 18:16:19 raise ConnectTimeoutError( 18:16:19 self, 18:16:19 f"Connection to {self.host} timed out. (connect timeout={self.timeout})", 18:16:19 ) from e 18:16:19 18:16:19 except OSError as e: 18:16:19 > raise NewConnectionError( 18:16:19 self, f"Failed to establish a new connection: {e}" 18:16:19 ) from e 18:16:19 E urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused 18:16:19 18:16:19 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:214: NewConnectionError 18:16:19 18:16:19 The above exception was the direct cause of the following exception: 18:16:19 18:16:19 self = 18:16:19 request = , stream = False 18:16:19 timeout = Timeout(connect=10, read=10, total=None), verify = True, cert = None 18:16:19 proxies = OrderedDict() 18:16:19 18:16:19 def send( 18:16:19 self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 18:16:19 ): 18:16:19 """Sends PreparedRequest object. Returns Response object. 18:16:19 18:16:19 :param request: The :class:`PreparedRequest ` being sent. 18:16:19 :param stream: (optional) Whether to stream the request content. 18:16:19 :param timeout: (optional) How long to wait for the server to send 18:16:19 data before giving up, as a float, or a :ref:`(connect timeout, 18:16:19 read timeout) ` tuple. 
18:16:19         :type timeout: float or tuple or urllib3 Timeout object
18:16:19         :param verify: (optional) Either a boolean, in which case it controls whether
18:16:19             we verify the server's TLS certificate, or a string, in which case it
18:16:19             must be a path to a CA bundle to use
18:16:19         :param cert: (optional) Any user-provided SSL certificate to be trusted.
18:16:19         :param proxies: (optional) The proxies dictionary to apply to the request.
18:16:19         :rtype: requests.Response
18:16:19         """
18:16:19 
18:16:19         try:
18:16:19             conn = self.get_connection_with_tls_context(
18:16:19                 request, verify, proxies=proxies, cert=cert
18:16:19             )
18:16:19         except LocationValueError as e:
18:16:19             raise InvalidURL(e, request=request)
18:16:19 
18:16:19         self.cert_verify(conn, request.url, verify, cert)
18:16:19         url = self.request_url(request, proxies)
18:16:19         self.add_headers(
18:16:19             request,
18:16:19             stream=stream,
18:16:19             timeout=timeout,
18:16:19             verify=verify,
18:16:19             cert=cert,
18:16:19             proxies=proxies,
18:16:19         )
18:16:19 
18:16:19         chunked = not (request.body is None or "Content-Length" in request.headers)
18:16:19 
18:16:19         if isinstance(timeout, tuple):
18:16:19             try:
18:16:19                 connect, read = timeout
18:16:19                 timeout = TimeoutSauce(connect=connect, read=read)
18:16:19             except ValueError:
18:16:19                 raise ValueError(
18:16:19                     f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, "
18:16:19                     f"or a single float to set both timeouts to the same value."
18:16:19                 )
18:16:19         elif isinstance(timeout, TimeoutSauce):
18:16:19             pass
18:16:19         else:
18:16:19             timeout = TimeoutSauce(connect=timeout, read=timeout)
18:16:19 
18:16:19         try:
18:16:19 >           resp = conn.urlopen(
18:16:19                 method=request.method,
18:16:19                 url=url,
18:16:19                 body=request.body,
18:16:19                 headers=request.headers,
18:16:19                 redirect=False,
18:16:19                 assert_same_host=False,
18:16:19                 preload_content=False,
18:16:19                 decode_content=False,
18:16:19                 retries=self.max_retries,
18:16:19                 timeout=timeout,
18:16:19                 chunked=chunked,
18:16:19             )
18:16:19 
18:16:19 ../.tox/tests121/lib/python3.11/site-packages/requests/adapters.py:667: 
18:16:19 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
18:16:19 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connectionpool.py:843: in urlopen
18:16:19     retries = retries.increment(
18:16:19 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
18:16:19 
18:16:19 self = Retry(total=0, connect=None, read=False, redirect=None, status=None)
18:16:19 method = 'DELETE'
18:16:19 url = '/rests/data/ietf-network:networks/network=openroadm-topology/ietf-network-topology:link=XPDRA01-XPDR1-XPDR1-NETWORK1toROADMA01-SRG1-SRG1-PP1-TXRX?content=config'
18:16:19 response = None
18:16:19 error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')
18:16:19 _pool = 
18:16:19 _stacktrace = 
18:16:19 
18:16:19     def increment(
18:16:19         self,
18:16:19         method: str | None = None,
18:16:19         url: str | None = None,
18:16:19         response: BaseHTTPResponse | None = None,
18:16:19         error: Exception | None = None,
18:16:19         _pool: ConnectionPool | None = None,
18:16:19         _stacktrace: TracebackType | None = None,
18:16:19     ) -> Self:
18:16:19         """Return a new Retry object with incremented retry counters.
18:16:19 
18:16:19         :param response: A response object, or None, if the server did not
18:16:19             return a response.
18:16:19         :type response: :class:`~urllib3.response.BaseHTTPResponse`
18:16:19         :param Exception error: An error encountered during the request, or
18:16:19             None if the response was received successfully.
18:16:19 
18:16:19         :return: A new ``Retry`` object.
18:16:19         """
18:16:19         if self.total is False and error:
18:16:19             # Disabled, indicate to re-raise the error.
18:16:19             raise reraise(type(error), error, _stacktrace)
18:16:19 
18:16:19         total = self.total
18:16:19         if total is not None:
18:16:19             total -= 1
18:16:19 
18:16:19         connect = self.connect
18:16:19         read = self.read
18:16:19         redirect = self.redirect
18:16:19         status_count = self.status
18:16:19         other = self.other
18:16:19         cause = "unknown"
18:16:19         status = None
18:16:19         redirect_location = None
18:16:19 
18:16:19         if error and self._is_connection_error(error):
18:16:19             # Connect retry?
18:16:19             if connect is False:
18:16:19                 raise reraise(type(error), error, _stacktrace)
18:16:19             elif connect is not None:
18:16:19                 connect -= 1
18:16:19 
18:16:19         elif error and self._is_read_error(error):
18:16:19             # Read retry?
18:16:19             if read is False or method is None or not self._is_method_retryable(method):
18:16:19                 raise reraise(type(error), error, _stacktrace)
18:16:19             elif read is not None:
18:16:19                 read -= 1
18:16:19 
18:16:19         elif error:
18:16:19             # Other retry?
18:16:19             if other is not None:
18:16:19                 other -= 1
18:16:19 
18:16:19         elif response and response.get_redirect_location():
18:16:19             # Redirect retry?
18:16:19             if redirect is not None:
18:16:19                 redirect -= 1
18:16:19             cause = "too many redirects"
18:16:19             response_redirect_location = response.get_redirect_location()
18:16:19             if response_redirect_location:
18:16:19                 redirect_location = response_redirect_location
18:16:19             status = response.status
18:16:19 
18:16:19         else:
18:16:19             # Incrementing because of a server error like a 500 in
18:16:19             # status_forcelist and the given method is in the allowed_methods
18:16:19             cause = ResponseError.GENERIC_ERROR
18:16:19             if response and response.status:
18:16:19                 if status_count is not None:
18:16:19                     status_count -= 1
18:16:19                 cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status)
18:16:19                 status = response.status
18:16:19 
18:16:19         history = self.history + (
18:16:19             RequestHistory(method, url, error, status, redirect_location),
18:16:19         )
18:16:19 
18:16:19         new_retry = self.new(
18:16:19             total=total,
18:16:19             connect=connect,
18:16:19             read=read,
18:16:19             redirect=redirect,
18:16:19             status=status_count,
18:16:19             other=other,
18:16:19             history=history,
18:16:19         )
18:16:19 
18:16:19         if new_retry.is_exhausted():
18:16:19             reason = error or ResponseError(cause)
18:16:19 >           raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type]
18:16:19 E       urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=8182): Max retries exceeded with url: /rests/data/ietf-network:networks/network=openroadm-topology/ietf-network-topology:link=XPDRA01-XPDR1-XPDR1-NETWORK1toROADMA01-SRG1-SRG1-PP1-TXRX?content=config (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused'))
18:16:19 
18:16:19 ../.tox/tests121/lib/python3.11/site-packages/urllib3/util/retry.py:519: MaxRetryError
18:16:19 
18:16:19 During handling of the above exception, another exception occurred:
18:16:19 
18:16:19 self = 
18:16:19 
18:16:19     def test_39_disconnect_ROADM_XPDRA_link(self):
18:16:19         # Link-1
18:16:19 >       response = test_utils.del_ietf_network_link_request(
18:16:19             'openroadm-topology',
18:16:19             'XPDRA01-XPDR1-XPDR1-NETWORK1toROADMA01-SRG1-SRG1-PP1-TXRX',
18:16:19             'config')
18:16:19 
18:16:19 transportpce_tests/1.2.1/test03_topology.py:736: 
18:16:19 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
18:16:19 transportpce_tests/common/test_utils.py:549: in del_ietf_network_link_request
18:16:19     response = delete_request(url[RESTCONF_VERSION].format(*format_args))
18:16:19 transportpce_tests/common/test_utils.py:133: in delete_request
18:16:19     return requests.request(
18:16:19 ../.tox/tests121/lib/python3.11/site-packages/requests/api.py:59: in request
18:16:19     return session.request(method=method, url=url, **kwargs)
18:16:19 ../.tox/tests121/lib/python3.11/site-packages/requests/sessions.py:589: in request
18:16:19     resp = self.send(prep, **send_kwargs)
18:16:19 ../.tox/tests121/lib/python3.11/site-packages/requests/sessions.py:703: in send
18:16:19     r = adapter.send(request, **kwargs)
18:16:19 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
18:16:19 
18:16:19 self = 
18:16:19 request = , stream = False
18:16:19 timeout = Timeout(connect=10, read=10, total=None), verify = True, cert = None
18:16:19 proxies = OrderedDict()
18:16:19 
18:16:19     def send(
18:16:19         self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None
18:16:19     ):
18:16:19         """Sends PreparedRequest object. Returns Response object.
18:16:19 
18:16:19         :param request: The :class:`PreparedRequest ` being sent.
18:16:19         :param stream: (optional) Whether to stream the request content.
18:16:19         :param timeout: (optional) How long to wait for the server to send
18:16:19             data before giving up, as a float, or a :ref:`(connect timeout,
18:16:19             read timeout) ` tuple.
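Editor's note: `del_ietf_network_link_request` in the traceback above issues a RESTCONF (RFC 8040 style) DELETE against the topology datastore. A hypothetical sketch of the URL it builds — the function name and path template here are illustrative reconstructions, not the real code, which lives in `transportpce_tests/common/test_utils.py`:

```python
def ietf_network_link_path(network: str, link: str, content: str) -> str:
    """Build the RESTCONF data path used to delete a topology link.

    Hypothetical helper mirroring the URL visible in the MaxRetryError above.
    """
    return (
        "/rests/data/ietf-network:networks/"
        f"network={network}/"
        f"ietf-network-topology:link={link}"
        f"?content={content}"
    )
```

Called with the arguments from the failing test (`'openroadm-topology'`, the link id, `'config'`), this reproduces the exact path shown in the error message; the failure is therefore in reaching the controller, not in the request itself.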
18:16:19         :type timeout: float or tuple or urllib3 Timeout object
18:16:19         :param verify: (optional) Either a boolean, in which case it controls whether
18:16:19             we verify the server's TLS certificate, or a string, in which case it
18:16:19             must be a path to a CA bundle to use
18:16:19         :param cert: (optional) Any user-provided SSL certificate to be trusted.
18:16:19         :param proxies: (optional) The proxies dictionary to apply to the request.
18:16:19         :rtype: requests.Response
18:16:19         """
18:16:19 
18:16:19         try:
18:16:19             conn = self.get_connection_with_tls_context(
18:16:19                 request, verify, proxies=proxies, cert=cert
18:16:19             )
18:16:19         except LocationValueError as e:
18:16:19             raise InvalidURL(e, request=request)
18:16:19 
18:16:19         self.cert_verify(conn, request.url, verify, cert)
18:16:19         url = self.request_url(request, proxies)
18:16:19         self.add_headers(
18:16:19             request,
18:16:19             stream=stream,
18:16:19             timeout=timeout,
18:16:19             verify=verify,
18:16:19             cert=cert,
18:16:19             proxies=proxies,
18:16:19         )
18:16:19 
18:16:19         chunked = not (request.body is None or "Content-Length" in request.headers)
18:16:19 
18:16:19         if isinstance(timeout, tuple):
18:16:19             try:
18:16:19                 connect, read = timeout
18:16:19                 timeout = TimeoutSauce(connect=connect, read=read)
18:16:19             except ValueError:
18:16:19                 raise ValueError(
18:16:19                     f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, "
18:16:19                     f"or a single float to set both timeouts to the same value."
18:16:19                 )
18:16:19         elif isinstance(timeout, TimeoutSauce):
18:16:19             pass
18:16:19         else:
18:16:19             timeout = TimeoutSauce(connect=timeout, read=timeout)
18:16:19 
18:16:19         try:
18:16:19             resp = conn.urlopen(
18:16:19                 method=request.method,
18:16:19                 url=url,
18:16:19                 body=request.body,
18:16:19                 headers=request.headers,
18:16:19                 redirect=False,
18:16:19                 assert_same_host=False,
18:16:19                 preload_content=False,
18:16:19                 decode_content=False,
18:16:19                 retries=self.max_retries,
18:16:19                 timeout=timeout,
18:16:19                 chunked=chunked,
18:16:19             )
18:16:19 
18:16:19         except (ProtocolError, OSError) as err:
18:16:19             raise ConnectionError(err, request=request)
18:16:19 
18:16:19         except MaxRetryError as e:
18:16:19             if isinstance(e.reason, ConnectTimeoutError):
18:16:19                 # TODO: Remove this in 3.0.0: see #2811
18:16:19                 if not isinstance(e.reason, NewConnectionError):
18:16:19                     raise ConnectTimeout(e, request=request)
18:16:19 
18:16:19             if isinstance(e.reason, ResponseError):
18:16:19                 raise RetryError(e, request=request)
18:16:19 
18:16:19             if isinstance(e.reason, _ProxyError):
18:16:19                 raise ProxyError(e, request=request)
18:16:19 
18:16:19             if isinstance(e.reason, _SSLError):
18:16:19                 # This branch is for urllib3 v1.22 and later.
18:16:19                 raise SSLError(e, request=request)
18:16:19 
18:16:19 >           raise ConnectionError(e, request=request)
18:16:19 E           requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=8182): Max retries exceeded with url: /rests/data/ietf-network:networks/network=openroadm-topology/ietf-network-topology:link=XPDRA01-XPDR1-XPDR1-NETWORK1toROADMA01-SRG1-SRG1-PP1-TXRX?content=config (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused'))
18:16:19 
18:16:19 ../.tox/tests121/lib/python3.11/site-packages/requests/adapters.py:700: ConnectionError
18:16:19 ________ TransportPCETopologyTesting.test_40_getLinks_OpenRoadmTopology ________
18:16:19 
18:16:19 self = 
18:16:19 
18:16:19     def _new_conn(self) -> socket.socket:
18:16:19         """Establish a socket connection and set nodelay settings on it.
18:16:19 
18:16:19         :return: New socket connection.
18:16:19         """
18:16:19         try:
18:16:19 >           sock = connection.create_connection(
18:16:19                 (self._dns_host, self.port),
18:16:19                 self.timeout,
18:16:19                 source_address=self.source_address,
18:16:19                 socket_options=self.socket_options,
18:16:19             )
18:16:19 
18:16:19 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:199: 
18:16:19 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
18:16:19 ../.tox/tests121/lib/python3.11/site-packages/urllib3/util/connection.py:85: in create_connection
18:16:19     raise err
18:16:19 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
18:16:19 
18:16:19 address = ('localhost', 8182), timeout = 10, source_address = None
18:16:19 socket_options = [(6, 1, 1)]
18:16:19 
18:16:19 def create_connection(
18:16:19     address: tuple[str, int],
18:16:19     timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT,
18:16:19     source_address: tuple[str, int] | None = None,
18:16:19     socket_options: _TYPE_SOCKET_OPTIONS | None = None,
18:16:19 ) -> socket.socket:
18:16:19     """Connect to *address* and return the socket object.
18:16:19 
18:16:19     Convenience function. Connect to *address* (a 2-tuple ``(host,
18:16:19     port)``) and return the socket object. Passing the optional
18:16:19     *timeout* parameter will set the timeout on the socket instance
18:16:19     before attempting to connect. If no *timeout* is supplied, the
18:16:19     global default timeout setting returned by :func:`socket.getdefaulttimeout`
18:16:19     is used. If *source_address* is set it must be a tuple of (host, port)
18:16:19     for the socket to bind as a source address before making the connection.
18:16:19     An host of '' or port 0 tells the OS to use the default.
18:16:19     """
18:16:19 
18:16:19     host, port = address
18:16:19     if host.startswith("["):
18:16:19         host = host.strip("[]")
18:16:19     err = None
18:16:19 
18:16:19     # Using the value from allowed_gai_family() in the context of getaddrinfo lets
18:16:19     # us select whether to work with IPv4 DNS records, IPv6 records, or both.
18:16:19     # The original create_connection function always returns all records.
18:16:19     family = allowed_gai_family()
18:16:19 
18:16:19     try:
18:16:19         host.encode("idna")
18:16:19     except UnicodeError:
18:16:19         raise LocationParseError(f"'{host}', label empty or too long") from None
18:16:19 
18:16:19     for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM):
18:16:19         af, socktype, proto, canonname, sa = res
18:16:19         sock = None
18:16:19         try:
18:16:19             sock = socket.socket(af, socktype, proto)
18:16:19 
18:16:19             # If provided, set socket level options before connecting.
18:16:19             _set_socket_options(sock, socket_options)
18:16:19 
18:16:19             if timeout is not _DEFAULT_TIMEOUT:
18:16:19                 sock.settimeout(timeout)
18:16:19             if source_address:
18:16:19                 sock.bind(source_address)
18:16:19 >           sock.connect(sa)
18:16:19 E           ConnectionRefusedError: [Errno 111] Connection refused
18:16:19 
18:16:19 ../.tox/tests121/lib/python3.11/site-packages/urllib3/util/connection.py:73: ConnectionRefusedError
18:16:19 
18:16:19 The above exception was the direct cause of the following exception:
18:16:19 
18:16:19 self = 
18:16:19 method = 'GET'
18:16:19 url = '/rests/data/ietf-network:networks/network=openroadm-topology?content=config'
18:16:19 body = None
18:16:19 headers = {'User-Agent': 'python-requests/2.32.3', 'Accept-Encoding': 'gzip, deflate', 'Accept': 'application/json', 'Connection': 'keep-alive', 'Content-Type': 'application/json', 'Authorization': 'Basic YWRtaW46YWRtaW4='}
18:16:19 retries = Retry(total=0, connect=None, read=False, redirect=None, status=None)
18:16:19 redirect = False, assert_same_host = False
18:16:19 timeout = Timeout(connect=10, read=10, total=None), pool_timeout = None
18:16:19 release_conn = False, chunked = False, body_pos = None, preload_content = False
18:16:19 decode_content = False, response_kw = {}
18:16:19 parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/rests/data/ietf-network:networks/network=openroadm-topology', query='content=config', fragment=None)
18:16:19 destination_scheme = None, conn = None, release_this_conn = True
18:16:19 http_tunnel_required = False, err = None, clean_exit = False
18:16:19 
18:16:19     def urlopen( # type: ignore[override]
18:16:19         self,
18:16:19         method: str,
18:16:19         url: str,
18:16:19         body: _TYPE_BODY | None = None,
18:16:19         headers: typing.Mapping[str, str] | None = None,
18:16:19         retries: Retry | bool | int | None = None,
18:16:19         redirect: bool = True,
18:16:19         assert_same_host: bool = True,
18:16:19         timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT,
18:16:19         pool_timeout: int | None = None,
18:16:19         release_conn: bool | None = None,
18:16:19         chunked: bool = False,
18:16:19         body_pos: _TYPE_BODY_POSITION | None = None,
18:16:19         preload_content: bool = True,
18:16:19         decode_content: bool = True,
18:16:19         **response_kw: typing.Any,
18:16:19     ) -> BaseHTTPResponse:
18:16:19         """
18:16:19         Get a connection from the pool and perform an HTTP request. This is the
18:16:19         lowest level call for making a request, so you'll need to specify all
18:16:19         the raw details.
18:16:19 
18:16:19         .. note::
18:16:19 
18:16:19             More commonly, it's appropriate to use a convenience method
18:16:19             such as :meth:`request`.
18:16:19 
18:16:19         .. note::
18:16:19 
18:16:19             `release_conn` will only behave as expected if
18:16:19             `preload_content=False` because we want to make
18:16:19             `preload_content=False` the default behaviour someday soon without
18:16:19             breaking backwards compatibility.
18:16:19 
18:16:19         :param method:
18:16:19             HTTP request method (such as GET, POST, PUT, etc.)
18:16:19 
18:16:19         :param url:
18:16:19             The URL to perform the request on.
18:16:19 
18:16:19         :param body:
18:16:19             Data to send in the request body, either :class:`str`, :class:`bytes`,
18:16:19             an iterable of :class:`str`/:class:`bytes`, or a file-like object.
18:16:19 
18:16:19         :param headers:
18:16:19             Dictionary of custom headers to send, such as User-Agent,
18:16:19             If-None-Match, etc. If None, pool headers are used. If provided,
18:16:19             these headers completely replace any pool-specific headers.
18:16:19 
18:16:19         :param retries:
18:16:19             Configure the number of retries to allow before raising a
18:16:19             :class:`~urllib3.exceptions.MaxRetryError` exception.
18:16:19 
18:16:19             If ``None`` (default) will retry 3 times, see ``Retry.DEFAULT``. Pass a
18:16:19             :class:`~urllib3.util.retry.Retry` object for fine-grained control
18:16:19             over different types of retries.
18:16:19             Pass an integer number to retry connection errors that many times,
18:16:19             but no other types of errors. Pass zero to never retry.
18:16:19 
18:16:19             If ``False``, then retries are disabled and any exception is raised
18:16:19             immediately. Also, instead of raising a MaxRetryError on redirects,
18:16:19             the redirect response will be returned.
18:16:19 
18:16:19         :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int.
18:16:19 
18:16:19         :param redirect:
18:16:19             If True, automatically handle redirects (status codes 301, 302,
18:16:19             303, 307, 308). Each redirect counts as a retry. Disabling retries
18:16:19             will disable redirect, too.
18:16:19 
18:16:19         :param assert_same_host:
18:16:19             If ``True``, will make sure that the host of the pool requests is
18:16:19             consistent else will raise HostChangedError. When ``False``, you can
18:16:19             use the pool on an HTTP proxy and request foreign hosts.
18:16:19 
18:16:19         :param timeout:
18:16:19             If specified, overrides the default timeout for this one
18:16:19             request. It may be a float (in seconds) or an instance of
18:16:19             :class:`urllib3.util.Timeout`.
18:16:19 
18:16:19         :param pool_timeout:
18:16:19             If set and the pool is set to block=True, then this method will
18:16:19             block for ``pool_timeout`` seconds and raise EmptyPoolError if no
18:16:19             connection is available within the time period.
18:16:19 
18:16:19         :param bool preload_content:
18:16:19             If True, the response's body will be preloaded into memory.
18:16:19 
18:16:19         :param bool decode_content:
18:16:19             If True, will attempt to decode the body based on the
18:16:19             'content-encoding' header.
18:16:19 
18:16:19         :param release_conn:
18:16:19             If False, then the urlopen call will not release the connection
18:16:19             back into the pool once a response is received (but will release if
18:16:19             you read the entire contents of the response such as when
18:16:19             `preload_content=True`). This is useful if you're not preloading
18:16:19             the response's content immediately. You will need to call
18:16:19             ``r.release_conn()`` on the response ``r`` to return the connection
18:16:19             back into the pool. If None, it takes the value of ``preload_content``
18:16:19             which defaults to ``True``.
18:16:19 
18:16:19         :param bool chunked:
18:16:19             If True, urllib3 will send the body using chunked transfer
18:16:19             encoding. Otherwise, urllib3 will send the body using the standard
18:16:19             content-length form. Defaults to False.
18:16:19 
18:16:19         :param int body_pos:
18:16:19             Position to seek to in file-like body in the event of a retry or
18:16:19             redirect. Typically this won't need to be set because urllib3 will
18:16:19             auto-populate the value when needed.
18:16:19         """
18:16:19         parsed_url = parse_url(url)
18:16:19         destination_scheme = parsed_url.scheme
18:16:19 
18:16:19         if headers is None:
18:16:19             headers = self.headers
18:16:19 
18:16:19         if not isinstance(retries, Retry):
18:16:19             retries = Retry.from_int(retries, redirect=redirect, default=self.retries)
18:16:19 
18:16:19         if release_conn is None:
18:16:19             release_conn = preload_content
18:16:19 
18:16:19         # Check host
18:16:19         if assert_same_host and not self.is_same_host(url):
18:16:19             raise HostChangedError(self, url, retries)
18:16:19 
18:16:19         # Ensure that the URL we're connecting to is properly encoded
18:16:19         if url.startswith("/"):
18:16:19             url = to_str(_encode_target(url))
18:16:19         else:
18:16:19             url = to_str(parsed_url.url)
18:16:19 
18:16:19         conn = None
18:16:19 
18:16:19         # Track whether `conn` needs to be released before
18:16:19         # returning/raising/recursing. Update this variable if necessary, and
18:16:19         # leave `release_conn` constant throughout the function. That way, if
18:16:19         # the function recurses, the original value of `release_conn` will be
18:16:19         # passed down into the recursive call, and its value will be respected.
18:16:19         #
18:16:19         # See issue #651 [1] for details.
18:16:19         #
18:16:19         # [1]
18:16:19         release_this_conn = release_conn
18:16:19 
18:16:19         http_tunnel_required = connection_requires_http_tunnel(
18:16:19             self.proxy, self.proxy_config, destination_scheme
18:16:19         )
18:16:19 
18:16:19         # Merge the proxy headers. Only done when not using HTTP CONNECT. We
18:16:19         # have to copy the headers dict so we can safely change it without those
18:16:19         # changes being reflected in anyone else's copy.
18:16:19         if not http_tunnel_required:
18:16:19             headers = headers.copy() # type: ignore[attr-defined]
18:16:19             headers.update(self.proxy_headers) # type: ignore[union-attr]
18:16:19 
18:16:19         # Must keep the exception bound to a separate variable or else Python 3
18:16:19         # complains about UnboundLocalError.
18:16:19         err = None
18:16:19 
18:16:19         # Keep track of whether we cleanly exited the except block. This
18:16:19         # ensures we do proper cleanup in finally.
18:16:19         clean_exit = False
18:16:19 
18:16:19         # Rewind body position, if needed. Record current position
18:16:19         # for future rewinds in the event of a redirect/retry.
18:16:19         body_pos = set_file_position(body, body_pos)
18:16:19 
18:16:19         try:
18:16:19             # Request a connection from the queue.
18:16:19             timeout_obj = self._get_timeout(timeout)
18:16:19             conn = self._get_conn(timeout=pool_timeout)
18:16:19 
18:16:19             conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment]
18:16:19 
18:16:19             # Is this a closed/new connection that requires CONNECT tunnelling?
18:16:19             if self.proxy is not None and http_tunnel_required and conn.is_closed:
18:16:19                 try:
18:16:19                     self._prepare_proxy(conn)
18:16:19                 except (BaseSSLError, OSError, SocketTimeout) as e:
18:16:19                     self._raise_timeout(
18:16:19                         err=e, url=self.proxy.url, timeout_value=conn.timeout
18:16:19                     )
18:16:19                     raise
18:16:19 
18:16:19             # If we're going to release the connection in ``finally:``, then
18:16:19             # the response doesn't need to know about the connection. Otherwise
18:16:19             # it will also try to release it and we'll have a double-release
18:16:19             # mess.
18:16:19             response_conn = conn if not release_conn else None
18:16:19 
18:16:19             # Make the request on the HTTPConnection object
18:16:19 >           response = self._make_request(
18:16:19                 conn,
18:16:19                 method,
18:16:19                 url,
18:16:19                 timeout=timeout_obj,
18:16:19                 body=body,
18:16:19                 headers=headers,
18:16:19                 chunked=chunked,
18:16:19                 retries=retries,
18:16:19                 response_conn=response_conn,
18:16:19                 preload_content=preload_content,
18:16:19                 decode_content=decode_content,
18:16:19                 **response_kw,
18:16:19             )
18:16:19 
18:16:19 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connectionpool.py:789: 
18:16:19 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
18:16:19 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connectionpool.py:495: in _make_request
18:16:19     conn.request(
18:16:19 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:441: in request
18:16:19     self.endheaders()
18:16:19 /opt/pyenv/versions/3.11.7/lib/python3.11/http/client.py:1289: in endheaders
18:16:19     self._send_output(message_body, encode_chunked=encode_chunked)
18:16:19 /opt/pyenv/versions/3.11.7/lib/python3.11/http/client.py:1048: in _send_output
18:16:19     self.send(msg)
18:16:19 /opt/pyenv/versions/3.11.7/lib/python3.11/http/client.py:986: in send
18:16:19     self.connect()
18:16:19 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:279: in connect
18:16:19     self.sock = self._new_conn()
18:16:19 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
18:16:19 
18:16:19 self = 
18:16:19 
18:16:19     def _new_conn(self) -> socket.socket:
18:16:19         """Establish a socket connection and set nodelay settings on it.
18:16:19 
18:16:19         :return: New socket connection.
18:16:19         """
18:16:19         try:
18:16:19             sock = connection.create_connection(
18:16:19                 (self._dns_host, self.port),
18:16:19                 self.timeout,
18:16:19                 source_address=self.source_address,
18:16:19                 socket_options=self.socket_options,
18:16:19             )
18:16:19         except socket.gaierror as e:
18:16:19             raise NameResolutionError(self.host, self, e) from e
18:16:19         except SocketTimeout as e:
18:16:19             raise ConnectTimeoutError(
18:16:19                 self,
18:16:19                 f"Connection to {self.host} timed out. (connect timeout={self.timeout})",
18:16:19             ) from e
18:16:19 
18:16:19         except OSError as e:
18:16:19 >           raise NewConnectionError(
18:16:19                 self, f"Failed to establish a new connection: {e}"
18:16:19             ) from e
18:16:19 E       urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused
18:16:19 
18:16:19 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:214: NewConnectionError
18:16:19 
18:16:19 The above exception was the direct cause of the following exception:
18:16:19 
18:16:19 self = 
18:16:19 request = , stream = False
18:16:19 timeout = Timeout(connect=10, read=10, total=None), verify = True, cert = None
18:16:19 proxies = OrderedDict()
18:16:19 
18:16:19     def send(
18:16:19         self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None
18:16:19     ):
18:16:19         """Sends PreparedRequest object. Returns Response object.
18:16:19 
18:16:19         :param request: The :class:`PreparedRequest ` being sent.
18:16:19         :param stream: (optional) Whether to stream the request content.
18:16:19         :param timeout: (optional) How long to wait for the server to send
18:16:19             data before giving up, as a float, or a :ref:`(connect timeout,
18:16:19             read timeout) ` tuple.
18:16:19         :type timeout: float or tuple or urllib3 Timeout object
18:16:19         :param verify: (optional) Either a boolean, in which case it controls whether
18:16:19             we verify the server's TLS certificate, or a string, in which case it
18:16:19             must be a path to a CA bundle to use
18:16:19         :param cert: (optional) Any user-provided SSL certificate to be trusted.
18:16:19         :param proxies: (optional) The proxies dictionary to apply to the request.
18:16:19         :rtype: requests.Response
18:16:19         """
18:16:19 
18:16:19         try:
18:16:19             conn = self.get_connection_with_tls_context(
18:16:19                 request, verify, proxies=proxies, cert=cert
18:16:19             )
18:16:19         except LocationValueError as e:
18:16:19             raise InvalidURL(e, request=request)
18:16:19 
18:16:19         self.cert_verify(conn, request.url, verify, cert)
18:16:19         url = self.request_url(request, proxies)
18:16:19         self.add_headers(
18:16:19             request,
18:16:19             stream=stream,
18:16:19             timeout=timeout,
18:16:19             verify=verify,
18:16:19             cert=cert,
18:16:19             proxies=proxies,
18:16:19         )
18:16:19 
18:16:19         chunked = not (request.body is None or "Content-Length" in request.headers)
18:16:19 
18:16:19         if isinstance(timeout, tuple):
18:16:19             try:
18:16:19                 connect, read = timeout
18:16:19                 timeout = TimeoutSauce(connect=connect, read=read)
18:16:19             except ValueError:
18:16:19                 raise ValueError(
18:16:19                     f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, "
18:16:19                     f"or a single float to set both timeouts to the same value."
18:16:19                 )
18:16:19         elif isinstance(timeout, TimeoutSauce):
18:16:19             pass
18:16:19         else:
18:16:19             timeout = TimeoutSauce(connect=timeout, read=timeout)
18:16:19 
18:16:19         try:
18:16:19 >           resp = conn.urlopen(
18:16:19                 method=request.method,
18:16:19                 url=url,
18:16:19                 body=request.body,
18:16:19                 headers=request.headers,
18:16:19                 redirect=False,
18:16:19                 assert_same_host=False,
18:16:19                 preload_content=False,
18:16:19                 decode_content=False,
18:16:19                 retries=self.max_retries,
18:16:19                 timeout=timeout,
18:16:19                 chunked=chunked,
18:16:19             )
18:16:19 
18:16:19 ../.tox/tests121/lib/python3.11/site-packages/requests/adapters.py:667: 
18:16:19 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
18:16:19 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connectionpool.py:843: in urlopen
18:16:19     retries = retries.increment(
18:16:19 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
18:16:19 
18:16:19 self = Retry(total=0, connect=None, read=False, redirect=None, status=None)
18:16:19 method = 'GET'
18:16:19 url = '/rests/data/ietf-network:networks/network=openroadm-topology?content=config'
18:16:19 response = None
18:16:19 error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')
18:16:19 _pool = 
18:16:19 _stacktrace = 
18:16:19 
18:16:19     def increment(
18:16:19         self,
18:16:19         method: str | None = None,
18:16:19         url: str | None = None,
18:16:19         response: BaseHTTPResponse | None = None,
18:16:19         error: Exception | None = None,
18:16:19         _pool: ConnectionPool | None = None,
18:16:19         _stacktrace: TracebackType | None = None,
18:16:19     ) -> Self:
18:16:19         """Return a new Retry object with incremented retry counters.
18:16:19 
18:16:19         :param response: A response object, or None, if the server did not
18:16:19             return a response.
18:16:19         :type response: :class:`~urllib3.response.BaseHTTPResponse`
18:16:19         :param Exception error: An error encountered during the request, or
18:16:19             None if the response was received successfully.
18:16:19 
18:16:19         :return: A new ``Retry`` object.
18:16:19         """
18:16:19         if self.total is False and error:
18:16:19             # Disabled, indicate to re-raise the error.
18:16:19             raise reraise(type(error), error, _stacktrace)
18:16:19 
18:16:19         total = self.total
18:16:19         if total is not None:
18:16:19             total -= 1
18:16:19 
18:16:19         connect = self.connect
18:16:19         read = self.read
18:16:19         redirect = self.redirect
18:16:19         status_count = self.status
18:16:19         other = self.other
18:16:19         cause = "unknown"
18:16:19         status = None
18:16:19         redirect_location = None
18:16:19 
18:16:19         if error and self._is_connection_error(error):
18:16:19             # Connect retry?
18:16:19             if connect is False:
18:16:19                 raise reraise(type(error), error, _stacktrace)
18:16:19             elif connect is not None:
18:16:19                 connect -= 1
18:16:19 
18:16:19         elif error and self._is_read_error(error):
18:16:19             # Read retry?
18:16:19             if read is False or method is None or not self._is_method_retryable(method):
18:16:19                 raise reraise(type(error), error, _stacktrace)
18:16:19             elif read is not None:
18:16:19                 read -= 1
18:16:19 
18:16:19         elif error:
18:16:19             # Other retry?
18:16:19             if other is not None:
18:16:19                 other -= 1
18:16:19 
18:16:19         elif response and response.get_redirect_location():
18:16:19             # Redirect retry?
18:16:19 if redirect is not None: 18:16:19 redirect -= 1 18:16:19 cause = "too many redirects" 18:16:19 response_redirect_location = response.get_redirect_location() 18:16:19 if response_redirect_location: 18:16:19 redirect_location = response_redirect_location 18:16:19 status = response.status 18:16:19 18:16:19 else: 18:16:19 # Incrementing because of a server error like a 500 in 18:16:19 # status_forcelist and the given method is in the allowed_methods 18:16:19 cause = ResponseError.GENERIC_ERROR 18:16:19 if response and response.status: 18:16:19 if status_count is not None: 18:16:19 status_count -= 1 18:16:19 cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status) 18:16:19 status = response.status 18:16:19 18:16:19 history = self.history + ( 18:16:19 RequestHistory(method, url, error, status, redirect_location), 18:16:19 ) 18:16:19 18:16:19 new_retry = self.new( 18:16:19 total=total, 18:16:19 connect=connect, 18:16:19 read=read, 18:16:19 redirect=redirect, 18:16:19 status=status_count, 18:16:19 other=other, 18:16:19 history=history, 18:16:19 ) 18:16:19 18:16:19 if new_retry.is_exhausted(): 18:16:19 reason = error or ResponseError(cause) 18:16:19 > raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type] 18:16:19 E urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=8182): Max retries exceeded with url: /rests/data/ietf-network:networks/network=openroadm-topology?content=config (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 18:16:19 18:16:19 ../.tox/tests121/lib/python3.11/site-packages/urllib3/util/retry.py:519: MaxRetryError 18:16:19 18:16:19 During handling of the above exception, another exception occurred: 18:16:19 18:16:19 self = 18:16:19 18:16:19 def test_40_getLinks_OpenRoadmTopology(self): 18:16:19 > response = test_utils.get_ietf_network_request('openroadm-topology', 'config') 18:16:19 18:16:19 
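The `Retry(total=0, connect=None, read=False, ...)` object shown in the frame above exhausts on the very first connection error, which is why a single refused connect surfaces immediately as `MaxRetryError` rather than being retried. A minimal sketch of that bookkeeping (a hypothetical simplification for illustration, not urllib3's actual `Retry` class):

```python
class MiniRetry:
    """Hypothetical, stripped-down model of urllib3's retry counters."""

    def __init__(self, total):
        self.total = total

    def increment(self, error):
        # Decrement the total retry budget for the observed error...
        new = MiniRetry(self.total - 1)
        # ...and raise once the budget goes negative, mirroring
        # "if new_retry.is_exhausted(): raise MaxRetryError(...)" above.
        if new.is_exhausted():
            raise RuntimeError(f"Max retries exceeded (caused by {error!r})")
        return new

    def is_exhausted(self):
        return self.total < 0
```

With `total=0`, the first `ConnectionRefusedError` trips `is_exhausted()` right away, matching the behavior seen in this log.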
transportpce_tests/1.2.1/test03_topology.py:749: 18:16:19 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 18:16:19 transportpce_tests/common/test_utils.py:495: in get_ietf_network_request 18:16:19 response = get_request(url[RESTCONF_VERSION].format(*format_args)) 18:16:19 transportpce_tests/common/test_utils.py:116: in get_request 18:16:19 return requests.request( 18:16:19 ../.tox/tests121/lib/python3.11/site-packages/requests/api.py:59: in request 18:16:19 return session.request(method=method, url=url, **kwargs) 18:16:19 ../.tox/tests121/lib/python3.11/site-packages/requests/sessions.py:589: in request 18:16:19 resp = self.send(prep, **send_kwargs) 18:16:19 ../.tox/tests121/lib/python3.11/site-packages/requests/sessions.py:703: in send 18:16:19 r = adapter.send(request, **kwargs) 18:16:19 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 18:16:19 18:16:19 self = 18:16:19 request = , stream = False 18:16:19 timeout = Timeout(connect=10, read=10, total=None), verify = True, cert = None 18:16:19 proxies = OrderedDict() 18:16:19 18:16:19 def send( 18:16:19 self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 18:16:19 ): 18:16:19 """Sends PreparedRequest object. Returns Response object. 18:16:19 18:16:19 :param request: The :class:`PreparedRequest ` being sent. 18:16:19 :param stream: (optional) Whether to stream the request content. 18:16:19 :param timeout: (optional) How long to wait for the server to send 18:16:19 data before giving up, as a float, or a :ref:`(connect timeout, 18:16:19 read timeout) ` tuple. 18:16:19 :type timeout: float or tuple or urllib3 Timeout object 18:16:19 :param verify: (optional) Either a boolean, in which case it controls whether 18:16:19 we verify the server's TLS certificate, or a string, in which case it 18:16:19 must be a path to a CA bundle to use 18:16:19 :param cert: (optional) Any user-provided SSL certificate to be trusted. 
18:16:19 :param proxies: (optional) The proxies dictionary to apply to the request. 18:16:19 :rtype: requests.Response 18:16:19 """ 18:16:19 18:16:19 try: 18:16:19 conn = self.get_connection_with_tls_context( 18:16:19 request, verify, proxies=proxies, cert=cert 18:16:19 ) 18:16:19 except LocationValueError as e: 18:16:19 raise InvalidURL(e, request=request) 18:16:19 18:16:19 self.cert_verify(conn, request.url, verify, cert) 18:16:19 url = self.request_url(request, proxies) 18:16:19 self.add_headers( 18:16:19 request, 18:16:19 stream=stream, 18:16:19 timeout=timeout, 18:16:19 verify=verify, 18:16:19 cert=cert, 18:16:19 proxies=proxies, 18:16:19 ) 18:16:19 18:16:19 chunked = not (request.body is None or "Content-Length" in request.headers) 18:16:19 18:16:19 if isinstance(timeout, tuple): 18:16:19 try: 18:16:19 connect, read = timeout 18:16:19 timeout = TimeoutSauce(connect=connect, read=read) 18:16:19 except ValueError: 18:16:19 raise ValueError( 18:16:19 f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 18:16:19 f"or a single float to set both timeouts to the same value." 
18:16:19 ) 18:16:19 elif isinstance(timeout, TimeoutSauce): 18:16:19 pass 18:16:19 else: 18:16:19 timeout = TimeoutSauce(connect=timeout, read=timeout) 18:16:19 18:16:19 try: 18:16:19 resp = conn.urlopen( 18:16:19 method=request.method, 18:16:19 url=url, 18:16:19 body=request.body, 18:16:19 headers=request.headers, 18:16:19 redirect=False, 18:16:19 assert_same_host=False, 18:16:19 preload_content=False, 18:16:19 decode_content=False, 18:16:19 retries=self.max_retries, 18:16:19 timeout=timeout, 18:16:19 chunked=chunked, 18:16:19 ) 18:16:19 18:16:19 except (ProtocolError, OSError) as err: 18:16:19 raise ConnectionError(err, request=request) 18:16:19 18:16:19 except MaxRetryError as e: 18:16:19 if isinstance(e.reason, ConnectTimeoutError): 18:16:19 # TODO: Remove this in 3.0.0: see #2811 18:16:19 if not isinstance(e.reason, NewConnectionError): 18:16:19 raise ConnectTimeout(e, request=request) 18:16:19 18:16:19 if isinstance(e.reason, ResponseError): 18:16:19 raise RetryError(e, request=request) 18:16:19 18:16:19 if isinstance(e.reason, _ProxyError): 18:16:19 raise ProxyError(e, request=request) 18:16:19 18:16:19 if isinstance(e.reason, _SSLError): 18:16:19 # This branch is for urllib3 v1.22 and later. 18:16:19 raise SSLError(e, request=request) 18:16:19 18:16:19 > raise ConnectionError(e, request=request) 18:16:19 E requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=8182): Max retries exceeded with url: /rests/data/ietf-network:networks/network=openroadm-topology?content=config (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 18:16:19 18:16:19 ../.tox/tests121/lib/python3.11/site-packages/requests/adapters.py:700: ConnectionError 18:16:19 ____________ TransportPCETopologyTesting.test_41_disconnect_ROADMA _____________ 18:16:19 18:16:19 self = 18:16:19 18:16:19 def _new_conn(self) -> socket.socket: 18:16:19 """Establish a socket connection and set nodelay settings on it. 
18:16:19 18:16:19 :return: New socket connection. 18:16:19 """ 18:16:19 try: 18:16:19 > sock = connection.create_connection( 18:16:19 (self._dns_host, self.port), 18:16:19 self.timeout, 18:16:19 source_address=self.source_address, 18:16:19 socket_options=self.socket_options, 18:16:19 ) 18:16:19 18:16:19 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:199: 18:16:19 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 18:16:19 ../.tox/tests121/lib/python3.11/site-packages/urllib3/util/connection.py:85: in create_connection 18:16:19 raise err 18:16:19 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 18:16:19 18:16:19 address = ('localhost', 8182), timeout = 10, source_address = None 18:16:19 socket_options = [(6, 1, 1)] 18:16:19 18:16:19 def create_connection( 18:16:19 address: tuple[str, int], 18:16:19 timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 18:16:19 source_address: tuple[str, int] | None = None, 18:16:19 socket_options: _TYPE_SOCKET_OPTIONS | None = None, 18:16:19 ) -> socket.socket: 18:16:19 """Connect to *address* and return the socket object. 18:16:19 18:16:19 Convenience function. Connect to *address* (a 2-tuple ``(host, 18:16:19 port)``) and return the socket object. Passing the optional 18:16:19 *timeout* parameter will set the timeout on the socket instance 18:16:19 before attempting to connect. If no *timeout* is supplied, the 18:16:19 global default timeout setting returned by :func:`socket.getdefaulttimeout` 18:16:19 is used. If *source_address* is set it must be a tuple of (host, port) 18:16:19 for the socket to bind as a source address before making the connection. 18:16:19 An host of '' or port 0 tells the OS to use the default. 
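The root cause in both failures is the raw `sock.connect(sa)` call being refused because nothing is listening on localhost:8182. A small stdlib sketch that reproduces the same errno against a port known to be closed (the free-port trick here is illustrative, not part of the test suite):

```python
import errno
import socket

def probe_port(host, port, timeout=1.0):
    """Try a TCP connect; return None on success or the OSError on failure."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return None
    except OSError as exc:
        return exc

# Grab a free port from the OS, then close it so nothing is listening there.
placeholder = socket.socket()
placeholder.bind(("127.0.0.1", 0))
free_port = placeholder.getsockname()[1]
placeholder.close()

err = probe_port("127.0.0.1", free_port)
# On Linux this is [Errno 111] Connection refused, i.e. errno.ECONNREFUSED,
# the same low-level error urllib3 wraps into NewConnectionError above.
```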
18:16:19 """ 18:16:19 18:16:19 host, port = address 18:16:19 if host.startswith("["): 18:16:19 host = host.strip("[]") 18:16:19 err = None 18:16:19 18:16:19 # Using the value from allowed_gai_family() in the context of getaddrinfo lets 18:16:19 # us select whether to work with IPv4 DNS records, IPv6 records, or both. 18:16:19 # The original create_connection function always returns all records. 18:16:19 family = allowed_gai_family() 18:16:19 18:16:19 try: 18:16:19 host.encode("idna") 18:16:19 except UnicodeError: 18:16:19 raise LocationParseError(f"'{host}', label empty or too long") from None 18:16:19 18:16:19 for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM): 18:16:19 af, socktype, proto, canonname, sa = res 18:16:19 sock = None 18:16:19 try: 18:16:19 sock = socket.socket(af, socktype, proto) 18:16:19 18:16:19 # If provided, set socket level options before connecting. 18:16:19 _set_socket_options(sock, socket_options) 18:16:19 18:16:19 if timeout is not _DEFAULT_TIMEOUT: 18:16:19 sock.settimeout(timeout) 18:16:19 if source_address: 18:16:19 sock.bind(source_address) 18:16:19 > sock.connect(sa) 18:16:19 E ConnectionRefusedError: [Errno 111] Connection refused 18:16:19 18:16:19 ../.tox/tests121/lib/python3.11/site-packages/urllib3/util/connection.py:73: ConnectionRefusedError 18:16:19 18:16:19 The above exception was the direct cause of the following exception: 18:16:19 18:16:19 self = 18:16:19 method = 'DELETE' 18:16:19 url = '/rests/data/network-topology:network-topology/topology=topology-netconf/node=ROADMA01' 18:16:19 body = None 18:16:19 headers = {'User-Agent': 'python-requests/2.32.3', 'Accept-Encoding': 'gzip, deflate', 'Accept': 'application/json', 'Connection': 'keep-alive', 'Content-Type': 'application/json', 'Content-Length': '0', 'Authorization': 'Basic YWRtaW46YWRtaW4='} 18:16:19 retries = Retry(total=0, connect=None, read=False, redirect=None, status=None) 18:16:19 redirect = False, assert_same_host = False 18:16:19 timeout = 
Timeout(connect=10, read=10, total=None), pool_timeout = None 18:16:19 release_conn = False, chunked = False, body_pos = None, preload_content = False 18:16:19 decode_content = False, response_kw = {} 18:16:19 parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/rests/data/network-topology:network-topology/topology=topology-netconf/node=ROADMA01', query=None, fragment=None) 18:16:19 destination_scheme = None, conn = None, release_this_conn = True 18:16:19 http_tunnel_required = False, err = None, clean_exit = False 18:16:19 18:16:19 def urlopen( # type: ignore[override] 18:16:19 self, 18:16:19 method: str, 18:16:19 url: str, 18:16:19 body: _TYPE_BODY | None = None, 18:16:19 headers: typing.Mapping[str, str] | None = None, 18:16:19 retries: Retry | bool | int | None = None, 18:16:19 redirect: bool = True, 18:16:19 assert_same_host: bool = True, 18:16:19 timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 18:16:19 pool_timeout: int | None = None, 18:16:19 release_conn: bool | None = None, 18:16:19 chunked: bool = False, 18:16:19 body_pos: _TYPE_BODY_POSITION | None = None, 18:16:19 preload_content: bool = True, 18:16:19 decode_content: bool = True, 18:16:19 **response_kw: typing.Any, 18:16:19 ) -> BaseHTTPResponse: 18:16:19 """ 18:16:19 Get a connection from the pool and perform an HTTP request. This is the 18:16:19 lowest level call for making a request, so you'll need to specify all 18:16:19 the raw details. 18:16:19 18:16:19 .. note:: 18:16:19 18:16:19 More commonly, it's appropriate to use a convenience method 18:16:19 such as :meth:`request`. 18:16:19 18:16:19 .. note:: 18:16:19 18:16:19 `release_conn` will only behave as expected if 18:16:19 `preload_content=False` because we want to make 18:16:19 `preload_content=False` the default behaviour someday soon without 18:16:19 breaking backwards compatibility. 18:16:19 18:16:19 :param method: 18:16:19 HTTP request method (such as GET, POST, PUT, etc.) 
18:16:19 18:16:19 :param url: 18:16:19 The URL to perform the request on. 18:16:19 18:16:19 :param body: 18:16:19 Data to send in the request body, either :class:`str`, :class:`bytes`, 18:16:19 an iterable of :class:`str`/:class:`bytes`, or a file-like object. 18:16:19 18:16:19 :param headers: 18:16:19 Dictionary of custom headers to send, such as User-Agent, 18:16:19 If-None-Match, etc. If None, pool headers are used. If provided, 18:16:19 these headers completely replace any pool-specific headers. 18:16:19 18:16:19 :param retries: 18:16:19 Configure the number of retries to allow before raising a 18:16:19 :class:`~urllib3.exceptions.MaxRetryError` exception. 18:16:19 18:16:19 If ``None`` (default) will retry 3 times, see ``Retry.DEFAULT``. Pass a 18:16:19 :class:`~urllib3.util.retry.Retry` object for fine-grained control 18:16:19 over different types of retries. 18:16:19 Pass an integer number to retry connection errors that many times, 18:16:19 but no other types of errors. Pass zero to never retry. 18:16:19 18:16:19 If ``False``, then retries are disabled and any exception is raised 18:16:19 immediately. Also, instead of raising a MaxRetryError on redirects, 18:16:19 the redirect response will be returned. 18:16:19 18:16:19 :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. 18:16:19 18:16:19 :param redirect: 18:16:19 If True, automatically handle redirects (status codes 301, 302, 18:16:19 303, 307, 308). Each redirect counts as a retry. Disabling retries 18:16:19 will disable redirect, too. 18:16:19 18:16:19 :param assert_same_host: 18:16:19 If ``True``, will make sure that the host of the pool requests is 18:16:19 consistent else will raise HostChangedError. When ``False``, you can 18:16:19 use the pool on an HTTP proxy and request foreign hosts. 18:16:19 18:16:19 :param timeout: 18:16:19 If specified, overrides the default timeout for this one 18:16:19 request. 
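The `retries` parameter documented above accepts three shapes (`None`, an int, or `False`), each with different semantics. A hedged model of that documented coercion (names and return shape are invented for illustration; the real code path is `Retry.from_int`):

```python
def normalize_retries(retries, default_total=3):
    """Hypothetical model of the documented urlopen() coercion:
    None  -> retry default_total times;
    False -> retries disabled, exceptions re-raised immediately;
    int   -> retry connection errors only, that many times."""
    if retries is None:
        return {"total": default_total, "connect_only": False, "disabled": False}
    if retries is False:  # must be checked before the int branch
        return {"total": 0, "connect_only": False, "disabled": True}
    return {"total": int(retries), "connect_only": True, "disabled": False}
```

Note the `False` check must precede any integer handling, since `isinstance(False, int)` is true in Python.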
It may be a float (in seconds) or an instance of 18:16:19 :class:`urllib3.util.Timeout`. 18:16:19 18:16:19 :param pool_timeout: 18:16:19 If set and the pool is set to block=True, then this method will 18:16:19 block for ``pool_timeout`` seconds and raise EmptyPoolError if no 18:16:19 connection is available within the time period. 18:16:19 18:16:19 :param bool preload_content: 18:16:19 If True, the response's body will be preloaded into memory. 18:16:19 18:16:19 :param bool decode_content: 18:16:19 If True, will attempt to decode the body based on the 18:16:19 'content-encoding' header. 18:16:19 18:16:19 :param release_conn: 18:16:19 If False, then the urlopen call will not release the connection 18:16:19 back into the pool once a response is received (but will release if 18:16:19 you read the entire contents of the response such as when 18:16:19 `preload_content=True`). This is useful if you're not preloading 18:16:19 the response's content immediately. You will need to call 18:16:19 ``r.release_conn()`` on the response ``r`` to return the connection 18:16:19 back into the pool. If None, it takes the value of ``preload_content`` 18:16:19 which defaults to ``True``. 18:16:19 18:16:19 :param bool chunked: 18:16:19 If True, urllib3 will send the body using chunked transfer 18:16:19 encoding. Otherwise, urllib3 will send the body using the standard 18:16:19 content-length form. Defaults to False. 18:16:19 18:16:19 :param int body_pos: 18:16:19 Position to seek to in file-like body in the event of a retry or 18:16:19 redirect. Typically this won't need to be set because urllib3 will 18:16:19 auto-populate the value when needed. 
18:16:19 """ 18:16:19 parsed_url = parse_url(url) 18:16:19 destination_scheme = parsed_url.scheme 18:16:19 18:16:19 if headers is None: 18:16:19 headers = self.headers 18:16:19 18:16:19 if not isinstance(retries, Retry): 18:16:19 retries = Retry.from_int(retries, redirect=redirect, default=self.retries) 18:16:19 18:16:19 if release_conn is None: 18:16:19 release_conn = preload_content 18:16:19 18:16:19 # Check host 18:16:19 if assert_same_host and not self.is_same_host(url): 18:16:19 raise HostChangedError(self, url, retries) 18:16:19 18:16:19 # Ensure that the URL we're connecting to is properly encoded 18:16:19 if url.startswith("/"): 18:16:19 url = to_str(_encode_target(url)) 18:16:19 else: 18:16:19 url = to_str(parsed_url.url) 18:16:19 18:16:19 conn = None 18:16:19 18:16:19 # Track whether `conn` needs to be released before 18:16:19 # returning/raising/recursing. Update this variable if necessary, and 18:16:19 # leave `release_conn` constant throughout the function. That way, if 18:16:19 # the function recurses, the original value of `release_conn` will be 18:16:19 # passed down into the recursive call, and its value will be respected. 18:16:19 # 18:16:19 # See issue #651 [1] for details. 18:16:19 # 18:16:19 # [1] 18:16:19 release_this_conn = release_conn 18:16:19 18:16:19 http_tunnel_required = connection_requires_http_tunnel( 18:16:19 self.proxy, self.proxy_config, destination_scheme 18:16:19 ) 18:16:19 18:16:19 # Merge the proxy headers. Only done when not using HTTP CONNECT. We 18:16:19 # have to copy the headers dict so we can safely change it without those 18:16:19 # changes being reflected in anyone else's copy. 18:16:19 if not http_tunnel_required: 18:16:19 headers = headers.copy() # type: ignore[attr-defined] 18:16:19 headers.update(self.proxy_headers) # type: ignore[union-attr] 18:16:19 18:16:19 # Must keep the exception bound to a separate variable or else Python 3 18:16:19 # complains about UnboundLocalError. 
18:16:19 err = None 18:16:19 18:16:19 # Keep track of whether we cleanly exited the except block. This 18:16:19 # ensures we do proper cleanup in finally. 18:16:19 clean_exit = False 18:16:19 18:16:19 # Rewind body position, if needed. Record current position 18:16:19 # for future rewinds in the event of a redirect/retry. 18:16:19 body_pos = set_file_position(body, body_pos) 18:16:19 18:16:19 try: 18:16:19 # Request a connection from the queue. 18:16:19 timeout_obj = self._get_timeout(timeout) 18:16:19 conn = self._get_conn(timeout=pool_timeout) 18:16:19 18:16:19 conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] 18:16:19 18:16:19 # Is this a closed/new connection that requires CONNECT tunnelling? 18:16:19 if self.proxy is not None and http_tunnel_required and conn.is_closed: 18:16:19 try: 18:16:19 self._prepare_proxy(conn) 18:16:19 except (BaseSSLError, OSError, SocketTimeout) as e: 18:16:19 self._raise_timeout( 18:16:19 err=e, url=self.proxy.url, timeout_value=conn.timeout 18:16:19 ) 18:16:19 raise 18:16:19 18:16:19 # If we're going to release the connection in ``finally:``, then 18:16:19 # the response doesn't need to know about the connection. Otherwise 18:16:19 # it will also try to release it and we'll have a double-release 18:16:19 # mess. 
18:16:19 response_conn = conn if not release_conn else None 18:16:19 18:16:19 # Make the request on the HTTPConnection object 18:16:19 > response = self._make_request( 18:16:19 conn, 18:16:19 method, 18:16:19 url, 18:16:19 timeout=timeout_obj, 18:16:19 body=body, 18:16:19 headers=headers, 18:16:19 chunked=chunked, 18:16:19 retries=retries, 18:16:19 response_conn=response_conn, 18:16:19 preload_content=preload_content, 18:16:19 decode_content=decode_content, 18:16:19 **response_kw, 18:16:19 ) 18:16:19 18:16:19 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connectionpool.py:789: 18:16:19 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 18:16:19 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connectionpool.py:495: in _make_request 18:16:19 conn.request( 18:16:19 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:441: in request 18:16:19 self.endheaders() 18:16:19 /opt/pyenv/versions/3.11.7/lib/python3.11/http/client.py:1289: in endheaders 18:16:19 self._send_output(message_body, encode_chunked=encode_chunked) 18:16:19 /opt/pyenv/versions/3.11.7/lib/python3.11/http/client.py:1048: in _send_output 18:16:19 self.send(msg) 18:16:19 /opt/pyenv/versions/3.11.7/lib/python3.11/http/client.py:986: in send 18:16:19 self.connect() 18:16:19 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:279: in connect 18:16:19 self.sock = self._new_conn() 18:16:19 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 18:16:19 18:16:19 self = 18:16:19 18:16:19 def _new_conn(self) -> socket.socket: 18:16:19 """Establish a socket connection and set nodelay settings on it. 18:16:19 18:16:19 :return: New socket connection. 
18:16:19 """ 18:16:19 try: 18:16:19 sock = connection.create_connection( 18:16:19 (self._dns_host, self.port), 18:16:19 self.timeout, 18:16:19 source_address=self.source_address, 18:16:19 socket_options=self.socket_options, 18:16:19 ) 18:16:19 except socket.gaierror as e: 18:16:19 raise NameResolutionError(self.host, self, e) from e 18:16:19 except SocketTimeout as e: 18:16:19 raise ConnectTimeoutError( 18:16:19 self, 18:16:19 f"Connection to {self.host} timed out. (connect timeout={self.timeout})", 18:16:19 ) from e 18:16:19 18:16:19 except OSError as e: 18:16:19 > raise NewConnectionError( 18:16:19 self, f"Failed to establish a new connection: {e}" 18:16:19 ) from e 18:16:19 E urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused 18:16:19 18:16:19 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:214: NewConnectionError 18:16:19 18:16:19 The above exception was the direct cause of the following exception: 18:16:19 18:16:19 self = 18:16:19 request = , stream = False 18:16:19 timeout = Timeout(connect=10, read=10, total=None), verify = True, cert = None 18:16:19 proxies = OrderedDict() 18:16:19 18:16:19 def send( 18:16:19 self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 18:16:19 ): 18:16:19 """Sends PreparedRequest object. Returns Response object. 18:16:19 18:16:19 :param request: The :class:`PreparedRequest ` being sent. 18:16:19 :param stream: (optional) Whether to stream the request content. 18:16:19 :param timeout: (optional) How long to wait for the server to send 18:16:19 data before giving up, as a float, or a :ref:`(connect timeout, 18:16:19 read timeout) ` tuple. 
18:16:19 :type timeout: float or tuple or urllib3 Timeout object 18:16:19 :param verify: (optional) Either a boolean, in which case it controls whether 18:16:19 we verify the server's TLS certificate, or a string, in which case it 18:16:19 must be a path to a CA bundle to use 18:16:19 :param cert: (optional) Any user-provided SSL certificate to be trusted. 18:16:19 :param proxies: (optional) The proxies dictionary to apply to the request. 18:16:19 :rtype: requests.Response 18:16:19 """ 18:16:19 18:16:19 try: 18:16:19 conn = self.get_connection_with_tls_context( 18:16:19 request, verify, proxies=proxies, cert=cert 18:16:19 ) 18:16:19 except LocationValueError as e: 18:16:19 raise InvalidURL(e, request=request) 18:16:19 18:16:19 self.cert_verify(conn, request.url, verify, cert) 18:16:19 url = self.request_url(request, proxies) 18:16:19 self.add_headers( 18:16:19 request, 18:16:19 stream=stream, 18:16:19 timeout=timeout, 18:16:19 verify=verify, 18:16:19 cert=cert, 18:16:19 proxies=proxies, 18:16:19 ) 18:16:19 18:16:19 chunked = not (request.body is None or "Content-Length" in request.headers) 18:16:19 18:16:19 if isinstance(timeout, tuple): 18:16:19 try: 18:16:19 connect, read = timeout 18:16:19 timeout = TimeoutSauce(connect=connect, read=read) 18:16:19 except ValueError: 18:16:19 raise ValueError( 18:16:19 f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 18:16:19 f"or a single float to set both timeouts to the same value." 
18:16:19 ) 18:16:19 elif isinstance(timeout, TimeoutSauce): 18:16:19 pass 18:16:19 else: 18:16:19 timeout = TimeoutSauce(connect=timeout, read=timeout) 18:16:19 18:16:19 try: 18:16:19 > resp = conn.urlopen( 18:16:19 method=request.method, 18:16:19 url=url, 18:16:19 body=request.body, 18:16:19 headers=request.headers, 18:16:19 redirect=False, 18:16:19 assert_same_host=False, 18:16:19 preload_content=False, 18:16:19 decode_content=False, 18:16:19 retries=self.max_retries, 18:16:19 timeout=timeout, 18:16:19 chunked=chunked, 18:16:19 ) 18:16:19 18:16:19 ../.tox/tests121/lib/python3.11/site-packages/requests/adapters.py:667: 18:16:19 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 18:16:19 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connectionpool.py:843: in urlopen 18:16:19 retries = retries.increment( 18:16:19 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 18:16:19 18:16:19 self = Retry(total=0, connect=None, read=False, redirect=None, status=None) 18:16:19 method = 'DELETE' 18:16:19 url = '/rests/data/network-topology:network-topology/topology=topology-netconf/node=ROADMA01' 18:16:19 response = None 18:16:19 error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused') 18:16:19 _pool = 18:16:19 _stacktrace = 18:16:19 18:16:19 def increment( 18:16:19 self, 18:16:19 method: str | None = None, 18:16:19 url: str | None = None, 18:16:19 response: BaseHTTPResponse | None = None, 18:16:19 error: Exception | None = None, 18:16:19 _pool: ConnectionPool | None = None, 18:16:19 _stacktrace: TracebackType | None = None, 18:16:19 ) -> Self: 18:16:19 """Return a new Retry object with incremented retry counters. 18:16:19 18:16:19 :param response: A response object, or None, if the server did not 18:16:19 return a response. 
18:16:19 :type response: :class:`~urllib3.response.BaseHTTPResponse` 18:16:19 :param Exception error: An error encountered during the request, or 18:16:19 None if the response was received successfully. 18:16:19 18:16:19 :return: A new ``Retry`` object. 18:16:19 """ 18:16:19 if self.total is False and error: 18:16:19 # Disabled, indicate to re-raise the error. 18:16:19 raise reraise(type(error), error, _stacktrace) 18:16:19 18:16:19 total = self.total 18:16:19 if total is not None: 18:16:19 total -= 1 18:16:19 18:16:19 connect = self.connect 18:16:19 read = self.read 18:16:19 redirect = self.redirect 18:16:19 status_count = self.status 18:16:19 other = self.other 18:16:19 cause = "unknown" 18:16:19 status = None 18:16:19 redirect_location = None 18:16:19 18:16:19 if error and self._is_connection_error(error): 18:16:19 # Connect retry? 18:16:19 if connect is False: 18:16:19 raise reraise(type(error), error, _stacktrace) 18:16:19 elif connect is not None: 18:16:19 connect -= 1 18:16:19 18:16:19 elif error and self._is_read_error(error): 18:16:19 # Read retry? 18:16:19 if read is False or method is None or not self._is_method_retryable(method): 18:16:19 raise reraise(type(error), error, _stacktrace) 18:16:19 elif read is not None: 18:16:19 read -= 1 18:16:19 18:16:19 elif error: 18:16:19 # Other retry? 18:16:19 if other is not None: 18:16:19 other -= 1 18:16:19 18:16:19 elif response and response.get_redirect_location(): 18:16:19 # Redirect retry? 
18:16:19             if redirect is not None:
18:16:19 redirect -= 1
18:16:19 cause = "too many redirects"
18:16:19 response_redirect_location = response.get_redirect_location()
18:16:19 if response_redirect_location:
18:16:19 redirect_location = response_redirect_location
18:16:19 status = response.status
18:16:19
18:16:19 else:
18:16:19 # Incrementing because of a server error like a 500 in
18:16:19 # status_forcelist and the given method is in the allowed_methods
18:16:19 cause = ResponseError.GENERIC_ERROR
18:16:19 if response and response.status:
18:16:19 if status_count is not None:
18:16:19 status_count -= 1
18:16:19 cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status)
18:16:19 status = response.status
18:16:19
18:16:19 history = self.history + (
18:16:19 RequestHistory(method, url, error, status, redirect_location),
18:16:19 )
18:16:19
18:16:19 new_retry = self.new(
18:16:19 total=total,
18:16:19 connect=connect,
18:16:19 read=read,
18:16:19 redirect=redirect,
18:16:19 status=status_count,
18:16:19 other=other,
18:16:19 history=history,
18:16:19 )
18:16:19
18:16:19 if new_retry.is_exhausted():
18:16:19 reason = error or ResponseError(cause)
18:16:19 > raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type]
18:16:19 E urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=8182): Max retries exceeded with url: /rests/data/network-topology:network-topology/topology=topology-netconf/node=ROADMA01 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused'))
18:16:19
18:16:19 ../.tox/tests121/lib/python3.11/site-packages/urllib3/util/retry.py:519: MaxRetryError
18:16:19
18:16:19 During handling of the above exception, another exception occurred:
18:16:19
18:16:19 self = 
18:16:19
18:16:19 def test_41_disconnect_ROADMA(self):
18:16:19 > response = test_utils.unmount_device("ROADMA01")
18:16:19
18:16:19 transportpce_tests/1.2.1/test03_topology.py:781:
18:16:19 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
18:16:19 transportpce_tests/common/test_utils.py:358: in unmount_device
18:16:19 response = delete_request(url[RESTCONF_VERSION].format('{}', node))
18:16:19 transportpce_tests/common/test_utils.py:133: in delete_request
18:16:19 return requests.request(
18:16:19 ../.tox/tests121/lib/python3.11/site-packages/requests/api.py:59: in request
18:16:19 return session.request(method=method, url=url, **kwargs)
18:16:19 ../.tox/tests121/lib/python3.11/site-packages/requests/sessions.py:589: in request
18:16:19 resp = self.send(prep, **send_kwargs)
18:16:19 ../.tox/tests121/lib/python3.11/site-packages/requests/sessions.py:703: in send
18:16:19 r = adapter.send(request, **kwargs)
18:16:19 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
18:16:19
18:16:19 self = 
18:16:19 request = , stream = False
18:16:19 timeout = Timeout(connect=10, read=10, total=None), verify = True, cert = None
18:16:19 proxies = OrderedDict()
18:16:19
18:16:19 def send(
18:16:19 self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None
18:16:19 ):
18:16:19 """Sends PreparedRequest object. Returns Response object.
18:16:19
18:16:19 :param request: The :class:`PreparedRequest ` being sent.
18:16:19 :param stream: (optional) Whether to stream the request content.
18:16:19 :param timeout: (optional) How long to wait for the server to send
18:16:19 data before giving up, as a float, or a :ref:`(connect timeout,
18:16:19 read timeout) ` tuple.
18:16:19 :type timeout: float or tuple or urllib3 Timeout object
18:16:19 :param verify: (optional) Either a boolean, in which case it controls whether
18:16:19 we verify the server's TLS certificate, or a string, in which case it
18:16:19 must be a path to a CA bundle to use
18:16:19 :param cert: (optional) Any user-provided SSL certificate to be trusted.
18:16:19 :param proxies: (optional) The proxies dictionary to apply to the request.
18:16:19 :rtype: requests.Response
18:16:19 """
18:16:19
18:16:19 try:
18:16:19 conn = self.get_connection_with_tls_context(
18:16:19 request, verify, proxies=proxies, cert=cert
18:16:19 )
18:16:19 except LocationValueError as e:
18:16:19 raise InvalidURL(e, request=request)
18:16:19
18:16:19 self.cert_verify(conn, request.url, verify, cert)
18:16:19 url = self.request_url(request, proxies)
18:16:19 self.add_headers(
18:16:19 request,
18:16:19 stream=stream,
18:16:19 timeout=timeout,
18:16:19 verify=verify,
18:16:19 cert=cert,
18:16:19 proxies=proxies,
18:16:19 )
18:16:19
18:16:19 chunked = not (request.body is None or "Content-Length" in request.headers)
18:16:19
18:16:19 if isinstance(timeout, tuple):
18:16:19 try:
18:16:19 connect, read = timeout
18:16:19 timeout = TimeoutSauce(connect=connect, read=read)
18:16:19 except ValueError:
18:16:19 raise ValueError(
18:16:19 f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, "
18:16:19 f"or a single float to set both timeouts to the same value."
18:16:19 )
18:16:19 elif isinstance(timeout, TimeoutSauce):
18:16:19 pass
18:16:19 else:
18:16:19 timeout = TimeoutSauce(connect=timeout, read=timeout)
18:16:19
18:16:19 try:
18:16:19 resp = conn.urlopen(
18:16:19 method=request.method,
18:16:19 url=url,
18:16:19 body=request.body,
18:16:19 headers=request.headers,
18:16:19 redirect=False,
18:16:19 assert_same_host=False,
18:16:19 preload_content=False,
18:16:19 decode_content=False,
18:16:19 retries=self.max_retries,
18:16:19 timeout=timeout,
18:16:19 chunked=chunked,
18:16:19 )
18:16:19
18:16:19 except (ProtocolError, OSError) as err:
18:16:19 raise ConnectionError(err, request=request)
18:16:19
18:16:19 except MaxRetryError as e:
18:16:19 if isinstance(e.reason, ConnectTimeoutError):
18:16:19 # TODO: Remove this in 3.0.0: see #2811
18:16:19 if not isinstance(e.reason, NewConnectionError):
18:16:19 raise ConnectTimeout(e, request=request)
18:16:19
18:16:19 if isinstance(e.reason, ResponseError):
18:16:19 raise RetryError(e, request=request)
18:16:19
18:16:19 if isinstance(e.reason, _ProxyError):
18:16:19 raise ProxyError(e, request=request)
18:16:19
18:16:19 if isinstance(e.reason, _SSLError):
18:16:19 # This branch is for urllib3 v1.22 and later.
18:16:19 raise SSLError(e, request=request)
18:16:19
18:16:19 > raise ConnectionError(e, request=request)
18:16:19 E requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=8182): Max retries exceeded with url: /rests/data/network-topology:network-topology/topology=topology-netconf/node=ROADMA01 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused'))
18:16:19
18:16:19 ../.tox/tests121/lib/python3.11/site-packages/requests/adapters.py:700: ConnectionError
18:16:19 ______________ TransportPCETopologyTesting.test_42_getClliNetwork ______________
18:16:19
18:16:19 self = 
18:16:19
18:16:19 def _new_conn(self) -> socket.socket:
18:16:19 """Establish a socket connection and set nodelay settings on it.
18:16:19
18:16:19 :return: New socket connection.
18:16:19 """
18:16:19 try:
18:16:19 > sock = connection.create_connection(
18:16:19 (self._dns_host, self.port),
18:16:19 self.timeout,
18:16:19 source_address=self.source_address,
18:16:19 socket_options=self.socket_options,
18:16:19 )
18:16:19
18:16:19 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:199:
18:16:19 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
18:16:19 ../.tox/tests121/lib/python3.11/site-packages/urllib3/util/connection.py:85: in create_connection
18:16:19 raise err
18:16:19 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
18:16:19
18:16:19 address = ('localhost', 8182), timeout = 10, source_address = None
18:16:19 socket_options = [(6, 1, 1)]
18:16:19
18:16:19 def create_connection(
18:16:19 address: tuple[str, int],
18:16:19 timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT,
18:16:19 source_address: tuple[str, int] | None = None,
18:16:19 socket_options: _TYPE_SOCKET_OPTIONS | None = None,
18:16:19 ) -> socket.socket:
18:16:19 """Connect to *address* and return the socket object.
18:16:19
18:16:19 Convenience function. Connect to *address* (a 2-tuple ``(host,
18:16:19 port)``) and return the socket object. Passing the optional
18:16:19 *timeout* parameter will set the timeout on the socket instance
18:16:19 before attempting to connect. If no *timeout* is supplied, the
18:16:19 global default timeout setting returned by :func:`socket.getdefaulttimeout`
18:16:19 is used. If *source_address* is set it must be a tuple of (host, port)
18:16:19 for the socket to bind as a source address before making the connection.
18:16:19 An host of '' or port 0 tells the OS to use the default.
18:16:19 """
18:16:19
18:16:19 host, port = address
18:16:19 if host.startswith("["):
18:16:19 host = host.strip("[]")
18:16:19 err = None
18:16:19
18:16:19 # Using the value from allowed_gai_family() in the context of getaddrinfo lets
18:16:19 # us select whether to work with IPv4 DNS records, IPv6 records, or both.
18:16:19 # The original create_connection function always returns all records.
18:16:19 family = allowed_gai_family()
18:16:19
18:16:19 try:
18:16:19 host.encode("idna")
18:16:19 except UnicodeError:
18:16:19 raise LocationParseError(f"'{host}', label empty or too long") from None
18:16:19
18:16:19 for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM):
18:16:19 af, socktype, proto, canonname, sa = res
18:16:19 sock = None
18:16:19 try:
18:16:19 sock = socket.socket(af, socktype, proto)
18:16:19
18:16:19 # If provided, set socket level options before connecting.
18:16:19 _set_socket_options(sock, socket_options)
18:16:19
18:16:19 if timeout is not _DEFAULT_TIMEOUT:
18:16:19 sock.settimeout(timeout)
18:16:19 if source_address:
18:16:19 sock.bind(source_address)
18:16:19 > sock.connect(sa)
18:16:19 E ConnectionRefusedError: [Errno 111] Connection refused
18:16:19
18:16:19 ../.tox/tests121/lib/python3.11/site-packages/urllib3/util/connection.py:73: ConnectionRefusedError
18:16:19
18:16:19 The above exception was the direct cause of the following exception:
18:16:19
18:16:19 self = 
18:16:19 method = 'GET'
18:16:19 url = '/rests/data/ietf-network:networks/network=clli-network?content=config'
18:16:19 body = None
18:16:19 headers = {'User-Agent': 'python-requests/2.32.3', 'Accept-Encoding': 'gzip, deflate', 'Accept': 'application/json', 'Connection': 'keep-alive', 'Content-Type': 'application/json', 'Authorization': 'Basic YWRtaW46YWRtaW4='}
18:16:19 retries = Retry(total=0, connect=None, read=False, redirect=None, status=None)
18:16:19 redirect = False, assert_same_host = False
18:16:19 timeout = Timeout(connect=10, read=10, total=None), pool_timeout = None
18:16:19 release_conn = False, chunked = False, body_pos = None, preload_content = False
18:16:19 decode_content = False, response_kw = {}
18:16:19 parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/rests/data/ietf-network:networks/network=clli-network', query='content=config', fragment=None)
18:16:19 destination_scheme = None, conn = None, release_this_conn = True
18:16:19 http_tunnel_required = False, err = None, clean_exit = False
18:16:19
18:16:19 def urlopen( # type: ignore[override]
18:16:19 self,
18:16:19 method: str,
18:16:19 url: str,
18:16:19 body: _TYPE_BODY | None = None,
18:16:19 headers: typing.Mapping[str, str] | None = None,
18:16:19 retries: Retry | bool | int | None = None,
18:16:19 redirect: bool = True,
18:16:19 assert_same_host: bool = True,
18:16:19 timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT,
18:16:19 pool_timeout: int | None = None,
18:16:19 release_conn: bool | None = None,
18:16:19 chunked: bool = False,
18:16:19 body_pos: _TYPE_BODY_POSITION | None = None,
18:16:19 preload_content: bool = True,
18:16:19 decode_content: bool = True,
18:16:19 **response_kw: typing.Any,
18:16:19 ) -> BaseHTTPResponse:
18:16:19 """
18:16:19 Get a connection from the pool and perform an HTTP request. This is the
18:16:19 lowest level call for making a request, so you'll need to specify all
18:16:19 the raw details.
18:16:19
18:16:19 .. note::
18:16:19
18:16:19 More commonly, it's appropriate to use a convenience method
18:16:19 such as :meth:`request`.
18:16:19
18:16:19 .. note::
18:16:19
18:16:19 `release_conn` will only behave as expected if
18:16:19 `preload_content=False` because we want to make
18:16:19 `preload_content=False` the default behaviour someday soon without
18:16:19 breaking backwards compatibility.
18:16:19
18:16:19 :param method:
18:16:19 HTTP request method (such as GET, POST, PUT, etc.)
18:16:19
18:16:19 :param url:
18:16:19 The URL to perform the request on.
18:16:19
18:16:19 :param body:
18:16:19 Data to send in the request body, either :class:`str`, :class:`bytes`,
18:16:19 an iterable of :class:`str`/:class:`bytes`, or a file-like object.
18:16:19
18:16:19 :param headers:
18:16:19 Dictionary of custom headers to send, such as User-Agent,
18:16:19 If-None-Match, etc. If None, pool headers are used. If provided,
18:16:19 these headers completely replace any pool-specific headers.
18:16:19
18:16:19 :param retries:
18:16:19 Configure the number of retries to allow before raising a
18:16:19 :class:`~urllib3.exceptions.MaxRetryError` exception.
18:16:19
18:16:19 If ``None`` (default) will retry 3 times, see ``Retry.DEFAULT``. Pass a
18:16:19 :class:`~urllib3.util.retry.Retry` object for fine-grained control
18:16:19 over different types of retries.
18:16:19 Pass an integer number to retry connection errors that many times,
18:16:19 but no other types of errors. Pass zero to never retry.
18:16:19
18:16:19 If ``False``, then retries are disabled and any exception is raised
18:16:19 immediately. Also, instead of raising a MaxRetryError on redirects,
18:16:19 the redirect response will be returned.
18:16:19
18:16:19 :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int.
18:16:19
18:16:19 :param redirect:
18:16:19 If True, automatically handle redirects (status codes 301, 302,
18:16:19 303, 307, 308). Each redirect counts as a retry. Disabling retries
18:16:19 will disable redirect, too.
18:16:19
18:16:19 :param assert_same_host:
18:16:19 If ``True``, will make sure that the host of the pool requests is
18:16:19 consistent else will raise HostChangedError. When ``False``, you can
18:16:19 use the pool on an HTTP proxy and request foreign hosts.
18:16:19
18:16:19 :param timeout:
18:16:19 If specified, overrides the default timeout for this one
18:16:19 request. It may be a float (in seconds) or an instance of
18:16:19 :class:`urllib3.util.Timeout`.
18:16:19
18:16:19 :param pool_timeout:
18:16:19 If set and the pool is set to block=True, then this method will
18:16:19 block for ``pool_timeout`` seconds and raise EmptyPoolError if no
18:16:19 connection is available within the time period.
18:16:19
18:16:19 :param bool preload_content:
18:16:19 If True, the response's body will be preloaded into memory.
18:16:19
18:16:19 :param bool decode_content:
18:16:19 If True, will attempt to decode the body based on the
18:16:19 'content-encoding' header.
18:16:19
18:16:19 :param release_conn:
18:16:19 If False, then the urlopen call will not release the connection
18:16:19 back into the pool once a response is received (but will release if
18:16:19 you read the entire contents of the response such as when
18:16:19 `preload_content=True`). This is useful if you're not preloading
18:16:19 the response's content immediately. You will need to call
18:16:19 ``r.release_conn()`` on the response ``r`` to return the connection
18:16:19 back into the pool. If None, it takes the value of ``preload_content``
18:16:19 which defaults to ``True``.
18:16:19
18:16:19 :param bool chunked:
18:16:19 If True, urllib3 will send the body using chunked transfer
18:16:19 encoding. Otherwise, urllib3 will send the body using the standard
18:16:19 content-length form. Defaults to False.
18:16:19
18:16:19 :param int body_pos:
18:16:19 Position to seek to in file-like body in the event of a retry or
18:16:19 redirect. Typically this won't need to be set because urllib3 will
18:16:19 auto-populate the value when needed.
18:16:19 """
18:16:19 parsed_url = parse_url(url)
18:16:19 destination_scheme = parsed_url.scheme
18:16:19
18:16:19 if headers is None:
18:16:19 headers = self.headers
18:16:19
18:16:19 if not isinstance(retries, Retry):
18:16:19 retries = Retry.from_int(retries, redirect=redirect, default=self.retries)
18:16:19
18:16:19 if release_conn is None:
18:16:19 release_conn = preload_content
18:16:19
18:16:19 # Check host
18:16:19 if assert_same_host and not self.is_same_host(url):
18:16:19 raise HostChangedError(self, url, retries)
18:16:19
18:16:19 # Ensure that the URL we're connecting to is properly encoded
18:16:19 if url.startswith("/"):
18:16:19 url = to_str(_encode_target(url))
18:16:19 else:
18:16:19 url = to_str(parsed_url.url)
18:16:19
18:16:19 conn = None
18:16:19
18:16:19 # Track whether `conn` needs to be released before
18:16:19 # returning/raising/recursing. Update this variable if necessary, and
18:16:19 # leave `release_conn` constant throughout the function. That way, if
18:16:19 # the function recurses, the original value of `release_conn` will be
18:16:19 # passed down into the recursive call, and its value will be respected.
18:16:19 #
18:16:19 # See issue #651 [1] for details.
18:16:19 #
18:16:19 # [1] 
18:16:19 release_this_conn = release_conn
18:16:19
18:16:19 http_tunnel_required = connection_requires_http_tunnel(
18:16:19 self.proxy, self.proxy_config, destination_scheme
18:16:19 )
18:16:19
18:16:19 # Merge the proxy headers. Only done when not using HTTP CONNECT. We
18:16:19 # have to copy the headers dict so we can safely change it without those
18:16:19 # changes being reflected in anyone else's copy.
18:16:19 if not http_tunnel_required:
18:16:19 headers = headers.copy() # type: ignore[attr-defined]
18:16:19 headers.update(self.proxy_headers) # type: ignore[union-attr]
18:16:19
18:16:19 # Must keep the exception bound to a separate variable or else Python 3
18:16:19 # complains about UnboundLocalError.
18:16:19 err = None
18:16:19
18:16:19 # Keep track of whether we cleanly exited the except block. This
18:16:19 # ensures we do proper cleanup in finally.
18:16:19 clean_exit = False
18:16:19
18:16:19 # Rewind body position, if needed. Record current position
18:16:19 # for future rewinds in the event of a redirect/retry.
18:16:19 body_pos = set_file_position(body, body_pos)
18:16:19
18:16:19 try:
18:16:19 # Request a connection from the queue.
18:16:19 timeout_obj = self._get_timeout(timeout)
18:16:19 conn = self._get_conn(timeout=pool_timeout)
18:16:19
18:16:19 conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment]
18:16:19
18:16:19 # Is this a closed/new connection that requires CONNECT tunnelling?
18:16:19 if self.proxy is not None and http_tunnel_required and conn.is_closed:
18:16:19 try:
18:16:19 self._prepare_proxy(conn)
18:16:19 except (BaseSSLError, OSError, SocketTimeout) as e:
18:16:19 self._raise_timeout(
18:16:19 err=e, url=self.proxy.url, timeout_value=conn.timeout
18:16:19 )
18:16:19 raise
18:16:19
18:16:19 # If we're going to release the connection in ``finally:``, then
18:16:19 # the response doesn't need to know about the connection. Otherwise
18:16:19 # it will also try to release it and we'll have a double-release
18:16:19 # mess.
18:16:19 response_conn = conn if not release_conn else None
18:16:19
18:16:19 # Make the request on the HTTPConnection object
18:16:19 > response = self._make_request(
18:16:19 conn,
18:16:19 method,
18:16:19 url,
18:16:19 timeout=timeout_obj,
18:16:19 body=body,
18:16:19 headers=headers,
18:16:19 chunked=chunked,
18:16:19 retries=retries,
18:16:19 response_conn=response_conn,
18:16:19 preload_content=preload_content,
18:16:19 decode_content=decode_content,
18:16:19 **response_kw,
18:16:19 )
18:16:19
18:16:19 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connectionpool.py:789:
18:16:19 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
18:16:19 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connectionpool.py:495: in _make_request
18:16:19 conn.request(
18:16:19 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:441: in request
18:16:19 self.endheaders()
18:16:19 /opt/pyenv/versions/3.11.7/lib/python3.11/http/client.py:1289: in endheaders
18:16:19 self._send_output(message_body, encode_chunked=encode_chunked)
18:16:19 /opt/pyenv/versions/3.11.7/lib/python3.11/http/client.py:1048: in _send_output
18:16:19 self.send(msg)
18:16:19 /opt/pyenv/versions/3.11.7/lib/python3.11/http/client.py:986: in send
18:16:19 self.connect()
18:16:19 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:279: in connect
18:16:19 self.sock = self._new_conn()
18:16:19 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
18:16:19
18:16:19 self = 
18:16:19
18:16:19 def _new_conn(self) -> socket.socket:
18:16:19 """Establish a socket connection and set nodelay settings on it.
18:16:19
18:16:19 :return: New socket connection.
18:16:19 """
18:16:19 try:
18:16:19 sock = connection.create_connection(
18:16:19 (self._dns_host, self.port),
18:16:19 self.timeout,
18:16:19 source_address=self.source_address,
18:16:19 socket_options=self.socket_options,
18:16:19 )
18:16:19 except socket.gaierror as e:
18:16:19 raise NameResolutionError(self.host, self, e) from e
18:16:19 except SocketTimeout as e:
18:16:19 raise ConnectTimeoutError(
18:16:19 self,
18:16:19 f"Connection to {self.host} timed out. (connect timeout={self.timeout})",
18:16:19 ) from e
18:16:19
18:16:19 except OSError as e:
18:16:19 > raise NewConnectionError(
18:16:19 self, f"Failed to establish a new connection: {e}"
18:16:19 ) from e
18:16:19 E urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused
18:16:19
18:16:19 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:214: NewConnectionError
18:16:19
18:16:19 The above exception was the direct cause of the following exception:
18:16:19
18:16:19 self = 
18:16:19 request = , stream = False
18:16:19 timeout = Timeout(connect=10, read=10, total=None), verify = True, cert = None
18:16:19 proxies = OrderedDict()
18:16:19
18:16:19 def send(
18:16:19 self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None
18:16:19 ):
18:16:19 """Sends PreparedRequest object. Returns Response object.
18:16:19
18:16:19 :param request: The :class:`PreparedRequest ` being sent.
18:16:19 :param stream: (optional) Whether to stream the request content.
18:16:19 :param timeout: (optional) How long to wait for the server to send
18:16:19 data before giving up, as a float, or a :ref:`(connect timeout,
18:16:19 read timeout) ` tuple.
18:16:19 :type timeout: float or tuple or urllib3 Timeout object
18:16:19 :param verify: (optional) Either a boolean, in which case it controls whether
18:16:19 we verify the server's TLS certificate, or a string, in which case it
18:16:19 must be a path to a CA bundle to use
18:16:19 :param cert: (optional) Any user-provided SSL certificate to be trusted.
18:16:19 :param proxies: (optional) The proxies dictionary to apply to the request.
18:16:19 :rtype: requests.Response
18:16:19 """
18:16:19
18:16:19 try:
18:16:19 conn = self.get_connection_with_tls_context(
18:16:19 request, verify, proxies=proxies, cert=cert
18:16:19 )
18:16:19 except LocationValueError as e:
18:16:19 raise InvalidURL(e, request=request)
18:16:19
18:16:19 self.cert_verify(conn, request.url, verify, cert)
18:16:19 url = self.request_url(request, proxies)
18:16:19 self.add_headers(
18:16:19 request,
18:16:19 stream=stream,
18:16:19 timeout=timeout,
18:16:19 verify=verify,
18:16:19 cert=cert,
18:16:19 proxies=proxies,
18:16:19 )
18:16:19
18:16:19 chunked = not (request.body is None or "Content-Length" in request.headers)
18:16:19
18:16:19 if isinstance(timeout, tuple):
18:16:19 try:
18:16:19 connect, read = timeout
18:16:19 timeout = TimeoutSauce(connect=connect, read=read)
18:16:19 except ValueError:
18:16:19 raise ValueError(
18:16:19 f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, "
18:16:19 f"or a single float to set both timeouts to the same value."
18:16:19 )
18:16:19 elif isinstance(timeout, TimeoutSauce):
18:16:19 pass
18:16:19 else:
18:16:19 timeout = TimeoutSauce(connect=timeout, read=timeout)
18:16:19
18:16:19 try:
18:16:19 > resp = conn.urlopen(
18:16:19 method=request.method,
18:16:19 url=url,
18:16:19 body=request.body,
18:16:19 headers=request.headers,
18:16:19 redirect=False,
18:16:19 assert_same_host=False,
18:16:19 preload_content=False,
18:16:19 decode_content=False,
18:16:19 retries=self.max_retries,
18:16:19 timeout=timeout,
18:16:19 chunked=chunked,
18:16:19 )
18:16:19
18:16:19 ../.tox/tests121/lib/python3.11/site-packages/requests/adapters.py:667:
18:16:19 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
18:16:19 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connectionpool.py:843: in urlopen
18:16:19 retries = retries.increment(
18:16:19 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
18:16:19
18:16:19 self = Retry(total=0, connect=None, read=False, redirect=None, status=None)
18:16:19 method = 'GET'
18:16:19 url = '/rests/data/ietf-network:networks/network=clli-network?content=config'
18:16:19 response = None
18:16:19 error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')
18:16:19 _pool = 
18:16:19 _stacktrace = 
18:16:19
18:16:19 def increment(
18:16:19 self,
18:16:19 method: str | None = None,
18:16:19 url: str | None = None,
18:16:19 response: BaseHTTPResponse | None = None,
18:16:19 error: Exception | None = None,
18:16:19 _pool: ConnectionPool | None = None,
18:16:19 _stacktrace: TracebackType | None = None,
18:16:19 ) -> Self:
18:16:19 """Return a new Retry object with incremented retry counters.
18:16:19
18:16:19 :param response: A response object, or None, if the server did not
18:16:19 return a response.
18:16:19 :type response: :class:`~urllib3.response.BaseHTTPResponse`
18:16:19 :param Exception error: An error encountered during the request, or
18:16:19 None if the response was received successfully.
18:16:19
18:16:19 :return: A new ``Retry`` object.
18:16:19 """
18:16:19 if self.total is False and error:
18:16:19 # Disabled, indicate to re-raise the error.
18:16:19 raise reraise(type(error), error, _stacktrace)
18:16:19
18:16:19 total = self.total
18:16:19 if total is not None:
18:16:19 total -= 1
18:16:19
18:16:19 connect = self.connect
18:16:19 read = self.read
18:16:19 redirect = self.redirect
18:16:19 status_count = self.status
18:16:19 other = self.other
18:16:19 cause = "unknown"
18:16:19 status = None
18:16:19 redirect_location = None
18:16:19
18:16:19 if error and self._is_connection_error(error):
18:16:19 # Connect retry?
18:16:19 if connect is False:
18:16:19 raise reraise(type(error), error, _stacktrace)
18:16:19 elif connect is not None:
18:16:19 connect -= 1
18:16:19
18:16:19 elif error and self._is_read_error(error):
18:16:19 # Read retry?
18:16:19 if read is False or method is None or not self._is_method_retryable(method):
18:16:19 raise reraise(type(error), error, _stacktrace)
18:16:19 elif read is not None:
18:16:19 read -= 1
18:16:19
18:16:19 elif error:
18:16:19 # Other retry?
18:16:19 if other is not None:
18:16:19 other -= 1
18:16:19
18:16:19 elif response and response.get_redirect_location():
18:16:19 # Redirect retry?
18:16:19 if redirect is not None:
18:16:19 redirect -= 1
18:16:19 cause = "too many redirects"
18:16:19 response_redirect_location = response.get_redirect_location()
18:16:19 if response_redirect_location:
18:16:19 redirect_location = response_redirect_location
18:16:19 status = response.status
18:16:19
18:16:19 else:
18:16:19 # Incrementing because of a server error like a 500 in
18:16:19 # status_forcelist and the given method is in the allowed_methods
18:16:19 cause = ResponseError.GENERIC_ERROR
18:16:19 if response and response.status:
18:16:19 if status_count is not None:
18:16:19 status_count -= 1
18:16:19 cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status)
18:16:19 status = response.status
18:16:19
18:16:19 history = self.history + (
18:16:19 RequestHistory(method, url, error, status, redirect_location),
18:16:19 )
18:16:19
18:16:19 new_retry = self.new(
18:16:19 total=total,
18:16:19 connect=connect,
18:16:19 read=read,
18:16:19 redirect=redirect,
18:16:19 status=status_count,
18:16:19 other=other,
18:16:19 history=history,
18:16:19 )
18:16:19
18:16:19 if new_retry.is_exhausted():
18:16:19 reason = error or ResponseError(cause)
18:16:19 > raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type]
18:16:19 E urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=8182): Max retries exceeded with url: /rests/data/ietf-network:networks/network=clli-network?content=config (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused'))
18:16:19
18:16:19 ../.tox/tests121/lib/python3.11/site-packages/urllib3/util/retry.py:519: MaxRetryError
18:16:19
18:16:19 During handling of the above exception, another exception occurred:
18:16:19
18:16:19 self = 
18:16:19
18:16:19 def test_42_getClliNetwork(self):
18:16:19 > response = test_utils.get_ietf_network_request('clli-network', 'config')
18:16:19
18:16:19 transportpce_tests/1.2.1/test03_topology.py:788:
18:16:19 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
18:16:19 transportpce_tests/common/test_utils.py:495: in get_ietf_network_request
18:16:19 response = get_request(url[RESTCONF_VERSION].format(*format_args))
18:16:19 transportpce_tests/common/test_utils.py:116: in get_request
18:16:19 return requests.request(
18:16:19 ../.tox/tests121/lib/python3.11/site-packages/requests/api.py:59: in request
18:16:19 return session.request(method=method, url=url, **kwargs)
18:16:19 ../.tox/tests121/lib/python3.11/site-packages/requests/sessions.py:589: in request
18:16:19 resp = self.send(prep, **send_kwargs)
18:16:19 ../.tox/tests121/lib/python3.11/site-packages/requests/sessions.py:703: in send
18:16:19 r = adapter.send(request, **kwargs)
18:16:19 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
18:16:19
18:16:19 self = 
18:16:19 request = , stream = False
18:16:19 timeout = Timeout(connect=10, read=10, total=None), verify = True, cert = None
18:16:19 proxies = OrderedDict()
18:16:19
18:16:19 def send(
18:16:19 self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None
18:16:19 ):
18:16:19 """Sends PreparedRequest object. Returns Response object.
18:16:19
18:16:19 :param request: The :class:`PreparedRequest ` being sent.
18:16:19 :param stream: (optional) Whether to stream the request content.
18:16:19 :param timeout: (optional) How long to wait for the server to send
18:16:19 data before giving up, as a float, or a :ref:`(connect timeout,
18:16:19 read timeout) ` tuple.
18:16:19 :type timeout: float or tuple or urllib3 Timeout object
18:16:19 :param verify: (optional) Either a boolean, in which case it controls whether
18:16:19 we verify the server's TLS certificate, or a string, in which case it
18:16:19 must be a path to a CA bundle to use
18:16:19 :param cert: (optional) Any user-provided SSL certificate to be trusted.
18:16:19 :param proxies: (optional) The proxies dictionary to apply to the request.
18:16:19 :rtype: requests.Response
18:16:19 """
18:16:19
18:16:19 try:
18:16:19 conn = self.get_connection_with_tls_context(
18:16:19 request, verify, proxies=proxies, cert=cert
18:16:19 )
18:16:19 except LocationValueError as e:
18:16:19 raise InvalidURL(e, request=request)
18:16:19
18:16:19 self.cert_verify(conn, request.url, verify, cert)
18:16:19 url = self.request_url(request, proxies)
18:16:19 self.add_headers(
18:16:19 request,
18:16:19 stream=stream,
18:16:19 timeout=timeout,
18:16:19 verify=verify,
18:16:19 cert=cert,
18:16:19 proxies=proxies,
18:16:19 )
18:16:19
18:16:19 chunked = not (request.body is None or "Content-Length" in request.headers)
18:16:19
18:16:19 if isinstance(timeout, tuple):
18:16:19 try:
18:16:19 connect, read = timeout
18:16:19 timeout = TimeoutSauce(connect=connect, read=read)
18:16:19 except ValueError:
18:16:19 raise ValueError(
18:16:19 f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, "
18:16:19 f"or a single float to set both timeouts to the same value."
18:16:19 )
18:16:19 elif isinstance(timeout, TimeoutSauce):
18:16:19 pass
18:16:19 else:
18:16:19 timeout = TimeoutSauce(connect=timeout, read=timeout)
18:16:19
18:16:19 try:
18:16:19 resp = conn.urlopen(
18:16:19 method=request.method,
18:16:19 url=url,
18:16:19 body=request.body,
18:16:19 headers=request.headers,
18:16:19 redirect=False,
18:16:19 assert_same_host=False,
18:16:19 preload_content=False,
18:16:19 decode_content=False,
18:16:19 retries=self.max_retries,
18:16:19 timeout=timeout,
18:16:19 chunked=chunked,
18:16:19 )
18:16:19
18:16:19 except (ProtocolError, OSError) as err:
18:16:19 raise ConnectionError(err, request=request)
18:16:19
18:16:19 except MaxRetryError as e:
18:16:19 if isinstance(e.reason, ConnectTimeoutError):
18:16:19 # TODO: Remove this in 3.0.0: see #2811
18:16:19 if not isinstance(e.reason, NewConnectionError):
18:16:19 raise ConnectTimeout(e, request=request)
18:16:19
18:16:19 if isinstance(e.reason, ResponseError):
18:16:19 raise RetryError(e, request=request)
18:16:19
18:16:19 if isinstance(e.reason, _ProxyError):
18:16:19 raise ProxyError(e, request=request)
18:16:19
18:16:19 if isinstance(e.reason, _SSLError):
18:16:19 # This branch is for urllib3 v1.22 and later.
18:16:19 raise SSLError(e, request=request)
18:16:19
18:16:19 > raise ConnectionError(e, request=request)
18:16:19 E requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=8182): Max retries exceeded with url: /rests/data/ietf-network:networks/network=clli-network?content=config (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused'))
18:16:19
18:16:19 ../.tox/tests121/lib/python3.11/site-packages/requests/adapters.py:700: ConnectionError
18:16:19 ___________ TransportPCETopologyTesting.test_43_getOpenRoadmNetwork ____________
18:16:19
18:16:19 self = 
18:16:19
18:16:19 def _new_conn(self) -> socket.socket:
18:16:19 """Establish a socket connection and set nodelay settings on it.
18:16:19
18:16:19 :return: New socket connection.
18:16:19 """
18:16:19 try:
18:16:19 > sock = connection.create_connection(
18:16:19 (self._dns_host, self.port),
18:16:19 self.timeout,
18:16:19 source_address=self.source_address,
18:16:19 socket_options=self.socket_options,
18:16:19 )
18:16:19
18:16:19 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:199:
18:16:19 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
18:16:19 ../.tox/tests121/lib/python3.11/site-packages/urllib3/util/connection.py:85: in create_connection
18:16:19 raise err
18:16:19 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
18:16:19
18:16:19 address = ('localhost', 8182), timeout = 10, source_address = None
18:16:19 socket_options = [(6, 1, 1)]
18:16:19
18:16:19 def create_connection(
18:16:19 address: tuple[str, int],
18:16:19 timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT,
18:16:19 source_address: tuple[str, int] | None = None,
18:16:19 socket_options: _TYPE_SOCKET_OPTIONS | None = None,
18:16:19 ) -> socket.socket:
18:16:19 """Connect to *address* and return the socket object.
18:16:19
18:16:19 Convenience function. Connect to *address* (a 2-tuple ``(host,
18:16:19 port)``) and return the socket object. Passing the optional
18:16:19 *timeout* parameter will set the timeout on the socket instance
18:16:19 before attempting to connect. If no *timeout* is supplied, the
18:16:19 global default timeout setting returned by :func:`socket.getdefaulttimeout`
18:16:19 is used. If *source_address* is set it must be a tuple of (host, port)
18:16:19 for the socket to bind as a source address before making the connection.
18:16:19 An host of '' or port 0 tells the OS to use the default.
18:16:19 """
18:16:19
18:16:19 host, port = address
18:16:19 if host.startswith("["):
18:16:19 host = host.strip("[]")
18:16:19 err = None
18:16:19
18:16:19 # Using the value from allowed_gai_family() in the context of getaddrinfo lets
18:16:19 # us select whether to work with IPv4 DNS records, IPv6 records, or both.
18:16:19 # The original create_connection function always returns all records.
18:16:19 family = allowed_gai_family()
18:16:19
18:16:19 try:
18:16:19 host.encode("idna")
18:16:19 except UnicodeError:
18:16:19 raise LocationParseError(f"'{host}', label empty or too long") from None
18:16:19
18:16:19 for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM):
18:16:19 af, socktype, proto, canonname, sa = res
18:16:19 sock = None
18:16:19 try:
18:16:19 sock = socket.socket(af, socktype, proto)
18:16:19
18:16:19 # If provided, set socket level options before connecting.
18:16:19 _set_socket_options(sock, socket_options)
18:16:19
18:16:19 if timeout is not _DEFAULT_TIMEOUT:
18:16:19 sock.settimeout(timeout)
18:16:19 if source_address:
18:16:19 sock.bind(source_address)
18:16:19 > sock.connect(sa)
18:16:19 E ConnectionRefusedError: [Errno 111] Connection refused
18:16:19
18:16:19 ../.tox/tests121/lib/python3.11/site-packages/urllib3/util/connection.py:73: ConnectionRefusedError
18:16:19
18:16:19 The above exception was the direct cause of the following exception:
18:16:19
18:16:19 self =
18:16:19 method = 'GET'
18:16:19 url = '/rests/data/ietf-network:networks/network=openroadm-network?content=config'
18:16:19 body = None
18:16:19 headers = {'User-Agent': 'python-requests/2.32.3', 'Accept-Encoding': 'gzip, deflate', 'Accept': 'application/json', 'Connection': 'keep-alive', 'Content-Type': 'application/json', 'Authorization': 'Basic YWRtaW46YWRtaW4='}
18:16:19 retries = Retry(total=0, connect=None, read=False, redirect=None, status=None)
18:16:19 redirect = False, assert_same_host = False
18:16:19 timeout = Timeout(connect=10, read=10, total=None), pool_timeout = None
18:16:19 release_conn = False, chunked = False, body_pos = None, preload_content = False
18:16:19 decode_content = False, response_kw = {}
18:16:19 parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/rests/data/ietf-network:networks/network=openroadm-network', query='content=config', fragment=None)
18:16:19 destination_scheme = None, conn = None, release_this_conn = True
18:16:19 http_tunnel_required = False, err = None, clean_exit = False
18:16:19
18:16:19 def urlopen( # type: ignore[override]
18:16:19 self,
18:16:19 method: str,
18:16:19 url: str,
18:16:19 body: _TYPE_BODY | None = None,
18:16:19 headers: typing.Mapping[str, str] | None = None,
18:16:19 retries: Retry | bool | int | None = None,
18:16:19 redirect: bool = True,
18:16:19 assert_same_host: bool = True,
18:16:19 timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT,
18:16:19 pool_timeout: int | None = None,
18:16:19 release_conn: bool | None = None,
18:16:19 chunked: bool = False,
18:16:19 body_pos: _TYPE_BODY_POSITION | None = None,
18:16:19 preload_content: bool = True,
18:16:19 decode_content: bool = True,
18:16:19 **response_kw: typing.Any,
18:16:19 ) -> BaseHTTPResponse:
18:16:19 """
18:16:19 Get a connection from the pool and perform an HTTP request. This is the
18:16:19 lowest level call for making a request, so you'll need to specify all
18:16:19 the raw details.
18:16:19
18:16:19 .. note::
18:16:19
18:16:19 More commonly, it's appropriate to use a convenience method
18:16:19 such as :meth:`request`.
18:16:19
18:16:19 .. note::
18:16:19
18:16:19 `release_conn` will only behave as expected if
18:16:19 `preload_content=False` because we want to make
18:16:19 `preload_content=False` the default behaviour someday soon without
18:16:19 breaking backwards compatibility.
18:16:19
18:16:19 :param method:
18:16:19 HTTP request method (such as GET, POST, PUT, etc.)
18:16:19
18:16:19 :param url:
18:16:19 The URL to perform the request on.
18:16:19
18:16:19 :param body:
18:16:19 Data to send in the request body, either :class:`str`, :class:`bytes`,
18:16:19 an iterable of :class:`str`/:class:`bytes`, or a file-like object.
18:16:19
18:16:19 :param headers:
18:16:19 Dictionary of custom headers to send, such as User-Agent,
18:16:19 If-None-Match, etc. If None, pool headers are used. If provided,
18:16:19 these headers completely replace any pool-specific headers.
18:16:19
18:16:19 :param retries:
18:16:19 Configure the number of retries to allow before raising a
18:16:19 :class:`~urllib3.exceptions.MaxRetryError` exception.
18:16:19
18:16:19 If ``None`` (default) will retry 3 times, see ``Retry.DEFAULT``. Pass a
18:16:19 :class:`~urllib3.util.retry.Retry` object for fine-grained control
18:16:19 over different types of retries.
18:16:19 Pass an integer number to retry connection errors that many times,
18:16:19 but no other types of errors. Pass zero to never retry.
18:16:19
18:16:19 If ``False``, then retries are disabled and any exception is raised
18:16:19 immediately. Also, instead of raising a MaxRetryError on redirects,
18:16:19 the redirect response will be returned.
18:16:19
18:16:19 :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int.
18:16:19
18:16:19 :param redirect:
18:16:19 If True, automatically handle redirects (status codes 301, 302,
18:16:19 303, 307, 308). Each redirect counts as a retry. Disabling retries
18:16:19 will disable redirect, too.
18:16:19
18:16:19 :param assert_same_host:
18:16:19 If ``True``, will make sure that the host of the pool requests is
18:16:19 consistent else will raise HostChangedError. When ``False``, you can
18:16:19 use the pool on an HTTP proxy and request foreign hosts.
18:16:19
18:16:19 :param timeout:
18:16:19 If specified, overrides the default timeout for this one
18:16:19 request. It may be a float (in seconds) or an instance of
18:16:19 :class:`urllib3.util.Timeout`.
18:16:19
18:16:19 :param pool_timeout:
18:16:19 If set and the pool is set to block=True, then this method will
18:16:19 block for ``pool_timeout`` seconds and raise EmptyPoolError if no
18:16:19 connection is available within the time period.
18:16:19
18:16:19 :param bool preload_content:
18:16:19 If True, the response's body will be preloaded into memory.
18:16:19
18:16:19 :param bool decode_content:
18:16:19 If True, will attempt to decode the body based on the
18:16:19 'content-encoding' header.
18:16:19
18:16:19 :param release_conn:
18:16:19 If False, then the urlopen call will not release the connection
18:16:19 back into the pool once a response is received (but will release if
18:16:19 you read the entire contents of the response such as when
18:16:19 `preload_content=True`). This is useful if you're not preloading
18:16:19 the response's content immediately. You will need to call
18:16:19 ``r.release_conn()`` on the response ``r`` to return the connection
18:16:19 back into the pool. If None, it takes the value of ``preload_content``
18:16:19 which defaults to ``True``.
18:16:19
18:16:19 :param bool chunked:
18:16:19 If True, urllib3 will send the body using chunked transfer
18:16:19 encoding. Otherwise, urllib3 will send the body using the standard
18:16:19 content-length form. Defaults to False.
18:16:19
18:16:19 :param int body_pos:
18:16:19 Position to seek to in file-like body in the event of a retry or
18:16:19 redirect. Typically this won't need to be set because urllib3 will
18:16:19 auto-populate the value when needed.
18:16:19 """
18:16:19 parsed_url = parse_url(url)
18:16:19 destination_scheme = parsed_url.scheme
18:16:19
18:16:19 if headers is None:
18:16:19 headers = self.headers
18:16:19
18:16:19 if not isinstance(retries, Retry):
18:16:19 retries = Retry.from_int(retries, redirect=redirect, default=self.retries)
18:16:19
18:16:19 if release_conn is None:
18:16:19 release_conn = preload_content
18:16:19
18:16:19 # Check host
18:16:19 if assert_same_host and not self.is_same_host(url):
18:16:19 raise HostChangedError(self, url, retries)
18:16:19
18:16:19 # Ensure that the URL we're connecting to is properly encoded
18:16:19 if url.startswith("/"):
18:16:19 url = to_str(_encode_target(url))
18:16:19 else:
18:16:19 url = to_str(parsed_url.url)
18:16:19
18:16:19 conn = None
18:16:19
18:16:19 # Track whether `conn` needs to be released before
18:16:19 # returning/raising/recursing. Update this variable if necessary, and
18:16:19 # leave `release_conn` constant throughout the function. That way, if
18:16:19 # the function recurses, the original value of `release_conn` will be
18:16:19 # passed down into the recursive call, and its value will be respected.
18:16:19 #
18:16:19 # See issue #651 [1] for details.
18:16:19 #
18:16:19 # [1]
18:16:19 release_this_conn = release_conn
18:16:19
18:16:19 http_tunnel_required = connection_requires_http_tunnel(
18:16:19 self.proxy, self.proxy_config, destination_scheme
18:16:19 )
18:16:19
18:16:19 # Merge the proxy headers. Only done when not using HTTP CONNECT. We
18:16:19 # have to copy the headers dict so we can safely change it without those
18:16:19 # changes being reflected in anyone else's copy.
18:16:19 if not http_tunnel_required:
18:16:19 headers = headers.copy() # type: ignore[attr-defined]
18:16:19 headers.update(self.proxy_headers) # type: ignore[union-attr]
18:16:19
18:16:19 # Must keep the exception bound to a separate variable or else Python 3
18:16:19 # complains about UnboundLocalError.
18:16:19 err = None
18:16:19
18:16:19 # Keep track of whether we cleanly exited the except block. This
18:16:19 # ensures we do proper cleanup in finally.
18:16:19 clean_exit = False
18:16:19
18:16:19 # Rewind body position, if needed. Record current position
18:16:19 # for future rewinds in the event of a redirect/retry.
18:16:19 body_pos = set_file_position(body, body_pos)
18:16:19
18:16:19 try:
18:16:19 # Request a connection from the queue.
18:16:19 timeout_obj = self._get_timeout(timeout)
18:16:19 conn = self._get_conn(timeout=pool_timeout)
18:16:19
18:16:19 conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment]
18:16:19
18:16:19 # Is this a closed/new connection that requires CONNECT tunnelling?
18:16:19 if self.proxy is not None and http_tunnel_required and conn.is_closed:
18:16:19 try:
18:16:19 self._prepare_proxy(conn)
18:16:19 except (BaseSSLError, OSError, SocketTimeout) as e:
18:16:19 self._raise_timeout(
18:16:19 err=e, url=self.proxy.url, timeout_value=conn.timeout
18:16:19 )
18:16:19 raise
18:16:19
18:16:19 # If we're going to release the connection in ``finally:``, then
18:16:19 # the response doesn't need to know about the connection. Otherwise
18:16:19 # it will also try to release it and we'll have a double-release
18:16:19 # mess.
18:16:19 response_conn = conn if not release_conn else None
18:16:19
18:16:19 # Make the request on the HTTPConnection object
18:16:19 > response = self._make_request(
18:16:19 conn,
18:16:19 method,
18:16:19 url,
18:16:19 timeout=timeout_obj,
18:16:19 body=body,
18:16:19 headers=headers,
18:16:19 chunked=chunked,
18:16:19 retries=retries,
18:16:19 response_conn=response_conn,
18:16:19 preload_content=preload_content,
18:16:19 decode_content=decode_content,
18:16:19 **response_kw,
18:16:19 )
18:16:19
18:16:19 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connectionpool.py:789:
18:16:19 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
18:16:19 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connectionpool.py:495: in _make_request
18:16:19 conn.request(
18:16:19 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:441: in request
18:16:19 self.endheaders()
18:16:19 /opt/pyenv/versions/3.11.7/lib/python3.11/http/client.py:1289: in endheaders
18:16:19 self._send_output(message_body, encode_chunked=encode_chunked)
18:16:19 /opt/pyenv/versions/3.11.7/lib/python3.11/http/client.py:1048: in _send_output
18:16:19 self.send(msg)
18:16:19 /opt/pyenv/versions/3.11.7/lib/python3.11/http/client.py:986: in send
18:16:19 self.connect()
18:16:19 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:279: in connect
18:16:19 self.sock = self._new_conn()
18:16:19 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
18:16:19
18:16:19 self =
18:16:19
18:16:19 def _new_conn(self) -> socket.socket:
18:16:19 """Establish a socket connection and set nodelay settings on it.
18:16:19
18:16:19 :return: New socket connection.
18:16:19 """
18:16:19 try:
18:16:19 sock = connection.create_connection(
18:16:19 (self._dns_host, self.port),
18:16:19 self.timeout,
18:16:19 source_address=self.source_address,
18:16:19 socket_options=self.socket_options,
18:16:19 )
18:16:19 except socket.gaierror as e:
18:16:19 raise NameResolutionError(self.host, self, e) from e
18:16:19 except SocketTimeout as e:
18:16:19 raise ConnectTimeoutError(
18:16:19 self,
18:16:19 f"Connection to {self.host} timed out. (connect timeout={self.timeout})",
18:16:19 ) from e
18:16:19
18:16:19 except OSError as e:
18:16:19 > raise NewConnectionError(
18:16:19 self, f"Failed to establish a new connection: {e}"
18:16:19 ) from e
18:16:19 E urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused
18:16:19
18:16:19 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:214: NewConnectionError
18:16:19
18:16:19 The above exception was the direct cause of the following exception:
18:16:19
18:16:19 self =
18:16:19 request = , stream = False
18:16:19 timeout = Timeout(connect=10, read=10, total=None), verify = True, cert = None
18:16:19 proxies = OrderedDict()
18:16:19
18:16:19 def send(
18:16:19 self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None
18:16:19 ):
18:16:19 """Sends PreparedRequest object. Returns Response object.
18:16:19
18:16:19 :param request: The :class:`PreparedRequest ` being sent.
18:16:19 :param stream: (optional) Whether to stream the request content.
18:16:19 :param timeout: (optional) How long to wait for the server to send
18:16:19 data before giving up, as a float, or a :ref:`(connect timeout,
18:16:19 read timeout) ` tuple.
18:16:19 :type timeout: float or tuple or urllib3 Timeout object
18:16:19 :param verify: (optional) Either a boolean, in which case it controls whether
18:16:19 we verify the server's TLS certificate, or a string, in which case it
18:16:19 must be a path to a CA bundle to use
18:16:19 :param cert: (optional) Any user-provided SSL certificate to be trusted.
18:16:19 :param proxies: (optional) The proxies dictionary to apply to the request.
18:16:19 :rtype: requests.Response
18:16:19 """
18:16:19
18:16:19 try:
18:16:19 conn = self.get_connection_with_tls_context(
18:16:19 request, verify, proxies=proxies, cert=cert
18:16:19 )
18:16:19 except LocationValueError as e:
18:16:19 raise InvalidURL(e, request=request)
18:16:19
18:16:19 self.cert_verify(conn, request.url, verify, cert)
18:16:19 url = self.request_url(request, proxies)
18:16:19 self.add_headers(
18:16:19 request,
18:16:19 stream=stream,
18:16:19 timeout=timeout,
18:16:19 verify=verify,
18:16:19 cert=cert,
18:16:19 proxies=proxies,
18:16:19 )
18:16:19
18:16:19 chunked = not (request.body is None or "Content-Length" in request.headers)
18:16:19
18:16:19 if isinstance(timeout, tuple):
18:16:19 try:
18:16:19 connect, read = timeout
18:16:19 timeout = TimeoutSauce(connect=connect, read=read)
18:16:19 except ValueError:
18:16:19 raise ValueError(
18:16:19 f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, "
18:16:19 f"or a single float to set both timeouts to the same value."
18:16:19 )
18:16:19 elif isinstance(timeout, TimeoutSauce):
18:16:19 pass
18:16:19 else:
18:16:19 timeout = TimeoutSauce(connect=timeout, read=timeout)
18:16:19
18:16:19 try:
18:16:19 > resp = conn.urlopen(
18:16:19 method=request.method,
18:16:19 url=url,
18:16:19 body=request.body,
18:16:19 headers=request.headers,
18:16:19 redirect=False,
18:16:19 assert_same_host=False,
18:16:19 preload_content=False,
18:16:19 decode_content=False,
18:16:19 retries=self.max_retries,
18:16:19 timeout=timeout,
18:16:19 chunked=chunked,
18:16:19 )
18:16:19
18:16:19 ../.tox/tests121/lib/python3.11/site-packages/requests/adapters.py:667:
18:16:19 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
18:16:19 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connectionpool.py:843: in urlopen
18:16:19 retries = retries.increment(
18:16:19 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
18:16:19
18:16:19 self = Retry(total=0, connect=None, read=False, redirect=None, status=None)
18:16:19 method = 'GET'
18:16:19 url = '/rests/data/ietf-network:networks/network=openroadm-network?content=config'
18:16:19 response = None
18:16:19 error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')
18:16:19 _pool =
18:16:19 _stacktrace =
18:16:19
18:16:19 def increment(
18:16:19 self,
18:16:19 method: str | None = None,
18:16:19 url: str | None = None,
18:16:19 response: BaseHTTPResponse | None = None,
18:16:19 error: Exception | None = None,
18:16:19 _pool: ConnectionPool | None = None,
18:16:19 _stacktrace: TracebackType | None = None,
18:16:19 ) -> Self:
18:16:19 """Return a new Retry object with incremented retry counters.
18:16:19
18:16:19 :param response: A response object, or None, if the server did not
18:16:19 return a response.
18:16:19 :type response: :class:`~urllib3.response.BaseHTTPResponse`
18:16:19 :param Exception error: An error encountered during the request, or
18:16:19 None if the response was received successfully.
18:16:19
18:16:19 :return: A new ``Retry`` object.
18:16:19 """
18:16:19 if self.total is False and error:
18:16:19 # Disabled, indicate to re-raise the error.
18:16:19 raise reraise(type(error), error, _stacktrace)
18:16:19
18:16:19 total = self.total
18:16:19 if total is not None:
18:16:19 total -= 1
18:16:19
18:16:19 connect = self.connect
18:16:19 read = self.read
18:16:19 redirect = self.redirect
18:16:19 status_count = self.status
18:16:19 other = self.other
18:16:19 cause = "unknown"
18:16:19 status = None
18:16:19 redirect_location = None
18:16:19
18:16:19 if error and self._is_connection_error(error):
18:16:19 # Connect retry?
18:16:19 if connect is False:
18:16:19 raise reraise(type(error), error, _stacktrace)
18:16:19 elif connect is not None:
18:16:19 connect -= 1
18:16:19
18:16:19 elif error and self._is_read_error(error):
18:16:19 # Read retry?
18:16:19 if read is False or method is None or not self._is_method_retryable(method):
18:16:19 raise reraise(type(error), error, _stacktrace)
18:16:19 elif read is not None:
18:16:19 read -= 1
18:16:19
18:16:19 elif error:
18:16:19 # Other retry?
18:16:19 if other is not None:
18:16:19 other -= 1
18:16:19
18:16:19 elif response and response.get_redirect_location():
18:16:19 # Redirect retry?
18:16:19 if redirect is not None:
18:16:19 redirect -= 1
18:16:19 cause = "too many redirects"
18:16:19 response_redirect_location = response.get_redirect_location()
18:16:19 if response_redirect_location:
18:16:19 redirect_location = response_redirect_location
18:16:19 status = response.status
18:16:19
18:16:19 else:
18:16:19 # Incrementing because of a server error like a 500 in
18:16:19 # status_forcelist and the given method is in the allowed_methods
18:16:19 cause = ResponseError.GENERIC_ERROR
18:16:19 if response and response.status:
18:16:19 if status_count is not None:
18:16:19 status_count -= 1
18:16:19 cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status)
18:16:19 status = response.status
18:16:19
18:16:19 history = self.history + (
18:16:19 RequestHistory(method, url, error, status, redirect_location),
18:16:19 )
18:16:19
18:16:19 new_retry = self.new(
18:16:19 total=total,
18:16:19 connect=connect,
18:16:19 read=read,
18:16:19 redirect=redirect,
18:16:19 status=status_count,
18:16:19 other=other,
18:16:19 history=history,
18:16:19 )
18:16:19
18:16:19 if new_retry.is_exhausted():
18:16:19 reason = error or ResponseError(cause)
18:16:19 > raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type]
18:16:19 E urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=8182): Max retries exceeded with url: /rests/data/ietf-network:networks/network=openroadm-network?content=config (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused'))
18:16:19
18:16:19 ../.tox/tests121/lib/python3.11/site-packages/urllib3/util/retry.py:519: MaxRetryError
18:16:19
18:16:19 During handling of the above exception, another exception occurred:
18:16:19
18:16:19 self =
18:16:19
18:16:19 def test_43_getOpenRoadmNetwork(self):
18:16:19 > response = test_utils.get_ietf_network_request('openroadm-network', 'config')
18:16:19
18:16:19 transportpce_tests/1.2.1/test03_topology.py:794:
18:16:19 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
18:16:19 transportpce_tests/common/test_utils.py:495: in get_ietf_network_request
18:16:19 response = get_request(url[RESTCONF_VERSION].format(*format_args))
18:16:19 transportpce_tests/common/test_utils.py:116: in get_request
18:16:19 return requests.request(
18:16:19 ../.tox/tests121/lib/python3.11/site-packages/requests/api.py:59: in request
18:16:19 return session.request(method=method, url=url, **kwargs)
18:16:19 ../.tox/tests121/lib/python3.11/site-packages/requests/sessions.py:589: in request
18:16:19 resp = self.send(prep, **send_kwargs)
18:16:19 ../.tox/tests121/lib/python3.11/site-packages/requests/sessions.py:703: in send
18:16:19 r = adapter.send(request, **kwargs)
18:16:19 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
18:16:19
18:16:19 self =
18:16:19 request = , stream = False
18:16:19 timeout = Timeout(connect=10, read=10, total=None), verify = True, cert = None
18:16:19 proxies = OrderedDict()
18:16:19
18:16:19 def send(
18:16:19 self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None
18:16:19 ):
18:16:19 """Sends PreparedRequest object. Returns Response object.
18:16:19
18:16:19 :param request: The :class:`PreparedRequest ` being sent.
18:16:19 :param stream: (optional) Whether to stream the request content.
18:16:19 :param timeout: (optional) How long to wait for the server to send
18:16:19 data before giving up, as a float, or a :ref:`(connect timeout,
18:16:19 read timeout) ` tuple.
18:16:19 :type timeout: float or tuple or urllib3 Timeout object
18:16:19 :param verify: (optional) Either a boolean, in which case it controls whether
18:16:19 we verify the server's TLS certificate, or a string, in which case it
18:16:19 must be a path to a CA bundle to use
18:16:19 :param cert: (optional) Any user-provided SSL certificate to be trusted.
18:16:19 :param proxies: (optional) The proxies dictionary to apply to the request.
18:16:19 :rtype: requests.Response
18:16:19 """
18:16:19
18:16:19 try:
18:16:19 conn = self.get_connection_with_tls_context(
18:16:19 request, verify, proxies=proxies, cert=cert
18:16:19 )
18:16:19 except LocationValueError as e:
18:16:19 raise InvalidURL(e, request=request)
18:16:19
18:16:19 self.cert_verify(conn, request.url, verify, cert)
18:16:19 url = self.request_url(request, proxies)
18:16:19 self.add_headers(
18:16:19 request,
18:16:19 stream=stream,
18:16:19 timeout=timeout,
18:16:19 verify=verify,
18:16:19 cert=cert,
18:16:19 proxies=proxies,
18:16:19 )
18:16:19
18:16:19 chunked = not (request.body is None or "Content-Length" in request.headers)
18:16:19
18:16:19 if isinstance(timeout, tuple):
18:16:19 try:
18:16:19 connect, read = timeout
18:16:19 timeout = TimeoutSauce(connect=connect, read=read)
18:16:19 except ValueError:
18:16:19 raise ValueError(
18:16:19 f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, "
18:16:19 f"or a single float to set both timeouts to the same value."
18:16:19 )
18:16:19 elif isinstance(timeout, TimeoutSauce):
18:16:19 pass
18:16:19 else:
18:16:19 timeout = TimeoutSauce(connect=timeout, read=timeout)
18:16:19
18:16:19 try:
18:16:19 resp = conn.urlopen(
18:16:19 method=request.method,
18:16:19 url=url,
18:16:19 body=request.body,
18:16:19 headers=request.headers,
18:16:19 redirect=False,
18:16:19 assert_same_host=False,
18:16:19 preload_content=False,
18:16:19 decode_content=False,
18:16:19 retries=self.max_retries,
18:16:19 timeout=timeout,
18:16:19 chunked=chunked,
18:16:19 )
18:16:19
18:16:19 except (ProtocolError, OSError) as err:
18:16:19 raise ConnectionError(err, request=request)
18:16:19
18:16:19 except MaxRetryError as e:
18:16:19 if isinstance(e.reason, ConnectTimeoutError):
18:16:19 # TODO: Remove this in 3.0.0: see #2811
18:16:19 if not isinstance(e.reason, NewConnectionError):
18:16:19 raise ConnectTimeout(e, request=request)
18:16:19
18:16:19 if isinstance(e.reason, ResponseError):
18:16:19 raise RetryError(e, request=request)
18:16:19
18:16:19 if isinstance(e.reason, _ProxyError):
18:16:19 raise ProxyError(e, request=request)
18:16:19
18:16:19 if isinstance(e.reason, _SSLError):
18:16:19 # This branch is for urllib3 v1.22 and later.
18:16:19 raise SSLError(e, request=request)
18:16:19
18:16:19 > raise ConnectionError(e, request=request)
18:16:19 E requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=8182): Max retries exceeded with url: /rests/data/ietf-network:networks/network=openroadm-network?content=config (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused'))
18:16:19
18:16:19 ../.tox/tests121/lib/python3.11/site-packages/requests/adapters.py:700: ConnectionError
18:16:19 ____ TransportPCETopologyTesting.test_44_check_roadm2roadm_link_persistence ____
18:16:19
18:16:19 self =
18:16:19
18:16:19 def _new_conn(self) -> socket.socket:
18:16:19 """Establish a socket connection and set nodelay settings on it.
18:16:19
18:16:19 :return: New socket connection.
18:16:19 """
18:16:19 try:
18:16:19 > sock = connection.create_connection(
18:16:19 (self._dns_host, self.port),
18:16:19 self.timeout,
18:16:19 source_address=self.source_address,
18:16:19 socket_options=self.socket_options,
18:16:19 )
18:16:19
18:16:19 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:199:
18:16:19 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
18:16:19 ../.tox/tests121/lib/python3.11/site-packages/urllib3/util/connection.py:85: in create_connection
18:16:19 raise err
18:16:19 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
18:16:19
18:16:19 address = ('localhost', 8182), timeout = 10, source_address = None
18:16:19 socket_options = [(6, 1, 1)]
18:16:19
18:16:19 def create_connection(
18:16:19 address: tuple[str, int],
18:16:19 timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT,
18:16:19 source_address: tuple[str, int] | None = None,
18:16:19 socket_options: _TYPE_SOCKET_OPTIONS | None = None,
18:16:19 ) -> socket.socket:
18:16:19 """Connect to *address* and return the socket object.
18:16:19
18:16:19 Convenience function. Connect to *address* (a 2-tuple ``(host,
18:16:19 port)``) and return the socket object. Passing the optional
18:16:19 *timeout* parameter will set the timeout on the socket instance
18:16:19 before attempting to connect. If no *timeout* is supplied, the
18:16:19 global default timeout setting returned by :func:`socket.getdefaulttimeout`
18:16:19 is used. If *source_address* is set it must be a tuple of (host, port)
18:16:19 for the socket to bind as a source address before making the connection.
18:16:19 An host of '' or port 0 tells the OS to use the default.
18:16:19 """
18:16:19
18:16:19 host, port = address
18:16:19 if host.startswith("["):
18:16:19 host = host.strip("[]")
18:16:19 err = None
18:16:19
18:16:19 # Using the value from allowed_gai_family() in the context of getaddrinfo lets
18:16:19 # us select whether to work with IPv4 DNS records, IPv6 records, or both.
18:16:19 # The original create_connection function always returns all records.
18:16:19 family = allowed_gai_family()
18:16:19
18:16:19 try:
18:16:19 host.encode("idna")
18:16:19 except UnicodeError:
18:16:19 raise LocationParseError(f"'{host}', label empty or too long") from None
18:16:19
18:16:19 for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM):
18:16:19 af, socktype, proto, canonname, sa = res
18:16:19 sock = None
18:16:19 try:
18:16:19 sock = socket.socket(af, socktype, proto)
18:16:19
18:16:19 # If provided, set socket level options before connecting.
18:16:19 _set_socket_options(sock, socket_options)
18:16:19
18:16:19 if timeout is not _DEFAULT_TIMEOUT:
18:16:19 sock.settimeout(timeout)
18:16:19 if source_address:
18:16:19 sock.bind(source_address)
18:16:19 > sock.connect(sa)
18:16:19 E ConnectionRefusedError: [Errno 111] Connection refused
18:16:19
18:16:19 ../.tox/tests121/lib/python3.11/site-packages/urllib3/util/connection.py:73: ConnectionRefusedError
18:16:19
18:16:19 The above exception was the direct cause of the following exception:
18:16:19
18:16:19 self =
18:16:19 method = 'GET'
18:16:19 url = '/rests/data/ietf-network:networks/network=openroadm-topology?content=config'
18:16:19 body = None
18:16:19 headers = {'User-Agent': 'python-requests/2.32.3', 'Accept-Encoding': 'gzip, deflate', 'Accept': 'application/json', 'Connection': 'keep-alive', 'Content-Type': 'application/json', 'Authorization': 'Basic YWRtaW46YWRtaW4='}
18:16:19 retries = Retry(total=0, connect=None, read=False, redirect=None, status=None)
18:16:19 redirect = False, assert_same_host = False
18:16:19 timeout = Timeout(connect=10, read=10, total=None), pool_timeout = None
18:16:19 release_conn = False, chunked = False, body_pos = None, preload_content = False
18:16:19 decode_content = False, response_kw = {}
18:16:19 parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/rests/data/ietf-network:networks/network=openroadm-topology', query='content=config', fragment=None)
18:16:19 destination_scheme = None, conn = None, release_this_conn = True
18:16:19 http_tunnel_required = False, err = None, clean_exit = False
18:16:19
18:16:19 def urlopen( # type: ignore[override]
18:16:19 self,
18:16:19 method: str,
18:16:19 url: str,
18:16:19 body: _TYPE_BODY | None = None,
18:16:19 headers: typing.Mapping[str, str] | None = None,
18:16:19 retries: Retry | bool | int | None = None,
18:16:19 redirect: bool = True,
18:16:19 assert_same_host: bool = True,
18:16:19 timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT,
18:16:19 pool_timeout: int | None = None,
18:16:19 release_conn: bool | None = None,
18:16:19 chunked: bool = False,
18:16:19 body_pos: _TYPE_BODY_POSITION | None = None,
18:16:19 preload_content: bool = True,
18:16:19 decode_content: bool = True,
18:16:19 **response_kw: typing.Any,
18:16:19 ) -> BaseHTTPResponse:
18:16:19 """
18:16:19 Get a connection from the pool and perform an HTTP request. This is the
18:16:19 lowest level call for making a request, so you'll need to specify all
18:16:19 the raw details.
18:16:19
18:16:19 .. note::
18:16:19
18:16:19 More commonly, it's appropriate to use a convenience method
18:16:19 such as :meth:`request`.
18:16:19
18:16:19 .. note::
18:16:19
18:16:19 `release_conn` will only behave as expected if
18:16:19 `preload_content=False` because we want to make
18:16:19 `preload_content=False` the default behaviour someday soon without
18:16:19 breaking backwards compatibility.
18:16:19
18:16:19 :param method:
18:16:19 HTTP request method (such as GET, POST, PUT, etc.)
18:16:19
18:16:19 :param url:
18:16:19 The URL to perform the request on.
18:16:19 18:16:19 :param body: 18:16:19 Data to send in the request body, either :class:`str`, :class:`bytes`, 18:16:19 an iterable of :class:`str`/:class:`bytes`, or a file-like object. 18:16:19 18:16:19 :param headers: 18:16:19 Dictionary of custom headers to send, such as User-Agent, 18:16:19 If-None-Match, etc. If None, pool headers are used. If provided, 18:16:19 these headers completely replace any pool-specific headers. 18:16:19 18:16:19 :param retries: 18:16:19 Configure the number of retries to allow before raising a 18:16:19 :class:`~urllib3.exceptions.MaxRetryError` exception. 18:16:19 18:16:19 If ``None`` (default) will retry 3 times, see ``Retry.DEFAULT``. Pass a 18:16:19 :class:`~urllib3.util.retry.Retry` object for fine-grained control 18:16:19 over different types of retries. 18:16:19 Pass an integer number to retry connection errors that many times, 18:16:19 but no other types of errors. Pass zero to never retry. 18:16:19 18:16:19 If ``False``, then retries are disabled and any exception is raised 18:16:19 immediately. Also, instead of raising a MaxRetryError on redirects, 18:16:19 the redirect response will be returned. 18:16:19 18:16:19 :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. 18:16:19 18:16:19 :param redirect: 18:16:19 If True, automatically handle redirects (status codes 301, 302, 18:16:19 303, 307, 308). Each redirect counts as a retry. Disabling retries 18:16:19 will disable redirect, too. 18:16:19 18:16:19 :param assert_same_host: 18:16:19 If ``True``, will make sure that the host of the pool requests is 18:16:19 consistent else will raise HostChangedError. When ``False``, you can 18:16:19 use the pool on an HTTP proxy and request foreign hosts. 18:16:19 18:16:19 :param timeout: 18:16:19 If specified, overrides the default timeout for this one 18:16:19 request. It may be a float (in seconds) or an instance of 18:16:19 :class:`urllib3.util.Timeout`. 
18:16:19 18:16:19 :param pool_timeout: 18:16:19 If set and the pool is set to block=True, then this method will 18:16:19 block for ``pool_timeout`` seconds and raise EmptyPoolError if no 18:16:19 connection is available within the time period. 18:16:19 18:16:19 :param bool preload_content: 18:16:19 If True, the response's body will be preloaded into memory. 18:16:19 18:16:19 :param bool decode_content: 18:16:19 If True, will attempt to decode the body based on the 18:16:19 'content-encoding' header. 18:16:19 18:16:19 :param release_conn: 18:16:19 If False, then the urlopen call will not release the connection 18:16:19 back into the pool once a response is received (but will release if 18:16:19 you read the entire contents of the response such as when 18:16:19 `preload_content=True`). This is useful if you're not preloading 18:16:19 the response's content immediately. You will need to call 18:16:19 ``r.release_conn()`` on the response ``r`` to return the connection 18:16:19 back into the pool. If None, it takes the value of ``preload_content`` 18:16:19 which defaults to ``True``. 18:16:19 18:16:19 :param bool chunked: 18:16:19 If True, urllib3 will send the body using chunked transfer 18:16:19 encoding. Otherwise, urllib3 will send the body using the standard 18:16:19 content-length form. Defaults to False. 18:16:19 18:16:19 :param int body_pos: 18:16:19 Position to seek to in file-like body in the event of a retry or 18:16:19 redirect. Typically this won't need to be set because urllib3 will 18:16:19 auto-populate the value when needed. 
18:16:19 """ 18:16:19 parsed_url = parse_url(url) 18:16:19 destination_scheme = parsed_url.scheme 18:16:19 18:16:19 if headers is None: 18:16:19 headers = self.headers 18:16:19 18:16:19 if not isinstance(retries, Retry): 18:16:19 retries = Retry.from_int(retries, redirect=redirect, default=self.retries) 18:16:19 18:16:19 if release_conn is None: 18:16:19 release_conn = preload_content 18:16:19 18:16:19 # Check host 18:16:19 if assert_same_host and not self.is_same_host(url): 18:16:19 raise HostChangedError(self, url, retries) 18:16:19 18:16:19 # Ensure that the URL we're connecting to is properly encoded 18:16:19 if url.startswith("/"): 18:16:19 url = to_str(_encode_target(url)) 18:16:19 else: 18:16:19 url = to_str(parsed_url.url) 18:16:19 18:16:19 conn = None 18:16:19 18:16:19 # Track whether `conn` needs to be released before 18:16:19 # returning/raising/recursing. Update this variable if necessary, and 18:16:19 # leave `release_conn` constant throughout the function. That way, if 18:16:19 # the function recurses, the original value of `release_conn` will be 18:16:19 # passed down into the recursive call, and its value will be respected. 18:16:19 # 18:16:19 # See issue #651 [1] for details. 18:16:19 # 18:16:19 # [1] 18:16:19 release_this_conn = release_conn 18:16:19 18:16:19 http_tunnel_required = connection_requires_http_tunnel( 18:16:19 self.proxy, self.proxy_config, destination_scheme 18:16:19 ) 18:16:19 18:16:19 # Merge the proxy headers. Only done when not using HTTP CONNECT. We 18:16:19 # have to copy the headers dict so we can safely change it without those 18:16:19 # changes being reflected in anyone else's copy. 18:16:19 if not http_tunnel_required: 18:16:19 headers = headers.copy() # type: ignore[attr-defined] 18:16:19 headers.update(self.proxy_headers) # type: ignore[union-attr] 18:16:19 18:16:19 # Must keep the exception bound to a separate variable or else Python 3 18:16:19 # complains about UnboundLocalError. 
18:16:19 err = None 18:16:19 18:16:19 # Keep track of whether we cleanly exited the except block. This 18:16:19 # ensures we do proper cleanup in finally. 18:16:19 clean_exit = False 18:16:19 18:16:19 # Rewind body position, if needed. Record current position 18:16:19 # for future rewinds in the event of a redirect/retry. 18:16:19 body_pos = set_file_position(body, body_pos) 18:16:19 18:16:19 try: 18:16:19 # Request a connection from the queue. 18:16:19 timeout_obj = self._get_timeout(timeout) 18:16:19 conn = self._get_conn(timeout=pool_timeout) 18:16:19 18:16:19 conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] 18:16:19 18:16:19 # Is this a closed/new connection that requires CONNECT tunnelling? 18:16:19 if self.proxy is not None and http_tunnel_required and conn.is_closed: 18:16:19 try: 18:16:19 self._prepare_proxy(conn) 18:16:19 except (BaseSSLError, OSError, SocketTimeout) as e: 18:16:19 self._raise_timeout( 18:16:19 err=e, url=self.proxy.url, timeout_value=conn.timeout 18:16:19 ) 18:16:19 raise 18:16:19 18:16:19 # If we're going to release the connection in ``finally:``, then 18:16:19 # the response doesn't need to know about the connection. Otherwise 18:16:19 # it will also try to release it and we'll have a double-release 18:16:19 # mess. 
18:16:19             response_conn = conn if not release_conn else None
18:16:19 
18:16:19             # Make the request on the HTTPConnection object
18:16:19 >           response = self._make_request(
18:16:19                 conn,
18:16:19                 method,
18:16:19                 url,
18:16:19                 timeout=timeout_obj,
18:16:19                 body=body,
18:16:19                 headers=headers,
18:16:19                 chunked=chunked,
18:16:19                 retries=retries,
18:16:19                 response_conn=response_conn,
18:16:19                 preload_content=preload_content,
18:16:19                 decode_content=decode_content,
18:16:19                 **response_kw,
18:16:19             )
18:16:19 
18:16:19 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connectionpool.py:789: 
18:16:19 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
18:16:19 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connectionpool.py:495: in _make_request
18:16:19     conn.request(
18:16:19 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:441: in request
18:16:19     self.endheaders()
18:16:19 /opt/pyenv/versions/3.11.7/lib/python3.11/http/client.py:1289: in endheaders
18:16:19     self._send_output(message_body, encode_chunked=encode_chunked)
18:16:19 /opt/pyenv/versions/3.11.7/lib/python3.11/http/client.py:1048: in _send_output
18:16:19     self.send(msg)
18:16:19 /opt/pyenv/versions/3.11.7/lib/python3.11/http/client.py:986: in send
18:16:19     self.connect()
18:16:19 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:279: in connect
18:16:19     self.sock = self._new_conn()
18:16:19 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
18:16:19 
18:16:19 self = 
18:16:19 
18:16:19     def _new_conn(self) -> socket.socket:
18:16:19         """Establish a socket connection and set nodelay settings on it.
18:16:19 
18:16:19         :return: New socket connection.
18:16:19         """
18:16:19         try:
18:16:19             sock = connection.create_connection(
18:16:19                 (self._dns_host, self.port),
18:16:19                 self.timeout,
18:16:19                 source_address=self.source_address,
18:16:19                 socket_options=self.socket_options,
18:16:19             )
18:16:19         except socket.gaierror as e:
18:16:19             raise NameResolutionError(self.host, self, e) from e
18:16:19         except SocketTimeout as e:
18:16:19             raise ConnectTimeoutError(
18:16:19                 self,
18:16:19                 f"Connection to {self.host} timed out. (connect timeout={self.timeout})",
18:16:19             ) from e
18:16:19 
18:16:19         except OSError as e:
18:16:19 >           raise NewConnectionError(
18:16:19                 self, f"Failed to establish a new connection: {e}"
18:16:19             ) from e
18:16:19 E           urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused
18:16:19 
18:16:19 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:214: NewConnectionError
18:16:19 
18:16:19 The above exception was the direct cause of the following exception:
18:16:19 
18:16:19 self = 
18:16:19 request = , stream = False
18:16:19 timeout = Timeout(connect=10, read=10, total=None), verify = True, cert = None
18:16:19 proxies = OrderedDict()
18:16:19 
18:16:19     def send(
18:16:19         self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None
18:16:19     ):
18:16:19         """Sends PreparedRequest object. Returns Response object.
18:16:19 
18:16:19         :param request: The :class:`PreparedRequest ` being sent.
18:16:19         :param stream: (optional) Whether to stream the request content.
18:16:19         :param timeout: (optional) How long to wait for the server to send
18:16:19             data before giving up, as a float, or a :ref:`(connect timeout,
18:16:19             read timeout) ` tuple.
18:16:19 :type timeout: float or tuple or urllib3 Timeout object 18:16:19 :param verify: (optional) Either a boolean, in which case it controls whether 18:16:19 we verify the server's TLS certificate, or a string, in which case it 18:16:19 must be a path to a CA bundle to use 18:16:19 :param cert: (optional) Any user-provided SSL certificate to be trusted. 18:16:19 :param proxies: (optional) The proxies dictionary to apply to the request. 18:16:19 :rtype: requests.Response 18:16:19 """ 18:16:19 18:16:19 try: 18:16:19 conn = self.get_connection_with_tls_context( 18:16:19 request, verify, proxies=proxies, cert=cert 18:16:19 ) 18:16:19 except LocationValueError as e: 18:16:19 raise InvalidURL(e, request=request) 18:16:19 18:16:19 self.cert_verify(conn, request.url, verify, cert) 18:16:19 url = self.request_url(request, proxies) 18:16:19 self.add_headers( 18:16:19 request, 18:16:19 stream=stream, 18:16:19 timeout=timeout, 18:16:19 verify=verify, 18:16:19 cert=cert, 18:16:19 proxies=proxies, 18:16:19 ) 18:16:19 18:16:19 chunked = not (request.body is None or "Content-Length" in request.headers) 18:16:19 18:16:19 if isinstance(timeout, tuple): 18:16:19 try: 18:16:19 connect, read = timeout 18:16:19 timeout = TimeoutSauce(connect=connect, read=read) 18:16:19 except ValueError: 18:16:19 raise ValueError( 18:16:19 f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 18:16:19 f"or a single float to set both timeouts to the same value." 
18:16:19 ) 18:16:19 elif isinstance(timeout, TimeoutSauce): 18:16:19 pass 18:16:19 else: 18:16:19 timeout = TimeoutSauce(connect=timeout, read=timeout) 18:16:19 18:16:19 try: 18:16:19 > resp = conn.urlopen( 18:16:19 method=request.method, 18:16:19 url=url, 18:16:19 body=request.body, 18:16:19 headers=request.headers, 18:16:19 redirect=False, 18:16:19 assert_same_host=False, 18:16:19 preload_content=False, 18:16:19 decode_content=False, 18:16:19 retries=self.max_retries, 18:16:19 timeout=timeout, 18:16:19 chunked=chunked, 18:16:19 ) 18:16:19 18:16:19 ../.tox/tests121/lib/python3.11/site-packages/requests/adapters.py:667: 18:16:19 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 18:16:19 ../.tox/tests121/lib/python3.11/site-packages/urllib3/connectionpool.py:843: in urlopen 18:16:19 retries = retries.increment( 18:16:19 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 18:16:19 18:16:19 self = Retry(total=0, connect=None, read=False, redirect=None, status=None) 18:16:19 method = 'GET' 18:16:19 url = '/rests/data/ietf-network:networks/network=openroadm-topology?content=config' 18:16:19 response = None 18:16:19 error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused') 18:16:19 _pool = 18:16:19 _stacktrace = 18:16:19 18:16:19 def increment( 18:16:19 self, 18:16:19 method: str | None = None, 18:16:19 url: str | None = None, 18:16:19 response: BaseHTTPResponse | None = None, 18:16:19 error: Exception | None = None, 18:16:19 _pool: ConnectionPool | None = None, 18:16:19 _stacktrace: TracebackType | None = None, 18:16:19 ) -> Self: 18:16:19 """Return a new Retry object with incremented retry counters. 18:16:19 18:16:19 :param response: A response object, or None, if the server did not 18:16:19 return a response. 
18:16:19 :type response: :class:`~urllib3.response.BaseHTTPResponse` 18:16:19 :param Exception error: An error encountered during the request, or 18:16:19 None if the response was received successfully. 18:16:19 18:16:19 :return: A new ``Retry`` object. 18:16:19 """ 18:16:19 if self.total is False and error: 18:16:19 # Disabled, indicate to re-raise the error. 18:16:19 raise reraise(type(error), error, _stacktrace) 18:16:19 18:16:19 total = self.total 18:16:19 if total is not None: 18:16:19 total -= 1 18:16:19 18:16:19 connect = self.connect 18:16:19 read = self.read 18:16:19 redirect = self.redirect 18:16:19 status_count = self.status 18:16:19 other = self.other 18:16:19 cause = "unknown" 18:16:19 status = None 18:16:19 redirect_location = None 18:16:19 18:16:19 if error and self._is_connection_error(error): 18:16:19 # Connect retry? 18:16:19 if connect is False: 18:16:19 raise reraise(type(error), error, _stacktrace) 18:16:19 elif connect is not None: 18:16:19 connect -= 1 18:16:19 18:16:19 elif error and self._is_read_error(error): 18:16:19 # Read retry? 18:16:19 if read is False or method is None or not self._is_method_retryable(method): 18:16:19 raise reraise(type(error), error, _stacktrace) 18:16:19 elif read is not None: 18:16:19 read -= 1 18:16:19 18:16:19 elif error: 18:16:19 # Other retry? 18:16:19 if other is not None: 18:16:19 other -= 1 18:16:19 18:16:19 elif response and response.get_redirect_location(): 18:16:19 # Redirect retry? 
18:16:19             if redirect is not None:
18:16:19                 redirect -= 1
18:16:19             cause = "too many redirects"
18:16:19             response_redirect_location = response.get_redirect_location()
18:16:19             if response_redirect_location:
18:16:19                 redirect_location = response_redirect_location
18:16:19             status = response.status
18:16:19 
18:16:19         else:
18:16:19             # Incrementing because of a server error like a 500 in
18:16:19             # status_forcelist and the given method is in the allowed_methods
18:16:19             cause = ResponseError.GENERIC_ERROR
18:16:19             if response and response.status:
18:16:19                 if status_count is not None:
18:16:19                     status_count -= 1
18:16:19                 cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status)
18:16:19                 status = response.status
18:16:19 
18:16:19         history = self.history + (
18:16:19             RequestHistory(method, url, error, status, redirect_location),
18:16:19         )
18:16:19 
18:16:19         new_retry = self.new(
18:16:19             total=total,
18:16:19             connect=connect,
18:16:19             read=read,
18:16:19             redirect=redirect,
18:16:19             status=status_count,
18:16:19             other=other,
18:16:19             history=history,
18:16:19         )
18:16:19 
18:16:19         if new_retry.is_exhausted():
18:16:19             reason = error or ResponseError(cause)
18:16:19 >           raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type]
18:16:19 E           urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=8182): Max retries exceeded with url: /rests/data/ietf-network:networks/network=openroadm-topology?content=config (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused'))
18:16:19 
18:16:19 ../.tox/tests121/lib/python3.11/site-packages/urllib3/util/retry.py:519: MaxRetryError
18:16:19 
18:16:19 During handling of the above exception, another exception occurred:
18:16:19 
18:16:19 self = 
18:16:19 
18:16:19     def test_44_check_roadm2roadm_link_persistence(self):
18:16:19 >       response = test_utils.get_ietf_network_request('openroadm-topology', 'config')
18:16:19 
18:16:19 
transportpce_tests/1.2.1/test03_topology.py:800: 18:16:19 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 18:16:19 transportpce_tests/common/test_utils.py:495: in get_ietf_network_request 18:16:19 response = get_request(url[RESTCONF_VERSION].format(*format_args)) 18:16:19 transportpce_tests/common/test_utils.py:116: in get_request 18:16:19 return requests.request( 18:16:19 ../.tox/tests121/lib/python3.11/site-packages/requests/api.py:59: in request 18:16:19 return session.request(method=method, url=url, **kwargs) 18:16:19 ../.tox/tests121/lib/python3.11/site-packages/requests/sessions.py:589: in request 18:16:19 resp = self.send(prep, **send_kwargs) 18:16:19 ../.tox/tests121/lib/python3.11/site-packages/requests/sessions.py:703: in send 18:16:19 r = adapter.send(request, **kwargs) 18:16:19 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 18:16:19 18:16:19 self = 18:16:19 request = , stream = False 18:16:19 timeout = Timeout(connect=10, read=10, total=None), verify = True, cert = None 18:16:19 proxies = OrderedDict() 18:16:19 18:16:19 def send( 18:16:19 self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 18:16:19 ): 18:16:19 """Sends PreparedRequest object. Returns Response object. 18:16:19 18:16:19 :param request: The :class:`PreparedRequest ` being sent. 18:16:19 :param stream: (optional) Whether to stream the request content. 18:16:19 :param timeout: (optional) How long to wait for the server to send 18:16:19 data before giving up, as a float, or a :ref:`(connect timeout, 18:16:19 read timeout) ` tuple. 18:16:19 :type timeout: float or tuple or urllib3 Timeout object 18:16:19 :param verify: (optional) Either a boolean, in which case it controls whether 18:16:19 we verify the server's TLS certificate, or a string, in which case it 18:16:19 must be a path to a CA bundle to use 18:16:19 :param cert: (optional) Any user-provided SSL certificate to be trusted. 
18:16:19 :param proxies: (optional) The proxies dictionary to apply to the request. 18:16:19 :rtype: requests.Response 18:16:19 """ 18:16:19 18:16:19 try: 18:16:19 conn = self.get_connection_with_tls_context( 18:16:19 request, verify, proxies=proxies, cert=cert 18:16:19 ) 18:16:19 except LocationValueError as e: 18:16:19 raise InvalidURL(e, request=request) 18:16:19 18:16:19 self.cert_verify(conn, request.url, verify, cert) 18:16:19 url = self.request_url(request, proxies) 18:16:19 self.add_headers( 18:16:19 request, 18:16:19 stream=stream, 18:16:19 timeout=timeout, 18:16:19 verify=verify, 18:16:19 cert=cert, 18:16:19 proxies=proxies, 18:16:19 ) 18:16:19 18:16:19 chunked = not (request.body is None or "Content-Length" in request.headers) 18:16:19 18:16:19 if isinstance(timeout, tuple): 18:16:19 try: 18:16:19 connect, read = timeout 18:16:19 timeout = TimeoutSauce(connect=connect, read=read) 18:16:19 except ValueError: 18:16:19 raise ValueError( 18:16:19 f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 18:16:19 f"or a single float to set both timeouts to the same value." 
18:16:19 ) 18:16:19 elif isinstance(timeout, TimeoutSauce): 18:16:19 pass 18:16:19 else: 18:16:19 timeout = TimeoutSauce(connect=timeout, read=timeout) 18:16:19 18:16:19 try: 18:16:19 resp = conn.urlopen( 18:16:19 method=request.method, 18:16:19 url=url, 18:16:19 body=request.body, 18:16:19 headers=request.headers, 18:16:19 redirect=False, 18:16:19 assert_same_host=False, 18:16:19 preload_content=False, 18:16:19 decode_content=False, 18:16:19 retries=self.max_retries, 18:16:19 timeout=timeout, 18:16:19 chunked=chunked, 18:16:19 ) 18:16:19 18:16:19 except (ProtocolError, OSError) as err: 18:16:19 raise ConnectionError(err, request=request) 18:16:19 18:16:19 except MaxRetryError as e: 18:16:19 if isinstance(e.reason, ConnectTimeoutError): 18:16:19 # TODO: Remove this in 3.0.0: see #2811 18:16:19 if not isinstance(e.reason, NewConnectionError): 18:16:19 raise ConnectTimeout(e, request=request) 18:16:19 18:16:19 if isinstance(e.reason, ResponseError): 18:16:19 raise RetryError(e, request=request) 18:16:19 18:16:19 if isinstance(e.reason, _ProxyError): 18:16:19 raise ProxyError(e, request=request) 18:16:19 18:16:19 if isinstance(e.reason, _SSLError): 18:16:19 # This branch is for urllib3 v1.22 and later. 
18:16:19                 raise SSLError(e, request=request)
18:16:19 
18:16:19 >           raise ConnectionError(e, request=request)
18:16:19 E           requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=8182): Max retries exceeded with url: /rests/data/ietf-network:networks/network=openroadm-topology?content=config (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused'))
18:16:19 
18:16:19 ../.tox/tests121/lib/python3.11/site-packages/requests/adapters.py:700: ConnectionError
18:16:19 --------------------------- Captured stdout teardown ---------------------------
18:16:19 all processes killed
18:16:19 =========================== short test summary info ============================
18:16:19 FAILED transportpce_tests/1.2.1/test03_topology.py::TransportPCETopologyTesting::test_02_getClliNetwork
18:16:19 FAILED transportpce_tests/1.2.1/test03_topology.py::TransportPCETopologyTesting::test_03_getOpenRoadmNetwork
18:16:19 FAILED transportpce_tests/1.2.1/test03_topology.py::TransportPCETopologyTesting::test_04_getLinks_OpenroadmTopology
18:16:19 FAILED transportpce_tests/1.2.1/test03_topology.py::TransportPCETopologyTesting::test_05_getNodes_OpenRoadmTopology
18:16:19 FAILED transportpce_tests/1.2.1/test03_topology.py::TransportPCETopologyTesting::test_08_getOpenRoadmNetwork
18:16:19 FAILED transportpce_tests/1.2.1/test03_topology.py::TransportPCETopologyTesting::test_09_getNodes_OpenRoadmTopology
18:16:19 FAILED transportpce_tests/1.2.1/test03_topology.py::TransportPCETopologyTesting::test_10_connect_tail_xpdr_rdm
18:16:19 FAILED transportpce_tests/1.2.1/test03_topology.py::TransportPCETopologyTesting::test_11_connect_tail_rdm_xpdr
18:16:19 FAILED transportpce_tests/1.2.1/test03_topology.py::TransportPCETopologyTesting::test_12_getLinks_OpenRoadmTopology
18:16:19 FAILED transportpce_tests/1.2.1/test03_topology.py::TransportPCETopologyTesting::test_16_getClliNetwork
18:16:19 FAILED transportpce_tests/1.2.1/test03_topology.py::TransportPCETopologyTesting::test_17_getOpenRoadmNetwork
18:16:19 FAILED transportpce_tests/1.2.1/test03_topology.py::TransportPCETopologyTesting::test_18_getROADMLinkOpenRoadmTopology
18:16:19 FAILED transportpce_tests/1.2.1/test03_topology.py::TransportPCETopologyTesting::test_19_getLinkOmsAttributesOpenRoadmTopology
18:16:19 FAILED transportpce_tests/1.2.1/test03_topology.py::TransportPCETopologyTesting::test_20_getNodes_OpenRoadmTopology
18:16:19 FAILED transportpce_tests/1.2.1/test03_topology.py::TransportPCETopologyTesting::test_22_omsAttributes_ROADMA_ROADMB
18:16:19 FAILED transportpce_tests/1.2.1/test03_topology.py::TransportPCETopologyTesting::test_23_omsAttributes_ROADMB_ROADMA
18:16:19 FAILED transportpce_tests/1.2.1/test03_topology.py::TransportPCETopologyTesting::test_24_omsAttributes_ROADMB_ROADMC
18:16:19 FAILED transportpce_tests/1.2.1/test03_topology.py::TransportPCETopologyTesting::test_25_omsAttributes_ROADMC_ROADMB
18:16:19 FAILED transportpce_tests/1.2.1/test03_topology.py::TransportPCETopologyTesting::test_26_getClliNetwork
18:16:19 FAILED transportpce_tests/1.2.1/test03_topology.py::TransportPCETopologyTesting::test_27_verifyDegree
18:16:19 FAILED transportpce_tests/1.2.1/test03_topology.py::TransportPCETopologyTesting::test_28_verifyOppositeLinkTopology
18:16:19 FAILED transportpce_tests/1.2.1/test03_topology.py::TransportPCETopologyTesting::test_29_getLinkOmsAttributesOpenRoadmTopology
18:16:19 FAILED transportpce_tests/1.2.1/test03_topology.py::TransportPCETopologyTesting::test_30_disconnect_ROADMB
18:16:19 FAILED transportpce_tests/1.2.1/test03_topology.py::TransportPCETopologyTesting::test_31_disconnect_ROADMC
18:16:19 FAILED transportpce_tests/1.2.1/test03_topology.py::TransportPCETopologyTesting::test_32_getNodes_OpenRoadmTopology
18:16:19 FAILED transportpce_tests/1.2.1/test03_topology.py::TransportPCETopologyTesting::test_33_getOpenRoadmNetwork
18:16:19 FAILED transportpce_tests/1.2.1/test03_topology.py::TransportPCETopologyTesting::test_34_getClliNetwork
18:16:19 FAILED transportpce_tests/1.2.1/test03_topology.py::TransportPCETopologyTesting::test_35_disconnect_XPDRA
18:16:19 FAILED transportpce_tests/1.2.1/test03_topology.py::TransportPCETopologyTesting::test_36_getClliNetwork
18:16:19 FAILED transportpce_tests/1.2.1/test03_topology.py::TransportPCETopologyTesting::test_37_getOpenRoadmNetwork
18:16:19 FAILED transportpce_tests/1.2.1/test03_topology.py::TransportPCETopologyTesting::test_38_getNodes_OpenRoadmTopology
18:16:19 FAILED transportpce_tests/1.2.1/test03_topology.py::TransportPCETopologyTesting::test_39_disconnect_ROADM_XPDRA_link
18:16:19 FAILED transportpce_tests/1.2.1/test03_topology.py::TransportPCETopologyTesting::test_40_getLinks_OpenRoadmTopology
18:16:19 FAILED transportpce_tests/1.2.1/test03_topology.py::TransportPCETopologyTesting::test_41_disconnect_ROADMA
18:16:19 FAILED transportpce_tests/1.2.1/test03_topology.py::TransportPCETopologyTesting::test_42_getClliNetwork
18:16:19 FAILED transportpce_tests/1.2.1/test03_topology.py::TransportPCETopologyTesting::test_43_getOpenRoadmNetwork
18:16:19 FAILED transportpce_tests/1.2.1/test03_topology.py::TransportPCETopologyTesting::test_44_check_roadm2roadm_link_persistence
18:16:19 37 failed, 7 passed in 856.81s (0:14:16)
18:16:19 tests221: FAIL ✖ in 4 minutes 36.39 seconds
18:16:19 tests121: exit 1 (1343.00 seconds) /w/workspace/transportpce-tox-verify-transportpce-master/tests> ./launch_tests.sh 1.2.1 pid=35233
18:16:19 tests121: FAIL ✖ in 22 minutes 30.07 seconds
18:16:19 tests_hybrid: install_deps> python -I -m pip install 'setuptools>=7.0' -r /w/workspace/transportpce-tox-verify-transportpce-master/tests/requirements.txt -r /w/workspace/transportpce-tox-verify-transportpce-master/tests/test-requirements.txt
18:16:25 tests_hybrid: freeze> python -m pip freeze --all
18:16:26 tests_hybrid: 
bcrypt==4.2.0,certifi==2024.8.30,cffi==1.17.1,charset-normalizer==3.4.0,cryptography==43.0.1,dict2xml==1.7.6,idna==3.10,iniconfig==2.0.0,lxml==5.3.0,netconf-client==3.1.1,packaging==24.1,paramiko==3.5.0,pip==24.2,pluggy==1.5.0,psutil==6.0.0,pycparser==2.22,PyNaCl==1.5.0,pytest==8.3.3,requests==2.32.3,setuptools==75.1.0,urllib3==2.2.3,wheel==0.44.0 18:16:26 tests_hybrid: commands[0] /w/workspace/transportpce-tox-verify-transportpce-master/tests> ./launch_tests.sh hybrid 18:16:26 using environment variables from ./karaf121.env 18:16:26 pytest -q transportpce_tests/hybrid/test01_device_change_notifications.py 18:17:10 ................................................... [100%] 18:21:56 51 passed in 330.37s (0:05:30) 18:21:56 pytest -q transportpce_tests/hybrid/test02_B100G_end2end.py 18:22:39 ........................................................................ [ 66%] 18:27:00 ..................................... [100%] 18:29:06 109 passed in 429.99s (0:07:09) 18:29:06 pytest -q transportpce_tests/hybrid/test03_autonomous_reroute.py 18:29:54 ..................................................... 
[100%]
18:33:26 53 passed in 259.00s (0:04:18)
18:33:26 tests_hybrid: OK ✔ in 17 minutes 6.79 seconds
18:33:26 buildlighty: install_deps> python -I -m pip install 'setuptools>=7.0' -r /w/workspace/transportpce-tox-verify-transportpce-master/tests/requirements.txt -r /w/workspace/transportpce-tox-verify-transportpce-master/tests/test-requirements.txt
18:33:31 buildlighty: freeze> python -m pip freeze --all
18:33:32 buildlighty: bcrypt==4.2.0,certifi==2024.8.30,cffi==1.17.1,charset-normalizer==3.4.0,cryptography==43.0.1,dict2xml==1.7.6,idna==3.10,iniconfig==2.0.0,lxml==5.3.0,netconf-client==3.1.1,packaging==24.1,paramiko==3.5.0,pip==24.2,pluggy==1.5.0,psutil==6.0.0,pycparser==2.22,PyNaCl==1.5.0,pytest==8.3.3,requests==2.32.3,setuptools==75.1.0,urllib3==2.2.3,wheel==0.44.0
18:33:32 buildlighty: commands[0] /w/workspace/transportpce-tox-verify-transportpce-master/lighty> ./build.sh
18:33:32 NOTE: Picked up JDK_JAVA_OPTIONS: --add-opens=java.base/java.lang=ALL-UNNAMED --add-opens=java.base/java.nio=ALL-UNNAMED
18:33:48 [ERROR] COMPILATION ERROR : 
18:33:48 [ERROR] /w/workspace/transportpce-tox-verify-transportpce-master/lighty/src/main/java/io/lighty/controllers/tpce/utils/TPCEUtils.java:[17,42] cannot find symbol
18:33:48   symbol: class YangModuleInfo
18:33:48   location: package org.opendaylight.yangtools.binding
18:33:48 [ERROR] /w/workspace/transportpce-tox-verify-transportpce-master/lighty/src/main/java/io/lighty/controllers/tpce/utils/TPCEUtils.java:[21,30] cannot find symbol
18:33:48   symbol: class YangModuleInfo
18:33:48   location: class io.lighty.controllers.tpce.utils.TPCEUtils
18:33:48 [ERROR] /w/workspace/transportpce-tox-verify-transportpce-master/lighty/src/main/java/io/lighty/controllers/tpce/utils/TPCEUtils.java:[343,30] cannot find symbol
18:33:48   symbol: class YangModuleInfo
18:33:48   location: class io.lighty.controllers.tpce.utils.TPCEUtils
18:33:48 [ERROR] /w/workspace/transportpce-tox-verify-transportpce-master/lighty/src/main/java/io/lighty/controllers/tpce/utils/TPCEUtils.java:[350,23] cannot find symbol
18:33:48   symbol: class YangModuleInfo
18:33:48   location: class io.lighty.controllers.tpce.utils.TPCEUtils
18:33:48 [ERROR] Failed to execute goal org.apache.maven.plugins:maven-compiler-plugin:3.13.0:compile (default-compile) on project tpce: Compilation failure: Compilation failure: 
18:33:48 [ERROR] /w/workspace/transportpce-tox-verify-transportpce-master/lighty/src/main/java/io/lighty/controllers/tpce/utils/TPCEUtils.java:[17,42] cannot find symbol
18:33:48 [ERROR]   symbol: class YangModuleInfo
18:33:48 [ERROR]   location: package org.opendaylight.yangtools.binding
18:33:48 [ERROR] /w/workspace/transportpce-tox-verify-transportpce-master/lighty/src/main/java/io/lighty/controllers/tpce/utils/TPCEUtils.java:[21,30] cannot find symbol
18:33:48 [ERROR]   symbol: class YangModuleInfo
18:33:48 [ERROR]   location: class io.lighty.controllers.tpce.utils.TPCEUtils
18:33:48 [ERROR] /w/workspace/transportpce-tox-verify-transportpce-master/lighty/src/main/java/io/lighty/controllers/tpce/utils/TPCEUtils.java:[343,30] cannot find symbol
18:33:48 [ERROR]   symbol: class YangModuleInfo
18:33:48 [ERROR]   location: class io.lighty.controllers.tpce.utils.TPCEUtils
18:33:48 [ERROR] /w/workspace/transportpce-tox-verify-transportpce-master/lighty/src/main/java/io/lighty/controllers/tpce/utils/TPCEUtils.java:[350,23] cannot find symbol
18:33:48 [ERROR]   symbol: class YangModuleInfo
18:33:48 [ERROR]   location: class io.lighty.controllers.tpce.utils.TPCEUtils
18:33:48 [ERROR] -> [Help 1]
18:33:48 [ERROR] 
18:33:48 [ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch.
18:33:48 [ERROR] Re-run Maven using the -X switch to enable full debug logging.
18:33:48 [ERROR] 
18:33:48 [ERROR] For more information about the errors and possible solutions, please read the following articles: 
18:33:48 [ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/MojoFailureException
18:33:48 unzip: cannot find or open target/tpce-bin.zip, target/tpce-bin.zip.zip or target/tpce-bin.zip.ZIP.
18:33:48 buildlighty: exit 9 (16.34 seconds) /w/workspace/transportpce-tox-verify-transportpce-master/lighty> ./build.sh pid=51016
18:33:48 buildlighty: command failed but is marked ignore outcome so handling it as success
18:33:48 buildcontroller: OK (104.28=setup[7.71]+cmd[96.57] seconds)
18:33:48 testsPCE: OK (310.55=setup[66.49]+cmd[244.06] seconds)
18:33:48 sims: OK (9.22=setup[6.73]+cmd[2.48] seconds)
18:33:48 build_karaf_tests121: OK (54.71=setup[6.83]+cmd[47.88] seconds)
18:33:48 tests121: FAIL code 1 (1350.07=setup[7.07]+cmd[1343.00] seconds)
18:33:48 build_karaf_tests221: OK (53.21=setup[6.85]+cmd[46.36] seconds)
18:33:48 tests_tapi: FAIL code 1 (688.45=setup[6.16]+cmd[682.29] seconds)
18:33:48 tests_network: FAIL code 1 (160.94=setup[6.66]+cmd[154.28] seconds)
18:33:48 tests221: FAIL code 1 (276.39=setup[6.67]+cmd[269.72] seconds)
18:33:48 build_karaf_tests71: OK (53.85=setup[13.83]+cmd[40.02] seconds)
18:33:48 tests71: OK (416.20=setup[5.70]+cmd[410.49] seconds)
18:33:48 build_karaf_tests_hybrid: OK (53.75=setup[7.69]+cmd[46.06] seconds)
18:33:48 tests_hybrid: OK (1026.79=setup[6.73]+cmd[1020.06] seconds)
18:33:48 buildlighty: OK (22.27=setup[5.93]+cmd[16.34] seconds)
18:33:48 docs: OK (33.87=setup[31.26]+cmd[2.62] seconds)
18:33:48 docs-linkcheck: OK (35.36=setup[31.82]+cmd[3.54] seconds)
18:33:48 checkbashisms: OK (3.12=setup[2.11]+cmd[0.01,0.05,0.94] seconds)
18:33:48 pre-commit: FAIL code 1 (38.39=setup[3.36]+cmd[0.00,0.01,35.02] seconds)
18:33:48 pylint: FAIL code 1 (26.46=setup[5.56]+cmd[20.90] seconds)
18:33:48 evaluation failed :( (2814.04 seconds)
18:33:48 + tox_status=255
18:33:48 + echo '---> Completed tox runs'
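Note on the tests121 failures above: all 37 collapse to a single root cause — nothing was listening on localhost:8182, so every RESTCONF GET failed with "[Errno 111] Connection refused" before any assertion ran. A stdlib-only pre-flight probe would let such a suite fail fast with one clear message instead of 37 identical tracebacks. This is a hypothetical sketch: `restconf_reachable` is not part of transportpce_tests/common/test_utils.py, and the host/port defaults merely mirror the values seen in this log.

```python
import socket


def restconf_reachable(host: str = "localhost", port: int = 8182,
                       timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to the RESTCONF port succeeds.

    Hypothetical helper, not part of the transportpce test utilities;
    defaults match the endpoint seen in this log (localhost:8182).
    """
    try:
        # create_connection raises ConnectionRefusedError (an OSError)
        # when nothing is listening -- exactly the failure mode above.
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False
```

A conftest-level check could call this once and skip or abort the module early when the controller under test is down.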
18:33:48 ---> Completed tox runs
18:33:48 + for i in .tox/*/log
18:33:48 ++ echo .tox/build_karaf_tests121/log
18:33:48 ++ awk -F/ '{print $2}'
18:33:48 + tox_env=build_karaf_tests121
18:33:48 + cp -r .tox/build_karaf_tests121/log /w/workspace/transportpce-tox-verify-transportpce-master/archives/tox/build_karaf_tests121
18:33:48 + for i in .tox/*/log
18:33:48 ++ echo .tox/build_karaf_tests221/log
18:33:48 ++ awk -F/ '{print $2}'
18:33:48 + tox_env=build_karaf_tests221
18:33:48 + cp -r .tox/build_karaf_tests221/log /w/workspace/transportpce-tox-verify-transportpce-master/archives/tox/build_karaf_tests221
18:33:48 + for i in .tox/*/log
18:33:48 ++ echo .tox/build_karaf_tests71/log
18:33:48 ++ awk -F/ '{print $2}'
18:33:48 + tox_env=build_karaf_tests71
18:33:48 + cp -r .tox/build_karaf_tests71/log /w/workspace/transportpce-tox-verify-transportpce-master/archives/tox/build_karaf_tests71
18:33:48 + for i in .tox/*/log
18:33:48 ++ echo .tox/build_karaf_tests_hybrid/log
18:33:48 ++ awk -F/ '{print $2}'
18:33:48 + tox_env=build_karaf_tests_hybrid
18:33:48 + cp -r .tox/build_karaf_tests_hybrid/log /w/workspace/transportpce-tox-verify-transportpce-master/archives/tox/build_karaf_tests_hybrid
18:33:48 + for i in .tox/*/log
18:33:48 ++ echo .tox/buildcontroller/log
18:33:48 ++ awk -F/ '{print $2}'
18:33:48 + tox_env=buildcontroller
18:33:48 + cp -r .tox/buildcontroller/log /w/workspace/transportpce-tox-verify-transportpce-master/archives/tox/buildcontroller
18:33:48 + for i in .tox/*/log
18:33:48 ++ echo .tox/buildlighty/log
18:33:48 ++ awk -F/ '{print $2}'
18:33:48 + tox_env=buildlighty
18:33:48 + cp -r .tox/buildlighty/log /w/workspace/transportpce-tox-verify-transportpce-master/archives/tox/buildlighty
18:33:48 + for i in .tox/*/log
18:33:48 ++ echo .tox/checkbashisms/log
18:33:48 ++ awk -F/ '{print $2}'
18:33:48 + tox_env=checkbashisms
18:33:48 + cp -r .tox/checkbashisms/log /w/workspace/transportpce-tox-verify-transportpce-master/archives/tox/checkbashisms
18:33:48 + for i in .tox/*/log
18:33:48 ++ echo .tox/docs-linkcheck/log
18:33:48 ++ awk -F/ '{print $2}'
18:33:48 + tox_env=docs-linkcheck
18:33:48 + cp -r .tox/docs-linkcheck/log /w/workspace/transportpce-tox-verify-transportpce-master/archives/tox/docs-linkcheck
18:33:48 + for i in .tox/*/log
18:33:48 ++ echo .tox/docs/log
18:33:48 ++ awk -F/ '{print $2}'
18:33:48 + tox_env=docs
18:33:48 + cp -r .tox/docs/log /w/workspace/transportpce-tox-verify-transportpce-master/archives/tox/docs
18:33:48 + for i in .tox/*/log
18:33:48 ++ echo .tox/pre-commit/log
18:33:48 ++ awk -F/ '{print $2}'
18:33:48 + tox_env=pre-commit
18:33:48 + cp -r .tox/pre-commit/log /w/workspace/transportpce-tox-verify-transportpce-master/archives/tox/pre-commit
18:33:48 + for i in .tox/*/log
18:33:48 ++ echo .tox/pylint/log
18:33:48 ++ awk -F/ '{print $2}'
18:33:48 + tox_env=pylint
18:33:48 + cp -r .tox/pylint/log /w/workspace/transportpce-tox-verify-transportpce-master/archives/tox/pylint
18:33:48 + for i in .tox/*/log
18:33:48 ++ echo .tox/sims/log
18:33:48 ++ awk -F/ '{print $2}'
18:33:48 + tox_env=sims
18:33:48 + cp -r .tox/sims/log /w/workspace/transportpce-tox-verify-transportpce-master/archives/tox/sims
18:33:48 + for i in .tox/*/log
18:33:48 ++ echo .tox/tests121/log
18:33:48 ++ awk -F/ '{print $2}'
18:33:48 + tox_env=tests121
18:33:48 + cp -r .tox/tests121/log /w/workspace/transportpce-tox-verify-transportpce-master/archives/tox/tests121
18:33:48 + for i in .tox/*/log
18:33:48 ++ echo .tox/tests221/log
18:33:48 ++ awk -F/ '{print $2}'
18:33:48 + tox_env=tests221
18:33:48 + cp -r .tox/tests221/log /w/workspace/transportpce-tox-verify-transportpce-master/archives/tox/tests221
18:33:48 + for i in .tox/*/log
18:33:48 ++ echo .tox/tests71/log
18:33:48 ++ awk -F/ '{print $2}'
18:33:48 + tox_env=tests71
18:33:48 + cp -r .tox/tests71/log /w/workspace/transportpce-tox-verify-transportpce-master/archives/tox/tests71
18:33:48 + for i in .tox/*/log
18:33:48 ++ echo .tox/testsPCE/log
18:33:48 ++ awk -F/ '{print $2}'
18:33:48 + tox_env=testsPCE
18:33:48 + cp -r .tox/testsPCE/log /w/workspace/transportpce-tox-verify-transportpce-master/archives/tox/testsPCE
18:33:48 + for i in .tox/*/log
18:33:48 ++ echo .tox/tests_hybrid/log
18:33:48 ++ awk -F/ '{print $2}'
18:33:48 + tox_env=tests_hybrid
18:33:48 + cp -r .tox/tests_hybrid/log /w/workspace/transportpce-tox-verify-transportpce-master/archives/tox/tests_hybrid
18:33:48 + for i in .tox/*/log
18:33:48 ++ echo .tox/tests_network/log
18:33:48 ++ awk -F/ '{print $2}'
18:33:48 + tox_env=tests_network
18:33:48 + cp -r .tox/tests_network/log /w/workspace/transportpce-tox-verify-transportpce-master/archives/tox/tests_network
18:33:48 + for i in .tox/*/log
18:33:48 ++ echo .tox/tests_tapi/log
18:33:48 ++ awk -F/ '{print $2}'
18:33:48 + tox_env=tests_tapi
18:33:48 + cp -r .tox/tests_tapi/log /w/workspace/transportpce-tox-verify-transportpce-master/archives/tox/tests_tapi
18:33:48 + DOC_DIR=docs/_build/html
18:33:48 + [[ -d docs/_build/html ]]
18:33:48 + echo '---> Archiving generated docs'
18:33:48 ---> Archiving generated docs
18:33:48 + mv docs/_build/html /w/workspace/transportpce-tox-verify-transportpce-master/archives/docs
18:33:48 + echo '---> tox-run.sh ends'
18:33:48 ---> tox-run.sh ends
18:33:48 + test 255 -eq 0
18:33:48 + exit 255
18:33:48 ++ '[' 1 = 1 ']'
18:33:48 ++ '[' -x /usr/bin/clear_console ']'
18:33:48 ++ /usr/bin/clear_console -q
18:33:48 Build step 'Execute shell' marked build as failure
18:33:48 $ ssh-agent -k
18:33:48 unset SSH_AUTH_SOCK;
18:33:48 unset SSH_AGENT_PID;
18:33:48 echo Agent pid 12357 killed;
18:33:48 [ssh-agent] Stopped.
18:33:48 [PostBuildScript] - [INFO] Executing post build scripts.
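The repeated `+ for i in .tox/*/log` / `+ cp -r …` trace above is one shell loop expanded once per tox environment. A self-contained sketch of that loop (the temp workspace and the fake `pylint` env are illustration-only stand-ins; the real job uses /w/workspace/transportpce-tox-verify-transportpce-master):

```shell
#!/bin/bash
# Sketch of the log-archiving loop traced above: for each tox environment
# that left a log directory under .tox/, copy it to archives/tox/<env>.
workspace=$(mktemp -d)                 # stand-in for the Jenkins workspace
cd "$workspace"
mkdir -p .tox/pylint/log archives/tox  # fake one tox env for the demo
echo "demo" > .tox/pylint/log/run.log
for i in .tox/*/log; do
    # awk -F/ splits on "/", so $2 is the environment name, e.g.
    # .tox/pylint/log -> pylint
    tox_env=$(echo "$i" | awk -F/ '{print $2}')
    cp -r "$i" "archives/tox/$tox_env"
done
```

Deriving the archive subdirectory from the second path component is what makes the loop work unchanged whenever environments are added to or removed from tox.ini.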
18:33:48 [transportpce-tox-verify-transportpce-master] $ /bin/bash /tmp/jenkins3947665043239051346.sh
18:33:48 ---> sysstat.sh
18:33:49 [transportpce-tox-verify-transportpce-master] $ /bin/bash /tmp/jenkins3060957873958816489.sh
18:33:49 ---> package-listing.sh
18:33:49 ++ facter osfamily
18:33:49 ++ tr '[:upper:]' '[:lower:]'
18:33:49 + OS_FAMILY=debian
18:33:49 + workspace=/w/workspace/transportpce-tox-verify-transportpce-master
18:33:49 + START_PACKAGES=/tmp/packages_start.txt
18:33:49 + END_PACKAGES=/tmp/packages_end.txt
18:33:49 + DIFF_PACKAGES=/tmp/packages_diff.txt
18:33:49 + PACKAGES=/tmp/packages_start.txt
18:33:49 + '[' /w/workspace/transportpce-tox-verify-transportpce-master ']'
18:33:49 + PACKAGES=/tmp/packages_end.txt
18:33:49 + case "${OS_FAMILY}" in
18:33:49 + dpkg -l
18:33:49 + grep '^ii'
18:33:49 + '[' -f /tmp/packages_start.txt ']'
18:33:49 + '[' -f /tmp/packages_end.txt ']'
18:33:49 + diff /tmp/packages_start.txt /tmp/packages_end.txt
18:33:49 + '[' /w/workspace/transportpce-tox-verify-transportpce-master ']'
18:33:49 + mkdir -p /w/workspace/transportpce-tox-verify-transportpce-master/archives/
18:33:49 + cp -f /tmp/packages_diff.txt /tmp/packages_end.txt /tmp/packages_start.txt /w/workspace/transportpce-tox-verify-transportpce-master/archives/
18:33:49 [transportpce-tox-verify-transportpce-master] $ /bin/bash /tmp/jenkins3077252992380207410.sh
18:33:49 ---> capture-instance-metadata.sh
18:33:49 Setup pyenv:
18:33:49   system
18:33:49   3.8.13
18:33:49   3.9.13
18:33:49   3.10.13
18:33:49 * 3.11.7 (set by /w/workspace/transportpce-tox-verify-transportpce-master/.python-version)
18:33:49 lf-activate-venv(): INFO: Reuse venv:/tmp/venv-lNH7 from file:/tmp/.os_lf_venv
18:33:50 lf-activate-venv(): INFO: Installing: lftools
18:34:00 lf-activate-venv(): INFO: Adding /tmp/venv-lNH7/bin to PATH
18:34:00 INFO: Running in OpenStack, capturing instance metadata
18:34:01 [transportpce-tox-verify-transportpce-master] $ /bin/bash /tmp/jenkins2356166598035314070.sh
18:34:01 provisioning config files...
18:34:01 Could not find credentials [logs] for transportpce-tox-verify-transportpce-master #2083
18:34:01 copy managed file [jenkins-log-archives-settings] to file:/w/workspace/transportpce-tox-verify-transportpce-master@tmp/config4219155273346933066tmp
18:34:01 Regular expression run condition: Expression=[^.*logs-s3.*], Label=[odl-logs-s3-cloudfront-index]
18:34:01 Run condition [Regular expression match] enabling perform for step [Provide Configuration files]
18:34:01 provisioning config files...
18:34:01 copy managed file [jenkins-s3-log-ship] to file:/home/jenkins/.aws/credentials
18:34:01 [EnvInject] - Injecting environment variables from a build step.
18:34:01 [EnvInject] - Injecting as environment variables the properties content
18:34:01 SERVER_ID=logs
18:34:01
18:34:01 [EnvInject] - Variables injected successfully.
18:34:01 [transportpce-tox-verify-transportpce-master] $ /bin/bash /tmp/jenkins2982549915940757096.sh
18:34:01 ---> create-netrc.sh
18:34:01 WARN: Log server credential not found.
18:34:02 [transportpce-tox-verify-transportpce-master] $ /bin/bash /tmp/jenkins4331767113157310303.sh
18:34:02 ---> python-tools-install.sh
18:34:02 Setup pyenv:
18:34:02   system
18:34:02   3.8.13
18:34:02   3.9.13
18:34:02   3.10.13
18:34:02 * 3.11.7 (set by /w/workspace/transportpce-tox-verify-transportpce-master/.python-version)
18:34:02 lf-activate-venv(): INFO: Reuse venv:/tmp/venv-lNH7 from file:/tmp/.os_lf_venv
18:34:03 lf-activate-venv(): INFO: Installing: lftools
18:34:11 lf-activate-venv(): INFO: Adding /tmp/venv-lNH7/bin to PATH
18:34:11 [transportpce-tox-verify-transportpce-master] $ /bin/bash /tmp/jenkins8123561885854945390.sh
18:34:11 ---> sudo-logs.sh
18:34:11 Archiving 'sudo' log..
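Every post-build step above logs `Reuse venv:/tmp/venv-lNH7 from file:/tmp/.os_lf_venv` rather than rebuilding a virtualenv. A sketch of that caching pattern (the variable names and control flow here are assumptions inferred from the log messages, not the actual lf-activate-venv() source):

```shell
#!/bin/sh
# Sketch of venv reuse: the helper records the virtualenv path in a cache
# file and later invocations reuse it instead of creating a fresh venv.
venv_cache=$(mktemp)                   # stands in for /tmp/.os_lf_venv
echo "/tmp/venv-lNH7" > "$venv_cache"  # pretend an earlier step created it
if [ -s "$venv_cache" ]; then
    # cache hit: reuse the recorded venv, as seen repeatedly in the log
    venv=$(cat "$venv_cache")
    echo "lf-activate-venv(): INFO: Reuse venv:$venv from file:$venv_cache"
else
    # cache miss: create a new venv and record its path for later steps
    venv=$(mktemp -d /tmp/venv-XXXX)
    echo "$venv" > "$venv_cache"
fi
PATH="$venv/bin:$PATH"
```

Caching the path is why only the first step of the job pays the venv-creation cost while the later `job-cost.sh` and `logs-deploy.sh` steps start in about a second.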
18:34:11 [transportpce-tox-verify-transportpce-master] $ /bin/bash /tmp/jenkins5240854540297603908.sh
18:34:11 ---> job-cost.sh
18:34:11 Setup pyenv:
18:34:11   system
18:34:11   3.8.13
18:34:11   3.9.13
18:34:11   3.10.13
18:34:11 * 3.11.7 (set by /w/workspace/transportpce-tox-verify-transportpce-master/.python-version)
18:34:12 lf-activate-venv(): INFO: Reuse venv:/tmp/venv-lNH7 from file:/tmp/.os_lf_venv
18:34:12 lf-activate-venv(): INFO: Installing: zipp==1.1.0 python-openstackclient urllib3~=1.26.15
18:34:17 lf-activate-venv(): INFO: Adding /tmp/venv-lNH7/bin to PATH
18:34:17 INFO: No Stack...
18:34:17 INFO: Retrieving Pricing Info for: v3-standard-4
18:34:18 INFO: Archiving Costs
18:34:18 [transportpce-tox-verify-transportpce-master] $ /bin/bash -l /tmp/jenkins6483386246709236208.sh
18:34:18 ---> logs-deploy.sh
18:34:18 Setup pyenv:
18:34:18   system
18:34:18   3.8.13
18:34:18   3.9.13
18:34:18   3.10.13
18:34:18 * 3.11.7 (set by /w/workspace/transportpce-tox-verify-transportpce-master/.python-version)
18:34:18 lf-activate-venv(): INFO: Reuse venv:/tmp/venv-lNH7 from file:/tmp/.os_lf_venv
18:34:19 lf-activate-venv(): INFO: Installing: lftools
18:34:27 lf-activate-venv(): INFO: Adding /tmp/venv-lNH7/bin to PATH
18:34:27 WARNING: Nexus logging server not set
18:34:27 INFO: S3 path logs/releng/vex-yul-odl-jenkins-1/transportpce-tox-verify-transportpce-master/2083/
18:34:27 INFO: archiving logs to S3
18:34:29 ---> uname -a:
18:34:29 Linux prd-ubuntu2004-docker-4c-16g-43174 5.4.0-190-generic #210-Ubuntu SMP Fri Jul 5 17:03:38 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux
18:34:29
18:34:29 ---> lscpu:
18:34:29 Architecture:                    x86_64
18:34:29 CPU op-mode(s):                  32-bit, 64-bit
18:34:29 Byte Order:                      Little Endian
18:34:29 Address sizes:                   40 bits physical, 48 bits virtual
18:34:29 CPU(s):                          4
18:34:29 On-line CPU(s) list:             0-3
18:34:29 Thread(s) per core:              1
18:34:29 Core(s) per socket:              1
18:34:29 Socket(s):                       4
18:34:29 NUMA node(s):                    1
18:34:29 Vendor ID:                       AuthenticAMD
18:34:29 CPU family:                      23
18:34:29 Model:                           49
18:34:29 Model name:                      AMD EPYC-Rome Processor
18:34:29 Stepping:                        0
18:34:29 CPU MHz:                         2800.000
18:34:29 BogoMIPS:                        5600.00
18:34:29 Virtualization:                  AMD-V
18:34:29 Hypervisor vendor:               KVM
18:34:29 Virtualization type:             full
18:34:29 L1d cache:                       128 KiB
18:34:29 L1i cache:                       128 KiB
18:34:29 L2 cache:                        2 MiB
18:34:29 L3 cache:                        64 MiB
18:34:29 NUMA node0 CPU(s):               0-3
18:34:29 Vulnerability Gather data sampling: Not affected
18:34:29 Vulnerability Itlb multihit:     Not affected
18:34:29 Vulnerability L1tf:              Not affected
18:34:29 Vulnerability Mds:               Not affected
18:34:29 Vulnerability Meltdown:          Not affected
18:34:29 Vulnerability Mmio stale data:   Not affected
18:34:29 Vulnerability Retbleed:          Vulnerable
18:34:29 Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
18:34:29 Vulnerability Spectre v1:        Mitigation; usercopy/swapgs barriers and __user pointer sanitization
18:34:29 Vulnerability Spectre v2:        Mitigation; Retpolines; IBPB conditional; IBRS_FW; STIBP disabled; RSB filling; PBRSB-eIBRS Not affected; BHI Not affected
18:34:29 Vulnerability Srbds:             Not affected
18:34:29 Vulnerability Tsx async abort:   Not affected
18:34:29 Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm rep_good nopl cpuid extd_apicid tsc_known_freq pni pclmulqdq ssse3 fma cx16 sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm cmp_legacy svm cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw topoext perfctr_core ssbd ibrs ibpb stibp vmmcall fsgsbase tsc_adjust bmi1 avx2 smep bmi2 rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 clzero xsaveerptr wbnoinvd arat npt nrip_save umip rdpid arch_capabilities
18:34:29
18:34:29 ---> nproc:
18:34:29 4
18:34:29
18:34:29 ---> df -h:
18:34:29 Filesystem      Size  Used Avail Use% Mounted on
18:34:29 udev            7.8G     0  7.8G   0% /dev
18:34:29 tmpfs           1.6G  1.1M  1.6G   1% /run
18:34:29 /dev/vda1        78G   17G   62G  21% /
18:34:29 tmpfs           7.9G     0  7.9G   0% /dev/shm
18:34:29 tmpfs           5.0M     0  5.0M   0% /run/lock
18:34:29 tmpfs           7.9G     0  7.9G   0% /sys/fs/cgroup
18:34:29 /dev/loop1       68M   68M     0 100% /snap/lxd/22753
18:34:29 /dev/loop0       62M   62M     0 100% /snap/core20/1405
18:34:29 /dev/vda15      105M  6.1M   99M   6% /boot/efi
18:34:29 tmpfs           1.6G     0  1.6G   0% /run/user/1001
18:34:29 /dev/loop3       39M   39M     0 100% /snap/snapd/21759
18:34:29 /dev/loop4       92M   92M     0 100% /snap/lxd/29619
18:34:29
18:34:29 ---> free -m:
18:34:29               total        used        free      shared  buff/cache   available
18:34:29 Mem:          15997         657        8095           1        7244       15000
18:34:29 Swap:          1023           0        1023
18:34:29
18:34:29 ---> ip addr:
18:34:29 1: lo: mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
18:34:29     link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
18:34:29     inet 127.0.0.1/8 scope host lo
18:34:29        valid_lft forever preferred_lft forever
18:34:29     inet6 ::1/128 scope host
18:34:29        valid_lft forever preferred_lft forever
18:34:29 2: ens3: mtu 1458 qdisc mq state UP group default qlen 1000
18:34:29     link/ether fa:16:3e:83:73:31 brd ff:ff:ff:ff:ff:ff
18:34:29     inet 10.30.171.178/23 brd 10.30.171.255 scope global dynamic ens3
18:34:29        valid_lft 83426sec preferred_lft 83426sec
18:34:29     inet6 fe80::f816:3eff:fe83:7331/64 scope link
18:34:29        valid_lft forever preferred_lft forever
18:34:29 3: docker0: mtu 1458 qdisc noqueue state DOWN group default
18:34:29     link/ether 02:42:ae:8f:ba:1a brd ff:ff:ff:ff:ff:ff
18:34:29     inet 10.250.0.254/24 brd 10.250.0.255 scope global docker0
18:34:29        valid_lft forever preferred_lft forever
18:34:29
18:34:29 ---> sar -b -r -n DEV:
18:34:29 Linux 5.4.0-190-generic (prd-ubuntu2004-docker-4c-16g-43174)  10/16/24  _x86_64_  (4 CPU)
18:34:29
18:34:29 17:44:58 LINUX RESTART (4 CPU)
18:34:29
18:34:29 17:45:01      tps     rtps     wtps     dtps  bread/s  bwrtn/s  bdscd/s
18:34:29 [per-minute I/O samples from 17:46:02 to 18:34:01 elided]
18:34:29 Average:    57.81     5.71    52.10     0.00   416.72  9669.84     0.00
18:34:29
18:34:29 17:45:01 kbmemfree kbavail kbmemused %memused kbbuffers kbcached kbcommit %commit kbactive kbinact kbdirty
18:34:29 [per-minute memory samples from 17:46:02 to 18:34:01 elided]
18:34:29 Average:   4293084 11199438  4746689    28.98    242803  6547320  5767533    33.09  6742786  4705516    71827
18:34:29
18:34:29 17:45:01    IFACE  rxpck/s  txpck/s   rxkB/s   txkB/s  rxcmp/s  txcmp/s rxmcst/s  %ifutil
18:34:29 [per-minute samples for docker0, lo and ens3 elided]
18:34:29 Average:  docker0     0.00     0.00     0.00     0.00     0.00     0.00     0.00     0.00
18:34:29 Average:       lo    19.13    19.13     9.24     9.24     0.00     0.00     0.00     0.00
18:34:29 Average:     ens3    35.38    26.03   436.06     4.38     0.00     0.00     0.00     0.00
18:34:29
18:34:29 ---> sar -P ALL:
18:34:29 Linux 5.4.0-190-generic (prd-ubuntu2004-docker-4c-16g-43174)  10/16/24  _x86_64_  (4 CPU)
18:34:29
18:34:29 17:44:58 LINUX RESTART (4 CPU)
18:34:29
18:34:29 17:45:01     CPU     %user     %nice   %system   %iowait    %steal     %idle
18:34:29 17:46:02     all     15.26     17.42     14.28      6.79      0.11     46.13
18:34:29 17:47:01     all     24.24      4.50      5.72      4.01      0.09     61.44
18:34:29 17:48:01     all     84.07      0.00      4.09      4.33      0.12      7.38
18:34:29 17:49:01     all     59.62      0.00      3.35      1.02      0.11     35.90
18:34:29 17:50:01     all     85.50      0.00      4.15      7.64      0.13      2.59
18:34:29 17:51:01     all     78.01      0.00      3.60      6.28      0.13     11.98
18:34:29 17:52:01     all     51.66      0.00      2.01      0.28      0.10     45.97
18:34:29 17:53:01     all     36.73      0.00      1.34      0.50      0.10     61.32
18:34:29 17:54:01     all     30.89      0.00      1.24      0.51      0.09     67.28
18:34:29 17:55:01     all     28.88      0.00      1.44      1.28      0.09     68.30
18:34:29 17:56:01     all      1.59      0.00      0.40      0.02      0.06     97.93
18:34:29 17:57:01     all     39.69      0.00      1.72      0.06      0.09     58.44
18:34:29 17:58:01     all     35.14      0.00      1.36      0.27      0.10     63.13
18:34:29 17:59:01     all     39.47      0.00      1.43      0.29      0.09     58.72
18:34:29 18:00:01     all      3.78      0.00      0.37      0.01      0.09     95.74
18:34:29 18:01:01     all      3.69      0.00      0.47      0.02      0.07     95.75
18:34:29 [per-CPU breakdown rows elided; log truncated mid-table after 18:01:01]
0.00 0.07 95.57 18:34:29 18:01:01 3 3.28 0.00 0.48 0.05 0.07 96.12 18:34:29 18:02:01 all 53.09 0.00 1.94 1.12 0.12 43.73 18:34:29 18:02:01 0 53.76 0.00 1.53 0.67 0.10 43.94 18:34:29 18:02:01 1 47.29 0.00 1.85 2.34 0.12 48.40 18:34:29 18:02:01 2 60.11 0.00 2.43 1.00 0.13 36.32 18:34:29 18:02:01 3 51.19 0.00 1.96 0.47 0.12 46.26 18:34:29 18:03:01 all 54.71 0.00 1.73 0.09 0.11 43.37 18:34:29 18:03:01 0 51.97 0.00 1.43 0.02 0.10 46.49 18:34:29 18:03:01 1 55.33 0.00 1.83 0.29 0.12 42.43 18:34:29 18:03:01 2 55.88 0.00 1.59 0.05 0.12 42.35 18:34:29 18:03:01 3 55.64 0.00 2.06 0.00 0.10 42.20 18:34:29 18:04:01 all 3.27 0.00 0.29 0.05 0.08 96.31 18:34:29 18:04:01 0 2.71 0.00 0.24 0.00 0.08 96.97 18:34:29 18:04:01 1 3.14 0.00 0.27 0.17 0.08 96.34 18:34:29 18:04:01 2 4.06 0.00 0.23 0.00 0.08 95.62 18:34:29 18:04:01 3 3.15 0.00 0.42 0.02 0.08 96.33 18:34:29 18:05:01 all 32.84 0.00 1.29 0.31 0.10 65.46 18:34:29 18:05:01 0 34.43 0.00 1.16 0.10 0.10 64.21 18:34:29 18:05:01 1 33.13 0.00 1.41 0.20 0.10 65.15 18:34:29 18:05:01 2 31.68 0.00 1.48 0.15 0.10 66.59 18:34:29 18:05:01 3 32.12 0.00 1.11 0.77 0.12 65.88 18:34:29 18:06:01 all 11.42 0.00 0.44 0.01 0.10 88.04 18:34:29 18:06:01 0 10.80 0.00 0.39 0.00 0.08 88.73 18:34:29 18:06:01 1 12.05 0.00 0.54 0.02 0.10 87.30 18:34:29 18:06:01 2 11.22 0.00 0.47 0.00 0.10 88.21 18:34:29 18:06:01 3 11.58 0.00 0.35 0.03 0.10 87.93 18:34:29 18:07:01 all 22.61 0.00 0.84 0.06 0.09 76.40 18:34:29 18:07:01 0 23.08 0.00 0.74 0.00 0.10 76.08 18:34:29 18:07:01 1 21.16 0.00 0.99 0.03 0.08 77.73 18:34:29 18:07:01 2 23.09 0.00 0.89 0.17 0.08 75.76 18:34:29 18:07:01 3 23.10 0.00 0.76 0.03 0.10 76.01 18:34:29 18:34:29 18:07:01 CPU %user %nice %system %iowait %steal %idle 18:34:29 18:08:01 all 14.23 0.00 0.97 0.35 0.09 84.37 18:34:29 18:08:01 0 15.21 0.00 1.03 0.03 0.08 83.64 18:34:29 18:08:01 1 13.41 0.00 0.87 0.02 0.10 85.60 18:34:29 18:08:01 2 15.55 0.00 0.95 0.50 0.08 82.91 18:34:29 18:08:01 3 12.73 0.00 1.02 0.83 0.08 85.33 18:34:29 18:09:01 all 75.83 
0.00 2.64 0.21 0.12 21.20 18:34:29 18:09:01 0 78.90 0.00 2.43 0.03 0.12 18.52 18:34:29 18:09:01 1 74.37 0.00 2.47 0.27 0.12 22.77 18:34:29 18:09:01 2 76.02 0.00 2.82 0.25 0.13 20.77 18:34:29 18:09:01 3 74.02 0.00 2.83 0.30 0.10 22.74 18:34:29 18:10:01 all 37.24 0.00 1.21 0.29 0.11 61.15 18:34:29 18:10:01 0 37.39 0.00 1.34 0.15 0.10 61.03 18:34:29 18:10:01 1 37.72 0.00 1.16 0.72 0.12 60.28 18:34:29 18:10:01 2 35.05 0.00 1.24 0.25 0.12 63.34 18:34:29 18:10:01 3 38.80 0.00 1.11 0.05 0.10 59.94 18:34:29 18:11:01 all 50.99 0.00 1.91 0.37 0.10 46.63 18:34:29 18:11:01 0 51.67 0.00 2.46 0.02 0.10 45.75 18:34:29 18:11:01 1 51.91 0.00 1.70 1.33 0.10 44.96 18:34:29 18:11:01 2 50.38 0.00 1.58 0.15 0.10 47.79 18:34:29 18:11:01 3 49.98 0.00 1.88 0.00 0.10 48.03 18:34:29 18:12:01 all 6.15 0.00 0.38 0.01 0.10 93.36 18:34:29 18:12:01 0 6.16 0.00 0.45 0.00 0.10 93.29 18:34:29 18:12:01 1 6.37 0.00 0.39 0.03 0.10 93.11 18:34:29 18:12:01 2 7.01 0.00 0.27 0.00 0.10 92.62 18:34:29 18:12:01 3 5.07 0.00 0.40 0.00 0.10 94.42 18:34:29 18:13:01 all 3.78 0.00 0.37 0.00 0.08 95.77 18:34:29 18:13:01 0 3.43 0.00 0.20 0.00 0.05 96.31 18:34:29 18:13:01 1 5.08 0.00 0.37 0.02 0.08 94.45 18:34:29 18:13:01 2 3.44 0.00 0.50 0.00 0.08 95.97 18:34:29 18:13:01 3 3.18 0.00 0.40 0.00 0.08 96.33 18:34:29 18:14:01 all 0.18 0.00 0.10 0.01 0.08 99.63 18:34:29 18:14:01 0 0.13 0.00 0.07 0.00 0.07 99.73 18:34:29 18:14:01 1 0.05 0.00 0.03 0.05 0.05 99.82 18:34:29 18:14:01 2 0.30 0.00 0.15 0.00 0.10 99.45 18:34:29 18:14:01 3 0.25 0.00 0.15 0.00 0.08 99.51 18:34:29 18:15:01 all 0.25 0.00 0.10 0.00 0.07 99.58 18:34:29 18:15:01 0 0.20 0.00 0.07 0.00 0.03 99.70 18:34:29 18:15:01 1 0.32 0.00 0.10 0.02 0.10 99.46 18:34:29 18:15:01 2 0.25 0.00 0.13 0.00 0.05 99.57 18:34:29 18:15:01 3 0.22 0.00 0.10 0.00 0.10 99.58 18:34:29 18:16:01 all 1.58 0.00 0.15 0.01 0.08 98.19 18:34:29 18:16:01 0 0.75 0.00 0.18 0.00 0.08 98.98 18:34:29 18:16:01 1 5.09 0.00 0.15 0.03 0.08 94.64 18:34:29 18:16:01 2 0.23 0.00 0.15 0.00 0.07 99.55 
18:34:29 18:16:01 3 0.27 0.00 0.10 0.00 0.07 99.57 18:34:29 18:17:01 all 42.08 0.00 1.63 0.44 0.09 55.77 18:34:29 18:17:01 0 43.38 0.00 1.57 0.08 0.10 54.87 18:34:29 18:17:01 1 42.26 0.00 2.36 1.56 0.08 53.74 18:34:29 18:17:01 2 40.92 0.00 1.42 0.13 0.07 57.46 18:34:29 18:17:01 3 41.75 0.00 1.16 0.00 0.10 56.99 18:34:29 18:18:01 all 21.69 0.00 0.62 0.57 0.09 77.03 18:34:29 18:18:01 0 21.63 0.00 0.69 0.47 0.08 77.13 18:34:29 18:18:01 1 22.15 0.00 0.69 1.75 0.10 75.31 18:34:29 18:18:01 2 23.19 0.00 0.45 0.00 0.08 76.27 18:34:29 18:18:01 3 19.80 0.00 0.66 0.07 0.08 79.39 18:34:29 18:34:29 18:18:01 CPU %user %nice %system %iowait %steal %idle 18:34:29 18:19:01 all 3.82 0.00 0.18 0.02 0.08 95.90 18:34:29 18:19:01 0 4.16 0.00 0.23 0.03 0.08 95.49 18:34:29 18:19:01 1 3.94 0.00 0.22 0.03 0.08 95.72 18:34:29 18:19:01 2 3.53 0.00 0.17 0.00 0.08 96.22 18:34:29 18:19:01 3 3.66 0.00 0.12 0.00 0.07 96.15 18:34:29 18:20:01 all 0.23 0.00 0.09 0.01 0.07 99.61 18:34:29 18:20:01 0 0.13 0.00 0.05 0.02 0.07 99.73 18:34:29 18:20:01 1 0.25 0.00 0.10 0.02 0.08 99.55 18:34:29 18:20:01 2 0.25 0.00 0.08 0.00 0.07 99.60 18:34:29 18:20:01 3 0.27 0.00 0.12 0.00 0.07 99.55 18:34:29 18:21:01 all 0.26 0.00 0.10 0.02 0.07 99.55 18:34:29 18:21:01 0 0.23 0.00 0.10 0.07 0.05 99.55 18:34:29 18:21:01 1 0.30 0.00 0.05 0.02 0.05 99.58 18:34:29 18:21:01 2 0.29 0.00 0.10 0.00 0.08 99.53 18:34:29 18:21:01 3 0.20 0.00 0.17 0.00 0.10 99.53 18:34:29 18:22:01 all 6.98 0.00 0.55 0.07 0.07 92.33 18:34:29 18:22:01 0 6.67 0.00 0.50 0.07 0.07 92.69 18:34:29 18:22:01 1 7.19 0.00 0.50 0.07 0.07 92.17 18:34:29 18:22:01 2 6.65 0.00 0.73 0.05 0.07 92.50 18:34:29 18:22:01 3 7.41 0.00 0.49 0.08 0.07 91.95 18:34:29 18:23:01 all 50.76 0.00 1.54 0.33 0.11 47.26 18:34:29 18:23:01 0 52.57 0.00 1.69 1.29 0.10 44.35 18:34:29 18:23:01 1 48.07 0.00 1.38 0.00 0.12 50.43 18:34:29 18:23:01 2 51.79 0.00 1.89 0.02 0.12 46.19 18:34:29 18:23:01 3 50.63 0.00 1.20 0.02 0.10 48.05 18:34:29 18:24:01 all 4.26 0.00 0.32 0.01 0.07 95.34 18:34:29 
18:24:01 0 4.19 0.00 0.25 0.03 0.08 95.44 18:34:29 18:24:01 1 4.28 0.00 0.32 0.02 0.07 95.31 18:34:29 18:24:01 2 4.13 0.00 0.40 0.00 0.07 95.41 18:34:29 18:24:01 3 4.43 0.00 0.32 0.00 0.07 95.18 18:34:29 18:25:01 all 3.04 0.00 0.30 0.00 0.06 96.60 18:34:29 18:25:01 0 3.00 0.00 0.40 0.02 0.07 96.51 18:34:29 18:25:01 1 2.68 0.00 0.32 0.00 0.07 96.94 18:34:29 18:25:01 2 3.07 0.00 0.20 0.00 0.05 96.68 18:34:29 18:25:01 3 3.39 0.00 0.27 0.00 0.07 96.27 18:34:29 18:26:01 all 2.80 0.00 0.28 0.01 0.06 96.85 18:34:29 18:26:01 0 4.48 0.00 0.38 0.02 0.08 95.05 18:34:29 18:26:01 1 2.44 0.00 0.35 0.02 0.05 97.15 18:34:29 18:26:01 2 2.15 0.00 0.28 0.00 0.07 97.50 18:34:29 18:26:01 3 2.11 0.00 0.10 0.00 0.05 97.74 18:34:29 18:27:01 all 2.31 0.00 0.25 0.01 0.06 97.37 18:34:29 18:27:01 0 2.82 0.00 0.20 0.02 0.05 96.91 18:34:29 18:27:01 1 2.11 0.00 0.20 0.02 0.07 97.61 18:34:29 18:27:01 2 2.22 0.00 0.40 0.00 0.07 97.31 18:34:29 18:27:01 3 2.07 0.00 0.22 0.00 0.07 97.64 18:34:29 18:28:01 all 4.01 0.00 0.37 0.00 0.07 95.55 18:34:29 18:28:01 0 3.81 0.00 0.32 0.02 0.07 95.79 18:34:29 18:28:01 1 4.09 0.00 0.37 0.00 0.07 95.48 18:34:29 18:28:01 2 3.81 0.00 0.30 0.00 0.08 95.81 18:34:29 18:28:01 3 4.32 0.00 0.49 0.00 0.07 95.12 18:34:29 18:29:01 all 2.69 0.00 0.29 0.02 0.07 96.93 18:34:29 18:29:01 0 2.94 0.00 0.28 0.07 0.07 96.64 18:34:29 18:29:01 1 2.68 0.00 0.25 0.02 0.07 96.99 18:34:29 18:29:01 2 2.84 0.00 0.35 0.00 0.08 96.73 18:34:29 18:29:01 3 2.31 0.00 0.28 0.00 0.05 97.35 18:34:29 18:34:29 18:29:01 CPU %user %nice %system %iowait %steal %idle 18:34:29 18:30:01 all 54.64 0.00 1.80 0.26 0.11 43.18 18:34:29 18:30:01 0 55.05 0.00 2.48 0.62 0.10 41.74 18:34:29 18:30:01 1 56.69 0.00 1.46 0.07 0.12 41.66 18:34:29 18:30:01 2 53.64 0.00 1.46 0.17 0.10 44.62 18:34:29 18:30:01 3 53.17 0.00 1.81 0.20 0.12 44.70 18:34:29 18:31:01 all 11.14 0.00 0.37 0.06 0.07 88.36 18:34:29 18:31:01 0 9.93 0.00 0.34 0.07 0.07 89.60 18:34:29 18:31:01 1 10.18 0.00 0.35 0.00 0.07 89.40 18:34:29 18:31:01 2 11.86 
0.00 0.45 0.18 0.08 87.43 18:34:29 18:31:01 3 12.57 0.00 0.35 0.00 0.07 87.01 18:34:29 18:32:01 all 11.24 0.00 0.46 0.03 0.08 88.20 18:34:29 18:32:01 0 10.94 0.00 0.47 0.10 0.08 88.41 18:34:29 18:32:01 1 11.81 0.00 0.43 0.00 0.07 87.69 18:34:29 18:32:01 2 11.15 0.00 0.43 0.00 0.08 88.33 18:34:29 18:32:01 3 11.07 0.00 0.48 0.00 0.08 88.36 18:34:29 18:33:01 all 4.98 0.00 0.32 0.01 0.06 94.63 18:34:29 18:33:01 0 5.04 0.00 0.35 0.03 0.07 94.50 18:34:29 18:33:01 1 5.36 0.00 0.23 0.00 0.07 94.34 18:34:29 18:33:01 2 4.73 0.00 0.35 0.00 0.05 94.87 18:34:29 18:33:01 3 4.78 0.00 0.34 0.00 0.07 94.82 18:34:29 18:34:01 all 21.17 0.00 1.26 0.51 0.08 76.97 18:34:29 18:34:01 0 16.45 0.00 1.39 0.95 0.08 81.13 18:34:29 18:34:01 1 17.04 0.00 1.41 0.52 0.08 80.95 18:34:29 18:34:01 2 13.59 0.00 0.99 0.30 0.08 85.04 18:34:29 18:34:01 3 37.59 0.00 1.27 0.27 0.08 60.79 18:34:29 Average: all 25.15 0.44 1.46 0.78 0.09 72.07 18:34:29 Average: 0 25.14 0.44 1.45 0.83 0.09 72.05 18:34:29 Average: 1 24.70 0.44 1.44 0.79 0.09 72.53 18:34:29 Average: 2 25.38 0.42 1.44 0.80 0.09 71.87 18:34:29 Average: 3 25.38 0.47 1.53 0.70 0.09 71.84 18:34:29 18:34:29 18:34:29