15:28:45 Triggered by Gerrit: https://git.opendaylight.org/gerrit/c/transportpce/+/119740
15:28:45 Running as SYSTEM
15:28:45 [EnvInject] - Loading node environment variables.
15:28:45 Building remotely on prd-ubuntu2204-docker-4c-16g-12958 (ubuntu2204-docker-4c-16g) in workspace /w/workspace/transportpce-tox-verify-transportpce-master
15:28:45 [ssh-agent] Looking for ssh-agent implementation...
15:28:45 [ssh-agent] Exec ssh-agent (binary ssh-agent on a remote machine)
15:28:45 $ ssh-agent
15:28:45 SSH_AUTH_SOCK=/tmp/ssh-XXXXXXpaFs1K/agent.1567
15:28:45 SSH_AGENT_PID=1569
15:28:45 [ssh-agent] Started.
15:28:45 Running ssh-add (command line suppressed)
15:28:45 Identity added: /w/workspace/transportpce-tox-verify-transportpce-master@tmp/private_key_1218469126531306993.key (/w/workspace/transportpce-tox-verify-transportpce-master@tmp/private_key_1218469126531306993.key)
15:28:45 [ssh-agent] Using credentials jenkins (jenkins-ssh)
15:28:45 The recommended git tool is: NONE
15:28:48 using credential jenkins-ssh
15:28:48 Wiping out workspace first.
15:28:48 Cloning the remote Git repository
15:28:48 Cloning repository git://devvexx.opendaylight.org/mirror/transportpce
15:28:48 > git init /w/workspace/transportpce-tox-verify-transportpce-master # timeout=10
15:28:48 Fetching upstream changes from git://devvexx.opendaylight.org/mirror/transportpce
15:28:48 > git --version # timeout=10
15:28:48 > git --version # 'git version 2.34.1'
15:28:48 using GIT_SSH to set credentials jenkins-ssh
15:28:48 Verifying host key using known hosts file
15:28:48 You're using 'Known hosts file' strategy to verify ssh host keys, but your known_hosts file does not exist, please go to 'Manage Jenkins' -> 'Security' -> 'Git Host Key Verification Configuration' and configure host key verification.
15:28:48 > git fetch --tags --force --progress -- git://devvexx.opendaylight.org/mirror/transportpce +refs/heads/*:refs/remotes/origin/* # timeout=10
15:28:51 > git config remote.origin.url git://devvexx.opendaylight.org/mirror/transportpce # timeout=10
15:28:51 > git config --add remote.origin.fetch +refs/heads/*:refs/remotes/origin/* # timeout=10
15:28:52 > git config remote.origin.url git://devvexx.opendaylight.org/mirror/transportpce # timeout=10
15:28:52 Fetching upstream changes from git://devvexx.opendaylight.org/mirror/transportpce
15:28:52 using GIT_SSH to set credentials jenkins-ssh
15:28:52 Verifying host key using known hosts file
15:28:52 You're using 'Known hosts file' strategy to verify ssh host keys, but your known_hosts file does not exist, please go to 'Manage Jenkins' -> 'Security' -> 'Git Host Key Verification Configuration' and configure host key verification.
15:28:52 > git fetch --tags --force --progress -- git://devvexx.opendaylight.org/mirror/transportpce refs/changes/40/119740/15 # timeout=10
15:28:52 > git rev-parse ced55b56b66c0c423a4f021f095b8337d7a5930c^{commit} # timeout=10
15:28:52 JENKINS-19022: warning: possible memory leak due to Git plugin usage; see: https://plugins.jenkins.io/git/#remove-git-plugin-buildsbybranch-builddata-script
15:28:52 Checking out Revision ced55b56b66c0c423a4f021f095b8337d7a5930c (refs/changes/40/119740/15)
15:28:52 > git config core.sparsecheckout # timeout=10
15:28:52 > git checkout -f ced55b56b66c0c423a4f021f095b8337d7a5930c # timeout=10
15:28:55 Commit message: "Improved readability by creating helper methods"
15:28:55 > git rev-parse FETCH_HEAD^{commit} # timeout=10
15:28:55 > git rev-list --no-walk 0dbe05752bcb6e155f30a317b70e7b62a1cd55c4 # timeout=10
15:28:55 > git remote # timeout=10
15:28:55 > git submodule init # timeout=10
15:28:56 > git submodule sync # timeout=10
15:28:56 > git config --get remote.origin.url # timeout=10
15:28:56 > git submodule init # timeout=10
15:28:56 > git config -f .gitmodules --get-regexp ^submodule\.(.+)\.url # timeout=10
15:28:56 ERROR: No submodules found.
15:28:56 provisioning config files...
15:28:56 copy managed file [npmrc] to file:/home/jenkins/.npmrc
15:28:56 copy managed file [pipconf] to file:/home/jenkins/.config/pip/pip.conf
15:28:56 [transportpce-tox-verify-transportpce-master] $ /bin/bash /tmp/jenkins12332523664242431474.sh
15:28:56 ---> python-tools-install.sh
15:28:56 Setup pyenv:
15:28:56 * system (set by /opt/pyenv/version)
15:28:56 * 3.8.20 (set by /opt/pyenv/version)
15:28:56 * 3.9.20 (set by /opt/pyenv/version)
15:28:56 3.10.15
15:28:56 3.11.10
15:29:01 lf-activate-venv(): INFO: Creating python3 venv at /tmp/venv-KPUl
15:29:01 lf-activate-venv(): INFO: Save venv in file: /tmp/.os_lf_venv
15:29:01 lf-activate-venv(): INFO: Installing base packages (pip, setuptools, virtualenv)
15:29:01 lf-activate-venv(): INFO: Attempting to install with network-safe options...
15:29:05 lf-activate-venv(): INFO: Base packages installed successfully
15:29:05 lf-activate-venv(): INFO: Installing additional packages: lftools
15:29:39 lf-activate-venv(): INFO: Adding /tmp/venv-KPUl/bin to PATH
15:29:39 Generating Requirements File
15:29:59 ERROR: pip's dependency resolver does not currently take into account all the packages that are installed. This behaviour is the source of the following dependency conflicts.
15:29:59 httplib2 0.30.2 requires pyparsing<4,>=3.0.4, but you have pyparsing 2.4.7 which is incompatible.
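[editor's note] The resolver error above reports installed pyparsing 2.4.7 against httplib2's `pyparsing<4,>=3.0.4` lower bound. A minimal sketch (not part of the job scripts) of checking such a lower bound with `sort -V`, the same version sort this log uses elsewhere to pick a Python release; assumes GNU/BSD sort with `-V` support and plain X.Y.Z version strings:

```shell
# Succeed if installed version $1 satisfies minimum bound $2 (inclusive).
meets_min() {
    lowest=$(printf '%s\n%s\n' "$1" "$2" | sort -V | head -n 1)
    [ "$lowest" = "$2" ]
}

meets_min 2.4.7 3.0.4 && echo satisfied || echo conflict   # the pyparsing case: conflict
meets_min 3.3.2 3.0.4 && echo satisfied || echo conflict   # satisfied
```

In practice `pip check` reports the same class of conflict directly; the sketch only shows the version-ordering idea.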
15:29:59 Python 3.11.10
15:29:59 pip 26.0.1 from /tmp/venv-KPUl/lib/python3.11/site-packages/pip (python 3.11)
15:30:00 appdirs==1.4.4
15:30:00 argcomplete==3.6.3
15:30:00 aspy.yaml==1.3.0
15:30:00 attrs==25.4.0
15:30:00 autopage==0.6.0
15:30:00 beautifulsoup4==4.14.3
15:30:00 boto3==1.42.50
15:30:00 botocore==1.42.50
15:30:00 bs4==0.0.2
15:30:00 certifi==2026.1.4
15:30:00 cffi==2.0.0
15:30:00 cfgv==3.5.0
15:30:00 chardet==5.2.0
15:30:00 charset-normalizer==3.4.4
15:30:00 click==8.3.1
15:30:00 cliff==4.13.2
15:30:00 cmd2==3.2.0
15:30:00 cryptography==3.3.2
15:30:00 debtcollector==3.0.0
15:30:00 decorator==5.2.1
15:30:00 defusedxml==0.7.1
15:30:00 Deprecated==1.3.1
15:30:00 distlib==0.4.0
15:30:00 dnspython==2.8.0
15:30:00 docker==7.1.0
15:30:00 dogpile.cache==1.5.0
15:30:00 durationpy==0.10
15:30:00 email-validator==2.3.0
15:30:00 filelock==3.24.2
15:30:00 future==1.0.0
15:30:00 gitdb==4.0.12
15:30:00 GitPython==3.1.46
15:30:00 httplib2==0.30.2
15:30:00 identify==2.6.16
15:30:00 idna==3.11
15:30:00 importlib-resources==1.5.0
15:30:00 iso8601==2.1.0
15:30:00 Jinja2==3.1.6
15:30:00 jmespath==1.1.0
15:30:00 jsonpatch==1.33
15:30:00 jsonpointer==3.0.0
15:30:00 jsonschema==4.26.0
15:30:00 jsonschema-specifications==2025.9.1
15:30:00 keystoneauth1==5.13.0
15:30:00 kubernetes==35.0.0
15:30:00 lftools==0.37.21
15:30:00 lxml==6.0.2
15:30:00 markdown-it-py==4.0.0
15:30:00 MarkupSafe==3.0.3
15:30:00 mdurl==0.1.2
15:30:00 msgpack==1.1.2
15:30:00 multi_key_dict==2.0.3
15:30:00 munch==4.0.0
15:30:00 netaddr==1.3.0
15:30:00 niet==1.4.2
15:30:00 nodeenv==1.10.0
15:30:00 oauth2client==4.1.3
15:30:00 oauthlib==3.3.1
15:30:00 openstacksdk==4.10.0
15:30:00 os-service-types==1.8.2
15:30:00 osc-lib==4.4.0
15:30:00 oslo.config==10.3.0
15:30:00 oslo.context==6.3.0
15:30:00 oslo.i18n==6.7.1
15:30:00 oslo.log==8.0.0
15:30:00 oslo.serialization==5.9.0
15:30:00 oslo.utils==9.2.0
15:30:00 packaging==26.0
15:30:00 pbr==7.0.3
15:30:00 platformdirs==4.9.2
15:30:00 prettytable==3.17.0
15:30:00 psutil==7.2.2
15:30:00 pyasn1==0.6.2
15:30:00 pyasn1_modules==0.4.2
15:30:00 pycparser==3.0
15:30:00 pygerrit2==2.0.15
15:30:00 PyGithub==2.8.1
15:30:00 Pygments==2.19.2
15:30:00 PyJWT==2.11.0
15:30:00 PyNaCl==1.6.2
15:30:00 pyparsing==2.4.7
15:30:00 pyperclip==1.11.0
15:30:00 pyrsistent==0.20.0
15:30:00 python-cinderclient==9.8.0
15:30:00 python-dateutil==2.9.0.post0
15:30:00 python-heatclient==5.0.0
15:30:00 python-jenkins==1.8.3
15:30:00 python-keystoneclient==5.7.0
15:30:00 python-magnumclient==4.9.0
15:30:00 python-openstackclient==9.0.0
15:30:00 python-swiftclient==4.9.0
15:30:00 PyYAML==6.0.3
15:30:00 referencing==0.37.0
15:30:00 requests==2.32.5
15:30:00 requests-oauthlib==2.0.0
15:30:00 requestsexceptions==1.4.0
15:30:00 rfc3986==2.0.0
15:30:00 rich==14.3.2
15:30:00 rich-argparse==1.7.2
15:30:00 rpds-py==0.30.0
15:30:00 rsa==4.9.1
15:30:00 ruamel.yaml==0.19.1
15:30:00 ruamel.yaml.clib==0.2.15
15:30:00 s3transfer==0.16.0
15:30:00 simplejson==3.20.2
15:30:00 six==1.17.0
15:30:00 smmap==5.0.2
15:30:00 soupsieve==2.8.3
15:30:00 stevedore==5.6.0
15:30:00 tabulate==0.9.0
15:30:00 toml==0.10.2
15:30:00 tomlkit==0.14.0
15:30:00 tqdm==4.67.3
15:30:00 typing_extensions==4.15.0
15:30:00 tzdata==2025.3
15:30:00 urllib3==1.26.20
15:30:00 virtualenv==20.37.0
15:30:00 wcwidth==0.6.0
15:30:00 websocket-client==1.9.0
15:30:00 wrapt==2.1.1
15:30:00 xdg==6.0.0
15:30:00 xmltodict==1.0.3
15:30:00 yq==3.4.3
15:30:00 [EnvInject] - Injecting environment variables from a build step.
15:30:00 [EnvInject] - Injecting as environment variables the properties content
15:30:00 PYTHON=python3
15:30:00
15:30:00 [EnvInject] - Variables injected successfully.
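[editor's note] The requirements dump above is plain `name==version` lines from `pip freeze`. A throwaway sketch of pulling a single pin out of such a dump with awk (the sample packages and versions are the ones from this log):

```shell
# A small excerpt of the freeze output above, as a shell string.
freeze='httplib2==0.30.2
pyparsing==2.4.7
urllib3==1.26.20'

# Print the pinned version of package $1 from pip-freeze style output.
pin_of() {
    printf '%s\n' "$freeze" | awk -F'==' -v p="$1" '$1 == p { print $2 }'
}

pin_of pyparsing   # prints 2.4.7
```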
15:30:00 [transportpce-tox-verify-transportpce-master] $ /bin/bash -l /tmp/jenkins4888941604533850684.sh
15:30:00 ---> tox-install.sh
15:30:00 + source /home/jenkins/lf-env.sh
15:30:00 + lf-activate-venv --venv-file /tmp/.toxenv tox virtualenv urllib3~=1.26.15
15:30:00 ++ mktemp -d /tmp/venv-XXXX
15:30:00 + lf_venv=/tmp/venv-mdFK
15:30:00 + local venv_file=/tmp/.os_lf_venv
15:30:00 + local python=python3
15:30:00 + local options
15:30:00 + local set_path=true
15:30:00 + local install_args=
15:30:00 ++ getopt -o np:v: -l no-path,system-site-packages,python:,venv-file: -n lf-activate-venv -- --venv-file /tmp/.toxenv tox virtualenv urllib3~=1.26.15
15:30:00 + options=' --venv-file '\''/tmp/.toxenv'\'' -- '\''tox'\'' '\''virtualenv'\'' '\''urllib3~=1.26.15'\'''
15:30:00 + eval set -- ' --venv-file '\''/tmp/.toxenv'\'' -- '\''tox'\'' '\''virtualenv'\'' '\''urllib3~=1.26.15'\'''
15:30:00 ++ set -- --venv-file /tmp/.toxenv -- tox virtualenv urllib3~=1.26.15
15:30:00 + true
15:30:00 + case $1 in
15:30:00 + venv_file=/tmp/.toxenv
15:30:00 + shift 2
15:30:00 + true
15:30:00 + case $1 in
15:30:00 + shift
15:30:00 + break
15:30:00 + case $python in
15:30:00 + local pkg_list=
15:30:00 + [[ -d /opt/pyenv ]]
15:30:00 + echo 'Setup pyenv:'
15:30:00 Setup pyenv:
15:30:00 + export PYENV_ROOT=/opt/pyenv
15:30:00 + PYENV_ROOT=/opt/pyenv
15:30:00 + export PATH=/opt/pyenv/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/opt/puppetlabs/bin
15:30:00 + PATH=/opt/pyenv/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/opt/puppetlabs/bin
15:30:00 + pyenv versions
15:30:00 system
15:30:00 3.8.20
15:30:00 3.9.20
15:30:00 3.10.15
15:30:00 * 3.11.10 (set by /w/workspace/transportpce-tox-verify-transportpce-master/.python-version)
15:30:00 + command -v pyenv
15:30:00 ++ pyenv init - --no-rehash
15:30:00 + eval 'PATH="$(bash --norc -ec '\''IFS=:; paths=($PATH);
15:30:00 for i in ${!paths[@]}; do
15:30:00 if [[ ${paths[i]} == "'\'''\''/opt/pyenv/shims'\'''\''" ]]; then unset '\''\'\'''\''paths[i]'\''\'\'''\'';
15:30:00 fi; done;
15:30:00 echo "${paths[*]}"'\'')"
15:30:00 export PATH="/opt/pyenv/shims:${PATH}"
15:30:00 export PYENV_SHELL=bash
15:30:00 source '\''/opt/pyenv/libexec/../completions/pyenv.bash'\''
15:30:00 pyenv() {
15:30:00 local command
15:30:00 command="${1:-}"
15:30:00 if [ "$#" -gt 0 ]; then
15:30:00 shift
15:30:00 fi
15:30:00
15:30:00 case "$command" in
15:30:00 rehash|shell)
15:30:00 eval "$(pyenv "sh-$command" "$@")"
15:30:00 ;;
15:30:00 *)
15:30:00 command pyenv "$command" "$@"
15:30:00 ;;
15:30:00 esac
15:30:00 }'
15:30:00 +++ bash --norc -ec 'IFS=:; paths=($PATH);
15:30:00 for i in ${!paths[@]}; do
15:30:00 if [[ ${paths[i]} == "/opt/pyenv/shims" ]]; then unset '\''paths[i]'\'';
15:30:00 fi; done;
15:30:00 echo "${paths[*]}"'
15:30:00 ++ PATH=/opt/pyenv/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/opt/puppetlabs/bin
15:30:00 ++ export PATH=/opt/pyenv/shims:/opt/pyenv/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/opt/puppetlabs/bin
15:30:00 ++ PATH=/opt/pyenv/shims:/opt/pyenv/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/opt/puppetlabs/bin
15:30:00 ++ export PYENV_SHELL=bash
15:30:00 ++ PYENV_SHELL=bash
15:30:00 ++ source /opt/pyenv/libexec/../completions/pyenv.bash
15:30:00 +++ complete -F _pyenv pyenv
15:30:00 ++ lf-pyver python3
15:30:00 ++ local py_version_xy=python3
15:30:00 ++ local py_version_xyz=
15:30:00 ++ pyenv versions
15:30:00 ++ local command
15:30:00 ++ command=versions
15:30:00 ++ '[' 1 -gt 0 ']'
15:30:00 ++ shift
15:30:00 ++ case "$command" in
15:30:00 ++ command pyenv versions
15:30:00 ++ sed 's/^[ *]* //'
15:30:00 ++ awk '{ print $1 }'
15:30:00 ++ grep -E '^[0-9.]*[0-9]$'
15:30:00 ++ [[ ! -s /tmp/.pyenv_versions ]]
15:30:00 +++ grep '^3' /tmp/.pyenv_versions
15:30:00 +++ sort -V
15:30:00 +++ tail -n 1
15:30:00 ++ py_version_xyz=3.11.10
15:30:00 ++ [[ -z 3.11.10 ]]
15:30:00 ++ echo 3.11.10
15:30:00 ++ return 0
15:30:00 + pyenv local 3.11.10
15:30:00 + local command
15:30:00 + command=local
15:30:00 + '[' 2 -gt 0 ']'
15:30:00 + shift
15:30:00 + case "$command" in
15:30:00 + command pyenv local 3.11.10
15:30:00 + for arg in "$@"
15:30:00 + case $arg in
15:30:00 + pkg_list+='tox '
15:30:00 + for arg in "$@"
15:30:00 + case $arg in
15:30:00 + pkg_list+='virtualenv '
15:30:00 + for arg in "$@"
15:30:00 + case $arg in
15:30:00 + pkg_list+='urllib3~=1.26.15 '
15:30:00 + [[ -f /tmp/.toxenv ]]
15:30:00 + [[ ! -f /tmp/.toxenv ]]
15:30:00 + [[ -n '' ]]
15:30:00 + python3 -m venv /tmp/venv-mdFK
15:30:04 + echo 'lf-activate-venv(): INFO: Creating python3 venv at /tmp/venv-mdFK'
15:30:04 lf-activate-venv(): INFO: Creating python3 venv at /tmp/venv-mdFK
15:30:04 + echo /tmp/venv-mdFK
15:30:04 + echo 'lf-activate-venv(): INFO: Save venv in file: /tmp/.toxenv'
15:30:04 lf-activate-venv(): INFO: Save venv in file: /tmp/.toxenv
15:30:04 + echo 'lf-activate-venv(): INFO: Installing base packages (pip, setuptools, virtualenv)'
15:30:04 lf-activate-venv(): INFO: Installing base packages (pip, setuptools, virtualenv)
15:30:04 + local 'pip_opts=--upgrade --quiet'
15:30:04 + pip_opts='--upgrade --quiet --trusted-host pypi.org'
15:30:04 + pip_opts='--upgrade --quiet --trusted-host pypi.org --trusted-host files.pythonhosted.org'
15:30:04 + pip_opts='--upgrade --quiet --trusted-host pypi.org --trusted-host files.pythonhosted.org --trusted-host pypi.python.org'
15:30:04 + [[ -n '' ]]
15:30:04 + [[ -n '' ]]
15:30:04 + echo 'lf-activate-venv(): INFO: Attempting to install with network-safe options...'
15:30:04 lf-activate-venv(): INFO: Attempting to install with network-safe options...
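[editor's note] The `pyenv init` trace above strips `/opt/pyenv/shims` from `$PATH` with an `IFS=:` array walk before re-prepending it, so the shims end up exactly once at the front. The same idea as a standalone POSIX-sh function (a sketch, not the literal lf-env.sh/pyenv code; glob characters in entries are not handled):

```shell
# Drop every occurrence of entry $2 from the colon-separated list $1.
strip_path_entry() {
    old_ifs=$IFS
    IFS=:
    out=
    for p in $1; do
        [ "$p" = "$2" ] || out="${out:+$out:}$p"
    done
    IFS=$old_ifs
    printf '%s\n' "$out"
}

strip_path_entry "/opt/pyenv/shims:/usr/bin:/bin" "/opt/pyenv/shims"   # prints /usr/bin:/bin
```

Re-prepending is then just `PATH="/opt/pyenv/shims:$(strip_path_entry "$PATH" /opt/pyenv/shims)"`, which is what the eval'd snippet in the log effectively does.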
15:30:04 + /tmp/venv-mdFK/bin/python3 -m pip install --upgrade --quiet --trusted-host pypi.org --trusted-host files.pythonhosted.org --trusted-host pypi.python.org pip 'setuptools<66' virtualenv
15:30:08 + echo 'lf-activate-venv(): INFO: Base packages installed successfully'
15:30:08 lf-activate-venv(): INFO: Base packages installed successfully
15:30:08 + [[ -z tox virtualenv urllib3~=1.26.15 ]]
15:30:08 + echo 'lf-activate-venv(): INFO: Installing additional packages: tox virtualenv urllib3~=1.26.15 '
15:30:08 lf-activate-venv(): INFO: Installing additional packages: tox virtualenv urllib3~=1.26.15
15:30:08 + /tmp/venv-mdFK/bin/python3 -m pip install --upgrade --quiet --trusted-host pypi.org --trusted-host files.pythonhosted.org --trusted-host pypi.python.org --upgrade-strategy eager tox virtualenv urllib3~=1.26.15
15:30:10 + type python3
15:30:10 + true
15:30:10 + echo 'lf-activate-venv(): INFO: Adding /tmp/venv-mdFK/bin to PATH'
15:30:10 lf-activate-venv(): INFO: Adding /tmp/venv-mdFK/bin to PATH
15:30:10 + PATH=/tmp/venv-mdFK/bin:/opt/pyenv/shims:/opt/pyenv/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/opt/puppetlabs/bin
15:30:10 + return 0
15:30:10 + python3 --version
15:30:10 Python 3.11.10
15:30:10 + python3 -m pip --version
15:30:11 pip 26.0.1 from /tmp/venv-mdFK/lib/python3.11/site-packages/pip (python 3.11)
15:30:11 + python3 -m pip freeze
15:30:11 cachetools==7.0.1
15:30:11 chardet==5.2.0
15:30:11 colorama==0.4.6
15:30:11 distlib==0.4.0
15:30:11 filelock==3.24.2
15:30:11 packaging==26.0
15:30:11 platformdirs==4.9.2
15:30:11 pluggy==1.6.0
15:30:11 pyproject-api==1.10.0
15:30:11 tox==4.36.1
15:30:11 urllib3==1.26.20
15:30:11 virtualenv==20.37.0
15:30:11 [transportpce-tox-verify-transportpce-master] $ /bin/sh -xe /tmp/jenkins16397584054818405697.sh
15:30:11 [EnvInject] - Injecting environment variables from a build step.
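[editor's note] tox-install.sh above writes the venv path to /tmp/.toxenv so that tox-run.sh, later in this log, can reuse the same venv ("Reuse venv:/tmp/venv-mdFK from file:/tmp/.toxenv") instead of building a second one. The create-or-reuse pattern, sketched with the actual `python3 -m venv` step stubbed out by a bare `mktemp -d` (this is an illustration, not the real lf-activate-venv):

```shell
# If the marker file exists, reuse the recorded venv path; otherwise
# "create" a venv (stubbed: mktemp -d stands in for python3 -m venv)
# and record its path for later steps.
activate_venv() {
    venv_file=$1
    if [ -f "$venv_file" ]; then
        lf_venv=$(cat "$venv_file")
        echo "reuse $lf_venv"
    else
        lf_venv=$(mktemp -d /tmp/venv-XXXX)
        echo "$lf_venv" > "$venv_file"
        echo "create $lf_venv"
    fi
}

marker=$(mktemp); rm -f "$marker"      # start with no marker file
first=$(activate_venv "$marker")       # create /tmp/venv-....
second=$(activate_venv "$marker")      # reuse the same path
rm -rf "$(cat "$marker")" "$marker"    # clean up the throwaway dirs
```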
15:30:11 [EnvInject] - Injecting as environment variables the properties content
15:30:11 PARALLEL=True
15:30:11
15:30:11 [EnvInject] - Variables injected successfully.
15:30:11 [transportpce-tox-verify-transportpce-master] $ /bin/bash -l /tmp/jenkins17809847016758032441.sh
15:30:11 ---> tox-run.sh
15:30:11 + PATH=/home/jenkins/.local/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/opt/puppetlabs/bin
15:30:11 + ARCHIVE_TOX_DIR=/w/workspace/transportpce-tox-verify-transportpce-master/archives/tox
15:30:11 + ARCHIVE_DOC_DIR=/w/workspace/transportpce-tox-verify-transportpce-master/archives/docs
15:30:11 + mkdir -p /w/workspace/transportpce-tox-verify-transportpce-master/archives/tox
15:30:11 + cd /w/workspace/transportpce-tox-verify-transportpce-master/.
15:30:11 + source /home/jenkins/lf-env.sh
15:30:11 + lf-activate-venv --venv-file /tmp/.toxenv tox virtualenv urllib3~=1.26.15
15:30:11 ++ mktemp -d /tmp/venv-XXXX
15:30:11 + lf_venv=/tmp/venv-QcaH
15:30:11 + local venv_file=/tmp/.os_lf_venv
15:30:11 + local python=python3
15:30:11 + local options
15:30:11 + local set_path=true
15:30:11 + local install_args=
15:30:11 ++ getopt -o np:v: -l no-path,system-site-packages,python:,venv-file: -n lf-activate-venv -- --venv-file /tmp/.toxenv tox virtualenv urllib3~=1.26.15
15:30:11 + options=' --venv-file '\''/tmp/.toxenv'\'' -- '\''tox'\'' '\''virtualenv'\'' '\''urllib3~=1.26.15'\'''
15:30:11 + eval set -- ' --venv-file '\''/tmp/.toxenv'\'' -- '\''tox'\'' '\''virtualenv'\'' '\''urllib3~=1.26.15'\'''
15:30:11 ++ set -- --venv-file /tmp/.toxenv -- tox virtualenv urllib3~=1.26.15
15:30:11 + true
15:30:11 + case $1 in
15:30:11 + venv_file=/tmp/.toxenv
15:30:11 + shift 2
15:30:11 + true
15:30:11 + case $1 in
15:30:11 + shift
15:30:11 + break
15:30:11 + case $python in
15:30:11 + local pkg_list=
15:30:11 + [[ -d /opt/pyenv ]]
15:30:11 + echo 'Setup pyenv:'
15:30:11 Setup pyenv:
15:30:11 + export PYENV_ROOT=/opt/pyenv
15:30:11 + PYENV_ROOT=/opt/pyenv
15:30:11 + export PATH=/opt/pyenv/bin:/home/jenkins/.local/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/opt/puppetlabs/bin
15:30:11 + PATH=/opt/pyenv/bin:/home/jenkins/.local/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/opt/puppetlabs/bin
15:30:11 + pyenv versions
15:30:11 system
15:30:11 3.8.20
15:30:11 3.9.20
15:30:11 3.10.15
15:30:11 * 3.11.10 (set by /w/workspace/transportpce-tox-verify-transportpce-master/.python-version)
15:30:11 + command -v pyenv
15:30:11 ++ pyenv init - --no-rehash
15:30:11 + eval 'PATH="$(bash --norc -ec '\''IFS=:; paths=($PATH);
15:30:11 for i in ${!paths[@]}; do
15:30:11 if [[ ${paths[i]} == "'\'''\''/opt/pyenv/shims'\'''\''" ]]; then unset '\''\'\'''\''paths[i]'\''\'\'''\'';
15:30:11 fi; done;
15:30:11 echo "${paths[*]}"'\'')"
15:30:11 export PATH="/opt/pyenv/shims:${PATH}"
15:30:11 export PYENV_SHELL=bash
15:30:11 source '\''/opt/pyenv/libexec/../completions/pyenv.bash'\''
15:30:11 pyenv() {
15:30:11 local command
15:30:11 command="${1:-}"
15:30:11 if [ "$#" -gt 0 ]; then
15:30:11 shift
15:30:11 fi
15:30:11
15:30:11 case "$command" in
15:30:11 rehash|shell)
15:30:11 eval "$(pyenv "sh-$command" "$@")"
15:30:11 ;;
15:30:11 *)
15:30:11 command pyenv "$command" "$@"
15:30:11 ;;
15:30:11 esac
15:30:11 }'
15:30:11 +++ bash --norc -ec 'IFS=:; paths=($PATH);
15:30:11 for i in ${!paths[@]}; do
15:30:11 if [[ ${paths[i]} == "/opt/pyenv/shims" ]]; then unset '\''paths[i]'\'';
15:30:11 fi; done;
15:30:11 echo "${paths[*]}"'
15:30:11 ++ PATH=/opt/pyenv/bin:/home/jenkins/.local/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/opt/puppetlabs/bin
15:30:11 ++ export PATH=/opt/pyenv/shims:/opt/pyenv/bin:/home/jenkins/.local/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/opt/puppetlabs/bin
15:30:11 ++ PATH=/opt/pyenv/shims:/opt/pyenv/bin:/home/jenkins/.local/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/opt/puppetlabs/bin
15:30:11 ++ export PYENV_SHELL=bash
15:30:11 ++ PYENV_SHELL=bash
15:30:11 ++ source /opt/pyenv/libexec/../completions/pyenv.bash
15:30:11 +++ complete -F _pyenv pyenv
15:30:11 ++ lf-pyver python3
15:30:11 ++ local py_version_xy=python3
15:30:11 ++ local py_version_xyz=
15:30:11 ++ pyenv versions
15:30:11 ++ local command
15:30:11 ++ command=versions
15:30:11 ++ '[' 1 -gt 0 ']'
15:30:11 ++ shift
15:30:11 ++ sed 's/^[ *]* //'
15:30:11 ++ case "$command" in
15:30:11 ++ command pyenv versions
15:30:11 ++ grep -E '^[0-9.]*[0-9]$'
15:30:11 ++ awk '{ print $1 }'
15:30:11 ++ [[ ! -s /tmp/.pyenv_versions ]]
15:30:11 +++ grep '^3' /tmp/.pyenv_versions
15:30:11 +++ sort -V
15:30:11 +++ tail -n 1
15:30:11 ++ py_version_xyz=3.11.10
15:30:11 ++ [[ -z 3.11.10 ]]
15:30:11 ++ echo 3.11.10
15:30:11 ++ return 0
15:30:11 + pyenv local 3.11.10
15:30:11 + local command
15:30:11 + command=local
15:30:11 + '[' 2 -gt 0 ']'
15:30:11 + shift
15:30:11 + case "$command" in
15:30:11 + command pyenv local 3.11.10
15:30:11 + for arg in "$@"
15:30:11 + case $arg in
15:30:11 + pkg_list+='tox '
15:30:11 + for arg in "$@"
15:30:11 + case $arg in
15:30:11 + pkg_list+='virtualenv '
15:30:11 + for arg in "$@"
15:30:11 + case $arg in
15:30:11 + pkg_list+='urllib3~=1.26.15 '
15:30:11 + [[ -f /tmp/.toxenv ]]
15:30:11 ++ cat /tmp/.toxenv
15:30:11 + lf_venv=/tmp/venv-mdFK
15:30:11 + echo 'lf-activate-venv(): INFO: Reuse venv:/tmp/venv-mdFK from' file:/tmp/.toxenv
15:30:11 lf-activate-venv(): INFO: Reuse venv:/tmp/venv-mdFK from file:/tmp/.toxenv
15:30:11 + echo 'lf-activate-venv(): INFO: Installing base packages (pip, setuptools, virtualenv)'
15:30:11 lf-activate-venv(): INFO: Installing base packages (pip, setuptools, virtualenv)
15:30:11 + local 'pip_opts=--upgrade --quiet'
15:30:11 + pip_opts='--upgrade --quiet --trusted-host pypi.org'
15:30:11 + pip_opts='--upgrade --quiet --trusted-host pypi.org --trusted-host files.pythonhosted.org'
15:30:11 + pip_opts='--upgrade --quiet --trusted-host pypi.org --trusted-host files.pythonhosted.org --trusted-host pypi.python.org'
15:30:11 + [[ -n '' ]]
15:30:11 + [[ -n '' ]]
15:30:11 + echo 'lf-activate-venv(): INFO: Attempting to install with network-safe options...'
15:30:11 lf-activate-venv(): INFO: Attempting to install with network-safe options...
15:30:11 + /tmp/venv-mdFK/bin/python3 -m pip install --upgrade --quiet --trusted-host pypi.org --trusted-host files.pythonhosted.org --trusted-host pypi.python.org pip 'setuptools<66' virtualenv
15:30:12 + echo 'lf-activate-venv(): INFO: Base packages installed successfully'
15:30:12 lf-activate-venv(): INFO: Base packages installed successfully
15:30:12 + [[ -z tox virtualenv urllib3~=1.26.15 ]]
15:30:12 + echo 'lf-activate-venv(): INFO: Installing additional packages: tox virtualenv urllib3~=1.26.15 '
15:30:12 lf-activate-venv(): INFO: Installing additional packages: tox virtualenv urllib3~=1.26.15
15:30:12 + /tmp/venv-mdFK/bin/python3 -m pip install --upgrade --quiet --trusted-host pypi.org --trusted-host files.pythonhosted.org --trusted-host pypi.python.org --upgrade-strategy eager tox virtualenv urllib3~=1.26.15
15:30:13 + type python3
15:30:13 + true
15:30:13 + echo 'lf-activate-venv(): INFO: Adding /tmp/venv-mdFK/bin to PATH'
15:30:13 lf-activate-venv(): INFO: Adding /tmp/venv-mdFK/bin to PATH
15:30:13 + PATH=/tmp/venv-mdFK/bin:/opt/pyenv/shims:/opt/pyenv/bin:/home/jenkins/.local/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/opt/puppetlabs/bin
15:30:13 + return 0
15:30:13 + [[ -d /opt/pyenv ]]
15:30:13 + echo '---> Setting up pyenv'
15:30:13 ---> Setting up pyenv
15:30:13 + export PYENV_ROOT=/opt/pyenv
15:30:13 + PYENV_ROOT=/opt/pyenv
15:30:13 + export PATH=/opt/pyenv/bin:/tmp/venv-mdFK/bin:/opt/pyenv/shims:/opt/pyenv/bin:/home/jenkins/.local/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/opt/puppetlabs/bin
15:30:13 + PATH=/opt/pyenv/bin:/tmp/venv-mdFK/bin:/opt/pyenv/shims:/opt/pyenv/bin:/home/jenkins/.local/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/opt/puppetlabs/bin
15:30:13 ++ pwd
15:30:13 + PYTHONPATH=/w/workspace/transportpce-tox-verify-transportpce-master
15:30:13 + export PYTHONPATH
15:30:13 + export TOX_TESTENV_PASSENV=PYTHONPATH
15:30:13 + TOX_TESTENV_PASSENV=PYTHONPATH
15:30:13 + tox --version
15:30:13 4.36.1 from /tmp/venv-mdFK/lib/python3.11/site-packages/tox/__init__.py
15:30:13 + PARALLEL=True
15:30:13 + TOX_OPTIONS_LIST=
15:30:13 + [[ -n '' ]]
15:30:13 + case ${PARALLEL,,} in
15:30:13 + TOX_OPTIONS_LIST=' --parallel auto --parallel-live'
15:30:13 + tox --parallel auto --parallel-live
15:30:13 + tee -a /w/workspace/transportpce-tox-verify-transportpce-master/archives/tox/tox.log
15:30:15 docs-linkcheck: install_deps> python -I -m pip install -r docs/requirements.txt
15:30:15 docs: install_deps> python -I -m pip install -r docs/requirements.txt
15:30:15 checkbashisms: freeze> python -m pip freeze --all
15:30:15 buildcontroller: install_deps> python -I -m pip install 'setuptools>=7.0' -r /w/workspace/transportpce-tox-verify-transportpce-master/tests/requirements.txt -r /w/workspace/transportpce-tox-verify-transportpce-master/tests/test-requirements.txt
15:30:16 checkbashisms: pip==26.0.1,setuptools==82.0.0
15:30:16 checkbashisms: commands[0] /w/workspace/transportpce-tox-verify-transportpce-master/tests> ./fixCIcentOS8reposMirrors.sh
15:30:16 checkbashisms: commands[1] /w/workspace/transportpce-tox-verify-transportpce-master/tests> sh -c 'command checkbashisms>/dev/null || sudo yum install -y devscripts-checkbashisms || sudo yum install -y devscripts-minimal || sudo yum install -y devscripts || sudo yum install -y https://archives.fedoraproject.org/pub/archive/fedora/linux/releases/31/Everything/x86_64/os/Packages/d/devscripts-checkbashisms-2.19.6-2.fc31.x86_64.rpm || (echo "checkbashisms command not found - please install it (e.g. sudo apt-get install devscripts | yum install devscripts-minimal )" >&2 && exit 1)'
15:30:16 checkbashisms: commands[2] /w/workspace/transportpce-tox-verify-transportpce-master/tests> find . -not -path '*/\.*' -name '*.sh' -exec checkbashisms -f '{}' +
15:30:17 checkbashisms: OK ✔ in 3.15 seconds
15:30:17 pre-commit: install_deps> python -I -m pip install pre-commit
15:30:20 pre-commit: freeze> python -m pip freeze --all
15:30:20 pre-commit: cfgv==3.5.0,distlib==0.4.0,filelock==3.24.2,identify==2.6.16,nodeenv==1.10.0,pip==26.0.1,platformdirs==4.9.2,pre_commit==4.5.1,PyYAML==6.0.3,setuptools==82.0.0,virtualenv==20.37.0
15:30:20 pre-commit: commands[0] /w/workspace/transportpce-tox-verify-transportpce-master/tests> ./fixCIcentOS8reposMirrors.sh
15:30:20 pre-commit: commands[1] /w/workspace/transportpce-tox-verify-transportpce-master/tests> sh -c 'which cpan || sudo yum install -y perl-CPAN || (echo "cpan command not found - please install it (e.g. sudo apt-get install perl-modules | yum install perl-CPAN )" >&2 && exit 1)'
15:30:20 /usr/bin/cpan
15:30:20 pre-commit: commands[2] /w/workspace/transportpce-tox-verify-transportpce-master/tests> pre-commit run --all-files --show-diff-on-failure
15:30:20 [WARNING] hook id `remove-tabs` uses deprecated stage names (commit) which will be removed in a future version. run: `pre-commit migrate-config` to automatically fix this.
15:30:20 [WARNING] hook id `perltidy` uses deprecated stage names (commit) which will be removed in a future version. run: `pre-commit migrate-config` to automatically fix this.
15:30:20 [INFO] Initializing environment for https://github.com/pre-commit/pre-commit-hooks.
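[editor's note] tox-run.sh above maps the injected PARALLEL=True property onto tox CLI flags via a bash `case ${PARALLEL,,}`. A portable-sh rendering of the same mapping (lowercasing with tr instead of the bash-only `${var,,}`; this is a sketch, not the job script itself):

```shell
# Map a PARALLEL env value (True/true/TRUE/...) to tox options, as tox-run.sh does.
tox_options() {
    case "$(printf '%s' "$1" | tr '[:upper:]' '[:lower:]')" in
        true) echo "--parallel auto --parallel-live" ;;
        *)    echo "" ;;
    esac
}

tox_options True    # prints --parallel auto --parallel-live
tox_options false   # prints nothing
```

With `--parallel auto` tox sizes the worker pool from the CPU count, and `--parallel-live` streams each env's output as it runs, which is why the docs, checkbashisms, buildcontroller and pre-commit logs below are interleaved.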
15:30:21 [WARNING] repo `https://github.com/pre-commit/pre-commit-hooks` uses deprecated stage names (commit, push) which will be removed in a future version. Hint: often `pre-commit autoupdate --repo https://github.com/pre-commit/pre-commit-hooks` will fix this. if it does not -- consider reporting an issue to that repo.
15:30:21 [INFO] Initializing environment for https://github.com/jorisroovers/gitlint.
15:30:21 [INFO] Initializing environment for https://github.com/jorisroovers/gitlint:./gitlint-core[trusted-deps].
15:30:21 [INFO] Initializing environment for https://github.com/Lucas-C/pre-commit-hooks.
15:30:22 [INFO] Initializing environment for https://github.com/pre-commit/mirrors-autopep8.
15:30:22 buildcontroller: freeze> python -m pip freeze --all
15:30:22 buildcontroller: bcrypt==5.0.0,certifi==2026.1.4,cffi==2.0.0,charset-normalizer==3.4.4,cryptography==46.0.5,dict2xml==1.7.8,idna==3.11,iniconfig==2.3.0,invoke==2.2.1,lxml==6.0.2,netconf-client==3.5.0,packaging==26.0,paramiko==4.0.0,pip==26.0.1,pluggy==1.6.0,psutil==7.2.2,pycparser==3.0,Pygments==2.19.2,PyNaCl==1.6.2,pytest==9.0.2,requests==2.32.5,setuptools==82.0.0,urllib3==2.6.3
15:30:22 buildcontroller: commands[0] /w/workspace/transportpce-tox-verify-transportpce-master/tests> ./build_controller.sh
15:30:22 + update-java-alternatives -l
15:30:22 java-1.11.0-openjdk-amd64 1111 /usr/lib/jvm/java-1.11.0-openjdk-amd64
15:30:22 java-1.17.0-openjdk-amd64 1711 /usr/lib/jvm/java-1.17.0-openjdk-amd64
15:30:22 java-1.21.0-openjdk-amd64 2111 /usr/lib/jvm/java-1.21.0-openjdk-amd64
15:30:22 + sudo update-java-alternatives -s java-1.21.0-openjdk-amd64
15:30:22 update-alternatives: error: no alternatives for jaotc
15:30:22 [INFO] Initializing environment for https://github.com/perltidy/perltidy.
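[editor's note] The checkbashisms env earlier in this log walks the tree with `find . -not -path '*/\.*' -name '*.sh'` so hidden directories such as .git are excluded from the scan. A self-contained demonstration of that filter on a throwaway tree (the file names here are invented for the example; the unescaped pattern `*/.*` is equivalent, since `.` is not a glob metacharacter):

```shell
tmp=$(mktemp -d)
mkdir -p "$tmp/.git/hooks" "$tmp/tests"
touch "$tmp/tests/build_controller.sh" "$tmp/.git/hooks/sample.sh"

# Same predicate as the tox env: *.sh files outside any dot-directory.
found=$(cd "$tmp" && find . -not -path '*/.*' -name '*.sh')

echo "$found"    # only ./tests/build_controller.sh
rm -rf "$tmp"
```

Note that `-not -path` only filters the output; find still descends into .git, so on huge trees `-prune` would be faster, but for a test directory the simpler form is fine.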
15:30:22 update-alternatives: error: no alternatives for rmic
15:30:23 + java -version
15:30:23 + sed -n ;s/.* version "\(.*\)\.\(.*\)\..*".*$/\1/p;
15:30:23 + JAVA_VER=21
15:30:23 + echo 21
15:30:23 21
15:30:23 + javac -version
15:30:23 + sed -n ;s/javac \(.*\)\.\(.*\)\..*.*$/\1/p;
15:30:23 + JAVAC_VER=21
15:30:23 + echo 21
15:30:23 + [ 21 -ge 21 ]
15:30:23 21
15:30:23 ok, java is 21 or newer
15:30:23 + [ 21 -ge 21 ]
15:30:23 + echo ok, java is 21 or newer
15:30:23 + wget -nv https://dlcdn.apache.org/maven/maven-3/3.9.12/binaries/apache-maven-3.9.12-bin.tar.gz -P /tmp
15:30:23 2026-02-17 15:30:23 URL:https://dlcdn.apache.org/maven/maven-3/3.9.12/binaries/apache-maven-3.9.12-bin.tar.gz [9233336/9233336] -> "/tmp/apache-maven-3.9.12-bin.tar.gz" [1]
15:30:23 + sudo mkdir -p /opt
15:30:23 + sudo tar xf /tmp/apache-maven-3.9.12-bin.tar.gz -C /opt
15:30:23 + sudo ln -s /opt/apache-maven-3.9.12 /opt/maven
15:30:23 + sudo ln -s /opt/maven/bin/mvn /usr/bin/mvn
15:30:23 + mvn --version
15:30:23 [INFO] Installing environment for https://github.com/pre-commit/pre-commit-hooks.
15:30:23 [INFO] Once installed this environment will be reused.
15:30:23 [INFO] This may take a few minutes...
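[editor's note] build_controller.sh above derives JAVA_VER=21 by sed-parsing the `version "21.0.9"` banner and then gating the build on `[ 21 -ge 21 ]`. The parse as a standalone function, applied to a banner string rather than live `java -version` output (a sketch of the same technique, not the script's exact sed expression):

```shell
# Extract the major version from a 'java -version' style banner line,
# e.g. 'openjdk version "21.0.9" 2025-10-21' -> 21
java_major() {
    printf '%s\n' "$1" | sed -n 's/.*version "\([0-9][0-9]*\)\..*/\1/p'
}

ver=$(java_major 'openjdk version "21.0.9" 2025-10-21')   # 21
[ "$ver" -ge 21 ] && echo "ok, java is 21 or newer"
```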
15:30:23 Apache Maven 3.9.12 (848fbb4bf2d427b72bdb2471c22fced7ebd9a7a1)
15:30:23 Maven home: /opt/maven
15:30:23 Java version: 21.0.9, vendor: Ubuntu, runtime: /usr/lib/jvm/java-21-openjdk-amd64
15:30:23 Default locale: en, platform encoding: UTF-8
15:30:23 OS name: "linux", version: "5.15.0-168-generic", arch: "amd64", family: "unix"
15:30:23 NOTE: Picked up JDK_JAVA_OPTIONS:
15:30:23 --add-opens=java.base/java.io=ALL-UNNAMED
15:30:23 --add-opens=java.base/java.lang=ALL-UNNAMED
15:30:23 --add-opens=java.base/java.lang.invoke=ALL-UNNAMED
15:30:23 --add-opens=java.base/java.lang.reflect=ALL-UNNAMED
15:30:23 --add-opens=java.base/java.net=ALL-UNNAMED
15:30:23 --add-opens=java.base/java.nio=ALL-UNNAMED
15:30:23 --add-opens=java.base/java.nio.charset=ALL-UNNAMED
15:30:23 --add-opens=java.base/java.nio.file=ALL-UNNAMED
15:30:23 --add-opens=java.base/java.util=ALL-UNNAMED
15:30:23 --add-opens=java.base/java.util.jar=ALL-UNNAMED
15:30:23 --add-opens=java.base/java.util.stream=ALL-UNNAMED
15:30:23 --add-opens=java.base/java.util.zip=ALL-UNNAMED
15:30:23 --add-opens java.base/sun.nio.ch=ALL-UNNAMED
15:30:23 --add-opens java.base/sun.nio.fs=ALL-UNNAMED
15:30:23 -Xlog:disable
15:30:28 [INFO] Installing environment for https://github.com/Lucas-C/pre-commit-hooks.
15:30:28 [INFO] Once installed this environment will be reused.
15:30:28 [INFO] This may take a few minutes...
15:30:36 [INFO] Installing environment for https://github.com/pre-commit/mirrors-autopep8.
15:30:36 [INFO] Once installed this environment will be reused.
15:30:36 [INFO] This may take a few minutes...
15:30:42 [INFO] Installing environment for https://github.com/perltidy/perltidy.
15:30:42 [INFO] Once installed this environment will be reused.
15:30:42 [INFO] This may take a few minutes...
15:30:43 docs: freeze> python -m pip freeze --all 15:30:43 docs-linkcheck: freeze> python -m pip freeze --all 15:30:44 docs: alabaster==1.0.0,attrs==25.4.0,babel==2.18.0,blockdiag==3.0.0,certifi==2026.1.4,charset-normalizer==3.4.4,contourpy==1.3.3,cycler==0.12.1,docutils==0.21.2,fonttools==4.61.1,funcparserlib==2.0.0a0,future==1.0.0,idna==3.11,imagesize==1.4.1,Jinja2==3.1.6,jsonschema==3.2.0,kiwisolver==1.4.9,lfdocs_conf==0.10.0,MarkupSafe==3.0.3,matplotlib==3.10.8,numpy==2.4.2,nwdiag==3.0.0,packaging==26.0,pillow==12.1.1,pip==26.0.1,Pygments==2.19.2,pyparsing==3.3.2,pyrsistent==0.20.0,python-dateutil==2.9.0.post0,PyYAML==6.0.3,requests==2.32.5,requests-file==1.5.1,roman-numerals==4.1.0,roman-numerals-py==4.1.0,seqdiag==3.0.0,setuptools==82.0.0,six==1.17.0,snowballstemmer==3.0.1,Sphinx==8.2.3,sphinx-bootstrap-theme==0.8.1,sphinx-data-viewer==0.1.5,sphinx-tabs==3.4.7,sphinx_rtd_theme==3.1.0,sphinxcontrib-applehelp==2.0.0,sphinxcontrib-blockdiag==3.0.0,sphinxcontrib-devhelp==2.0.0,sphinxcontrib-htmlhelp==2.1.0,sphinxcontrib-jquery==4.1,sphinxcontrib-jsmath==1.0.1,sphinxcontrib-needs==0.7.9,sphinxcontrib-nwdiag==2.0.0,sphinxcontrib-plantuml==0.31,sphinxcontrib-qthelp==2.0.0,sphinxcontrib-seqdiag==3.0.0,sphinxcontrib-serializinghtml==2.0.0,sphinxcontrib-swaggerdoc==0.1.7,urllib3==2.6.3,webcolors==25.10.0 15:30:44 docs: commands[0] /w/workspace/transportpce-tox-verify-transportpce-master/tests> sphinx-build -q -W --keep-going -b html -n -d /w/workspace/transportpce-tox-verify-transportpce-master/.tox/docs/tmp/doctrees ../docs/ /w/workspace/transportpce-tox-verify-transportpce-master/docs/_build/html 15:30:44 docs-linkcheck: 
alabaster==1.0.0,attrs==25.4.0,babel==2.18.0,blockdiag==3.0.0,certifi==2026.1.4,charset-normalizer==3.4.4,contourpy==1.3.3,cycler==0.12.1,docutils==0.21.2,fonttools==4.61.1,funcparserlib==2.0.0a0,future==1.0.0,idna==3.11,imagesize==1.4.1,Jinja2==3.1.6,jsonschema==3.2.0,kiwisolver==1.4.9,lfdocs_conf==0.10.0,MarkupSafe==3.0.3,matplotlib==3.10.8,numpy==2.4.2,nwdiag==3.0.0,packaging==26.0,pillow==12.1.1,pip==26.0.1,Pygments==2.19.2,pyparsing==3.3.2,pyrsistent==0.20.0,python-dateutil==2.9.0.post0,PyYAML==6.0.3,requests==2.32.5,requests-file==1.5.1,roman-numerals==4.1.0,roman-numerals-py==4.1.0,seqdiag==3.0.0,setuptools==82.0.0,six==1.17.0,snowballstemmer==3.0.1,Sphinx==8.2.3,sphinx-bootstrap-theme==0.8.1,sphinx-data-viewer==0.1.5,sphinx-tabs==3.4.7,sphinx_rtd_theme==3.1.0,sphinxcontrib-applehelp==2.0.0,sphinxcontrib-blockdiag==3.0.0,sphinxcontrib-devhelp==2.0.0,sphinxcontrib-htmlhelp==2.1.0,sphinxcontrib-jquery==4.1,sphinxcontrib-jsmath==1.0.1,sphinxcontrib-needs==0.7.9,sphinxcontrib-nwdiag==2.0.0,sphinxcontrib-plantuml==0.31,sphinxcontrib-qthelp==2.0.0,sphinxcontrib-seqdiag==3.0.0,sphinxcontrib-serializinghtml==2.0.0,sphinxcontrib-swaggerdoc==0.1.7,urllib3==2.6.3,webcolors==25.10.0 15:30:44 docs-linkcheck: commands[0] /w/workspace/transportpce-tox-verify-transportpce-master/tests> sphinx-build -q -b linkcheck -d /w/workspace/transportpce-tox-verify-transportpce-master/.tox/docs-linkcheck/tmp/doctrees ../docs/ /w/workspace/transportpce-tox-verify-transportpce-master/docs/_build/linkcheck 15:30:47 docs: OK ✔ in 33.5 seconds 15:30:47 pylint: install_deps> python -I -m pip install 'pylint>=2.6.0' 15:30:54 trim trailing whitespace.................................................Passed 15:30:54 Tabs remover.............................................................Passed 15:30:54 autopep8.................................................................docs-linkcheck: OK ✔ in 35.39 seconds 15:30:55 pylint: freeze> python -m pip freeze --all 15:30:55 pylint: 
astroid==4.0.4,dill==0.4.1,isort==7.0.0,mccabe==0.7.0,pip==26.0.1,platformdirs==4.9.2,pylint==4.0.4,setuptools==82.0.0,tomlkit==0.14.0 15:30:55 pylint: commands[0] /w/workspace/transportpce-tox-verify-transportpce-master/tests> find transportpce_tests/ -name '*.py' -exec pylint --fail-under=10 --max-line-length=120 --disable=missing-docstring,import-error --disable=fixme --disable=duplicate-code '--module-rgx=([a-z0-9_]+$)|([0-9.]{1,30}$)' '--method-rgx=(([a-z_][a-zA-Z0-9_]{2,})|(_[a-z0-9_]*)|(__[a-zA-Z][a-zA-Z0-9_]+__))$' '--variable-rgx=[a-zA-Z_][a-zA-Z0-9_]{1,30}$' '{}' + 15:31:00 Passed 15:31:00 perltidy.................................................................Passed 15:31:01 pre-commit: commands[3] /w/workspace/transportpce-tox-verify-transportpce-master/tests> pre-commit run gitlint-ci --hook-stage manual 15:31:01 [WARNING] hook id `remove-tabs` uses deprecated stage names (commit) which will be removed in a future version. run: `pre-commit migrate-config` to automatically fix this. 15:31:01 [WARNING] hook id `perltidy` uses deprecated stage names (commit) which will be removed in a future version. run: `pre-commit migrate-config` to automatically fix this. 15:31:01 [INFO] Installing environment for https://github.com/jorisroovers/gitlint. 15:31:01 [INFO] Once installed this environment will be reused. 15:31:01 [INFO] This may take a few minutes... 
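The pylint invocation above relaxes the default naming rules with custom regexes so that the numeric, version-style names used under `transportpce_tests/` still pass. A small sketch checking candidate names against the `--module-rgx` value from that command line:

```python
import re

# The --module-rgx value passed to pylint in the log: plain snake_case
# names OR short dotted-numeric names (up to 30 characters).
MODULE_RGX = r"([a-z0-9_]+$)|([0-9.]{1,30}$)"

def module_name_ok(name: str) -> bool:
    """Return True if `name` would satisfy pylint's module-rgx check."""
    return re.match(MODULE_RGX, name) is not None

assert module_name_ok("test01_pce")      # snake_case test module
assert module_name_ok("2.2.1")           # dotted-numeric name
assert not module_name_ok("BadModule")   # CamelCase is rejected
```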
15:31:09 gitlint..................................................................Passed 15:31:19 15:31:19 ------------------------------------ 15:31:19 Your code has been rated at 10.00/10 15:31:19 15:32:07 pre-commit: OK ✔ in 52.2 seconds 15:32:07 pylint: OK ✔ in 33.42 seconds 15:32:07 buildcontroller: OK ✔ in 1 minute 52.37 seconds 15:32:07 build_karaf_tests190: install_deps> python -I -m pip install 'setuptools>=7.0' -r /w/workspace/transportpce-tox-verify-transportpce-master/tests/requirements.txt -r /w/workspace/transportpce-tox-verify-transportpce-master/tests/test-requirements.txt 15:32:07 build_karaf_tests71: install_deps> python -I -m pip install 'setuptools>=7.0' -r /w/workspace/transportpce-tox-verify-transportpce-master/tests/requirements.txt -r /w/workspace/transportpce-tox-verify-transportpce-master/tests/test-requirements.txt 15:32:07 build_karaf_tests221: install_deps> python -I -m pip install 'setuptools>=7.0' -r /w/workspace/transportpce-tox-verify-transportpce-master/tests/requirements.txt -r /w/workspace/transportpce-tox-verify-transportpce-master/tests/test-requirements.txt 15:32:07 build_karaf_tests121: install_deps> python -I -m pip install 'setuptools>=7.0' -r /w/workspace/transportpce-tox-verify-transportpce-master/tests/requirements.txt -r /w/workspace/transportpce-tox-verify-transportpce-master/tests/test-requirements.txt 15:32:15 build_karaf_tests121: freeze> python -m pip freeze --all 15:32:15 build_karaf_tests221: freeze> python -m pip freeze --all 15:32:15 build_karaf_tests190: freeze> python -m pip freeze --all 15:32:15 build_karaf_tests71: freeze> python -m pip freeze --all 15:32:16 build_karaf_tests121: 
bcrypt==5.0.0,certifi==2026.1.4,cffi==2.0.0,charset-normalizer==3.4.4,cryptography==46.0.5,dict2xml==1.7.8,idna==3.11,iniconfig==2.3.0,invoke==2.2.1,lxml==6.0.2,netconf-client==3.5.0,packaging==26.0,paramiko==4.0.0,pip==26.0.1,pluggy==1.6.0,psutil==7.2.2,pycparser==3.0,Pygments==2.19.2,PyNaCl==1.6.2,pytest==9.0.2,requests==2.32.5,setuptools==82.0.0,urllib3==2.6.3 15:32:16 build_karaf_tests121: commands[0] /w/workspace/transportpce-tox-verify-transportpce-master/tests> ./build_karaf_for_tests.sh 15:32:16 build_karaf_tests221: bcrypt==5.0.0,certifi==2026.1.4,cffi==2.0.0,charset-normalizer==3.4.4,cryptography==46.0.5,dict2xml==1.7.8,idna==3.11,iniconfig==2.3.0,invoke==2.2.1,lxml==6.0.2,netconf-client==3.5.0,packaging==26.0,paramiko==4.0.0,pip==26.0.1,pluggy==1.6.0,psutil==7.2.2,pycparser==3.0,Pygments==2.19.2,PyNaCl==1.6.2,pytest==9.0.2,requests==2.32.5,setuptools==82.0.0,urllib3==2.6.3 15:32:16 build_karaf_tests221: commands[0] /w/workspace/transportpce-tox-verify-transportpce-master/tests> ./build_karaf_for_tests.sh 15:32:16 build_karaf_tests190: bcrypt==5.0.0,certifi==2026.1.4,cffi==2.0.0,charset-normalizer==3.4.4,cryptography==46.0.5,dict2xml==1.7.8,idna==3.11,iniconfig==2.3.0,invoke==2.2.1,lxml==6.0.2,netconf-client==3.5.0,packaging==26.0,paramiko==4.0.0,pip==26.0.1,pluggy==1.6.0,psutil==7.2.2,pycparser==3.0,Pygments==2.19.2,PyNaCl==1.6.2,pytest==9.0.2,requests==2.32.5,setuptools==82.0.0,urllib3==2.6.3 15:32:16 build_karaf_tests190: commands[0] /w/workspace/transportpce-tox-verify-transportpce-master/tests> ./build_karaf_for_tests.sh 15:32:16 build karaf in karaf121 with ./karaf121.env 15:32:16 build karaf in karaf221 with ./karaf221.env 15:32:16 build karaf in karafoc with ./karafoc.env 15:32:16 NOTE: Picked up JDK_JAVA_OPTIONS: 15:32:16 --add-opens=java.base/java.io=ALL-UNNAMED 15:32:16 --add-opens=java.base/java.lang=ALL-UNNAMED 15:32:16 --add-opens=java.base/java.lang.invoke=ALL-UNNAMED 15:32:16 --add-opens=java.base/java.lang.reflect=ALL-UNNAMED 15:32:16 
--add-opens=java.base/java.net=ALL-UNNAMED 15:32:16 --add-opens=java.base/java.nio=ALL-UNNAMED 15:32:16 --add-opens=java.base/java.nio.charset=ALL-UNNAMED 15:32:16 --add-opens=java.base/java.nio.file=ALL-UNNAMED 15:32:16 --add-opens=java.base/java.util=ALL-UNNAMED 15:32:16 --add-opens=java.base/java.util.jar=ALL-UNNAMED 15:32:16 --add-opens=java.base/java.util.stream=ALL-UNNAMED 15:32:16 --add-opens=java.base/java.util.zip=ALL-UNNAMED 15:32:16 --add-opens java.base/sun.nio.ch=ALL-UNNAMED 15:32:16 --add-opens java.base/sun.nio.fs=ALL-UNNAMED 15:32:16 -Xlog:disable 15:32:16 NOTE: Picked up JDK_JAVA_OPTIONS: 15:32:16 --add-opens=java.base/java.io=ALL-UNNAMED 15:32:16 --add-opens=java.base/java.lang=ALL-UNNAMED 15:32:16 --add-opens=java.base/java.lang.invoke=ALL-UNNAMED 15:32:16 --add-opens=java.base/java.lang.reflect=ALL-UNNAMED 15:32:16 --add-opens=java.base/java.net=ALL-UNNAMED 15:32:16 --add-opens=java.base/java.nio=ALL-UNNAMED 15:32:16 --add-opens=java.base/java.nio.charset=ALL-UNNAMED 15:32:16 --add-opens=java.base/java.nio.file=ALL-UNNAMED 15:32:16 --add-opens=java.base/java.util=ALL-UNNAMED 15:32:16 --add-opens=java.base/java.util.jar=ALL-UNNAMED 15:32:16 --add-opens=java.base/java.util.stream=ALL-UNNAMED 15:32:16 --add-opens=java.base/java.util.zip=ALL-UNNAMED 15:32:16 --add-opens java.base/sun.nio.ch=ALL-UNNAMED 15:32:16 --add-opens java.base/sun.nio.fs=ALL-UNNAMED 15:32:16 -Xlog:disable 15:32:16 NOTE: Picked up JDK_JAVA_OPTIONS: 15:32:16 --add-opens=java.base/java.io=ALL-UNNAMED 15:32:16 --add-opens=java.base/java.lang=ALL-UNNAMED 15:32:16 --add-opens=java.base/java.lang.invoke=ALL-UNNAMED 15:32:16 --add-opens=java.base/java.lang.reflect=ALL-UNNAMED 15:32:16 --add-opens=java.base/java.net=ALL-UNNAMED 15:32:16 --add-opens=java.base/java.nio=ALL-UNNAMED 15:32:16 --add-opens=java.base/java.nio.charset=ALL-UNNAMED 15:32:16 --add-opens=java.base/java.nio.file=ALL-UNNAMED 15:32:16 --add-opens=java.base/java.util=ALL-UNNAMED 15:32:16 
--add-opens=java.base/java.util.jar=ALL-UNNAMED 15:32:16 --add-opens=java.base/java.util.stream=ALL-UNNAMED 15:32:16 --add-opens=java.base/java.util.zip=ALL-UNNAMED 15:32:16 --add-opens java.base/sun.nio.ch=ALL-UNNAMED 15:32:16 --add-opens java.base/sun.nio.fs=ALL-UNNAMED 15:32:16 -Xlog:disable 15:32:16 build_karaf_tests71: bcrypt==5.0.0,certifi==2026.1.4,cffi==2.0.0,charset-normalizer==3.4.4,cryptography==46.0.5,dict2xml==1.7.8,idna==3.11,iniconfig==2.3.0,invoke==2.2.1,lxml==6.0.2,netconf-client==3.5.0,packaging==26.0,paramiko==4.0.0,pip==26.0.1,pluggy==1.6.0,psutil==7.2.2,pycparser==3.0,Pygments==2.19.2,PyNaCl==1.6.2,pytest==9.0.2,requests==2.32.5,setuptools==82.0.0,urllib3==2.6.3 15:32:16 build_karaf_tests71: commands[0] /w/workspace/transportpce-tox-verify-transportpce-master/tests> ./build_karaf_for_tests.sh 15:32:16 build karaf in karaf71 with ./karaf71.env 15:32:16 NOTE: Picked up JDK_JAVA_OPTIONS: 15:32:16 --add-opens=java.base/java.io=ALL-UNNAMED 15:32:16 --add-opens=java.base/java.lang=ALL-UNNAMED 15:32:16 --add-opens=java.base/java.lang.invoke=ALL-UNNAMED 15:32:16 --add-opens=java.base/java.lang.reflect=ALL-UNNAMED 15:32:16 --add-opens=java.base/java.net=ALL-UNNAMED 15:32:16 --add-opens=java.base/java.nio=ALL-UNNAMED 15:32:16 --add-opens=java.base/java.nio.charset=ALL-UNNAMED 15:32:16 --add-opens=java.base/java.nio.file=ALL-UNNAMED 15:32:16 --add-opens=java.base/java.util=ALL-UNNAMED 15:32:16 --add-opens=java.base/java.util.jar=ALL-UNNAMED 15:32:16 --add-opens=java.base/java.util.stream=ALL-UNNAMED 15:32:16 --add-opens=java.base/java.util.zip=ALL-UNNAMED 15:32:16 --add-opens java.base/sun.nio.ch=ALL-UNNAMED 15:32:16 --add-opens java.base/sun.nio.fs=ALL-UNNAMED 15:32:16 -Xlog:disable 15:33:20 build_karaf_tests190: OK ✔ in 1 minute 14.09 seconds 15:33:20 buildlighty: install_deps> python -I -m pip install 'setuptools>=7.0' -r /w/workspace/transportpce-tox-verify-transportpce-master/tests/requirements.txt -r 
/w/workspace/transportpce-tox-verify-transportpce-master/tests/test-requirements.txt 15:33:25 build_karaf_tests221: OK ✔ in 1 minute 18.39 seconds 15:33:25 sims: install_deps> python -I -m pip install 'setuptools>=7.0' -r /w/workspace/transportpce-tox-verify-transportpce-master/tests/requirements.txt -r /w/workspace/transportpce-tox-verify-transportpce-master/tests/test-requirements.txt 15:33:25 build_karaf_tests71: OK ✔ in 1 minute 19.28 seconds 15:33:25 build_karaf_tests121: OK ✔ in 1 minute 19.29 seconds 15:33:25 testsPCE: install_deps> python -I -m pip install gnpy4tpce==2.4.7 'setuptools>=7.0' -r /w/workspace/transportpce-tox-verify-transportpce-master/tests/requirements.txt -r /w/workspace/transportpce-tox-verify-transportpce-master/tests/test-requirements.txt 15:33:31 buildlighty: freeze> python -m pip freeze --all 15:33:31 buildlighty: bcrypt==5.0.0,certifi==2026.1.4,cffi==2.0.0,charset-normalizer==3.4.4,cryptography==46.0.5,dict2xml==1.7.8,idna==3.11,iniconfig==2.3.0,invoke==2.2.1,lxml==6.0.2,netconf-client==3.5.0,packaging==26.0,paramiko==4.0.0,pip==26.0.1,pluggy==1.6.0,psutil==7.2.2,pycparser==3.0,Pygments==2.19.2,PyNaCl==1.6.2,pytest==9.0.2,requests==2.32.5,setuptools==82.0.0,urllib3==2.6.3 15:33:31 buildlighty: commands[0] /w/workspace/transportpce-tox-verify-transportpce-master/lighty> ./build.sh 15:33:31 sims: freeze> python -m pip freeze --all 15:33:31 NOTE: Picked up JDK_JAVA_OPTIONS: --add-opens=java.base/java.lang=ALL-UNNAMED --add-opens=java.base/java.nio=ALL-UNNAMED 15:33:32 sims: bcrypt==5.0.0,certifi==2026.1.4,cffi==2.0.0,charset-normalizer==3.4.4,cryptography==46.0.5,dict2xml==1.7.8,idna==3.11,iniconfig==2.3.0,invoke==2.2.1,lxml==6.0.2,netconf-client==3.5.0,packaging==26.0,paramiko==4.0.0,pip==26.0.1,pluggy==1.6.0,psutil==7.2.2,pycparser==3.0,Pygments==2.19.2,PyNaCl==1.6.2,pytest==9.0.2,requests==2.32.5,setuptools==82.0.0,urllib3==2.6.3 15:33:32 sims: commands[0] /w/workspace/transportpce-tox-verify-transportpce-master/tests> 
./install_lightynode.sh
15:33:32 Using lightynode version 22.1.0.6
15:33:32 Installing lightynode device to ./lightynode/lightynode-openroadm-device directory
15:34:21 sims: OK ✔ in 10.55 seconds
15:34:21 buildlighty: OK ✔ in 34.11 seconds
15:34:21 testsPCE: freeze> python -m pip freeze --all
15:34:21 testsPCE: bcrypt==5.0.0,certifi==2026.1.4,cffi==2.0.0,charset-normalizer==3.4.4,click==8.3.1,contourpy==1.3.3,cryptography==3.3.2,cycler==0.12.1,dict2xml==1.7.8,Flask==2.1.3,Flask-Injector==0.14.0,fonttools==4.61.1,gnpy4tpce==2.4.7,idna==3.11,iniconfig==2.3.0,injector==0.24.0,invoke==2.2.1,itsdangerous==2.2.0,Jinja2==3.1.6,kiwisolver==1.4.9,lxml==6.0.2,MarkupSafe==3.0.3,matplotlib==3.10.8,netconf-client==3.5.0,networkx==2.8.8,numpy==1.26.4,packaging==26.0,pandas==1.5.3,paramiko==4.0.0,pbr==5.11.1,pillow==12.1.1,pip==26.0.1,pluggy==1.6.0,psutil==7.2.2,pycparser==3.0,Pygments==2.19.2,PyNaCl==1.6.2,pyparsing==3.3.2,pytest==9.0.2,python-dateutil==2.9.0.post0,pytz==2025.2,requests==2.32.5,scipy==1.17.0,setuptools==50.3.2,six==1.17.0,urllib3==2.6.3,Werkzeug==2.0.3,xlrd==1.2.0
15:34:21 testsPCE: commands[0] /w/workspace/transportpce-tox-verify-transportpce-master/tests> ./launch_tests.sh pce
15:34:21 pytest -q transportpce_tests/pce/test01_pce.py
15:35:09 .................... [100%]
15:36:12 20 passed in 110.65s (0:01:50)
15:36:12 pytest -q transportpce_tests/pce/test02_pce_400G.py
15:36:29 ........$ ssh-agent -k
15:36:49 unset SSH_AUTH_SOCK;
15:36:49 unset SSH_AGENT_PID;
15:36:49 echo Agent pid 1569 killed;
15:36:49 [ssh-agent] Stopped.
15:36:49 Build was aborted
15:36:49 Aborted by new patch set.
15:36:49 [PostBuildScript] - [INFO] Executing post build scripts.
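The abort above ("Aborted by new patch set") kills the controller while test02 is still running, which is why the teardown further down fails with `Connection refused` on localhost:8181. A minimal TCP reachability probe, a sketch only and not part of the job's tooling, that would distinguish "controller gone" from a genuine test failure:

```python
import socket

def port_open(host: str, port: int, timeout: float = 2.0) -> bool:
    """Best-effort TCP reachability check (illustrative sketch).

    Against the RESTCONF port used by the tests (localhost:8181), this
    returns False once Karaf has been torn down.
    """
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False
```

Once the build is aborted and the controller is stopped, `port_open("localhost", 8181)` returns False, matching the `ConnectionRefusedError` in the teardown trace below.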
15:36:49 [transportpce-tox-verify-transportpce-master] $ /bin/bash /tmp/jenkins6434104568812743490.sh
15:36:49 ---> sysstat.sh
15:36:50 [transportpce-tox-verify-transportpce-master] $ /bin/bash /tmp/jenkins12139959582548889632.sh
15:36:50 ---> package-listing.sh
15:36:50 ++ facter osfamily
15:36:50 ++ tr '[:upper:]' '[:lower:]'
15:36:50 + OS_FAMILY=debian
15:36:50 + workspace=/w/workspace/transportpce-tox-verify-transportpce-master
15:36:50 + START_PACKAGES=/tmp/packages_start.txt
15:36:50 + END_PACKAGES=/tmp/packages_end.txt
15:36:50 + DIFF_PACKAGES=/tmp/packages_diff.txt
15:36:50 + PACKAGES=/tmp/packages_start.txt
15:36:50 + '[' /w/workspace/transportpce-tox-verify-transportpce-master ']'
15:36:50 + PACKAGES=/tmp/packages_end.txt
15:36:50 + case "${OS_FAMILY}" in
15:36:50 + dpkg -l
15:36:50 + grep '^ii'
15:36:50 + '[' -f /tmp/packages_start.txt ']'
15:36:50 + '[' -f /tmp/packages_end.txt ']'
15:36:50 + diff /tmp/packages_start.txt /tmp/packages_end.txt
15:36:50 + '[' /w/workspace/transportpce-tox-verify-transportpce-master ']'
15:36:50 + mkdir -p /w/workspace/transportpce-tox-verify-transportpce-master/archives/
15:36:50 + cp -f /tmp/packages_diff.txt /tmp/packages_end.txt /tmp/packages_start.txt /w/workspace/transportpce-tox-verify-transportpce-master/archives/
15:36:50 [transportpce-tox-verify-transportpce-master] $ /bin/bash /tmp/jenkins12931156298485461619.sh
15:36:50 ---> capture-instance-metadata.sh
15:36:50 Setup pyenv:
15:36:50 system
15:36:50 3.8.20
15:36:50 3.9.20
15:36:50 3.10.15
15:36:50 * 3.11.10 (set by /w/workspace/transportpce-tox-verify-transportpce-master/.python-version)
15:36:50 lf-activate-venv(): INFO: Reuse venv:/tmp/venv-KPUl from file:/tmp/.os_lf_venv
15:36:50 lf-activate-venv(): INFO: Installing base packages (pip, setuptools, virtualenv)
15:36:50 lf-activate-venv(): INFO: Attempting to install with network-safe options...
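package-listing.sh above snapshots the installed packages before and after the build and diffs the two lists. The same idea in a couple of lines of Python (function name and line format are illustrative, not from the script):

```python
def new_packages(start_lines: list[str], end_lines: list[str]) -> list[str]:
    """Packages present at the end of the build but not at the start.

    Same idea as `diff /tmp/packages_start.txt /tmp/packages_end.txt`
    in the log; a set difference is enough for this sketch.
    """
    return sorted(set(end_lines) - set(start_lines))

assert new_packages(["ii bash 5.1"], ["ii bash 5.1", "ii maven 3.9"]) == ["ii maven 3.9"]
```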
15:36:52 F
15:36:52 lf-activate-venv(): INFO: Base packages installed successfully
15:36:52 lf-activate-venv(): INFO: Installing additional packages: lftools
15:36:53 FFE [100%]
15:36:54 ==================================== ERRORS ====================================
15:36:54 _ ERROR at teardown of TestTransportPCEPce400g.test_12_path_computation_400G_xpdr_bi_cfg _
15:36:54
15:36:54 self =
15:36:54
15:36:54 def _new_conn(self) -> socket.socket:
15:36:54 """Establish a socket connection and set nodelay settings on it.
15:36:54
15:36:54 :return: New socket connection.
15:36:54 """
15:36:54 try:
15:36:54 > sock = connection.create_connection(
15:36:54 (self._dns_host, self.port),
15:36:54 self.timeout,
15:36:54 source_address=self.source_address,
15:36:54 socket_options=self.socket_options,
15:36:54 )
15:36:54
15:36:54 ../.tox/testsPCE/lib/python3.11/site-packages/urllib3/connection.py:204:
15:36:54 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
15:36:54 ../.tox/testsPCE/lib/python3.11/site-packages/urllib3/util/connection.py:85: in create_connection
15:36:54 raise err
15:36:54 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
15:36:54
15:36:54 address = ('localhost', 8181), timeout = 30, source_address = None
15:36:54 socket_options = [(6, 1, 1)]
15:36:54
15:36:54 def create_connection(
15:36:54 address: tuple[str, int],
15:36:54 timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT,
15:36:54 source_address: tuple[str, int] | None = None,
15:36:54 socket_options: _TYPE_SOCKET_OPTIONS | None = None,
15:36:54 ) -> socket.socket:
15:36:54 """Connect to *address* and return the socket object.
15:36:54
15:36:54 Convenience function. Connect to *address* (a 2-tuple ``(host,
15:36:54 port)``) and return the socket object. Passing the optional
15:36:54 *timeout* parameter will set the timeout on the socket instance
15:36:54 before attempting to connect.
If no *timeout* is supplied, the 15:36:54 global default timeout setting returned by :func:`socket.getdefaulttimeout` 15:36:54 is used. If *source_address* is set it must be a tuple of (host, port) 15:36:54 for the socket to bind as a source address before making the connection. 15:36:54 An host of '' or port 0 tells the OS to use the default. 15:36:54 """ 15:36:54 15:36:54 host, port = address 15:36:54 if host.startswith("["): 15:36:54 host = host.strip("[]") 15:36:54 err = None 15:36:54 15:36:54 # Using the value from allowed_gai_family() in the context of getaddrinfo lets 15:36:54 # us select whether to work with IPv4 DNS records, IPv6 records, or both. 15:36:54 # The original create_connection function always returns all records. 15:36:54 family = allowed_gai_family() 15:36:54 15:36:54 try: 15:36:54 host.encode("idna") 15:36:54 except UnicodeError: 15:36:54 raise LocationParseError(f"'{host}', label empty or too long") from None 15:36:54 15:36:54 for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM): 15:36:54 af, socktype, proto, canonname, sa = res 15:36:54 sock = None 15:36:54 try: 15:36:54 sock = socket.socket(af, socktype, proto) 15:36:54 15:36:54 # If provided, set socket level options before connecting. 
15:36:54 _set_socket_options(sock, socket_options) 15:36:54 15:36:54 if timeout is not _DEFAULT_TIMEOUT: 15:36:54 sock.settimeout(timeout) 15:36:54 if source_address: 15:36:54 sock.bind(source_address) 15:36:54 > sock.connect(sa) 15:36:54 E ConnectionRefusedError: [Errno 111] Connection refused 15:36:54 15:36:54 ../.tox/testsPCE/lib/python3.11/site-packages/urllib3/util/connection.py:73: ConnectionRefusedError 15:36:54 15:36:54 The above exception was the direct cause of the following exception: 15:36:54 15:36:54 self = 15:36:54 method = 'DELETE', url = '/rests/data/transportpce-portmapping:network' 15:36:54 body = None 15:36:54 headers = {'User-Agent': 'python-requests/2.32.5', 'Accept-Encoding': 'gzip, deflate', 'Accept': 'application/json', 'Connection': 'keep-alive', 'Content-Type': 'application/json', 'Content-Length': '0', 'Authorization': 'Basic YWRtaW46YWRtaW4='} 15:36:54 retries = Retry(total=0, connect=None, read=False, redirect=None, status=None) 15:36:54 redirect = False, assert_same_host = False 15:36:54 timeout = Timeout(connect=30, read=30, total=None), pool_timeout = None 15:36:54 release_conn = False, chunked = False, body_pos = None, preload_content = False 15:36:54 decode_content = False, response_kw = {} 15:36:54 parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/rests/data/transportpce-portmapping:network', query=None, fragment=None) 15:36:54 destination_scheme = None, conn = None, release_this_conn = True 15:36:54 http_tunnel_required = False, err = None, clean_exit = False 15:36:54 15:36:54 def urlopen( # type: ignore[override] 15:36:54 self, 15:36:54 method: str, 15:36:54 url: str, 15:36:54 body: _TYPE_BODY | None = None, 15:36:54 headers: typing.Mapping[str, str] | None = None, 15:36:54 retries: Retry | bool | int | None = None, 15:36:54 redirect: bool = True, 15:36:54 assert_same_host: bool = True, 15:36:54 timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 15:36:54 pool_timeout: int | None = None, 15:36:54 release_conn: bool | 
None = None, 15:36:54 chunked: bool = False, 15:36:54 body_pos: _TYPE_BODY_POSITION | None = None, 15:36:54 preload_content: bool = True, 15:36:54 decode_content: bool = True, 15:36:54 **response_kw: typing.Any, 15:36:54 ) -> BaseHTTPResponse: 15:36:54 """ 15:36:54 Get a connection from the pool and perform an HTTP request. This is the 15:36:54 lowest level call for making a request, so you'll need to specify all 15:36:54 the raw details. 15:36:54 15:36:54 .. note:: 15:36:54 15:36:54 More commonly, it's appropriate to use a convenience method 15:36:54 such as :meth:`request`. 15:36:54 15:36:54 .. note:: 15:36:54 15:36:54 `release_conn` will only behave as expected if 15:36:54 `preload_content=False` because we want to make 15:36:54 `preload_content=False` the default behaviour someday soon without 15:36:54 breaking backwards compatibility. 15:36:54 15:36:54 :param method: 15:36:54 HTTP request method (such as GET, POST, PUT, etc.) 15:36:54 15:36:54 :param url: 15:36:54 The URL to perform the request on. 15:36:54 15:36:54 :param body: 15:36:54 Data to send in the request body, either :class:`str`, :class:`bytes`, 15:36:54 an iterable of :class:`str`/:class:`bytes`, or a file-like object. 15:36:54 15:36:54 :param headers: 15:36:54 Dictionary of custom headers to send, such as User-Agent, 15:36:54 If-None-Match, etc. If None, pool headers are used. If provided, 15:36:54 these headers completely replace any pool-specific headers. 15:36:54 15:36:54 :param retries: 15:36:54 Configure the number of retries to allow before raising a 15:36:54 :class:`~urllib3.exceptions.MaxRetryError` exception. 15:36:54 15:36:54 If ``None`` (default) will retry 3 times, see ``Retry.DEFAULT``. Pass a 15:36:54 :class:`~urllib3.util.retry.Retry` object for fine-grained control 15:36:54 over different types of retries. 15:36:54 Pass an integer number to retry connection errors that many times, 15:36:54 but no other types of errors. Pass zero to never retry. 
15:36:54 15:36:54 If ``False``, then retries are disabled and any exception is raised 15:36:54 immediately. Also, instead of raising a MaxRetryError on redirects, 15:36:54 the redirect response will be returned. 15:36:54 15:36:54 :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. 15:36:54 15:36:54 :param redirect: 15:36:54 If True, automatically handle redirects (status codes 301, 302, 15:36:54 303, 307, 308). Each redirect counts as a retry. Disabling retries 15:36:54 will disable redirect, too. 15:36:54 15:36:54 :param assert_same_host: 15:36:54 If ``True``, will make sure that the host of the pool requests is 15:36:54 consistent else will raise HostChangedError. When ``False``, you can 15:36:54 use the pool on an HTTP proxy and request foreign hosts. 15:36:54 15:36:54 :param timeout: 15:36:54 If specified, overrides the default timeout for this one 15:36:54 request. It may be a float (in seconds) or an instance of 15:36:54 :class:`urllib3.util.Timeout`. 15:36:54 15:36:54 :param pool_timeout: 15:36:54 If set and the pool is set to block=True, then this method will 15:36:54 block for ``pool_timeout`` seconds and raise EmptyPoolError if no 15:36:54 connection is available within the time period. 15:36:54 15:36:54 :param bool preload_content: 15:36:54 If True, the response's body will be preloaded into memory. 15:36:54 15:36:54 :param bool decode_content: 15:36:54 If True, will attempt to decode the body based on the 15:36:54 'content-encoding' header. 15:36:54 15:36:54 :param release_conn: 15:36:54 If False, then the urlopen call will not release the connection 15:36:54 back into the pool once a response is received (but will release if 15:36:54 you read the entire contents of the response such as when 15:36:54 `preload_content=True`). This is useful if you're not preloading 15:36:54 the response's content immediately. You will need to call 15:36:54 ``r.release_conn()`` on the response ``r`` to return the connection 15:36:54 back into the pool. 
If None, it takes the value of ``preload_content`` 15:36:54 which defaults to ``True``. 15:36:54 15:36:54 :param bool chunked: 15:36:54 If True, urllib3 will send the body using chunked transfer 15:36:54 encoding. Otherwise, urllib3 will send the body using the standard 15:36:54 content-length form. Defaults to False. 15:36:54 15:36:54 :param int body_pos: 15:36:54 Position to seek to in file-like body in the event of a retry or 15:36:54 redirect. Typically this won't need to be set because urllib3 will 15:36:54 auto-populate the value when needed. 15:36:54 """ 15:36:54 parsed_url = parse_url(url) 15:36:54 destination_scheme = parsed_url.scheme 15:36:54 15:36:54 if headers is None: 15:36:54 headers = self.headers 15:36:54 15:36:54 if not isinstance(retries, Retry): 15:36:54 retries = Retry.from_int(retries, redirect=redirect, default=self.retries) 15:36:54 15:36:54 if release_conn is None: 15:36:54 release_conn = preload_content 15:36:54 15:36:54 # Check host 15:36:54 if assert_same_host and not self.is_same_host(url): 15:36:54 raise HostChangedError(self, url, retries) 15:36:54 15:36:54 # Ensure that the URL we're connecting to is properly encoded 15:36:54 if url.startswith("/"): 15:36:54 url = to_str(_encode_target(url)) 15:36:54 else: 15:36:54 url = to_str(parsed_url.url) 15:36:54 15:36:54 conn = None 15:36:54 15:36:54 # Track whether `conn` needs to be released before 15:36:54 # returning/raising/recursing. Update this variable if necessary, and 15:36:54 # leave `release_conn` constant throughout the function. That way, if 15:36:54 # the function recurses, the original value of `release_conn` will be 15:36:54 # passed down into the recursive call, and its value will be respected. 15:36:54 # 15:36:54 # See issue #651 [1] for details. 
15:36:54         #
        # [1] 
        release_this_conn = release_conn

        http_tunnel_required = connection_requires_http_tunnel(
            self.proxy, self.proxy_config, destination_scheme
        )

        # Merge the proxy headers. Only done when not using HTTP CONNECT. We
        # have to copy the headers dict so we can safely change it without those
        # changes being reflected in anyone else's copy.
        if not http_tunnel_required:
            headers = headers.copy()  # type: ignore[attr-defined]
            headers.update(self.proxy_headers)  # type: ignore[union-attr]

        # Must keep the exception bound to a separate variable or else Python 3
        # complains about UnboundLocalError.
        err = None

        # Keep track of whether we cleanly exited the except block. This
        # ensures we do proper cleanup in finally.
        clean_exit = False

        # Rewind body position, if needed. Record current position
        # for future rewinds in the event of a redirect/retry.
        body_pos = set_file_position(body, body_pos)

        try:
            # Request a connection from the queue.
            timeout_obj = self._get_timeout(timeout)
            conn = self._get_conn(timeout=pool_timeout)

            conn.timeout = timeout_obj.connect_timeout  # type: ignore[assignment]

            # Is this a closed/new connection that requires CONNECT tunnelling?
            if self.proxy is not None and http_tunnel_required and conn.is_closed:
                try:
                    self._prepare_proxy(conn)
                except (BaseSSLError, OSError, SocketTimeout) as e:
                    self._raise_timeout(
                        err=e, url=self.proxy.url, timeout_value=conn.timeout
                    )
                    raise

            # If we're going to release the connection in ``finally:``, then
            # the response doesn't need to know about the connection. Otherwise
            # it will also try to release it and we'll have a double-release
            # mess.
            response_conn = conn if not release_conn else None

            # Make the request on the HTTPConnection object
>           response = self._make_request(
                conn,
                method,
                url,
                timeout=timeout_obj,
                body=body,
                headers=headers,
                chunked=chunked,
                retries=retries,
                response_conn=response_conn,
                preload_content=preload_content,
                decode_content=decode_content,
                **response_kw,
            )

../.tox/testsPCE/lib/python3.11/site-packages/urllib3/connectionpool.py:787: 
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
../.tox/testsPCE/lib/python3.11/site-packages/urllib3/connectionpool.py:493: in _make_request
    conn.request(
../.tox/testsPCE/lib/python3.11/site-packages/urllib3/connection.py:500: in request
    self.endheaders()
/opt/pyenv/versions/3.11.10/lib/python3.11/http/client.py:1298: in endheaders
    self._send_output(message_body, encode_chunked=encode_chunked)
/opt/pyenv/versions/3.11.10/lib/python3.11/http/client.py:1058: in _send_output
    self.send(msg)
/opt/pyenv/versions/3.11.10/lib/python3.11/http/client.py:996: in send
    self.connect()
../.tox/testsPCE/lib/python3.11/site-packages/urllib3/connection.py:331: in connect
    self.sock = self._new_conn()
                ^^^^^^^^^^^^^^^^
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 

self = 

    def _new_conn(self) -> socket.socket:
        """Establish a socket connection and set nodelay settings on it.

        :return: New socket connection.
        """
        try:
            sock = connection.create_connection(
                (self._dns_host, self.port),
                self.timeout,
                source_address=self.source_address,
                socket_options=self.socket_options,
            )
        except socket.gaierror as e:
            raise NameResolutionError(self.host, self, e) from e
        except SocketTimeout as e:
            raise ConnectTimeoutError(
                self,
                f"Connection to {self.host} timed out. (connect timeout={self.timeout})",
            ) from e

        except OSError as e:
>           raise NewConnectionError(
                self, f"Failed to establish a new connection: {e}"
            ) from e
E           urllib3.exceptions.NewConnectionError: HTTPConnection(host='localhost', port=8181): Failed to establish a new connection: [Errno 111] Connection refused

../.tox/testsPCE/lib/python3.11/site-packages/urllib3/connection.py:219: NewConnectionError

The above exception was the direct cause of the following exception:

self = 
request = , stream = False
timeout = Timeout(connect=30, read=30, total=None), verify = True, cert = None
proxies = OrderedDict()

    def send(
        self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None
    ):
        """Sends PreparedRequest object. Returns Response object.

        :param request: The :class:`PreparedRequest ` being sent.
        :param stream: (optional) Whether to stream the request content.
        :param timeout: (optional) How long to wait for the server to send
            data before giving up, as a float, or a :ref:`(connect timeout,
            read timeout) ` tuple.
        :type timeout: float or tuple or urllib3 Timeout object
        :param verify: (optional) Either a boolean, in which case it controls whether
            we verify the server's TLS certificate, or a string, in which case it
            must be a path to a CA bundle to use
        :param cert: (optional) Any user-provided SSL certificate to be trusted.
        :param proxies: (optional) The proxies dictionary to apply to the request.
        :rtype: requests.Response
        """

        try:
            conn = self.get_connection_with_tls_context(
                request, verify, proxies=proxies, cert=cert
            )
        except LocationValueError as e:
            raise InvalidURL(e, request=request)

        self.cert_verify(conn, request.url, verify, cert)
        url = self.request_url(request, proxies)
        self.add_headers(
            request,
            stream=stream,
            timeout=timeout,
            verify=verify,
            cert=cert,
            proxies=proxies,
        )

        chunked = not (request.body is None or "Content-Length" in request.headers)

        if isinstance(timeout, tuple):
            try:
                connect, read = timeout
                timeout = TimeoutSauce(connect=connect, read=read)
            except ValueError:
                raise ValueError(
                    f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, "
                    f"or a single float to set both timeouts to the same value."
                )
        elif isinstance(timeout, TimeoutSauce):
            pass
        else:
            timeout = TimeoutSauce(connect=timeout, read=timeout)

        try:
>           resp = conn.urlopen(
                method=request.method,
                url=url,
                body=request.body,
                headers=request.headers,
                redirect=False,
                assert_same_host=False,
                preload_content=False,
                decode_content=False,
                retries=self.max_retries,
                timeout=timeout,
                chunked=chunked,
            )

../.tox/testsPCE/lib/python3.11/site-packages/requests/adapters.py:644: 
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
../.tox/testsPCE/lib/python3.11/site-packages/urllib3/connectionpool.py:841: in urlopen
    retries = retries.increment(
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 

self = Retry(total=0, connect=None, read=False, redirect=None, status=None)
method = 'DELETE', url = '/rests/data/transportpce-portmapping:network'
response = None
error = NewConnectionError("HTTPConnection(host='localhost', port=8181): Failed to establish a new connection: [Errno 111] Connection refused")
_pool = 
_stacktrace = 

    def increment(
        self,
        method: str | None = None,
        url: str | None = None,
        response: BaseHTTPResponse | None = None,
        error: Exception | None = None,
        _pool: ConnectionPool | None = None,
        _stacktrace: TracebackType | None = None,
    ) -> Self:
        """Return a new Retry object with incremented retry counters.

        :param response: A response object, or None, if the server did not
            return a response.
        :type response: :class:`~urllib3.response.BaseHTTPResponse`
        :param Exception error: An error encountered during the request, or
            None if the response was received successfully.

        :return: A new ``Retry`` object.
        """
        if self.total is False and error:
            # Disabled, indicate to re-raise the error.
            raise reraise(type(error), error, _stacktrace)

        total = self.total
        if total is not None:
            total -= 1

        connect = self.connect
        read = self.read
        redirect = self.redirect
        status_count = self.status
        other = self.other
        cause = "unknown"
        status = None
        redirect_location = None

        if error and self._is_connection_error(error):
            # Connect retry?
            if connect is False:
                raise reraise(type(error), error, _stacktrace)
            elif connect is not None:
                connect -= 1

        elif error and self._is_read_error(error):
            # Read retry?
            if read is False or method is None or not self._is_method_retryable(method):
                raise reraise(type(error), error, _stacktrace)
            elif read is not None:
                read -= 1

        elif error:
            # Other retry?
            if other is not None:
                other -= 1

        elif response and response.get_redirect_location():
            # Redirect retry?
            if redirect is not None:
                redirect -= 1
            cause = "too many redirects"
            response_redirect_location = response.get_redirect_location()
            if response_redirect_location:
                redirect_location = response_redirect_location
            status = response.status

        else:
            # Incrementing because of a server error like a 500 in
            # status_forcelist and the given method is in the allowed_methods
            cause = ResponseError.GENERIC_ERROR
            if response and response.status:
                if status_count is not None:
                    status_count -= 1
                cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status)
            status = response.status

        history = self.history + (
            RequestHistory(method, url, error, status, redirect_location),
        )

        new_retry = self.new(
            total=total,
            connect=connect,
            read=read,
            redirect=redirect,
            status=status_count,
            other=other,
            history=history,
        )

        if new_retry.is_exhausted():
            reason = error or ResponseError(cause)
>           raise MaxRetryError(_pool, url, reason) from reason  # type: ignore[arg-type]
            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
E           urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=8181): Max retries exceeded with url: /rests/data/transportpce-portmapping:network (Caused by NewConnectionError("HTTPConnection(host='localhost', port=8181): Failed to establish a new connection: [Errno 111] Connection refused"))

../.tox/testsPCE/lib/python3.11/site-packages/urllib3/util/retry.py:535: MaxRetryError

During handling of the above exception, another exception occurred:

cls = 

    @classmethod
    def tearDownClass(cls):
        # clean datastores
>       test_utils.del_portmapping()
transportpce_tests/pce/test02_pce_400G.py:111: 
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
transportpce_tests/common/test_utils.py:490: in del_portmapping
    response = delete_request(url[RESTCONF_VERSION].format('{}'))
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
transportpce_tests/common/test_utils.py:134: in delete_request
    return requests.request(
../.tox/testsPCE/lib/python3.11/site-packages/requests/api.py:59: in request
    return session.request(method=method, url=url, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
../.tox/testsPCE/lib/python3.11/site-packages/requests/sessions.py:589: in request
    resp = self.send(prep, **send_kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
../.tox/testsPCE/lib/python3.11/site-packages/requests/sessions.py:703: in send
    r = adapter.send(request, **kwargs)
        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 

self = 
request = , stream = False
timeout = Timeout(connect=30, read=30, total=None), verify = True, cert = None
proxies = OrderedDict()

    def send(
        self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None
    ):
        """Sends PreparedRequest object. Returns Response object.

        :param request: The :class:`PreparedRequest ` being sent.
        :param stream: (optional) Whether to stream the request content.
        :param timeout: (optional) How long to wait for the server to send
            data before giving up, as a float, or a :ref:`(connect timeout,
            read timeout) ` tuple.
        :type timeout: float or tuple or urllib3 Timeout object
        :param verify: (optional) Either a boolean, in which case it controls whether
            we verify the server's TLS certificate, or a string, in which case it
            must be a path to a CA bundle to use
        :param cert: (optional) Any user-provided SSL certificate to be trusted.
        :param proxies: (optional) The proxies dictionary to apply to the request.
        :rtype: requests.Response
        """

        try:
            conn = self.get_connection_with_tls_context(
                request, verify, proxies=proxies, cert=cert
            )
        except LocationValueError as e:
            raise InvalidURL(e, request=request)

        self.cert_verify(conn, request.url, verify, cert)
        url = self.request_url(request, proxies)
        self.add_headers(
            request,
            stream=stream,
            timeout=timeout,
            verify=verify,
            cert=cert,
            proxies=proxies,
        )

        chunked = not (request.body is None or "Content-Length" in request.headers)

        if isinstance(timeout, tuple):
            try:
                connect, read = timeout
                timeout = TimeoutSauce(connect=connect, read=read)
            except ValueError:
                raise ValueError(
                    f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, "
                    f"or a single float to set both timeouts to the same value."
                )
        elif isinstance(timeout, TimeoutSauce):
            pass
        else:
            timeout = TimeoutSauce(connect=timeout, read=timeout)

        try:
            resp = conn.urlopen(
                method=request.method,
                url=url,
                body=request.body,
                headers=request.headers,
                redirect=False,
                assert_same_host=False,
                preload_content=False,
                decode_content=False,
                retries=self.max_retries,
                timeout=timeout,
                chunked=chunked,
            )

        except (ProtocolError, OSError) as err:
            raise ConnectionError(err, request=request)

        except MaxRetryError as e:
            if isinstance(e.reason, ConnectTimeoutError):
                # TODO: Remove this in 3.0.0: see #2811
                if not isinstance(e.reason, NewConnectionError):
                    raise ConnectTimeout(e, request=request)

            if isinstance(e.reason, ResponseError):
                raise RetryError(e, request=request)

            if isinstance(e.reason, _ProxyError):
                raise ProxyError(e, request=request)

            if isinstance(e.reason, _SSLError):
                # This branch is for urllib3 v1.22 and later.
                raise SSLError(e, request=request)

>           raise ConnectionError(e, request=request)
E           requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=8181): Max retries exceeded with url: /rests/data/transportpce-portmapping:network (Caused by NewConnectionError("HTTPConnection(host='localhost', port=8181): Failed to establish a new connection: [Errno 111] Connection refused"))

../.tox/testsPCE/lib/python3.11/site-packages/requests/adapters.py:677: ConnectionError
----------------------------- Captured stdout call -----------------------------
execution of test_12_path_computation_400G_xpdr_bi_cfg
=================================== FAILURES ===================================
____________ TestTransportPCEPce400g.test_10_load_port_mapping_cfg _____________

self = 

    def _new_conn(self) -> socket.socket:
        """Establish a socket connection and set nodelay settings on it.

        :return: New socket connection.
        """
        try:
>           sock = connection.create_connection(
                (self._dns_host, self.port),
                self.timeout,
                source_address=self.source_address,
                socket_options=self.socket_options,
            )

../.tox/testsPCE/lib/python3.11/site-packages/urllib3/connection.py:204: 
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
../.tox/testsPCE/lib/python3.11/site-packages/urllib3/util/connection.py:85: in create_connection
    raise err
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 

address = ('localhost', 8181), timeout = 30, source_address = None
socket_options = [(6, 1, 1)]

    def create_connection(
        address: tuple[str, int],
        timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT,
        source_address: tuple[str, int] | None = None,
        socket_options: _TYPE_SOCKET_OPTIONS | None = None,
    ) -> socket.socket:
        """Connect to *address* and return the socket object.

        Convenience function. Connect to *address* (a 2-tuple ``(host,
        port)``) and return the socket object. Passing the optional
        *timeout* parameter will set the timeout on the socket instance
        before attempting to connect. If no *timeout* is supplied, the
        global default timeout setting returned by :func:`socket.getdefaulttimeout`
        is used. If *source_address* is set it must be a tuple of (host, port)
        for the socket to bind as a source address before making the connection.
        An host of '' or port 0 tells the OS to use the default.
        """

        host, port = address
        if host.startswith("["):
            host = host.strip("[]")
        err = None

        # Using the value from allowed_gai_family() in the context of getaddrinfo lets
        # us select whether to work with IPv4 DNS records, IPv6 records, or both.
        # The original create_connection function always returns all records.
        family = allowed_gai_family()

        try:
            host.encode("idna")
        except UnicodeError:
            raise LocationParseError(f"'{host}', label empty or too long") from None

        for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM):
            af, socktype, proto, canonname, sa = res
            sock = None
            try:
                sock = socket.socket(af, socktype, proto)

                # If provided, set socket level options before connecting.
                _set_socket_options(sock, socket_options)

                if timeout is not _DEFAULT_TIMEOUT:
                    sock.settimeout(timeout)
                if source_address:
                    sock.bind(source_address)
>               sock.connect(sa)
E               ConnectionRefusedError: [Errno 111] Connection refused

../.tox/testsPCE/lib/python3.11/site-packages/urllib3/util/connection.py:73: ConnectionRefusedError

The above exception was the direct cause of the following exception:

self = 
method = 'POST', url = '/rests/data/transportpce-portmapping:network'
body = '{"nodes": [{"node-id": "XPDR-A2", "node-info": {"node-clli": "NodeA", "node-vendor": "vendorA", "openroadm-version": ...50}, {"mc-node-name": "SRG1-PP", "center-freq-granularity": 6.25, "slot-width-granularity": 12.5, "max-slots": 8}]}]}'
headers = {'User-Agent': 'python-requests/2.32.5', 'Accept-Encoding': 'gzip, deflate', 'Accept': 'application/json', 'Connection': 'keep-alive', 'Content-Type': 'application/json', 'Content-Length': '11141', 'Authorization': 'Basic YWRtaW46YWRtaW4='}
retries = Retry(total=0, connect=None, read=False, redirect=None, status=None)
redirect = False, assert_same_host = False
timeout = Timeout(connect=30, read=30, total=None), pool_timeout = None
release_conn = False, chunked = False, body_pos = None, preload_content = False
decode_content = False, response_kw = {}
parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/rests/data/transportpce-portmapping:network', query=None, fragment=None)
destination_scheme = None, conn = None, release_this_conn = True
http_tunnel_required = False, err = None, clean_exit = False

    def urlopen(  # type: ignore[override]
        self,
        method: str,
        url: str,
        body: _TYPE_BODY | None = None,
        headers: typing.Mapping[str, str] | None = None,
        retries: Retry | bool | int | None = None,
        redirect: bool = True,
        assert_same_host: bool = True,
        timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT,
        pool_timeout: int | None = None,
        release_conn: bool | None = None,
        chunked: bool = False,
        body_pos: _TYPE_BODY_POSITION | None = None,
        preload_content: bool = True,
        decode_content: bool = True,
        **response_kw: typing.Any,
    ) -> BaseHTTPResponse:
        """
        Get a connection from the pool and perform an HTTP request. This is the
        lowest level call for making a request, so you'll need to specify all
        the raw details.

        .. note::

            More commonly, it's appropriate to use a convenience method
            such as :meth:`request`.

        .. note::

            `release_conn` will only behave as expected if
            `preload_content=False` because we want to make
            `preload_content=False` the default behaviour someday soon without
            breaking backwards compatibility.

        :param method:
            HTTP request method (such as GET, POST, PUT, etc.)

        :param url:
            The URL to perform the request on.

        :param body:
            Data to send in the request body, either :class:`str`, :class:`bytes`,
            an iterable of :class:`str`/:class:`bytes`, or a file-like object.

        :param headers:
            Dictionary of custom headers to send, such as User-Agent,
            If-None-Match, etc. If None, pool headers are used. If provided,
            these headers completely replace any pool-specific headers.

        :param retries:
            Configure the number of retries to allow before raising a
            :class:`~urllib3.exceptions.MaxRetryError` exception.

            If ``None`` (default) will retry 3 times, see ``Retry.DEFAULT``. Pass a
            :class:`~urllib3.util.retry.Retry` object for fine-grained control
            over different types of retries.
            Pass an integer number to retry connection errors that many times,
            but no other types of errors. Pass zero to never retry.

            If ``False``, then retries are disabled and any exception is raised
            immediately. Also, instead of raising a MaxRetryError on redirects,
            the redirect response will be returned.

        :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int.

        :param redirect:
            If True, automatically handle redirects (status codes 301, 302,
            303, 307, 308). Each redirect counts as a retry. Disabling retries
            will disable redirect, too.

        :param assert_same_host:
            If ``True``, will make sure that the host of the pool requests is
            consistent else will raise HostChangedError. When ``False``, you can
            use the pool on an HTTP proxy and request foreign hosts.

        :param timeout:
            If specified, overrides the default timeout for this one
            request. It may be a float (in seconds) or an instance of
            :class:`urllib3.util.Timeout`.

        :param pool_timeout:
            If set and the pool is set to block=True, then this method will
            block for ``pool_timeout`` seconds and raise EmptyPoolError if no
            connection is available within the time period.

        :param bool preload_content:
            If True, the response's body will be preloaded into memory.

        :param bool decode_content:
            If True, will attempt to decode the body based on the
            'content-encoding' header.

        :param release_conn:
            If False, then the urlopen call will not release the connection
            back into the pool once a response is received (but will release if
            you read the entire contents of the response such as when
            `preload_content=True`). This is useful if you're not preloading
            the response's content immediately. You will need to call
            ``r.release_conn()`` on the response ``r`` to return the connection
            back into the pool. If None, it takes the value of ``preload_content``
            which defaults to ``True``.

        :param bool chunked:
            If True, urllib3 will send the body using chunked transfer
            encoding. Otherwise, urllib3 will send the body using the standard
            content-length form. Defaults to False.

        :param int body_pos:
            Position to seek to in file-like body in the event of a retry or
            redirect. Typically this won't need to be set because urllib3 will
            auto-populate the value when needed.
        """
        parsed_url = parse_url(url)
        destination_scheme = parsed_url.scheme

        if headers is None:
            headers = self.headers

        if not isinstance(retries, Retry):
            retries = Retry.from_int(retries, redirect=redirect, default=self.retries)

        if release_conn is None:
            release_conn = preload_content

        # Check host
        if assert_same_host and not self.is_same_host(url):
            raise HostChangedError(self, url, retries)

        # Ensure that the URL we're connecting to is properly encoded
        if url.startswith("/"):
            url = to_str(_encode_target(url))
        else:
            url = to_str(parsed_url.url)

        conn = None

        # Track whether `conn` needs to be released before
        # returning/raising/recursing. Update this variable if necessary, and
        # leave `release_conn` constant throughout the function. That way, if
        # the function recurses, the original value of `release_conn` will be
        # passed down into the recursive call, and its value will be respected.
        #
        # See issue #651 [1] for details.
        #
        # [1] 
        release_this_conn = release_conn

        http_tunnel_required = connection_requires_http_tunnel(
            self.proxy, self.proxy_config, destination_scheme
        )

        # Merge the proxy headers. Only done when not using HTTP CONNECT. We
        # have to copy the headers dict so we can safely change it without those
        # changes being reflected in anyone else's copy.
15:36:54 if not http_tunnel_required: 15:36:54 headers = headers.copy() # type: ignore[attr-defined] 15:36:54 headers.update(self.proxy_headers) # type: ignore[union-attr] 15:36:54 15:36:54 # Must keep the exception bound to a separate variable or else Python 3 15:36:54 # complains about UnboundLocalError. 15:36:54 err = None 15:36:54 15:36:54 # Keep track of whether we cleanly exited the except block. This 15:36:54 # ensures we do proper cleanup in finally. 15:36:54 clean_exit = False 15:36:54 15:36:54 # Rewind body position, if needed. Record current position 15:36:54 # for future rewinds in the event of a redirect/retry. 15:36:54 body_pos = set_file_position(body, body_pos) 15:36:54 15:36:54 try: 15:36:54 # Request a connection from the queue. 15:36:54 timeout_obj = self._get_timeout(timeout) 15:36:54 conn = self._get_conn(timeout=pool_timeout) 15:36:54 15:36:54 conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] 15:36:54 15:36:54 # Is this a closed/new connection that requires CONNECT tunnelling? 15:36:54 if self.proxy is not None and http_tunnel_required and conn.is_closed: 15:36:54 try: 15:36:54 self._prepare_proxy(conn) 15:36:54 except (BaseSSLError, OSError, SocketTimeout) as e: 15:36:54 self._raise_timeout( 15:36:54 err=e, url=self.proxy.url, timeout_value=conn.timeout 15:36:54 ) 15:36:54 raise 15:36:54 15:36:54 # If we're going to release the connection in ``finally:``, then 15:36:54 # the response doesn't need to know about the connection. Otherwise 15:36:54 # it will also try to release it and we'll have a double-release 15:36:54 # mess. 
15:36:54 response_conn = conn if not release_conn else None 15:36:54 15:36:54 # Make the request on the HTTPConnection object 15:36:54 > response = self._make_request( 15:36:54 conn, 15:36:54 method, 15:36:54 url, 15:36:54 timeout=timeout_obj, 15:36:54 body=body, 15:36:54 headers=headers, 15:36:54 chunked=chunked, 15:36:54 retries=retries, 15:36:54 response_conn=response_conn, 15:36:54 preload_content=preload_content, 15:36:54 decode_content=decode_content, 15:36:54 **response_kw, 15:36:54 ) 15:36:54 15:36:54 ../.tox/testsPCE/lib/python3.11/site-packages/urllib3/connectionpool.py:787: 15:36:54 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 15:36:54 ../.tox/testsPCE/lib/python3.11/site-packages/urllib3/connectionpool.py:493: in _make_request 15:36:54 conn.request( 15:36:54 ../.tox/testsPCE/lib/python3.11/site-packages/urllib3/connection.py:500: in request 15:36:54 self.endheaders() 15:36:54 /opt/pyenv/versions/3.11.10/lib/python3.11/http/client.py:1298: in endheaders 15:36:54 self._send_output(message_body, encode_chunked=encode_chunked) 15:36:54 /opt/pyenv/versions/3.11.10/lib/python3.11/http/client.py:1058: in _send_output 15:36:54 self.send(msg) 15:36:54 /opt/pyenv/versions/3.11.10/lib/python3.11/http/client.py:996: in send 15:36:54 self.connect() 15:36:54 ../.tox/testsPCE/lib/python3.11/site-packages/urllib3/connection.py:331: in connect 15:36:54 self.sock = self._new_conn() 15:36:54 ^^^^^^^^^^^^^^^^ 15:36:54 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 15:36:54 15:36:54 self = 15:36:54 15:36:54 def _new_conn(self) -> socket.socket: 15:36:54 """Establish a socket connection and set nodelay settings on it. 15:36:54 15:36:54 :return: New socket connection. 
15:36:54 """ 15:36:54 try: 15:36:54 sock = connection.create_connection( 15:36:54 (self._dns_host, self.port), 15:36:54 self.timeout, 15:36:54 source_address=self.source_address, 15:36:54 socket_options=self.socket_options, 15:36:54 ) 15:36:54 except socket.gaierror as e: 15:36:54 raise NameResolutionError(self.host, self, e) from e 15:36:54 except SocketTimeout as e: 15:36:54 raise ConnectTimeoutError( 15:36:54 self, 15:36:54 f"Connection to {self.host} timed out. (connect timeout={self.timeout})", 15:36:54 ) from e 15:36:54 15:36:54 except OSError as e: 15:36:54 > raise NewConnectionError( 15:36:54 self, f"Failed to establish a new connection: {e}" 15:36:54 ) from e 15:36:54 E urllib3.exceptions.NewConnectionError: HTTPConnection(host='localhost', port=8181): Failed to establish a new connection: [Errno 111] Connection refused 15:36:54 15:36:54 ../.tox/testsPCE/lib/python3.11/site-packages/urllib3/connection.py:219: NewConnectionError 15:36:54 15:36:54 The above exception was the direct cause of the following exception: 15:36:54 15:36:54 self = 15:36:54 request = , stream = False 15:36:54 timeout = Timeout(connect=30, read=30, total=None), verify = True, cert = None 15:36:54 proxies = OrderedDict() 15:36:54 15:36:54 def send( 15:36:54 self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 15:36:54 ): 15:36:54 """Sends PreparedRequest object. Returns Response object. 15:36:54 15:36:54 :param request: The :class:`PreparedRequest ` being sent. 15:36:54 :param stream: (optional) Whether to stream the request content. 15:36:54 :param timeout: (optional) How long to wait for the server to send 15:36:54 data before giving up, as a float, or a :ref:`(connect timeout, 15:36:54 read timeout) ` tuple. 
15:36:54 :type timeout: float or tuple or urllib3 Timeout object 15:36:54 :param verify: (optional) Either a boolean, in which case it controls whether 15:36:54 we verify the server's TLS certificate, or a string, in which case it 15:36:54 must be a path to a CA bundle to use 15:36:54 :param cert: (optional) Any user-provided SSL certificate to be trusted. 15:36:54 :param proxies: (optional) The proxies dictionary to apply to the request. 15:36:54 :rtype: requests.Response 15:36:54 """ 15:36:54 15:36:54 try: 15:36:54 conn = self.get_connection_with_tls_context( 15:36:54 request, verify, proxies=proxies, cert=cert 15:36:54 ) 15:36:54 except LocationValueError as e: 15:36:54 raise InvalidURL(e, request=request) 15:36:54 15:36:54 self.cert_verify(conn, request.url, verify, cert) 15:36:54 url = self.request_url(request, proxies) 15:36:54 self.add_headers( 15:36:54 request, 15:36:54 stream=stream, 15:36:54 timeout=timeout, 15:36:54 verify=verify, 15:36:54 cert=cert, 15:36:54 proxies=proxies, 15:36:54 ) 15:36:54 15:36:54 chunked = not (request.body is None or "Content-Length" in request.headers) 15:36:54 15:36:54 if isinstance(timeout, tuple): 15:36:54 try: 15:36:54 connect, read = timeout 15:36:54 timeout = TimeoutSauce(connect=connect, read=read) 15:36:54 except ValueError: 15:36:54 raise ValueError( 15:36:54 f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 15:36:54 f"or a single float to set both timeouts to the same value." 
15:36:54 ) 15:36:54 elif isinstance(timeout, TimeoutSauce): 15:36:54 pass 15:36:54 else: 15:36:54 timeout = TimeoutSauce(connect=timeout, read=timeout) 15:36:54 15:36:54 try: 15:36:54 > resp = conn.urlopen( 15:36:54 method=request.method, 15:36:54 url=url, 15:36:54 body=request.body, 15:36:54 headers=request.headers, 15:36:54 redirect=False, 15:36:54 assert_same_host=False, 15:36:54 preload_content=False, 15:36:54 decode_content=False, 15:36:54 retries=self.max_retries, 15:36:54 timeout=timeout, 15:36:54 chunked=chunked, 15:36:54 ) 15:36:54 15:36:54 ../.tox/testsPCE/lib/python3.11/site-packages/requests/adapters.py:644: 15:36:54 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 15:36:54 ../.tox/testsPCE/lib/python3.11/site-packages/urllib3/connectionpool.py:841: in urlopen 15:36:54 retries = retries.increment( 15:36:54 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 15:36:54 15:36:54 self = Retry(total=0, connect=None, read=False, redirect=None, status=None) 15:36:54 method = 'POST', url = '/rests/data/transportpce-portmapping:network' 15:36:54 response = None 15:36:54 error = NewConnectionError("HTTPConnection(host='localhost', port=8181): Failed to establish a new connection: [Errno 111] Connection refused") 15:36:54 _pool = 15:36:54 _stacktrace = 15:36:54 15:36:54 def increment( 15:36:54 self, 15:36:54 method: str | None = None, 15:36:54 url: str | None = None, 15:36:54 response: BaseHTTPResponse | None = None, 15:36:54 error: Exception | None = None, 15:36:54 _pool: ConnectionPool | None = None, 15:36:54 _stacktrace: TracebackType | None = None, 15:36:54 ) -> Self: 15:36:54 """Return a new Retry object with incremented retry counters. 15:36:54 15:36:54 :param response: A response object, or None, if the server did not 15:36:54 return a response. 
15:36:54     :type response: :class:`~urllib3.response.BaseHTTPResponse`
15:36:54     :param Exception error: An error encountered during the request, or
15:36:54         None if the response was received successfully.
15:36:54
15:36:54     :return: A new ``Retry`` object.
15:36:54     """
15:36:54     if self.total is False and error:
15:36:54         # Disabled, indicate to re-raise the error.
15:36:54         raise reraise(type(error), error, _stacktrace)
15:36:54
15:36:54     total = self.total
15:36:54     if total is not None:
15:36:54         total -= 1
15:36:54
15:36:54     connect = self.connect
15:36:54     read = self.read
15:36:54     redirect = self.redirect
15:36:54     status_count = self.status
15:36:54     other = self.other
15:36:54     cause = "unknown"
15:36:54     status = None
15:36:54     redirect_location = None
15:36:54
15:36:54     if error and self._is_connection_error(error):
15:36:54         # Connect retry?
15:36:54         if connect is False:
15:36:54             raise reraise(type(error), error, _stacktrace)
15:36:54         elif connect is not None:
15:36:54             connect -= 1
15:36:54
15:36:54     elif error and self._is_read_error(error):
15:36:54         # Read retry?
15:36:54         if read is False or method is None or not self._is_method_retryable(method):
15:36:54             raise reraise(type(error), error, _stacktrace)
15:36:54         elif read is not None:
15:36:54             read -= 1
15:36:54
15:36:54     elif error:
15:36:54         # Other retry?
15:36:54         if other is not None:
15:36:54             other -= 1
15:36:54
15:36:54     elif response and response.get_redirect_location():
15:36:54         # Redirect retry?
15:36:54         if redirect is not None:
15:36:54             redirect -= 1
15:36:54         cause = "too many redirects"
15:36:54         response_redirect_location = response.get_redirect_location()
15:36:54         if response_redirect_location:
15:36:54             redirect_location = response_redirect_location
15:36:54         status = response.status
15:36:54
15:36:54     else:
15:36:54         # Incrementing because of a server error like a 500 in
15:36:54         # status_forcelist and the given method is in the allowed_methods
15:36:54         cause = ResponseError.GENERIC_ERROR
15:36:54         if response and response.status:
15:36:54             if status_count is not None:
15:36:54                 status_count -= 1
15:36:54             cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status)
15:36:54             status = response.status
15:36:54
15:36:54     history = self.history + (
15:36:54         RequestHistory(method, url, error, status, redirect_location),
15:36:54     )
15:36:54
15:36:54     new_retry = self.new(
15:36:54         total=total,
15:36:54         connect=connect,
15:36:54         read=read,
15:36:54         redirect=redirect,
15:36:54         status=status_count,
15:36:54         other=other,
15:36:54         history=history,
15:36:54     )
15:36:54
15:36:54     if new_retry.is_exhausted():
15:36:54         reason = error or ResponseError(cause)
15:36:54 >       raise MaxRetryError(_pool, url, reason) from reason  # type: ignore[arg-type]
15:36:54         ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
15:36:54 E   urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=8181): Max retries exceeded with url: /rests/data/transportpce-portmapping:network (Caused by NewConnectionError("HTTPConnection(host='localhost', port=8181): Failed to establish a new connection: [Errno 111] Connection refused"))
15:36:54
15:36:54 ../.tox/testsPCE/lib/python3.11/site-packages/urllib3/util/retry.py:535: MaxRetryError
15:36:54
15:36:54 During handling of the above exception, another exception occurred:
15:36:54
15:36:54 self =
15:36:54
15:36:54 def test_10_load_port_mapping_cfg(self):
15:36:54     test_utils.del_portmapping()
15:36:54     time.sleep(1)
15:36:54 >   response = test_utils.post_portmapping(self.port_mapping_data_cfg)
15:36:54     ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
15:36:54
15:36:54 transportpce_tests/pce/test02_pce_400G.py:321:
15:36:54 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
15:36:54 transportpce_tests/common/test_utils.py:483: in post_portmapping
15:36:54     response = post_request(url[RESTCONF_VERSION].format('{}'), json_payload)
15:36:54     ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
15:36:54 transportpce_tests/common/test_utils.py:143: in post_request
15:36:54     return requests.request(
15:36:54 ../.tox/testsPCE/lib/python3.11/site-packages/requests/api.py:59: in request
15:36:54     return session.request(method=method, url=url, **kwargs)
15:36:54     ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
15:36:54 ../.tox/testsPCE/lib/python3.11/site-packages/requests/sessions.py:589: in request
15:36:54     resp = self.send(prep, **send_kwargs)
15:36:54     ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
15:36:54 ../.tox/testsPCE/lib/python3.11/site-packages/requests/sessions.py:703: in send
15:36:54     r = adapter.send(request, **kwargs)
15:36:54     ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
15:36:54 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
15:36:54
15:36:54 self =
15:36:54 request = , stream = False
15:36:54 timeout = Timeout(connect=30, read=30, total=None), verify = True, cert = None
15:36:54 proxies = OrderedDict()
15:36:54
15:36:54 def send(
15:36:54     self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None
15:36:54 ):
15:36:54     """Sends PreparedRequest object. Returns Response object.
15:36:54
15:36:54     :param request: The :class:`PreparedRequest ` being sent.
15:36:54     :param stream: (optional) Whether to stream the request content.
15:36:54     :type timeout: float or tuple or urllib3 Timeout object
15:36:54     :param verify: (optional) Either a boolean, in which case it controls whether
15:36:54         we verify the server's TLS certificate, or a string, in which case it
15:36:54         must be a path to a CA bundle to use
15:36:54     :param cert: (optional) Any user-provided SSL certificate to be trusted.
15:36:54     :param proxies: (optional) The proxies dictionary to apply to the request.
15:36:54     :rtype: requests.Response
15:36:54     """
15:36:54
15:36:54     try:
15:36:54         conn = self.get_connection_with_tls_context(
15:36:54             request, verify, proxies=proxies, cert=cert
15:36:54         )
15:36:54     except LocationValueError as e:
15:36:54         raise InvalidURL(e, request=request)
15:36:54
15:36:54     self.cert_verify(conn, request.url, verify, cert)
15:36:54     url = self.request_url(request, proxies)
15:36:54     self.add_headers(
15:36:54         request,
15:36:54         stream=stream,
15:36:54         timeout=timeout,
15:36:54         verify=verify,
15:36:54         cert=cert,
15:36:54         proxies=proxies,
15:36:54     )
15:36:54
15:36:54     chunked = not (request.body is None or "Content-Length" in request.headers)
15:36:54
15:36:54     if isinstance(timeout, tuple):
15:36:54         try:
15:36:54             connect, read = timeout
15:36:54             timeout = TimeoutSauce(connect=connect, read=read)
15:36:54         except ValueError:
15:36:54             raise ValueError(
15:36:54                 f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, "
15:36:54                 f"or a single float to set both timeouts to the same value."
15:36:54             )
15:36:54     elif isinstance(timeout, TimeoutSauce):
15:36:54         pass
15:36:54     else:
15:36:54         timeout = TimeoutSauce(connect=timeout, read=timeout)
15:36:54
15:36:54     try:
15:36:54         resp = conn.urlopen(
15:36:54             method=request.method,
15:36:54             url=url,
15:36:54             body=request.body,
15:36:54             headers=request.headers,
15:36:54             redirect=False,
15:36:54             assert_same_host=False,
15:36:54             preload_content=False,
15:36:54             decode_content=False,
15:36:54             retries=self.max_retries,
15:36:54             timeout=timeout,
15:36:54             chunked=chunked,
15:36:54         )
15:36:54
15:36:54     except (ProtocolError, OSError) as err:
15:36:54         raise ConnectionError(err, request=request)
15:36:54
15:36:54     except MaxRetryError as e:
15:36:54         if isinstance(e.reason, ConnectTimeoutError):
15:36:54             # TODO: Remove this in 3.0.0: see #2811
15:36:54             if not isinstance(e.reason, NewConnectionError):
15:36:54                 raise ConnectTimeout(e, request=request)
15:36:54
15:36:54         if isinstance(e.reason, ResponseError):
15:36:54             raise RetryError(e, request=request)
15:36:54
15:36:54         if isinstance(e.reason, _ProxyError):
15:36:54             raise ProxyError(e, request=request)
15:36:54
15:36:54         if isinstance(e.reason, _SSLError):
15:36:54             # This branch is for urllib3 v1.22 and later.
15:36:54             raise SSLError(e, request=request)
15:36:54
15:36:54 >       raise ConnectionError(e, request=request)
15:36:54 E       requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=8181): Max retries exceeded with url: /rests/data/transportpce-portmapping:network (Caused by NewConnectionError("HTTPConnection(host='localhost', port=8181): Failed to establish a new connection: [Errno 111] Connection refused"))
15:36:54
15:36:54 ../.tox/testsPCE/lib/python3.11/site-packages/requests/adapters.py:677: ConnectionError
15:36:54 ----------------------------- Captured stdout call -----------------------------
15:36:54 execution of test_10_load_port_mapping_cfg
15:36:54 ________ TestTransportPCEPce400g.test_11_load_openroadm_topology_bi_cfg ________
15:36:54
15:36:54 self =
15:36:54
15:36:54 def _new_conn(self) -> socket.socket:
15:36:54     """Establish a socket connection and set nodelay settings on it.
15:36:54
15:36:54     :return: New socket connection.
15:36:54     """
15:36:54     try:
15:36:54 >       sock = connection.create_connection(
15:36:54             (self._dns_host, self.port),
15:36:54             self.timeout,
15:36:54             source_address=self.source_address,
15:36:54             socket_options=self.socket_options,
15:36:54         )
15:36:54
15:36:54 ../.tox/testsPCE/lib/python3.11/site-packages/urllib3/connection.py:204:
15:36:54 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
15:36:54 ../.tox/testsPCE/lib/python3.11/site-packages/urllib3/util/connection.py:85: in create_connection
15:36:54     raise err
15:36:54 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
15:36:54
15:36:54 address = ('localhost', 8181), timeout = 30, source_address = None
15:36:54 socket_options = [(6, 1, 1)]
15:36:54
15:36:54 def create_connection(
15:36:54     address: tuple[str, int],
15:36:54     timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT,
15:36:54     source_address: tuple[str, int] | None = None,
15:36:54     socket_options: _TYPE_SOCKET_OPTIONS | None = None,
15:36:54 ) -> socket.socket:
15:36:54     """Connect to *address* and return the socket object.
15:36:54
15:36:54     Convenience function. Connect to *address* (a 2-tuple ``(host,
15:36:54     port)``) and return the socket object. Passing the optional
15:36:54     *timeout* parameter will set the timeout on the socket instance
15:36:54     before attempting to connect. If no *timeout* is supplied, the
15:36:54     global default timeout setting returned by :func:`socket.getdefaulttimeout`
15:36:54     is used. If *source_address* is set it must be a tuple of (host, port)
15:36:54     for the socket to bind as a source address before making the connection.
15:36:54     An host of '' or port 0 tells the OS to use the default.
15:36:54     """
15:36:54
15:36:54     host, port = address
15:36:54     if host.startswith("["):
15:36:54         host = host.strip("[]")
15:36:54     err = None
15:36:54
15:36:54     # Using the value from allowed_gai_family() in the context of getaddrinfo lets
15:36:54     # us select whether to work with IPv4 DNS records, IPv6 records, or both.
15:36:54     # The original create_connection function always returns all records.
15:36:54     family = allowed_gai_family()
15:36:54
15:36:54     try:
15:36:54         host.encode("idna")
15:36:54     except UnicodeError:
15:36:54         raise LocationParseError(f"'{host}', label empty or too long") from None
15:36:54
15:36:54     for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM):
15:36:54         af, socktype, proto, canonname, sa = res
15:36:54         sock = None
15:36:54         try:
15:36:54             sock = socket.socket(af, socktype, proto)
15:36:54
15:36:54             # If provided, set socket level options before connecting.
15:36:54             _set_socket_options(sock, socket_options)
15:36:54
15:36:54             if timeout is not _DEFAULT_TIMEOUT:
15:36:54                 sock.settimeout(timeout)
15:36:54             if source_address:
15:36:54                 sock.bind(source_address)
15:36:54 >           sock.connect(sa)
15:36:54 E           ConnectionRefusedError: [Errno 111] Connection refused
15:36:54
15:36:54 ../.tox/testsPCE/lib/python3.11/site-packages/urllib3/util/connection.py:73: ConnectionRefusedError
15:36:54
15:36:54 The above exception was the direct cause of the following exception:
15:36:54
15:36:54 self =
15:36:54 method = 'PUT'
15:36:54 url = '/rests/data/ietf-network:networks/network=openroadm-topology'
15:36:54 body = '{"network": [{"network-id": "openroadm-topology", "network-types": {"org-openroadm-common-network:openroadm-common-ne...ork:administrative-state": "inService", "destination": {"dest-tp": "DEG2-CTP-TXRX", "dest-node": "ROADM-A1-DEG2"}}]}]}'
15:36:54 headers = {'User-Agent': 'python-requests/2.32.5', 'Accept-Encoding': 'gzip, deflate', 'Accept': 'application/json', 'Connection': 'keep-alive', 'Content-Type': 'application/json', 'Content-Length': '27660', 'Authorization': 'Basic YWRtaW46YWRtaW4='}
15:36:54 retries = Retry(total=0, connect=None, read=False, redirect=None, status=None)
15:36:54 redirect = False, assert_same_host = False
15:36:54 timeout = Timeout(connect=30, read=30, total=None), pool_timeout = None
15:36:54 release_conn = False, chunked = False, body_pos = None, preload_content = False
15:36:54 decode_content = False, response_kw = {}
15:36:54 parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/rests/data/ietf-network:networks/network=openroadm-topology', query=None, fragment=None)
15:36:54 destination_scheme = None, conn = None, release_this_conn = True
15:36:54 http_tunnel_required = False, err = None, clean_exit = False
15:36:54
15:36:54 def urlopen(  # type: ignore[override]
15:36:54     self,
15:36:54     method: str,
15:36:54     url: str,
15:36:54     body: _TYPE_BODY | None = None,
15:36:54     headers: typing.Mapping[str, str] | None = None,
15:36:54     retries: Retry | bool | int | None = None,
15:36:54     redirect: bool = True,
15:36:54     assert_same_host: bool = True,
15:36:54     timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT,
15:36:54     pool_timeout: int | None = None,
15:36:54     release_conn: bool | None = None,
15:36:54     chunked: bool = False,
15:36:54     body_pos: _TYPE_BODY_POSITION | None = None,
15:36:54     preload_content: bool = True,
15:36:54     decode_content: bool = True,
15:36:54     **response_kw: typing.Any,
15:36:54 ) -> BaseHTTPResponse:
15:36:54     """
15:36:54     Get a connection from the pool and perform an HTTP request. This is the
15:36:54     lowest level call for making a request, so you'll need to specify all
15:36:54     the raw details.
15:36:54
15:36:54     .. note::
15:36:54
15:36:54         More commonly, it's appropriate to use a convenience method
15:36:54         such as :meth:`request`.
15:36:54
15:36:54     .. note::
15:36:54
15:36:54         `release_conn` will only behave as expected if
15:36:54         `preload_content=False` because we want to make
15:36:54         `preload_content=False` the default behaviour someday soon without
15:36:54         breaking backwards compatibility.
15:36:54
15:36:54     :param method:
15:36:54         HTTP request method (such as GET, POST, PUT, etc.)
15:36:54
15:36:54     :param url:
15:36:54         The URL to perform the request on.
15:36:54
15:36:54     :param body:
15:36:54         Data to send in the request body, either :class:`str`, :class:`bytes`,
15:36:54         an iterable of :class:`str`/:class:`bytes`, or a file-like object.
15:36:54
15:36:54     :param headers:
15:36:54         Dictionary of custom headers to send, such as User-Agent,
15:36:54         If-None-Match, etc. If None, pool headers are used. If provided,
15:36:54         these headers completely replace any pool-specific headers.
15:36:54
15:36:54     :param retries:
15:36:54         Configure the number of retries to allow before raising a
15:36:54         :class:`~urllib3.exceptions.MaxRetryError` exception.
15:36:54
15:36:54         If ``None`` (default) will retry 3 times, see ``Retry.DEFAULT``. Pass a
15:36:54         :class:`~urllib3.util.retry.Retry` object for fine-grained control
15:36:54         over different types of retries.
15:36:54         Pass an integer number to retry connection errors that many times,
15:36:54         but no other types of errors. Pass zero to never retry.
15:36:54
15:36:54         If ``False``, then retries are disabled and any exception is raised
15:36:54         immediately. Also, instead of raising a MaxRetryError on redirects,
15:36:54         the redirect response will be returned.
15:36:54
15:36:54     :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int.
15:36:54
15:36:54     :param redirect:
15:36:54         If True, automatically handle redirects (status codes 301, 302,
15:36:54         303, 307, 308). Each redirect counts as a retry. Disabling retries
15:36:54         will disable redirect, too.
15:36:54
15:36:54     :param assert_same_host:
15:36:54         If ``True``, will make sure that the host of the pool requests is
15:36:54         consistent else will raise HostChangedError. When ``False``, you can
15:36:54         use the pool on an HTTP proxy and request foreign hosts.
15:36:54
15:36:54     :param timeout:
15:36:54         If specified, overrides the default timeout for this one
15:36:54         request. It may be a float (in seconds) or an instance of
15:36:54         :class:`urllib3.util.Timeout`.
15:36:54
15:36:54     :param pool_timeout:
15:36:54         If set and the pool is set to block=True, then this method will
15:36:54         block for ``pool_timeout`` seconds and raise EmptyPoolError if no
15:36:54         connection is available within the time period.
15:36:54
15:36:54     :param bool preload_content:
15:36:54         If True, the response's body will be preloaded into memory.
15:36:54
15:36:54     :param bool decode_content:
15:36:54         If True, will attempt to decode the body based on the
15:36:54         'content-encoding' header.
15:36:54
15:36:54     :param release_conn:
15:36:54         If False, then the urlopen call will not release the connection
15:36:54         back into the pool once a response is received (but will release if
15:36:54         you read the entire contents of the response such as when
15:36:54         `preload_content=True`). This is useful if you're not preloading
15:36:54         the response's content immediately. You will need to call
15:36:54         ``r.release_conn()`` on the response ``r`` to return the connection
15:36:54         back into the pool. If None, it takes the value of ``preload_content``
15:36:54         which defaults to ``True``.
15:36:54
15:36:54     :param bool chunked:
15:36:54         If True, urllib3 will send the body using chunked transfer
15:36:54         encoding. Otherwise, urllib3 will send the body using the standard
15:36:54         content-length form. Defaults to False.
15:36:54
15:36:54     :param int body_pos:
15:36:54         Position to seek to in file-like body in the event of a retry or
15:36:54         redirect. Typically this won't need to be set because urllib3 will
15:36:54         auto-populate the value when needed.
15:36:54     """
15:36:54     parsed_url = parse_url(url)
15:36:54     destination_scheme = parsed_url.scheme
15:36:54
15:36:54     if headers is None:
15:36:54         headers = self.headers
15:36:54
15:36:54     if not isinstance(retries, Retry):
15:36:54         retries = Retry.from_int(retries, redirect=redirect, default=self.retries)
15:36:54
15:36:54     if release_conn is None:
15:36:54         release_conn = preload_content
15:36:54
15:36:54     # Check host
15:36:54     if assert_same_host and not self.is_same_host(url):
15:36:54         raise HostChangedError(self, url, retries)
15:36:54
15:36:54     # Ensure that the URL we're connecting to is properly encoded
15:36:54     if url.startswith("/"):
15:36:54         url = to_str(_encode_target(url))
15:36:54     else:
15:36:54         url = to_str(parsed_url.url)
15:36:54
15:36:54     conn = None
15:36:54
15:36:54     # Track whether `conn` needs to be released before
15:36:54     # returning/raising/recursing. Update this variable if necessary, and
15:36:54     # leave `release_conn` constant throughout the function. That way, if
15:36:54     # the function recurses, the original value of `release_conn` will be
15:36:54     # passed down into the recursive call, and its value will be respected.
15:36:54     #
15:36:54     # See issue #651 [1] for details.
15:36:54     #
15:36:54     # [1]
15:36:54     release_this_conn = release_conn
15:36:54
15:36:54     http_tunnel_required = connection_requires_http_tunnel(
15:36:54         self.proxy, self.proxy_config, destination_scheme
15:36:54     )
15:36:54
15:36:54     # Merge the proxy headers. Only done when not using HTTP CONNECT. We
15:36:54     # have to copy the headers dict so we can safely change it without those
15:36:54     # changes being reflected in anyone else's copy.
15:36:54     if not http_tunnel_required:
15:36:54         headers = headers.copy()  # type: ignore[attr-defined]
15:36:54         headers.update(self.proxy_headers)  # type: ignore[union-attr]
15:36:54
15:36:54     # Must keep the exception bound to a separate variable or else Python 3
15:36:54     # complains about UnboundLocalError.
15:36:54     err = None
15:36:54
15:36:54     # Keep track of whether we cleanly exited the except block. This
15:36:54     # ensures we do proper cleanup in finally.
15:36:54     clean_exit = False
15:36:54
15:36:54     # Rewind body position, if needed. Record current position
15:36:54     # for future rewinds in the event of a redirect/retry.
15:36:54     body_pos = set_file_position(body, body_pos)
15:36:54
15:36:54     try:
15:36:54         # Request a connection from the queue.
15:36:54         timeout_obj = self._get_timeout(timeout)
15:36:54         conn = self._get_conn(timeout=pool_timeout)
15:36:54
15:36:54         conn.timeout = timeout_obj.connect_timeout  # type: ignore[assignment]
15:36:54
15:36:54         # Is this a closed/new connection that requires CONNECT tunnelling?
15:36:54         if self.proxy is not None and http_tunnel_required and conn.is_closed:
15:36:54             try:
15:36:54                 self._prepare_proxy(conn)
15:36:54             except (BaseSSLError, OSError, SocketTimeout) as e:
15:36:54                 self._raise_timeout(
15:36:54                     err=e, url=self.proxy.url, timeout_value=conn.timeout
15:36:54                 )
15:36:54                 raise
15:36:54
15:36:54         # If we're going to release the connection in ``finally:``, then
15:36:54         # the response doesn't need to know about the connection. Otherwise
15:36:54         # it will also try to release it and we'll have a double-release
15:36:54         # mess.
15:36:54         response_conn = conn if not release_conn else None
15:36:54
15:36:54         # Make the request on the HTTPConnection object
15:36:54 >       response = self._make_request(
15:36:54             conn,
15:36:54             method,
15:36:54             url,
15:36:54             timeout=timeout_obj,
15:36:54             body=body,
15:36:54             headers=headers,
15:36:54             chunked=chunked,
15:36:54             retries=retries,
15:36:54             response_conn=response_conn,
15:36:54             preload_content=preload_content,
15:36:54             decode_content=decode_content,
15:36:54             **response_kw,
15:36:54         )
15:36:54
15:36:54 ../.tox/testsPCE/lib/python3.11/site-packages/urllib3/connectionpool.py:787:
15:36:54 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
15:36:54 ../.tox/testsPCE/lib/python3.11/site-packages/urllib3/connectionpool.py:493: in _make_request
15:36:54     conn.request(
15:36:54 ../.tox/testsPCE/lib/python3.11/site-packages/urllib3/connection.py:500: in request
15:36:54     self.endheaders()
15:36:54 /opt/pyenv/versions/3.11.10/lib/python3.11/http/client.py:1298: in endheaders
15:36:54     self._send_output(message_body, encode_chunked=encode_chunked)
15:36:54 /opt/pyenv/versions/3.11.10/lib/python3.11/http/client.py:1058: in _send_output
15:36:54     self.send(msg)
15:36:54 /opt/pyenv/versions/3.11.10/lib/python3.11/http/client.py:996: in send
15:36:54     self.connect()
15:36:54 ../.tox/testsPCE/lib/python3.11/site-packages/urllib3/connection.py:331: in connect
15:36:54     self.sock = self._new_conn()
15:36:54     ^^^^^^^^^^^^^^^^
15:36:54 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
15:36:54
15:36:54 self =
15:36:54
15:36:54 def _new_conn(self) -> socket.socket:
15:36:54     """Establish a socket connection and set nodelay settings on it.
15:36:54
15:36:54     :return: New socket connection.
15:36:54     """
15:36:54     try:
15:36:54         sock = connection.create_connection(
15:36:54             (self._dns_host, self.port),
15:36:54             self.timeout,
15:36:54             source_address=self.source_address,
15:36:54             socket_options=self.socket_options,
15:36:54         )
15:36:54     except socket.gaierror as e:
15:36:54         raise NameResolutionError(self.host, self, e) from e
15:36:54     except SocketTimeout as e:
15:36:54         raise ConnectTimeoutError(
15:36:54             self,
15:36:54             f"Connection to {self.host} timed out. (connect timeout={self.timeout})",
15:36:54         ) from e
15:36:54
15:36:54     except OSError as e:
15:36:54 >       raise NewConnectionError(
15:36:54             self, f"Failed to establish a new connection: {e}"
15:36:54         ) from e
15:36:54 E       urllib3.exceptions.NewConnectionError: HTTPConnection(host='localhost', port=8181): Failed to establish a new connection: [Errno 111] Connection refused
15:36:54
15:36:54 ../.tox/testsPCE/lib/python3.11/site-packages/urllib3/connection.py:219: NewConnectionError
15:36:54
15:36:54 The above exception was the direct cause of the following exception:
15:36:54
15:36:54 self =
15:36:54 request = , stream = False
15:36:54 timeout = Timeout(connect=30, read=30, total=None), verify = True, cert = None
15:36:54 proxies = OrderedDict()
15:36:54
15:36:54 def send(
15:36:54     self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None
15:36:54 ):
15:36:54     """Sends PreparedRequest object. Returns Response object.
15:36:54
15:36:54     :param request: The :class:`PreparedRequest ` being sent.
15:36:54     :param stream: (optional) Whether to stream the request content.
15:36:54     :param timeout: (optional) How long to wait for the server to send
15:36:54         data before giving up, as a float, or a :ref:`(connect timeout,
15:36:54         read timeout) ` tuple.
15:36:54     :type timeout: float or tuple or urllib3 Timeout object
15:36:54     :param verify: (optional) Either a boolean, in which case it controls whether
15:36:54         we verify the server's TLS certificate, or a string, in which case it
15:36:54         must be a path to a CA bundle to use
15:36:54     :param cert: (optional) Any user-provided SSL certificate to be trusted.
15:36:54     :param proxies: (optional) The proxies dictionary to apply to the request.
15:36:54     :rtype: requests.Response
15:36:54     """
15:36:54
15:36:54     try:
15:36:54         conn = self.get_connection_with_tls_context(
15:36:54             request, verify, proxies=proxies, cert=cert
15:36:54         )
15:36:54     except LocationValueError as e:
15:36:54         raise InvalidURL(e, request=request)
15:36:54
15:36:54     self.cert_verify(conn, request.url, verify, cert)
15:36:54     url = self.request_url(request, proxies)
15:36:54     self.add_headers(
15:36:54         request,
15:36:54         stream=stream,
15:36:54         timeout=timeout,
15:36:54         verify=verify,
15:36:54         cert=cert,
15:36:54         proxies=proxies,
15:36:54     )
15:36:54
15:36:54     chunked = not (request.body is None or "Content-Length" in request.headers)
15:36:54
15:36:54     if isinstance(timeout, tuple):
15:36:54         try:
15:36:54             connect, read = timeout
15:36:54             timeout = TimeoutSauce(connect=connect, read=read)
15:36:54         except ValueError:
15:36:54             raise ValueError(
15:36:54                 f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, "
15:36:54                 f"or a single float to set both timeouts to the same value."
15:36:54             )
15:36:54     elif isinstance(timeout, TimeoutSauce):
15:36:54         pass
15:36:54     else:
15:36:54         timeout = TimeoutSauce(connect=timeout, read=timeout)
15:36:54
15:36:54     try:
15:36:54 >       resp = conn.urlopen(
15:36:54             method=request.method,
15:36:54             url=url,
15:36:54             body=request.body,
15:36:54             headers=request.headers,
15:36:54             redirect=False,
15:36:54             assert_same_host=False,
15:36:54             preload_content=False,
15:36:54             decode_content=False,
15:36:54             retries=self.max_retries,
15:36:54             timeout=timeout,
15:36:54             chunked=chunked,
15:36:54         )
15:36:54
15:36:54 ../.tox/testsPCE/lib/python3.11/site-packages/requests/adapters.py:644:
15:36:54 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
15:36:54 ../.tox/testsPCE/lib/python3.11/site-packages/urllib3/connectionpool.py:841: in urlopen
15:36:54     retries = retries.increment(
15:36:54 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
15:36:54
15:36:54 self = Retry(total=0, connect=None, read=False, redirect=None, status=None)
15:36:54 method = 'PUT'
15:36:54 url = '/rests/data/ietf-network:networks/network=openroadm-topology'
15:36:54 response = None
15:36:54 error = NewConnectionError("HTTPConnection(host='localhost', port=8181): Failed to establish a new connection: [Errno 111] Connection refused")
15:36:54 _pool =
15:36:54 _stacktrace =
15:36:54
15:36:54 def increment(
15:36:54     self,
15:36:54     method: str | None = None,
15:36:54     url: str | None = None,
15:36:54     response: BaseHTTPResponse | None = None,
15:36:54     error: Exception | None = None,
15:36:54     _pool: ConnectionPool | None = None,
15:36:54     _stacktrace: TracebackType | None = None,
15:36:54 ) -> Self:
15:36:54     """Return a new Retry object with incremented retry counters.
15:36:54
15:36:54     :param response: A response object, or None, if the server did not
15:36:54         return a response.
15:36:54     :type response: :class:`~urllib3.response.BaseHTTPResponse`
15:36:54     :param Exception error: An error encountered during the request, or
15:36:54         None if the response was received successfully.
15:36:54
15:36:54     :return: A new ``Retry`` object.
15:36:54     """
15:36:54     if self.total is False and error:
15:36:54         # Disabled, indicate to re-raise the error.
15:36:54         raise reraise(type(error), error, _stacktrace)
15:36:54
15:36:54     total = self.total
15:36:54     if total is not None:
15:36:54         total -= 1
15:36:54
15:36:54     connect = self.connect
15:36:54     read = self.read
15:36:54     redirect = self.redirect
15:36:54     status_count = self.status
15:36:54     other = self.other
15:36:54     cause = "unknown"
15:36:54     status = None
15:36:54     redirect_location = None
15:36:54
15:36:54     if error and self._is_connection_error(error):
15:36:54         # Connect retry?
15:36:54         if connect is False:
15:36:54             raise reraise(type(error), error, _stacktrace)
15:36:54         elif connect is not None:
15:36:54             connect -= 1
15:36:54
15:36:54     elif error and self._is_read_error(error):
15:36:54         # Read retry?
15:36:54         if read is False or method is None or not self._is_method_retryable(method):
15:36:54             raise reraise(type(error), error, _stacktrace)
15:36:54         elif read is not None:
15:36:54             read -= 1
15:36:54
15:36:54     elif error:
15:36:54         # Other retry?
15:36:54         if other is not None:
15:36:54             other -= 1
15:36:54
15:36:54     elif response and response.get_redirect_location():
15:36:54         # Redirect retry?
15:36:54         if redirect is not None:
15:36:54             redirect -= 1
15:36:54         cause = "too many redirects"
15:36:54         response_redirect_location = response.get_redirect_location()
15:36:54         if response_redirect_location:
15:36:54             redirect_location = response_redirect_location
15:36:54         status = response.status
15:36:54
15:36:54     else:
15:36:54         # Incrementing because of a server error like a 500 in
15:36:54         # status_forcelist and the given method is in the allowed_methods
15:36:54         cause = ResponseError.GENERIC_ERROR
15:36:54         if response and response.status:
15:36:54             if status_count is not None:
15:36:54                 status_count -= 1
15:36:54             cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status)
15:36:54             status = response.status
15:36:54
15:36:54     history = self.history + (
15:36:54         RequestHistory(method, url, error, status, redirect_location),
15:36:54     )
15:36:54
15:36:54     new_retry = self.new(
15:36:54         total=total,
15:36:54         connect=connect,
15:36:54         read=read,
15:36:54         redirect=redirect,
15:36:54         status=status_count,
15:36:54         other=other,
15:36:54         history=history,
15:36:54     )
15:36:54
15:36:54     if new_retry.is_exhausted():
15:36:54         reason = error or ResponseError(cause)
15:36:54 >       raise MaxRetryError(_pool, url, reason) from reason  # type: ignore[arg-type]
15:36:54         ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
15:36:54 E   urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=8181): Max retries exceeded with url: /rests/data/ietf-network:networks/network=openroadm-topology (Caused by NewConnectionError("HTTPConnection(host='localhost', port=8181): Failed to establish a new connection: [Errno 111] Connection refused"))
15:36:54
15:36:54 ../.tox/testsPCE/lib/python3.11/site-packages/urllib3/util/retry.py:535: MaxRetryError
15:36:54
15:36:54 During handling of the above exception, another exception occurred:
15:36:54
15:36:54 self =
15:36:54
15:36:54 def test_11_load_openroadm_topology_bi_cfg(self):
15:36:54 >   response = test_utils.put_ietf_network('openroadm-topology', self.topo_bi_dir_data)
15:36:54     ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
15:36:54
15:36:54 transportpce_tests/pce/test02_pce_400G.py:327:
15:36:54 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
15:36:54 transportpce_tests/common/test_utils.py:578: in put_ietf_network
15:36:54     response = put_request(url[RESTCONF_VERSION].format('{}', network), json_payload)
15:36:54     ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
15:36:54 transportpce_tests/common/test_utils.py:125: in put_request
15:36:54     return requests.request(
15:36:54 ../.tox/testsPCE/lib/python3.11/site-packages/requests/api.py:59: in request
15:36:54     return session.request(method=method, url=url, **kwargs)
15:36:54     ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
15:36:54 ../.tox/testsPCE/lib/python3.11/site-packages/requests/sessions.py:589: in request
15:36:54     resp = self.send(prep, **send_kwargs)
15:36:54     ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
15:36:54 ../.tox/testsPCE/lib/python3.11/site-packages/requests/sessions.py:703: in send
15:36:54     r = adapter.send(request, **kwargs)
15:36:54     ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
15:36:54 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
15:36:54
15:36:54 self =
15:36:54 request = , stream = False
15:36:54 timeout = Timeout(connect=30, read=30, total=None), verify = True, cert = None
15:36:54 proxies = OrderedDict()
15:36:54
15:36:54 def send(
15:36:54     self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None
15:36:54 ):
15:36:54     """Sends PreparedRequest object. Returns Response object.
15:36:54
15:36:54     :param request: The :class:`PreparedRequest ` being sent.
15:36:54     :param stream: (optional) Whether to stream the request content.
15:36:54 :param timeout: (optional) How long to wait for the server to send
15:36:54 data before giving up, as a float, or a :ref:`(connect timeout,
15:36:54 read timeout) ` tuple.
15:36:54 :type timeout: float or tuple or urllib3 Timeout object
15:36:54 :param verify: (optional) Either a boolean, in which case it controls whether
15:36:54 we verify the server's TLS certificate, or a string, in which case it
15:36:54 must be a path to a CA bundle to use
15:36:54 :param cert: (optional) Any user-provided SSL certificate to be trusted.
15:36:54 :param proxies: (optional) The proxies dictionary to apply to the request.
15:36:54 :rtype: requests.Response
15:36:54 """
15:36:54 
15:36:54 try:
15:36:54 conn = self.get_connection_with_tls_context(
15:36:54 request, verify, proxies=proxies, cert=cert
15:36:54 )
15:36:54 except LocationValueError as e:
15:36:54 raise InvalidURL(e, request=request)
15:36:54 
15:36:54 self.cert_verify(conn, request.url, verify, cert)
15:36:54 url = self.request_url(request, proxies)
15:36:54 self.add_headers(
15:36:54 request,
15:36:54 stream=stream,
15:36:54 timeout=timeout,
15:36:54 verify=verify,
15:36:54 cert=cert,
15:36:54 proxies=proxies,
15:36:54 )
15:36:54 
15:36:54 chunked = not (request.body is None or "Content-Length" in request.headers)
15:36:54 
15:36:54 if isinstance(timeout, tuple):
15:36:54 try:
15:36:54 connect, read = timeout
15:36:54 timeout = TimeoutSauce(connect=connect, read=read)
15:36:54 except ValueError:
15:36:54 raise ValueError(
15:36:54 f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, "
15:36:54 f"or a single float to set both timeouts to the same value."
15:36:54 )
15:36:54 elif isinstance(timeout, TimeoutSauce):
15:36:54 pass
15:36:54 else:
15:36:54 timeout = TimeoutSauce(connect=timeout, read=timeout)
15:36:54 
15:36:54 try:
15:36:54 resp = conn.urlopen(
15:36:54 method=request.method,
15:36:54 url=url,
15:36:54 body=request.body,
15:36:54 headers=request.headers,
15:36:54 redirect=False,
15:36:54 assert_same_host=False,
15:36:54 preload_content=False,
15:36:54 decode_content=False,
15:36:54 retries=self.max_retries,
15:36:54 timeout=timeout,
15:36:54 chunked=chunked,
15:36:54 )
15:36:54 
15:36:54 except (ProtocolError, OSError) as err:
15:36:54 raise ConnectionError(err, request=request)
15:36:54 
15:36:54 except MaxRetryError as e:
15:36:54 if isinstance(e.reason, ConnectTimeoutError):
15:36:54 # TODO: Remove this in 3.0.0: see #2811
15:36:54 if not isinstance(e.reason, NewConnectionError):
15:36:54 raise ConnectTimeout(e, request=request)
15:36:54 
15:36:54 if isinstance(e.reason, ResponseError):
15:36:54 raise RetryError(e, request=request)
15:36:54 
15:36:54 if isinstance(e.reason, _ProxyError):
15:36:54 raise ProxyError(e, request=request)
15:36:54 
15:36:54 if isinstance(e.reason, _SSLError):
15:36:54 # This branch is for urllib3 v1.22 and later.
15:36:54 raise SSLError(e, request=request)
15:36:54 
15:36:54 > raise ConnectionError(e, request=request)
15:36:54 E requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=8181): Max retries exceeded with url: /rests/data/ietf-network:networks/network=openroadm-topology (Caused by NewConnectionError("HTTPConnection(host='localhost', port=8181): Failed to establish a new connection: [Errno 111] Connection refused"))
15:36:54 
15:36:54 ../.tox/testsPCE/lib/python3.11/site-packages/requests/adapters.py:677: ConnectionError
15:36:54 ----------------------------- Captured stdout call -----------------------------
15:36:54 execution of test_11_load_openroadm_topology_bi_cfg
15:36:54 ______ TestTransportPCEPce400g.test_12_path_computation_400G_xpdr_bi_cfg _______
15:36:54 
15:36:54 self = 
15:36:54 
15:36:54 def _new_conn(self) -> socket.socket:
15:36:54 """Establish a socket connection and set nodelay settings on it.
15:36:54 
15:36:54 :return: New socket connection.
15:36:54 """
15:36:54 try:
15:36:54 > sock = connection.create_connection(
15:36:54 (self._dns_host, self.port),
15:36:54 self.timeout,
15:36:54 source_address=self.source_address,
15:36:54 socket_options=self.socket_options,
15:36:54 )
15:36:54 
15:36:54 ../.tox/testsPCE/lib/python3.11/site-packages/urllib3/connection.py:204: 
15:36:54 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
15:36:54 ../.tox/testsPCE/lib/python3.11/site-packages/urllib3/util/connection.py:85: in create_connection
15:36:54 raise err
15:36:54 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
15:36:54 
15:36:54 address = ('localhost', 8181), timeout = 30, source_address = None
15:36:54 socket_options = [(6, 1, 1)]
15:36:54 
15:36:54 def create_connection(
15:36:54 address: tuple[str, int],
15:36:54 timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT,
15:36:54 source_address: tuple[str, int] | None = None,
15:36:54 socket_options: _TYPE_SOCKET_OPTIONS | None = None,
15:36:54 ) -> socket.socket:
15:36:54 """Connect to *address* and return the socket object.
15:36:54 
15:36:54 Convenience function. Connect to *address* (a 2-tuple ``(host,
15:36:54 port)``) and return the socket object. Passing the optional
15:36:54 *timeout* parameter will set the timeout on the socket instance
15:36:54 before attempting to connect. If no *timeout* is supplied, the
15:36:54 global default timeout setting returned by :func:`socket.getdefaulttimeout`
15:36:54 is used. If *source_address* is set it must be a tuple of (host, port)
15:36:54 for the socket to bind as a source address before making the connection.
15:36:54 An host of '' or port 0 tells the OS to use the default.
15:36:54 """
15:36:54 
15:36:54 host, port = address
15:36:54 if host.startswith("["):
15:36:54 host = host.strip("[]")
15:36:54 err = None
15:36:54 
15:36:54 # Using the value from allowed_gai_family() in the context of getaddrinfo lets
15:36:54 # us select whether to work with IPv4 DNS records, IPv6 records, or both.
15:36:54 # The original create_connection function always returns all records.
15:36:54 family = allowed_gai_family()
15:36:54 
15:36:54 try:
15:36:54 host.encode("idna")
15:36:54 except UnicodeError:
15:36:54 raise LocationParseError(f"'{host}', label empty or too long") from None
15:36:54 
15:36:54 for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM):
15:36:54 af, socktype, proto, canonname, sa = res
15:36:54 sock = None
15:36:54 try:
15:36:54 sock = socket.socket(af, socktype, proto)
15:36:54 
15:36:54 # If provided, set socket level options before connecting.
15:36:54 _set_socket_options(sock, socket_options)
15:36:54 
15:36:54 if timeout is not _DEFAULT_TIMEOUT:
15:36:54 sock.settimeout(timeout)
15:36:54 if source_address:
15:36:54 sock.bind(source_address)
15:36:54 > sock.connect(sa)
15:36:54 E ConnectionRefusedError: [Errno 111] Connection refused
15:36:54 
15:36:54 ../.tox/testsPCE/lib/python3.11/site-packages/urllib3/util/connection.py:73: ConnectionRefusedError
15:36:54 
15:36:54 The above exception was the direct cause of the following exception:
15:36:54 
15:36:54 self = 
15:36:54 method = 'POST'
15:36:54 url = '/rests/operations/transportpce-pce:path-computation-request'
15:36:54 body = '{"input": {"service-name": "service-1", "resource-reserve": "true", "service-handler-header": {"request-id": "request...ate": "400", "clli": "nodeC", "service-format": "Ethernet", "node-id": "XPDR-C2"}, "pce-routing-metric": "hop-count"}}'
15:36:54 headers = {'User-Agent': 'python-requests/2.32.5', 'Accept-Encoding': 'gzip, deflate', 'Accept': 'application/json', 'Connection': 'keep-alive', 'Content-Type': 'application/json', 'Content-Length': '379', 'Authorization': 'Basic YWRtaW46YWRtaW4='}
15:36:54 retries = Retry(total=0, connect=None, read=False, redirect=None, status=None)
15:36:54 redirect = False, assert_same_host = False
15:36:54 timeout = Timeout(connect=30, read=30, total=None), pool_timeout = None
15:36:54 release_conn = False, chunked = False, body_pos = None, preload_content = False
15:36:54 decode_content = False, response_kw = {}
15:36:54 parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/rests/operations/transportpce-pce:path-computation-request', query=None, fragment=None)
15:36:54 destination_scheme = None, conn = None, release_this_conn = True
15:36:54 http_tunnel_required = False, err = None, clean_exit = False
15:36:54 
15:36:54 def urlopen( # type: ignore[override]
15:36:54 self,
15:36:54 method: str,
15:36:54 url: str,
15:36:54 body: _TYPE_BODY | None = None,
15:36:54 headers: typing.Mapping[str, str] | None = None,
15:36:54 retries: Retry | bool | int | None = None,
15:36:54 redirect: bool = True,
15:36:54 assert_same_host: bool = True,
15:36:54 timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT,
15:36:54 pool_timeout: int | None = None,
15:36:54 release_conn: bool | None = None,
15:36:54 chunked: bool = False,
15:36:54 body_pos: _TYPE_BODY_POSITION | None = None,
15:36:54 preload_content: bool = True,
15:36:54 decode_content: bool = True,
15:36:54 **response_kw: typing.Any,
15:36:54 ) -> BaseHTTPResponse:
15:36:54 """
15:36:54 Get a connection from the pool and perform an HTTP request. This is the
15:36:54 lowest level call for making a request, so you'll need to specify all
15:36:54 the raw details.
15:36:54 
15:36:54 .. note::
15:36:54 
15:36:54 More commonly, it's appropriate to use a convenience method
15:36:54 such as :meth:`request`.
15:36:54 
15:36:54 .. note::
15:36:54 
15:36:54 `release_conn` will only behave as expected if
15:36:54 `preload_content=False` because we want to make
15:36:54 `preload_content=False` the default behaviour someday soon without
15:36:54 breaking backwards compatibility.
15:36:54 
15:36:54 :param method:
15:36:54 HTTP request method (such as GET, POST, PUT, etc.)
15:36:54 
15:36:54 :param url:
15:36:54 The URL to perform the request on.
15:36:54 
15:36:54 :param body:
15:36:54 Data to send in the request body, either :class:`str`, :class:`bytes`,
15:36:54 an iterable of :class:`str`/:class:`bytes`, or a file-like object.
15:36:54 
15:36:54 :param headers:
15:36:54 Dictionary of custom headers to send, such as User-Agent,
15:36:54 If-None-Match, etc. If None, pool headers are used. If provided,
15:36:54 these headers completely replace any pool-specific headers.
15:36:54 
15:36:54 :param retries:
15:36:54 Configure the number of retries to allow before raising a
15:36:54 :class:`~urllib3.exceptions.MaxRetryError` exception.
15:36:54 
15:36:54 If ``None`` (default) will retry 3 times, see ``Retry.DEFAULT``. Pass a
15:36:54 :class:`~urllib3.util.retry.Retry` object for fine-grained control
15:36:54 over different types of retries.
15:36:54 Pass an integer number to retry connection errors that many times,
15:36:54 but no other types of errors. Pass zero to never retry.
15:36:54 
15:36:54 If ``False``, then retries are disabled and any exception is raised
15:36:54 immediately. Also, instead of raising a MaxRetryError on redirects,
15:36:54 the redirect response will be returned.
15:36:54 
15:36:54 :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int.
15:36:54 
15:36:54 :param redirect:
15:36:54 If True, automatically handle redirects (status codes 301, 302,
15:36:54 303, 307, 308). Each redirect counts as a retry. Disabling retries
15:36:54 will disable redirect, too.
15:36:54 
15:36:54 :param assert_same_host:
15:36:54 If ``True``, will make sure that the host of the pool requests is
15:36:54 consistent else will raise HostChangedError. When ``False``, you can
15:36:54 use the pool on an HTTP proxy and request foreign hosts.
15:36:54 
15:36:54 :param timeout:
15:36:54 If specified, overrides the default timeout for this one
15:36:54 request. It may be a float (in seconds) or an instance of
15:36:54 :class:`urllib3.util.Timeout`.
15:36:54 
15:36:54 :param pool_timeout:
15:36:54 If set and the pool is set to block=True, then this method will
15:36:54 block for ``pool_timeout`` seconds and raise EmptyPoolError if no
15:36:54 connection is available within the time period.
15:36:54 
15:36:54 :param bool preload_content:
15:36:54 If True, the response's body will be preloaded into memory.
15:36:54 
15:36:54 :param bool decode_content:
15:36:54 If True, will attempt to decode the body based on the
15:36:54 'content-encoding' header.
15:36:54 
15:36:54 :param release_conn:
15:36:54 If False, then the urlopen call will not release the connection
15:36:54 back into the pool once a response is received (but will release if
15:36:54 you read the entire contents of the response such as when
15:36:54 `preload_content=True`). This is useful if you're not preloading
15:36:54 the response's content immediately. You will need to call
15:36:54 ``r.release_conn()`` on the response ``r`` to return the connection
15:36:54 back into the pool. If None, it takes the value of ``preload_content``
15:36:54 which defaults to ``True``.
15:36:54 
15:36:54 :param bool chunked:
15:36:54 If True, urllib3 will send the body using chunked transfer
15:36:54 encoding. Otherwise, urllib3 will send the body using the standard
15:36:54 content-length form. Defaults to False.
15:36:54 
15:36:54 :param int body_pos:
15:36:54 Position to seek to in file-like body in the event of a retry or
15:36:54 redirect. Typically this won't need to be set because urllib3 will
15:36:54 auto-populate the value when needed.
15:36:54 """
15:36:54 parsed_url = parse_url(url)
15:36:54 destination_scheme = parsed_url.scheme
15:36:54 
15:36:54 if headers is None:
15:36:54 headers = self.headers
15:36:54 
15:36:54 if not isinstance(retries, Retry):
15:36:54 retries = Retry.from_int(retries, redirect=redirect, default=self.retries)
15:36:54 
15:36:54 if release_conn is None:
15:36:54 release_conn = preload_content
15:36:54 
15:36:54 # Check host
15:36:54 if assert_same_host and not self.is_same_host(url):
15:36:54 raise HostChangedError(self, url, retries)
15:36:54 
15:36:54 # Ensure that the URL we're connecting to is properly encoded
15:36:54 if url.startswith("/"):
15:36:54 url = to_str(_encode_target(url))
15:36:54 else:
15:36:54 url = to_str(parsed_url.url)
15:36:54 
15:36:54 conn = None
15:36:54 
15:36:54 # Track whether `conn` needs to be released before
15:36:54 # returning/raising/recursing. Update this variable if necessary, and
15:36:54 # leave `release_conn` constant throughout the function. That way, if
15:36:54 # the function recurses, the original value of `release_conn` will be
15:36:54 # passed down into the recursive call, and its value will be respected.
15:36:54 #
15:36:54 # See issue #651 [1] for details.
15:36:54 #
15:36:54 # [1] 
15:36:54 release_this_conn = release_conn
15:36:54 
15:36:54 http_tunnel_required = connection_requires_http_tunnel(
15:36:54 self.proxy, self.proxy_config, destination_scheme
15:36:54 )
15:36:54 
15:36:54 # Merge the proxy headers. Only done when not using HTTP CONNECT. We
15:36:54 # have to copy the headers dict so we can safely change it without those
15:36:54 # changes being reflected in anyone else's copy.
15:36:54 if not http_tunnel_required:
15:36:54 headers = headers.copy() # type: ignore[attr-defined]
15:36:54 headers.update(self.proxy_headers) # type: ignore[union-attr]
15:36:54 
15:36:54 # Must keep the exception bound to a separate variable or else Python 3
15:36:54 # complains about UnboundLocalError.
15:36:54 err = None
15:36:54 
15:36:54 # Keep track of whether we cleanly exited the except block. This
15:36:54 # ensures we do proper cleanup in finally.
15:36:54 clean_exit = False
15:36:54 
15:36:54 # Rewind body position, if needed. Record current position
15:36:54 # for future rewinds in the event of a redirect/retry.
15:36:54 body_pos = set_file_position(body, body_pos)
15:36:54 
15:36:54 try:
15:36:54 # Request a connection from the queue.
15:36:54 timeout_obj = self._get_timeout(timeout)
15:36:54 conn = self._get_conn(timeout=pool_timeout)
15:36:54 
15:36:54 conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment]
15:36:54 
15:36:54 # Is this a closed/new connection that requires CONNECT tunnelling?
15:36:54 if self.proxy is not None and http_tunnel_required and conn.is_closed:
15:36:54 try:
15:36:54 self._prepare_proxy(conn)
15:36:54 except (BaseSSLError, OSError, SocketTimeout) as e:
15:36:54 self._raise_timeout(
15:36:54 err=e, url=self.proxy.url, timeout_value=conn.timeout
15:36:54 )
15:36:54 raise
15:36:54 
15:36:54 # If we're going to release the connection in ``finally:``, then
15:36:54 # the response doesn't need to know about the connection. Otherwise
15:36:54 # it will also try to release it and we'll have a double-release
15:36:54 # mess.
15:36:54 response_conn = conn if not release_conn else None
15:36:54 
15:36:54 # Make the request on the HTTPConnection object
15:36:54 > response = self._make_request(
15:36:54 conn,
15:36:54 method,
15:36:54 url,
15:36:54 timeout=timeout_obj,
15:36:54 body=body,
15:36:54 headers=headers,
15:36:54 chunked=chunked,
15:36:54 retries=retries,
15:36:54 response_conn=response_conn,
15:36:54 preload_content=preload_content,
15:36:54 decode_content=decode_content,
15:36:54 **response_kw,
15:36:54 )
15:36:54 
15:36:54 ../.tox/testsPCE/lib/python3.11/site-packages/urllib3/connectionpool.py:787: 
15:36:54 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
15:36:54 ../.tox/testsPCE/lib/python3.11/site-packages/urllib3/connectionpool.py:493: in _make_request
15:36:54 conn.request(
15:36:54 ../.tox/testsPCE/lib/python3.11/site-packages/urllib3/connection.py:500: in request
15:36:54 self.endheaders()
15:36:54 /opt/pyenv/versions/3.11.10/lib/python3.11/http/client.py:1298: in endheaders
15:36:54 self._send_output(message_body, encode_chunked=encode_chunked)
15:36:54 /opt/pyenv/versions/3.11.10/lib/python3.11/http/client.py:1058: in _send_output
15:36:54 self.send(msg)
15:36:54 /opt/pyenv/versions/3.11.10/lib/python3.11/http/client.py:996: in send
15:36:54 self.connect()
15:36:54 ../.tox/testsPCE/lib/python3.11/site-packages/urllib3/connection.py:331: in connect
15:36:54 self.sock = self._new_conn()
15:36:54 ^^^^^^^^^^^^^^^^
15:36:54 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
15:36:54 
15:36:54 self = 
15:36:54 
15:36:54 def _new_conn(self) -> socket.socket:
15:36:54 """Establish a socket connection and set nodelay settings on it.
15:36:54 
15:36:54 :return: New socket connection.
15:36:54 """
15:36:54 try:
15:36:54 sock = connection.create_connection(
15:36:54 (self._dns_host, self.port),
15:36:54 self.timeout,
15:36:54 source_address=self.source_address,
15:36:54 socket_options=self.socket_options,
15:36:54 )
15:36:54 except socket.gaierror as e:
15:36:54 raise NameResolutionError(self.host, self, e) from e
15:36:54 except SocketTimeout as e:
15:36:54 raise ConnectTimeoutError(
15:36:54 self,
15:36:54 f"Connection to {self.host} timed out. (connect timeout={self.timeout})",
15:36:54 ) from e
15:36:54 
15:36:54 except OSError as e:
15:36:54 > raise NewConnectionError(
15:36:54 self, f"Failed to establish a new connection: {e}"
15:36:54 ) from e
15:36:54 E urllib3.exceptions.NewConnectionError: HTTPConnection(host='localhost', port=8181): Failed to establish a new connection: [Errno 111] Connection refused
15:36:54 
15:36:54 ../.tox/testsPCE/lib/python3.11/site-packages/urllib3/connection.py:219: NewConnectionError
15:36:54 
15:36:54 The above exception was the direct cause of the following exception:
15:36:54 
15:36:54 self = 
15:36:54 request = , stream = False
15:36:54 timeout = Timeout(connect=30, read=30, total=None), verify = True, cert = None
15:36:54 proxies = OrderedDict()
15:36:54 
15:36:54 def send(
15:36:54 self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None
15:36:54 ):
15:36:54 """Sends PreparedRequest object. Returns Response object.
15:36:54 
15:36:54 :param request: The :class:`PreparedRequest ` being sent.
15:36:54 :param stream: (optional) Whether to stream the request content.
15:36:54 :param timeout: (optional) How long to wait for the server to send
15:36:54 data before giving up, as a float, or a :ref:`(connect timeout,
15:36:54 read timeout) ` tuple.
15:36:54 :type timeout: float or tuple or urllib3 Timeout object
15:36:54 :param verify: (optional) Either a boolean, in which case it controls whether
15:36:54 we verify the server's TLS certificate, or a string, in which case it
15:36:54 must be a path to a CA bundle to use
15:36:54 :param cert: (optional) Any user-provided SSL certificate to be trusted.
15:36:54 :param proxies: (optional) The proxies dictionary to apply to the request.
15:36:54 :rtype: requests.Response
15:36:54 """
15:36:54 
15:36:54 try:
15:36:54 conn = self.get_connection_with_tls_context(
15:36:54 request, verify, proxies=proxies, cert=cert
15:36:54 )
15:36:54 except LocationValueError as e:
15:36:54 raise InvalidURL(e, request=request)
15:36:54 
15:36:54 self.cert_verify(conn, request.url, verify, cert)
15:36:54 url = self.request_url(request, proxies)
15:36:54 self.add_headers(
15:36:54 request,
15:36:54 stream=stream,
15:36:54 timeout=timeout,
15:36:54 verify=verify,
15:36:54 cert=cert,
15:36:54 proxies=proxies,
15:36:54 )
15:36:54 
15:36:54 chunked = not (request.body is None or "Content-Length" in request.headers)
15:36:54 
15:36:54 if isinstance(timeout, tuple):
15:36:54 try:
15:36:54 connect, read = timeout
15:36:54 timeout = TimeoutSauce(connect=connect, read=read)
15:36:54 except ValueError:
15:36:54 raise ValueError(
15:36:54 f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, "
15:36:54 f"or a single float to set both timeouts to the same value."
15:36:54 )
15:36:54 elif isinstance(timeout, TimeoutSauce):
15:36:54 pass
15:36:54 else:
15:36:54 timeout = TimeoutSauce(connect=timeout, read=timeout)
15:36:54 
15:36:54 try:
15:36:54 > resp = conn.urlopen(
15:36:54 method=request.method,
15:36:54 url=url,
15:36:54 body=request.body,
15:36:54 headers=request.headers,
15:36:54 redirect=False,
15:36:54 assert_same_host=False,
15:36:54 preload_content=False,
15:36:54 decode_content=False,
15:36:54 retries=self.max_retries,
15:36:54 timeout=timeout,
15:36:54 chunked=chunked,
15:36:54 )
15:36:54 
15:36:54 ../.tox/testsPCE/lib/python3.11/site-packages/requests/adapters.py:644: 
15:36:54 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
15:36:54 ../.tox/testsPCE/lib/python3.11/site-packages/urllib3/connectionpool.py:841: in urlopen
15:36:54 retries = retries.increment(
15:36:54 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
15:36:54 
15:36:54 self = Retry(total=0, connect=None, read=False, redirect=None, status=None)
15:36:54 method = 'POST'
15:36:54 url = '/rests/operations/transportpce-pce:path-computation-request'
15:36:54 response = None
15:36:54 error = NewConnectionError("HTTPConnection(host='localhost', port=8181): Failed to establish a new connection: [Errno 111] Connection refused")
15:36:54 _pool = 
15:36:54 _stacktrace = 
15:36:54 
15:36:54 def increment(
15:36:54 self,
15:36:54 method: str | None = None,
15:36:54 url: str | None = None,
15:36:54 response: BaseHTTPResponse | None = None,
15:36:54 error: Exception | None = None,
15:36:54 _pool: ConnectionPool | None = None,
15:36:54 _stacktrace: TracebackType | None = None,
15:36:54 ) -> Self:
15:36:54 """Return a new Retry object with incremented retry counters.
15:36:54 
15:36:54 :param response: A response object, or None, if the server did not
15:36:54 return a response.
15:36:54 :type response: :class:`~urllib3.response.BaseHTTPResponse` 15:36:54 :param Exception error: An error encountered during the request, or 15:36:54 None if the response was received successfully. 15:36:54 15:36:54 :return: A new ``Retry`` object. 15:36:54 """ 15:36:54 if self.total is False and error: 15:36:54 # Disabled, indicate to re-raise the error. 15:36:54 raise reraise(type(error), error, _stacktrace) 15:36:54 15:36:54 total = self.total 15:36:54 if total is not None: 15:36:54 total -= 1 15:36:54 15:36:54 connect = self.connect 15:36:54 read = self.read 15:36:54 redirect = self.redirect 15:36:54 status_count = self.status 15:36:54 other = self.other 15:36:54 cause = "unknown" 15:36:54 status = None 15:36:54 redirect_location = None 15:36:54 15:36:54 if error and self._is_connection_error(error): 15:36:54 # Connect retry? 15:36:54 if connect is False: 15:36:54 raise reraise(type(error), error, _stacktrace) 15:36:54 elif connect is not None: 15:36:54 connect -= 1 15:36:54 15:36:54 elif error and self._is_read_error(error): 15:36:54 # Read retry? 15:36:54 if read is False or method is None or not self._is_method_retryable(method): 15:36:54 raise reraise(type(error), error, _stacktrace) 15:36:54 elif read is not None: 15:36:54 read -= 1 15:36:54 15:36:54 elif error: 15:36:54 # Other retry? 15:36:54 if other is not None: 15:36:54 other -= 1 15:36:54 15:36:54 elif response and response.get_redirect_location(): 15:36:54 # Redirect retry? 
15:36:54 if redirect is not None: 15:36:54 redirect -= 1 15:36:54 cause = "too many redirects" 15:36:54 response_redirect_location = response.get_redirect_location() 15:36:54 if response_redirect_location: 15:36:54 redirect_location = response_redirect_location 15:36:54 status = response.status 15:36:54 15:36:54 else: 15:36:54 # Incrementing because of a server error like a 500 in 15:36:54 # status_forcelist and the given method is in the allowed_methods 15:36:54 cause = ResponseError.GENERIC_ERROR 15:36:54 if response and response.status: 15:36:54 if status_count is not None: 15:36:54 status_count -= 1 15:36:54 cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status) 15:36:54 status = response.status 15:36:54 15:36:54 history = self.history + ( 15:36:54 RequestHistory(method, url, error, status, redirect_location), 15:36:54 ) 15:36:54 15:36:54 new_retry = self.new( 15:36:54 total=total, 15:36:54 connect=connect, 15:36:54 read=read, 15:36:54 redirect=redirect, 15:36:54 status=status_count, 15:36:54 other=other, 15:36:54 history=history, 15:36:54 ) 15:36:54 15:36:54 if new_retry.is_exhausted(): 15:36:54 reason = error or ResponseError(cause) 15:36:54 > raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type] 15:36:54 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 15:36:54 E urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=8181): Max retries exceeded with url: /rests/operations/transportpce-pce:path-computation-request (Caused by NewConnectionError("HTTPConnection(host='localhost', port=8181): Failed to establish a new connection: [Errno 111] Connection refused")) 15:36:54 15:36:54 ../.tox/testsPCE/lib/python3.11/site-packages/urllib3/util/retry.py:535: MaxRetryError 15:36:54 15:36:54 During handling of the above exception, another exception occurred: 15:36:54 15:36:54 self = 15:36:54 15:36:54 def test_12_path_computation_400G_xpdr_bi_cfg(self): 15:36:54 path_computation_input_data = { 15:36:54 
15:36:54         "service-name": "service-1",
15:36:54         "resource-reserve": "true",
15:36:54         "service-handler-header": {
15:36:54             "request-id": "request1"
15:36:54         },
15:36:54         "service-a-end": {
15:36:54             "service-rate": "400",
15:36:54             "clli": "nodeA",
15:36:54             "service-format": "Ethernet",
15:36:54             "node-id": "XPDR-A2"
15:36:54         },
15:36:54         "service-z-end": {
15:36:54             "service-rate": "400",
15:36:54             "clli": "nodeC",
15:36:54             "service-format": "Ethernet",
15:36:54             "node-id": "XPDR-C2"
15:36:54         },
15:36:54         "pce-routing-metric": "hop-count"
15:36:54     }
15:36:54 >       response = test_utils.transportpce_api_rpc_request('transportpce-pce',
15:36:54                                                            'path-computation-request',
15:36:54                                                            path_computation_input_data)
15:36:54 
15:36:54 transportpce_tests/pce/test02_pce_400G.py:368:
15:36:54 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
15:36:54 transportpce_tests/common/test_utils.py:751: in transportpce_api_rpc_request
15:36:54     response = post_request(url, data)
15:36:54                ^^^^^^^^^^^^^^^^^^^^^^^
15:36:54 transportpce_tests/common/test_utils.py:143: in post_request
15:36:54     return requests.request(
15:36:54 ../.tox/testsPCE/lib/python3.11/site-packages/requests/api.py:59: in request
15:36:54     return session.request(method=method, url=url, **kwargs)
15:36:54            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
15:36:54 ../.tox/testsPCE/lib/python3.11/site-packages/requests/sessions.py:589: in request
15:36:54     resp = self.send(prep, **send_kwargs)
15:36:54            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
15:36:54 ../.tox/testsPCE/lib/python3.11/site-packages/requests/sessions.py:703: in send
15:36:54     r = adapter.send(request, **kwargs)
15:36:54         ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
15:36:54 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
15:36:54 
15:36:54 self =
15:36:54 request = , stream = False
15:36:54 timeout = Timeout(connect=30, read=30, total=None), verify = True, cert = None
15:36:54 proxies = OrderedDict()
15:36:54 
15:36:54     def send(
15:36:54         self,
15:36:54         request, stream=False, timeout=None, verify=True, cert=None, proxies=None
15:36:54     ):
15:36:54         """Sends PreparedRequest object. Returns Response object.
15:36:54 
15:36:54         :param request: The :class:`PreparedRequest <PreparedRequest>` being sent.
15:36:54         :param stream: (optional) Whether to stream the request content.
15:36:54         :param timeout: (optional) How long to wait for the server to send
15:36:54             data before giving up, as a float, or a :ref:`(connect timeout,
15:36:54             read timeout) <timeouts>` tuple.
15:36:54         :type timeout: float or tuple or urllib3 Timeout object
15:36:54         :param verify: (optional) Either a boolean, in which case it controls whether
15:36:54             we verify the server's TLS certificate, or a string, in which case it
15:36:54             must be a path to a CA bundle to use
15:36:54         :param cert: (optional) Any user-provided SSL certificate to be trusted.
15:36:54         :param proxies: (optional) The proxies dictionary to apply to the request.
15:36:54         :rtype: requests.Response
15:36:54         """
15:36:54 
15:36:54         try:
15:36:54             conn = self.get_connection_with_tls_context(
15:36:54                 request, verify, proxies=proxies, cert=cert
15:36:54             )
15:36:54         except LocationValueError as e:
15:36:54             raise InvalidURL(e, request=request)
15:36:54 
15:36:54         self.cert_verify(conn, request.url, verify, cert)
15:36:54         url = self.request_url(request, proxies)
15:36:54         self.add_headers(
15:36:54             request,
15:36:54             stream=stream,
15:36:54             timeout=timeout,
15:36:54             verify=verify,
15:36:54             cert=cert,
15:36:54             proxies=proxies,
15:36:54         )
15:36:54 
15:36:54         chunked = not (request.body is None or "Content-Length" in request.headers)
15:36:54 
15:36:54         if isinstance(timeout, tuple):
15:36:54             try:
15:36:54                 connect, read = timeout
15:36:54                 timeout = TimeoutSauce(connect=connect, read=read)
15:36:54             except ValueError:
15:36:54                 raise ValueError(
15:36:54                     f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, "
15:36:54                     f"or a single float to set both timeouts to the same value."
15:36:54                 )
15:36:54         elif isinstance(timeout, TimeoutSauce):
15:36:54             pass
15:36:54         else:
15:36:54             timeout = TimeoutSauce(connect=timeout, read=timeout)
15:36:54 
15:36:54         try:
15:36:54             resp = conn.urlopen(
15:36:54                 method=request.method,
15:36:54                 url=url,
15:36:54                 body=request.body,
15:36:54                 headers=request.headers,
15:36:54                 redirect=False,
15:36:54                 assert_same_host=False,
15:36:54                 preload_content=False,
15:36:54                 decode_content=False,
15:36:54                 retries=self.max_retries,
15:36:54                 timeout=timeout,
15:36:54                 chunked=chunked,
15:36:54             )
15:36:54 
15:36:54         except (ProtocolError, OSError) as err:
15:36:54             raise ConnectionError(err, request=request)
15:36:54 
15:36:54         except MaxRetryError as e:
15:36:54             if isinstance(e.reason, ConnectTimeoutError):
15:36:54                 # TODO: Remove this in 3.0.0: see #2811
15:36:54                 if not isinstance(e.reason, NewConnectionError):
15:36:54                     raise ConnectTimeout(e, request=request)
15:36:54 
15:36:54             if isinstance(e.reason, ResponseError):
15:36:54                 raise RetryError(e, request=request)
15:36:54 
15:36:54             if isinstance(e.reason, _ProxyError):
15:36:54                 raise ProxyError(e, request=request)
15:36:54 
15:36:54             if isinstance(e.reason, _SSLError):
15:36:54                 # This branch is for urllib3 v1.22 and later.
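The `HTTPAdapter.send` source captured in the traceback normalizes the caller's `timeout` argument (here `Timeout(connect=30, read=30)` from `test_utils.post_request`) into a single timeout object before handing it to urllib3. A minimal stand-alone sketch of just that normalization step, using a `namedtuple` as a stand-in for urllib3's `Timeout` (which requests imports as `TimeoutSauce`):

```python
from collections import namedtuple

# Stand-in for urllib3's Timeout class, here only for illustration.
TimeoutSauce = namedtuple("TimeoutSauce", ["connect", "read"])


def normalize_timeout(timeout):
    """Mimic the timeout normalization done inside HTTPAdapter.send.

    Accepts an already-normalized object, a (connect, read) tuple, or a
    single number applied to both phases.
    """
    # Because our stand-in is a namedtuple (and thus also a tuple), this
    # check must come first; in requests itself, urllib3's Timeout is not
    # a tuple subclass, so the order there does not matter.
    if isinstance(timeout, TimeoutSauce):
        return timeout
    if isinstance(timeout, tuple):
        try:
            connect, read = timeout
            return TimeoutSauce(connect=connect, read=read)
        except ValueError:
            raise ValueError(
                f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, "
                f"or a single float to set both timeouts to the same value."
            )
    # A bare number sets both the connect and read timeouts.
    return TimeoutSauce(connect=timeout, read=timeout)
```

This is why the frame locals above show `timeout = Timeout(connect=30, read=30, total=None)`: the 30-second value configured by the test harness ends up on both phases.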
15:36:54                 raise SSLError(e, request=request)
15:36:54 
15:36:54 >           raise ConnectionError(e, request=request)
15:36:54 E           requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=8181): Max retries exceeded with url: /rests/operations/transportpce-pce:path-computation-request (Caused by NewConnectionError("HTTPConnection(host='localhost', port=8181): Failed to establish a new connection: [Errno 111] Connection refused"))
15:36:54 
15:36:54 ../.tox/testsPCE/lib/python3.11/site-packages/requests/adapters.py:677: ConnectionError
15:36:54 ----------------------------- Captured stdout call -----------------------------
15:36:54 execution of test_12_path_computation_400G_xpdr_bi_cfg
15:36:54 =========================== short test summary info ============================
15:36:54 FAILED transportpce_tests/pce/test02_pce_400G.py::TestTransportPCEPce400g::test_10_load_port_mapping_cfg
15:36:54 FAILED transportpce_tests/pce/test02_pce_400G.py::TestTransportPCEPce400g::test_11_load_openroadm_topology_bi_cfg
15:36:54 FAILED transportpce_tests/pce/test02_pce_400G.py::TestTransportPCEPce400g::test_12_path_computation_400G_xpdr_bi_cfg
15:36:54 ERROR transportpce_tests/pce/test02_pce_400G.py::TestTransportPCEPce400g::test_12_path_computation_400G_xpdr_bi_cfg
15:36:54 3 failed, 9 passed, 1 error in 41.89s
15:36:54 testsPCE: exit 1 (153.15 seconds) /w/workspace/transportpce-tox-verify-transportpce-master/tests> ./launch_tests.sh pce pid=4970
15:36:55 + tox_status=143
15:36:55 + echo '---> Completed tox runs'
15:36:55 ---> Completed tox runs
15:36:55 + for i in .tox/*/log
15:36:55 ++ echo .tox/build_karaf_tests121/log
15:36:55 ++ awk -F/ '{print $2}'
15:36:55 + tox_env=build_karaf_tests121
15:36:55 + cp -r .tox/build_karaf_tests121/log /w/workspace/transportpce-tox-verify-transportpce-master/archives/tox/build_karaf_tests121
15:36:55 + for i in .tox/*/log
15:36:55 ++ echo .tox/build_karaf_tests190/log
15:36:55 ++ awk -F/ '{print $2}'
15:37:02 lf-activate-venv(): INFO: Adding /tmp/venv-KPUl/bin to PATH
15:37:02 INFO: Running in OpenStack, capturing instance metadata
15:37:02 [transportpce-tox-verify-transportpce-master] $ /bin/bash /tmp/jenkins6700571380204173811.sh
15:37:02 provisioning config files...
15:37:02 Could not find credentials [logs] for transportpce-tox-verify-transportpce-master #4318
15:37:02 copy managed file [jenkins-log-archives-settings] to file:/w/workspace/transportpce-tox-verify-transportpce-master@tmp/config17522759859171486978tmp
15:37:02 Regular expression run condition: Expression=[^.*logs-s3.*], Label=[odl-logs-s3-cloudfront-index]
15:37:02 Run condition [Regular expression match] enabling perform for step [Provide Configuration files]
15:37:02 provisioning config files...
15:37:03 copy managed file [jenkins-s3-log-ship] to file:/home/jenkins/.aws/credentials
15:37:03 [EnvInject] - Injecting environment variables from a build step.
15:37:03 [EnvInject] - Injecting as environment variables the properties content
15:37:03 SERVER_ID=logs
15:37:03 
15:37:03 [EnvInject] - Variables injected successfully.
15:37:03 [transportpce-tox-verify-transportpce-master] $ /bin/bash /tmp/jenkins12893767631414778755.sh
15:37:03 ---> create-netrc.sh
15:37:03 WARN: Log server credential not found.
15:37:03 [transportpce-tox-verify-transportpce-master] $ /bin/bash /tmp/jenkins3420051243958243618.sh
15:37:03 ---> python-tools-install.sh
15:37:03 Setup pyenv:
15:37:03   system
15:37:03   3.8.20
15:37:03   3.9.20
15:37:03   3.10.15
15:37:03 * 3.11.10 (set by /w/workspace/transportpce-tox-verify-transportpce-master/.python-version)
15:37:03 lf-activate-venv(): INFO: Reuse venv:/tmp/venv-KPUl from file:/tmp/.os_lf_venv
15:37:03 lf-activate-venv(): INFO: Installing base packages (pip, setuptools, virtualenv)
15:37:03 lf-activate-venv(): INFO: Attempting to install with network-safe options...
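The failure above is not an HTTP-level error: `[Errno 111] Connection refused` means nothing was listening on localhost:8181 when the test posted to the RESTCONF endpoint, so no amount of HTTP retries can succeed. A stdlib-only sketch of a pre-flight TCP reachability check that a test harness could run before issuing RPCs (the helper name `wait_for_restconf` is hypothetical and not part of `test_utils`):

```python
import socket
import time


def wait_for_restconf(host="localhost", port=8181, retries=3, delay=0.1):
    """Return True once a plain TCP connection to the controller succeeds.

    Connection refused (errno 111 on Linux) is raised immediately when no
    process is bound to the port, so polling with a short delay is cheap.
    """
    for _ in range(retries):
        try:
            # create_connection raises OSError (ConnectionRefusedError,
            # timeout, etc.) if the endpoint is unreachable.
            with socket.create_connection((host, port), timeout=2):
                return True
        except OSError:
            time.sleep(delay)
    return False
```

Distinguishing "controller not yet up" from a genuine test failure this way would turn the three cascading FAILED/ERROR entries above into a single, clearer setup error.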
15:37:05 lf-activate-venv(): INFO: Base packages installed successfully
15:37:05 lf-activate-venv(): INFO: Installing additional packages: lftools
15:37:14 lf-activate-venv(): INFO: Adding /tmp/venv-KPUl/bin to PATH
15:37:14 [transportpce-tox-verify-transportpce-master] $ /bin/bash /tmp/jenkins7654410195385320836.sh
15:37:14 ---> sudo-logs.sh
15:37:14 Archiving 'sudo' log..
15:37:14 [transportpce-tox-verify-transportpce-master] $ /bin/bash /tmp/jenkins6648108275218049096.sh
15:37:14 ---> job-cost.sh
15:37:14 INFO: Activating Python virtual environment...
15:37:14 Setup pyenv:
15:37:14   system
15:37:14   3.8.20
15:37:14   3.9.20
15:37:14   3.10.15
15:37:14 * 3.11.10 (set by /w/workspace/transportpce-tox-verify-transportpce-master/.python-version)
15:37:14 lf-activate-venv(): INFO: Reuse venv:/tmp/venv-KPUl from file:/tmp/.os_lf_venv
15:37:14 lf-activate-venv(): INFO: Installing base packages (pip, setuptools, virtualenv)
15:37:14 lf-activate-venv(): INFO: Attempting to install with network-safe options...
15:37:16 lf-activate-venv(): INFO: Base packages installed successfully
15:37:16 lf-activate-venv(): INFO: Installing additional packages: zipp==1.1.0 python-openstackclient urllib3~=1.26.15
15:37:22 lf-activate-venv(): INFO: Adding /tmp/venv-KPUl/bin to PATH
15:37:22 INFO: No stack-cost file found
15:37:22 INFO: Instance uptime: 562s
15:37:22 INFO: Fetching instance metadata (attempt 1 of 3)...
15:37:22 DEBUG: URL: http://169.254.169.254/latest/meta-data/instance-type
15:37:23 INFO: Successfully fetched instance metadata
15:37:23 INFO: Instance type: v3-standard-4
15:37:23 INFO: Retrieving pricing info for: v3-standard-4
15:37:23 INFO: Fetching Vexxhost pricing API (attempt 1 of 3)...
15:37:23 DEBUG: URL: https://pricing.vexxhost.net/v1/pricing/v3-standard-4/cost?seconds=562
15:37:23 INFO: Successfully fetched Vexxhost pricing API
15:37:23 INFO: Retrieved cost: 0.11
15:37:23 INFO: Retrieved resource: v3-standard-4
15:37:23 INFO: Creating archive directory: /w/workspace/transportpce-tox-verify-transportpce-master/archives/cost
15:37:23 INFO: Archiving costs to: /w/workspace/transportpce-tox-verify-transportpce-master/archives/cost.csv
15:37:23 INFO: Successfully archived job cost data
15:37:23 DEBUG: Cost data: transportpce-tox-verify-transportpce-master,4318,2026-02-17 15:37:23,v3-standard-4,562,0.11,0.00,ABORTED
15:37:23 [transportpce-tox-verify-transportpce-master] $ /bin/bash -l /tmp/jenkins12706012590571126496.sh
15:37:23 ---> logs-deploy.sh
15:37:23 Setup pyenv:
15:37:23   system
15:37:23   3.8.20
15:37:23   3.9.20
15:37:23   3.10.15
15:37:23 * 3.11.10 (set by /w/workspace/transportpce-tox-verify-transportpce-master/.python-version)
15:37:23 lf-activate-venv(): INFO: Reuse venv:/tmp/venv-KPUl from file:/tmp/.os_lf_venv
15:37:23 lf-activate-venv(): INFO: Installing base packages (pip, setuptools, virtualenv)
15:37:23 lf-activate-venv(): INFO: Attempting to install with network-safe options...
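The job-cost step above asks the Vexxhost pricing API for the cost of a `v3-standard-4` flavor over the instance's 562 s uptime and gets 0.11 back. The underlying arithmetic amounts to pro-rating an hourly rate over the uptime; a sketch of that calculation (the `hourly_rate` values below are illustrative only, since the real figure comes from the API):

```python
def instance_cost(uptime_seconds: float, hourly_rate: float) -> float:
    """Pro-rate an hourly flavor price over the instance's uptime.

    Rounded to cents, matching the two-decimal cost recorded in the
    archived cost.csv line above.
    """
    return round(uptime_seconds / 3600.0 * hourly_rate, 2)
```

For example, an instance billed at 1.00/hour that ran for 30 minutes would be recorded as 0.5; the API presumably applies the same pro-rating to its internal per-flavor rate.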
15:37:25 lf-activate-venv(): INFO: Base packages installed successfully
15:37:25 lf-activate-venv(): INFO: Installing additional packages: lftools urllib3~=1.26.15
15:37:34 lf-activate-venv(): INFO: Adding /tmp/venv-KPUl/bin to PATH
15:37:34 WARNING: Nexus logging server not set
15:37:34 INFO: S3 path logs/releng/vex-yul-odl-jenkins-1/transportpce-tox-verify-transportpce-master/4318/
15:37:34 INFO: archiving logs to S3
15:37:35 ---> uname -a:
15:37:35 Linux prd-ubuntu2204-docker-4c-16g-12958 5.15.0-168-generic #178-Ubuntu SMP Fri Jan 9 19:05:03 UTC 2026 x86_64 x86_64 x86_64 GNU/Linux
15:37:35 
15:37:35 
15:37:35 ---> lscpu:
15:37:35 Architecture:                       x86_64
15:37:35 CPU op-mode(s):                     32-bit, 64-bit
15:37:35 Address sizes:                      40 bits physical, 48 bits virtual
15:37:35 Byte Order:                         Little Endian
15:37:35 CPU(s):                             4
15:37:35 On-line CPU(s) list:                0-3
15:37:35 Vendor ID:                          AuthenticAMD
15:37:35 Model name:                         AMD EPYC-Rome Processor
15:37:35 CPU family:                         23
15:37:35 Model:                              49
15:37:35 Thread(s) per core:                 1
15:37:35 Core(s) per socket:                 1
15:37:35 Socket(s):                          4
15:37:35 Stepping:                           0
15:37:35 BogoMIPS:                           5599.99
15:37:35 Flags:                              fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm rep_good nopl cpuid extd_apicid tsc_known_freq pni pclmulqdq ssse3 fma cx16 sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm cmp_legacy svm cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw topoext perfctr_core ssbd ibrs ibpb stibp vmmcall fsgsbase tsc_adjust bmi1 avx2 smep bmi2 rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 clzero xsaveerptr wbnoinvd arat npt nrip_save umip rdpid arch_capabilities
15:37:35 Virtualization:                     AMD-V
15:37:35 Hypervisor vendor:                  KVM
15:37:35 Virtualization type:                full
15:37:35 L1d cache:                          128 KiB (4 instances)
15:37:35 L1i cache:                          128 KiB (4 instances)
15:37:35 L2 cache:                           2 MiB (4 instances)
15:37:35 L3 cache:                           64 MiB (4 instances)
15:37:35 NUMA node(s):                       1
15:37:35 NUMA node0 CPU(s):                  0-3
15:37:35 Vulnerability Gather data sampling: Not affected
15:37:35 Vulnerability Indirect target selection: Not affected
15:37:35 Vulnerability Itlb multihit:        Not affected
15:37:35 Vulnerability L1tf:                 Not affected
15:37:35 Vulnerability Mds:                  Not affected
15:37:35 Vulnerability Meltdown:             Not affected
15:37:35 Vulnerability Mmio stale data:      Not affected
15:37:35 Vulnerability Reg file data sampling: Not affected
15:37:35 Vulnerability Retbleed:             Mitigation; untrained return thunk; SMT disabled
15:37:35 Vulnerability Spec rstack overflow: Mitigation; SMT disabled
15:37:35 Vulnerability Spec store bypass:    Mitigation; Speculative Store Bypass disabled via prctl and seccomp
15:37:35 Vulnerability Spectre v1:           Mitigation; usercopy/swapgs barriers and __user pointer sanitization
15:37:35 Vulnerability Spectre v2:           Mitigation; Retpolines; IBPB conditional; STIBP disabled; RSB filling; PBRSB-eIBRS Not affected; BHI Not affected
15:37:35 Vulnerability Srbds:                Not affected
15:37:35 Vulnerability Tsa:                  Not affected
15:37:35 Vulnerability Tsx async abort:      Not affected
15:37:35 Vulnerability Vmscape:              Not affected
15:37:35 
15:37:35 
15:37:35 ---> nproc:
15:37:35 4
15:37:35 
15:37:35 
15:37:35 ---> df -h:
15:37:35 Filesystem      Size  Used Avail Use% Mounted on
15:37:35 tmpfs           1.6G  1.1M  1.6G   1% /run
15:37:35 /dev/vda1        78G   16G   62G  21% /
15:37:35 tmpfs           7.9G     0  7.9G   0% /dev/shm
15:37:35 tmpfs           5.0M     0  5.0M   0% /run/lock
15:37:35 /dev/vda15      105M  6.1M   99M   6% /boot/efi
15:37:35 tmpfs           1.6G  4.0K  1.6G   1% /run/user/1001
15:37:35 
15:37:35 
15:37:35 ---> free -m:
15:37:35                total        used        free      shared  buff/cache   available
15:37:35 Mem:           15989         726        4547           4       10715       14920
15:37:35 Swap:           1023           0        1023
15:37:35 
15:37:35 
15:37:35 ---> ip addr:
15:37:35 1: lo: mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
15:37:35     link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
15:37:35     inet 127.0.0.1/8 scope host lo
15:37:35        valid_lft forever preferred_lft forever
15:37:35     inet6 ::1/128 scope host
15:37:35        valid_lft forever preferred_lft forever
15:37:35 2: ens3: mtu 1458 qdisc mq state UP group default qlen 1000
15:37:35     link/ether fa:16:3e:5a:76:6d brd ff:ff:ff:ff:ff:ff
15:37:35     altname enp0s3
15:37:35     inet 10.30.170.215/23 metric 100 brd 10.30.171.255 scope global dynamic ens3
15:37:35        valid_lft 85833sec preferred_lft 85833sec
15:37:35     inet6 fe80::f816:3eff:fe5a:766d/64 scope link
15:37:35        valid_lft forever preferred_lft forever
15:37:35 3: docker0: mtu 1458 qdisc noqueue state DOWN group default
15:37:35     link/ether 5e:00:84:2a:b9:ac brd ff:ff:ff:ff:ff:ff
15:37:35     inet 10.250.0.254/24 brd 10.250.0.255 scope global docker0
15:37:35        valid_lft forever preferred_lft forever
15:37:35 
15:37:35 
15:37:35 ---> sar -b -r -n DEV:
15:37:35 Linux 5.15.0-168-generic (prd-ubuntu2204-docker-4c-16g-12958)  02/17/26  _x86_64_  (4 CPU)
15:37:35 
15:37:35 15:28:11  LINUX RESTART  (4 CPU)
15:37:35 
15:37:35 
15:37:35 ---> sar -P ALL:
15:37:35 Linux 5.15.0-168-generic (prd-ubuntu2204-docker-4c-16g-12958)  02/17/26  _x86_64_  (4 CPU)
15:37:35 
15:37:35 15:28:11  LINUX RESTART  (4 CPU)
15:37:35 
15:37:35 
15:37:35