Triggered by Gerrit: https://git.opendaylight.org/gerrit/c/transportpce/+/114243 Running as SYSTEM [EnvInject] - Loading node environment variables. Building remotely on prd-ubuntu2004-docker-4c-16g-2534 (ubuntu2004-docker-4c-16g) in workspace /w/workspace/transportpce-tox-verify-scandium [ssh-agent] Looking for ssh-agent implementation... [ssh-agent] Exec ssh-agent (binary ssh-agent on a remote machine) $ ssh-agent SSH_AUTH_SOCK=/tmp/ssh-gKz7HeXMIGKz/agent.12440 SSH_AGENT_PID=12442 [ssh-agent] Started. Running ssh-add (command line suppressed) Identity added: /w/workspace/transportpce-tox-verify-scandium@tmp/private_key_8139319235239300253.key (/w/workspace/transportpce-tox-verify-scandium@tmp/private_key_8139319235239300253.key) [ssh-agent] Using credentials jenkins (jenkins-ssh) The recommended git tool is: NONE using credential jenkins-ssh Wiping out workspace first. Cloning the remote Git repository Cloning repository git://devvexx.opendaylight.org/mirror/transportpce > git init /w/workspace/transportpce-tox-verify-scandium # timeout=10 Fetching upstream changes from git://devvexx.opendaylight.org/mirror/transportpce > git --version # timeout=10 > git --version # 'git version 2.25.1' using GIT_SSH to set credentials jenkins-ssh Verifying host key using known hosts file You're using 'Known hosts file' strategy to verify ssh host keys, but your known_hosts file does not exist, please go to 'Manage Jenkins' -> 'Security' -> 'Git Host Key Verification Configuration' and configure host key verification. > git fetch --tags --force --progress -- git://devvexx.opendaylight.org/mirror/transportpce +refs/heads/*:refs/remotes/origin/* # timeout=10 > git config remote.origin.url git://devvexx.opendaylight.org/mirror/transportpce # timeout=10 > git config --add remote.origin.fetch +refs/heads/*:refs/remotes/origin/* # timeout=10 > git config remote.origin.url git://devvexx.opendaylight.org/mirror/transportpce # timeout=10 Fetching upstream changes from git://devvexx.opendaylight.org/mirror/transportpce using GIT_SSH to set credentials jenkins-ssh Verifying host key using known hosts file You're using 'Known hosts file' strategy to verify ssh host keys, but your known_hosts file does not exist, please go to 'Manage Jenkins' -> 'Security' -> 'Git Host Key Verification Configuration' and configure host key verification. > git fetch --tags --force --progress -- git://devvexx.opendaylight.org/mirror/transportpce refs/changes/43/114243/1 # timeout=10 > git rev-parse 66bfcab04270b3259324ea56a6d77ed23e62eacc^{commit} # timeout=10 Checking out Revision 66bfcab04270b3259324ea56a6d77ed23e62eacc (refs/changes/43/114243/1) > git config core.sparsecheckout # timeout=10 > git checkout -f 66bfcab04270b3259324ea56a6d77ed23e62eacc # timeout=10 Commit message: "Fixup javadoc checkstyle issues" > git rev-parse FETCH_HEAD^{commit} # timeout=10 > git rev-list --no-walk 1e14b85219c971f15b7195093a0c58094394cf5c # timeout=10 > git remote # timeout=10 > git submodule init # timeout=10 > git submodule sync # timeout=10 > git config --get remote.origin.url # timeout=10 > git submodule init # timeout=10 > git config -f .gitmodules --get-regexp ^submodule\.(.+)\.url # timeout=10 ERROR: No submodules found. provisioning config files... 
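The checkout sequence above (git init, fetch of refs/changes/43/114243/1 from the mirror, detached checkout of FETCH_HEAD) can be reproduced outside Jenkins. A minimal sketch using GitPython, assuming the mirror URL and change ref shown in the log and an arbitrary destination path:

    # Replay the checkout the Jenkins Git plugin performs above (illustrative only).
    from git import Repo

    MIRROR_URL = "git://devvexx.opendaylight.org/mirror/transportpce"
    CHANGE_REF = "refs/changes/43/114243/1"             # patch set 1 of change 114243

    repo = Repo.init("/tmp/transportpce-ws")            # git init <workspace> (path is an example)
    origin = repo.create_remote("origin", MIRROR_URL)   # git config remote.origin.url ...
    origin.fetch(CHANGE_REF)                            # git fetch ... refs/changes/43/114243/1
    repo.git.checkout("FETCH_HEAD")                     # detached checkout of the patch set
    print(repo.head.commit.hexsha, repo.head.commit.summary)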
copy managed file [npmrc] to file:/home/jenkins/.npmrc copy managed file [pipconf] to file:/home/jenkins/.config/pip/pip.conf [transportpce-tox-verify-scandium] $ /bin/bash /tmp/jenkins17146311361695645100.sh ---> python-tools-install.sh Setup pyenv: * system (set by /opt/pyenv/version) * 3.8.13 (set by /opt/pyenv/version) * 3.9.13 (set by /opt/pyenv/version) * 3.10.13 (set by /opt/pyenv/version) * 3.11.7 (set by /opt/pyenv/version) lf-activate-venv(): INFO: Creating python3 venv at /tmp/venv-k8Tq lf-activate-venv(): INFO: Save venv in file: /tmp/.os_lf_venv lf-activate-venv(): INFO: Installing: lftools lf-activate-venv(): INFO: Adding /tmp/venv-k8Tq/bin to PATH Generating Requirements File Python 3.11.7 pip 24.3.1 from /tmp/venv-k8Tq/lib/python3.11/site-packages/pip (python 3.11) appdirs==1.4.4 argcomplete==3.5.1 aspy.yaml==1.3.0 attrs==24.2.0 autopage==0.5.2 beautifulsoup4==4.12.3 boto3==1.35.50 botocore==1.35.50 bs4==0.0.2 cachetools==5.5.0 certifi==2024.8.30 cffi==1.17.1 cfgv==3.4.0 chardet==5.2.0 charset-normalizer==3.4.0 click==8.1.7 cliff==4.7.0 cmd2==2.5.0 cryptography==3.3.2 debtcollector==3.0.0 decorator==5.1.1 defusedxml==0.7.1 Deprecated==1.2.14 distlib==0.3.9 dnspython==2.7.0 docker==4.2.2 dogpile.cache==1.3.3 durationpy==0.9 email_validator==2.2.0 filelock==3.16.1 future==1.0.0 gitdb==4.0.11 GitPython==3.1.43 google-auth==2.35.0 httplib2==0.22.0 identify==2.6.1 idna==3.10 importlib-resources==1.5.0 iso8601==2.1.0 Jinja2==3.1.4 jmespath==1.0.1 jsonpatch==1.33 jsonpointer==3.0.0 jsonschema==4.23.0 jsonschema-specifications==2024.10.1 keystoneauth1==5.8.0 kubernetes==31.0.0 lftools==0.37.10 lxml==5.3.0 MarkupSafe==3.0.2 msgpack==1.1.0 multi_key_dict==2.0.3 munch==4.0.0 netaddr==1.3.0 netifaces==0.11.0 niet==1.4.2 nodeenv==1.9.1 oauth2client==4.1.3 oauthlib==3.2.2 openstacksdk==4.1.0 os-client-config==2.1.0 os-service-types==1.7.0 osc-lib==3.1.0 oslo.config==9.6.0 oslo.context==5.6.0 oslo.i18n==6.4.0 oslo.log==6.1.2 oslo.serialization==5.5.0 oslo.utils==7.3.0 packaging==24.1 pbr==6.1.0 platformdirs==4.3.6 prettytable==3.11.0 pyasn1==0.6.1 pyasn1_modules==0.4.1 pycparser==2.22 pygerrit2==2.0.15 PyGithub==2.4.0 PyJWT==2.9.0 PyNaCl==1.5.0 pyparsing==2.4.7 pyperclip==1.9.0 pyrsistent==0.20.0 python-cinderclient==9.6.0 python-dateutil==2.9.0.post0 python-heatclient==4.0.0 python-jenkins==1.8.2 python-keystoneclient==5.5.0 python-magnumclient==4.7.0 python-openstackclient==7.2.1 python-swiftclient==4.6.0 PyYAML==6.0.2 referencing==0.35.1 requests==2.32.3 requests-oauthlib==2.0.0 requestsexceptions==1.4.0 rfc3986==2.0.0 rpds-py==0.20.0 rsa==4.9 ruamel.yaml==0.18.6 ruamel.yaml.clib==0.2.12 s3transfer==0.10.3 simplejson==3.19.3 six==1.16.0 smmap==5.0.1 soupsieve==2.6 stevedore==5.3.0 tabulate==0.9.0 toml==0.10.2 tomlkit==0.13.2 tqdm==4.66.6 typing_extensions==4.12.2 tzdata==2024.2 urllib3==1.26.20 virtualenv==20.27.1 wcwidth==0.2.13 websocket-client==1.8.0 wrapt==1.16.0 xdg==6.0.0 xmltodict==0.14.2 yq==3.4.3 [EnvInject] - Injecting environment variables from a build step. [EnvInject] - Injecting as environment variables the properties content PYTHON=python3 [EnvInject] - Variables injected successfully. 
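The "Generating Requirements File" step above is a pip-freeze-style dump of the freshly created lftools venv. A rough Python equivalent, purely illustrative, that prints the same name==version pins for whatever interpreter it runs under:

    # Illustrative: list every installed distribution as name==version, like pip freeze.
    from importlib import metadata

    pins = sorted(f"{dist.metadata['Name']}=={dist.version}" for dist in metadata.distributions())
    print("\n".join(pins))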
[transportpce-tox-verify-scandium] $ /bin/bash -l /tmp/jenkins17110266964149895942.sh ---> tox-install.sh + source /home/jenkins/lf-env.sh + lf-activate-venv --venv-file /tmp/.toxenv tox virtualenv urllib3~=1.26.15 ++ mktemp -d /tmp/venv-XXXX + lf_venv=/tmp/venv-GThG + local venv_file=/tmp/.os_lf_venv + local python=python3 + local options + local set_path=true + local install_args= ++ getopt -o np:v: -l no-path,system-site-packages,python:,venv-file: -n lf-activate-venv -- --venv-file /tmp/.toxenv tox virtualenv urllib3~=1.26.15 + options=' --venv-file '\''/tmp/.toxenv'\'' -- '\''tox'\'' '\''virtualenv'\'' '\''urllib3~=1.26.15'\''' + eval set -- ' --venv-file '\''/tmp/.toxenv'\'' -- '\''tox'\'' '\''virtualenv'\'' '\''urllib3~=1.26.15'\''' ++ set -- --venv-file /tmp/.toxenv -- tox virtualenv urllib3~=1.26.15 + true + case $1 in + venv_file=/tmp/.toxenv + shift 2 + true + case $1 in + shift + break + case $python in + local pkg_list= + [[ -d /opt/pyenv ]] + echo 'Setup pyenv:' Setup pyenv: + export PYENV_ROOT=/opt/pyenv + PYENV_ROOT=/opt/pyenv + export PATH=/opt/pyenv/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/opt/puppetlabs/bin + PATH=/opt/pyenv/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/opt/puppetlabs/bin + pyenv versions system 3.8.13 3.9.13 3.10.13 * 3.11.7 (set by /w/workspace/transportpce-tox-verify-scandium/.python-version) + command -v pyenv ++ pyenv init - --no-rehash + eval 'PATH="$(bash --norc -ec '\''IFS=:; paths=($PATH); for i in ${!paths[@]}; do if [[ ${paths[i]} == "'\'''\''/opt/pyenv/shims'\'''\''" ]]; then unset '\''\'\'''\''paths[i]'\''\'\'''\''; fi; done; echo "${paths[*]}"'\'')" export PATH="/opt/pyenv/shims:${PATH}" export PYENV_SHELL=bash source '\''/opt/pyenv/libexec/../completions/pyenv.bash'\'' pyenv() { local command command="${1:-}" if [ "$#" -gt 0 ]; then shift fi case "$command" in rehash|shell) eval "$(pyenv "sh-$command" "$@")" ;; *) command pyenv "$command" "$@" ;; esac }' +++ bash --norc -ec 'IFS=:; paths=($PATH); for i in ${!paths[@]}; do if [[ ${paths[i]} == "/opt/pyenv/shims" ]]; then unset '\''paths[i]'\''; fi; done; echo "${paths[*]}"' ++ PATH=/opt/pyenv/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/opt/puppetlabs/bin ++ export PATH=/opt/pyenv/shims:/opt/pyenv/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/opt/puppetlabs/bin ++ PATH=/opt/pyenv/shims:/opt/pyenv/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/opt/puppetlabs/bin ++ export PYENV_SHELL=bash ++ PYENV_SHELL=bash ++ source /opt/pyenv/libexec/../completions/pyenv.bash +++ complete -F _pyenv pyenv ++ lf-pyver python3 ++ local py_version_xy=python3 ++ local py_version_xyz= ++ pyenv versions ++ local command ++ command=versions ++ '[' 1 -gt 0 ']' ++ shift ++ case "$command" in ++ command pyenv versions ++ pyenv versions ++ awk '{ print $1 }' ++ sed 's/^[ *]* //' ++ grep -E '^[0-9.]*[0-9]$' ++ [[ ! 
-s /tmp/.pyenv_versions ]] +++ grep '^3' /tmp/.pyenv_versions +++ sort -V +++ tail -n 1 ++ py_version_xyz=3.11.7 ++ [[ -z 3.11.7 ]] ++ echo 3.11.7 ++ return 0 + pyenv local 3.11.7 + local command + command=local + '[' 2 -gt 0 ']' + shift + case "$command" in + command pyenv local 3.11.7 + pyenv local 3.11.7 + for arg in "$@" + case $arg in + pkg_list+='tox ' + for arg in "$@" + case $arg in + pkg_list+='virtualenv ' + for arg in "$@" + case $arg in + pkg_list+='urllib3~=1.26.15 ' + [[ -f /tmp/.toxenv ]] + [[ ! -f /tmp/.toxenv ]] + [[ -n '' ]] + python3 -m venv /tmp/venv-GThG + echo 'lf-activate-venv(): INFO: Creating python3 venv at /tmp/venv-GThG' lf-activate-venv(): INFO: Creating python3 venv at /tmp/venv-GThG + echo /tmp/venv-GThG + echo 'lf-activate-venv(): INFO: Save venv in file: /tmp/.toxenv' lf-activate-venv(): INFO: Save venv in file: /tmp/.toxenv + /tmp/venv-GThG/bin/python3 -m pip install --upgrade --quiet pip virtualenv + [[ -z tox virtualenv urllib3~=1.26.15 ]] + echo 'lf-activate-venv(): INFO: Installing: tox virtualenv urllib3~=1.26.15 ' lf-activate-venv(): INFO: Installing: tox virtualenv urllib3~=1.26.15 + /tmp/venv-GThG/bin/python3 -m pip install --upgrade --quiet --upgrade-strategy eager tox virtualenv urllib3~=1.26.15 + type python3 + true + echo 'lf-activate-venv(): INFO: Adding /tmp/venv-GThG/bin to PATH' lf-activate-venv(): INFO: Adding /tmp/venv-GThG/bin to PATH + PATH=/tmp/venv-GThG/bin:/opt/pyenv/shims:/opt/pyenv/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/opt/puppetlabs/bin + return 0 + python3 --version Python 3.11.7 + python3 -m pip --version pip 24.3.1 from /tmp/venv-GThG/lib/python3.11/site-packages/pip (python 3.11) + python3 -m pip freeze cachetools==5.5.0 chardet==5.2.0 colorama==0.4.6 distlib==0.3.9 filelock==3.16.1 packaging==24.1 platformdirs==4.3.6 pluggy==1.5.0 pyproject-api==1.8.0 tox==4.23.2 urllib3==1.26.20 virtualenv==20.27.1 [transportpce-tox-verify-scandium] $ /bin/sh -xe /tmp/jenkins14261725133907614480.sh [EnvInject] - Injecting environment variables from a build step. [EnvInject] - Injecting as environment variables the properties content PARALLEL=True [EnvInject] - Variables injected successfully. [transportpce-tox-verify-scandium] $ /bin/bash -l /tmp/jenkins18253695200068219071.sh ---> tox-run.sh + PATH=/home/jenkins/.local/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/opt/puppetlabs/bin + ARCHIVE_TOX_DIR=/w/workspace/transportpce-tox-verify-scandium/archives/tox + ARCHIVE_DOC_DIR=/w/workspace/transportpce-tox-verify-scandium/archives/docs + mkdir -p /w/workspace/transportpce-tox-verify-scandium/archives/tox + cd /w/workspace/transportpce-tox-verify-scandium/. 
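lf-activate-venv implements a simple cache: build a venv under /tmp, record its path in the --venv-file marker (/tmp/.toxenv here), and let later steps reuse it instead of rebuilding (the next invocation below logs "Reuse venv:/tmp/venv-GThG"). A rough Python sketch of that pattern, with names kept from the log but otherwise illustrative, not the LF script itself:

    # Create-or-reuse a cached venv and install the requested tools into it.
    import pathlib
    import subprocess
    import tempfile
    import venv

    MARKER = pathlib.Path("/tmp/.toxenv")                 # the --venv-file argument
    PACKAGES = ["tox", "virtualenv", "urllib3~=1.26.15"]

    if MARKER.is_file():
        venv_dir = pathlib.Path(MARKER.read_text().strip())        # reuse the cached venv
    else:
        venv_dir = pathlib.Path(tempfile.mkdtemp(prefix="venv-"))   # mktemp -d /tmp/venv-XXXX
        venv.EnvBuilder(with_pip=True).create(venv_dir)             # python3 -m venv ...
        MARKER.write_text(str(venv_dir))                            # save venv path for reuse

    python3 = venv_dir / "bin" / "python3"
    subprocess.run([python3, "-m", "pip", "install", "--upgrade", "--quiet",
                    "--upgrade-strategy", "eager", *PACKAGES], check=True)
    print(f"lf-activate-venv(): INFO: Adding {venv_dir}/bin to PATH")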
+ source /home/jenkins/lf-env.sh + lf-activate-venv --venv-file /tmp/.toxenv tox virtualenv urllib3~=1.26.15 ++ mktemp -d /tmp/venv-XXXX + lf_venv=/tmp/venv-dMBA + local venv_file=/tmp/.os_lf_venv + local python=python3 + local options + local set_path=true + local install_args= ++ getopt -o np:v: -l no-path,system-site-packages,python:,venv-file: -n lf-activate-venv -- --venv-file /tmp/.toxenv tox virtualenv urllib3~=1.26.15 + options=' --venv-file '\''/tmp/.toxenv'\'' -- '\''tox'\'' '\''virtualenv'\'' '\''urllib3~=1.26.15'\''' + eval set -- ' --venv-file '\''/tmp/.toxenv'\'' -- '\''tox'\'' '\''virtualenv'\'' '\''urllib3~=1.26.15'\''' ++ set -- --venv-file /tmp/.toxenv -- tox virtualenv urllib3~=1.26.15 + true + case $1 in + venv_file=/tmp/.toxenv + shift 2 + true + case $1 in + shift + break + case $python in + local pkg_list= + [[ -d /opt/pyenv ]] + echo 'Setup pyenv:' Setup pyenv: + export PYENV_ROOT=/opt/pyenv + PYENV_ROOT=/opt/pyenv + export PATH=/opt/pyenv/bin:/home/jenkins/.local/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/opt/puppetlabs/bin + PATH=/opt/pyenv/bin:/home/jenkins/.local/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/opt/puppetlabs/bin + pyenv versions system 3.8.13 3.9.13 3.10.13 * 3.11.7 (set by /w/workspace/transportpce-tox-verify-scandium/.python-version) + command -v pyenv ++ pyenv init - --no-rehash + eval 'PATH="$(bash --norc -ec '\''IFS=:; paths=($PATH); for i in ${!paths[@]}; do if [[ ${paths[i]} == "'\'''\''/opt/pyenv/shims'\'''\''" ]]; then unset '\''\'\'''\''paths[i]'\''\'\'''\''; fi; done; echo "${paths[*]}"'\'')" export PATH="/opt/pyenv/shims:${PATH}" export PYENV_SHELL=bash source '\''/opt/pyenv/libexec/../completions/pyenv.bash'\'' pyenv() { local command command="${1:-}" if [ "$#" -gt 0 ]; then shift fi case "$command" in rehash|shell) eval "$(pyenv "sh-$command" "$@")" ;; *) command pyenv "$command" "$@" ;; esac }' +++ bash --norc -ec 'IFS=:; paths=($PATH); for i in ${!paths[@]}; do if [[ ${paths[i]} == "/opt/pyenv/shims" ]]; then unset '\''paths[i]'\''; fi; done; echo "${paths[*]}"' ++ PATH=/opt/pyenv/bin:/home/jenkins/.local/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/opt/puppetlabs/bin ++ export PATH=/opt/pyenv/shims:/opt/pyenv/bin:/home/jenkins/.local/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/opt/puppetlabs/bin ++ PATH=/opt/pyenv/shims:/opt/pyenv/bin:/home/jenkins/.local/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/opt/puppetlabs/bin ++ export PYENV_SHELL=bash ++ PYENV_SHELL=bash ++ source /opt/pyenv/libexec/../completions/pyenv.bash +++ complete -F _pyenv pyenv ++ lf-pyver python3 ++ local py_version_xy=python3 ++ local py_version_xyz= ++ pyenv versions ++ local command ++ command=versions ++ '[' 1 -gt 0 ']' ++ shift ++ case "$command" in ++ command pyenv versions ++ pyenv versions ++ sed 's/^[ *]* //' ++ grep -E '^[0-9.]*[0-9]$' ++ awk '{ print $1 }' ++ [[ ! 
-s /tmp/.pyenv_versions ]] +++ grep '^3' /tmp/.pyenv_versions +++ tail -n 1 +++ sort -V ++ py_version_xyz=3.11.7 ++ [[ -z 3.11.7 ]] ++ echo 3.11.7 ++ return 0 + pyenv local 3.11.7 + local command + command=local + '[' 2 -gt 0 ']' + shift + case "$command" in + command pyenv local 3.11.7 + pyenv local 3.11.7 + for arg in "$@" + case $arg in + pkg_list+='tox ' + for arg in "$@" + case $arg in + pkg_list+='virtualenv ' + for arg in "$@" + case $arg in + pkg_list+='urllib3~=1.26.15 ' + [[ -f /tmp/.toxenv ]] ++ cat /tmp/.toxenv + lf_venv=/tmp/venv-GThG + echo 'lf-activate-venv(): INFO: Reuse venv:/tmp/venv-GThG from' file:/tmp/.toxenv lf-activate-venv(): INFO: Reuse venv:/tmp/venv-GThG from file:/tmp/.toxenv + /tmp/venv-GThG/bin/python3 -m pip install --upgrade --quiet pip virtualenv + [[ -z tox virtualenv urllib3~=1.26.15 ]] + echo 'lf-activate-venv(): INFO: Installing: tox virtualenv urllib3~=1.26.15 ' lf-activate-venv(): INFO: Installing: tox virtualenv urllib3~=1.26.15 + /tmp/venv-GThG/bin/python3 -m pip install --upgrade --quiet --upgrade-strategy eager tox virtualenv urllib3~=1.26.15 + type python3 + true + echo 'lf-activate-venv(): INFO: Adding /tmp/venv-GThG/bin to PATH' lf-activate-venv(): INFO: Adding /tmp/venv-GThG/bin to PATH + PATH=/tmp/venv-GThG/bin:/opt/pyenv/shims:/opt/pyenv/bin:/home/jenkins/.local/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/opt/puppetlabs/bin + return 0 + [[ -d /opt/pyenv ]] + echo '---> Setting up pyenv' ---> Setting up pyenv + export PYENV_ROOT=/opt/pyenv + PYENV_ROOT=/opt/pyenv + export PATH=/opt/pyenv/bin:/tmp/venv-GThG/bin:/opt/pyenv/shims:/opt/pyenv/bin:/home/jenkins/.local/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/opt/puppetlabs/bin + PATH=/opt/pyenv/bin:/tmp/venv-GThG/bin:/opt/pyenv/shims:/opt/pyenv/bin:/home/jenkins/.local/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/opt/puppetlabs/bin ++ pwd + PYTHONPATH=/w/workspace/transportpce-tox-verify-scandium + export PYTHONPATH + export TOX_TESTENV_PASSENV=PYTHONPATH + TOX_TESTENV_PASSENV=PYTHONPATH + tox --version 4.23.2 from /tmp/venv-GThG/lib/python3.11/site-packages/tox/__init__.py + PARALLEL=True + TOX_OPTIONS_LIST= + [[ -n '' ]] + case ${PARALLEL,,} in + TOX_OPTIONS_LIST=' --parallel auto --parallel-live' + tox --parallel auto --parallel-live + tee -a /w/workspace/transportpce-tox-verify-scandium/archives/tox/tox.log buildcontroller: install_deps> python -I -m pip install 'setuptools>=7.0' -r /w/workspace/transportpce-tox-verify-scandium/tests/requirements.txt -r /w/workspace/transportpce-tox-verify-scandium/tests/test-requirements.txt checkbashisms: freeze> python -m pip freeze --all docs-linkcheck: install_deps> python -I -m pip install -r docs/requirements.txt docs: install_deps> python -I -m pip install -r docs/requirements.txt checkbashisms: pip==24.3.1,setuptools==75.2.0,wheel==0.44.0 checkbashisms: commands[0] /w/workspace/transportpce-tox-verify-scandium/tests> ./fixCIcentOS8reposMirrors.sh checkbashisms: commands[1] /w/workspace/transportpce-tox-verify-scandium/tests> sh -c 'command checkbashisms>/dev/null || sudo yum install -y devscripts-checkbashisms || sudo yum install -y devscripts-minimal || sudo yum install -y devscripts || sudo yum install -y https://archives.fedoraproject.org/pub/archive/fedora/linux/releases/31/Everything/x86_64/os/Packages/d/devscripts-checkbashisms-2.19.6-2.fc31.x86_64.rpm || (echo 
"checkbashisms command not found - please install it (e.g. sudo apt-get install devscripts | yum install devscripts-minimal )" >&2 && exit 1)' checkbashisms: commands[2] /w/workspace/transportpce-tox-verify-scandium/tests> find . -not -path '*/\.*' -name '*.sh' -exec checkbashisms -f '{}' + script ./reflectwarn.sh does not appear to have a #! interpreter line; you may get strange results checkbashisms: OK ✔ in 3.08 seconds pre-commit: install_deps> python -I -m pip install pre-commit pre-commit: freeze> python -m pip freeze --all pre-commit: cfgv==3.4.0,distlib==0.3.9,filelock==3.16.1,identify==2.6.1,nodeenv==1.9.1,pip==24.3.1,platformdirs==4.3.6,pre_commit==4.0.1,PyYAML==6.0.2,setuptools==75.2.0,virtualenv==20.27.1,wheel==0.44.0 pre-commit: commands[0] /w/workspace/transportpce-tox-verify-scandium/tests> ./fixCIcentOS8reposMirrors.sh pre-commit: commands[1] /w/workspace/transportpce-tox-verify-scandium/tests> sh -c 'which cpan || sudo yum install -y perl-CPAN || (echo "cpan command not found - please install it (e.g. sudo apt-get install perl-modules | yum install perl-CPAN )" >&2 && exit 1)' /usr/bin/cpan pre-commit: commands[2] /w/workspace/transportpce-tox-verify-scandium/tests> pre-commit run --all-files --show-diff-on-failure [WARNING] hook id `remove-tabs` uses deprecated stage names (commit) which will be removed in a future version. run: `pre-commit migrate-config` to automatically fix this. [WARNING] hook id `perltidy` uses deprecated stage names (commit) which will be removed in a future version. run: `pre-commit migrate-config` to automatically fix this. [INFO] Initializing environment for https://github.com/pre-commit/pre-commit-hooks. [WARNING] repo `https://github.com/pre-commit/pre-commit-hooks` uses deprecated stage names (commit, push) which will be removed in a future version. Hint: often `pre-commit autoupdate --repo https://github.com/pre-commit/pre-commit-hooks` will fix this. if it does not -- consider reporting an issue to that repo. [INFO] Initializing environment for https://github.com/jorisroovers/gitlint. buildcontroller: freeze> python -m pip freeze --all [INFO] Initializing environment for https://github.com/jorisroovers/gitlint:./gitlint-core[trusted-deps]. buildcontroller: bcrypt==4.2.0,certifi==2024.8.30,cffi==1.17.1,charset-normalizer==3.4.0,cryptography==43.0.3,dict2xml==1.7.6,idna==3.10,iniconfig==2.0.0,lxml==5.3.0,netconf-client==3.1.1,packaging==24.1,paramiko==3.5.0,pip==24.3.1,pluggy==1.5.0,psutil==6.1.0,pycparser==2.22,PyNaCl==1.5.0,pytest==8.3.3,requests==2.32.3,setuptools==75.2.0,urllib3==2.2.3,wheel==0.44.0 buildcontroller: commands[0] /w/workspace/transportpce-tox-verify-scandium/tests> ./build_controller.sh + update-java-alternatives -l java-1.11.0-openjdk-amd64 1111 /usr/lib/jvm/java-1.11.0-openjdk-amd64 java-1.12.0-openjdk-amd64 1211 /usr/lib/jvm/java-1.12.0-openjdk-amd64 java-1.17.0-openjdk-amd64 1711 /usr/lib/jvm/java-1.17.0-openjdk-amd64 java-1.21.0-openjdk-amd64 2111 /usr/lib/jvm/java-1.21.0-openjdk-amd64 + sudo update-java-alternatives -s java-1.21.0-openjdk-amd64 java-1.8.0-openjdk-amd64 1081 /usr/lib/jvm/java-1.8.0-openjdk-amd64 [INFO] Initializing environment for https://github.com/Lucas-C/pre-commit-hooks. + java -version + sed -n ;s/.* version "\(.*\)\.\(.*\)\..*".*$/\1/p; [INFO] Initializing environment for https://github.com/pre-commit/mirrors-autopep8. + JAVA_VER=21 + echo 21 21 + javac -version + sed -n ;s/javac \(.*\)\.\(.*\)\..*.*$/\1/p; [INFO] Initializing environment for https://github.com/perltidy/perltidy. 
+ JAVAC_VER=21 + echo 21 21 ok, java is 21 or newer + [ 21 -ge 21 ] + [ 21 -ge 21 ] + echo ok, java is 21 or newer + wget -nv https://dlcdn.apache.org/maven/maven-3/3.9.8/binaries/apache-maven-3.9.8-bin.tar.gz -P /tmp 2024-10-29 13:03:28 URL:https://dlcdn.apache.org/maven/maven-3/3.9.8/binaries/apache-maven-3.9.8-bin.tar.gz [9083702/9083702] -> "/tmp/apache-maven-3.9.8-bin.tar.gz" [1] + sudo mkdir -p /opt + sudo tar xf /tmp/apache-maven-3.9.8-bin.tar.gz -C /opt + sudo ln -s /opt/apache-maven-3.9.8 /opt/maven + sudo ln -s /opt/maven/bin/mvn /usr/bin/mvn + mvn --version [INFO] Installing environment for https://github.com/pre-commit/pre-commit-hooks. [INFO] Once installed this environment will be reused. [INFO] This may take a few minutes... Apache Maven 3.9.8 (36645f6c9b5079805ea5009217e36f2cffd34256) Maven home: /opt/maven Java version: 21.0.4, vendor: Ubuntu, runtime: /usr/lib/jvm/java-21-openjdk-amd64 Default locale: en, platform encoding: UTF-8 OS name: "linux", version: "5.4.0-190-generic", arch: "amd64", family: "unix" NOTE: Picked up JDK_JAVA_OPTIONS: --add-opens=java.base/java.io=ALL-UNNAMED --add-opens=java.base/java.lang=ALL-UNNAMED --add-opens=java.base/java.lang.invoke=ALL-UNNAMED --add-opens=java.base/java.lang.reflect=ALL-UNNAMED --add-opens=java.base/java.net=ALL-UNNAMED --add-opens=java.base/java.nio=ALL-UNNAMED --add-opens=java.base/java.nio.charset=ALL-UNNAMED --add-opens=java.base/java.nio.file=ALL-UNNAMED --add-opens=java.base/java.util=ALL-UNNAMED --add-opens=java.base/java.util.jar=ALL-UNNAMED --add-opens=java.base/java.util.stream=ALL-UNNAMED --add-opens=java.base/java.util.zip=ALL-UNNAMED --add-opens java.base/sun.nio.ch=ALL-UNNAMED --add-opens java.base/sun.nio.fs=ALL-UNNAMED -Xlog:disable [INFO] Installing environment for https://github.com/Lucas-C/pre-commit-hooks. [INFO] Once installed this environment will be reused. [INFO] This may take a few minutes... [INFO] Installing environment for https://github.com/pre-commit/mirrors-autopep8. [INFO] Once installed this environment will be reused. [INFO] This may take a few minutes... [INFO] Installing environment for https://github.com/perltidy/perltidy. [INFO] Once installed this environment will be reused. [INFO] This may take a few minutes... 
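The Maven bootstrap above is four shell steps: download the 3.9.8 binary tarball, unpack it under /opt, and add two symlinks. The same steps in Python, for illustration only (writing to /opt and /usr/bin needs root, matching the sudo calls in the log):

    # Download and install Apache Maven 3.9.8 the same way build_controller.sh does.
    import os
    import tarfile
    import urllib.request

    VERSION = "3.9.8"
    URL = f"https://dlcdn.apache.org/maven/maven-3/{VERSION}/binaries/apache-maven-{VERSION}-bin.tar.gz"
    TARBALL = f"/tmp/apache-maven-{VERSION}-bin.tar.gz"

    urllib.request.urlretrieve(URL, TARBALL)                  # wget -nv ... -P /tmp
    os.makedirs("/opt", exist_ok=True)                        # sudo mkdir -p /opt
    with tarfile.open(TARBALL) as tar:
        tar.extractall("/opt")                                # sudo tar xf ... -C /opt
    os.symlink(f"/opt/apache-maven-{VERSION}", "/opt/maven")  # sudo ln -s ...
    os.symlink("/opt/maven/bin/mvn", "/usr/bin/mvn")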
docs: freeze> python -m pip freeze --all docs-linkcheck: freeze> python -m pip freeze --all docs: alabaster==1.0.0,attrs==24.2.0,babel==2.16.0,blockdiag==3.0.0,certifi==2024.8.30,charset-normalizer==3.4.0,contourpy==1.3.0,cycler==0.12.1,docutils==0.21.2,fonttools==4.54.1,funcparserlib==2.0.0a0,future==1.0.0,idna==3.10,imagesize==1.4.1,Jinja2==3.1.4,jsonschema==3.2.0,kiwisolver==1.4.7,lfdocs-conf==0.9.0,MarkupSafe==3.0.2,matplotlib==3.9.2,numpy==2.1.2,nwdiag==3.0.0,packaging==24.1,pillow==11.0.0,pip==24.3.1,Pygments==2.18.0,pyparsing==3.2.0,pyrsistent==0.20.0,python-dateutil==2.9.0.post0,PyYAML==6.0.2,requests==2.32.3,requests-file==1.5.1,seqdiag==3.0.0,setuptools==75.2.0,six==1.16.0,snowballstemmer==2.2.0,Sphinx==8.1.3,sphinx-bootstrap-theme==0.8.1,sphinx-data-viewer==0.1.5,sphinx-rtd-theme==3.0.1,sphinx-tabs==3.4.7,sphinxcontrib-applehelp==2.0.0,sphinxcontrib-blockdiag==3.0.0,sphinxcontrib-devhelp==2.0.0,sphinxcontrib-htmlhelp==2.1.0,sphinxcontrib-jquery==4.1,sphinxcontrib-jsmath==1.0.1,sphinxcontrib-needs==0.7.9,sphinxcontrib-nwdiag==2.0.0,sphinxcontrib-plantuml==0.30,sphinxcontrib-qthelp==2.0.0,sphinxcontrib-seqdiag==3.0.0,sphinxcontrib-serializinghtml==2.0.0,sphinxcontrib-swaggerdoc==0.1.7,urllib3==2.2.3,webcolors==24.8.0,wheel==0.44.0 docs: commands[0] /w/workspace/transportpce-tox-verify-scandium/tests> sphinx-build -q -W --keep-going -b html -n -d /w/workspace/transportpce-tox-verify-scandium/.tox/docs/tmp/doctrees ../docs/ /w/workspace/transportpce-tox-verify-scandium/docs/_build/html docs-linkcheck: alabaster==1.0.0,attrs==24.2.0,babel==2.16.0,blockdiag==3.0.0,certifi==2024.8.30,charset-normalizer==3.4.0,contourpy==1.3.0,cycler==0.12.1,docutils==0.21.2,fonttools==4.54.1,funcparserlib==2.0.0a0,future==1.0.0,idna==3.10,imagesize==1.4.1,Jinja2==3.1.4,jsonschema==3.2.0,kiwisolver==1.4.7,lfdocs-conf==0.9.0,MarkupSafe==3.0.2,matplotlib==3.9.2,numpy==2.1.2,nwdiag==3.0.0,packaging==24.1,pillow==11.0.0,pip==24.3.1,Pygments==2.18.0,pyparsing==3.2.0,pyrsistent==0.20.0,python-dateutil==2.9.0.post0,PyYAML==6.0.2,requests==2.32.3,requests-file==1.5.1,seqdiag==3.0.0,setuptools==75.2.0,six==1.16.0,snowballstemmer==2.2.0,Sphinx==8.1.3,sphinx-bootstrap-theme==0.8.1,sphinx-data-viewer==0.1.5,sphinx-rtd-theme==3.0.1,sphinx-tabs==3.4.7,sphinxcontrib-applehelp==2.0.0,sphinxcontrib-blockdiag==3.0.0,sphinxcontrib-devhelp==2.0.0,sphinxcontrib-htmlhelp==2.1.0,sphinxcontrib-jquery==4.1,sphinxcontrib-jsmath==1.0.1,sphinxcontrib-needs==0.7.9,sphinxcontrib-nwdiag==2.0.0,sphinxcontrib-plantuml==0.30,sphinxcontrib-qthelp==2.0.0,sphinxcontrib-seqdiag==3.0.0,sphinxcontrib-serializinghtml==2.0.0,sphinxcontrib-swaggerdoc==0.1.7,urllib3==2.2.3,webcolors==24.8.0,wheel==0.44.0 docs-linkcheck: commands[0] /w/workspace/transportpce-tox-verify-scandium/tests> sphinx-build -q -b linkcheck -d /w/workspace/transportpce-tox-verify-scandium/.tox/docs-linkcheck/tmp/doctrees ../docs/ /w/workspace/transportpce-tox-verify-scandium/docs/_build/linkcheck docs: OK ✔ in 32.65 seconds pylint: install_deps> python -I -m pip install 'pylint>=2.6.0' trim trailing whitespace.................................................Passed Tabs remover.............................................................Passed autopep8.................................................................docs-linkcheck: OK ✔ in 33.5 seconds pylint: freeze> python -m pip freeze --all pylint: astroid==3.3.5,dill==0.3.9,isort==5.13.2,mccabe==0.7.0,pip==24.3.1,platformdirs==4.3.6,pylint==3.3.1,setuptools==75.2.0,tomlkit==0.13.2,wheel==0.44.0 pylint: commands[0] 
/w/workspace/transportpce-tox-verify-scandium/tests> find transportpce_tests/ -name '*.py' -exec pylint --fail-under=10 --max-line-length=120 --disable=missing-docstring,import-error --disable=fixme --disable=duplicate-code '--module-rgx=([a-z0-9_]+$)|([0-9.]{1,30}$)' '--method-rgx=(([a-z_][a-zA-Z0-9_]{2,})|(_[a-z0-9_]*)|(__[a-zA-Z][a-zA-Z0-9_]+__))$' '--variable-rgx=[a-zA-Z_][a-zA-Z0-9_]{1,30}$' '{}' + Passed perltidy.................................................................Passed pre-commit: commands[3] /w/workspace/transportpce-tox-verify-scandium/tests> pre-commit run gitlint-ci --hook-stage manual [WARNING] hook id `remove-tabs` uses deprecated stage names (commit) which will be removed in a future version. run: `pre-commit migrate-config` to automatically fix this. [WARNING] hook id `perltidy` uses deprecated stage names (commit) which will be removed in a future version. run: `pre-commit migrate-config` to automatically fix this. [INFO] Installing environment for https://github.com/jorisroovers/gitlint. [INFO] Once installed this environment will be reused. [INFO] This may take a few minutes... gitlint..................................................................Passed ------------------------------------ Your code has been rated at 10.00/10 pre-commit: OK ✔ in 50.83 seconds pylint: OK ✔ in 28.08 seconds buildcontroller: OK ✔ in 1 minute 45.47 seconds sims: install_deps> python -I -m pip install 'setuptools>=7.0' -r /w/workspace/transportpce-tox-verify-scandium/tests/requirements.txt -r /w/workspace/transportpce-tox-verify-scandium/tests/test-requirements.txt testsPCE: install_deps> python -I -m pip install gnpy4tpce==2.4.7 'setuptools>=7.0' -r /w/workspace/transportpce-tox-verify-scandium/tests/requirements.txt -r /w/workspace/transportpce-tox-verify-scandium/tests/test-requirements.txt build_karaf_tests221: install_deps> python -I -m pip install 'setuptools>=7.0' -r /w/workspace/transportpce-tox-verify-scandium/tests/requirements.txt -r /w/workspace/transportpce-tox-verify-scandium/tests/test-requirements.txt build_karaf_tests121: install_deps> python -I -m pip install 'setuptools>=7.0' -r /w/workspace/transportpce-tox-verify-scandium/tests/requirements.txt -r /w/workspace/transportpce-tox-verify-scandium/tests/test-requirements.txt build_karaf_tests221: freeze> python -m pip freeze --all sims: freeze> python -m pip freeze --all build_karaf_tests121: freeze> python -m pip freeze --all build_karaf_tests221: bcrypt==4.2.0,certifi==2024.8.30,cffi==1.17.1,charset-normalizer==3.4.0,cryptography==43.0.3,dict2xml==1.7.6,idna==3.10,iniconfig==2.0.0,lxml==5.3.0,netconf-client==3.1.1,packaging==24.1,paramiko==3.5.0,pip==24.3.1,pluggy==1.5.0,psutil==6.1.0,pycparser==2.22,PyNaCl==1.5.0,pytest==8.3.3,requests==2.32.3,setuptools==75.2.0,urllib3==2.2.3,wheel==0.44.0 build_karaf_tests221: commands[0] /w/workspace/transportpce-tox-verify-scandium/tests> ./build_karaf_for_tests.sh sims: bcrypt==4.2.0,certifi==2024.8.30,cffi==1.17.1,charset-normalizer==3.4.0,cryptography==43.0.3,dict2xml==1.7.6,idna==3.10,iniconfig==2.0.0,lxml==5.3.0,netconf-client==3.1.1,packaging==24.1,paramiko==3.5.0,pip==24.3.1,pluggy==1.5.0,psutil==6.1.0,pycparser==2.22,PyNaCl==1.5.0,pytest==8.3.3,requests==2.32.3,setuptools==75.2.0,urllib3==2.2.3,wheel==0.44.0 sims: commands[0] /w/workspace/transportpce-tox-verify-scandium/tests> ./install_lightynode.sh Using lighynode version 20.1.0.2 Installing lightynode device to ./lightynode/lightynode-openroadm-device directory NOTE: Picked up JDK_JAVA_OPTIONS: 
--add-opens=java.base/java.io=ALL-UNNAMED --add-opens=java.base/java.lang=ALL-UNNAMED --add-opens=java.base/java.lang.invoke=ALL-UNNAMED --add-opens=java.base/java.lang.reflect=ALL-UNNAMED --add-opens=java.base/java.net=ALL-UNNAMED --add-opens=java.base/java.nio=ALL-UNNAMED --add-opens=java.base/java.nio.charset=ALL-UNNAMED --add-opens=java.base/java.nio.file=ALL-UNNAMED --add-opens=java.base/java.util=ALL-UNNAMED --add-opens=java.base/java.util.jar=ALL-UNNAMED --add-opens=java.base/java.util.stream=ALL-UNNAMED --add-opens=java.base/java.util.zip=ALL-UNNAMED --add-opens java.base/sun.nio.ch=ALL-UNNAMED --add-opens java.base/sun.nio.fs=ALL-UNNAMED -Xlog:disable build_karaf_tests121: bcrypt==4.2.0,certifi==2024.8.30,cffi==1.17.1,charset-normalizer==3.4.0,cryptography==43.0.3,dict2xml==1.7.6,idna==3.10,iniconfig==2.0.0,lxml==5.3.0,netconf-client==3.1.1,packaging==24.1,paramiko==3.5.0,pip==24.3.1,pluggy==1.5.0,psutil==6.1.0,pycparser==2.22,PyNaCl==1.5.0,pytest==8.3.3,requests==2.32.3,setuptools==75.2.0,urllib3==2.2.3,wheel==0.44.0 build_karaf_tests121: commands[0] /w/workspace/transportpce-tox-verify-scandium/tests> ./build_karaf_for_tests.sh NOTE: Picked up JDK_JAVA_OPTIONS: --add-opens=java.base/java.io=ALL-UNNAMED --add-opens=java.base/java.lang=ALL-UNNAMED --add-opens=java.base/java.lang.invoke=ALL-UNNAMED --add-opens=java.base/java.lang.reflect=ALL-UNNAMED --add-opens=java.base/java.net=ALL-UNNAMED --add-opens=java.base/java.nio=ALL-UNNAMED --add-opens=java.base/java.nio.charset=ALL-UNNAMED --add-opens=java.base/java.nio.file=ALL-UNNAMED --add-opens=java.base/java.util=ALL-UNNAMED --add-opens=java.base/java.util.jar=ALL-UNNAMED --add-opens=java.base/java.util.stream=ALL-UNNAMED --add-opens=java.base/java.util.zip=ALL-UNNAMED --add-opens java.base/sun.nio.ch=ALL-UNNAMED --add-opens java.base/sun.nio.fs=ALL-UNNAMED -Xlog:disable sims: OK ✔ in 13.27 seconds build_karaf_tests71: install_deps> python -I -m pip install 'setuptools>=7.0' -r /w/workspace/transportpce-tox-verify-scandium/tests/requirements.txt -r /w/workspace/transportpce-tox-verify-scandium/tests/test-requirements.txt build_karaf_tests71: freeze> python -m pip freeze --all build_karaf_tests71: bcrypt==4.2.0,certifi==2024.8.30,cffi==1.17.1,charset-normalizer==3.4.0,cryptography==43.0.3,dict2xml==1.7.6,idna==3.10,iniconfig==2.0.0,lxml==5.3.0,netconf-client==3.1.1,packaging==24.1,paramiko==3.5.0,pip==24.3.1,pluggy==1.5.0,psutil==6.1.0,pycparser==2.22,PyNaCl==1.5.0,pytest==8.3.3,requests==2.32.3,setuptools==75.2.0,urllib3==2.2.3,wheel==0.44.0 build_karaf_tests71: commands[0] /w/workspace/transportpce-tox-verify-scandium/tests> ./build_karaf_for_tests.sh NOTE: Picked up JDK_JAVA_OPTIONS: --add-opens=java.base/java.io=ALL-UNNAMED --add-opens=java.base/java.lang=ALL-UNNAMED --add-opens=java.base/java.lang.invoke=ALL-UNNAMED --add-opens=java.base/java.lang.reflect=ALL-UNNAMED --add-opens=java.base/java.net=ALL-UNNAMED --add-opens=java.base/java.nio=ALL-UNNAMED --add-opens=java.base/java.nio.charset=ALL-UNNAMED --add-opens=java.base/java.nio.file=ALL-UNNAMED --add-opens=java.base/java.util=ALL-UNNAMED --add-opens=java.base/java.util.jar=ALL-UNNAMED --add-opens=java.base/java.util.stream=ALL-UNNAMED --add-opens=java.base/java.util.zip=ALL-UNNAMED --add-opens java.base/sun.nio.ch=ALL-UNNAMED --add-opens java.base/sun.nio.fs=ALL-UNNAMED -Xlog:disable build_karaf_tests221: OK ✔ in 52.63 seconds build_karaf_tests_hybrid: install_deps> python -I -m pip install 'setuptools>=7.0' -r 
/w/workspace/transportpce-tox-verify-scandium/tests/requirements.txt -r /w/workspace/transportpce-tox-verify-scandium/tests/test-requirements.txt build_karaf_tests121: OK ✔ in 53.97 seconds tests_tapi: install_deps> python -I -m pip install 'setuptools>=7.0' -r /w/workspace/transportpce-tox-verify-scandium/tests/requirements.txt -r /w/workspace/transportpce-tox-verify-scandium/tests/test-requirements.txt build_karaf_tests_hybrid: freeze> python -m pip freeze --all build_karaf_tests_hybrid: bcrypt==4.2.0,certifi==2024.8.30,cffi==1.17.1,charset-normalizer==3.4.0,cryptography==43.0.3,dict2xml==1.7.6,idna==3.10,iniconfig==2.0.0,lxml==5.3.0,netconf-client==3.1.1,packaging==24.1,paramiko==3.5.0,pip==24.3.1,pluggy==1.5.0,psutil==6.1.0,pycparser==2.22,PyNaCl==1.5.0,pytest==8.3.3,requests==2.32.3,setuptools==75.2.0,urllib3==2.2.3,wheel==0.44.0 build_karaf_tests_hybrid: commands[0] /w/workspace/transportpce-tox-verify-scandium/tests> ./build_karaf_for_tests.sh NOTE: Picked up JDK_JAVA_OPTIONS: --add-opens=java.base/java.io=ALL-UNNAMED --add-opens=java.base/java.lang=ALL-UNNAMED --add-opens=java.base/java.lang.invoke=ALL-UNNAMED --add-opens=java.base/java.lang.reflect=ALL-UNNAMED --add-opens=java.base/java.net=ALL-UNNAMED --add-opens=java.base/java.nio=ALL-UNNAMED --add-opens=java.base/java.nio.charset=ALL-UNNAMED --add-opens=java.base/java.nio.file=ALL-UNNAMED --add-opens=java.base/java.util=ALL-UNNAMED --add-opens=java.base/java.util.jar=ALL-UNNAMED --add-opens=java.base/java.util.stream=ALL-UNNAMED --add-opens=java.base/java.util.zip=ALL-UNNAMED --add-opens java.base/sun.nio.ch=ALL-UNNAMED --add-opens java.base/sun.nio.fs=ALL-UNNAMED -Xlog:disable testsPCE: freeze> python -m pip freeze --all testsPCE: bcrypt==4.2.0,certifi==2024.8.30,cffi==1.17.1,charset-normalizer==3.4.0,click==8.1.7,contourpy==1.3.0,cryptography==3.3.2,cycler==0.12.1,dict2xml==1.7.6,Flask==2.1.3,Flask-Injector==0.14.0,fonttools==4.54.1,gnpy4tpce==2.4.7,idna==3.10,iniconfig==2.0.0,injector==0.22.0,itsdangerous==2.2.0,Jinja2==3.1.4,kiwisolver==1.4.7,lxml==5.3.0,MarkupSafe==3.0.2,matplotlib==3.9.2,netconf-client==3.1.1,networkx==2.8.8,numpy==1.26.4,packaging==24.1,pandas==1.5.3,paramiko==3.5.0,pbr==5.11.1,pillow==11.0.0,pip==24.3.1,pluggy==1.5.0,psutil==6.1.0,pycparser==2.22,PyNaCl==1.5.0,pyparsing==3.2.0,pytest==8.3.3,python-dateutil==2.9.0.post0,pytz==2024.2,requests==2.32.3,scipy==1.14.1,setuptools==50.3.2,six==1.16.0,urllib3==2.2.3,Werkzeug==2.0.3,wheel==0.44.0,xlrd==1.2.0 testsPCE: commands[0] /w/workspace/transportpce-tox-verify-scandium/tests> ./launch_tests.sh pce pytest -q transportpce_tests/pce/test01_pce.py tests_tapi: freeze> python -m pip freeze --all tests_tapi: bcrypt==4.2.0,certifi==2024.8.30,cffi==1.17.1,charset-normalizer==3.4.0,cryptography==43.0.3,dict2xml==1.7.6,idna==3.10,iniconfig==2.0.0,lxml==5.3.0,netconf-client==3.1.1,packaging==24.1,paramiko==3.5.0,pip==24.3.1,pluggy==1.5.0,psutil==6.1.0,pycparser==2.22,PyNaCl==1.5.0,pytest==8.3.3,requests==2.32.3,setuptools==75.2.0,urllib3==2.2.3,wheel==0.44.0 tests_tapi: commands[0] /w/workspace/transportpce-tox-verify-scandium/tests> ./launch_tests.sh tapi using environment variables from ./karaf221.env pytest -q transportpce_tests/tapi/test01_abstracted_topology.py ............................................ [100%] 20 passed in 126.42s (0:02:06) pytest -q transportpce_tests/pce/test02_pce_400G.py ................. [100%] 9 passed in 41.48s pytest -q transportpce_tests/pce/test03_gnpy.py .............. 
[100%] 8 passed in 37.51s pytest -q transportpce_tests/pce/test04_pce_bug_fix.py ............ [100%] 50 passed in 219.39s (0:03:39) pytest -q transportpce_tests/tapi/test02_full_topology.py ... [100%] 3 passed in 41.23s build_karaf_tests71: OK ✔ in 54.08 seconds build_karaf_tests_hybrid: OK ✔ in 58.78 seconds testsPCE: OK ✔ in 5 minutes 12.56 seconds tests121: install_deps> python -I -m pip install 'setuptools>=7.0' -r /w/workspace/transportpce-tox-verify-scandium/tests/requirements.txt -r /w/workspace/transportpce-tox-verify-scandium/tests/test-requirements.txt tests121: freeze> python -m pip freeze --all tests121: bcrypt==4.2.0,certifi==2024.8.30,cffi==1.17.1,charset-normalizer==3.4.0,cryptography==43.0.3,dict2xml==1.7.6,idna==3.10,iniconfig==2.0.0,lxml==5.3.0,netconf-client==3.1.1,packaging==24.1,paramiko==3.5.0,pip==24.3.1,pluggy==1.5.0,psutil==6.1.0,pycparser==2.22,PyNaCl==1.5.0,pytest==8.3.3,requests==2.32.3,setuptools==75.2.0,urllib3==2.2.3,wheel==0.44.0 tests121: commands[0] /w/workspace/transportpce-tox-verify-scandium/tests> ./launch_tests.sh 1.2.1 using environment variables from ./karaf121.env pytest -q transportpce_tests/1.2.1/test01_portmapping.py ...........FF........................ [100%] =================================== FAILURES =================================== _____________ TransportPCEtesting.test_12_check_openroadm_topology _____________ self = def test_12_check_openroadm_topology(self): response = test_utils.get_ietf_network_request('openroadm-topology', 'config') self.assertEqual(response['status_code'], requests.codes.ok) > self.assertEqual(len(response['network'][0]['node']), 13, 'There should be 13 openroadm nodes') E AssertionError: 17 != 13 : There should be 13 openroadm nodes transportpce_tests/tapi/test02_full_topology.py:272: AssertionError ____________ TransportPCEtesting.test_13_get_tapi_topology_details _____________ self = def test_13_get_tapi_topology_details(self): self.tapi_topo["topology-id"] = test_utils.T0_FULL_MULTILAYER_TOPO_UUID response = test_utils.transportpce_api_rpc_request( 'tapi-topology', 'get-topology-details', self.tapi_topo) time.sleep(2) self.assertEqual(response['status_code'], requests.codes.ok) > self.assertEqual(len(response['output']['topology']['node']), 8, 'There should be 8 TAPI nodes') E AssertionError: 9 != 8 : There should be 8 TAPI nodes transportpce_tests/tapi/test02_full_topology.py:282: AssertionError =========================== short test summary info ============================ FAILED transportpce_tests/tapi/test02_full_topology.py::TransportPCEtesting::test_12_check_openroadm_topology FAILED transportpce_tests/tapi/test02_full_topology.py::TransportPCEtesting::test_13_get_tapi_topology_details 2 failed, 28 passed in 465.44s (0:07:45) tests_tapi: exit 1 (685.36 seconds) /w/workspace/transportpce-tox-verify-scandium/tests> ./launch_tests.sh tapi pid=30665 tests_tapi: FAIL ✖ in 11 minutes 37.22 seconds tests71: install_deps> python -I -m pip install 'setuptools>=7.0' -r /w/workspace/transportpce-tox-verify-scandium/tests/requirements.txt -r /w/workspace/transportpce-tox-verify-scandium/tests/test-requirements.txt .FFtests71: freeze> python -m pip freeze --all tests71: bcrypt==4.2.0,certifi==2024.8.30,cffi==1.17.1,charset-normalizer==3.4.0,cryptography==43.0.3,dict2xml==1.7.6,idna==3.10,iniconfig==2.0.0,lxml==5.3.0,netconf-client==3.1.1,packaging==24.1,paramiko==3.5.0,pip==24.3.1,pluggy==1.5.0,psutil==6.1.0,pycparser==2.22,PyNaCl==1.5.0,pytest==8.3.3,requests==2.32.3,setuptools==75.2.0,urllib3==2.2.3,wheel==0.44.0 
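The two failures above come from transportpce_tests/tapi/test02_full_topology.py, which counts nodes in the openroadm-topology (expected 13, found 17) and in the TAPI topology (expected 8, found 9). Below is a hedged sketch of the first check done directly with requests; the port and admin/admin credentials appear in the traceback further down, but the exact RESTCONF path and JSON keys used by test_utils.get_ietf_network_request are not shown in this log, so the ones here are only plausible placeholders:

    # Fetch the openroadm-topology over RESTCONF and count its nodes (sketch only;
    # URL path and response keys are placeholders, not the project's test_utils code).
    import requests

    BASE = "http://localhost:8182/rests/data"
    URL = f"{BASE}/ietf-network:networks/network=openroadm-topology?content=config"

    response = requests.get(URL, auth=("admin", "admin"),
                            headers={"Accept": "application/json"}, timeout=10)
    assert response.status_code == requests.codes.ok
    nodes = response.json()["ietf-network:network"][0]["node"]
    assert len(nodes) == 13, f"There should be 13 openroadm nodes, found {len(nodes)}"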
tests71: commands[0] /w/workspace/transportpce-tox-verify-scandium/tests> ./launch_tests.sh 7.1 using environment variables from ./karaf71.env pytest -q transportpce_tests/7.1/test01_portmapping.py FFFFFFFFFFF [100%] =================================== FAILURES =================================== _________ TransportPCEPortMappingTesting.test_09_xpdr_portmapping_info _________ self = def _new_conn(self) -> socket.socket: """Establish a socket connection and set nodelay settings on it. :return: New socket connection. """ try: > sock = connection.create_connection( (self._dns_host, self.port), self.timeout, source_address=self.source_address, socket_options=self.socket_options, ) ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:199: _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ ../.tox/tests121/lib/python3.11/site-packages/urllib3/util/connection.py:85: in create_connection raise err _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ address = ('localhost', 8182), timeout = 10, source_address = None socket_options = [(6, 1, 1)] def create_connection( address: tuple[str, int], timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, source_address: tuple[str, int] | None = None, socket_options: _TYPE_SOCKET_OPTIONS | None = None, ) -> socket.socket: """Connect to *address* and return the socket object. Convenience function. Connect to *address* (a 2-tuple ``(host, port)``) and return the socket object. Passing the optional *timeout* parameter will set the timeout on the socket instance before attempting to connect. If no *timeout* is supplied, the global default timeout setting returned by :func:`socket.getdefaulttimeout` is used. If *source_address* is set it must be a tuple of (host, port) for the socket to bind as a source address before making the connection. An host of '' or port 0 tells the OS to use the default. """ host, port = address if host.startswith("["): host = host.strip("[]") err = None # Using the value from allowed_gai_family() in the context of getaddrinfo lets # us select whether to work with IPv4 DNS records, IPv6 records, or both. # The original create_connection function always returns all records. family = allowed_gai_family() try: host.encode("idna") except UnicodeError: raise LocationParseError(f"'{host}', label empty or too long") from None for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM): af, socktype, proto, canonname, sa = res sock = None try: sock = socket.socket(af, socktype, proto) # If provided, set socket level options before connecting. 
_set_socket_options(sock, socket_options) if timeout is not _DEFAULT_TIMEOUT: sock.settimeout(timeout) if source_address: sock.bind(source_address) > sock.connect(sa) E ConnectionRefusedError: [Errno 111] Connection refused ../.tox/tests121/lib/python3.11/site-packages/urllib3/util/connection.py:73: ConnectionRefusedError The above exception was the direct cause of the following exception: self = method = 'GET' url = '/rests/data/transportpce-portmapping:network/nodes=XPDRA01/node-info' body = None headers = {'User-Agent': 'python-requests/2.32.3', 'Accept-Encoding': 'gzip, deflate', 'Accept': 'application/json', 'Connection': 'keep-alive', 'Content-Type': 'application/json', 'Authorization': 'Basic YWRtaW46YWRtaW4='} retries = Retry(total=0, connect=None, read=False, redirect=None, status=None) redirect = False, assert_same_host = False timeout = Timeout(connect=10, read=10, total=None), pool_timeout = None release_conn = False, chunked = False, body_pos = None, preload_content = False decode_content = False, response_kw = {} parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/rests/data/transportpce-portmapping:network/nodes=XPDRA01/node-info', query=None, fragment=None) destination_scheme = None, conn = None, release_this_conn = True http_tunnel_required = False, err = None, clean_exit = False def urlopen( # type: ignore[override] self, method: str, url: str, body: _TYPE_BODY | None = None, headers: typing.Mapping[str, str] | None = None, retries: Retry | bool | int | None = None, redirect: bool = True, assert_same_host: bool = True, timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, pool_timeout: int | None = None, release_conn: bool | None = None, chunked: bool = False, body_pos: _TYPE_BODY_POSITION | None = None, preload_content: bool = True, decode_content: bool = True, **response_kw: typing.Any, ) -> BaseHTTPResponse: """ Get a connection from the pool and perform an HTTP request. This is the lowest level call for making a request, so you'll need to specify all the raw details. .. note:: More commonly, it's appropriate to use a convenience method such as :meth:`request`. .. note:: `release_conn` will only behave as expected if `preload_content=False` because we want to make `preload_content=False` the default behaviour someday soon without breaking backwards compatibility. :param method: HTTP request method (such as GET, POST, PUT, etc.) :param url: The URL to perform the request on. :param body: Data to send in the request body, either :class:`str`, :class:`bytes`, an iterable of :class:`str`/:class:`bytes`, or a file-like object. :param headers: Dictionary of custom headers to send, such as User-Agent, If-None-Match, etc. If None, pool headers are used. If provided, these headers completely replace any pool-specific headers. :param retries: Configure the number of retries to allow before raising a :class:`~urllib3.exceptions.MaxRetryError` exception. If ``None`` (default) will retry 3 times, see ``Retry.DEFAULT``. Pass a :class:`~urllib3.util.retry.Retry` object for fine-grained control over different types of retries. Pass an integer number to retry connection errors that many times, but no other types of errors. Pass zero to never retry. If ``False``, then retries are disabled and any exception is raised immediately. Also, instead of raising a MaxRetryError on redirects, the redirect response will be returned. :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. 
:param redirect: If True, automatically handle redirects (status codes 301, 302, 303, 307, 308). Each redirect counts as a retry. Disabling retries will disable redirect, too. :param assert_same_host: If ``True``, will make sure that the host of the pool requests is consistent else will raise HostChangedError. When ``False``, you can use the pool on an HTTP proxy and request foreign hosts. :param timeout: If specified, overrides the default timeout for this one request. It may be a float (in seconds) or an instance of :class:`urllib3.util.Timeout`. :param pool_timeout: If set and the pool is set to block=True, then this method will block for ``pool_timeout`` seconds and raise EmptyPoolError if no connection is available within the time period. :param bool preload_content: If True, the response's body will be preloaded into memory. :param bool decode_content: If True, will attempt to decode the body based on the 'content-encoding' header. :param release_conn: If False, then the urlopen call will not release the connection back into the pool once a response is received (but will release if you read the entire contents of the response such as when `preload_content=True`). This is useful if you're not preloading the response's content immediately. You will need to call ``r.release_conn()`` on the response ``r`` to return the connection back into the pool. If None, it takes the value of ``preload_content`` which defaults to ``True``. :param bool chunked: If True, urllib3 will send the body using chunked transfer encoding. Otherwise, urllib3 will send the body using the standard content-length form. Defaults to False. :param int body_pos: Position to seek to in file-like body in the event of a retry or redirect. Typically this won't need to be set because urllib3 will auto-populate the value when needed. """ parsed_url = parse_url(url) destination_scheme = parsed_url.scheme if headers is None: headers = self.headers if not isinstance(retries, Retry): retries = Retry.from_int(retries, redirect=redirect, default=self.retries) if release_conn is None: release_conn = preload_content # Check host if assert_same_host and not self.is_same_host(url): raise HostChangedError(self, url, retries) # Ensure that the URL we're connecting to is properly encoded if url.startswith("/"): url = to_str(_encode_target(url)) else: url = to_str(parsed_url.url) conn = None # Track whether `conn` needs to be released before # returning/raising/recursing. Update this variable if necessary, and # leave `release_conn` constant throughout the function. That way, if # the function recurses, the original value of `release_conn` will be # passed down into the recursive call, and its value will be respected. # # See issue #651 [1] for details. # # [1] release_this_conn = release_conn http_tunnel_required = connection_requires_http_tunnel( self.proxy, self.proxy_config, destination_scheme ) # Merge the proxy headers. Only done when not using HTTP CONNECT. We # have to copy the headers dict so we can safely change it without those # changes being reflected in anyone else's copy. if not http_tunnel_required: headers = headers.copy() # type: ignore[attr-defined] headers.update(self.proxy_headers) # type: ignore[union-attr] # Must keep the exception bound to a separate variable or else Python 3 # complains about UnboundLocalError. err = None # Keep track of whether we cleanly exited the except block. This # ensures we do proper cleanup in finally. clean_exit = False # Rewind body position, if needed. 
Record current position # for future rewinds in the event of a redirect/retry. body_pos = set_file_position(body, body_pos) try: # Request a connection from the queue. timeout_obj = self._get_timeout(timeout) conn = self._get_conn(timeout=pool_timeout) conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] # Is this a closed/new connection that requires CONNECT tunnelling? if self.proxy is not None and http_tunnel_required and conn.is_closed: try: self._prepare_proxy(conn) except (BaseSSLError, OSError, SocketTimeout) as e: self._raise_timeout( err=e, url=self.proxy.url, timeout_value=conn.timeout ) raise # If we're going to release the connection in ``finally:``, then # the response doesn't need to know about the connection. Otherwise # it will also try to release it and we'll have a double-release # mess. response_conn = conn if not release_conn else None # Make the request on the HTTPConnection object > response = self._make_request( conn, method, url, timeout=timeout_obj, body=body, headers=headers, chunked=chunked, retries=retries, response_conn=response_conn, preload_content=preload_content, decode_content=decode_content, **response_kw, ) ../.tox/tests121/lib/python3.11/site-packages/urllib3/connectionpool.py:789: _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ ../.tox/tests121/lib/python3.11/site-packages/urllib3/connectionpool.py:495: in _make_request conn.request( ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:441: in request self.endheaders() /opt/pyenv/versions/3.11.7/lib/python3.11/http/client.py:1289: in endheaders self._send_output(message_body, encode_chunked=encode_chunked) /opt/pyenv/versions/3.11.7/lib/python3.11/http/client.py:1048: in _send_output self.send(msg) /opt/pyenv/versions/3.11.7/lib/python3.11/http/client.py:986: in send self.connect() ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:279: in connect self.sock = self._new_conn() _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ self = def _new_conn(self) -> socket.socket: """Establish a socket connection and set nodelay settings on it. :return: New socket connection. """ try: sock = connection.create_connection( (self._dns_host, self.port), self.timeout, source_address=self.source_address, socket_options=self.socket_options, ) except socket.gaierror as e: raise NameResolutionError(self.host, self, e) from e except SocketTimeout as e: raise ConnectTimeoutError( self, f"Connection to {self.host} timed out. (connect timeout={self.timeout})", ) from e except OSError as e: > raise NewConnectionError( self, f"Failed to establish a new connection: {e}" ) from e E urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:214: NewConnectionError The above exception was the direct cause of the following exception: self = request = , stream = False timeout = Timeout(connect=10, read=10, total=None), verify = True, cert = None proxies = OrderedDict() def send( self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None ): """Sends PreparedRequest object. Returns Response object. :param request: The :class:`PreparedRequest ` being sent. :param stream: (optional) Whether to stream the request content. :param timeout: (optional) How long to wait for the server to send data before giving up, as a float, or a :ref:`(connect timeout, read timeout) ` tuple. 
:type timeout: float or tuple or urllib3 Timeout object :param verify: (optional) Either a boolean, in which case it controls whether we verify the server's TLS certificate, or a string, in which case it must be a path to a CA bundle to use :param cert: (optional) Any user-provided SSL certificate to be trusted. :param proxies: (optional) The proxies dictionary to apply to the request. :rtype: requests.Response """ try: conn = self.get_connection_with_tls_context( request, verify, proxies=proxies, cert=cert ) except LocationValueError as e: raise InvalidURL(e, request=request) self.cert_verify(conn, request.url, verify, cert) url = self.request_url(request, proxies) self.add_headers( request, stream=stream, timeout=timeout, verify=verify, cert=cert, proxies=proxies, ) chunked = not (request.body is None or "Content-Length" in request.headers) if isinstance(timeout, tuple): try: connect, read = timeout timeout = TimeoutSauce(connect=connect, read=read) except ValueError: raise ValueError( f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " f"or a single float to set both timeouts to the same value." ) elif isinstance(timeout, TimeoutSauce): pass else: timeout = TimeoutSauce(connect=timeout, read=timeout) try: > resp = conn.urlopen( method=request.method, url=url, body=request.body, headers=request.headers, redirect=False, assert_same_host=False, preload_content=False, decode_content=False, retries=self.max_retries, timeout=timeout, chunked=chunked, ) ../.tox/tests121/lib/python3.11/site-packages/requests/adapters.py:667: _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ ../.tox/tests121/lib/python3.11/site-packages/urllib3/connectionpool.py:843: in urlopen retries = retries.increment( _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ self = Retry(total=0, connect=None, read=False, redirect=None, status=None) method = 'GET' url = '/rests/data/transportpce-portmapping:network/nodes=XPDRA01/node-info' response = None error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused') _pool = _stacktrace = def increment( self, method: str | None = None, url: str | None = None, response: BaseHTTPResponse | None = None, error: Exception | None = None, _pool: ConnectionPool | None = None, _stacktrace: TracebackType | None = None, ) -> Self: """Return a new Retry object with incremented retry counters. :param response: A response object, or None, if the server did not return a response. :type response: :class:`~urllib3.response.BaseHTTPResponse` :param Exception error: An error encountered during the request, or None if the response was received successfully. :return: A new ``Retry`` object. """ if self.total is False and error: # Disabled, indicate to re-raise the error. raise reraise(type(error), error, _stacktrace) total = self.total if total is not None: total -= 1 connect = self.connect read = self.read redirect = self.redirect status_count = self.status other = self.other cause = "unknown" status = None redirect_location = None if error and self._is_connection_error(error): # Connect retry? if connect is False: raise reraise(type(error), error, _stacktrace) elif connect is not None: connect -= 1 elif error and self._is_read_error(error): # Read retry? if read is False or method is None or not self._is_method_retryable(method): raise reraise(type(error), error, _stacktrace) elif read is not None: read -= 1 elif error: # Other retry? 
if other is not None: other -= 1 elif response and response.get_redirect_location(): # Redirect retry? if redirect is not None: redirect -= 1 cause = "too many redirects" response_redirect_location = response.get_redirect_location() if response_redirect_location: redirect_location = response_redirect_location status = response.status else: # Incrementing because of a server error like a 500 in # status_forcelist and the given method is in the allowed_methods cause = ResponseError.GENERIC_ERROR if response and response.status: if status_count is not None: status_count -= 1 cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status) status = response.status history = self.history + ( RequestHistory(method, url, error, status, redirect_location), ) new_retry = self.new( total=total, connect=connect, read=read, redirect=redirect, status=status_count, other=other, history=history, ) if new_retry.is_exhausted(): reason = error or ResponseError(cause) > raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type] E urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=8182): Max retries exceeded with url: /rests/data/transportpce-portmapping:network/nodes=XPDRA01/node-info (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) ../.tox/tests121/lib/python3.11/site-packages/urllib3/util/retry.py:519: MaxRetryError During handling of the above exception, another exception occurred: self = def test_09_xpdr_portmapping_info(self): > response = test_utils.get_portmapping_node_attr("XPDRA01", "node-info", None) transportpce_tests/1.2.1/test01_portmapping.py:109: _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ transportpce_tests/common/test_utils.py:473: in get_portmapping_node_attr response = get_request(target_url) transportpce_tests/common/test_utils.py:116: in get_request return requests.request( ../.tox/tests121/lib/python3.11/site-packages/requests/api.py:59: in request return session.request(method=method, url=url, **kwargs) ../.tox/tests121/lib/python3.11/site-packages/requests/sessions.py:589: in request resp = self.send(prep, **send_kwargs) ../.tox/tests121/lib/python3.11/site-packages/requests/sessions.py:703: in send r = adapter.send(request, **kwargs) _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ self = request = , stream = False timeout = Timeout(connect=10, read=10, total=None), verify = True, cert = None proxies = OrderedDict() def send( self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None ): """Sends PreparedRequest object. Returns Response object. :param request: The :class:`PreparedRequest ` being sent. :param stream: (optional) Whether to stream the request content. :param timeout: (optional) How long to wait for the server to send data before giving up, as a float, or a :ref:`(connect timeout, read timeout) ` tuple. :type timeout: float or tuple or urllib3 Timeout object :param verify: (optional) Either a boolean, in which case it controls whether we verify the server's TLS certificate, or a string, in which case it must be a path to a CA bundle to use :param cert: (optional) Any user-provided SSL certificate to be trusted. :param proxies: (optional) The proxies dictionary to apply to the request. 
:rtype: requests.Response """ try: conn = self.get_connection_with_tls_context( request, verify, proxies=proxies, cert=cert ) except LocationValueError as e: raise InvalidURL(e, request=request) self.cert_verify(conn, request.url, verify, cert) url = self.request_url(request, proxies) self.add_headers( request, stream=stream, timeout=timeout, verify=verify, cert=cert, proxies=proxies, ) chunked = not (request.body is None or "Content-Length" in request.headers) if isinstance(timeout, tuple): try: connect, read = timeout timeout = TimeoutSauce(connect=connect, read=read) except ValueError: raise ValueError( f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " f"or a single float to set both timeouts to the same value." ) elif isinstance(timeout, TimeoutSauce): pass else: timeout = TimeoutSauce(connect=timeout, read=timeout) try: resp = conn.urlopen( method=request.method, url=url, body=request.body, headers=request.headers, redirect=False, assert_same_host=False, preload_content=False, decode_content=False, retries=self.max_retries, timeout=timeout, chunked=chunked, ) except (ProtocolError, OSError) as err: raise ConnectionError(err, request=request) except MaxRetryError as e: if isinstance(e.reason, ConnectTimeoutError): # TODO: Remove this in 3.0.0: see #2811 if not isinstance(e.reason, NewConnectionError): raise ConnectTimeout(e, request=request) if isinstance(e.reason, ResponseError): raise RetryError(e, request=request) if isinstance(e.reason, _ProxyError): raise ProxyError(e, request=request) if isinstance(e.reason, _SSLError): # This branch is for urllib3 v1.22 and later. raise SSLError(e, request=request) > raise ConnectionError(e, request=request) E requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=8182): Max retries exceeded with url: /rests/data/transportpce-portmapping:network/nodes=XPDRA01/node-info (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) ../.tox/tests121/lib/python3.11/site-packages/requests/adapters.py:700: ConnectionError ----------------------------- Captured stdout call ----------------------------- execution of test_09_xpdr_portmapping_info _______ TransportPCEPortMappingTesting.test_10_xpdr_portmapping_NETWORK1 _______ self = def _new_conn(self) -> socket.socket: """Establish a socket connection and set nodelay settings on it. :return: New socket connection. """ try: > sock = connection.create_connection( (self._dns_host, self.port), self.timeout, source_address=self.source_address, socket_options=self.socket_options, ) ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:199: _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ ../.tox/tests121/lib/python3.11/site-packages/urllib3/util/connection.py:85: in create_connection raise err _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ address = ('localhost', 8182), timeout = 10, source_address = None socket_options = [(6, 1, 1)] def create_connection( address: tuple[str, int], timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, source_address: tuple[str, int] | None = None, socket_options: _TYPE_SOCKET_OPTIONS | None = None, ) -> socket.socket: """Connect to *address* and return the socket object. Convenience function. Connect to *address* (a 2-tuple ``(host, port)``) and return the socket object. Passing the optional *timeout* parameter will set the timeout on the socket instance before attempting to connect. 
If no *timeout* is supplied, the global default timeout setting returned by :func:`socket.getdefaulttimeout` is used. If *source_address* is set it must be a tuple of (host, port) for the socket to bind as a source address before making the connection. An host of '' or port 0 tells the OS to use the default. """ host, port = address if host.startswith("["): host = host.strip("[]") err = None # Using the value from allowed_gai_family() in the context of getaddrinfo lets # us select whether to work with IPv4 DNS records, IPv6 records, or both. # The original create_connection function always returns all records. family = allowed_gai_family() try: host.encode("idna") except UnicodeError: raise LocationParseError(f"'{host}', label empty or too long") from None for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM): af, socktype, proto, canonname, sa = res sock = None try: sock = socket.socket(af, socktype, proto) # If provided, set socket level options before connecting. _set_socket_options(sock, socket_options) if timeout is not _DEFAULT_TIMEOUT: sock.settimeout(timeout) if source_address: sock.bind(source_address) > sock.connect(sa) E ConnectionRefusedError: [Errno 111] Connection refused ../.tox/tests121/lib/python3.11/site-packages/urllib3/util/connection.py:73: ConnectionRefusedError The above exception was the direct cause of the following exception: self = method = 'GET' url = '/rests/data/transportpce-portmapping:network/nodes=XPDRA01/mapping=XPDR1-NETWORK1' body = None headers = {'User-Agent': 'python-requests/2.32.3', 'Accept-Encoding': 'gzip, deflate', 'Accept': 'application/json', 'Connection': 'keep-alive', 'Content-Type': 'application/json', 'Authorization': 'Basic YWRtaW46YWRtaW4='} retries = Retry(total=0, connect=None, read=False, redirect=None, status=None) redirect = False, assert_same_host = False timeout = Timeout(connect=10, read=10, total=None), pool_timeout = None release_conn = False, chunked = False, body_pos = None, preload_content = False decode_content = False, response_kw = {} parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/rests/data/transportpce-portmapping:network/nodes=XPDRA01/mapping=XPDR1-NETWORK1', query=None, fragment=None) destination_scheme = None, conn = None, release_this_conn = True http_tunnel_required = False, err = None, clean_exit = False def urlopen( # type: ignore[override] self, method: str, url: str, body: _TYPE_BODY | None = None, headers: typing.Mapping[str, str] | None = None, retries: Retry | bool | int | None = None, redirect: bool = True, assert_same_host: bool = True, timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, pool_timeout: int | None = None, release_conn: bool | None = None, chunked: bool = False, body_pos: _TYPE_BODY_POSITION | None = None, preload_content: bool = True, decode_content: bool = True, **response_kw: typing.Any, ) -> BaseHTTPResponse: """ Get a connection from the pool and perform an HTTP request. This is the lowest level call for making a request, so you'll need to specify all the raw details. .. note:: More commonly, it's appropriate to use a convenience method such as :meth:`request`. .. note:: `release_conn` will only behave as expected if `preload_content=False` because we want to make `preload_content=False` the default behaviour someday soon without breaking backwards compatibility. :param method: HTTP request method (such as GET, POST, PUT, etc.) :param url: The URL to perform the request on. 
:param body: Data to send in the request body, either :class:`str`, :class:`bytes`, an iterable of :class:`str`/:class:`bytes`, or a file-like object. :param headers: Dictionary of custom headers to send, such as User-Agent, If-None-Match, etc. If None, pool headers are used. If provided, these headers completely replace any pool-specific headers. :param retries: Configure the number of retries to allow before raising a :class:`~urllib3.exceptions.MaxRetryError` exception. If ``None`` (default) will retry 3 times, see ``Retry.DEFAULT``. Pass a :class:`~urllib3.util.retry.Retry` object for fine-grained control over different types of retries. Pass an integer number to retry connection errors that many times, but no other types of errors. Pass zero to never retry. If ``False``, then retries are disabled and any exception is raised immediately. Also, instead of raising a MaxRetryError on redirects, the redirect response will be returned. :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. :param redirect: If True, automatically handle redirects (status codes 301, 302, 303, 307, 308). Each redirect counts as a retry. Disabling retries will disable redirect, too. :param assert_same_host: If ``True``, will make sure that the host of the pool requests is consistent else will raise HostChangedError. When ``False``, you can use the pool on an HTTP proxy and request foreign hosts. :param timeout: If specified, overrides the default timeout for this one request. It may be a float (in seconds) or an instance of :class:`urllib3.util.Timeout`. :param pool_timeout: If set and the pool is set to block=True, then this method will block for ``pool_timeout`` seconds and raise EmptyPoolError if no connection is available within the time period. :param bool preload_content: If True, the response's body will be preloaded into memory. :param bool decode_content: If True, will attempt to decode the body based on the 'content-encoding' header. :param release_conn: If False, then the urlopen call will not release the connection back into the pool once a response is received (but will release if you read the entire contents of the response such as when `preload_content=True`). This is useful if you're not preloading the response's content immediately. You will need to call ``r.release_conn()`` on the response ``r`` to return the connection back into the pool. If None, it takes the value of ``preload_content`` which defaults to ``True``. :param bool chunked: If True, urllib3 will send the body using chunked transfer encoding. Otherwise, urllib3 will send the body using the standard content-length form. Defaults to False. :param int body_pos: Position to seek to in file-like body in the event of a retry or redirect. Typically this won't need to be set because urllib3 will auto-populate the value when needed. """ parsed_url = parse_url(url) destination_scheme = parsed_url.scheme if headers is None: headers = self.headers if not isinstance(retries, Retry): retries = Retry.from_int(retries, redirect=redirect, default=self.retries) if release_conn is None: release_conn = preload_content # Check host if assert_same_host and not self.is_same_host(url): raise HostChangedError(self, url, retries) # Ensure that the URL we're connecting to is properly encoded if url.startswith("/"): url = to_str(_encode_target(url)) else: url = to_str(parsed_url.url) conn = None # Track whether `conn` needs to be released before # returning/raising/recursing. 
Update this variable if necessary, and # leave `release_conn` constant throughout the function. That way, if # the function recurses, the original value of `release_conn` will be # passed down into the recursive call, and its value will be respected. # # See issue #651 [1] for details. # # [1] release_this_conn = release_conn http_tunnel_required = connection_requires_http_tunnel( self.proxy, self.proxy_config, destination_scheme ) # Merge the proxy headers. Only done when not using HTTP CONNECT. We # have to copy the headers dict so we can safely change it without those # changes being reflected in anyone else's copy. if not http_tunnel_required: headers = headers.copy() # type: ignore[attr-defined] headers.update(self.proxy_headers) # type: ignore[union-attr] # Must keep the exception bound to a separate variable or else Python 3 # complains about UnboundLocalError. err = None # Keep track of whether we cleanly exited the except block. This # ensures we do proper cleanup in finally. clean_exit = False # Rewind body position, if needed. Record current position # for future rewinds in the event of a redirect/retry. body_pos = set_file_position(body, body_pos) try: # Request a connection from the queue. timeout_obj = self._get_timeout(timeout) conn = self._get_conn(timeout=pool_timeout) conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] # Is this a closed/new connection that requires CONNECT tunnelling? if self.proxy is not None and http_tunnel_required and conn.is_closed: try: self._prepare_proxy(conn) except (BaseSSLError, OSError, SocketTimeout) as e: self._raise_timeout( err=e, url=self.proxy.url, timeout_value=conn.timeout ) raise # If we're going to release the connection in ``finally:``, then # the response doesn't need to know about the connection. Otherwise # it will also try to release it and we'll have a double-release # mess. response_conn = conn if not release_conn else None # Make the request on the HTTPConnection object > response = self._make_request( conn, method, url, timeout=timeout_obj, body=body, headers=headers, chunked=chunked, retries=retries, response_conn=response_conn, preload_content=preload_content, decode_content=decode_content, **response_kw, ) ../.tox/tests121/lib/python3.11/site-packages/urllib3/connectionpool.py:789: _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ ../.tox/tests121/lib/python3.11/site-packages/urllib3/connectionpool.py:495: in _make_request conn.request( ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:441: in request self.endheaders() /opt/pyenv/versions/3.11.7/lib/python3.11/http/client.py:1289: in endheaders self._send_output(message_body, encode_chunked=encode_chunked) /opt/pyenv/versions/3.11.7/lib/python3.11/http/client.py:1048: in _send_output self.send(msg) /opt/pyenv/versions/3.11.7/lib/python3.11/http/client.py:986: in send self.connect() ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:279: in connect self.sock = self._new_conn() _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ self = def _new_conn(self) -> socket.socket: """Establish a socket connection and set nodelay settings on it. :return: New socket connection. 
""" try: sock = connection.create_connection( (self._dns_host, self.port), self.timeout, source_address=self.source_address, socket_options=self.socket_options, ) except socket.gaierror as e: raise NameResolutionError(self.host, self, e) from e except SocketTimeout as e: raise ConnectTimeoutError( self, f"Connection to {self.host} timed out. (connect timeout={self.timeout})", ) from e except OSError as e: > raise NewConnectionError( self, f"Failed to establish a new connection: {e}" ) from e E urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:214: NewConnectionError The above exception was the direct cause of the following exception: self = request = , stream = False timeout = Timeout(connect=10, read=10, total=None), verify = True, cert = None proxies = OrderedDict() def send( self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None ): """Sends PreparedRequest object. Returns Response object. :param request: The :class:`PreparedRequest ` being sent. :param stream: (optional) Whether to stream the request content. :param timeout: (optional) How long to wait for the server to send data before giving up, as a float, or a :ref:`(connect timeout, read timeout) ` tuple. :type timeout: float or tuple or urllib3 Timeout object :param verify: (optional) Either a boolean, in which case it controls whether we verify the server's TLS certificate, or a string, in which case it must be a path to a CA bundle to use :param cert: (optional) Any user-provided SSL certificate to be trusted. :param proxies: (optional) The proxies dictionary to apply to the request. :rtype: requests.Response """ try: conn = self.get_connection_with_tls_context( request, verify, proxies=proxies, cert=cert ) except LocationValueError as e: raise InvalidURL(e, request=request) self.cert_verify(conn, request.url, verify, cert) url = self.request_url(request, proxies) self.add_headers( request, stream=stream, timeout=timeout, verify=verify, cert=cert, proxies=proxies, ) chunked = not (request.body is None or "Content-Length" in request.headers) if isinstance(timeout, tuple): try: connect, read = timeout timeout = TimeoutSauce(connect=connect, read=read) except ValueError: raise ValueError( f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " f"or a single float to set both timeouts to the same value." 
) elif isinstance(timeout, TimeoutSauce): pass else: timeout = TimeoutSauce(connect=timeout, read=timeout) try: > resp = conn.urlopen( method=request.method, url=url, body=request.body, headers=request.headers, redirect=False, assert_same_host=False, preload_content=False, decode_content=False, retries=self.max_retries, timeout=timeout, chunked=chunked, ) ../.tox/tests121/lib/python3.11/site-packages/requests/adapters.py:667: _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ ../.tox/tests121/lib/python3.11/site-packages/urllib3/connectionpool.py:843: in urlopen retries = retries.increment( _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ self = Retry(total=0, connect=None, read=False, redirect=None, status=None) method = 'GET' url = '/rests/data/transportpce-portmapping:network/nodes=XPDRA01/mapping=XPDR1-NETWORK1' response = None error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused') _pool = _stacktrace = def increment( self, method: str | None = None, url: str | None = None, response: BaseHTTPResponse | None = None, error: Exception | None = None, _pool: ConnectionPool | None = None, _stacktrace: TracebackType | None = None, ) -> Self: """Return a new Retry object with incremented retry counters. :param response: A response object, or None, if the server did not return a response. :type response: :class:`~urllib3.response.BaseHTTPResponse` :param Exception error: An error encountered during the request, or None if the response was received successfully. :return: A new ``Retry`` object. """ if self.total is False and error: # Disabled, indicate to re-raise the error. raise reraise(type(error), error, _stacktrace) total = self.total if total is not None: total -= 1 connect = self.connect read = self.read redirect = self.redirect status_count = self.status other = self.other cause = "unknown" status = None redirect_location = None if error and self._is_connection_error(error): # Connect retry? if connect is False: raise reraise(type(error), error, _stacktrace) elif connect is not None: connect -= 1 elif error and self._is_read_error(error): # Read retry? if read is False or method is None or not self._is_method_retryable(method): raise reraise(type(error), error, _stacktrace) elif read is not None: read -= 1 elif error: # Other retry? if other is not None: other -= 1 elif response and response.get_redirect_location(): # Redirect retry? 
if redirect is not None: redirect -= 1 cause = "too many redirects" response_redirect_location = response.get_redirect_location() if response_redirect_location: redirect_location = response_redirect_location status = response.status else: # Incrementing because of a server error like a 500 in # status_forcelist and the given method is in the allowed_methods cause = ResponseError.GENERIC_ERROR if response and response.status: if status_count is not None: status_count -= 1 cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status) status = response.status history = self.history + ( RequestHistory(method, url, error, status, redirect_location), ) new_retry = self.new( total=total, connect=connect, read=read, redirect=redirect, status=status_count, other=other, history=history, ) if new_retry.is_exhausted(): reason = error or ResponseError(cause) > raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type] E urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=8182): Max retries exceeded with url: /rests/data/transportpce-portmapping:network/nodes=XPDRA01/mapping=XPDR1-NETWORK1 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) ../.tox/tests121/lib/python3.11/site-packages/urllib3/util/retry.py:519: MaxRetryError During handling of the above exception, another exception occurred: self = def test_10_xpdr_portmapping_NETWORK1(self): > response = test_utils.get_portmapping_node_attr("XPDRA01", "mapping", "XPDR1-NETWORK1") transportpce_tests/1.2.1/test01_portmapping.py:122: _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ transportpce_tests/common/test_utils.py:473: in get_portmapping_node_attr response = get_request(target_url) transportpce_tests/common/test_utils.py:116: in get_request return requests.request( ../.tox/tests121/lib/python3.11/site-packages/requests/api.py:59: in request return session.request(method=method, url=url, **kwargs) ../.tox/tests121/lib/python3.11/site-packages/requests/sessions.py:589: in request resp = self.send(prep, **send_kwargs) ../.tox/tests121/lib/python3.11/site-packages/requests/sessions.py:703: in send r = adapter.send(request, **kwargs) _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ self = request = , stream = False timeout = Timeout(connect=10, read=10, total=None), verify = True, cert = None proxies = OrderedDict() def send( self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None ): """Sends PreparedRequest object. Returns Response object. :param request: The :class:`PreparedRequest ` being sent. :param stream: (optional) Whether to stream the request content. :param timeout: (optional) How long to wait for the server to send data before giving up, as a float, or a :ref:`(connect timeout, read timeout) ` tuple. :type timeout: float or tuple or urllib3 Timeout object :param verify: (optional) Either a boolean, in which case it controls whether we verify the server's TLS certificate, or a string, in which case it must be a path to a CA bundle to use :param cert: (optional) Any user-provided SSL certificate to be trusted. :param proxies: (optional) The proxies dictionary to apply to the request. 
:rtype: requests.Response """ try: conn = self.get_connection_with_tls_context( request, verify, proxies=proxies, cert=cert ) except LocationValueError as e: raise InvalidURL(e, request=request) self.cert_verify(conn, request.url, verify, cert) url = self.request_url(request, proxies) self.add_headers( request, stream=stream, timeout=timeout, verify=verify, cert=cert, proxies=proxies, ) chunked = not (request.body is None or "Content-Length" in request.headers) if isinstance(timeout, tuple): try: connect, read = timeout timeout = TimeoutSauce(connect=connect, read=read) except ValueError: raise ValueError( f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " f"or a single float to set both timeouts to the same value." ) elif isinstance(timeout, TimeoutSauce): pass else: timeout = TimeoutSauce(connect=timeout, read=timeout) try: resp = conn.urlopen( method=request.method, url=url, body=request.body, headers=request.headers, redirect=False, assert_same_host=False, preload_content=False, decode_content=False, retries=self.max_retries, timeout=timeout, chunked=chunked, ) except (ProtocolError, OSError) as err: raise ConnectionError(err, request=request) except MaxRetryError as e: if isinstance(e.reason, ConnectTimeoutError): # TODO: Remove this in 3.0.0: see #2811 if not isinstance(e.reason, NewConnectionError): raise ConnectTimeout(e, request=request) if isinstance(e.reason, ResponseError): raise RetryError(e, request=request) if isinstance(e.reason, _ProxyError): raise ProxyError(e, request=request) if isinstance(e.reason, _SSLError): # This branch is for urllib3 v1.22 and later. raise SSLError(e, request=request) > raise ConnectionError(e, request=request) E requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=8182): Max retries exceeded with url: /rests/data/transportpce-portmapping:network/nodes=XPDRA01/mapping=XPDR1-NETWORK1 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) ../.tox/tests121/lib/python3.11/site-packages/requests/adapters.py:700: ConnectionError ----------------------------- Captured stdout call ----------------------------- execution of test_10_xpdr_portmapping_NETWORK1 _______ TransportPCEPortMappingTesting.test_11_xpdr_portmapping_NETWORK2 _______ self = def _new_conn(self) -> socket.socket: """Establish a socket connection and set nodelay settings on it. :return: New socket connection. """ try: > sock = connection.create_connection( (self._dns_host, self.port), self.timeout, source_address=self.source_address, socket_options=self.socket_options, ) ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:199: _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ ../.tox/tests121/lib/python3.11/site-packages/urllib3/util/connection.py:85: in create_connection raise err _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ address = ('localhost', 8182), timeout = 10, source_address = None socket_options = [(6, 1, 1)] def create_connection( address: tuple[str, int], timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, source_address: tuple[str, int] | None = None, socket_options: _TYPE_SOCKET_OPTIONS | None = None, ) -> socket.socket: """Connect to *address* and return the socket object. Convenience function. Connect to *address* (a 2-tuple ``(host, port)``) and return the socket object. Passing the optional *timeout* parameter will set the timeout on the socket instance before attempting to connect. 
If no *timeout* is supplied, the global default timeout setting returned by :func:`socket.getdefaulttimeout` is used. If *source_address* is set it must be a tuple of (host, port) for the socket to bind as a source address before making the connection. An host of '' or port 0 tells the OS to use the default. """ host, port = address if host.startswith("["): host = host.strip("[]") err = None # Using the value from allowed_gai_family() in the context of getaddrinfo lets # us select whether to work with IPv4 DNS records, IPv6 records, or both. # The original create_connection function always returns all records. family = allowed_gai_family() try: host.encode("idna") except UnicodeError: raise LocationParseError(f"'{host}', label empty or too long") from None for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM): af, socktype, proto, canonname, sa = res sock = None try: sock = socket.socket(af, socktype, proto) # If provided, set socket level options before connecting. _set_socket_options(sock, socket_options) if timeout is not _DEFAULT_TIMEOUT: sock.settimeout(timeout) if source_address: sock.bind(source_address) > sock.connect(sa) E ConnectionRefusedError: [Errno 111] Connection refused ../.tox/tests121/lib/python3.11/site-packages/urllib3/util/connection.py:73: ConnectionRefusedError The above exception was the direct cause of the following exception: self = method = 'GET' url = '/rests/data/transportpce-portmapping:network/nodes=XPDRA01/mapping=XPDR1-NETWORK2' body = None headers = {'User-Agent': 'python-requests/2.32.3', 'Accept-Encoding': 'gzip, deflate', 'Accept': 'application/json', 'Connection': 'keep-alive', 'Content-Type': 'application/json', 'Authorization': 'Basic YWRtaW46YWRtaW4='} retries = Retry(total=0, connect=None, read=False, redirect=None, status=None) redirect = False, assert_same_host = False timeout = Timeout(connect=10, read=10, total=None), pool_timeout = None release_conn = False, chunked = False, body_pos = None, preload_content = False decode_content = False, response_kw = {} parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/rests/data/transportpce-portmapping:network/nodes=XPDRA01/mapping=XPDR1-NETWORK2', query=None, fragment=None) destination_scheme = None, conn = None, release_this_conn = True http_tunnel_required = False, err = None, clean_exit = False def urlopen( # type: ignore[override] self, method: str, url: str, body: _TYPE_BODY | None = None, headers: typing.Mapping[str, str] | None = None, retries: Retry | bool | int | None = None, redirect: bool = True, assert_same_host: bool = True, timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, pool_timeout: int | None = None, release_conn: bool | None = None, chunked: bool = False, body_pos: _TYPE_BODY_POSITION | None = None, preload_content: bool = True, decode_content: bool = True, **response_kw: typing.Any, ) -> BaseHTTPResponse: """ Get a connection from the pool and perform an HTTP request. This is the lowest level call for making a request, so you'll need to specify all the raw details. .. note:: More commonly, it's appropriate to use a convenience method such as :meth:`request`. .. note:: `release_conn` will only behave as expected if `preload_content=False` because we want to make `preload_content=False` the default behaviour someday soon without breaking backwards compatibility. :param method: HTTP request method (such as GET, POST, PUT, etc.) :param url: The URL to perform the request on. 
:param body: Data to send in the request body, either :class:`str`, :class:`bytes`, an iterable of :class:`str`/:class:`bytes`, or a file-like object. :param headers: Dictionary of custom headers to send, such as User-Agent, If-None-Match, etc. If None, pool headers are used. If provided, these headers completely replace any pool-specific headers. :param retries: Configure the number of retries to allow before raising a :class:`~urllib3.exceptions.MaxRetryError` exception. If ``None`` (default) will retry 3 times, see ``Retry.DEFAULT``. Pass a :class:`~urllib3.util.retry.Retry` object for fine-grained control over different types of retries. Pass an integer number to retry connection errors that many times, but no other types of errors. Pass zero to never retry. If ``False``, then retries are disabled and any exception is raised immediately. Also, instead of raising a MaxRetryError on redirects, the redirect response will be returned. :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. :param redirect: If True, automatically handle redirects (status codes 301, 302, 303, 307, 308). Each redirect counts as a retry. Disabling retries will disable redirect, too. :param assert_same_host: If ``True``, will make sure that the host of the pool requests is consistent else will raise HostChangedError. When ``False``, you can use the pool on an HTTP proxy and request foreign hosts. :param timeout: If specified, overrides the default timeout for this one request. It may be a float (in seconds) or an instance of :class:`urllib3.util.Timeout`. :param pool_timeout: If set and the pool is set to block=True, then this method will block for ``pool_timeout`` seconds and raise EmptyPoolError if no connection is available within the time period. :param bool preload_content: If True, the response's body will be preloaded into memory. :param bool decode_content: If True, will attempt to decode the body based on the 'content-encoding' header. :param release_conn: If False, then the urlopen call will not release the connection back into the pool once a response is received (but will release if you read the entire contents of the response such as when `preload_content=True`). This is useful if you're not preloading the response's content immediately. You will need to call ``r.release_conn()`` on the response ``r`` to return the connection back into the pool. If None, it takes the value of ``preload_content`` which defaults to ``True``. :param bool chunked: If True, urllib3 will send the body using chunked transfer encoding. Otherwise, urllib3 will send the body using the standard content-length form. Defaults to False. :param int body_pos: Position to seek to in file-like body in the event of a retry or redirect. Typically this won't need to be set because urllib3 will auto-populate the value when needed. """ parsed_url = parse_url(url) destination_scheme = parsed_url.scheme if headers is None: headers = self.headers if not isinstance(retries, Retry): retries = Retry.from_int(retries, redirect=redirect, default=self.retries) if release_conn is None: release_conn = preload_content # Check host if assert_same_host and not self.is_same_host(url): raise HostChangedError(self, url, retries) # Ensure that the URL we're connecting to is properly encoded if url.startswith("/"): url = to_str(_encode_target(url)) else: url = to_str(parsed_url.url) conn = None # Track whether `conn` needs to be released before # returning/raising/recursing. 
Update this variable if necessary, and # leave `release_conn` constant throughout the function. That way, if # the function recurses, the original value of `release_conn` will be # passed down into the recursive call, and its value will be respected. # # See issue #651 [1] for details. # # [1] release_this_conn = release_conn http_tunnel_required = connection_requires_http_tunnel( self.proxy, self.proxy_config, destination_scheme ) # Merge the proxy headers. Only done when not using HTTP CONNECT. We # have to copy the headers dict so we can safely change it without those # changes being reflected in anyone else's copy. if not http_tunnel_required: headers = headers.copy() # type: ignore[attr-defined] headers.update(self.proxy_headers) # type: ignore[union-attr] # Must keep the exception bound to a separate variable or else Python 3 # complains about UnboundLocalError. err = None # Keep track of whether we cleanly exited the except block. This # ensures we do proper cleanup in finally. clean_exit = False # Rewind body position, if needed. Record current position # for future rewinds in the event of a redirect/retry. body_pos = set_file_position(body, body_pos) try: # Request a connection from the queue. timeout_obj = self._get_timeout(timeout) conn = self._get_conn(timeout=pool_timeout) conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] # Is this a closed/new connection that requires CONNECT tunnelling? if self.proxy is not None and http_tunnel_required and conn.is_closed: try: self._prepare_proxy(conn) except (BaseSSLError, OSError, SocketTimeout) as e: self._raise_timeout( err=e, url=self.proxy.url, timeout_value=conn.timeout ) raise # If we're going to release the connection in ``finally:``, then # the response doesn't need to know about the connection. Otherwise # it will also try to release it and we'll have a double-release # mess. response_conn = conn if not release_conn else None # Make the request on the HTTPConnection object > response = self._make_request( conn, method, url, timeout=timeout_obj, body=body, headers=headers, chunked=chunked, retries=retries, response_conn=response_conn, preload_content=preload_content, decode_content=decode_content, **response_kw, ) ../.tox/tests121/lib/python3.11/site-packages/urllib3/connectionpool.py:789: _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ ../.tox/tests121/lib/python3.11/site-packages/urllib3/connectionpool.py:495: in _make_request conn.request( ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:441: in request self.endheaders() /opt/pyenv/versions/3.11.7/lib/python3.11/http/client.py:1289: in endheaders self._send_output(message_body, encode_chunked=encode_chunked) /opt/pyenv/versions/3.11.7/lib/python3.11/http/client.py:1048: in _send_output self.send(msg) /opt/pyenv/versions/3.11.7/lib/python3.11/http/client.py:986: in send self.connect() ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:279: in connect self.sock = self._new_conn() _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ self = def _new_conn(self) -> socket.socket: """Establish a socket connection and set nodelay settings on it. :return: New socket connection. 
""" try: sock = connection.create_connection( (self._dns_host, self.port), self.timeout, source_address=self.source_address, socket_options=self.socket_options, ) except socket.gaierror as e: raise NameResolutionError(self.host, self, e) from e except SocketTimeout as e: raise ConnectTimeoutError( self, f"Connection to {self.host} timed out. (connect timeout={self.timeout})", ) from e except OSError as e: > raise NewConnectionError( self, f"Failed to establish a new connection: {e}" ) from e E urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:214: NewConnectionError The above exception was the direct cause of the following exception: self = request = , stream = False timeout = Timeout(connect=10, read=10, total=None), verify = True, cert = None proxies = OrderedDict() def send( self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None ): """Sends PreparedRequest object. Returns Response object. :param request: The :class:`PreparedRequest ` being sent. :param stream: (optional) Whether to stream the request content. :param timeout: (optional) How long to wait for the server to send data before giving up, as a float, or a :ref:`(connect timeout, read timeout) ` tuple. :type timeout: float or tuple or urllib3 Timeout object :param verify: (optional) Either a boolean, in which case it controls whether we verify the server's TLS certificate, or a string, in which case it must be a path to a CA bundle to use :param cert: (optional) Any user-provided SSL certificate to be trusted. :param proxies: (optional) The proxies dictionary to apply to the request. :rtype: requests.Response """ try: conn = self.get_connection_with_tls_context( request, verify, proxies=proxies, cert=cert ) except LocationValueError as e: raise InvalidURL(e, request=request) self.cert_verify(conn, request.url, verify, cert) url = self.request_url(request, proxies) self.add_headers( request, stream=stream, timeout=timeout, verify=verify, cert=cert, proxies=proxies, ) chunked = not (request.body is None or "Content-Length" in request.headers) if isinstance(timeout, tuple): try: connect, read = timeout timeout = TimeoutSauce(connect=connect, read=read) except ValueError: raise ValueError( f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " f"or a single float to set both timeouts to the same value." 
) elif isinstance(timeout, TimeoutSauce): pass else: timeout = TimeoutSauce(connect=timeout, read=timeout) try: > resp = conn.urlopen( method=request.method, url=url, body=request.body, headers=request.headers, redirect=False, assert_same_host=False, preload_content=False, decode_content=False, retries=self.max_retries, timeout=timeout, chunked=chunked, ) ../.tox/tests121/lib/python3.11/site-packages/requests/adapters.py:667: _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ ../.tox/tests121/lib/python3.11/site-packages/urllib3/connectionpool.py:843: in urlopen retries = retries.increment( _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ self = Retry(total=0, connect=None, read=False, redirect=None, status=None) method = 'GET' url = '/rests/data/transportpce-portmapping:network/nodes=XPDRA01/mapping=XPDR1-NETWORK2' response = None error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused') _pool = _stacktrace = def increment( self, method: str | None = None, url: str | None = None, response: BaseHTTPResponse | None = None, error: Exception | None = None, _pool: ConnectionPool | None = None, _stacktrace: TracebackType | None = None, ) -> Self: """Return a new Retry object with incremented retry counters. :param response: A response object, or None, if the server did not return a response. :type response: :class:`~urllib3.response.BaseHTTPResponse` :param Exception error: An error encountered during the request, or None if the response was received successfully. :return: A new ``Retry`` object. """ if self.total is False and error: # Disabled, indicate to re-raise the error. raise reraise(type(error), error, _stacktrace) total = self.total if total is not None: total -= 1 connect = self.connect read = self.read redirect = self.redirect status_count = self.status other = self.other cause = "unknown" status = None redirect_location = None if error and self._is_connection_error(error): # Connect retry? if connect is False: raise reraise(type(error), error, _stacktrace) elif connect is not None: connect -= 1 elif error and self._is_read_error(error): # Read retry? if read is False or method is None or not self._is_method_retryable(method): raise reraise(type(error), error, _stacktrace) elif read is not None: read -= 1 elif error: # Other retry? if other is not None: other -= 1 elif response and response.get_redirect_location(): # Redirect retry? 
if redirect is not None: redirect -= 1 cause = "too many redirects" response_redirect_location = response.get_redirect_location() if response_redirect_location: redirect_location = response_redirect_location status = response.status else: # Incrementing because of a server error like a 500 in # status_forcelist and the given method is in the allowed_methods cause = ResponseError.GENERIC_ERROR if response and response.status: if status_count is not None: status_count -= 1 cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status) status = response.status history = self.history + ( RequestHistory(method, url, error, status, redirect_location), ) new_retry = self.new( total=total, connect=connect, read=read, redirect=redirect, status=status_count, other=other, history=history, ) if new_retry.is_exhausted(): reason = error or ResponseError(cause) > raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type] E urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=8182): Max retries exceeded with url: /rests/data/transportpce-portmapping:network/nodes=XPDRA01/mapping=XPDR1-NETWORK2 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) ../.tox/tests121/lib/python3.11/site-packages/urllib3/util/retry.py:519: MaxRetryError During handling of the above exception, another exception occurred: self = def test_11_xpdr_portmapping_NETWORK2(self): > response = test_utils.get_portmapping_node_attr("XPDRA01", "mapping", "XPDR1-NETWORK2") transportpce_tests/1.2.1/test01_portmapping.py:133: _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ transportpce_tests/common/test_utils.py:473: in get_portmapping_node_attr response = get_request(target_url) transportpce_tests/common/test_utils.py:116: in get_request return requests.request( ../.tox/tests121/lib/python3.11/site-packages/requests/api.py:59: in request return session.request(method=method, url=url, **kwargs) ../.tox/tests121/lib/python3.11/site-packages/requests/sessions.py:589: in request resp = self.send(prep, **send_kwargs) ../.tox/tests121/lib/python3.11/site-packages/requests/sessions.py:703: in send r = adapter.send(request, **kwargs) _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ self = request = , stream = False timeout = Timeout(connect=10, read=10, total=None), verify = True, cert = None proxies = OrderedDict() def send( self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None ): """Sends PreparedRequest object. Returns Response object. :param request: The :class:`PreparedRequest ` being sent. :param stream: (optional) Whether to stream the request content. :param timeout: (optional) How long to wait for the server to send data before giving up, as a float, or a :ref:`(connect timeout, read timeout) ` tuple. :type timeout: float or tuple or urllib3 Timeout object :param verify: (optional) Either a boolean, in which case it controls whether we verify the server's TLS certificate, or a string, in which case it must be a path to a CA bundle to use :param cert: (optional) Any user-provided SSL certificate to be trusted. :param proxies: (optional) The proxies dictionary to apply to the request. 
:rtype: requests.Response """ try: conn = self.get_connection_with_tls_context( request, verify, proxies=proxies, cert=cert ) except LocationValueError as e: raise InvalidURL(e, request=request) self.cert_verify(conn, request.url, verify, cert) url = self.request_url(request, proxies) self.add_headers( request, stream=stream, timeout=timeout, verify=verify, cert=cert, proxies=proxies, ) chunked = not (request.body is None or "Content-Length" in request.headers) if isinstance(timeout, tuple): try: connect, read = timeout timeout = TimeoutSauce(connect=connect, read=read) except ValueError: raise ValueError( f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " f"or a single float to set both timeouts to the same value." ) elif isinstance(timeout, TimeoutSauce): pass else: timeout = TimeoutSauce(connect=timeout, read=timeout) try: resp = conn.urlopen( method=request.method, url=url, body=request.body, headers=request.headers, redirect=False, assert_same_host=False, preload_content=False, decode_content=False, retries=self.max_retries, timeout=timeout, chunked=chunked, ) except (ProtocolError, OSError) as err: raise ConnectionError(err, request=request) except MaxRetryError as e: if isinstance(e.reason, ConnectTimeoutError): # TODO: Remove this in 3.0.0: see #2811 if not isinstance(e.reason, NewConnectionError): raise ConnectTimeout(e, request=request) if isinstance(e.reason, ResponseError): raise RetryError(e, request=request) if isinstance(e.reason, _ProxyError): raise ProxyError(e, request=request) if isinstance(e.reason, _SSLError): # This branch is for urllib3 v1.22 and later. raise SSLError(e, request=request) > raise ConnectionError(e, request=request) E requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=8182): Max retries exceeded with url: /rests/data/transportpce-portmapping:network/nodes=XPDRA01/mapping=XPDR1-NETWORK2 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) ../.tox/tests121/lib/python3.11/site-packages/requests/adapters.py:700: ConnectionError ----------------------------- Captured stdout call ----------------------------- execution of test_11_xpdr_portmapping_NETWORK2 _______ TransportPCEPortMappingTesting.test_12_xpdr_portmapping_CLIENT1 ________ self = def _new_conn(self) -> socket.socket: """Establish a socket connection and set nodelay settings on it. :return: New socket connection. """ try: > sock = connection.create_connection( (self._dns_host, self.port), self.timeout, source_address=self.source_address, socket_options=self.socket_options, ) ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:199: _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ ../.tox/tests121/lib/python3.11/site-packages/urllib3/util/connection.py:85: in create_connection raise err _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ address = ('localhost', 8182), timeout = 10, source_address = None socket_options = [(6, 1, 1)] def create_connection( address: tuple[str, int], timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, source_address: tuple[str, int] | None = None, socket_options: _TYPE_SOCKET_OPTIONS | None = None, ) -> socket.socket: """Connect to *address* and return the socket object. Convenience function. Connect to *address* (a 2-tuple ``(host, port)``) and return the socket object. Passing the optional *timeout* parameter will set the timeout on the socket instance before attempting to connect. 
If no *timeout* is supplied, the global default timeout setting returned by :func:`socket.getdefaulttimeout` is used. If *source_address* is set it must be a tuple of (host, port) for the socket to bind as a source address before making the connection. An host of '' or port 0 tells the OS to use the default. """ host, port = address if host.startswith("["): host = host.strip("[]") err = None # Using the value from allowed_gai_family() in the context of getaddrinfo lets # us select whether to work with IPv4 DNS records, IPv6 records, or both. # The original create_connection function always returns all records. family = allowed_gai_family() try: host.encode("idna") except UnicodeError: raise LocationParseError(f"'{host}', label empty or too long") from None for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM): af, socktype, proto, canonname, sa = res sock = None try: sock = socket.socket(af, socktype, proto) # If provided, set socket level options before connecting. _set_socket_options(sock, socket_options) if timeout is not _DEFAULT_TIMEOUT: sock.settimeout(timeout) if source_address: sock.bind(source_address) > sock.connect(sa) E ConnectionRefusedError: [Errno 111] Connection refused ../.tox/tests121/lib/python3.11/site-packages/urllib3/util/connection.py:73: ConnectionRefusedError The above exception was the direct cause of the following exception: self = method = 'GET' url = '/rests/data/transportpce-portmapping:network/nodes=XPDRA01/mapping=XPDR1-CLIENT1' body = None headers = {'User-Agent': 'python-requests/2.32.3', 'Accept-Encoding': 'gzip, deflate', 'Accept': 'application/json', 'Connection': 'keep-alive', 'Content-Type': 'application/json', 'Authorization': 'Basic YWRtaW46YWRtaW4='} retries = Retry(total=0, connect=None, read=False, redirect=None, status=None) redirect = False, assert_same_host = False timeout = Timeout(connect=10, read=10, total=None), pool_timeout = None release_conn = False, chunked = False, body_pos = None, preload_content = False decode_content = False, response_kw = {} parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/rests/data/transportpce-portmapping:network/nodes=XPDRA01/mapping=XPDR1-CLIENT1', query=None, fragment=None) destination_scheme = None, conn = None, release_this_conn = True http_tunnel_required = False, err = None, clean_exit = False def urlopen( # type: ignore[override] self, method: str, url: str, body: _TYPE_BODY | None = None, headers: typing.Mapping[str, str] | None = None, retries: Retry | bool | int | None = None, redirect: bool = True, assert_same_host: bool = True, timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, pool_timeout: int | None = None, release_conn: bool | None = None, chunked: bool = False, body_pos: _TYPE_BODY_POSITION | None = None, preload_content: bool = True, decode_content: bool = True, **response_kw: typing.Any, ) -> BaseHTTPResponse: """ Get a connection from the pool and perform an HTTP request. This is the lowest level call for making a request, so you'll need to specify all the raw details. .. note:: More commonly, it's appropriate to use a convenience method such as :meth:`request`. .. note:: `release_conn` will only behave as expected if `preload_content=False` because we want to make `preload_content=False` the default behaviour someday soon without breaking backwards compatibility. :param method: HTTP request method (such as GET, POST, PUT, etc.) :param url: The URL to perform the request on. 
:param body: Data to send in the request body, either :class:`str`, :class:`bytes`, an iterable of :class:`str`/:class:`bytes`, or a file-like object. :param headers: Dictionary of custom headers to send, such as User-Agent, If-None-Match, etc. If None, pool headers are used. If provided, these headers completely replace any pool-specific headers. :param retries: Configure the number of retries to allow before raising a :class:`~urllib3.exceptions.MaxRetryError` exception. If ``None`` (default) will retry 3 times, see ``Retry.DEFAULT``. Pass a :class:`~urllib3.util.retry.Retry` object for fine-grained control over different types of retries. Pass an integer number to retry connection errors that many times, but no other types of errors. Pass zero to never retry. If ``False``, then retries are disabled and any exception is raised immediately. Also, instead of raising a MaxRetryError on redirects, the redirect response will be returned. :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. :param redirect: If True, automatically handle redirects (status codes 301, 302, 303, 307, 308). Each redirect counts as a retry. Disabling retries will disable redirect, too. :param assert_same_host: If ``True``, will make sure that the host of the pool requests is consistent else will raise HostChangedError. When ``False``, you can use the pool on an HTTP proxy and request foreign hosts. :param timeout: If specified, overrides the default timeout for this one request. It may be a float (in seconds) or an instance of :class:`urllib3.util.Timeout`. :param pool_timeout: If set and the pool is set to block=True, then this method will block for ``pool_timeout`` seconds and raise EmptyPoolError if no connection is available within the time period. :param bool preload_content: If True, the response's body will be preloaded into memory. :param bool decode_content: If True, will attempt to decode the body based on the 'content-encoding' header. :param release_conn: If False, then the urlopen call will not release the connection back into the pool once a response is received (but will release if you read the entire contents of the response such as when `preload_content=True`). This is useful if you're not preloading the response's content immediately. You will need to call ``r.release_conn()`` on the response ``r`` to return the connection back into the pool. If None, it takes the value of ``preload_content`` which defaults to ``True``. :param bool chunked: If True, urllib3 will send the body using chunked transfer encoding. Otherwise, urllib3 will send the body using the standard content-length form. Defaults to False. :param int body_pos: Position to seek to in file-like body in the event of a retry or redirect. Typically this won't need to be set because urllib3 will auto-populate the value when needed. """ parsed_url = parse_url(url) destination_scheme = parsed_url.scheme if headers is None: headers = self.headers if not isinstance(retries, Retry): retries = Retry.from_int(retries, redirect=redirect, default=self.retries) if release_conn is None: release_conn = preload_content # Check host if assert_same_host and not self.is_same_host(url): raise HostChangedError(self, url, retries) # Ensure that the URL we're connecting to is properly encoded if url.startswith("/"): url = to_str(_encode_target(url)) else: url = to_str(parsed_url.url) conn = None # Track whether `conn` needs to be released before # returning/raising/recursing. 
Update this variable if necessary, and # leave `release_conn` constant throughout the function. That way, if # the function recurses, the original value of `release_conn` will be # passed down into the recursive call, and its value will be respected. # # See issue #651 [1] for details. # # [1] release_this_conn = release_conn http_tunnel_required = connection_requires_http_tunnel( self.proxy, self.proxy_config, destination_scheme ) # Merge the proxy headers. Only done when not using HTTP CONNECT. We # have to copy the headers dict so we can safely change it without those # changes being reflected in anyone else's copy. if not http_tunnel_required: headers = headers.copy() # type: ignore[attr-defined] headers.update(self.proxy_headers) # type: ignore[union-attr] # Must keep the exception bound to a separate variable or else Python 3 # complains about UnboundLocalError. err = None # Keep track of whether we cleanly exited the except block. This # ensures we do proper cleanup in finally. clean_exit = False # Rewind body position, if needed. Record current position # for future rewinds in the event of a redirect/retry. body_pos = set_file_position(body, body_pos) try: # Request a connection from the queue. timeout_obj = self._get_timeout(timeout) conn = self._get_conn(timeout=pool_timeout) conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] # Is this a closed/new connection that requires CONNECT tunnelling? if self.proxy is not None and http_tunnel_required and conn.is_closed: try: self._prepare_proxy(conn) except (BaseSSLError, OSError, SocketTimeout) as e: self._raise_timeout( err=e, url=self.proxy.url, timeout_value=conn.timeout ) raise # If we're going to release the connection in ``finally:``, then # the response doesn't need to know about the connection. Otherwise # it will also try to release it and we'll have a double-release # mess. response_conn = conn if not release_conn else None # Make the request on the HTTPConnection object > response = self._make_request( conn, method, url, timeout=timeout_obj, body=body, headers=headers, chunked=chunked, retries=retries, response_conn=response_conn, preload_content=preload_content, decode_content=decode_content, **response_kw, ) ../.tox/tests121/lib/python3.11/site-packages/urllib3/connectionpool.py:789: _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ ../.tox/tests121/lib/python3.11/site-packages/urllib3/connectionpool.py:495: in _make_request conn.request( ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:441: in request self.endheaders() /opt/pyenv/versions/3.11.7/lib/python3.11/http/client.py:1289: in endheaders self._send_output(message_body, encode_chunked=encode_chunked) /opt/pyenv/versions/3.11.7/lib/python3.11/http/client.py:1048: in _send_output self.send(msg) /opt/pyenv/versions/3.11.7/lib/python3.11/http/client.py:986: in send self.connect() ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:279: in connect self.sock = self._new_conn() _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ self = def _new_conn(self) -> socket.socket: """Establish a socket connection and set nodelay settings on it. :return: New socket connection. 
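The pool-level call that fails in these frames can also be exercised directly. A hedged sketch of the same flow (it assumes the TransportPCE RESTCONF listener is actually up on localhost:8182, which is evidently not the case in this run):

import urllib3

pool = urllib3.HTTPConnectionPool("localhost", 8182,
                                  timeout=urllib3.Timeout(connect=10, read=10))
# retries=0 mirrors the Retry(total=0, ...) seen above: a refused TCP connection
# surfaces immediately as MaxRetryError instead of being retried.
try:
    resp = pool.urlopen("GET",
                        "/rests/data/transportpce-portmapping:network/nodes=XPDRA01",
                        retries=0, preload_content=False)
    print(resp.status)
except urllib3.exceptions.MaxRetryError as exc:
    print("controller not reachable:", exc.reason)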
""" try: sock = connection.create_connection( (self._dns_host, self.port), self.timeout, source_address=self.source_address, socket_options=self.socket_options, ) except socket.gaierror as e: raise NameResolutionError(self.host, self, e) from e except SocketTimeout as e: raise ConnectTimeoutError( self, f"Connection to {self.host} timed out. (connect timeout={self.timeout})", ) from e except OSError as e: > raise NewConnectionError( self, f"Failed to establish a new connection: {e}" ) from e E urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:214: NewConnectionError The above exception was the direct cause of the following exception: self = request = , stream = False timeout = Timeout(connect=10, read=10, total=None), verify = True, cert = None proxies = OrderedDict() def send( self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None ): """Sends PreparedRequest object. Returns Response object. :param request: The :class:`PreparedRequest ` being sent. :param stream: (optional) Whether to stream the request content. :param timeout: (optional) How long to wait for the server to send data before giving up, as a float, or a :ref:`(connect timeout, read timeout) ` tuple. :type timeout: float or tuple or urllib3 Timeout object :param verify: (optional) Either a boolean, in which case it controls whether we verify the server's TLS certificate, or a string, in which case it must be a path to a CA bundle to use :param cert: (optional) Any user-provided SSL certificate to be trusted. :param proxies: (optional) The proxies dictionary to apply to the request. :rtype: requests.Response """ try: conn = self.get_connection_with_tls_context( request, verify, proxies=proxies, cert=cert ) except LocationValueError as e: raise InvalidURL(e, request=request) self.cert_verify(conn, request.url, verify, cert) url = self.request_url(request, proxies) self.add_headers( request, stream=stream, timeout=timeout, verify=verify, cert=cert, proxies=proxies, ) chunked = not (request.body is None or "Content-Length" in request.headers) if isinstance(timeout, tuple): try: connect, read = timeout timeout = TimeoutSauce(connect=connect, read=read) except ValueError: raise ValueError( f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " f"or a single float to set both timeouts to the same value." 
) elif isinstance(timeout, TimeoutSauce): pass else: timeout = TimeoutSauce(connect=timeout, read=timeout) try: > resp = conn.urlopen( method=request.method, url=url, body=request.body, headers=request.headers, redirect=False, assert_same_host=False, preload_content=False, decode_content=False, retries=self.max_retries, timeout=timeout, chunked=chunked, ) ../.tox/tests121/lib/python3.11/site-packages/requests/adapters.py:667: _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ ../.tox/tests121/lib/python3.11/site-packages/urllib3/connectionpool.py:843: in urlopen retries = retries.increment( _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ self = Retry(total=0, connect=None, read=False, redirect=None, status=None) method = 'GET' url = '/rests/data/transportpce-portmapping:network/nodes=XPDRA01/mapping=XPDR1-CLIENT1' response = None error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused') _pool = _stacktrace = def increment( self, method: str | None = None, url: str | None = None, response: BaseHTTPResponse | None = None, error: Exception | None = None, _pool: ConnectionPool | None = None, _stacktrace: TracebackType | None = None, ) -> Self: """Return a new Retry object with incremented retry counters. :param response: A response object, or None, if the server did not return a response. :type response: :class:`~urllib3.response.BaseHTTPResponse` :param Exception error: An error encountered during the request, or None if the response was received successfully. :return: A new ``Retry`` object. """ if self.total is False and error: # Disabled, indicate to re-raise the error. raise reraise(type(error), error, _stacktrace) total = self.total if total is not None: total -= 1 connect = self.connect read = self.read redirect = self.redirect status_count = self.status other = self.other cause = "unknown" status = None redirect_location = None if error and self._is_connection_error(error): # Connect retry? if connect is False: raise reraise(type(error), error, _stacktrace) elif connect is not None: connect -= 1 elif error and self._is_read_error(error): # Read retry? if read is False or method is None or not self._is_method_retryable(method): raise reraise(type(error), error, _stacktrace) elif read is not None: read -= 1 elif error: # Other retry? if other is not None: other -= 1 elif response and response.get_redirect_location(): # Redirect retry? 
if redirect is not None: redirect -= 1 cause = "too many redirects" response_redirect_location = response.get_redirect_location() if response_redirect_location: redirect_location = response_redirect_location status = response.status else: # Incrementing because of a server error like a 500 in # status_forcelist and the given method is in the allowed_methods cause = ResponseError.GENERIC_ERROR if response and response.status: if status_count is not None: status_count -= 1 cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status) status = response.status history = self.history + ( RequestHistory(method, url, error, status, redirect_location), ) new_retry = self.new( total=total, connect=connect, read=read, redirect=redirect, status=status_count, other=other, history=history, ) if new_retry.is_exhausted(): reason = error or ResponseError(cause) > raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type] E urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=8182): Max retries exceeded with url: /rests/data/transportpce-portmapping:network/nodes=XPDRA01/mapping=XPDR1-CLIENT1 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) ../.tox/tests121/lib/python3.11/site-packages/urllib3/util/retry.py:519: MaxRetryError During handling of the above exception, another exception occurred: self = def test_12_xpdr_portmapping_CLIENT1(self): > response = test_utils.get_portmapping_node_attr("XPDRA01", "mapping", "XPDR1-CLIENT1") transportpce_tests/1.2.1/test01_portmapping.py:144: _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ transportpce_tests/common/test_utils.py:473: in get_portmapping_node_attr response = get_request(target_url) transportpce_tests/common/test_utils.py:116: in get_request return requests.request( ../.tox/tests121/lib/python3.11/site-packages/requests/api.py:59: in request return session.request(method=method, url=url, **kwargs) ../.tox/tests121/lib/python3.11/site-packages/requests/sessions.py:589: in request resp = self.send(prep, **send_kwargs) ../.tox/tests121/lib/python3.11/site-packages/requests/sessions.py:703: in send r = adapter.send(request, **kwargs) _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ self = request = , stream = False timeout = Timeout(connect=10, read=10, total=None), verify = True, cert = None proxies = OrderedDict() def send( self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None ): """Sends PreparedRequest object. Returns Response object. :param request: The :class:`PreparedRequest ` being sent. :param stream: (optional) Whether to stream the request content. :param timeout: (optional) How long to wait for the server to send data before giving up, as a float, or a :ref:`(connect timeout, read timeout) ` tuple. :type timeout: float or tuple or urllib3 Timeout object :param verify: (optional) Either a boolean, in which case it controls whether we verify the server's TLS certificate, or a string, in which case it must be a path to a CA bundle to use :param cert: (optional) Any user-provided SSL certificate to be trusted. :param proxies: (optional) The proxies dictionary to apply to the request. 
:rtype: requests.Response """ try: conn = self.get_connection_with_tls_context( request, verify, proxies=proxies, cert=cert ) except LocationValueError as e: raise InvalidURL(e, request=request) self.cert_verify(conn, request.url, verify, cert) url = self.request_url(request, proxies) self.add_headers( request, stream=stream, timeout=timeout, verify=verify, cert=cert, proxies=proxies, ) chunked = not (request.body is None or "Content-Length" in request.headers) if isinstance(timeout, tuple): try: connect, read = timeout timeout = TimeoutSauce(connect=connect, read=read) except ValueError: raise ValueError( f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " f"or a single float to set both timeouts to the same value." ) elif isinstance(timeout, TimeoutSauce): pass else: timeout = TimeoutSauce(connect=timeout, read=timeout) try: resp = conn.urlopen( method=request.method, url=url, body=request.body, headers=request.headers, redirect=False, assert_same_host=False, preload_content=False, decode_content=False, retries=self.max_retries, timeout=timeout, chunked=chunked, ) except (ProtocolError, OSError) as err: raise ConnectionError(err, request=request) except MaxRetryError as e: if isinstance(e.reason, ConnectTimeoutError): # TODO: Remove this in 3.0.0: see #2811 if not isinstance(e.reason, NewConnectionError): raise ConnectTimeout(e, request=request) if isinstance(e.reason, ResponseError): raise RetryError(e, request=request) if isinstance(e.reason, _ProxyError): raise ProxyError(e, request=request) if isinstance(e.reason, _SSLError): # This branch is for urllib3 v1.22 and later. raise SSLError(e, request=request) > raise ConnectionError(e, request=request) E requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=8182): Max retries exceeded with url: /rests/data/transportpce-portmapping:network/nodes=XPDRA01/mapping=XPDR1-CLIENT1 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) ../.tox/tests121/lib/python3.11/site-packages/requests/adapters.py:700: ConnectionError ----------------------------- Captured stdout call ----------------------------- execution of test_12_xpdr_portmapping_CLIENT1 _______ TransportPCEPortMappingTesting.test_13_xpdr_portmapping_CLIENT2 ________ self = def _new_conn(self) -> socket.socket: """Establish a socket connection and set nodelay settings on it. :return: New socket connection. """ try: > sock = connection.create_connection( (self._dns_host, self.port), self.timeout, source_address=self.source_address, socket_options=self.socket_options, ) ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:199: _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ ../.tox/tests121/lib/python3.11/site-packages/urllib3/util/connection.py:85: in create_connection raise err _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ address = ('localhost', 8182), timeout = 10, source_address = None socket_options = [(6, 1, 1)] def create_connection( address: tuple[str, int], timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, source_address: tuple[str, int] | None = None, socket_options: _TYPE_SOCKET_OPTIONS | None = None, ) -> socket.socket: """Connect to *address* and return the socket object. Convenience function. Connect to *address* (a 2-tuple ``(host, port)``) and return the socket object. Passing the optional *timeout* parameter will set the timeout on the socket instance before attempting to connect. 
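test_12 ultimately reduces to one RESTCONF GET against the controller. A stand-alone reproduction of that request, assuming plain HTTP on port 8182 and the admin/admin credentials implied by the captured Authorization header (Basic YWRtaW46YWRtaW4=):

import requests

URL = ("http://localhost:8182/rests/data/transportpce-portmapping:network"
       "/nodes=XPDRA01/mapping=XPDR1-CLIENT1")

try:
    response = requests.get(URL,
                            auth=("admin", "admin"),
                            headers={"Content-Type": "application/json",
                                     "Accept": "application/json"},
                            timeout=(10, 10))   # (connect, read), as in the trace
    print(response.status_code, response.json())
except requests.exceptions.ConnectionError:
    # This is exactly what the test hits: nothing is listening on port 8182.
    print("RESTCONF endpoint refused the connection")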
If no *timeout* is supplied, the global default timeout setting returned by :func:`socket.getdefaulttimeout` is used. If *source_address* is set it must be a tuple of (host, port) for the socket to bind as a source address before making the connection. An host of '' or port 0 tells the OS to use the default. """ host, port = address if host.startswith("["): host = host.strip("[]") err = None # Using the value from allowed_gai_family() in the context of getaddrinfo lets # us select whether to work with IPv4 DNS records, IPv6 records, or both. # The original create_connection function always returns all records. family = allowed_gai_family() try: host.encode("idna") except UnicodeError: raise LocationParseError(f"'{host}', label empty or too long") from None for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM): af, socktype, proto, canonname, sa = res sock = None try: sock = socket.socket(af, socktype, proto) # If provided, set socket level options before connecting. _set_socket_options(sock, socket_options) if timeout is not _DEFAULT_TIMEOUT: sock.settimeout(timeout) if source_address: sock.bind(source_address) > sock.connect(sa) E ConnectionRefusedError: [Errno 111] Connection refused ../.tox/tests121/lib/python3.11/site-packages/urllib3/util/connection.py:73: ConnectionRefusedError The above exception was the direct cause of the following exception: self = method = 'GET' url = '/rests/data/transportpce-portmapping:network/nodes=XPDRA01/mapping=XPDR1-CLIENT2' body = None headers = {'User-Agent': 'python-requests/2.32.3', 'Accept-Encoding': 'gzip, deflate', 'Accept': 'application/json', 'Connection': 'keep-alive', 'Content-Type': 'application/json', 'Authorization': 'Basic YWRtaW46YWRtaW4='} retries = Retry(total=0, connect=None, read=False, redirect=None, status=None) redirect = False, assert_same_host = False timeout = Timeout(connect=10, read=10, total=None), pool_timeout = None release_conn = False, chunked = False, body_pos = None, preload_content = False decode_content = False, response_kw = {} parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/rests/data/transportpce-portmapping:network/nodes=XPDRA01/mapping=XPDR1-CLIENT2', query=None, fragment=None) destination_scheme = None, conn = None, release_this_conn = True http_tunnel_required = False, err = None, clean_exit = False def urlopen( # type: ignore[override] self, method: str, url: str, body: _TYPE_BODY | None = None, headers: typing.Mapping[str, str] | None = None, retries: Retry | bool | int | None = None, redirect: bool = True, assert_same_host: bool = True, timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, pool_timeout: int | None = None, release_conn: bool | None = None, chunked: bool = False, body_pos: _TYPE_BODY_POSITION | None = None, preload_content: bool = True, decode_content: bool = True, **response_kw: typing.Any, ) -> BaseHTTPResponse: """ Get a connection from the pool and perform an HTTP request. This is the lowest level call for making a request, so you'll need to specify all the raw details. .. note:: More commonly, it's appropriate to use a convenience method such as :meth:`request`. .. note:: `release_conn` will only behave as expected if `preload_content=False` because we want to make `preload_content=False` the default behaviour someday soon without breaking backwards compatibility. :param method: HTTP request method (such as GET, POST, PUT, etc.) :param url: The URL to perform the request on. 
:param body: Data to send in the request body, either :class:`str`, :class:`bytes`, an iterable of :class:`str`/:class:`bytes`, or a file-like object. :param headers: Dictionary of custom headers to send, such as User-Agent, If-None-Match, etc. If None, pool headers are used. If provided, these headers completely replace any pool-specific headers. :param retries: Configure the number of retries to allow before raising a :class:`~urllib3.exceptions.MaxRetryError` exception. If ``None`` (default) will retry 3 times, see ``Retry.DEFAULT``. Pass a :class:`~urllib3.util.retry.Retry` object for fine-grained control over different types of retries. Pass an integer number to retry connection errors that many times, but no other types of errors. Pass zero to never retry. If ``False``, then retries are disabled and any exception is raised immediately. Also, instead of raising a MaxRetryError on redirects, the redirect response will be returned. :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. :param redirect: If True, automatically handle redirects (status codes 301, 302, 303, 307, 308). Each redirect counts as a retry. Disabling retries will disable redirect, too. :param assert_same_host: If ``True``, will make sure that the host of the pool requests is consistent else will raise HostChangedError. When ``False``, you can use the pool on an HTTP proxy and request foreign hosts. :param timeout: If specified, overrides the default timeout for this one request. It may be a float (in seconds) or an instance of :class:`urllib3.util.Timeout`. :param pool_timeout: If set and the pool is set to block=True, then this method will block for ``pool_timeout`` seconds and raise EmptyPoolError if no connection is available within the time period. :param bool preload_content: If True, the response's body will be preloaded into memory. :param bool decode_content: If True, will attempt to decode the body based on the 'content-encoding' header. :param release_conn: If False, then the urlopen call will not release the connection back into the pool once a response is received (but will release if you read the entire contents of the response such as when `preload_content=True`). This is useful if you're not preloading the response's content immediately. You will need to call ``r.release_conn()`` on the response ``r`` to return the connection back into the pool. If None, it takes the value of ``preload_content`` which defaults to ``True``. :param bool chunked: If True, urllib3 will send the body using chunked transfer encoding. Otherwise, urllib3 will send the body using the standard content-length form. Defaults to False. :param int body_pos: Position to seek to in file-like body in the event of a retry or redirect. Typically this won't need to be set because urllib3 will auto-populate the value when needed. """ parsed_url = parse_url(url) destination_scheme = parsed_url.scheme if headers is None: headers = self.headers if not isinstance(retries, Retry): retries = Retry.from_int(retries, redirect=redirect, default=self.retries) if release_conn is None: release_conn = preload_content # Check host if assert_same_host and not self.is_same_host(url): raise HostChangedError(self, url, retries) # Ensure that the URL we're connecting to is properly encoded if url.startswith("/"): url = to_str(_encode_target(url)) else: url = to_str(parsed_url.url) conn = None # Track whether `conn` needs to be released before # returning/raising/recursing. 
Update this variable if necessary, and # leave `release_conn` constant throughout the function. That way, if # the function recurses, the original value of `release_conn` will be # passed down into the recursive call, and its value will be respected. # # See issue #651 [1] for details. # # [1] release_this_conn = release_conn http_tunnel_required = connection_requires_http_tunnel( self.proxy, self.proxy_config, destination_scheme ) # Merge the proxy headers. Only done when not using HTTP CONNECT. We # have to copy the headers dict so we can safely change it without those # changes being reflected in anyone else's copy. if not http_tunnel_required: headers = headers.copy() # type: ignore[attr-defined] headers.update(self.proxy_headers) # type: ignore[union-attr] # Must keep the exception bound to a separate variable or else Python 3 # complains about UnboundLocalError. err = None # Keep track of whether we cleanly exited the except block. This # ensures we do proper cleanup in finally. clean_exit = False # Rewind body position, if needed. Record current position # for future rewinds in the event of a redirect/retry. body_pos = set_file_position(body, body_pos) try: # Request a connection from the queue. timeout_obj = self._get_timeout(timeout) conn = self._get_conn(timeout=pool_timeout) conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] # Is this a closed/new connection that requires CONNECT tunnelling? if self.proxy is not None and http_tunnel_required and conn.is_closed: try: self._prepare_proxy(conn) except (BaseSSLError, OSError, SocketTimeout) as e: self._raise_timeout( err=e, url=self.proxy.url, timeout_value=conn.timeout ) raise # If we're going to release the connection in ``finally:``, then # the response doesn't need to know about the connection. Otherwise # it will also try to release it and we'll have a double-release # mess. response_conn = conn if not release_conn else None # Make the request on the HTTPConnection object > response = self._make_request( conn, method, url, timeout=timeout_obj, body=body, headers=headers, chunked=chunked, retries=retries, response_conn=response_conn, preload_content=preload_content, decode_content=decode_content, **response_kw, ) ../.tox/tests121/lib/python3.11/site-packages/urllib3/connectionpool.py:789: _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ ../.tox/tests121/lib/python3.11/site-packages/urllib3/connectionpool.py:495: in _make_request conn.request( ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:441: in request self.endheaders() /opt/pyenv/versions/3.11.7/lib/python3.11/http/client.py:1289: in endheaders self._send_output(message_body, encode_chunked=encode_chunked) /opt/pyenv/versions/3.11.7/lib/python3.11/http/client.py:1048: in _send_output self.send(msg) /opt/pyenv/versions/3.11.7/lib/python3.11/http/client.py:986: in send self.connect() ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:279: in connect self.sock = self._new_conn() _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ self = def _new_conn(self) -> socket.socket: """Establish a socket connection and set nodelay settings on it. :return: New socket connection. 
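The Timeout(connect=10, read=10, total=None) objects threaded through these frames are urllib3 Timeout instances (requests refers to the same class as TimeoutSauce in its adapter). A small sketch of how a (connect, read) tuple maps onto one:

from urllib3.util import Timeout

# timeout=(10, 10) on the requests side becomes Timeout(connect=10, read=10);
# connect_timeout is the value _new_conn() ultimately applies to the socket.
t = Timeout(connect=10, read=10)
print(t.connect_timeout, t.read_timeout)   # 10 10
print(t.total)                             # None: no overall cap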
""" try: sock = connection.create_connection( (self._dns_host, self.port), self.timeout, source_address=self.source_address, socket_options=self.socket_options, ) except socket.gaierror as e: raise NameResolutionError(self.host, self, e) from e except SocketTimeout as e: raise ConnectTimeoutError( self, f"Connection to {self.host} timed out. (connect timeout={self.timeout})", ) from e except OSError as e: > raise NewConnectionError( self, f"Failed to establish a new connection: {e}" ) from e E urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:214: NewConnectionError The above exception was the direct cause of the following exception: self = request = , stream = False timeout = Timeout(connect=10, read=10, total=None), verify = True, cert = None proxies = OrderedDict() def send( self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None ): """Sends PreparedRequest object. Returns Response object. :param request: The :class:`PreparedRequest ` being sent. :param stream: (optional) Whether to stream the request content. :param timeout: (optional) How long to wait for the server to send data before giving up, as a float, or a :ref:`(connect timeout, read timeout) ` tuple. :type timeout: float or tuple or urllib3 Timeout object :param verify: (optional) Either a boolean, in which case it controls whether we verify the server's TLS certificate, or a string, in which case it must be a path to a CA bundle to use :param cert: (optional) Any user-provided SSL certificate to be trusted. :param proxies: (optional) The proxies dictionary to apply to the request. :rtype: requests.Response """ try: conn = self.get_connection_with_tls_context( request, verify, proxies=proxies, cert=cert ) except LocationValueError as e: raise InvalidURL(e, request=request) self.cert_verify(conn, request.url, verify, cert) url = self.request_url(request, proxies) self.add_headers( request, stream=stream, timeout=timeout, verify=verify, cert=cert, proxies=proxies, ) chunked = not (request.body is None or "Content-Length" in request.headers) if isinstance(timeout, tuple): try: connect, read = timeout timeout = TimeoutSauce(connect=connect, read=read) except ValueError: raise ValueError( f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " f"or a single float to set both timeouts to the same value." 
) elif isinstance(timeout, TimeoutSauce): pass else: timeout = TimeoutSauce(connect=timeout, read=timeout) try: > resp = conn.urlopen( method=request.method, url=url, body=request.body, headers=request.headers, redirect=False, assert_same_host=False, preload_content=False, decode_content=False, retries=self.max_retries, timeout=timeout, chunked=chunked, ) ../.tox/tests121/lib/python3.11/site-packages/requests/adapters.py:667: _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ ../.tox/tests121/lib/python3.11/site-packages/urllib3/connectionpool.py:843: in urlopen retries = retries.increment( _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ self = Retry(total=0, connect=None, read=False, redirect=None, status=None) method = 'GET' url = '/rests/data/transportpce-portmapping:network/nodes=XPDRA01/mapping=XPDR1-CLIENT2' response = None error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused') _pool = _stacktrace = def increment( self, method: str | None = None, url: str | None = None, response: BaseHTTPResponse | None = None, error: Exception | None = None, _pool: ConnectionPool | None = None, _stacktrace: TracebackType | None = None, ) -> Self: """Return a new Retry object with incremented retry counters. :param response: A response object, or None, if the server did not return a response. :type response: :class:`~urllib3.response.BaseHTTPResponse` :param Exception error: An error encountered during the request, or None if the response was received successfully. :return: A new ``Retry`` object. """ if self.total is False and error: # Disabled, indicate to re-raise the error. raise reraise(type(error), error, _stacktrace) total = self.total if total is not None: total -= 1 connect = self.connect read = self.read redirect = self.redirect status_count = self.status other = self.other cause = "unknown" status = None redirect_location = None if error and self._is_connection_error(error): # Connect retry? if connect is False: raise reraise(type(error), error, _stacktrace) elif connect is not None: connect -= 1 elif error and self._is_read_error(error): # Read retry? if read is False or method is None or not self._is_method_retryable(method): raise reraise(type(error), error, _stacktrace) elif read is not None: read -= 1 elif error: # Other retry? if other is not None: other -= 1 elif response and response.get_redirect_location(): # Redirect retry? 
if redirect is not None: redirect -= 1 cause = "too many redirects" response_redirect_location = response.get_redirect_location() if response_redirect_location: redirect_location = response_redirect_location status = response.status else: # Incrementing because of a server error like a 500 in # status_forcelist and the given method is in the allowed_methods cause = ResponseError.GENERIC_ERROR if response and response.status: if status_count is not None: status_count -= 1 cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status) status = response.status history = self.history + ( RequestHistory(method, url, error, status, redirect_location), ) new_retry = self.new( total=total, connect=connect, read=read, redirect=redirect, status=status_count, other=other, history=history, ) if new_retry.is_exhausted(): reason = error or ResponseError(cause) > raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type] E urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=8182): Max retries exceeded with url: /rests/data/transportpce-portmapping:network/nodes=XPDRA01/mapping=XPDR1-CLIENT2 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) ../.tox/tests121/lib/python3.11/site-packages/urllib3/util/retry.py:519: MaxRetryError During handling of the above exception, another exception occurred: self = def test_13_xpdr_portmapping_CLIENT2(self): > response = test_utils.get_portmapping_node_attr("XPDRA01", "mapping", "XPDR1-CLIENT2") transportpce_tests/1.2.1/test01_portmapping.py:156: _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ transportpce_tests/common/test_utils.py:473: in get_portmapping_node_attr response = get_request(target_url) transportpce_tests/common/test_utils.py:116: in get_request return requests.request( ../.tox/tests121/lib/python3.11/site-packages/requests/api.py:59: in request return session.request(method=method, url=url, **kwargs) ../.tox/tests121/lib/python3.11/site-packages/requests/sessions.py:589: in request resp = self.send(prep, **send_kwargs) ../.tox/tests121/lib/python3.11/site-packages/requests/sessions.py:703: in send r = adapter.send(request, **kwargs) _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ self = request = , stream = False timeout = Timeout(connect=10, read=10, total=None), verify = True, cert = None proxies = OrderedDict() def send( self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None ): """Sends PreparedRequest object. Returns Response object. :param request: The :class:`PreparedRequest ` being sent. :param stream: (optional) Whether to stream the request content. :param timeout: (optional) How long to wait for the server to send data before giving up, as a float, or a :ref:`(connect timeout, read timeout) ` tuple. :type timeout: float or tuple or urllib3 Timeout object :param verify: (optional) Either a boolean, in which case it controls whether we verify the server's TLS certificate, or a string, in which case it must be a path to a CA bundle to use :param cert: (optional) Any user-provided SSL certificate to be trusted. :param proxies: (optional) The proxies dictionary to apply to the request. 
:rtype: requests.Response """ try: conn = self.get_connection_with_tls_context( request, verify, proxies=proxies, cert=cert ) except LocationValueError as e: raise InvalidURL(e, request=request) self.cert_verify(conn, request.url, verify, cert) url = self.request_url(request, proxies) self.add_headers( request, stream=stream, timeout=timeout, verify=verify, cert=cert, proxies=proxies, ) chunked = not (request.body is None or "Content-Length" in request.headers) if isinstance(timeout, tuple): try: connect, read = timeout timeout = TimeoutSauce(connect=connect, read=read) except ValueError: raise ValueError( f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " f"or a single float to set both timeouts to the same value." ) elif isinstance(timeout, TimeoutSauce): pass else: timeout = TimeoutSauce(connect=timeout, read=timeout) try: resp = conn.urlopen( method=request.method, url=url, body=request.body, headers=request.headers, redirect=False, assert_same_host=False, preload_content=False, decode_content=False, retries=self.max_retries, timeout=timeout, chunked=chunked, ) except (ProtocolError, OSError) as err: raise ConnectionError(err, request=request) except MaxRetryError as e: if isinstance(e.reason, ConnectTimeoutError): # TODO: Remove this in 3.0.0: see #2811 if not isinstance(e.reason, NewConnectionError): raise ConnectTimeout(e, request=request) if isinstance(e.reason, ResponseError): raise RetryError(e, request=request) if isinstance(e.reason, _ProxyError): raise ProxyError(e, request=request) if isinstance(e.reason, _SSLError): # This branch is for urllib3 v1.22 and later. raise SSLError(e, request=request) > raise ConnectionError(e, request=request) E requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=8182): Max retries exceeded with url: /rests/data/transportpce-portmapping:network/nodes=XPDRA01/mapping=XPDR1-CLIENT2 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) ../.tox/tests121/lib/python3.11/site-packages/requests/adapters.py:700: ConnectionError ----------------------------- Captured stdout call ----------------------------- execution of test_13_xpdr_portmapping_CLIENT2 _______ TransportPCEPortMappingTesting.test_14_xpdr_portmapping_CLIENT3 ________ self = def _new_conn(self) -> socket.socket: """Establish a socket connection and set nodelay settings on it. :return: New socket connection. """ try: > sock = connection.create_connection( (self._dns_host, self.port), self.timeout, source_address=self.source_address, socket_options=self.socket_options, ) ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:199: _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ ../.tox/tests121/lib/python3.11/site-packages/urllib3/util/connection.py:85: in create_connection raise err _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ address = ('localhost', 8182), timeout = 10, source_address = None socket_options = [(6, 1, 1)] def create_connection( address: tuple[str, int], timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, source_address: tuple[str, int] | None = None, socket_options: _TYPE_SOCKET_OPTIONS | None = None, ) -> socket.socket: """Connect to *address* and return the socket object. Convenience function. Connect to *address* (a 2-tuple ``(host, port)``) and return the socket object. Passing the optional *timeout* parameter will set the timeout on the socket instance before attempting to connect. 
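Because the adapter's default Retry(total=0, ...) is exhausted by a single refused connection, one illustrative workaround for a controller that is still starting up is to mount an adapter with connect retries and back-off. This is purely a sketch, not something the test suite does:

import requests
from requests.adapters import HTTPAdapter
from urllib3.util.retry import Retry

session = requests.Session()
# Retry refused connections a few times with back-off instead of failing on the
# first ConnectionRefusedError; connect retries are safe for idempotent GETs.
session.mount("http://", HTTPAdapter(max_retries=Retry(total=5, connect=5,
                                                       backoff_factor=0.5)))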
If no *timeout* is supplied, the global default timeout setting returned by :func:`socket.getdefaulttimeout` is used. If *source_address* is set it must be a tuple of (host, port) for the socket to bind as a source address before making the connection. An host of '' or port 0 tells the OS to use the default. """ host, port = address if host.startswith("["): host = host.strip("[]") err = None # Using the value from allowed_gai_family() in the context of getaddrinfo lets # us select whether to work with IPv4 DNS records, IPv6 records, or both. # The original create_connection function always returns all records. family = allowed_gai_family() try: host.encode("idna") except UnicodeError: raise LocationParseError(f"'{host}', label empty or too long") from None for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM): af, socktype, proto, canonname, sa = res sock = None try: sock = socket.socket(af, socktype, proto) # If provided, set socket level options before connecting. _set_socket_options(sock, socket_options) if timeout is not _DEFAULT_TIMEOUT: sock.settimeout(timeout) if source_address: sock.bind(source_address) > sock.connect(sa) E ConnectionRefusedError: [Errno 111] Connection refused ../.tox/tests121/lib/python3.11/site-packages/urllib3/util/connection.py:73: ConnectionRefusedError The above exception was the direct cause of the following exception: self = method = 'GET' url = '/rests/data/transportpce-portmapping:network/nodes=XPDRA01/mapping=XPDR1-CLIENT3' body = None headers = {'User-Agent': 'python-requests/2.32.3', 'Accept-Encoding': 'gzip, deflate', 'Accept': 'application/json', 'Connection': 'keep-alive', 'Content-Type': 'application/json', 'Authorization': 'Basic YWRtaW46YWRtaW4='} retries = Retry(total=0, connect=None, read=False, redirect=None, status=None) redirect = False, assert_same_host = False timeout = Timeout(connect=10, read=10, total=None), pool_timeout = None release_conn = False, chunked = False, body_pos = None, preload_content = False decode_content = False, response_kw = {} parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/rests/data/transportpce-portmapping:network/nodes=XPDRA01/mapping=XPDR1-CLIENT3', query=None, fragment=None) destination_scheme = None, conn = None, release_this_conn = True http_tunnel_required = False, err = None, clean_exit = False def urlopen( # type: ignore[override] self, method: str, url: str, body: _TYPE_BODY | None = None, headers: typing.Mapping[str, str] | None = None, retries: Retry | bool | int | None = None, redirect: bool = True, assert_same_host: bool = True, timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, pool_timeout: int | None = None, release_conn: bool | None = None, chunked: bool = False, body_pos: _TYPE_BODY_POSITION | None = None, preload_content: bool = True, decode_content: bool = True, **response_kw: typing.Any, ) -> BaseHTTPResponse: """ Get a connection from the pool and perform an HTTP request. This is the lowest level call for making a request, so you'll need to specify all the raw details. .. note:: More commonly, it's appropriate to use a convenience method such as :meth:`request`. .. note:: `release_conn` will only behave as expected if `preload_content=False` because we want to make `preload_content=False` the default behaviour someday soon without breaking backwards compatibility. :param method: HTTP request method (such as GET, POST, PUT, etc.) :param url: The URL to perform the request on. 
:param body: Data to send in the request body, either :class:`str`, :class:`bytes`, an iterable of :class:`str`/:class:`bytes`, or a file-like object. :param headers: Dictionary of custom headers to send, such as User-Agent, If-None-Match, etc. If None, pool headers are used. If provided, these headers completely replace any pool-specific headers. :param retries: Configure the number of retries to allow before raising a :class:`~urllib3.exceptions.MaxRetryError` exception. If ``None`` (default) will retry 3 times, see ``Retry.DEFAULT``. Pass a :class:`~urllib3.util.retry.Retry` object for fine-grained control over different types of retries. Pass an integer number to retry connection errors that many times, but no other types of errors. Pass zero to never retry. If ``False``, then retries are disabled and any exception is raised immediately. Also, instead of raising a MaxRetryError on redirects, the redirect response will be returned. :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. :param redirect: If True, automatically handle redirects (status codes 301, 302, 303, 307, 308). Each redirect counts as a retry. Disabling retries will disable redirect, too. :param assert_same_host: If ``True``, will make sure that the host of the pool requests is consistent else will raise HostChangedError. When ``False``, you can use the pool on an HTTP proxy and request foreign hosts. :param timeout: If specified, overrides the default timeout for this one request. It may be a float (in seconds) or an instance of :class:`urllib3.util.Timeout`. :param pool_timeout: If set and the pool is set to block=True, then this method will block for ``pool_timeout`` seconds and raise EmptyPoolError if no connection is available within the time period. :param bool preload_content: If True, the response's body will be preloaded into memory. :param bool decode_content: If True, will attempt to decode the body based on the 'content-encoding' header. :param release_conn: If False, then the urlopen call will not release the connection back into the pool once a response is received (but will release if you read the entire contents of the response such as when `preload_content=True`). This is useful if you're not preloading the response's content immediately. You will need to call ``r.release_conn()`` on the response ``r`` to return the connection back into the pool. If None, it takes the value of ``preload_content`` which defaults to ``True``. :param bool chunked: If True, urllib3 will send the body using chunked transfer encoding. Otherwise, urllib3 will send the body using the standard content-length form. Defaults to False. :param int body_pos: Position to seek to in file-like body in the event of a retry or redirect. Typically this won't need to be set because urllib3 will auto-populate the value when needed. """ parsed_url = parse_url(url) destination_scheme = parsed_url.scheme if headers is None: headers = self.headers if not isinstance(retries, Retry): retries = Retry.from_int(retries, redirect=redirect, default=self.retries) if release_conn is None: release_conn = preload_content # Check host if assert_same_host and not self.is_same_host(url): raise HostChangedError(self, url, retries) # Ensure that the URL we're connecting to is properly encoded if url.startswith("/"): url = to_str(_encode_target(url)) else: url = to_str(parsed_url.url) conn = None # Track whether `conn` needs to be released before # returning/raising/recursing. 
Update this variable if necessary, and # leave `release_conn` constant throughout the function. That way, if # the function recurses, the original value of `release_conn` will be # passed down into the recursive call, and its value will be respected. # # See issue #651 [1] for details. # # [1] release_this_conn = release_conn http_tunnel_required = connection_requires_http_tunnel( self.proxy, self.proxy_config, destination_scheme ) # Merge the proxy headers. Only done when not using HTTP CONNECT. We # have to copy the headers dict so we can safely change it without those # changes being reflected in anyone else's copy. if not http_tunnel_required: headers = headers.copy() # type: ignore[attr-defined] headers.update(self.proxy_headers) # type: ignore[union-attr] # Must keep the exception bound to a separate variable or else Python 3 # complains about UnboundLocalError. err = None # Keep track of whether we cleanly exited the except block. This # ensures we do proper cleanup in finally. clean_exit = False # Rewind body position, if needed. Record current position # for future rewinds in the event of a redirect/retry. body_pos = set_file_position(body, body_pos) try: # Request a connection from the queue. timeout_obj = self._get_timeout(timeout) conn = self._get_conn(timeout=pool_timeout) conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] # Is this a closed/new connection that requires CONNECT tunnelling? if self.proxy is not None and http_tunnel_required and conn.is_closed: try: self._prepare_proxy(conn) except (BaseSSLError, OSError, SocketTimeout) as e: self._raise_timeout( err=e, url=self.proxy.url, timeout_value=conn.timeout ) raise # If we're going to release the connection in ``finally:``, then # the response doesn't need to know about the connection. Otherwise # it will also try to release it and we'll have a double-release # mess. response_conn = conn if not release_conn else None # Make the request on the HTTPConnection object > response = self._make_request( conn, method, url, timeout=timeout_obj, body=body, headers=headers, chunked=chunked, retries=retries, response_conn=response_conn, preload_content=preload_content, decode_content=decode_content, **response_kw, ) ../.tox/tests121/lib/python3.11/site-packages/urllib3/connectionpool.py:789: _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ ../.tox/tests121/lib/python3.11/site-packages/urllib3/connectionpool.py:495: in _make_request conn.request( ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:441: in request self.endheaders() /opt/pyenv/versions/3.11.7/lib/python3.11/http/client.py:1289: in endheaders self._send_output(message_body, encode_chunked=encode_chunked) /opt/pyenv/versions/3.11.7/lib/python3.11/http/client.py:1048: in _send_output self.send(msg) /opt/pyenv/versions/3.11.7/lib/python3.11/http/client.py:986: in send self.connect() ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:279: in connect self.sock = self._new_conn() _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ self = def _new_conn(self) -> socket.socket: """Establish a socket connection and set nodelay settings on it. :return: New socket connection. 
""" try: sock = connection.create_connection( (self._dns_host, self.port), self.timeout, source_address=self.source_address, socket_options=self.socket_options, ) except socket.gaierror as e: raise NameResolutionError(self.host, self, e) from e except SocketTimeout as e: raise ConnectTimeoutError( self, f"Connection to {self.host} timed out. (connect timeout={self.timeout})", ) from e except OSError as e: > raise NewConnectionError( self, f"Failed to establish a new connection: {e}" ) from e E urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:214: NewConnectionError The above exception was the direct cause of the following exception: self = request = , stream = False timeout = Timeout(connect=10, read=10, total=None), verify = True, cert = None proxies = OrderedDict() def send( self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None ): """Sends PreparedRequest object. Returns Response object. :param request: The :class:`PreparedRequest ` being sent. :param stream: (optional) Whether to stream the request content. :param timeout: (optional) How long to wait for the server to send data before giving up, as a float, or a :ref:`(connect timeout, read timeout) ` tuple. :type timeout: float or tuple or urllib3 Timeout object :param verify: (optional) Either a boolean, in which case it controls whether we verify the server's TLS certificate, or a string, in which case it must be a path to a CA bundle to use :param cert: (optional) Any user-provided SSL certificate to be trusted. :param proxies: (optional) The proxies dictionary to apply to the request. :rtype: requests.Response """ try: conn = self.get_connection_with_tls_context( request, verify, proxies=proxies, cert=cert ) except LocationValueError as e: raise InvalidURL(e, request=request) self.cert_verify(conn, request.url, verify, cert) url = self.request_url(request, proxies) self.add_headers( request, stream=stream, timeout=timeout, verify=verify, cert=cert, proxies=proxies, ) chunked = not (request.body is None or "Content-Length" in request.headers) if isinstance(timeout, tuple): try: connect, read = timeout timeout = TimeoutSauce(connect=connect, read=read) except ValueError: raise ValueError( f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " f"or a single float to set both timeouts to the same value." 
) elif isinstance(timeout, TimeoutSauce): pass else: timeout = TimeoutSauce(connect=timeout, read=timeout) try: > resp = conn.urlopen( method=request.method, url=url, body=request.body, headers=request.headers, redirect=False, assert_same_host=False, preload_content=False, decode_content=False, retries=self.max_retries, timeout=timeout, chunked=chunked, ) ../.tox/tests121/lib/python3.11/site-packages/requests/adapters.py:667: _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ ../.tox/tests121/lib/python3.11/site-packages/urllib3/connectionpool.py:843: in urlopen retries = retries.increment( _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ self = Retry(total=0, connect=None, read=False, redirect=None, status=None) method = 'GET' url = '/rests/data/transportpce-portmapping:network/nodes=XPDRA01/mapping=XPDR1-CLIENT3' response = None error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused') _pool = _stacktrace = def increment( self, method: str | None = None, url: str | None = None, response: BaseHTTPResponse | None = None, error: Exception | None = None, _pool: ConnectionPool | None = None, _stacktrace: TracebackType | None = None, ) -> Self: """Return a new Retry object with incremented retry counters. :param response: A response object, or None, if the server did not return a response. :type response: :class:`~urllib3.response.BaseHTTPResponse` :param Exception error: An error encountered during the request, or None if the response was received successfully. :return: A new ``Retry`` object. """ if self.total is False and error: # Disabled, indicate to re-raise the error. raise reraise(type(error), error, _stacktrace) total = self.total if total is not None: total -= 1 connect = self.connect read = self.read redirect = self.redirect status_count = self.status other = self.other cause = "unknown" status = None redirect_location = None if error and self._is_connection_error(error): # Connect retry? if connect is False: raise reraise(type(error), error, _stacktrace) elif connect is not None: connect -= 1 elif error and self._is_read_error(error): # Read retry? if read is False or method is None or not self._is_method_retryable(method): raise reraise(type(error), error, _stacktrace) elif read is not None: read -= 1 elif error: # Other retry? if other is not None: other -= 1 elif response and response.get_redirect_location(): # Redirect retry? 
if redirect is not None: redirect -= 1 cause = "too many redirects" response_redirect_location = response.get_redirect_location() if response_redirect_location: redirect_location = response_redirect_location status = response.status else: # Incrementing because of a server error like a 500 in # status_forcelist and the given method is in the allowed_methods cause = ResponseError.GENERIC_ERROR if response and response.status: if status_count is not None: status_count -= 1 cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status) status = response.status history = self.history + ( RequestHistory(method, url, error, status, redirect_location), ) new_retry = self.new( total=total, connect=connect, read=read, redirect=redirect, status=status_count, other=other, history=history, ) if new_retry.is_exhausted(): reason = error or ResponseError(cause) > raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type] E urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=8182): Max retries exceeded with url: /rests/data/transportpce-portmapping:network/nodes=XPDRA01/mapping=XPDR1-CLIENT3 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) ../.tox/tests121/lib/python3.11/site-packages/urllib3/util/retry.py:519: MaxRetryError During handling of the above exception, another exception occurred: self = def test_14_xpdr_portmapping_CLIENT3(self): > response = test_utils.get_portmapping_node_attr("XPDRA01", "mapping", "XPDR1-CLIENT3") transportpce_tests/1.2.1/test01_portmapping.py:168: _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ transportpce_tests/common/test_utils.py:473: in get_portmapping_node_attr response = get_request(target_url) transportpce_tests/common/test_utils.py:116: in get_request return requests.request( ../.tox/tests121/lib/python3.11/site-packages/requests/api.py:59: in request return session.request(method=method, url=url, **kwargs) ../.tox/tests121/lib/python3.11/site-packages/requests/sessions.py:589: in request resp = self.send(prep, **send_kwargs) ../.tox/tests121/lib/python3.11/site-packages/requests/sessions.py:703: in send r = adapter.send(request, **kwargs) _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ self = request = , stream = False timeout = Timeout(connect=10, read=10, total=None), verify = True, cert = None proxies = OrderedDict() def send( self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None ): """Sends PreparedRequest object. Returns Response object. :param request: The :class:`PreparedRequest ` being sent. :param stream: (optional) Whether to stream the request content. :param timeout: (optional) How long to wait for the server to send data before giving up, as a float, or a :ref:`(connect timeout, read timeout) ` tuple. :type timeout: float or tuple or urllib3 Timeout object :param verify: (optional) Either a boolean, in which case it controls whether we verify the server's TLS certificate, or a string, in which case it must be a path to a CA bundle to use :param cert: (optional) Any user-provided SSL certificate to be trusted. :param proxies: (optional) The proxies dictionary to apply to the request. 
:rtype: requests.Response """ try: conn = self.get_connection_with_tls_context( request, verify, proxies=proxies, cert=cert ) except LocationValueError as e: raise InvalidURL(e, request=request) self.cert_verify(conn, request.url, verify, cert) url = self.request_url(request, proxies) self.add_headers( request, stream=stream, timeout=timeout, verify=verify, cert=cert, proxies=proxies, ) chunked = not (request.body is None or "Content-Length" in request.headers) if isinstance(timeout, tuple): try: connect, read = timeout timeout = TimeoutSauce(connect=connect, read=read) except ValueError: raise ValueError( f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " f"or a single float to set both timeouts to the same value." ) elif isinstance(timeout, TimeoutSauce): pass else: timeout = TimeoutSauce(connect=timeout, read=timeout) try: resp = conn.urlopen( method=request.method, url=url, body=request.body, headers=request.headers, redirect=False, assert_same_host=False, preload_content=False, decode_content=False, retries=self.max_retries, timeout=timeout, chunked=chunked, ) except (ProtocolError, OSError) as err: raise ConnectionError(err, request=request) except MaxRetryError as e: if isinstance(e.reason, ConnectTimeoutError): # TODO: Remove this in 3.0.0: see #2811 if not isinstance(e.reason, NewConnectionError): raise ConnectTimeout(e, request=request) if isinstance(e.reason, ResponseError): raise RetryError(e, request=request) if isinstance(e.reason, _ProxyError): raise ProxyError(e, request=request) if isinstance(e.reason, _SSLError): # This branch is for urllib3 v1.22 and later. raise SSLError(e, request=request) > raise ConnectionError(e, request=request) E requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=8182): Max retries exceeded with url: /rests/data/transportpce-portmapping:network/nodes=XPDRA01/mapping=XPDR1-CLIENT3 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) ../.tox/tests121/lib/python3.11/site-packages/requests/adapters.py:700: ConnectionError ----------------------------- Captured stdout call ----------------------------- execution of test_14_xpdr_portmapping_CLIENT3 _______ TransportPCEPortMappingTesting.test_15_xpdr_portmapping_CLIENT4 ________ self = def _new_conn(self) -> socket.socket: """Establish a socket connection and set nodelay settings on it. :return: New socket connection. """ try: > sock = connection.create_connection( (self._dns_host, self.port), self.timeout, source_address=self.source_address, socket_options=self.socket_options, ) ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:199: _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ ../.tox/tests121/lib/python3.11/site-packages/urllib3/util/connection.py:85: in create_connection raise err _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ address = ('localhost', 8182), timeout = 10, source_address = None socket_options = [(6, 1, 1)] def create_connection( address: tuple[str, int], timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, source_address: tuple[str, int] | None = None, socket_options: _TYPE_SOCKET_OPTIONS | None = None, ) -> socket.socket: """Connect to *address* and return the socket object. Convenience function. Connect to *address* (a 2-tuple ``(host, port)``) and return the socket object. Passing the optional *timeout* parameter will set the timeout on the socket instance before attempting to connect. 
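At the bottom of every one of these stacks is the same plain socket connect; the refusal can be confirmed outside urllib3 with a few lines of stdlib code against the same localhost:8182 target:

import errno
import socket

try:
    # socket.create_connection() walks getaddrinfo() results much like the
    # urllib3 helper quoted in these frames, trying each address in turn.
    with socket.create_connection(("localhost", 8182), timeout=10):
        print("something is listening on 8182")
except OSError as exc:
    if exc.errno == errno.ECONNREFUSED:   # [Errno 111] on Linux
        print("connection refused - controller not started")
    else:
        raise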
If no *timeout* is supplied, the global default timeout setting returned by :func:`socket.getdefaulttimeout` is used. If *source_address* is set it must be a tuple of (host, port) for the socket to bind as a source address before making the connection. An host of '' or port 0 tells the OS to use the default. """ host, port = address if host.startswith("["): host = host.strip("[]") err = None # Using the value from allowed_gai_family() in the context of getaddrinfo lets # us select whether to work with IPv4 DNS records, IPv6 records, or both. # The original create_connection function always returns all records. family = allowed_gai_family() try: host.encode("idna") except UnicodeError: raise LocationParseError(f"'{host}', label empty or too long") from None for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM): af, socktype, proto, canonname, sa = res sock = None try: sock = socket.socket(af, socktype, proto) # If provided, set socket level options before connecting. _set_socket_options(sock, socket_options) if timeout is not _DEFAULT_TIMEOUT: sock.settimeout(timeout) if source_address: sock.bind(source_address) > sock.connect(sa) E ConnectionRefusedError: [Errno 111] Connection refused ../.tox/tests121/lib/python3.11/site-packages/urllib3/util/connection.py:73: ConnectionRefusedError The above exception was the direct cause of the following exception: self = method = 'GET' url = '/rests/data/transportpce-portmapping:network/nodes=XPDRA01/mapping=XPDR1-CLIENT4' body = None headers = {'User-Agent': 'python-requests/2.32.3', 'Accept-Encoding': 'gzip, deflate', 'Accept': 'application/json', 'Connection': 'keep-alive', 'Content-Type': 'application/json', 'Authorization': 'Basic YWRtaW46YWRtaW4='} retries = Retry(total=0, connect=None, read=False, redirect=None, status=None) redirect = False, assert_same_host = False timeout = Timeout(connect=10, read=10, total=None), pool_timeout = None release_conn = False, chunked = False, body_pos = None, preload_content = False decode_content = False, response_kw = {} parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/rests/data/transportpce-portmapping:network/nodes=XPDRA01/mapping=XPDR1-CLIENT4', query=None, fragment=None) destination_scheme = None, conn = None, release_this_conn = True http_tunnel_required = False, err = None, clean_exit = False def urlopen( # type: ignore[override] self, method: str, url: str, body: _TYPE_BODY | None = None, headers: typing.Mapping[str, str] | None = None, retries: Retry | bool | int | None = None, redirect: bool = True, assert_same_host: bool = True, timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, pool_timeout: int | None = None, release_conn: bool | None = None, chunked: bool = False, body_pos: _TYPE_BODY_POSITION | None = None, preload_content: bool = True, decode_content: bool = True, **response_kw: typing.Any, ) -> BaseHTTPResponse: """ Get a connection from the pool and perform an HTTP request. This is the lowest level call for making a request, so you'll need to specify all the raw details. .. note:: More commonly, it's appropriate to use a convenience method such as :meth:`request`. .. note:: `release_conn` will only behave as expected if `preload_content=False` because we want to make `preload_content=False` the default behaviour someday soon without breaking backwards compatibility. :param method: HTTP request method (such as GET, POST, PUT, etc.) :param url: The URL to perform the request on. 
:param body: Data to send in the request body, either :class:`str`, :class:`bytes`, an iterable of :class:`str`/:class:`bytes`, or a file-like object. :param headers: Dictionary of custom headers to send, such as User-Agent, If-None-Match, etc. If None, pool headers are used. If provided, these headers completely replace any pool-specific headers. :param retries: Configure the number of retries to allow before raising a :class:`~urllib3.exceptions.MaxRetryError` exception. If ``None`` (default) will retry 3 times, see ``Retry.DEFAULT``. Pass a :class:`~urllib3.util.retry.Retry` object for fine-grained control over different types of retries. Pass an integer number to retry connection errors that many times, but no other types of errors. Pass zero to never retry. If ``False``, then retries are disabled and any exception is raised immediately. Also, instead of raising a MaxRetryError on redirects, the redirect response will be returned. :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. :param redirect: If True, automatically handle redirects (status codes 301, 302, 303, 307, 308). Each redirect counts as a retry. Disabling retries will disable redirect, too. :param assert_same_host: If ``True``, will make sure that the host of the pool requests is consistent else will raise HostChangedError. When ``False``, you can use the pool on an HTTP proxy and request foreign hosts. :param timeout: If specified, overrides the default timeout for this one request. It may be a float (in seconds) or an instance of :class:`urllib3.util.Timeout`. :param pool_timeout: If set and the pool is set to block=True, then this method will block for ``pool_timeout`` seconds and raise EmptyPoolError if no connection is available within the time period. :param bool preload_content: If True, the response's body will be preloaded into memory. :param bool decode_content: If True, will attempt to decode the body based on the 'content-encoding' header. :param release_conn: If False, then the urlopen call will not release the connection back into the pool once a response is received (but will release if you read the entire contents of the response such as when `preload_content=True`). This is useful if you're not preloading the response's content immediately. You will need to call ``r.release_conn()`` on the response ``r`` to return the connection back into the pool. If None, it takes the value of ``preload_content`` which defaults to ``True``. :param bool chunked: If True, urllib3 will send the body using chunked transfer encoding. Otherwise, urllib3 will send the body using the standard content-length form. Defaults to False. :param int body_pos: Position to seek to in file-like body in the event of a retry or redirect. Typically this won't need to be set because urllib3 will auto-populate the value when needed. """ parsed_url = parse_url(url) destination_scheme = parsed_url.scheme if headers is None: headers = self.headers if not isinstance(retries, Retry): retries = Retry.from_int(retries, redirect=redirect, default=self.retries) if release_conn is None: release_conn = preload_content # Check host if assert_same_host and not self.is_same_host(url): raise HostChangedError(self, url, retries) # Ensure that the URL we're connecting to is properly encoded if url.startswith("/"): url = to_str(_encode_target(url)) else: url = to_str(parsed_url.url) conn = None # Track whether `conn` needs to be released before # returning/raising/recursing. 
Update this variable if necessary, and # leave `release_conn` constant throughout the function. That way, if # the function recurses, the original value of `release_conn` will be # passed down into the recursive call, and its value will be respected. # # See issue #651 [1] for details. # # [1] release_this_conn = release_conn http_tunnel_required = connection_requires_http_tunnel( self.proxy, self.proxy_config, destination_scheme ) # Merge the proxy headers. Only done when not using HTTP CONNECT. We # have to copy the headers dict so we can safely change it without those # changes being reflected in anyone else's copy. if not http_tunnel_required: headers = headers.copy() # type: ignore[attr-defined] headers.update(self.proxy_headers) # type: ignore[union-attr] # Must keep the exception bound to a separate variable or else Python 3 # complains about UnboundLocalError. err = None # Keep track of whether we cleanly exited the except block. This # ensures we do proper cleanup in finally. clean_exit = False # Rewind body position, if needed. Record current position # for future rewinds in the event of a redirect/retry. body_pos = set_file_position(body, body_pos) try: # Request a connection from the queue. timeout_obj = self._get_timeout(timeout) conn = self._get_conn(timeout=pool_timeout) conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] # Is this a closed/new connection that requires CONNECT tunnelling? if self.proxy is not None and http_tunnel_required and conn.is_closed: try: self._prepare_proxy(conn) except (BaseSSLError, OSError, SocketTimeout) as e: self._raise_timeout( err=e, url=self.proxy.url, timeout_value=conn.timeout ) raise # If we're going to release the connection in ``finally:``, then # the response doesn't need to know about the connection. Otherwise # it will also try to release it and we'll have a double-release # mess. response_conn = conn if not release_conn else None # Make the request on the HTTPConnection object > response = self._make_request( conn, method, url, timeout=timeout_obj, body=body, headers=headers, chunked=chunked, retries=retries, response_conn=response_conn, preload_content=preload_content, decode_content=decode_content, **response_kw, ) ../.tox/tests121/lib/python3.11/site-packages/urllib3/connectionpool.py:789: _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ ../.tox/tests121/lib/python3.11/site-packages/urllib3/connectionpool.py:495: in _make_request conn.request( ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:441: in request self.endheaders() /opt/pyenv/versions/3.11.7/lib/python3.11/http/client.py:1289: in endheaders self._send_output(message_body, encode_chunked=encode_chunked) /opt/pyenv/versions/3.11.7/lib/python3.11/http/client.py:1048: in _send_output self.send(msg) /opt/pyenv/versions/3.11.7/lib/python3.11/http/client.py:986: in send self.connect() ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:279: in connect self.sock = self._new_conn() _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ self = def _new_conn(self) -> socket.socket: """Establish a socket connection and set nodelay settings on it. :return: New socket connection. 
""" try: sock = connection.create_connection( (self._dns_host, self.port), self.timeout, source_address=self.source_address, socket_options=self.socket_options, ) except socket.gaierror as e: raise NameResolutionError(self.host, self, e) from e except SocketTimeout as e: raise ConnectTimeoutError( self, f"Connection to {self.host} timed out. (connect timeout={self.timeout})", ) from e except OSError as e: > raise NewConnectionError( self, f"Failed to establish a new connection: {e}" ) from e E urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:214: NewConnectionError The above exception was the direct cause of the following exception: self = request = , stream = False timeout = Timeout(connect=10, read=10, total=None), verify = True, cert = None proxies = OrderedDict() def send( self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None ): """Sends PreparedRequest object. Returns Response object. :param request: The :class:`PreparedRequest ` being sent. :param stream: (optional) Whether to stream the request content. :param timeout: (optional) How long to wait for the server to send data before giving up, as a float, or a :ref:`(connect timeout, read timeout) ` tuple. :type timeout: float or tuple or urllib3 Timeout object :param verify: (optional) Either a boolean, in which case it controls whether we verify the server's TLS certificate, or a string, in which case it must be a path to a CA bundle to use :param cert: (optional) Any user-provided SSL certificate to be trusted. :param proxies: (optional) The proxies dictionary to apply to the request. :rtype: requests.Response """ try: conn = self.get_connection_with_tls_context( request, verify, proxies=proxies, cert=cert ) except LocationValueError as e: raise InvalidURL(e, request=request) self.cert_verify(conn, request.url, verify, cert) url = self.request_url(request, proxies) self.add_headers( request, stream=stream, timeout=timeout, verify=verify, cert=cert, proxies=proxies, ) chunked = not (request.body is None or "Content-Length" in request.headers) if isinstance(timeout, tuple): try: connect, read = timeout timeout = TimeoutSauce(connect=connect, read=read) except ValueError: raise ValueError( f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " f"or a single float to set both timeouts to the same value." 
) elif isinstance(timeout, TimeoutSauce): pass else: timeout = TimeoutSauce(connect=timeout, read=timeout) try: > resp = conn.urlopen( method=request.method, url=url, body=request.body, headers=request.headers, redirect=False, assert_same_host=False, preload_content=False, decode_content=False, retries=self.max_retries, timeout=timeout, chunked=chunked, ) ../.tox/tests121/lib/python3.11/site-packages/requests/adapters.py:667: _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ ../.tox/tests121/lib/python3.11/site-packages/urllib3/connectionpool.py:843: in urlopen retries = retries.increment( _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ self = Retry(total=0, connect=None, read=False, redirect=None, status=None) method = 'GET' url = '/rests/data/transportpce-portmapping:network/nodes=XPDRA01/mapping=XPDR1-CLIENT4' response = None error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused') _pool = _stacktrace = def increment( self, method: str | None = None, url: str | None = None, response: BaseHTTPResponse | None = None, error: Exception | None = None, _pool: ConnectionPool | None = None, _stacktrace: TracebackType | None = None, ) -> Self: """Return a new Retry object with incremented retry counters. :param response: A response object, or None, if the server did not return a response. :type response: :class:`~urllib3.response.BaseHTTPResponse` :param Exception error: An error encountered during the request, or None if the response was received successfully. :return: A new ``Retry`` object. """ if self.total is False and error: # Disabled, indicate to re-raise the error. raise reraise(type(error), error, _stacktrace) total = self.total if total is not None: total -= 1 connect = self.connect read = self.read redirect = self.redirect status_count = self.status other = self.other cause = "unknown" status = None redirect_location = None if error and self._is_connection_error(error): # Connect retry? if connect is False: raise reraise(type(error), error, _stacktrace) elif connect is not None: connect -= 1 elif error and self._is_read_error(error): # Read retry? if read is False or method is None or not self._is_method_retryable(method): raise reraise(type(error), error, _stacktrace) elif read is not None: read -= 1 elif error: # Other retry? if other is not None: other -= 1 elif response and response.get_redirect_location(): # Redirect retry? 
if redirect is not None: redirect -= 1 cause = "too many redirects" response_redirect_location = response.get_redirect_location() if response_redirect_location: redirect_location = response_redirect_location status = response.status else: # Incrementing because of a server error like a 500 in # status_forcelist and the given method is in the allowed_methods cause = ResponseError.GENERIC_ERROR if response and response.status: if status_count is not None: status_count -= 1 cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status) status = response.status history = self.history + ( RequestHistory(method, url, error, status, redirect_location), ) new_retry = self.new( total=total, connect=connect, read=read, redirect=redirect, status=status_count, other=other, history=history, ) if new_retry.is_exhausted(): reason = error or ResponseError(cause) > raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type] E urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=8182): Max retries exceeded with url: /rests/data/transportpce-portmapping:network/nodes=XPDRA01/mapping=XPDR1-CLIENT4 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) ../.tox/tests121/lib/python3.11/site-packages/urllib3/util/retry.py:519: MaxRetryError During handling of the above exception, another exception occurred: self = def test_15_xpdr_portmapping_CLIENT4(self): > response = test_utils.get_portmapping_node_attr("XPDRA01", "mapping", "XPDR1-CLIENT4") transportpce_tests/1.2.1/test01_portmapping.py:180: _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ transportpce_tests/common/test_utils.py:473: in get_portmapping_node_attr response = get_request(target_url) transportpce_tests/common/test_utils.py:116: in get_request return requests.request( ../.tox/tests121/lib/python3.11/site-packages/requests/api.py:59: in request return session.request(method=method, url=url, **kwargs) ../.tox/tests121/lib/python3.11/site-packages/requests/sessions.py:589: in request resp = self.send(prep, **send_kwargs) ../.tox/tests121/lib/python3.11/site-packages/requests/sessions.py:703: in send r = adapter.send(request, **kwargs) _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ self = request = , stream = False timeout = Timeout(connect=10, read=10, total=None), verify = True, cert = None proxies = OrderedDict() def send( self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None ): """Sends PreparedRequest object. Returns Response object. :param request: The :class:`PreparedRequest ` being sent. :param stream: (optional) Whether to stream the request content. :param timeout: (optional) How long to wait for the server to send data before giving up, as a float, or a :ref:`(connect timeout, read timeout) ` tuple. :type timeout: float or tuple or urllib3 Timeout object :param verify: (optional) Either a boolean, in which case it controls whether we verify the server's TLS certificate, or a string, in which case it must be a path to a CA bundle to use :param cert: (optional) Any user-provided SSL certificate to be trusted. :param proxies: (optional) The proxies dictionary to apply to the request. 
:rtype: requests.Response """ try: conn = self.get_connection_with_tls_context( request, verify, proxies=proxies, cert=cert ) except LocationValueError as e: raise InvalidURL(e, request=request) self.cert_verify(conn, request.url, verify, cert) url = self.request_url(request, proxies) self.add_headers( request, stream=stream, timeout=timeout, verify=verify, cert=cert, proxies=proxies, ) chunked = not (request.body is None or "Content-Length" in request.headers) if isinstance(timeout, tuple): try: connect, read = timeout timeout = TimeoutSauce(connect=connect, read=read) except ValueError: raise ValueError( f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " f"or a single float to set both timeouts to the same value." ) elif isinstance(timeout, TimeoutSauce): pass else: timeout = TimeoutSauce(connect=timeout, read=timeout) try: resp = conn.urlopen( method=request.method, url=url, body=request.body, headers=request.headers, redirect=False, assert_same_host=False, preload_content=False, decode_content=False, retries=self.max_retries, timeout=timeout, chunked=chunked, ) except (ProtocolError, OSError) as err: raise ConnectionError(err, request=request) except MaxRetryError as e: if isinstance(e.reason, ConnectTimeoutError): # TODO: Remove this in 3.0.0: see #2811 if not isinstance(e.reason, NewConnectionError): raise ConnectTimeout(e, request=request) if isinstance(e.reason, ResponseError): raise RetryError(e, request=request) if isinstance(e.reason, _ProxyError): raise ProxyError(e, request=request) if isinstance(e.reason, _SSLError): # This branch is for urllib3 v1.22 and later. raise SSLError(e, request=request) > raise ConnectionError(e, request=request) E requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=8182): Max retries exceeded with url: /rests/data/transportpce-portmapping:network/nodes=XPDRA01/mapping=XPDR1-CLIENT4 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) ../.tox/tests121/lib/python3.11/site-packages/requests/adapters.py:700: ConnectionError ----------------------------- Captured stdout call ----------------------------- execution of test_15_xpdr_portmapping_CLIENT4 _______ TransportPCEPortMappingTesting.test_16_xpdr_device_disconnection _______ self = def _new_conn(self) -> socket.socket: """Establish a socket connection and set nodelay settings on it. :return: New socket connection. """ try: > sock = connection.create_connection( (self._dns_host, self.port), self.timeout, source_address=self.source_address, socket_options=self.socket_options, ) ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:199: _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ ../.tox/tests121/lib/python3.11/site-packages/urllib3/util/connection.py:85: in create_connection raise err _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ address = ('localhost', 8182), timeout = 10, source_address = None socket_options = [(6, 1, 1)] def create_connection( address: tuple[str, int], timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, source_address: tuple[str, int] | None = None, socket_options: _TYPE_SOCKET_OPTIONS | None = None, ) -> socket.socket: """Connect to *address* and return the socket object. Convenience function. Connect to *address* (a 2-tuple ``(host, port)``) and return the socket object. Passing the optional *timeout* parameter will set the timeout on the socket instance before attempting to connect. 
If no *timeout* is supplied, the global default timeout setting returned by :func:`socket.getdefaulttimeout` is used. If *source_address* is set it must be a tuple of (host, port) for the socket to bind as a source address before making the connection. An host of '' or port 0 tells the OS to use the default. """ host, port = address if host.startswith("["): host = host.strip("[]") err = None # Using the value from allowed_gai_family() in the context of getaddrinfo lets # us select whether to work with IPv4 DNS records, IPv6 records, or both. # The original create_connection function always returns all records. family = allowed_gai_family() try: host.encode("idna") except UnicodeError: raise LocationParseError(f"'{host}', label empty or too long") from None for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM): af, socktype, proto, canonname, sa = res sock = None try: sock = socket.socket(af, socktype, proto) # If provided, set socket level options before connecting. _set_socket_options(sock, socket_options) if timeout is not _DEFAULT_TIMEOUT: sock.settimeout(timeout) if source_address: sock.bind(source_address) > sock.connect(sa) E ConnectionRefusedError: [Errno 111] Connection refused ../.tox/tests121/lib/python3.11/site-packages/urllib3/util/connection.py:73: ConnectionRefusedError The above exception was the direct cause of the following exception: self = method = 'DELETE' url = '/rests/data/network-topology:network-topology/topology=topology-netconf/node=XPDRA01' body = None headers = {'User-Agent': 'python-requests/2.32.3', 'Accept-Encoding': 'gzip, deflate', 'Accept': 'application/json', 'Connection': 'keep-alive', 'Content-Type': 'application/json', 'Content-Length': '0', 'Authorization': 'Basic YWRtaW46YWRtaW4='} retries = Retry(total=0, connect=None, read=False, redirect=None, status=None) redirect = False, assert_same_host = False timeout = Timeout(connect=10, read=10, total=None), pool_timeout = None release_conn = False, chunked = False, body_pos = None, preload_content = False decode_content = False, response_kw = {} parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/rests/data/network-topology:network-topology/topology=topology-netconf/node=XPDRA01', query=None, fragment=None) destination_scheme = None, conn = None, release_this_conn = True http_tunnel_required = False, err = None, clean_exit = False def urlopen( # type: ignore[override] self, method: str, url: str, body: _TYPE_BODY | None = None, headers: typing.Mapping[str, str] | None = None, retries: Retry | bool | int | None = None, redirect: bool = True, assert_same_host: bool = True, timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, pool_timeout: int | None = None, release_conn: bool | None = None, chunked: bool = False, body_pos: _TYPE_BODY_POSITION | None = None, preload_content: bool = True, decode_content: bool = True, **response_kw: typing.Any, ) -> BaseHTTPResponse: """ Get a connection from the pool and perform an HTTP request. This is the lowest level call for making a request, so you'll need to specify all the raw details. .. note:: More commonly, it's appropriate to use a convenience method such as :meth:`request`. .. note:: `release_conn` will only behave as expected if `preload_content=False` because we want to make `preload_content=False` the default behaviour someday soon without breaking backwards compatibility. :param method: HTTP request method (such as GET, POST, PUT, etc.) :param url: The URL to perform the request on. 
:param body: Data to send in the request body, either :class:`str`, :class:`bytes`, an iterable of :class:`str`/:class:`bytes`, or a file-like object. :param headers: Dictionary of custom headers to send, such as User-Agent, If-None-Match, etc. If None, pool headers are used. If provided, these headers completely replace any pool-specific headers. :param retries: Configure the number of retries to allow before raising a :class:`~urllib3.exceptions.MaxRetryError` exception. If ``None`` (default) will retry 3 times, see ``Retry.DEFAULT``. Pass a :class:`~urllib3.util.retry.Retry` object for fine-grained control over different types of retries. Pass an integer number to retry connection errors that many times, but no other types of errors. Pass zero to never retry. If ``False``, then retries are disabled and any exception is raised immediately. Also, instead of raising a MaxRetryError on redirects, the redirect response will be returned. :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. :param redirect: If True, automatically handle redirects (status codes 301, 302, 303, 307, 308). Each redirect counts as a retry. Disabling retries will disable redirect, too. :param assert_same_host: If ``True``, will make sure that the host of the pool requests is consistent else will raise HostChangedError. When ``False``, you can use the pool on an HTTP proxy and request foreign hosts. :param timeout: If specified, overrides the default timeout for this one request. It may be a float (in seconds) or an instance of :class:`urllib3.util.Timeout`. :param pool_timeout: If set and the pool is set to block=True, then this method will block for ``pool_timeout`` seconds and raise EmptyPoolError if no connection is available within the time period. :param bool preload_content: If True, the response's body will be preloaded into memory. :param bool decode_content: If True, will attempt to decode the body based on the 'content-encoding' header. :param release_conn: If False, then the urlopen call will not release the connection back into the pool once a response is received (but will release if you read the entire contents of the response such as when `preload_content=True`). This is useful if you're not preloading the response's content immediately. You will need to call ``r.release_conn()`` on the response ``r`` to return the connection back into the pool. If None, it takes the value of ``preload_content`` which defaults to ``True``. :param bool chunked: If True, urllib3 will send the body using chunked transfer encoding. Otherwise, urllib3 will send the body using the standard content-length form. Defaults to False. :param int body_pos: Position to seek to in file-like body in the event of a retry or redirect. Typically this won't need to be set because urllib3 will auto-populate the value when needed. """ parsed_url = parse_url(url) destination_scheme = parsed_url.scheme if headers is None: headers = self.headers if not isinstance(retries, Retry): retries = Retry.from_int(retries, redirect=redirect, default=self.retries) if release_conn is None: release_conn = preload_content # Check host if assert_same_host and not self.is_same_host(url): raise HostChangedError(self, url, retries) # Ensure that the URL we're connecting to is properly encoded if url.startswith("/"): url = to_str(_encode_target(url)) else: url = to_str(parsed_url.url) conn = None # Track whether `conn` needs to be released before # returning/raising/recursing. 
Update this variable if necessary, and # leave `release_conn` constant throughout the function. That way, if # the function recurses, the original value of `release_conn` will be # passed down into the recursive call, and its value will be respected. # # See issue #651 [1] for details. # # [1] release_this_conn = release_conn http_tunnel_required = connection_requires_http_tunnel( self.proxy, self.proxy_config, destination_scheme ) # Merge the proxy headers. Only done when not using HTTP CONNECT. We # have to copy the headers dict so we can safely change it without those # changes being reflected in anyone else's copy. if not http_tunnel_required: headers = headers.copy() # type: ignore[attr-defined] headers.update(self.proxy_headers) # type: ignore[union-attr] # Must keep the exception bound to a separate variable or else Python 3 # complains about UnboundLocalError. err = None # Keep track of whether we cleanly exited the except block. This # ensures we do proper cleanup in finally. clean_exit = False # Rewind body position, if needed. Record current position # for future rewinds in the event of a redirect/retry. body_pos = set_file_position(body, body_pos) try: # Request a connection from the queue. timeout_obj = self._get_timeout(timeout) conn = self._get_conn(timeout=pool_timeout) conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] # Is this a closed/new connection that requires CONNECT tunnelling? if self.proxy is not None and http_tunnel_required and conn.is_closed: try: self._prepare_proxy(conn) except (BaseSSLError, OSError, SocketTimeout) as e: self._raise_timeout( err=e, url=self.proxy.url, timeout_value=conn.timeout ) raise # If we're going to release the connection in ``finally:``, then # the response doesn't need to know about the connection. Otherwise # it will also try to release it and we'll have a double-release # mess. response_conn = conn if not release_conn else None # Make the request on the HTTPConnection object > response = self._make_request( conn, method, url, timeout=timeout_obj, body=body, headers=headers, chunked=chunked, retries=retries, response_conn=response_conn, preload_content=preload_content, decode_content=decode_content, **response_kw, ) ../.tox/tests121/lib/python3.11/site-packages/urllib3/connectionpool.py:789: _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ ../.tox/tests121/lib/python3.11/site-packages/urllib3/connectionpool.py:495: in _make_request conn.request( ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:441: in request self.endheaders() /opt/pyenv/versions/3.11.7/lib/python3.11/http/client.py:1289: in endheaders self._send_output(message_body, encode_chunked=encode_chunked) /opt/pyenv/versions/3.11.7/lib/python3.11/http/client.py:1048: in _send_output self.send(msg) /opt/pyenv/versions/3.11.7/lib/python3.11/http/client.py:986: in send self.connect() ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:279: in connect self.sock = self._new_conn() _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ self = def _new_conn(self) -> socket.socket: """Establish a socket connection and set nodelay settings on it. :return: New socket connection. 
""" try: sock = connection.create_connection( (self._dns_host, self.port), self.timeout, source_address=self.source_address, socket_options=self.socket_options, ) except socket.gaierror as e: raise NameResolutionError(self.host, self, e) from e except SocketTimeout as e: raise ConnectTimeoutError( self, f"Connection to {self.host} timed out. (connect timeout={self.timeout})", ) from e except OSError as e: > raise NewConnectionError( self, f"Failed to establish a new connection: {e}" ) from e E urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:214: NewConnectionError The above exception was the direct cause of the following exception: self = request = , stream = False timeout = Timeout(connect=10, read=10, total=None), verify = True, cert = None proxies = OrderedDict() def send( self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None ): """Sends PreparedRequest object. Returns Response object. :param request: The :class:`PreparedRequest ` being sent. :param stream: (optional) Whether to stream the request content. :param timeout: (optional) How long to wait for the server to send data before giving up, as a float, or a :ref:`(connect timeout, read timeout) ` tuple. :type timeout: float or tuple or urllib3 Timeout object :param verify: (optional) Either a boolean, in which case it controls whether we verify the server's TLS certificate, or a string, in which case it must be a path to a CA bundle to use :param cert: (optional) Any user-provided SSL certificate to be trusted. :param proxies: (optional) The proxies dictionary to apply to the request. :rtype: requests.Response """ try: conn = self.get_connection_with_tls_context( request, verify, proxies=proxies, cert=cert ) except LocationValueError as e: raise InvalidURL(e, request=request) self.cert_verify(conn, request.url, verify, cert) url = self.request_url(request, proxies) self.add_headers( request, stream=stream, timeout=timeout, verify=verify, cert=cert, proxies=proxies, ) chunked = not (request.body is None or "Content-Length" in request.headers) if isinstance(timeout, tuple): try: connect, read = timeout timeout = TimeoutSauce(connect=connect, read=read) except ValueError: raise ValueError( f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " f"or a single float to set both timeouts to the same value." 
) elif isinstance(timeout, TimeoutSauce): pass else: timeout = TimeoutSauce(connect=timeout, read=timeout) try: > resp = conn.urlopen( method=request.method, url=url, body=request.body, headers=request.headers, redirect=False, assert_same_host=False, preload_content=False, decode_content=False, retries=self.max_retries, timeout=timeout, chunked=chunked, ) ../.tox/tests121/lib/python3.11/site-packages/requests/adapters.py:667: _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ ../.tox/tests121/lib/python3.11/site-packages/urllib3/connectionpool.py:843: in urlopen retries = retries.increment( _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ self = Retry(total=0, connect=None, read=False, redirect=None, status=None) method = 'DELETE' url = '/rests/data/network-topology:network-topology/topology=topology-netconf/node=XPDRA01' response = None error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused') _pool = _stacktrace = def increment( self, method: str | None = None, url: str | None = None, response: BaseHTTPResponse | None = None, error: Exception | None = None, _pool: ConnectionPool | None = None, _stacktrace: TracebackType | None = None, ) -> Self: """Return a new Retry object with incremented retry counters. :param response: A response object, or None, if the server did not return a response. :type response: :class:`~urllib3.response.BaseHTTPResponse` :param Exception error: An error encountered during the request, or None if the response was received successfully. :return: A new ``Retry`` object. """ if self.total is False and error: # Disabled, indicate to re-raise the error. raise reraise(type(error), error, _stacktrace) total = self.total if total is not None: total -= 1 connect = self.connect read = self.read redirect = self.redirect status_count = self.status other = self.other cause = "unknown" status = None redirect_location = None if error and self._is_connection_error(error): # Connect retry? if connect is False: raise reraise(type(error), error, _stacktrace) elif connect is not None: connect -= 1 elif error and self._is_read_error(error): # Read retry? if read is False or method is None or not self._is_method_retryable(method): raise reraise(type(error), error, _stacktrace) elif read is not None: read -= 1 elif error: # Other retry? if other is not None: other -= 1 elif response and response.get_redirect_location(): # Redirect retry? 
if redirect is not None: redirect -= 1 cause = "too many redirects" response_redirect_location = response.get_redirect_location() if response_redirect_location: redirect_location = response_redirect_location status = response.status else: # Incrementing because of a server error like a 500 in # status_forcelist and the given method is in the allowed_methods cause = ResponseError.GENERIC_ERROR if response and response.status: if status_count is not None: status_count -= 1 cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status) status = response.status history = self.history + ( RequestHistory(method, url, error, status, redirect_location), ) new_retry = self.new( total=total, connect=connect, read=read, redirect=redirect, status=status_count, other=other, history=history, ) if new_retry.is_exhausted(): reason = error or ResponseError(cause) > raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type] E urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=8182): Max retries exceeded with url: /rests/data/network-topology:network-topology/topology=topology-netconf/node=XPDRA01 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) ../.tox/tests121/lib/python3.11/site-packages/urllib3/util/retry.py:519: MaxRetryError During handling of the above exception, another exception occurred: self = def test_16_xpdr_device_disconnection(self): > response = test_utils.unmount_device("XPDRA01") transportpce_tests/1.2.1/test01_portmapping.py:191: _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ transportpce_tests/common/test_utils.py:360: in unmount_device response = delete_request(url[RESTCONF_VERSION].format('{}', node)) transportpce_tests/common/test_utils.py:133: in delete_request return requests.request( ../.tox/tests121/lib/python3.11/site-packages/requests/api.py:59: in request return session.request(method=method, url=url, **kwargs) ../.tox/tests121/lib/python3.11/site-packages/requests/sessions.py:589: in request resp = self.send(prep, **send_kwargs) ../.tox/tests121/lib/python3.11/site-packages/requests/sessions.py:703: in send r = adapter.send(request, **kwargs) _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ self = request = , stream = False timeout = Timeout(connect=10, read=10, total=None), verify = True, cert = None proxies = OrderedDict() def send( self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None ): """Sends PreparedRequest object. Returns Response object. :param request: The :class:`PreparedRequest ` being sent. :param stream: (optional) Whether to stream the request content. :param timeout: (optional) How long to wait for the server to send data before giving up, as a float, or a :ref:`(connect timeout, read timeout) ` tuple. :type timeout: float or tuple or urllib3 Timeout object :param verify: (optional) Either a boolean, in which case it controls whether we verify the server's TLS certificate, or a string, in which case it must be a path to a CA bundle to use :param cert: (optional) Any user-provided SSL certificate to be trusted. :param proxies: (optional) The proxies dictionary to apply to the request. 
:rtype: requests.Response """ try: conn = self.get_connection_with_tls_context( request, verify, proxies=proxies, cert=cert ) except LocationValueError as e: raise InvalidURL(e, request=request) self.cert_verify(conn, request.url, verify, cert) url = self.request_url(request, proxies) self.add_headers( request, stream=stream, timeout=timeout, verify=verify, cert=cert, proxies=proxies, ) chunked = not (request.body is None or "Content-Length" in request.headers) if isinstance(timeout, tuple): try: connect, read = timeout timeout = TimeoutSauce(connect=connect, read=read) except ValueError: raise ValueError( f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " f"or a single float to set both timeouts to the same value." ) elif isinstance(timeout, TimeoutSauce): pass else: timeout = TimeoutSauce(connect=timeout, read=timeout) try: resp = conn.urlopen( method=request.method, url=url, body=request.body, headers=request.headers, redirect=False, assert_same_host=False, preload_content=False, decode_content=False, retries=self.max_retries, timeout=timeout, chunked=chunked, ) except (ProtocolError, OSError) as err: raise ConnectionError(err, request=request) except MaxRetryError as e: if isinstance(e.reason, ConnectTimeoutError): # TODO: Remove this in 3.0.0: see #2811 if not isinstance(e.reason, NewConnectionError): raise ConnectTimeout(e, request=request) if isinstance(e.reason, ResponseError): raise RetryError(e, request=request) if isinstance(e.reason, _ProxyError): raise ProxyError(e, request=request) if isinstance(e.reason, _SSLError): # This branch is for urllib3 v1.22 and later. raise SSLError(e, request=request) > raise ConnectionError(e, request=request) E requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=8182): Max retries exceeded with url: /rests/data/network-topology:network-topology/topology=topology-netconf/node=XPDRA01 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) ../.tox/tests121/lib/python3.11/site-packages/requests/adapters.py:700: ConnectionError ----------------------------- Captured stdout call ----------------------------- execution of test_16_xpdr_device_disconnection _______ TransportPCEPortMappingTesting.test_17_xpdr_device_disconnected ________ self = def _new_conn(self) -> socket.socket: """Establish a socket connection and set nodelay settings on it. :return: New socket connection. """ try: > sock = connection.create_connection( (self._dns_host, self.port), self.timeout, source_address=self.source_address, socket_options=self.socket_options, ) ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:199: _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ ../.tox/tests121/lib/python3.11/site-packages/urllib3/util/connection.py:85: in create_connection raise err _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ address = ('localhost', 8182), timeout = 10, source_address = None socket_options = [(6, 1, 1)] def create_connection( address: tuple[str, int], timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, source_address: tuple[str, int] | None = None, socket_options: _TYPE_SOCKET_OPTIONS | None = None, ) -> socket.socket: """Connect to *address* and return the socket object. Convenience function. Connect to *address* (a 2-tuple ``(host, port)``) and return the socket object. Passing the optional *timeout* parameter will set the timeout on the socket instance before attempting to connect. 
If no *timeout* is supplied, the global default timeout setting returned by :func:`socket.getdefaulttimeout` is used. If *source_address* is set it must be a tuple of (host, port) for the socket to bind as a source address before making the connection. An host of '' or port 0 tells the OS to use the default. """ host, port = address if host.startswith("["): host = host.strip("[]") err = None # Using the value from allowed_gai_family() in the context of getaddrinfo lets # us select whether to work with IPv4 DNS records, IPv6 records, or both. # The original create_connection function always returns all records. family = allowed_gai_family() try: host.encode("idna") except UnicodeError: raise LocationParseError(f"'{host}', label empty or too long") from None for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM): af, socktype, proto, canonname, sa = res sock = None try: sock = socket.socket(af, socktype, proto) # If provided, set socket level options before connecting. _set_socket_options(sock, socket_options) if timeout is not _DEFAULT_TIMEOUT: sock.settimeout(timeout) if source_address: sock.bind(source_address) > sock.connect(sa) E ConnectionRefusedError: [Errno 111] Connection refused ../.tox/tests121/lib/python3.11/site-packages/urllib3/util/connection.py:73: ConnectionRefusedError The above exception was the direct cause of the following exception: self = method = 'GET' url = '/rests/data/network-topology:network-topology/topology=topology-netconf/node=XPDRA01?content=nonconfig' body = None headers = {'User-Agent': 'python-requests/2.32.3', 'Accept-Encoding': 'gzip, deflate', 'Accept': 'application/json', 'Connection': 'keep-alive', 'Content-Type': 'application/json', 'Authorization': 'Basic YWRtaW46YWRtaW4='} retries = Retry(total=0, connect=None, read=False, redirect=None, status=None) redirect = False, assert_same_host = False timeout = Timeout(connect=10, read=10, total=None), pool_timeout = None release_conn = False, chunked = False, body_pos = None, preload_content = False decode_content = False, response_kw = {} parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/rests/data/network-topology:network-topology/topology=topology-netconf/node=XPDRA01', query='content=nonconfig', fragment=None) destination_scheme = None, conn = None, release_this_conn = True http_tunnel_required = False, err = None, clean_exit = False def urlopen( # type: ignore[override] self, method: str, url: str, body: _TYPE_BODY | None = None, headers: typing.Mapping[str, str] | None = None, retries: Retry | bool | int | None = None, redirect: bool = True, assert_same_host: bool = True, timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, pool_timeout: int | None = None, release_conn: bool | None = None, chunked: bool = False, body_pos: _TYPE_BODY_POSITION | None = None, preload_content: bool = True, decode_content: bool = True, **response_kw: typing.Any, ) -> BaseHTTPResponse: """ Get a connection from the pool and perform an HTTP request. This is the lowest level call for making a request, so you'll need to specify all the raw details. .. note:: More commonly, it's appropriate to use a convenience method such as :meth:`request`. .. note:: `release_conn` will only behave as expected if `preload_content=False` because we want to make `preload_content=False` the default behaviour someday soon without breaking backwards compatibility. :param method: HTTP request method (such as GET, POST, PUT, etc.) :param url: The URL to perform the request on. 
:param body: Data to send in the request body, either :class:`str`, :class:`bytes`, an iterable of :class:`str`/:class:`bytes`, or a file-like object. :param headers: Dictionary of custom headers to send, such as User-Agent, If-None-Match, etc. If None, pool headers are used. If provided, these headers completely replace any pool-specific headers. :param retries: Configure the number of retries to allow before raising a :class:`~urllib3.exceptions.MaxRetryError` exception. If ``None`` (default) will retry 3 times, see ``Retry.DEFAULT``. Pass a :class:`~urllib3.util.retry.Retry` object for fine-grained control over different types of retries. Pass an integer number to retry connection errors that many times, but no other types of errors. Pass zero to never retry. If ``False``, then retries are disabled and any exception is raised immediately. Also, instead of raising a MaxRetryError on redirects, the redirect response will be returned. :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. :param redirect: If True, automatically handle redirects (status codes 301, 302, 303, 307, 308). Each redirect counts as a retry. Disabling retries will disable redirect, too. :param assert_same_host: If ``True``, will make sure that the host of the pool requests is consistent else will raise HostChangedError. When ``False``, you can use the pool on an HTTP proxy and request foreign hosts. :param timeout: If specified, overrides the default timeout for this one request. It may be a float (in seconds) or an instance of :class:`urllib3.util.Timeout`. :param pool_timeout: If set and the pool is set to block=True, then this method will block for ``pool_timeout`` seconds and raise EmptyPoolError if no connection is available within the time period. :param bool preload_content: If True, the response's body will be preloaded into memory. :param bool decode_content: If True, will attempt to decode the body based on the 'content-encoding' header. :param release_conn: If False, then the urlopen call will not release the connection back into the pool once a response is received (but will release if you read the entire contents of the response such as when `preload_content=True`). This is useful if you're not preloading the response's content immediately. You will need to call ``r.release_conn()`` on the response ``r`` to return the connection back into the pool. If None, it takes the value of ``preload_content`` which defaults to ``True``. :param bool chunked: If True, urllib3 will send the body using chunked transfer encoding. Otherwise, urllib3 will send the body using the standard content-length form. Defaults to False. :param int body_pos: Position to seek to in file-like body in the event of a retry or redirect. Typically this won't need to be set because urllib3 will auto-populate the value when needed. """ parsed_url = parse_url(url) destination_scheme = parsed_url.scheme if headers is None: headers = self.headers if not isinstance(retries, Retry): retries = Retry.from_int(retries, redirect=redirect, default=self.retries) if release_conn is None: release_conn = preload_content # Check host if assert_same_host and not self.is_same_host(url): raise HostChangedError(self, url, retries) # Ensure that the URL we're connecting to is properly encoded if url.startswith("/"): url = to_str(_encode_target(url)) else: url = to_str(parsed_url.url) conn = None # Track whether `conn` needs to be released before # returning/raising/recursing. 
Update this variable if necessary, and # leave `release_conn` constant throughout the function. That way, if # the function recurses, the original value of `release_conn` will be # passed down into the recursive call, and its value will be respected. # # See issue #651 [1] for details. # # [1] release_this_conn = release_conn http_tunnel_required = connection_requires_http_tunnel( self.proxy, self.proxy_config, destination_scheme ) # Merge the proxy headers. Only done when not using HTTP CONNECT. We # have to copy the headers dict so we can safely change it without those # changes being reflected in anyone else's copy. if not http_tunnel_required: headers = headers.copy() # type: ignore[attr-defined] headers.update(self.proxy_headers) # type: ignore[union-attr] # Must keep the exception bound to a separate variable or else Python 3 # complains about UnboundLocalError. err = None # Keep track of whether we cleanly exited the except block. This # ensures we do proper cleanup in finally. clean_exit = False # Rewind body position, if needed. Record current position # for future rewinds in the event of a redirect/retry. body_pos = set_file_position(body, body_pos) try: # Request a connection from the queue. timeout_obj = self._get_timeout(timeout) conn = self._get_conn(timeout=pool_timeout) conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] # Is this a closed/new connection that requires CONNECT tunnelling? if self.proxy is not None and http_tunnel_required and conn.is_closed: try: self._prepare_proxy(conn) except (BaseSSLError, OSError, SocketTimeout) as e: self._raise_timeout( err=e, url=self.proxy.url, timeout_value=conn.timeout ) raise # If we're going to release the connection in ``finally:``, then # the response doesn't need to know about the connection. Otherwise # it will also try to release it and we'll have a double-release # mess. response_conn = conn if not release_conn else None # Make the request on the HTTPConnection object > response = self._make_request( conn, method, url, timeout=timeout_obj, body=body, headers=headers, chunked=chunked, retries=retries, response_conn=response_conn, preload_content=preload_content, decode_content=decode_content, **response_kw, ) ../.tox/tests121/lib/python3.11/site-packages/urllib3/connectionpool.py:789: _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ ../.tox/tests121/lib/python3.11/site-packages/urllib3/connectionpool.py:495: in _make_request conn.request( ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:441: in request self.endheaders() /opt/pyenv/versions/3.11.7/lib/python3.11/http/client.py:1289: in endheaders self._send_output(message_body, encode_chunked=encode_chunked) /opt/pyenv/versions/3.11.7/lib/python3.11/http/client.py:1048: in _send_output self.send(msg) /opt/pyenv/versions/3.11.7/lib/python3.11/http/client.py:986: in send self.connect() ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:279: in connect self.sock = self._new_conn() _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ self = def _new_conn(self) -> socket.socket: """Establish a socket connection and set nodelay settings on it. :return: New socket connection. 
""" try: sock = connection.create_connection( (self._dns_host, self.port), self.timeout, source_address=self.source_address, socket_options=self.socket_options, ) except socket.gaierror as e: raise NameResolutionError(self.host, self, e) from e except SocketTimeout as e: raise ConnectTimeoutError( self, f"Connection to {self.host} timed out. (connect timeout={self.timeout})", ) from e except OSError as e: > raise NewConnectionError( self, f"Failed to establish a new connection: {e}" ) from e E urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:214: NewConnectionError The above exception was the direct cause of the following exception: self = request = , stream = False timeout = Timeout(connect=10, read=10, total=None), verify = True, cert = None proxies = OrderedDict() def send( self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None ): """Sends PreparedRequest object. Returns Response object. :param request: The :class:`PreparedRequest ` being sent. :param stream: (optional) Whether to stream the request content. :param timeout: (optional) How long to wait for the server to send data before giving up, as a float, or a :ref:`(connect timeout, read timeout) ` tuple. :type timeout: float or tuple or urllib3 Timeout object :param verify: (optional) Either a boolean, in which case it controls whether we verify the server's TLS certificate, or a string, in which case it must be a path to a CA bundle to use :param cert: (optional) Any user-provided SSL certificate to be trusted. :param proxies: (optional) The proxies dictionary to apply to the request. :rtype: requests.Response """ try: conn = self.get_connection_with_tls_context( request, verify, proxies=proxies, cert=cert ) except LocationValueError as e: raise InvalidURL(e, request=request) self.cert_verify(conn, request.url, verify, cert) url = self.request_url(request, proxies) self.add_headers( request, stream=stream, timeout=timeout, verify=verify, cert=cert, proxies=proxies, ) chunked = not (request.body is None or "Content-Length" in request.headers) if isinstance(timeout, tuple): try: connect, read = timeout timeout = TimeoutSauce(connect=connect, read=read) except ValueError: raise ValueError( f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " f"or a single float to set both timeouts to the same value." 
) elif isinstance(timeout, TimeoutSauce): pass else: timeout = TimeoutSauce(connect=timeout, read=timeout) try: > resp = conn.urlopen( method=request.method, url=url, body=request.body, headers=request.headers, redirect=False, assert_same_host=False, preload_content=False, decode_content=False, retries=self.max_retries, timeout=timeout, chunked=chunked, ) ../.tox/tests121/lib/python3.11/site-packages/requests/adapters.py:667: _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ ../.tox/tests121/lib/python3.11/site-packages/urllib3/connectionpool.py:843: in urlopen retries = retries.increment( _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ self = Retry(total=0, connect=None, read=False, redirect=None, status=None) method = 'GET' url = '/rests/data/network-topology:network-topology/topology=topology-netconf/node=XPDRA01?content=nonconfig' response = None error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused') _pool = _stacktrace = def increment( self, method: str | None = None, url: str | None = None, response: BaseHTTPResponse | None = None, error: Exception | None = None, _pool: ConnectionPool | None = None, _stacktrace: TracebackType | None = None, ) -> Self: """Return a new Retry object with incremented retry counters. :param response: A response object, or None, if the server did not return a response. :type response: :class:`~urllib3.response.BaseHTTPResponse` :param Exception error: An error encountered during the request, or None if the response was received successfully. :return: A new ``Retry`` object. """ if self.total is False and error: # Disabled, indicate to re-raise the error. raise reraise(type(error), error, _stacktrace) total = self.total if total is not None: total -= 1 connect = self.connect read = self.read redirect = self.redirect status_count = self.status other = self.other cause = "unknown" status = None redirect_location = None if error and self._is_connection_error(error): # Connect retry? if connect is False: raise reraise(type(error), error, _stacktrace) elif connect is not None: connect -= 1 elif error and self._is_read_error(error): # Read retry? if read is False or method is None or not self._is_method_retryable(method): raise reraise(type(error), error, _stacktrace) elif read is not None: read -= 1 elif error: # Other retry? if other is not None: other -= 1 elif response and response.get_redirect_location(): # Redirect retry? 
if redirect is not None: redirect -= 1 cause = "too many redirects" response_redirect_location = response.get_redirect_location() if response_redirect_location: redirect_location = response_redirect_location status = response.status else: # Incrementing because of a server error like a 500 in # status_forcelist and the given method is in the allowed_methods cause = ResponseError.GENERIC_ERROR if response and response.status: if status_count is not None: status_count -= 1 cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status) status = response.status history = self.history + ( RequestHistory(method, url, error, status, redirect_location), ) new_retry = self.new( total=total, connect=connect, read=read, redirect=redirect, status=status_count, other=other, history=history, ) if new_retry.is_exhausted(): reason = error or ResponseError(cause) > raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type] E urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=8182): Max retries exceeded with url: /rests/data/network-topology:network-topology/topology=topology-netconf/node=XPDRA01?content=nonconfig (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) ../.tox/tests121/lib/python3.11/site-packages/urllib3/util/retry.py:519: MaxRetryError During handling of the above exception, another exception occurred: self = def test_17_xpdr_device_disconnected(self): > response = test_utils.check_device_connection("XPDRA01") transportpce_tests/1.2.1/test01_portmapping.py:195: _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ transportpce_tests/common/test_utils.py:371: in check_device_connection response = get_request(url[RESTCONF_VERSION].format('{}', node)) transportpce_tests/common/test_utils.py:116: in get_request return requests.request( ../.tox/tests121/lib/python3.11/site-packages/requests/api.py:59: in request return session.request(method=method, url=url, **kwargs) ../.tox/tests121/lib/python3.11/site-packages/requests/sessions.py:589: in request resp = self.send(prep, **send_kwargs) ../.tox/tests121/lib/python3.11/site-packages/requests/sessions.py:703: in send r = adapter.send(request, **kwargs) _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ self = request = , stream = False timeout = Timeout(connect=10, read=10, total=None), verify = True, cert = None proxies = OrderedDict() def send( self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None ): """Sends PreparedRequest object. Returns Response object. :param request: The :class:`PreparedRequest ` being sent. :param stream: (optional) Whether to stream the request content. :param timeout: (optional) How long to wait for the server to send data before giving up, as a float, or a :ref:`(connect timeout, read timeout) ` tuple. :type timeout: float or tuple or urllib3 Timeout object :param verify: (optional) Either a boolean, in which case it controls whether we verify the server's TLS certificate, or a string, in which case it must be a path to a CA bundle to use :param cert: (optional) Any user-provided SSL certificate to be trusted. :param proxies: (optional) The proxies dictionary to apply to the request. 
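
The failure above reduces to a plain RESTCONF GET that test_utils.check_device_connection("XPDRA01") issues against the controller on localhost:8182. A minimal standalone sketch of that request, reusing the URL, JSON headers and the 10 s connect/read timeouts shown in the traceback (the admin/admin credentials are an assumption, decoded from the Basic Authorization header that appears later in this log); with nothing listening on port 8182 it raises the same requests.exceptions.ConnectionError:

    import requests

    # URL, port and query taken from the MaxRetryError above; credentials assumed.
    URL = ("http://localhost:8182/rests/data/network-topology:network-topology/"
           "topology=topology-netconf/node=XPDRA01?content=nonconfig")

    response = requests.get(
        URL,
        headers={"Content-Type": "application/json", "Accept": "application/json"},
        auth=("admin", "admin"),
        timeout=(10, 10),   # (connect, read), matching Timeout(connect=10, read=10)
    )
    print(response.status_code)
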
:rtype: requests.Response """ try: conn = self.get_connection_with_tls_context( request, verify, proxies=proxies, cert=cert ) except LocationValueError as e: raise InvalidURL(e, request=request) self.cert_verify(conn, request.url, verify, cert) url = self.request_url(request, proxies) self.add_headers( request, stream=stream, timeout=timeout, verify=verify, cert=cert, proxies=proxies, ) chunked = not (request.body is None or "Content-Length" in request.headers) if isinstance(timeout, tuple): try: connect, read = timeout timeout = TimeoutSauce(connect=connect, read=read) except ValueError: raise ValueError( f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " f"or a single float to set both timeouts to the same value." ) elif isinstance(timeout, TimeoutSauce): pass else: timeout = TimeoutSauce(connect=timeout, read=timeout) try: resp = conn.urlopen( method=request.method, url=url, body=request.body, headers=request.headers, redirect=False, assert_same_host=False, preload_content=False, decode_content=False, retries=self.max_retries, timeout=timeout, chunked=chunked, ) except (ProtocolError, OSError) as err: raise ConnectionError(err, request=request) except MaxRetryError as e: if isinstance(e.reason, ConnectTimeoutError): # TODO: Remove this in 3.0.0: see #2811 if not isinstance(e.reason, NewConnectionError): raise ConnectTimeout(e, request=request) if isinstance(e.reason, ResponseError): raise RetryError(e, request=request) if isinstance(e.reason, _ProxyError): raise ProxyError(e, request=request) if isinstance(e.reason, _SSLError): # This branch is for urllib3 v1.22 and later. raise SSLError(e, request=request) > raise ConnectionError(e, request=request) E requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=8182): Max retries exceeded with url: /rests/data/network-topology:network-topology/topology=topology-netconf/node=XPDRA01?content=nonconfig (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) ../.tox/tests121/lib/python3.11/site-packages/requests/adapters.py:700: ConnectionError ----------------------------- Captured stdout call ----------------------------- execution of test_17_xpdr_device_disconnected _______ TransportPCEPortMappingTesting.test_18_xpdr_device_not_connected _______ self = def _new_conn(self) -> socket.socket: """Establish a socket connection and set nodelay settings on it. :return: New socket connection. """ try: > sock = connection.create_connection( (self._dns_host, self.port), self.timeout, source_address=self.source_address, socket_options=self.socket_options, ) ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:199: _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ ../.tox/tests121/lib/python3.11/site-packages/urllib3/util/connection.py:85: in create_connection raise err _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ address = ('localhost', 8182), timeout = 10, source_address = None socket_options = [(6, 1, 1)] def create_connection( address: tuple[str, int], timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, source_address: tuple[str, int] | None = None, socket_options: _TYPE_SOCKET_OPTIONS | None = None, ) -> socket.socket: """Connect to *address* and return the socket object. Convenience function. Connect to *address* (a 2-tuple ``(host, port)``) and return the socket object. Passing the optional *timeout* parameter will set the timeout on the socket instance before attempting to connect. 
If no *timeout* is supplied, the global default timeout setting returned by :func:`socket.getdefaulttimeout` is used. If *source_address* is set it must be a tuple of (host, port) for the socket to bind as a source address before making the connection. An host of '' or port 0 tells the OS to use the default. """ host, port = address if host.startswith("["): host = host.strip("[]") err = None # Using the value from allowed_gai_family() in the context of getaddrinfo lets # us select whether to work with IPv4 DNS records, IPv6 records, or both. # The original create_connection function always returns all records. family = allowed_gai_family() try: host.encode("idna") except UnicodeError: raise LocationParseError(f"'{host}', label empty or too long") from None for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM): af, socktype, proto, canonname, sa = res sock = None try: sock = socket.socket(af, socktype, proto) # If provided, set socket level options before connecting. _set_socket_options(sock, socket_options) if timeout is not _DEFAULT_TIMEOUT: sock.settimeout(timeout) if source_address: sock.bind(source_address) > sock.connect(sa) E ConnectionRefusedError: [Errno 111] Connection refused ../.tox/tests121/lib/python3.11/site-packages/urllib3/util/connection.py:73: ConnectionRefusedError The above exception was the direct cause of the following exception: self = method = 'GET' url = '/rests/data/transportpce-portmapping:network/nodes=XPDRA01/node-info' body = None headers = {'User-Agent': 'python-requests/2.32.3', 'Accept-Encoding': 'gzip, deflate', 'Accept': 'application/json', 'Connection': 'keep-alive', 'Content-Type': 'application/json', 'Authorization': 'Basic YWRtaW46YWRtaW4='} retries = Retry(total=0, connect=None, read=False, redirect=None, status=None) redirect = False, assert_same_host = False timeout = Timeout(connect=10, read=10, total=None), pool_timeout = None release_conn = False, chunked = False, body_pos = None, preload_content = False decode_content = False, response_kw = {} parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/rests/data/transportpce-portmapping:network/nodes=XPDRA01/node-info', query=None, fragment=None) destination_scheme = None, conn = None, release_this_conn = True http_tunnel_required = False, err = None, clean_exit = False def urlopen( # type: ignore[override] self, method: str, url: str, body: _TYPE_BODY | None = None, headers: typing.Mapping[str, str] | None = None, retries: Retry | bool | int | None = None, redirect: bool = True, assert_same_host: bool = True, timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, pool_timeout: int | None = None, release_conn: bool | None = None, chunked: bool = False, body_pos: _TYPE_BODY_POSITION | None = None, preload_content: bool = True, decode_content: bool = True, **response_kw: typing.Any, ) -> BaseHTTPResponse: """ Get a connection from the pool and perform an HTTP request. This is the lowest level call for making a request, so you'll need to specify all the raw details. .. note:: More commonly, it's appropriate to use a convenience method such as :meth:`request`. .. note:: `release_conn` will only behave as expected if `preload_content=False` because we want to make `preload_content=False` the default behaviour someday soon without breaking backwards compatibility. :param method: HTTP request method (such as GET, POST, PUT, etc.) :param url: The URL to perform the request on. 
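
The Retry(total=0, connect=None, read=False, redirect=None, status=None) object shown in the call above is what requests' default HTTPAdapter uses, so the refused connection is reported immediately as MaxRetryError instead of being retried. A sketch of configuring that same policy explicitly, using only public requests/urllib3 APIs (the explicit mount is for illustration; the default adapter already behaves this way):

    import requests
    from requests.adapters import HTTPAdapter
    from urllib3.util.retry import Retry

    session = requests.Session()
    # Mirrors the retry object printed in the traceback: retries are exhausted
    # at once, so a connection refusal surfaces as MaxRetryError right away.
    retry = Retry(total=0, connect=None, read=False, redirect=None, status=None)
    session.mount("http://", HTTPAdapter(max_retries=retry))
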
:param body: Data to send in the request body, either :class:`str`, :class:`bytes`, an iterable of :class:`str`/:class:`bytes`, or a file-like object. :param headers: Dictionary of custom headers to send, such as User-Agent, If-None-Match, etc. If None, pool headers are used. If provided, these headers completely replace any pool-specific headers. :param retries: Configure the number of retries to allow before raising a :class:`~urllib3.exceptions.MaxRetryError` exception. If ``None`` (default) will retry 3 times, see ``Retry.DEFAULT``. Pass a :class:`~urllib3.util.retry.Retry` object for fine-grained control over different types of retries. Pass an integer number to retry connection errors that many times, but no other types of errors. Pass zero to never retry. If ``False``, then retries are disabled and any exception is raised immediately. Also, instead of raising a MaxRetryError on redirects, the redirect response will be returned. :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. :param redirect: If True, automatically handle redirects (status codes 301, 302, 303, 307, 308). Each redirect counts as a retry. Disabling retries will disable redirect, too. :param assert_same_host: If ``True``, will make sure that the host of the pool requests is consistent else will raise HostChangedError. When ``False``, you can use the pool on an HTTP proxy and request foreign hosts. :param timeout: If specified, overrides the default timeout for this one request. It may be a float (in seconds) or an instance of :class:`urllib3.util.Timeout`. :param pool_timeout: If set and the pool is set to block=True, then this method will block for ``pool_timeout`` seconds and raise EmptyPoolError if no connection is available within the time period. :param bool preload_content: If True, the response's body will be preloaded into memory. :param bool decode_content: If True, will attempt to decode the body based on the 'content-encoding' header. :param release_conn: If False, then the urlopen call will not release the connection back into the pool once a response is received (but will release if you read the entire contents of the response such as when `preload_content=True`). This is useful if you're not preloading the response's content immediately. You will need to call ``r.release_conn()`` on the response ``r`` to return the connection back into the pool. If None, it takes the value of ``preload_content`` which defaults to ``True``. :param bool chunked: If True, urllib3 will send the body using chunked transfer encoding. Otherwise, urllib3 will send the body using the standard content-length form. Defaults to False. :param int body_pos: Position to seek to in file-like body in the event of a retry or redirect. Typically this won't need to be set because urllib3 will auto-populate the value when needed. """ parsed_url = parse_url(url) destination_scheme = parsed_url.scheme if headers is None: headers = self.headers if not isinstance(retries, Retry): retries = Retry.from_int(retries, redirect=redirect, default=self.retries) if release_conn is None: release_conn = preload_content # Check host if assert_same_host and not self.is_same_host(url): raise HostChangedError(self, url, retries) # Ensure that the URL we're connecting to is properly encoded if url.startswith("/"): url = to_str(_encode_target(url)) else: url = to_str(parsed_url.url) conn = None # Track whether `conn` needs to be released before # returning/raising/recursing. 
Update this variable if necessary, and # leave `release_conn` constant throughout the function. That way, if # the function recurses, the original value of `release_conn` will be # passed down into the recursive call, and its value will be respected. # # See issue #651 [1] for details. # # [1] release_this_conn = release_conn http_tunnel_required = connection_requires_http_tunnel( self.proxy, self.proxy_config, destination_scheme ) # Merge the proxy headers. Only done when not using HTTP CONNECT. We # have to copy the headers dict so we can safely change it without those # changes being reflected in anyone else's copy. if not http_tunnel_required: headers = headers.copy() # type: ignore[attr-defined] headers.update(self.proxy_headers) # type: ignore[union-attr] # Must keep the exception bound to a separate variable or else Python 3 # complains about UnboundLocalError. err = None # Keep track of whether we cleanly exited the except block. This # ensures we do proper cleanup in finally. clean_exit = False # Rewind body position, if needed. Record current position # for future rewinds in the event of a redirect/retry. body_pos = set_file_position(body, body_pos) try: # Request a connection from the queue. timeout_obj = self._get_timeout(timeout) conn = self._get_conn(timeout=pool_timeout) conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] # Is this a closed/new connection that requires CONNECT tunnelling? if self.proxy is not None and http_tunnel_required and conn.is_closed: try: self._prepare_proxy(conn) except (BaseSSLError, OSError, SocketTimeout) as e: self._raise_timeout( err=e, url=self.proxy.url, timeout_value=conn.timeout ) raise # If we're going to release the connection in ``finally:``, then # the response doesn't need to know about the connection. Otherwise # it will also try to release it and we'll have a double-release # mess. response_conn = conn if not release_conn else None # Make the request on the HTTPConnection object > response = self._make_request( conn, method, url, timeout=timeout_obj, body=body, headers=headers, chunked=chunked, retries=retries, response_conn=response_conn, preload_content=preload_content, decode_content=decode_content, **response_kw, ) ../.tox/tests121/lib/python3.11/site-packages/urllib3/connectionpool.py:789: _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ ../.tox/tests121/lib/python3.11/site-packages/urllib3/connectionpool.py:495: in _make_request conn.request( ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:441: in request self.endheaders() /opt/pyenv/versions/3.11.7/lib/python3.11/http/client.py:1289: in endheaders self._send_output(message_body, encode_chunked=encode_chunked) /opt/pyenv/versions/3.11.7/lib/python3.11/http/client.py:1048: in _send_output self.send(msg) /opt/pyenv/versions/3.11.7/lib/python3.11/http/client.py:986: in send self.connect() ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:279: in connect self.sock = self._new_conn() _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ self = def _new_conn(self) -> socket.socket: """Establish a socket connection and set nodelay settings on it. :return: New socket connection. 
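
At the lowest level the error is a plain TCP connect refusal: nothing is listening on localhost:8182 while these final tests run. A minimal socket-level check mirroring what urllib3's create_connection() does here (the [(6, 1, 1)] socket option in the traceback is (IPPROTO_TCP, TCP_NODELAY, 1)):

    import socket

    try:
        # Same target and timeout as in the traceback; succeeds only if the
        # controller's RESTCONF endpoint is actually up.
        sock = socket.create_connection(("localhost", 8182), timeout=10)
        sock.close()
        print("controller is listening")
    except ConnectionRefusedError as exc:
        # [Errno 111] Connection refused, which urllib3 wraps in NewConnectionError.
        print(f"refused: {exc}")
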
""" try: sock = connection.create_connection( (self._dns_host, self.port), self.timeout, source_address=self.source_address, socket_options=self.socket_options, ) except socket.gaierror as e: raise NameResolutionError(self.host, self, e) from e except SocketTimeout as e: raise ConnectTimeoutError( self, f"Connection to {self.host} timed out. (connect timeout={self.timeout})", ) from e except OSError as e: > raise NewConnectionError( self, f"Failed to establish a new connection: {e}" ) from e E urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:214: NewConnectionError The above exception was the direct cause of the following exception: self = request = , stream = False timeout = Timeout(connect=10, read=10, total=None), verify = True, cert = None proxies = OrderedDict() def send( self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None ): """Sends PreparedRequest object. Returns Response object. :param request: The :class:`PreparedRequest ` being sent. :param stream: (optional) Whether to stream the request content. :param timeout: (optional) How long to wait for the server to send data before giving up, as a float, or a :ref:`(connect timeout, read timeout) ` tuple. :type timeout: float or tuple or urllib3 Timeout object :param verify: (optional) Either a boolean, in which case it controls whether we verify the server's TLS certificate, or a string, in which case it must be a path to a CA bundle to use :param cert: (optional) Any user-provided SSL certificate to be trusted. :param proxies: (optional) The proxies dictionary to apply to the request. :rtype: requests.Response """ try: conn = self.get_connection_with_tls_context( request, verify, proxies=proxies, cert=cert ) except LocationValueError as e: raise InvalidURL(e, request=request) self.cert_verify(conn, request.url, verify, cert) url = self.request_url(request, proxies) self.add_headers( request, stream=stream, timeout=timeout, verify=verify, cert=cert, proxies=proxies, ) chunked = not (request.body is None or "Content-Length" in request.headers) if isinstance(timeout, tuple): try: connect, read = timeout timeout = TimeoutSauce(connect=connect, read=read) except ValueError: raise ValueError( f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " f"or a single float to set both timeouts to the same value." 
) elif isinstance(timeout, TimeoutSauce): pass else: timeout = TimeoutSauce(connect=timeout, read=timeout) try: > resp = conn.urlopen( method=request.method, url=url, body=request.body, headers=request.headers, redirect=False, assert_same_host=False, preload_content=False, decode_content=False, retries=self.max_retries, timeout=timeout, chunked=chunked, ) ../.tox/tests121/lib/python3.11/site-packages/requests/adapters.py:667: _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ ../.tox/tests121/lib/python3.11/site-packages/urllib3/connectionpool.py:843: in urlopen retries = retries.increment( _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ self = Retry(total=0, connect=None, read=False, redirect=None, status=None) method = 'GET' url = '/rests/data/transportpce-portmapping:network/nodes=XPDRA01/node-info' response = None error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused') _pool = _stacktrace = def increment( self, method: str | None = None, url: str | None = None, response: BaseHTTPResponse | None = None, error: Exception | None = None, _pool: ConnectionPool | None = None, _stacktrace: TracebackType | None = None, ) -> Self: """Return a new Retry object with incremented retry counters. :param response: A response object, or None, if the server did not return a response. :type response: :class:`~urllib3.response.BaseHTTPResponse` :param Exception error: An error encountered during the request, or None if the response was received successfully. :return: A new ``Retry`` object. """ if self.total is False and error: # Disabled, indicate to re-raise the error. raise reraise(type(error), error, _stacktrace) total = self.total if total is not None: total -= 1 connect = self.connect read = self.read redirect = self.redirect status_count = self.status other = self.other cause = "unknown" status = None redirect_location = None if error and self._is_connection_error(error): # Connect retry? if connect is False: raise reraise(type(error), error, _stacktrace) elif connect is not None: connect -= 1 elif error and self._is_read_error(error): # Read retry? if read is False or method is None or not self._is_method_retryable(method): raise reraise(type(error), error, _stacktrace) elif read is not None: read -= 1 elif error: # Other retry? if other is not None: other -= 1 elif response and response.get_redirect_location(): # Redirect retry? 
if redirect is not None: redirect -= 1 cause = "too many redirects" response_redirect_location = response.get_redirect_location() if response_redirect_location: redirect_location = response_redirect_location status = response.status else: # Incrementing because of a server error like a 500 in # status_forcelist and the given method is in the allowed_methods cause = ResponseError.GENERIC_ERROR if response and response.status: if status_count is not None: status_count -= 1 cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status) status = response.status history = self.history + ( RequestHistory(method, url, error, status, redirect_location), ) new_retry = self.new( total=total, connect=connect, read=read, redirect=redirect, status=status_count, other=other, history=history, ) if new_retry.is_exhausted(): reason = error or ResponseError(cause) > raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type] E urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=8182): Max retries exceeded with url: /rests/data/transportpce-portmapping:network/nodes=XPDRA01/node-info (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) ../.tox/tests121/lib/python3.11/site-packages/urllib3/util/retry.py:519: MaxRetryError During handling of the above exception, another exception occurred: self = def test_18_xpdr_device_not_connected(self): > response = test_utils.get_portmapping_node_attr("XPDRA01", "node-info", None) transportpce_tests/1.2.1/test01_portmapping.py:203: _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ transportpce_tests/common/test_utils.py:473: in get_portmapping_node_attr response = get_request(target_url) transportpce_tests/common/test_utils.py:116: in get_request return requests.request( ../.tox/tests121/lib/python3.11/site-packages/requests/api.py:59: in request return session.request(method=method, url=url, **kwargs) ../.tox/tests121/lib/python3.11/site-packages/requests/sessions.py:589: in request resp = self.send(prep, **send_kwargs) ../.tox/tests121/lib/python3.11/site-packages/requests/sessions.py:703: in send r = adapter.send(request, **kwargs) _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ self = request = , stream = False timeout = Timeout(connect=10, read=10, total=None), verify = True, cert = None proxies = OrderedDict() def send( self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None ): """Sends PreparedRequest object. Returns Response object. :param request: The :class:`PreparedRequest ` being sent. :param stream: (optional) Whether to stream the request content. :param timeout: (optional) How long to wait for the server to send data before giving up, as a float, or a :ref:`(connect timeout, read timeout) ` tuple. :type timeout: float or tuple or urllib3 Timeout object :param verify: (optional) Either a boolean, in which case it controls whether we verify the server's TLS certificate, or a string, in which case it must be a path to a CA bundle to use :param cert: (optional) Any user-provided SSL certificate to be trusted. :param proxies: (optional) The proxies dictionary to apply to the request. 
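
As the adapter code above shows, requests converts urllib3's MaxRetryError wrapping a NewConnectionError into requests.exceptions.ConnectionError. A hypothetical wrapper around the node-info GET of test_18 that catches exactly that case (URL taken from the traceback; the helper name and credentials are invented for illustration):

    import requests

    URL = ("http://localhost:8182/rests/data/"
           "transportpce-portmapping:network/nodes=XPDRA01/node-info")

    def get_node_info():
        try:
            return requests.get(URL, auth=("admin", "admin"), timeout=(10, 10))
        except requests.exceptions.ConnectionError as exc:
            # Raised by HTTPAdapter.send() when the connection is refused,
            # as in the traceback above; the controller is simply unreachable.
            print(f"controller unreachable: {exc}")
            return None
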
:rtype: requests.Response """ try: conn = self.get_connection_with_tls_context( request, verify, proxies=proxies, cert=cert ) except LocationValueError as e: raise InvalidURL(e, request=request) self.cert_verify(conn, request.url, verify, cert) url = self.request_url(request, proxies) self.add_headers( request, stream=stream, timeout=timeout, verify=verify, cert=cert, proxies=proxies, ) chunked = not (request.body is None or "Content-Length" in request.headers) if isinstance(timeout, tuple): try: connect, read = timeout timeout = TimeoutSauce(connect=connect, read=read) except ValueError: raise ValueError( f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " f"or a single float to set both timeouts to the same value." ) elif isinstance(timeout, TimeoutSauce): pass else: timeout = TimeoutSauce(connect=timeout, read=timeout) try: resp = conn.urlopen( method=request.method, url=url, body=request.body, headers=request.headers, redirect=False, assert_same_host=False, preload_content=False, decode_content=False, retries=self.max_retries, timeout=timeout, chunked=chunked, ) except (ProtocolError, OSError) as err: raise ConnectionError(err, request=request) except MaxRetryError as e: if isinstance(e.reason, ConnectTimeoutError): # TODO: Remove this in 3.0.0: see #2811 if not isinstance(e.reason, NewConnectionError): raise ConnectTimeout(e, request=request) if isinstance(e.reason, ResponseError): raise RetryError(e, request=request) if isinstance(e.reason, _ProxyError): raise ProxyError(e, request=request) if isinstance(e.reason, _SSLError): # This branch is for urllib3 v1.22 and later. raise SSLError(e, request=request) > raise ConnectionError(e, request=request) E requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=8182): Max retries exceeded with url: /rests/data/transportpce-portmapping:network/nodes=XPDRA01/node-info (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) ../.tox/tests121/lib/python3.11/site-packages/requests/adapters.py:700: ConnectionError ----------------------------- Captured stdout call ----------------------------- execution of test_18_xpdr_device_not_connected _______ TransportPCEPortMappingTesting.test_19_rdm_device_disconnection ________ self = def _new_conn(self) -> socket.socket: """Establish a socket connection and set nodelay settings on it. :return: New socket connection. """ try: > sock = connection.create_connection( (self._dns_host, self.port), self.timeout, source_address=self.source_address, socket_options=self.socket_options, ) ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:199: _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ ../.tox/tests121/lib/python3.11/site-packages/urllib3/util/connection.py:85: in create_connection raise err _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ address = ('localhost', 8182), timeout = 10, source_address = None socket_options = [(6, 1, 1)] def create_connection( address: tuple[str, int], timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, source_address: tuple[str, int] | None = None, socket_options: _TYPE_SOCKET_OPTIONS | None = None, ) -> socket.socket: """Connect to *address* and return the socket object. Convenience function. Connect to *address* (a 2-tuple ``(host, port)``) and return the socket object. Passing the optional *timeout* parameter will set the timeout on the socket instance before attempting to connect. 
If no *timeout* is supplied, the global default timeout setting returned by :func:`socket.getdefaulttimeout` is used. If *source_address* is set it must be a tuple of (host, port) for the socket to bind as a source address before making the connection. An host of '' or port 0 tells the OS to use the default. """ host, port = address if host.startswith("["): host = host.strip("[]") err = None # Using the value from allowed_gai_family() in the context of getaddrinfo lets # us select whether to work with IPv4 DNS records, IPv6 records, or both. # The original create_connection function always returns all records. family = allowed_gai_family() try: host.encode("idna") except UnicodeError: raise LocationParseError(f"'{host}', label empty or too long") from None for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM): af, socktype, proto, canonname, sa = res sock = None try: sock = socket.socket(af, socktype, proto) # If provided, set socket level options before connecting. _set_socket_options(sock, socket_options) if timeout is not _DEFAULT_TIMEOUT: sock.settimeout(timeout) if source_address: sock.bind(source_address) > sock.connect(sa) E ConnectionRefusedError: [Errno 111] Connection refused ../.tox/tests121/lib/python3.11/site-packages/urllib3/util/connection.py:73: ConnectionRefusedError The above exception was the direct cause of the following exception: self = method = 'DELETE' url = '/rests/data/network-topology:network-topology/topology=topology-netconf/node=ROADMA01' body = None headers = {'User-Agent': 'python-requests/2.32.3', 'Accept-Encoding': 'gzip, deflate', 'Accept': 'application/json', 'Connection': 'keep-alive', 'Content-Type': 'application/json', 'Content-Length': '0', 'Authorization': 'Basic YWRtaW46YWRtaW4='} retries = Retry(total=0, connect=None, read=False, redirect=None, status=None) redirect = False, assert_same_host = False timeout = Timeout(connect=10, read=10, total=None), pool_timeout = None release_conn = False, chunked = False, body_pos = None, preload_content = False decode_content = False, response_kw = {} parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/rests/data/network-topology:network-topology/topology=topology-netconf/node=ROADMA01', query=None, fragment=None) destination_scheme = None, conn = None, release_this_conn = True http_tunnel_required = False, err = None, clean_exit = False def urlopen( # type: ignore[override] self, method: str, url: str, body: _TYPE_BODY | None = None, headers: typing.Mapping[str, str] | None = None, retries: Retry | bool | int | None = None, redirect: bool = True, assert_same_host: bool = True, timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, pool_timeout: int | None = None, release_conn: bool | None = None, chunked: bool = False, body_pos: _TYPE_BODY_POSITION | None = None, preload_content: bool = True, decode_content: bool = True, **response_kw: typing.Any, ) -> BaseHTTPResponse: """ Get a connection from the pool and perform an HTTP request. This is the lowest level call for making a request, so you'll need to specify all the raw details. .. note:: More commonly, it's appropriate to use a convenience method such as :meth:`request`. .. note:: `release_conn` will only behave as expected if `preload_content=False` because we want to make `preload_content=False` the default behaviour someday soon without breaking backwards compatibility. :param method: HTTP request method (such as GET, POST, PUT, etc.) :param url: The URL to perform the request on. 
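
test_19 fails on the teardown step itself: test_utils.unmount_device("ROADMA01") issues a RESTCONF DELETE on the topology-netconf node. A standalone sketch of that call, matching the method, URL and empty body (Content-Length: 0) shown above; credentials and the expected success status are assumptions:

    import requests

    URL = ("http://localhost:8182/rests/data/network-topology:network-topology/"
           "topology=topology-netconf/node=ROADMA01")

    # requests adds Content-Length: 0 itself for a body-less DELETE.
    response = requests.delete(
        URL,
        headers={"Content-Type": "application/json", "Accept": "application/json"},
        auth=("admin", "admin"),
        timeout=(10, 10),
    )
    print(response.status_code)  # presumably 2xx when the unmount succeeds
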
:param body: Data to send in the request body, either :class:`str`, :class:`bytes`, an iterable of :class:`str`/:class:`bytes`, or a file-like object. :param headers: Dictionary of custom headers to send, such as User-Agent, If-None-Match, etc. If None, pool headers are used. If provided, these headers completely replace any pool-specific headers. :param retries: Configure the number of retries to allow before raising a :class:`~urllib3.exceptions.MaxRetryError` exception. If ``None`` (default) will retry 3 times, see ``Retry.DEFAULT``. Pass a :class:`~urllib3.util.retry.Retry` object for fine-grained control over different types of retries. Pass an integer number to retry connection errors that many times, but no other types of errors. Pass zero to never retry. If ``False``, then retries are disabled and any exception is raised immediately. Also, instead of raising a MaxRetryError on redirects, the redirect response will be returned. :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. :param redirect: If True, automatically handle redirects (status codes 301, 302, 303, 307, 308). Each redirect counts as a retry. Disabling retries will disable redirect, too. :param assert_same_host: If ``True``, will make sure that the host of the pool requests is consistent else will raise HostChangedError. When ``False``, you can use the pool on an HTTP proxy and request foreign hosts. :param timeout: If specified, overrides the default timeout for this one request. It may be a float (in seconds) or an instance of :class:`urllib3.util.Timeout`. :param pool_timeout: If set and the pool is set to block=True, then this method will block for ``pool_timeout`` seconds and raise EmptyPoolError if no connection is available within the time period. :param bool preload_content: If True, the response's body will be preloaded into memory. :param bool decode_content: If True, will attempt to decode the body based on the 'content-encoding' header. :param release_conn: If False, then the urlopen call will not release the connection back into the pool once a response is received (but will release if you read the entire contents of the response such as when `preload_content=True`). This is useful if you're not preloading the response's content immediately. You will need to call ``r.release_conn()`` on the response ``r`` to return the connection back into the pool. If None, it takes the value of ``preload_content`` which defaults to ``True``. :param bool chunked: If True, urllib3 will send the body using chunked transfer encoding. Otherwise, urllib3 will send the body using the standard content-length form. Defaults to False. :param int body_pos: Position to seek to in file-like body in the event of a retry or redirect. Typically this won't need to be set because urllib3 will auto-populate the value when needed. """ parsed_url = parse_url(url) destination_scheme = parsed_url.scheme if headers is None: headers = self.headers if not isinstance(retries, Retry): retries = Retry.from_int(retries, redirect=redirect, default=self.retries) if release_conn is None: release_conn = preload_content # Check host if assert_same_host and not self.is_same_host(url): raise HostChangedError(self, url, retries) # Ensure that the URL we're connecting to is properly encoded if url.startswith("/"): url = to_str(_encode_target(url)) else: url = to_str(parsed_url.url) conn = None # Track whether `conn` needs to be released before # returning/raising/recursing. 
Update this variable if necessary, and # leave `release_conn` constant throughout the function. That way, if # the function recurses, the original value of `release_conn` will be # passed down into the recursive call, and its value will be respected. # # See issue #651 [1] for details. # # [1] release_this_conn = release_conn http_tunnel_required = connection_requires_http_tunnel( self.proxy, self.proxy_config, destination_scheme ) # Merge the proxy headers. Only done when not using HTTP CONNECT. We # have to copy the headers dict so we can safely change it without those # changes being reflected in anyone else's copy. if not http_tunnel_required: headers = headers.copy() # type: ignore[attr-defined] headers.update(self.proxy_headers) # type: ignore[union-attr] # Must keep the exception bound to a separate variable or else Python 3 # complains about UnboundLocalError. err = None # Keep track of whether we cleanly exited the except block. This # ensures we do proper cleanup in finally. clean_exit = False # Rewind body position, if needed. Record current position # for future rewinds in the event of a redirect/retry. body_pos = set_file_position(body, body_pos) try: # Request a connection from the queue. timeout_obj = self._get_timeout(timeout) conn = self._get_conn(timeout=pool_timeout) conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] # Is this a closed/new connection that requires CONNECT tunnelling? if self.proxy is not None and http_tunnel_required and conn.is_closed: try: self._prepare_proxy(conn) except (BaseSSLError, OSError, SocketTimeout) as e: self._raise_timeout( err=e, url=self.proxy.url, timeout_value=conn.timeout ) raise # If we're going to release the connection in ``finally:``, then # the response doesn't need to know about the connection. Otherwise # it will also try to release it and we'll have a double-release # mess. response_conn = conn if not release_conn else None # Make the request on the HTTPConnection object > response = self._make_request( conn, method, url, timeout=timeout_obj, body=body, headers=headers, chunked=chunked, retries=retries, response_conn=response_conn, preload_content=preload_content, decode_content=decode_content, **response_kw, ) ../.tox/tests121/lib/python3.11/site-packages/urllib3/connectionpool.py:789: _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ ../.tox/tests121/lib/python3.11/site-packages/urllib3/connectionpool.py:495: in _make_request conn.request( ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:441: in request self.endheaders() /opt/pyenv/versions/3.11.7/lib/python3.11/http/client.py:1289: in endheaders self._send_output(message_body, encode_chunked=encode_chunked) /opt/pyenv/versions/3.11.7/lib/python3.11/http/client.py:1048: in _send_output self.send(msg) /opt/pyenv/versions/3.11.7/lib/python3.11/http/client.py:986: in send self.connect() ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:279: in connect self.sock = self._new_conn() _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ self = def _new_conn(self) -> socket.socket: """Establish a socket connection and set nodelay settings on it. :return: New socket connection. 
""" try: sock = connection.create_connection( (self._dns_host, self.port), self.timeout, source_address=self.source_address, socket_options=self.socket_options, ) except socket.gaierror as e: raise NameResolutionError(self.host, self, e) from e except SocketTimeout as e: raise ConnectTimeoutError( self, f"Connection to {self.host} timed out. (connect timeout={self.timeout})", ) from e except OSError as e: > raise NewConnectionError( self, f"Failed to establish a new connection: {e}" ) from e E urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:214: NewConnectionError The above exception was the direct cause of the following exception: self = request = , stream = False timeout = Timeout(connect=10, read=10, total=None), verify = True, cert = None proxies = OrderedDict() def send( self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None ): """Sends PreparedRequest object. Returns Response object. :param request: The :class:`PreparedRequest ` being sent. :param stream: (optional) Whether to stream the request content. :param timeout: (optional) How long to wait for the server to send data before giving up, as a float, or a :ref:`(connect timeout, read timeout) ` tuple. :type timeout: float or tuple or urllib3 Timeout object :param verify: (optional) Either a boolean, in which case it controls whether we verify the server's TLS certificate, or a string, in which case it must be a path to a CA bundle to use :param cert: (optional) Any user-provided SSL certificate to be trusted. :param proxies: (optional) The proxies dictionary to apply to the request. :rtype: requests.Response """ try: conn = self.get_connection_with_tls_context( request, verify, proxies=proxies, cert=cert ) except LocationValueError as e: raise InvalidURL(e, request=request) self.cert_verify(conn, request.url, verify, cert) url = self.request_url(request, proxies) self.add_headers( request, stream=stream, timeout=timeout, verify=verify, cert=cert, proxies=proxies, ) chunked = not (request.body is None or "Content-Length" in request.headers) if isinstance(timeout, tuple): try: connect, read = timeout timeout = TimeoutSauce(connect=connect, read=read) except ValueError: raise ValueError( f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " f"or a single float to set both timeouts to the same value." 
) elif isinstance(timeout, TimeoutSauce): pass else: timeout = TimeoutSauce(connect=timeout, read=timeout) try: > resp = conn.urlopen( method=request.method, url=url, body=request.body, headers=request.headers, redirect=False, assert_same_host=False, preload_content=False, decode_content=False, retries=self.max_retries, timeout=timeout, chunked=chunked, ) ../.tox/tests121/lib/python3.11/site-packages/requests/adapters.py:667: _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ ../.tox/tests121/lib/python3.11/site-packages/urllib3/connectionpool.py:843: in urlopen retries = retries.increment( _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ self = Retry(total=0, connect=None, read=False, redirect=None, status=None) method = 'DELETE' url = '/rests/data/network-topology:network-topology/topology=topology-netconf/node=ROADMA01' response = None error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused') _pool = _stacktrace = def increment( self, method: str | None = None, url: str | None = None, response: BaseHTTPResponse | None = None, error: Exception | None = None, _pool: ConnectionPool | None = None, _stacktrace: TracebackType | None = None, ) -> Self: """Return a new Retry object with incremented retry counters. :param response: A response object, or None, if the server did not return a response. :type response: :class:`~urllib3.response.BaseHTTPResponse` :param Exception error: An error encountered during the request, or None if the response was received successfully. :return: A new ``Retry`` object. """ if self.total is False and error: # Disabled, indicate to re-raise the error. raise reraise(type(error), error, _stacktrace) total = self.total if total is not None: total -= 1 connect = self.connect read = self.read redirect = self.redirect status_count = self.status other = self.other cause = "unknown" status = None redirect_location = None if error and self._is_connection_error(error): # Connect retry? if connect is False: raise reraise(type(error), error, _stacktrace) elif connect is not None: connect -= 1 elif error and self._is_read_error(error): # Read retry? if read is False or method is None or not self._is_method_retryable(method): raise reraise(type(error), error, _stacktrace) elif read is not None: read -= 1 elif error: # Other retry? if other is not None: other -= 1 elif response and response.get_redirect_location(): # Redirect retry? 
if redirect is not None: redirect -= 1 cause = "too many redirects" response_redirect_location = response.get_redirect_location() if response_redirect_location: redirect_location = response_redirect_location status = response.status else: # Incrementing because of a server error like a 500 in # status_forcelist and the given method is in the allowed_methods cause = ResponseError.GENERIC_ERROR if response and response.status: if status_count is not None: status_count -= 1 cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status) status = response.status history = self.history + ( RequestHistory(method, url, error, status, redirect_location), ) new_retry = self.new( total=total, connect=connect, read=read, redirect=redirect, status=status_count, other=other, history=history, ) if new_retry.is_exhausted(): reason = error or ResponseError(cause) > raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type] E urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=8182): Max retries exceeded with url: /rests/data/network-topology:network-topology/topology=topology-netconf/node=ROADMA01 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) ../.tox/tests121/lib/python3.11/site-packages/urllib3/util/retry.py:519: MaxRetryError During handling of the above exception, another exception occurred: self = def test_19_rdm_device_disconnection(self): > response = test_utils.unmount_device("ROADMA01") transportpce_tests/1.2.1/test01_portmapping.py:211: _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ transportpce_tests/common/test_utils.py:360: in unmount_device response = delete_request(url[RESTCONF_VERSION].format('{}', node)) transportpce_tests/common/test_utils.py:133: in delete_request return requests.request( ../.tox/tests121/lib/python3.11/site-packages/requests/api.py:59: in request return session.request(method=method, url=url, **kwargs) ../.tox/tests121/lib/python3.11/site-packages/requests/sessions.py:589: in request resp = self.send(prep, **send_kwargs) ../.tox/tests121/lib/python3.11/site-packages/requests/sessions.py:703: in send r = adapter.send(request, **kwargs) _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ self = request = , stream = False timeout = Timeout(connect=10, read=10, total=None), verify = True, cert = None proxies = OrderedDict() def send( self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None ): """Sends PreparedRequest object. Returns Response object. :param request: The :class:`PreparedRequest ` being sent. :param stream: (optional) Whether to stream the request content. :param timeout: (optional) How long to wait for the server to send data before giving up, as a float, or a :ref:`(connect timeout, read timeout) ` tuple. :type timeout: float or tuple or urllib3 Timeout object :param verify: (optional) Either a boolean, in which case it controls whether we verify the server's TLS certificate, or a string, in which case it must be a path to a CA bundle to use :param cert: (optional) Any user-provided SSL certificate to be trusted. :param proxies: (optional) The proxies dictionary to apply to the request. 
:rtype: requests.Response """ try: conn = self.get_connection_with_tls_context( request, verify, proxies=proxies, cert=cert ) except LocationValueError as e: raise InvalidURL(e, request=request) self.cert_verify(conn, request.url, verify, cert) url = self.request_url(request, proxies) self.add_headers( request, stream=stream, timeout=timeout, verify=verify, cert=cert, proxies=proxies, ) chunked = not (request.body is None or "Content-Length" in request.headers) if isinstance(timeout, tuple): try: connect, read = timeout timeout = TimeoutSauce(connect=connect, read=read) except ValueError: raise ValueError( f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " f"or a single float to set both timeouts to the same value." ) elif isinstance(timeout, TimeoutSauce): pass else: timeout = TimeoutSauce(connect=timeout, read=timeout) try: resp = conn.urlopen( method=request.method, url=url, body=request.body, headers=request.headers, redirect=False, assert_same_host=False, preload_content=False, decode_content=False, retries=self.max_retries, timeout=timeout, chunked=chunked, ) except (ProtocolError, OSError) as err: raise ConnectionError(err, request=request) except MaxRetryError as e: if isinstance(e.reason, ConnectTimeoutError): # TODO: Remove this in 3.0.0: see #2811 if not isinstance(e.reason, NewConnectionError): raise ConnectTimeout(e, request=request) if isinstance(e.reason, ResponseError): raise RetryError(e, request=request) if isinstance(e.reason, _ProxyError): raise ProxyError(e, request=request) if isinstance(e.reason, _SSLError): # This branch is for urllib3 v1.22 and later. raise SSLError(e, request=request) > raise ConnectionError(e, request=request) E requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=8182): Max retries exceeded with url: /rests/data/network-topology:network-topology/topology=topology-netconf/node=ROADMA01 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) ../.tox/tests121/lib/python3.11/site-packages/requests/adapters.py:700: ConnectionError ----------------------------- Captured stdout call ----------------------------- execution of test_19_rdm_device_disconnection ________ TransportPCEPortMappingTesting.test_20_rdm_device_disconnected ________ self = def _new_conn(self) -> socket.socket: """Establish a socket connection and set nodelay settings on it. :return: New socket connection. """ try: > sock = connection.create_connection( (self._dns_host, self.port), self.timeout, source_address=self.source_address, socket_options=self.socket_options, ) ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:199: _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ ../.tox/tests121/lib/python3.11/site-packages/urllib3/util/connection.py:85: in create_connection raise err _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ address = ('localhost', 8182), timeout = 10, source_address = None socket_options = [(6, 1, 1)] def create_connection( address: tuple[str, int], timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, source_address: tuple[str, int] | None = None, socket_options: _TYPE_SOCKET_OPTIONS | None = None, ) -> socket.socket: """Connect to *address* and return the socket object. Convenience function. Connect to *address* (a 2-tuple ``(host, port)``) and return the socket object. Passing the optional *timeout* parameter will set the timeout on the socket instance before attempting to connect. 
If no *timeout* is supplied, the global default timeout setting returned by :func:`socket.getdefaulttimeout` is used. If *source_address* is set it must be a tuple of (host, port) for the socket to bind as a source address before making the connection. An host of '' or port 0 tells the OS to use the default. """ host, port = address if host.startswith("["): host = host.strip("[]") err = None # Using the value from allowed_gai_family() in the context of getaddrinfo lets # us select whether to work with IPv4 DNS records, IPv6 records, or both. # The original create_connection function always returns all records. family = allowed_gai_family() try: host.encode("idna") except UnicodeError: raise LocationParseError(f"'{host}', label empty or too long") from None for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM): af, socktype, proto, canonname, sa = res sock = None try: sock = socket.socket(af, socktype, proto) # If provided, set socket level options before connecting. _set_socket_options(sock, socket_options) if timeout is not _DEFAULT_TIMEOUT: sock.settimeout(timeout) if source_address: sock.bind(source_address) > sock.connect(sa) E ConnectionRefusedError: [Errno 111] Connection refused ../.tox/tests121/lib/python3.11/site-packages/urllib3/util/connection.py:73: ConnectionRefusedError The above exception was the direct cause of the following exception: self = method = 'GET' url = '/rests/data/network-topology:network-topology/topology=topology-netconf/node=ROADMA01?content=nonconfig' body = None headers = {'User-Agent': 'python-requests/2.32.3', 'Accept-Encoding': 'gzip, deflate', 'Accept': 'application/json', 'Connection': 'keep-alive', 'Content-Type': 'application/json', 'Authorization': 'Basic YWRtaW46YWRtaW4='} retries = Retry(total=0, connect=None, read=False, redirect=None, status=None) redirect = False, assert_same_host = False timeout = Timeout(connect=10, read=10, total=None), pool_timeout = None release_conn = False, chunked = False, body_pos = None, preload_content = False decode_content = False, response_kw = {} parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/rests/data/network-topology:network-topology/topology=topology-netconf/node=ROADMA01', query='content=nonconfig', fragment=None) destination_scheme = None, conn = None, release_this_conn = True http_tunnel_required = False, err = None, clean_exit = False def urlopen( # type: ignore[override] self, method: str, url: str, body: _TYPE_BODY | None = None, headers: typing.Mapping[str, str] | None = None, retries: Retry | bool | int | None = None, redirect: bool = True, assert_same_host: bool = True, timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, pool_timeout: int | None = None, release_conn: bool | None = None, chunked: bool = False, body_pos: _TYPE_BODY_POSITION | None = None, preload_content: bool = True, decode_content: bool = True, **response_kw: typing.Any, ) -> BaseHTTPResponse: """ Get a connection from the pool and perform an HTTP request. This is the lowest level call for making a request, so you'll need to specify all the raw details. .. note:: More commonly, it's appropriate to use a convenience method such as :meth:`request`. .. note:: `release_conn` will only behave as expected if `preload_content=False` because we want to make `preload_content=False` the default behaviour someday soon without breaking backwards compatibility. :param method: HTTP request method (such as GET, POST, PUT, etc.) :param url: The URL to perform the request on. 
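
The content=nonconfig query in test_20's URL selects operational (state) data via the RESTCONF "content" query parameter (RFC 8040). A sketch of the same disconnection check with the query passed through params instead of being hard-coded in the URL (credentials assumed as before); the test presumably asserts the device is no longer reported as connected once unmounted:

    import requests

    URL = ("http://localhost:8182/rests/data/network-topology:network-topology/"
           "topology=topology-netconf/node=ROADMA01")

    response = requests.get(
        URL,
        params={"content": "nonconfig"},   # encoded as ?content=nonconfig
        auth=("admin", "admin"),
        timeout=(10, 10),
    )
    print(response.status_code, response.text)
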
:param body: Data to send in the request body, either :class:`str`, :class:`bytes`, an iterable of :class:`str`/:class:`bytes`, or a file-like object. :param headers: Dictionary of custom headers to send, such as User-Agent, If-None-Match, etc. If None, pool headers are used. If provided, these headers completely replace any pool-specific headers. :param retries: Configure the number of retries to allow before raising a :class:`~urllib3.exceptions.MaxRetryError` exception. If ``None`` (default) will retry 3 times, see ``Retry.DEFAULT``. Pass a :class:`~urllib3.util.retry.Retry` object for fine-grained control over different types of retries. Pass an integer number to retry connection errors that many times, but no other types of errors. Pass zero to never retry. If ``False``, then retries are disabled and any exception is raised immediately. Also, instead of raising a MaxRetryError on redirects, the redirect response will be returned. :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. :param redirect: If True, automatically handle redirects (status codes 301, 302, 303, 307, 308). Each redirect counts as a retry. Disabling retries will disable redirect, too. :param assert_same_host: If ``True``, will make sure that the host of the pool requests is consistent else will raise HostChangedError. When ``False``, you can use the pool on an HTTP proxy and request foreign hosts. :param timeout: If specified, overrides the default timeout for this one request. It may be a float (in seconds) or an instance of :class:`urllib3.util.Timeout`. :param pool_timeout: If set and the pool is set to block=True, then this method will block for ``pool_timeout`` seconds and raise EmptyPoolError if no connection is available within the time period. :param bool preload_content: If True, the response's body will be preloaded into memory. :param bool decode_content: If True, will attempt to decode the body based on the 'content-encoding' header. :param release_conn: If False, then the urlopen call will not release the connection back into the pool once a response is received (but will release if you read the entire contents of the response such as when `preload_content=True`). This is useful if you're not preloading the response's content immediately. You will need to call ``r.release_conn()`` on the response ``r`` to return the connection back into the pool. If None, it takes the value of ``preload_content`` which defaults to ``True``. :param bool chunked: If True, urllib3 will send the body using chunked transfer encoding. Otherwise, urllib3 will send the body using the standard content-length form. Defaults to False. :param int body_pos: Position to seek to in file-like body in the event of a retry or redirect. Typically this won't need to be set because urllib3 will auto-populate the value when needed. """ parsed_url = parse_url(url) destination_scheme = parsed_url.scheme if headers is None: headers = self.headers if not isinstance(retries, Retry): retries = Retry.from_int(retries, redirect=redirect, default=self.retries) if release_conn is None: release_conn = preload_content # Check host if assert_same_host and not self.is_same_host(url): raise HostChangedError(self, url, retries) # Ensure that the URL we're connecting to is properly encoded if url.startswith("/"): url = to_str(_encode_target(url)) else: url = to_str(parsed_url.url) conn = None # Track whether `conn` needs to be released before # returning/raising/recursing. 
Update this variable if necessary, and # leave `release_conn` constant throughout the function. That way, if # the function recurses, the original value of `release_conn` will be # passed down into the recursive call, and its value will be respected. # # See issue #651 [1] for details. # # [1] release_this_conn = release_conn http_tunnel_required = connection_requires_http_tunnel( self.proxy, self.proxy_config, destination_scheme ) # Merge the proxy headers. Only done when not using HTTP CONNECT. We # have to copy the headers dict so we can safely change it without those # changes being reflected in anyone else's copy. if not http_tunnel_required: headers = headers.copy() # type: ignore[attr-defined] headers.update(self.proxy_headers) # type: ignore[union-attr] # Must keep the exception bound to a separate variable or else Python 3 # complains about UnboundLocalError. err = None # Keep track of whether we cleanly exited the except block. This # ensures we do proper cleanup in finally. clean_exit = False # Rewind body position, if needed. Record current position # for future rewinds in the event of a redirect/retry. body_pos = set_file_position(body, body_pos) try: # Request a connection from the queue. timeout_obj = self._get_timeout(timeout) conn = self._get_conn(timeout=pool_timeout) conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] # Is this a closed/new connection that requires CONNECT tunnelling? if self.proxy is not None and http_tunnel_required and conn.is_closed: try: self._prepare_proxy(conn) except (BaseSSLError, OSError, SocketTimeout) as e: self._raise_timeout( err=e, url=self.proxy.url, timeout_value=conn.timeout ) raise # If we're going to release the connection in ``finally:``, then # the response doesn't need to know about the connection. Otherwise # it will also try to release it and we'll have a double-release # mess. response_conn = conn if not release_conn else None # Make the request on the HTTPConnection object > response = self._make_request( conn, method, url, timeout=timeout_obj, body=body, headers=headers, chunked=chunked, retries=retries, response_conn=response_conn, preload_content=preload_content, decode_content=decode_content, **response_kw, ) ../.tox/tests121/lib/python3.11/site-packages/urllib3/connectionpool.py:789: _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ ../.tox/tests121/lib/python3.11/site-packages/urllib3/connectionpool.py:495: in _make_request conn.request( ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:441: in request self.endheaders() /opt/pyenv/versions/3.11.7/lib/python3.11/http/client.py:1289: in endheaders self._send_output(message_body, encode_chunked=encode_chunked) /opt/pyenv/versions/3.11.7/lib/python3.11/http/client.py:1048: in _send_output self.send(msg) /opt/pyenv/versions/3.11.7/lib/python3.11/http/client.py:986: in send self.connect() ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:279: in connect self.sock = self._new_conn() _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ self = def _new_conn(self) -> socket.socket: """Establish a socket connection and set nodelay settings on it. :return: New socket connection. 
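The _new_conn / create_connection pair quoted above is where every one of these failures originates: nothing is listening on localhost:8182, so the TCP handshake itself is refused before any HTTP is spoken. A minimal sketch of that low-level failure, assuming the controller is still down on that port:

import socket

# Reproduce the root cause seen in the traceback: with no RESTCONF endpoint
# listening on localhost:8182, the kernel refuses the TCP handshake and
# errno 111 surfaces as ConnectionRefusedError.
try:
    socket.create_connection(("localhost", 8182), timeout=10)
except ConnectionRefusedError as exc:
    print(exc)  # [Errno 111] Connection refused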
""" try: sock = connection.create_connection( (self._dns_host, self.port), self.timeout, source_address=self.source_address, socket_options=self.socket_options, ) except socket.gaierror as e: raise NameResolutionError(self.host, self, e) from e except SocketTimeout as e: raise ConnectTimeoutError( self, f"Connection to {self.host} timed out. (connect timeout={self.timeout})", ) from e except OSError as e: > raise NewConnectionError( self, f"Failed to establish a new connection: {e}" ) from e E urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:214: NewConnectionError The above exception was the direct cause of the following exception: self = request = , stream = False timeout = Timeout(connect=10, read=10, total=None), verify = True, cert = None proxies = OrderedDict() def send( self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None ): """Sends PreparedRequest object. Returns Response object. :param request: The :class:`PreparedRequest ` being sent. :param stream: (optional) Whether to stream the request content. :param timeout: (optional) How long to wait for the server to send data before giving up, as a float, or a :ref:`(connect timeout, read timeout) ` tuple. :type timeout: float or tuple or urllib3 Timeout object :param verify: (optional) Either a boolean, in which case it controls whether we verify the server's TLS certificate, or a string, in which case it must be a path to a CA bundle to use :param cert: (optional) Any user-provided SSL certificate to be trusted. :param proxies: (optional) The proxies dictionary to apply to the request. :rtype: requests.Response """ try: conn = self.get_connection_with_tls_context( request, verify, proxies=proxies, cert=cert ) except LocationValueError as e: raise InvalidURL(e, request=request) self.cert_verify(conn, request.url, verify, cert) url = self.request_url(request, proxies) self.add_headers( request, stream=stream, timeout=timeout, verify=verify, cert=cert, proxies=proxies, ) chunked = not (request.body is None or "Content-Length" in request.headers) if isinstance(timeout, tuple): try: connect, read = timeout timeout = TimeoutSauce(connect=connect, read=read) except ValueError: raise ValueError( f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " f"or a single float to set both timeouts to the same value." 
) elif isinstance(timeout, TimeoutSauce): pass else: timeout = TimeoutSauce(connect=timeout, read=timeout) try: > resp = conn.urlopen( method=request.method, url=url, body=request.body, headers=request.headers, redirect=False, assert_same_host=False, preload_content=False, decode_content=False, retries=self.max_retries, timeout=timeout, chunked=chunked, ) ../.tox/tests121/lib/python3.11/site-packages/requests/adapters.py:667: _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ ../.tox/tests121/lib/python3.11/site-packages/urllib3/connectionpool.py:843: in urlopen retries = retries.increment( _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ self = Retry(total=0, connect=None, read=False, redirect=None, status=None) method = 'GET' url = '/rests/data/network-topology:network-topology/topology=topology-netconf/node=ROADMA01?content=nonconfig' response = None error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused') _pool = _stacktrace = def increment( self, method: str | None = None, url: str | None = None, response: BaseHTTPResponse | None = None, error: Exception | None = None, _pool: ConnectionPool | None = None, _stacktrace: TracebackType | None = None, ) -> Self: """Return a new Retry object with incremented retry counters. :param response: A response object, or None, if the server did not return a response. :type response: :class:`~urllib3.response.BaseHTTPResponse` :param Exception error: An error encountered during the request, or None if the response was received successfully. :return: A new ``Retry`` object. """ if self.total is False and error: # Disabled, indicate to re-raise the error. raise reraise(type(error), error, _stacktrace) total = self.total if total is not None: total -= 1 connect = self.connect read = self.read redirect = self.redirect status_count = self.status other = self.other cause = "unknown" status = None redirect_location = None if error and self._is_connection_error(error): # Connect retry? if connect is False: raise reraise(type(error), error, _stacktrace) elif connect is not None: connect -= 1 elif error and self._is_read_error(error): # Read retry? if read is False or method is None or not self._is_method_retryable(method): raise reraise(type(error), error, _stacktrace) elif read is not None: read -= 1 elif error: # Other retry? if other is not None: other -= 1 elif response and response.get_redirect_location(): # Redirect retry? 
if redirect is not None: redirect -= 1 cause = "too many redirects" response_redirect_location = response.get_redirect_location() if response_redirect_location: redirect_location = response_redirect_location status = response.status else: # Incrementing because of a server error like a 500 in # status_forcelist and the given method is in the allowed_methods cause = ResponseError.GENERIC_ERROR if response and response.status: if status_count is not None: status_count -= 1 cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status) status = response.status history = self.history + ( RequestHistory(method, url, error, status, redirect_location), ) new_retry = self.new( total=total, connect=connect, read=read, redirect=redirect, status=status_count, other=other, history=history, ) if new_retry.is_exhausted(): reason = error or ResponseError(cause) > raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type] E urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=8182): Max retries exceeded with url: /rests/data/network-topology:network-topology/topology=topology-netconf/node=ROADMA01?content=nonconfig (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) ../.tox/tests121/lib/python3.11/site-packages/urllib3/util/retry.py:519: MaxRetryError During handling of the above exception, another exception occurred: self = def test_20_rdm_device_disconnected(self): > response = test_utils.check_device_connection("ROADMA01") transportpce_tests/1.2.1/test01_portmapping.py:215: _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ transportpce_tests/common/test_utils.py:371: in check_device_connection response = get_request(url[RESTCONF_VERSION].format('{}', node)) transportpce_tests/common/test_utils.py:116: in get_request return requests.request( ../.tox/tests121/lib/python3.11/site-packages/requests/api.py:59: in request return session.request(method=method, url=url, **kwargs) ../.tox/tests121/lib/python3.11/site-packages/requests/sessions.py:589: in request resp = self.send(prep, **send_kwargs) ../.tox/tests121/lib/python3.11/site-packages/requests/sessions.py:703: in send r = adapter.send(request, **kwargs) _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ self = request = , stream = False timeout = Timeout(connect=10, read=10, total=None), verify = True, cert = None proxies = OrderedDict() def send( self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None ): """Sends PreparedRequest object. Returns Response object. :param request: The :class:`PreparedRequest ` being sent. :param stream: (optional) Whether to stream the request content. :param timeout: (optional) How long to wait for the server to send data before giving up, as a float, or a :ref:`(connect timeout, read timeout) ` tuple. :type timeout: float or tuple or urllib3 Timeout object :param verify: (optional) Either a boolean, in which case it controls whether we verify the server's TLS certificate, or a string, in which case it must be a path to a CA bundle to use :param cert: (optional) Any user-provided SSL certificate to be trusted. :param proxies: (optional) The proxies dictionary to apply to the request. 
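Retry.increment() shown above is what turns the refused connection into MaxRetryError: the tests run with Retry(total=0, connect=None, read=False), so the very first connection error exhausts the budget. A rough sketch of the same behaviour at the urllib3 level, again assuming nothing answers on localhost:8182 (the pool and path below are illustrative):

import urllib3
from urllib3.exceptions import MaxRetryError

# retries=0 becomes Retry(total=0); the first refused connection exhausts it,
# so urlopen() raises MaxRetryError instead of retrying.
pool = urllib3.HTTPConnectionPool("localhost", 8182, timeout=10.0, retries=0)
try:
    pool.urlopen(
        "GET",
        "/rests/data/transportpce-portmapping:network/nodes=ROADMA01/node-info",
    )
except MaxRetryError as exc:
    print(exc.reason)  # NewConnectionError: ... Connection refused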
:rtype: requests.Response """ try: conn = self.get_connection_with_tls_context( request, verify, proxies=proxies, cert=cert ) except LocationValueError as e: raise InvalidURL(e, request=request) self.cert_verify(conn, request.url, verify, cert) url = self.request_url(request, proxies) self.add_headers( request, stream=stream, timeout=timeout, verify=verify, cert=cert, proxies=proxies, ) chunked = not (request.body is None or "Content-Length" in request.headers) if isinstance(timeout, tuple): try: connect, read = timeout timeout = TimeoutSauce(connect=connect, read=read) except ValueError: raise ValueError( f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " f"or a single float to set both timeouts to the same value." ) elif isinstance(timeout, TimeoutSauce): pass else: timeout = TimeoutSauce(connect=timeout, read=timeout) try: resp = conn.urlopen( method=request.method, url=url, body=request.body, headers=request.headers, redirect=False, assert_same_host=False, preload_content=False, decode_content=False, retries=self.max_retries, timeout=timeout, chunked=chunked, ) except (ProtocolError, OSError) as err: raise ConnectionError(err, request=request) except MaxRetryError as e: if isinstance(e.reason, ConnectTimeoutError): # TODO: Remove this in 3.0.0: see #2811 if not isinstance(e.reason, NewConnectionError): raise ConnectTimeout(e, request=request) if isinstance(e.reason, ResponseError): raise RetryError(e, request=request) if isinstance(e.reason, _ProxyError): raise ProxyError(e, request=request) if isinstance(e.reason, _SSLError): # This branch is for urllib3 v1.22 and later. raise SSLError(e, request=request) > raise ConnectionError(e, request=request) E requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=8182): Max retries exceeded with url: /rests/data/network-topology:network-topology/topology=topology-netconf/node=ROADMA01?content=nonconfig (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) ../.tox/tests121/lib/python3.11/site-packages/requests/adapters.py:700: ConnectionError ----------------------------- Captured stdout call ----------------------------- execution of test_20_rdm_device_disconnected _______ TransportPCEPortMappingTesting.test_21_rdm_device_not_connected ________ self = def _new_conn(self) -> socket.socket: """Establish a socket connection and set nodelay settings on it. :return: New socket connection. """ try: > sock = connection.create_connection( (self._dns_host, self.port), self.timeout, source_address=self.source_address, socket_options=self.socket_options, ) ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:199: _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ ../.tox/tests121/lib/python3.11/site-packages/urllib3/util/connection.py:85: in create_connection raise err _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ address = ('localhost', 8182), timeout = 10, source_address = None socket_options = [(6, 1, 1)] def create_connection( address: tuple[str, int], timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, source_address: tuple[str, int] | None = None, socket_options: _TYPE_SOCKET_OPTIONS | None = None, ) -> socket.socket: """Connect to *address* and return the socket object. Convenience function. Connect to *address* (a 2-tuple ``(host, port)``) and return the socket object. Passing the optional *timeout* parameter will set the timeout on the socket instance before attempting to connect. 
If no *timeout* is supplied, the global default timeout setting returned by :func:`socket.getdefaulttimeout` is used. If *source_address* is set it must be a tuple of (host, port) for the socket to bind as a source address before making the connection. An host of '' or port 0 tells the OS to use the default. """ host, port = address if host.startswith("["): host = host.strip("[]") err = None # Using the value from allowed_gai_family() in the context of getaddrinfo lets # us select whether to work with IPv4 DNS records, IPv6 records, or both. # The original create_connection function always returns all records. family = allowed_gai_family() try: host.encode("idna") except UnicodeError: raise LocationParseError(f"'{host}', label empty or too long") from None for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM): af, socktype, proto, canonname, sa = res sock = None try: sock = socket.socket(af, socktype, proto) # If provided, set socket level options before connecting. _set_socket_options(sock, socket_options) if timeout is not _DEFAULT_TIMEOUT: sock.settimeout(timeout) if source_address: sock.bind(source_address) > sock.connect(sa) E ConnectionRefusedError: [Errno 111] Connection refused ../.tox/tests121/lib/python3.11/site-packages/urllib3/util/connection.py:73: ConnectionRefusedError The above exception was the direct cause of the following exception: self = method = 'GET' url = '/rests/data/transportpce-portmapping:network/nodes=ROADMA01/node-info' body = None headers = {'User-Agent': 'python-requests/2.32.3', 'Accept-Encoding': 'gzip, deflate', 'Accept': 'application/json', 'Connection': 'keep-alive', 'Content-Type': 'application/json', 'Authorization': 'Basic YWRtaW46YWRtaW4='} retries = Retry(total=0, connect=None, read=False, redirect=None, status=None) redirect = False, assert_same_host = False timeout = Timeout(connect=10, read=10, total=None), pool_timeout = None release_conn = False, chunked = False, body_pos = None, preload_content = False decode_content = False, response_kw = {} parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/rests/data/transportpce-portmapping:network/nodes=ROADMA01/node-info', query=None, fragment=None) destination_scheme = None, conn = None, release_this_conn = True http_tunnel_required = False, err = None, clean_exit = False def urlopen( # type: ignore[override] self, method: str, url: str, body: _TYPE_BODY | None = None, headers: typing.Mapping[str, str] | None = None, retries: Retry | bool | int | None = None, redirect: bool = True, assert_same_host: bool = True, timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, pool_timeout: int | None = None, release_conn: bool | None = None, chunked: bool = False, body_pos: _TYPE_BODY_POSITION | None = None, preload_content: bool = True, decode_content: bool = True, **response_kw: typing.Any, ) -> BaseHTTPResponse: """ Get a connection from the pool and perform an HTTP request. This is the lowest level call for making a request, so you'll need to specify all the raw details. .. note:: More commonly, it's appropriate to use a convenience method such as :meth:`request`. .. note:: `release_conn` will only behave as expected if `preload_content=False` because we want to make `preload_content=False` the default behaviour someday soon without breaking backwards compatibility. :param method: HTTP request method (such as GET, POST, PUT, etc.) :param url: The URL to perform the request on. 
:param body: Data to send in the request body, either :class:`str`, :class:`bytes`, an iterable of :class:`str`/:class:`bytes`, or a file-like object. :param headers: Dictionary of custom headers to send, such as User-Agent, If-None-Match, etc. If None, pool headers are used. If provided, these headers completely replace any pool-specific headers. :param retries: Configure the number of retries to allow before raising a :class:`~urllib3.exceptions.MaxRetryError` exception. If ``None`` (default) will retry 3 times, see ``Retry.DEFAULT``. Pass a :class:`~urllib3.util.retry.Retry` object for fine-grained control over different types of retries. Pass an integer number to retry connection errors that many times, but no other types of errors. Pass zero to never retry. If ``False``, then retries are disabled and any exception is raised immediately. Also, instead of raising a MaxRetryError on redirects, the redirect response will be returned. :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. :param redirect: If True, automatically handle redirects (status codes 301, 302, 303, 307, 308). Each redirect counts as a retry. Disabling retries will disable redirect, too. :param assert_same_host: If ``True``, will make sure that the host of the pool requests is consistent else will raise HostChangedError. When ``False``, you can use the pool on an HTTP proxy and request foreign hosts. :param timeout: If specified, overrides the default timeout for this one request. It may be a float (in seconds) or an instance of :class:`urllib3.util.Timeout`. :param pool_timeout: If set and the pool is set to block=True, then this method will block for ``pool_timeout`` seconds and raise EmptyPoolError if no connection is available within the time period. :param bool preload_content: If True, the response's body will be preloaded into memory. :param bool decode_content: If True, will attempt to decode the body based on the 'content-encoding' header. :param release_conn: If False, then the urlopen call will not release the connection back into the pool once a response is received (but will release if you read the entire contents of the response such as when `preload_content=True`). This is useful if you're not preloading the response's content immediately. You will need to call ``r.release_conn()`` on the response ``r`` to return the connection back into the pool. If None, it takes the value of ``preload_content`` which defaults to ``True``. :param bool chunked: If True, urllib3 will send the body using chunked transfer encoding. Otherwise, urllib3 will send the body using the standard content-length form. Defaults to False. :param int body_pos: Position to seek to in file-like body in the event of a retry or redirect. Typically this won't need to be set because urllib3 will auto-populate the value when needed. """ parsed_url = parse_url(url) destination_scheme = parsed_url.scheme if headers is None: headers = self.headers if not isinstance(retries, Retry): retries = Retry.from_int(retries, redirect=redirect, default=self.retries) if release_conn is None: release_conn = preload_content # Check host if assert_same_host and not self.is_same_host(url): raise HostChangedError(self, url, retries) # Ensure that the URL we're connecting to is properly encoded if url.startswith("/"): url = to_str(_encode_target(url)) else: url = to_str(parsed_url.url) conn = None # Track whether `conn` needs to be released before # returning/raising/recursing. 
Update this variable if necessary, and # leave `release_conn` constant throughout the function. That way, if # the function recurses, the original value of `release_conn` will be # passed down into the recursive call, and its value will be respected. # # See issue #651 [1] for details. # # [1] release_this_conn = release_conn http_tunnel_required = connection_requires_http_tunnel( self.proxy, self.proxy_config, destination_scheme ) # Merge the proxy headers. Only done when not using HTTP CONNECT. We # have to copy the headers dict so we can safely change it without those # changes being reflected in anyone else's copy. if not http_tunnel_required: headers = headers.copy() # type: ignore[attr-defined] headers.update(self.proxy_headers) # type: ignore[union-attr] # Must keep the exception bound to a separate variable or else Python 3 # complains about UnboundLocalError. err = None # Keep track of whether we cleanly exited the except block. This # ensures we do proper cleanup in finally. clean_exit = False # Rewind body position, if needed. Record current position # for future rewinds in the event of a redirect/retry. body_pos = set_file_position(body, body_pos) try: # Request a connection from the queue. timeout_obj = self._get_timeout(timeout) conn = self._get_conn(timeout=pool_timeout) conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] # Is this a closed/new connection that requires CONNECT tunnelling? if self.proxy is not None and http_tunnel_required and conn.is_closed: try: self._prepare_proxy(conn) except (BaseSSLError, OSError, SocketTimeout) as e: self._raise_timeout( err=e, url=self.proxy.url, timeout_value=conn.timeout ) raise # If we're going to release the connection in ``finally:``, then # the response doesn't need to know about the connection. Otherwise # it will also try to release it and we'll have a double-release # mess. response_conn = conn if not release_conn else None # Make the request on the HTTPConnection object > response = self._make_request( conn, method, url, timeout=timeout_obj, body=body, headers=headers, chunked=chunked, retries=retries, response_conn=response_conn, preload_content=preload_content, decode_content=decode_content, **response_kw, ) ../.tox/tests121/lib/python3.11/site-packages/urllib3/connectionpool.py:789: _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ ../.tox/tests121/lib/python3.11/site-packages/urllib3/connectionpool.py:495: in _make_request conn.request( ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:441: in request self.endheaders() /opt/pyenv/versions/3.11.7/lib/python3.11/http/client.py:1289: in endheaders self._send_output(message_body, encode_chunked=encode_chunked) /opt/pyenv/versions/3.11.7/lib/python3.11/http/client.py:1048: in _send_output self.send(msg) /opt/pyenv/versions/3.11.7/lib/python3.11/http/client.py:986: in send self.connect() ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:279: in connect self.sock = self._new_conn() _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ self = def _new_conn(self) -> socket.socket: """Establish a socket connection and set nodelay settings on it. :return: New socket connection. 
""" try: sock = connection.create_connection( (self._dns_host, self.port), self.timeout, source_address=self.source_address, socket_options=self.socket_options, ) except socket.gaierror as e: raise NameResolutionError(self.host, self, e) from e except SocketTimeout as e: raise ConnectTimeoutError( self, f"Connection to {self.host} timed out. (connect timeout={self.timeout})", ) from e except OSError as e: > raise NewConnectionError( self, f"Failed to establish a new connection: {e}" ) from e E urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:214: NewConnectionError The above exception was the direct cause of the following exception: self = request = , stream = False timeout = Timeout(connect=10, read=10, total=None), verify = True, cert = None proxies = OrderedDict() def send( self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None ): """Sends PreparedRequest object. Returns Response object. :param request: The :class:`PreparedRequest ` being sent. :param stream: (optional) Whether to stream the request content. :param timeout: (optional) How long to wait for the server to send data before giving up, as a float, or a :ref:`(connect timeout, read timeout) ` tuple. :type timeout: float or tuple or urllib3 Timeout object :param verify: (optional) Either a boolean, in which case it controls whether we verify the server's TLS certificate, or a string, in which case it must be a path to a CA bundle to use :param cert: (optional) Any user-provided SSL certificate to be trusted. :param proxies: (optional) The proxies dictionary to apply to the request. :rtype: requests.Response """ try: conn = self.get_connection_with_tls_context( request, verify, proxies=proxies, cert=cert ) except LocationValueError as e: raise InvalidURL(e, request=request) self.cert_verify(conn, request.url, verify, cert) url = self.request_url(request, proxies) self.add_headers( request, stream=stream, timeout=timeout, verify=verify, cert=cert, proxies=proxies, ) chunked = not (request.body is None or "Content-Length" in request.headers) if isinstance(timeout, tuple): try: connect, read = timeout timeout = TimeoutSauce(connect=connect, read=read) except ValueError: raise ValueError( f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " f"or a single float to set both timeouts to the same value." 
) elif isinstance(timeout, TimeoutSauce): pass else: timeout = TimeoutSauce(connect=timeout, read=timeout) try: > resp = conn.urlopen( method=request.method, url=url, body=request.body, headers=request.headers, redirect=False, assert_same_host=False, preload_content=False, decode_content=False, retries=self.max_retries, timeout=timeout, chunked=chunked, ) ../.tox/tests121/lib/python3.11/site-packages/requests/adapters.py:667: _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ ../.tox/tests121/lib/python3.11/site-packages/urllib3/connectionpool.py:843: in urlopen retries = retries.increment( _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ self = Retry(total=0, connect=None, read=False, redirect=None, status=None) method = 'GET' url = '/rests/data/transportpce-portmapping:network/nodes=ROADMA01/node-info' response = None error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused') _pool = _stacktrace = def increment( self, method: str | None = None, url: str | None = None, response: BaseHTTPResponse | None = None, error: Exception | None = None, _pool: ConnectionPool | None = None, _stacktrace: TracebackType | None = None, ) -> Self: """Return a new Retry object with incremented retry counters. :param response: A response object, or None, if the server did not return a response. :type response: :class:`~urllib3.response.BaseHTTPResponse` :param Exception error: An error encountered during the request, or None if the response was received successfully. :return: A new ``Retry`` object. """ if self.total is False and error: # Disabled, indicate to re-raise the error. raise reraise(type(error), error, _stacktrace) total = self.total if total is not None: total -= 1 connect = self.connect read = self.read redirect = self.redirect status_count = self.status other = self.other cause = "unknown" status = None redirect_location = None if error and self._is_connection_error(error): # Connect retry? if connect is False: raise reraise(type(error), error, _stacktrace) elif connect is not None: connect -= 1 elif error and self._is_read_error(error): # Read retry? if read is False or method is None or not self._is_method_retryable(method): raise reraise(type(error), error, _stacktrace) elif read is not None: read -= 1 elif error: # Other retry? if other is not None: other -= 1 elif response and response.get_redirect_location(): # Redirect retry? 
if redirect is not None: redirect -= 1 cause = "too many redirects" response_redirect_location = response.get_redirect_location() if response_redirect_location: redirect_location = response_redirect_location status = response.status else: # Incrementing because of a server error like a 500 in # status_forcelist and the given method is in the allowed_methods cause = ResponseError.GENERIC_ERROR if response and response.status: if status_count is not None: status_count -= 1 cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status) status = response.status history = self.history + ( RequestHistory(method, url, error, status, redirect_location), ) new_retry = self.new( total=total, connect=connect, read=read, redirect=redirect, status=status_count, other=other, history=history, ) if new_retry.is_exhausted(): reason = error or ResponseError(cause) > raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type] E urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=8182): Max retries exceeded with url: /rests/data/transportpce-portmapping:network/nodes=ROADMA01/node-info (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) ../.tox/tests121/lib/python3.11/site-packages/urllib3/util/retry.py:519: MaxRetryError During handling of the above exception, another exception occurred: self = def test_21_rdm_device_not_connected(self): > response = test_utils.get_portmapping_node_attr("ROADMA01", "node-info", None) transportpce_tests/1.2.1/test01_portmapping.py:223: _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ transportpce_tests/common/test_utils.py:473: in get_portmapping_node_attr response = get_request(target_url) transportpce_tests/common/test_utils.py:116: in get_request return requests.request( ../.tox/tests121/lib/python3.11/site-packages/requests/api.py:59: in request return session.request(method=method, url=url, **kwargs) ../.tox/tests121/lib/python3.11/site-packages/requests/sessions.py:589: in request resp = self.send(prep, **send_kwargs) ../.tox/tests121/lib/python3.11/site-packages/requests/sessions.py:703: in send r = adapter.send(request, **kwargs) _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ self = request = , stream = False timeout = Timeout(connect=10, read=10, total=None), verify = True, cert = None proxies = OrderedDict() def send( self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None ): """Sends PreparedRequest object. Returns Response object. :param request: The :class:`PreparedRequest ` being sent. :param stream: (optional) Whether to stream the request content. :param timeout: (optional) How long to wait for the server to send data before giving up, as a float, or a :ref:`(connect timeout, read timeout) ` tuple. :type timeout: float or tuple or urllib3 Timeout object :param verify: (optional) Either a boolean, in which case it controls whether we verify the server's TLS certificate, or a string, in which case it must be a path to a CA bundle to use :param cert: (optional) Any user-provided SSL certificate to be trusted. :param proxies: (optional) The proxies dictionary to apply to the request. 
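HTTPAdapter.send() above then maps the MaxRetryError onto requests.exceptions.ConnectionError, which is what finally reaches the test method. A hedged sketch of how such a request could be issued at the requests level, with the same no-retry policy and (connect, read) timeout tuple visible in the traceback; the session/adapter wiring here is illustrative, not the actual test_utils code:

import requests
from requests.adapters import HTTPAdapter
from urllib3.util.retry import Retry

# Mount an adapter whose max_retries matches the Retry(total=0, read=False)
# seen above; a refused connection then surfaces immediately as
# requests.exceptions.ConnectionError wrapping the MaxRetryError.
session = requests.Session()
session.mount("http://", HTTPAdapter(max_retries=Retry(total=0, read=False)))
try:
    session.get(
        "http://localhost:8182/rests/data/transportpce-portmapping:network"
        "/nodes=ROADMA01/node-info",
        auth=("admin", "admin"),
        timeout=(10, 10),  # (connect, read) -> Timeout(connect=10, read=10)
    )
except requests.exceptions.ConnectionError as exc:
    print(exc)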
:rtype: requests.Response """ try: conn = self.get_connection_with_tls_context( request, verify, proxies=proxies, cert=cert ) except LocationValueError as e: raise InvalidURL(e, request=request) self.cert_verify(conn, request.url, verify, cert) url = self.request_url(request, proxies) self.add_headers( request, stream=stream, timeout=timeout, verify=verify, cert=cert, proxies=proxies, ) chunked = not (request.body is None or "Content-Length" in request.headers) if isinstance(timeout, tuple): try: connect, read = timeout timeout = TimeoutSauce(connect=connect, read=read) except ValueError: raise ValueError( f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " f"or a single float to set both timeouts to the same value." ) elif isinstance(timeout, TimeoutSauce): pass else: timeout = TimeoutSauce(connect=timeout, read=timeout) try: resp = conn.urlopen( method=request.method, url=url, body=request.body, headers=request.headers, redirect=False, assert_same_host=False, preload_content=False, decode_content=False, retries=self.max_retries, timeout=timeout, chunked=chunked, ) except (ProtocolError, OSError) as err: raise ConnectionError(err, request=request) except MaxRetryError as e: if isinstance(e.reason, ConnectTimeoutError): # TODO: Remove this in 3.0.0: see #2811 if not isinstance(e.reason, NewConnectionError): raise ConnectTimeout(e, request=request) if isinstance(e.reason, ResponseError): raise RetryError(e, request=request) if isinstance(e.reason, _ProxyError): raise ProxyError(e, request=request) if isinstance(e.reason, _SSLError): # This branch is for urllib3 v1.22 and later. raise SSLError(e, request=request) > raise ConnectionError(e, request=request) E requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=8182): Max retries exceeded with url: /rests/data/transportpce-portmapping:network/nodes=ROADMA01/node-info (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) ../.tox/tests121/lib/python3.11/site-packages/requests/adapters.py:700: ConnectionError ----------------------------- Captured stdout call ----------------------------- execution of test_21_rdm_device_not_connected --------------------------- Captured stdout teardown --------------------------- all processes killed =========================== short test summary info ============================ FAILED transportpce_tests/1.2.1/test01_portmapping.py::TransportPCEPortMappingTesting::test_09_xpdr_portmapping_info FAILED transportpce_tests/1.2.1/test01_portmapping.py::TransportPCEPortMappingTesting::test_10_xpdr_portmapping_NETWORK1 FAILED transportpce_tests/1.2.1/test01_portmapping.py::TransportPCEPortMappingTesting::test_11_xpdr_portmapping_NETWORK2 FAILED transportpce_tests/1.2.1/test01_portmapping.py::TransportPCEPortMappingTesting::test_12_xpdr_portmapping_CLIENT1 FAILED transportpce_tests/1.2.1/test01_portmapping.py::TransportPCEPortMappingTesting::test_13_xpdr_portmapping_CLIENT2 FAILED transportpce_tests/1.2.1/test01_portmapping.py::TransportPCEPortMappingTesting::test_14_xpdr_portmapping_CLIENT3 FAILED transportpce_tests/1.2.1/test01_portmapping.py::TransportPCEPortMappingTesting::test_15_xpdr_portmapping_CLIENT4 FAILED transportpce_tests/1.2.1/test01_portmapping.py::TransportPCEPortMappingTesting::test_16_xpdr_device_disconnection FAILED transportpce_tests/1.2.1/test01_portmapping.py::TransportPCEPortMappingTesting::test_17_xpdr_device_disconnected FAILED 
transportpce_tests/1.2.1/test01_portmapping.py::TransportPCEPortMappingTesting::test_18_xpdr_device_not_connected FAILED transportpce_tests/1.2.1/test01_portmapping.py::TransportPCEPortMappingTesting::test_19_rdm_device_disconnection FAILED transportpce_tests/1.2.1/test01_portmapping.py::TransportPCEPortMappingTesting::test_20_rdm_device_disconnected FAILED transportpce_tests/1.2.1/test01_portmapping.py::TransportPCEPortMappingTesting::test_21_rdm_device_not_connected 13 failed, 8 passed in 451.52s (0:07:31) tests121: exit 1 (451.98 seconds) /w/workspace/transportpce-tox-verify-scandium/tests> ./launch_tests.sh 1.2.1 pid=35971 ............ [100%] 12 passed in 42.73s pytest -q transportpce_tests/7.1/test02_otn_renderer.py .............................................................. [100%] 62 passed in 155.20s (0:02:35) pytest -q transportpce_tests/7.1/test03_renderer_or_modes.py ................................................ [100%] 48 passed in 133.60s (0:02:13) pytest -q transportpce_tests/7.1/test04_renderer_regen_mode.py ...................... [100%] 22 passed in 71.71s (0:01:11) tests121: FAIL ✖ in 7 minutes 40.55 seconds tests71: OK ✔ in 6 minutes 50.67 seconds tests221: install_deps> python -I -m pip install 'setuptools>=7.0' -r /w/workspace/transportpce-tox-verify-scandium/tests/requirements.txt -r /w/workspace/transportpce-tox-verify-scandium/tests/test-requirements.txt tests221: freeze> python -m pip freeze --all tests221: bcrypt==4.2.0,certifi==2024.8.30,cffi==1.17.1,charset-normalizer==3.4.0,cryptography==43.0.3,dict2xml==1.7.6,idna==3.10,iniconfig==2.0.0,lxml==5.3.0,netconf-client==3.1.1,packaging==24.1,paramiko==3.5.0,pip==24.3.1,pluggy==1.5.0,psutil==6.1.0,pycparser==2.22,PyNaCl==1.5.0,pytest==8.3.3,requests==2.32.3,setuptools==75.2.0,urllib3==2.2.3,wheel==0.44.0 tests221: commands[0] /w/workspace/transportpce-tox-verify-scandium/tests> ./launch_tests.sh 2.2.1 using environment variables from ./karaf221.env pytest -q transportpce_tests/2.2.1/test01_portmapping.py ................................... [100%] 35 passed in 75.14s (0:01:15) pytest -q transportpce_tests/2.2.1/test02_topo_portmapping.py ...... [100%] 6 passed in 43.65s pytest -q transportpce_tests/2.2.1/test03_topology.py ............................................ [100%] 44 passed in 135.16s (0:02:15) pytest -q transportpce_tests/2.2.1/test04_otn_topology.py ............ [100%] 12 passed in 58.89s pytest -q transportpce_tests/2.2.1/test05_flex_grid.py ................ [100%] 16 passed in 113.55s (0:01:53) pytest -q transportpce_tests/2.2.1/test06_renderer_service_path_nominal.py ............................... [100%] 31 passed in 34.47s pytest -q transportpce_tests/2.2.1/test07_otn_renderer.py .......................... [100%] 26 passed in 90.01s (0:01:30) pytest -q transportpce_tests/2.2.1/test08_otn_sh_renderer.py ...................... [100%] 22 passed in 98.40s (0:01:38) pytest -q transportpce_tests/2.2.1/test09_olm.py ........................................ [100%] 40 passed in 181.24s (0:03:01) pytest -q transportpce_tests/2.2.1/test11_otn_end2end.py ........................................................................ [ 74%] ......................... [100%] 97 passed in 489.35s (0:08:09) pytest -q transportpce_tests/2.2.1/test12_end2end.py ...................................................... [100%] 54 passed in 446.31s (0:07:26) pytest -q transportpce_tests/2.2.1/test14_otn_switch_end2end.py ........................................................................ [ 71%] ............................. 
[100%] 101 passed in 489.58s (0:08:09) pytest -q transportpce_tests/2.2.1/test15_otn_end2end_with_intermediate_switch.py ........................................................................ [ 67%] ................................... [100%] 107 passed in 598.99s (0:09:58) tests221: OK ✔ in 47 minutes 43.54 seconds tests_hybrid: install_deps> python -I -m pip install 'setuptools>=7.0' -r /w/workspace/transportpce-tox-verify-scandium/tests/requirements.txt -r /w/workspace/transportpce-tox-verify-scandium/tests/test-requirements.txt tests_hybrid: freeze> python -m pip freeze --all tests_hybrid: bcrypt==4.2.0,certifi==2024.8.30,cffi==1.17.1,charset-normalizer==3.4.0,cryptography==43.0.3,dict2xml==1.7.6,idna==3.10,iniconfig==2.0.0,lxml==5.3.0,netconf-client==3.1.1,packaging==24.1,paramiko==3.5.0,pip==24.3.1,pluggy==1.5.0,psutil==6.1.0,pycparser==2.22,PyNaCl==1.5.0,pytest==8.3.3,requests==2.32.3,setuptools==75.2.0,urllib3==2.2.3,wheel==0.44.0 tests_hybrid: commands[0] /w/workspace/transportpce-tox-verify-scandium/tests> ./launch_tests.sh hybrid using environment variables from ./karaf121.env pytest -q transportpce_tests/hybrid/test01_device_change_notifications.py ................................................... [100%] 51 passed in 329.48s (0:05:29) pytest -q transportpce_tests/hybrid/test02_B100G_end2end.py ........................................................................ [ 66%] ..................................... [100%] 109 passed in 427.31s (0:07:07) pytest -q transportpce_tests/hybrid/test03_autonomous_reroute.py ..................................................... [100%] 53 passed in 438.29s (0:07:18) tests_hybrid: OK ✔ in 20 minutes 1.9 seconds buildlighty: install_deps> python -I -m pip install 'setuptools>=7.0' -r /w/workspace/transportpce-tox-verify-scandium/tests/requirements.txt -r /w/workspace/transportpce-tox-verify-scandium/tests/test-requirements.txt buildlighty: freeze> python -m pip freeze --all buildlighty: bcrypt==4.2.0,certifi==2024.8.30,cffi==1.17.1,charset-normalizer==3.4.0,cryptography==43.0.3,dict2xml==1.7.6,idna==3.10,iniconfig==2.0.0,lxml==5.3.0,netconf-client==3.1.1,packaging==24.1,paramiko==3.5.0,pip==24.3.1,pluggy==1.5.0,psutil==6.1.0,pycparser==2.22,PyNaCl==1.5.0,pytest==8.3.3,requests==2.32.3,setuptools==75.2.0,urllib3==2.2.3,wheel==0.44.0 buildlighty: commands[0] /w/workspace/transportpce-tox-verify-scandium/lighty> ./build.sh NOTE: Picked up JDK_JAVA_OPTIONS: --add-opens=java.base/java.lang=ALL-UNNAMED --add-opens=java.base/java.nio=ALL-UNNAMED [ERROR] COMPILATION ERROR : [ERROR] /w/workspace/transportpce-tox-verify-scandium/lighty/src/main/java/io/lighty/controllers/tpce/utils/TPCEUtils.java:[17,42] cannot find symbol symbol: class YangModuleInfo location: package org.opendaylight.yangtools.binding [ERROR] /w/workspace/transportpce-tox-verify-scandium/lighty/src/main/java/io/lighty/controllers/tpce/utils/TPCEUtils.java:[21,30] cannot find symbol symbol: class YangModuleInfo location: class io.lighty.controllers.tpce.utils.TPCEUtils [ERROR] /w/workspace/transportpce-tox-verify-scandium/lighty/src/main/java/io/lighty/controllers/tpce/utils/TPCEUtils.java:[343,30] cannot find symbol symbol: class YangModuleInfo location: class io.lighty.controllers.tpce.utils.TPCEUtils [ERROR] /w/workspace/transportpce-tox-verify-scandium/lighty/src/main/java/io/lighty/controllers/tpce/utils/TPCEUtils.java:[350,23] cannot find symbol symbol: class YangModuleInfo location: class io.lighty.controllers.tpce.utils.TPCEUtils [ERROR] Failed to execute goal 
org.apache.maven.plugins:maven-compiler-plugin:3.13.0:compile (default-compile) on project tpce: Compilation failure: Compilation failure: [ERROR] /w/workspace/transportpce-tox-verify-scandium/lighty/src/main/java/io/lighty/controllers/tpce/utils/TPCEUtils.java:[17,42] cannot find symbol [ERROR] symbol: class YangModuleInfo [ERROR] location: package org.opendaylight.yangtools.binding [ERROR] /w/workspace/transportpce-tox-verify-scandium/lighty/src/main/java/io/lighty/controllers/tpce/utils/TPCEUtils.java:[21,30] cannot find symbol [ERROR] symbol: class YangModuleInfo [ERROR] location: class io.lighty.controllers.tpce.utils.TPCEUtils [ERROR] /w/workspace/transportpce-tox-verify-scandium/lighty/src/main/java/io/lighty/controllers/tpce/utils/TPCEUtils.java:[343,30] cannot find symbol [ERROR] symbol: class YangModuleInfo [ERROR] location: class io.lighty.controllers.tpce.utils.TPCEUtils [ERROR] /w/workspace/transportpce-tox-verify-scandium/lighty/src/main/java/io/lighty/controllers/tpce/utils/TPCEUtils.java:[350,23] cannot find symbol [ERROR] symbol: class YangModuleInfo [ERROR] location: class io.lighty.controllers.tpce.utils.TPCEUtils [ERROR] -> [Help 1] [ERROR] [ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch. [ERROR] Re-run Maven using the -X switch to enable full debug logging. [ERROR] [ERROR] For more information about the errors and possible solutions, please read the following articles: [ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/MojoFailureException unzip: cannot find or open target/tpce-bin.zip, target/tpce-bin.zip.zip or target/tpce-bin.zip.ZIP. buildlighty: exit 9 (15.35 seconds) /w/workspace/transportpce-tox-verify-scandium/lighty> ./build.sh pid=59777 buildlighty: command failed but is marked ignore outcome so handling it as success buildcontroller: OK (105.47=setup[7.55]+cmd[97.93] seconds) testsPCE: OK (312.56=setup[64.83]+cmd[247.74] seconds) sims: OK (13.27=setup[6.92]+cmd[6.35] seconds) build_karaf_tests121: OK (53.97=setup[6.96]+cmd[47.01] seconds) tests121: FAIL code 1 (460.55=setup[8.58]+cmd[451.98] seconds) build_karaf_tests221: OK (52.63=setup[6.91]+cmd[45.72] seconds) tests_tapi: FAIL code 1 (697.22=setup[11.86]+cmd[685.36] seconds) tests221: OK (2863.54=setup[5.91]+cmd[2857.63] seconds) build_karaf_tests71: OK (54.08=setup[12.87]+cmd[41.21] seconds) tests71: OK (410.67=setup[6.55]+cmd[404.12] seconds) build_karaf_tests_hybrid: OK (58.78=setup[11.81]+cmd[46.97] seconds) tests_hybrid: OK (1201.90=setup[6.15]+cmd[1195.75] seconds) buildlighty: OK (21.61=setup[6.26]+cmd[15.35] seconds) docs: OK (32.64=setup[30.05]+cmd[2.59] seconds) docs-linkcheck: OK (33.50=setup[30.08]+cmd[3.43] seconds) checkbashisms: OK (3.07=setup[2.12]+cmd[0.02,0.06,0.87] seconds) pre-commit: OK (50.83=setup[3.18]+cmd[0.01,0.01,36.57,11.06] seconds) pylint: OK (28.08=setup[5.64]+cmd[22.44] seconds) evaluation failed :( (5354.47 seconds) + tox_status=255 + echo '---> Completed tox runs' ---> Completed tox runs + for i in .tox/*/log ++ echo .tox/build_karaf_tests121/log ++ awk -F/ '{print $2}' + tox_env=build_karaf_tests121 + cp -r .tox/build_karaf_tests121/log /w/workspace/transportpce-tox-verify-scandium/archives/tox/build_karaf_tests121 + for i in .tox/*/log ++ echo .tox/build_karaf_tests221/log ++ awk -F/ '{print $2}' + tox_env=build_karaf_tests221 + cp -r .tox/build_karaf_tests221/log /w/workspace/transportpce-tox-verify-scandium/archives/tox/build_karaf_tests221 + for i in .tox/*/log ++ echo .tox/build_karaf_tests71/log ++ awk -F/ '{print 
$2}' + tox_env=build_karaf_tests71 + cp -r .tox/build_karaf_tests71/log /w/workspace/transportpce-tox-verify-scandium/archives/tox/build_karaf_tests71 + for i in .tox/*/log ++ echo .tox/build_karaf_tests_hybrid/log ++ awk -F/ '{print $2}' + tox_env=build_karaf_tests_hybrid + cp -r .tox/build_karaf_tests_hybrid/log /w/workspace/transportpce-tox-verify-scandium/archives/tox/build_karaf_tests_hybrid + for i in .tox/*/log ++ awk -F/ '{print $2}' ++ echo .tox/buildcontroller/log + tox_env=buildcontroller + cp -r .tox/buildcontroller/log /w/workspace/transportpce-tox-verify-scandium/archives/tox/buildcontroller + for i in .tox/*/log ++ echo .tox/buildlighty/log ++ awk -F/ '{print $2}' + tox_env=buildlighty + cp -r .tox/buildlighty/log /w/workspace/transportpce-tox-verify-scandium/archives/tox/buildlighty + for i in .tox/*/log ++ echo .tox/checkbashisms/log ++ awk -F/ '{print $2}' + tox_env=checkbashisms + cp -r .tox/checkbashisms/log /w/workspace/transportpce-tox-verify-scandium/archives/tox/checkbashisms + for i in .tox/*/log ++ echo .tox/docs-linkcheck/log ++ awk -F/ '{print $2}' + tox_env=docs-linkcheck + cp -r .tox/docs-linkcheck/log /w/workspace/transportpce-tox-verify-scandium/archives/tox/docs-linkcheck + for i in .tox/*/log ++ echo .tox/docs/log ++ awk -F/ '{print $2}' + tox_env=docs + cp -r .tox/docs/log /w/workspace/transportpce-tox-verify-scandium/archives/tox/docs + for i in .tox/*/log ++ echo .tox/pre-commit/log ++ awk -F/ '{print $2}' + tox_env=pre-commit + cp -r .tox/pre-commit/log /w/workspace/transportpce-tox-verify-scandium/archives/tox/pre-commit + for i in .tox/*/log ++ echo .tox/pylint/log ++ awk -F/ '{print $2}' + tox_env=pylint + cp -r .tox/pylint/log /w/workspace/transportpce-tox-verify-scandium/archives/tox/pylint + for i in .tox/*/log ++ echo .tox/sims/log ++ awk -F/ '{print $2}' + tox_env=sims + cp -r .tox/sims/log /w/workspace/transportpce-tox-verify-scandium/archives/tox/sims + for i in .tox/*/log ++ echo .tox/tests121/log ++ awk -F/ '{print $2}' + tox_env=tests121 + cp -r .tox/tests121/log /w/workspace/transportpce-tox-verify-scandium/archives/tox/tests121 + for i in .tox/*/log ++ echo .tox/tests221/log ++ awk -F/ '{print $2}' + tox_env=tests221 + cp -r .tox/tests221/log /w/workspace/transportpce-tox-verify-scandium/archives/tox/tests221 + for i in .tox/*/log ++ echo .tox/tests71/log ++ awk -F/ '{print $2}' + tox_env=tests71 + cp -r .tox/tests71/log /w/workspace/transportpce-tox-verify-scandium/archives/tox/tests71 + for i in .tox/*/log ++ echo .tox/testsPCE/log ++ awk -F/ '{print $2}' + tox_env=testsPCE + cp -r .tox/testsPCE/log /w/workspace/transportpce-tox-verify-scandium/archives/tox/testsPCE + for i in .tox/*/log ++ echo .tox/tests_hybrid/log ++ awk -F/ '{print $2}' + tox_env=tests_hybrid + cp -r .tox/tests_hybrid/log /w/workspace/transportpce-tox-verify-scandium/archives/tox/tests_hybrid + for i in .tox/*/log ++ echo .tox/tests_tapi/log ++ awk -F/ '{print $2}' + tox_env=tests_tapi + cp -r .tox/tests_tapi/log /w/workspace/transportpce-tox-verify-scandium/archives/tox/tests_tapi + DOC_DIR=docs/_build/html + [[ -d docs/_build/html ]] + echo '---> Archiving generated docs' ---> Archiving generated docs + mv docs/_build/html /w/workspace/transportpce-tox-verify-scandium/archives/docs + echo '---> tox-run.sh ends' ---> tox-run.sh ends + test 255 -eq 0 + exit 255 ++ '[' 1 = 1 ']' ++ '[' -x /usr/bin/clear_console ']' ++ /usr/bin/clear_console -q Build step 'Execute shell' marked build as failure $ ssh-agent -k unset SSH_AUTH_SOCK; unset SSH_AGENT_PID; echo Agent pid 
12442 killed; [ssh-agent] Stopped. [PostBuildScript] - [INFO] Executing post build scripts. [transportpce-tox-verify-scandium] $ /bin/bash /tmp/jenkins16133179137486176635.sh ---> sysstat.sh [transportpce-tox-verify-scandium] $ /bin/bash /tmp/jenkins1053443348426218276.sh ---> package-listing.sh ++ tr '[:upper:]' '[:lower:]' ++ facter osfamily + OS_FAMILY=debian + workspace=/w/workspace/transportpce-tox-verify-scandium + START_PACKAGES=/tmp/packages_start.txt + END_PACKAGES=/tmp/packages_end.txt + DIFF_PACKAGES=/tmp/packages_diff.txt + PACKAGES=/tmp/packages_start.txt + '[' /w/workspace/transportpce-tox-verify-scandium ']' + PACKAGES=/tmp/packages_end.txt + case "${OS_FAMILY}" in + dpkg -l + grep '^ii' + '[' -f /tmp/packages_start.txt ']' + '[' -f /tmp/packages_end.txt ']' + diff /tmp/packages_start.txt /tmp/packages_end.txt + '[' /w/workspace/transportpce-tox-verify-scandium ']' + mkdir -p /w/workspace/transportpce-tox-verify-scandium/archives/ + cp -f /tmp/packages_diff.txt /tmp/packages_end.txt /tmp/packages_start.txt /w/workspace/transportpce-tox-verify-scandium/archives/ [transportpce-tox-verify-scandium] $ /bin/bash /tmp/jenkins11106167330898323799.sh ---> capture-instance-metadata.sh Setup pyenv: system 3.8.13 3.9.13 3.10.13 * 3.11.7 (set by /w/workspace/transportpce-tox-verify-scandium/.python-version) lf-activate-venv(): INFO: Reuse venv:/tmp/venv-k8Tq from file:/tmp/.os_lf_venv lf-activate-venv(): INFO: Installing: lftools lf-activate-venv(): INFO: Adding /tmp/venv-k8Tq/bin to PATH INFO: Running in OpenStack, capturing instance metadata [transportpce-tox-verify-scandium] $ /bin/bash /tmp/jenkins762169925862126132.sh provisioning config files... Could not find credentials [logs] for transportpce-tox-verify-scandium #20 copy managed file [jenkins-log-archives-settings] to file:/w/workspace/transportpce-tox-verify-scandium@tmp/config2302471932504305613tmp Regular expression run condition: Expression=[^.*logs-s3.*], Label=[odl-logs-s3-cloudfront-index] Run condition [Regular expression match] enabling perform for step [Provide Configuration files] provisioning config files... copy managed file [jenkins-s3-log-ship] to file:/home/jenkins/.aws/credentials [EnvInject] - Injecting environment variables from a build step. [EnvInject] - Injecting as environment variables the properties content SERVER_ID=logs [EnvInject] - Variables injected successfully. [transportpce-tox-verify-scandium] $ /bin/bash /tmp/jenkins8079030587549812886.sh ---> create-netrc.sh WARN: Log server credential not found. [transportpce-tox-verify-scandium] $ /bin/bash /tmp/jenkins13599605058977683220.sh ---> python-tools-install.sh Setup pyenv: system 3.8.13 3.9.13 3.10.13 * 3.11.7 (set by /w/workspace/transportpce-tox-verify-scandium/.python-version) lf-activate-venv(): INFO: Reuse venv:/tmp/venv-k8Tq from file:/tmp/.os_lf_venv lf-activate-venv(): INFO: Installing: lftools lf-activate-venv(): INFO: Adding /tmp/venv-k8Tq/bin to PATH [transportpce-tox-verify-scandium] $ /bin/bash /tmp/jenkins11503547472913633021.sh ---> sudo-logs.sh Archiving 'sudo' log.. [transportpce-tox-verify-scandium] $ /bin/bash /tmp/jenkins12015059085945295456.sh ---> job-cost.sh Setup pyenv: system 3.8.13 3.9.13 3.10.13 * 3.11.7 (set by /w/workspace/transportpce-tox-verify-scandium/.python-version) lf-activate-venv(): INFO: Reuse venv:/tmp/venv-k8Tq from file:/tmp/.os_lf_venv lf-activate-venv(): INFO: Installing: zipp==1.1.0 python-openstackclient urllib3~=1.26.15 lf-activate-venv(): INFO: Adding /tmp/venv-k8Tq/bin to PATH INFO: No Stack... 
INFO: Retrieving Pricing Info for: v3-standard-4 INFO: Archiving Costs [transportpce-tox-verify-scandium] $ /bin/bash -l /tmp/jenkins4217698976221125227.sh ---> logs-deploy.sh Setup pyenv: system 3.8.13 3.9.13 3.10.13 * 3.11.7 (set by /w/workspace/transportpce-tox-verify-scandium/.python-version) lf-activate-venv(): INFO: Reuse venv:/tmp/venv-k8Tq from file:/tmp/.os_lf_venv lf-activate-venv(): INFO: Installing: lftools lf-activate-venv(): INFO: Adding /tmp/venv-k8Tq/bin to PATH WARNING: Nexus logging server not set INFO: S3 path logs/releng/vex-yul-odl-jenkins-1/transportpce-tox-verify-scandium/20/ INFO: archiving logs to S3 ---> uname -a: Linux prd-ubuntu2004-docker-4c-16g-2534 5.4.0-190-generic #210-Ubuntu SMP Fri Jul 5 17:03:38 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux ---> lscpu: Architecture: x86_64 CPU op-mode(s): 32-bit, 64-bit Byte Order: Little Endian Address sizes: 40 bits physical, 48 bits virtual CPU(s): 4 On-line CPU(s) list: 0-3 Thread(s) per core: 1 Core(s) per socket: 1 Socket(s): 4 NUMA node(s): 1 Vendor ID: AuthenticAMD CPU family: 23 Model: 49 Model name: AMD EPYC-Rome Processor Stepping: 0 CPU MHz: 2799.998 BogoMIPS: 5599.99 Virtualization: AMD-V Hypervisor vendor: KVM Virtualization type: full L1d cache: 128 KiB L1i cache: 128 KiB L2 cache: 2 MiB L3 cache: 64 MiB NUMA node0 CPU(s): 0-3 Vulnerability Gather data sampling: Not affected Vulnerability Itlb multihit: Not affected Vulnerability L1tf: Not affected Vulnerability Mds: Not affected Vulnerability Meltdown: Not affected Vulnerability Mmio stale data: Not affected Vulnerability Retbleed: Vulnerable Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization Vulnerability Spectre v2: Mitigation; Retpolines; IBPB conditional; IBRS_FW; STIBP disabled; RSB filling; PBRSB-eIBRS Not affected; BHI Not affected Vulnerability Srbds: Not affected Vulnerability Tsx async abort: Not affected Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm rep_good nopl cpuid extd_apicid tsc_known_freq pni pclmulqdq ssse3 fma cx16 sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm cmp_legacy svm cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw topoext perfctr_core ssbd ibrs ibpb stibp vmmcall fsgsbase tsc_adjust bmi1 avx2 smep bmi2 rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 clzero xsaveerptr wbnoinvd arat npt nrip_save umip rdpid arch_capabilities ---> nproc: 4 ---> df -h: Filesystem Size Used Avail Use% Mounted on udev 7.8G 0 7.8G 0% /dev tmpfs 1.6G 1.1M 1.6G 1% /run /dev/vda1 78G 16G 62G 21% / tmpfs 7.9G 0 7.9G 0% /dev/shm tmpfs 5.0M 0 5.0M 0% /run/lock tmpfs 7.9G 0 7.9G 0% /sys/fs/cgroup /dev/loop0 68M 68M 0 100% /snap/lxd/22753 /dev/loop2 62M 62M 0 100% /snap/core20/1405 /dev/loop1 44M 44M 0 100% /snap/snapd/15177 /dev/vda15 105M 6.1M 99M 6% /boot/efi tmpfs 1.6G 0 1.6G 0% /run/user/1001 /dev/loop3 92M 92M 0 100% /snap/lxd/29619 ---> free -m: total used free shared buff/cache available Mem: 15997 659 5562 1 9775 14998 Swap: 1023 0 1023 ---> ip addr: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000 link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 inet 127.0.0.1/8 scope host lo valid_lft forever preferred_lft forever inet6 ::1/128 scope host valid_lft forever preferred_lft forever 2: ens3: mtu 1458 qdisc mq state UP 
group default qlen 1000 link/ether fa:16:3e:d3:fb:28 brd ff:ff:ff:ff:ff:ff inet 10.30.170.187/23 brd 10.30.171.255 scope global dynamic ens3 valid_lft 80888sec preferred_lft 80888sec inet6 fe80::f816:3eff:fed3:fb28/64 scope link valid_lft forever preferred_lft forever 3: docker0: mtu 1458 qdisc noqueue state DOWN group default link/ether 02:42:ff:62:55:93 brd ff:ff:ff:ff:ff:ff inet 10.250.0.254/24 brd 10.250.0.255 scope global docker0 valid_lft forever preferred_lft forever ---> sar -b -r -n DEV: Linux 5.4.0-190-generic (prd-ubuntu2004-docker-4c-16g-2534) 10/29/24 _x86_64_ (4 CPU) 13:01:27 LINUX RESTART (4 CPU) 13:02:02 tps rtps wtps dtps bread/s bwrtn/s bdscd/s 13:03:01 291.90 88.50 203.41 0.00 2589.90 26081.62 0.00 13:04:01 224.09 35.60 188.49 0.00 2489.29 43634.58 0.00 13:05:01 108.87 7.38 101.48 0.00 322.53 58644.53 0.00 13:06:01 153.38 0.47 152.92 0.00 35.32 122810.00 0.00 13:07:01 181.00 15.01 165.99 0.00 2967.77 108969.44 0.00 13:08:02 164.86 2.13 162.73 0.00 71.71 6573.94 0.00 13:09:01 81.53 1.41 80.12 0.00 156.58 1654.91 0.00 13:10:01 82.30 0.25 82.05 0.00 29.46 1426.30 0.00 13:11:01 152.07 0.33 151.74 0.00 57.06 5598.67 0.00 13:12:01 60.77 0.02 60.76 0.00 0.27 6204.57 0.00 13:13:01 2.87 0.00 2.87 0.00 0.00 66.12 0.00 13:14:01 2.85 0.00 2.85 0.00 0.00 47.33 0.00 13:15:01 2.97 0.00 2.97 0.00 0.00 159.44 0.00 13:16:01 1.88 0.00 1.88 0.00 0.00 25.06 0.00 13:17:01 2.85 0.78 2.07 0.00 13.33 26.40 0.00 13:18:01 28.88 0.07 28.81 0.00 2.00 1606.67 0.00 13:19:01 139.94 0.00 139.94 0.00 0.00 9712.91 0.00 13:20:01 2.20 0.00 2.20 0.00 0.00 40.52 0.00 13:21:01 1.62 0.00 1.62 0.00 0.00 30.00 0.00 13:22:01 85.23 0.02 85.22 0.00 0.13 1443.47 0.00 13:23:01 1.97 0.00 1.97 0.00 0.00 43.99 0.00 13:24:01 78.37 0.00 78.37 0.00 0.00 1143.01 0.00 13:25:01 32.03 0.00 32.03 0.00 0.00 1411.76 0.00 13:26:01 77.97 0.00 77.97 0.00 0.00 2707.82 0.00 13:27:01 82.29 0.00 82.29 0.00 0.00 1205.67 0.00 13:28:01 60.64 0.00 60.64 0.00 0.00 850.12 0.00 13:29:01 24.63 0.00 24.63 0.00 0.00 490.98 0.00 13:30:01 75.17 0.00 75.17 0.00 0.00 1087.55 0.00 13:31:01 49.06 0.00 49.06 0.00 0.00 688.69 0.00 13:32:01 25.89 0.00 25.89 0.00 0.00 394.40 0.00 13:33:01 142.63 0.00 142.63 0.00 0.00 1939.01 0.00 13:34:01 16.78 0.00 16.78 0.00 0.00 271.02 0.00 13:35:01 58.06 0.00 58.06 0.00 0.00 788.94 0.00 13:36:01 16.90 0.00 16.90 0.00 0.00 287.82 0.00 13:37:01 57.27 0.00 57.27 0.00 0.00 768.67 0.00 13:38:01 2.78 0.00 2.78 0.00 0.00 50.79 0.00 13:39:01 17.10 0.00 17.10 0.00 0.00 300.35 0.00 13:40:01 61.06 0.00 61.06 0.00 0.00 817.73 0.00 13:41:01 1.90 0.00 1.90 0.00 0.00 41.99 0.00 13:42:01 1.75 0.00 1.75 0.00 0.00 28.26 0.00 13:43:01 1.67 0.00 1.67 0.00 0.00 36.93 0.00 13:44:01 2.42 0.00 2.42 0.00 0.00 52.66 0.00 13:45:01 1.62 0.00 1.62 0.00 0.00 30.00 0.00 13:46:01 2.80 0.00 2.80 0.00 0.00 46.66 0.00 13:47:01 16.01 0.00 16.01 0.00 0.00 276.62 0.00 13:48:01 62.97 0.00 62.97 0.00 0.00 853.19 0.00 13:49:01 2.43 0.00 2.43 0.00 0.00 53.86 0.00 13:50:01 2.05 0.00 2.05 0.00 0.00 49.73 0.00 13:51:01 2.05 0.00 2.05 0.00 0.00 53.86 0.00 13:52:01 2.68 0.00 2.68 0.00 0.00 51.19 0.00 13:53:01 2.27 0.00 2.27 0.00 0.00 57.46 0.00 13:54:01 1.98 0.00 1.98 0.00 0.00 57.19 0.00 13:55:01 72.45 0.00 72.45 0.00 0.00 1021.70 0.00 13:56:01 2.03 0.00 2.03 0.00 0.00 55.99 0.00 13:57:01 2.00 0.00 2.00 0.00 0.00 47.45 0.00 13:58:01 2.30 0.00 2.30 0.00 0.00 45.73 0.00 13:59:01 1.73 0.00 1.73 0.00 0.00 33.06 0.00 14:00:01 2.85 0.00 2.85 0.00 0.00 62.52 0.00 14:01:01 1.93 0.00 1.93 0.00 0.00 42.66 0.00 14:02:01 1.78 0.00 1.78 0.00 0.00 35.73 0.00 14:03:01 62.02 0.28 
61.74 0.00 16.26 880.39 0.00 14:04:01 29.83 0.02 29.81 0.00 1.47 412.60 0.00 14:05:01 3.32 0.00 3.32 0.00 0.00 81.32 0.00 14:06:01 2.48 0.00 2.48 0.00 0.00 46.93 0.00 14:07:01 1.60 0.00 1.60 0.00 0.00 28.53 0.00 14:08:01 3.12 0.00 3.12 0.00 0.00 53.46 0.00 14:09:01 2.47 0.00 2.47 0.00 0.00 42.13 0.00 14:10:01 2.55 0.00 2.55 0.00 0.00 49.33 0.00 14:11:01 2.73 0.00 2.73 0.00 0.00 44.12 0.00 14:12:01 3.43 0.00 3.43 0.00 0.00 64.93 0.00 14:13:01 112.18 0.00 112.18 0.00 0.00 10120.31 0.00 14:14:01 4.38 0.00 4.38 0.00 0.00 153.71 0.00 14:15:01 2.22 0.00 2.22 0.00 0.00 44.39 0.00 14:16:01 1.98 0.00 1.98 0.00 0.00 25.33 0.00 14:17:01 1.50 0.00 1.50 0.00 0.00 18.40 0.00 14:18:01 27.23 0.00 27.23 0.00 0.00 822.66 0.00 14:19:01 58.12 0.00 58.12 0.00 0.00 849.06 0.00 14:20:01 1.53 0.00 1.53 0.00 0.00 29.73 0.00 14:21:01 1.83 0.00 1.83 0.00 0.00 34.39 0.00 14:22:01 2.27 0.00 2.27 0.00 0.00 41.86 0.00 14:23:01 1.50 0.00 1.50 0.00 0.00 29.20 0.00 14:24:01 2.65 0.00 2.65 0.00 0.00 55.32 0.00 14:25:01 14.80 0.00 14.80 0.00 0.00 251.42 0.00 14:26:01 72.39 0.00 72.39 0.00 0.00 1042.63 0.00 14:27:01 2.65 0.00 2.65 0.00 0.00 157.71 0.00 14:28:01 3.83 0.00 3.83 0.00 0.00 96.92 0.00 14:29:01 2.55 0.00 2.55 0.00 0.00 63.19 0.00 14:30:01 1.53 0.00 1.53 0.00 0.00 24.93 0.00 14:31:01 1.87 0.00 1.87 0.00 0.00 21.59 0.00 14:32:01 2.22 0.00 2.22 0.00 0.00 28.13 0.00 14:33:01 39.40 4.92 34.49 0.00 245.52 3494.97 0.00 Average: 36.39 1.71 34.68 0.00 98.43 4745.82 0.00 13:02:02 kbmemfree kbavail kbmemused %memused kbbuffers kbcached kbcommit %commit kbactive kbinact kbdirty 13:03:01 13352396 15422940 557416 3.40 71228 2195752 1263900 7.25 816740 1937100 172312 13:04:01 10996716 14659840 1291472 7.88 130676 3612024 2328788 13.36 1735428 3225900 974252 13:05:01 9117916 13706664 2251452 13.74 152336 4469216 2825976 16.21 2929384 3867672 678852 13:06:01 6591508 14452628 1503420 9.18 196984 7552964 2333060 13.39 3080452 6098796 1158932 13:07:01 3543052 12781316 3165012 19.32 221176 8850476 4695496 26.94 5290488 6862680 113664 13:08:02 165520 9151544 6792140 41.46 224884 8601048 8004128 45.92 8876504 6645436 1396 13:09:01 674612 9375220 6568424 40.10 227364 8317804 8118608 46.58 8650024 6367756 68 13:10:01 5864756 14432996 1513688 9.24 229236 8185120 2367816 13.58 3600992 6247392 3860 13:11:01 2116964 10930936 5013888 30.61 239916 8409688 6416576 36.81 7152812 6422700 161000 13:12:01 456684 9274072 6669756 40.72 242184 8410696 7593372 43.57 8807836 6419592 532 13:13:01 430928 9248656 6695092 40.87 242220 8410992 7593372 43.57 8833288 6419788 152 13:14:01 416872 9234804 6708940 40.95 242240 8411164 7609368 43.66 8846748 6419880 140 13:15:01 313856 9132220 6811560 41.58 242276 8411464 7643068 43.85 8947120 6420164 136 13:16:01 314088 9132464 6811228 41.58 242312 8411464 7643068 43.85 8946456 6420164 76 13:17:01 311736 9130636 6812764 41.59 242404 8411788 7663904 43.97 8947800 6420152 312 13:18:01 5441372 14505768 1440420 8.79 248700 8637564 2803572 16.08 3660888 6587580 221456 13:19:01 4777716 13846904 2099324 12.82 252160 8638748 2919856 16.75 4351372 6559164 108 13:20:01 4755288 13824636 2121516 12.95 252176 8638884 2919856 16.75 4374228 6559012 276 13:21:01 4743588 13813080 2133112 13.02 252188 8639016 2951848 16.94 4383808 6559116 156 13:22:01 4003108 13074320 2871504 17.53 253744 8639184 3657120 20.98 5134568 6548436 124 13:23:01 3961080 13032548 2913344 17.78 253760 8639428 3673112 21.07 5175140 6548668 68 13:24:01 4816544 13890464 2055784 12.55 256004 8639624 2966108 17.02 4324212 6547860 392 13:25:01 3873296 13016192 
2929644 17.88 258364 8699868 4219644 24.21 5198620 6607372 45832 13:26:01 5472928 14617348 1329296 8.11 259600 8700100 2754592 15.80 3609808 6602492 532 13:27:01 3797676 12944048 3001560 18.32 260856 8700660 4292228 24.63 5286132 6593664 244 13:28:01 2535444 11683100 4261888 26.02 261856 8700916 5182044 29.73 6547948 6593920 336 13:29:01 5450600 14598704 1347616 8.23 262136 8701032 2715968 15.58 3639968 6594032 512 13:30:01 5257960 14407656 1538512 9.39 263328 8701404 2789732 16.01 3832604 6593676 468 13:31:01 4664548 13814840 2131040 13.01 263816 8701500 2953072 16.94 4423500 6593756 72 13:32:01 4577692 13728432 2217408 13.54 263968 8701776 3476868 19.95 4510592 6593940 648 13:33:01 4450860 13603792 2342020 14.30 265528 8702296 3142936 18.03 4636656 6594168 96 13:34:01 5050876 14204112 1742180 10.64 265548 8702572 2863752 16.43 4037884 6594356 404 13:35:01 3840352 12994616 2950880 18.01 266304 8702824 3733800 21.42 5244052 6594604 56 13:36:01 3121580 12276136 3669004 22.40 266320 8703096 4902420 28.13 5959816 6594752 304 13:37:01 2455760 11611344 4333264 26.45 266884 8703544 5187056 29.76 6624736 6595184 184 13:38:01 2442244 11598060 4346468 26.53 266888 8703760 5187056 29.76 6636824 6595396 220 13:39:01 3034420 12190512 3754620 22.92 266904 8703960 4953296 28.42 6049096 6595568 184 13:40:01 2344784 11501912 4442644 27.12 267344 8704544 5245528 30.09 6734028 6596148 356 13:41:01 2329728 11487036 4457492 27.21 267348 8704720 5245528 30.09 6749056 6596328 112 13:42:01 2302764 11460176 4484344 27.37 267348 8704824 5261520 30.19 6776128 6596428 396 13:43:01 2304944 11462616 4481796 27.36 267360 8705084 5277508 30.28 6773616 6596676 212 13:44:01 2286508 11444580 4499796 27.47 267372 8705460 5293888 30.37 6790468 6597060 296 13:45:01 2260332 11418520 4525828 27.63 267380 8705564 5326580 30.56 6817128 6597164 320 13:46:01 2233928 11392292 4552056 27.79 267396 8705724 5342580 30.65 6842676 6597324 312 13:47:01 4112212 13270840 2674600 16.33 267420 8705908 4116708 23.62 4970376 6597508 660 13:48:01 2469232 11628752 4315660 26.34 267800 8706408 5146212 29.53 6606344 6597996 132 13:49:01 2433424 11593484 4350968 26.56 267804 8706952 5162212 29.62 6643520 6598532 336 13:50:01 2421108 11581808 4362612 26.63 267804 8707580 5194216 29.80 6654580 6599172 704 13:51:01 2386040 11547012 4397400 26.84 267812 8707848 5194216 29.80 6688208 6599436 340 13:52:01 2372432 11533948 4410460 26.92 267824 8708368 5194216 29.80 6700004 6599956 540 13:53:01 2354288 11516464 4427868 27.03 267840 8709016 5194216 29.80 6718064 6600576 476 13:54:01 4753968 13916584 2028756 12.38 267844 8709428 2870636 16.47 4326200 6601020 312 13:55:01 1527920 10690548 5253580 32.07 268176 8709084 6414892 36.80 7544232 6600660 288 13:56:01 1124680 10287668 5656148 34.53 268180 8709440 6562484 37.65 7943316 6601012 288 13:57:01 897680 10061288 5882252 35.91 268188 8710052 6692632 38.40 8167204 6601620 612 13:58:01 890200 10054004 5889640 35.95 268192 8710244 6724656 38.58 8175688 6601812 244 13:59:01 874584 10038560 5905048 36.05 268200 8710404 6740656 38.67 8189672 6601972 472 14:00:01 851100 10015540 5927972 36.19 268208 8710848 6740656 38.67 8212560 6602416 184 14:01:01 816172 9981040 5962496 36.40 268220 8711260 6740656 38.67 8246792 6602820 156 14:02:01 790256 9955268 5988264 36.56 268232 8711380 6772660 38.86 8272512 6602948 288 14:03:01 1941076 11106808 4835772 29.52 268712 8711268 6384880 36.63 7127228 6602568 340 14:04:01 1083660 10250280 5693440 34.76 268932 8712052 6614184 37.95 7978544 6602644 144 14:05:01 787088 9954596 5988700 
36.56 268936 8712900 6777644 38.89 8275964 6603492 224 14:06:01 770236 9937956 6005280 36.66 268944 8713108 6793644 38.98 8293676 6603696 196 14:07:01 762360 9930208 6013108 36.71 268956 8713212 6793644 38.98 8300944 6603808 260 14:08:01 743012 9911052 6032316 36.82 268960 8713396 6793644 38.98 8319564 6603988 408 14:09:01 731600 9899868 6043460 36.89 268968 8713612 6793644 38.98 8330364 6604204 504 14:10:01 726104 9894512 6048804 36.92 268968 8713744 6793644 38.98 8335988 6604336 192 14:11:01 698784 9867372 6075892 37.09 268976 8713920 6827188 39.17 8362500 6604508 244 14:12:01 663140 9832120 6111000 37.30 268980 8714288 6876064 39.45 8396836 6604880 216 14:13:01 2644216 12058796 3885392 23.72 275304 8939728 4976944 28.55 6243228 6775652 2072 14:14:01 2148188 11564328 4379500 26.73 275332 8941140 5209604 29.89 6737464 6774184 372 14:15:01 2050528 11466896 4476808 27.33 275336 8941368 5258656 30.17 6833908 6774328 256 14:16:01 2050096 11466468 4477236 27.33 275336 8941372 5258656 30.17 6833384 6774332 96 14:17:01 2049916 11466296 4477492 27.33 275336 8941376 5258656 30.17 6833824 6774336 276 14:18:01 4915552 14332136 1613200 9.85 275420 8941532 2735180 15.69 3996660 6756104 704 14:19:01 2082992 11500100 4443588 27.13 275716 8941736 5198824 29.83 6823348 6752848 312 14:20:01 2060156 11477436 4466220 27.26 275724 8941900 5230808 30.01 6846492 6752856 308 14:21:01 2039492 11457048 4486584 27.39 275724 8942176 5246812 30.10 6866280 6753124 228 14:22:01 2036444 11454100 4489628 27.41 275724 8942268 5246812 30.10 6868884 6753220 148 14:23:01 2016944 11434964 4508660 27.52 275736 8942624 5262804 30.19 6886816 6753576 592 14:24:01 1992256 11410608 4533012 27.67 275736 8942952 5262804 30.19 6911236 6753880 60 14:25:01 5519592 14938332 1007400 6.15 275740 8943244 1826972 10.48 3408052 6743108 392 14:26:01 1368916 10787828 5155592 31.47 275996 8943200 5939008 34.07 7545208 6742976 140 14:27:01 1314860 10734408 5208880 31.80 276000 8943756 5973068 34.27 7597804 6743384 480 14:28:01 1272664 10693016 5250136 32.05 276012 8944540 6005104 34.45 7638392 6744164 308 14:29:01 1191276 10612296 5330980 32.54 276028 8945180 6021124 34.54 7718720 6744804 112 14:30:01 1189480 10610544 5332724 32.55 276028 8945220 6021124 34.54 7720252 6744832 276 14:31:01 1189340 10610404 5332828 32.55 276028 8945220 6021124 34.54 7719712 6744832 56 14:32:01 1189120 10610192 5333028 32.56 276032 8945224 6021124 34.54 7719904 6744840 64 14:33:01 5773684 15408176 537776 3.28 281140 9138360 1269284 7.28 2971060 6911612 133748 Average: 2761908 11734754 4210426 25.70 258445 8539222 5115320 29.35 6429551 6493182 40551 13:02:02 IFACE rxpck/s txpck/s rxkB/s txkB/s rxcmp/s txcmp/s rxmcst/s %ifutil 13:03:01 docker0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 13:03:01 ens3 177.25 139.44 1281.59 36.13 0.00 0.00 0.00 0.00 13:03:01 lo 0.95 0.95 0.09 0.09 0.00 0.00 0.00 0.00 13:04:01 docker0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 13:04:01 ens3 428.54 329.70 6489.53 36.87 0.00 0.00 0.00 0.00 13:04:01 lo 6.26 6.26 0.63 0.63 0.00 0.00 0.00 0.00 13:05:01 docker0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 13:05:01 ens3 332.13 268.35 4674.12 26.92 0.00 0.00 0.00 0.00 13:05:01 lo 0.47 0.47 0.05 0.05 0.00 0.00 0.00 0.00 13:06:01 docker0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 13:06:01 ens3 185.89 106.83 3961.22 12.55 0.00 0.00 0.00 0.00 13:06:01 lo 1.33 1.33 0.13 0.13 0.00 0.00 0.00 0.00 13:07:01 docker0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 13:07:01 ens3 79.44 26.78 1394.06 2.60 0.00 0.00 0.00 0.00 13:07:01 lo 4.22 4.22 0.44 0.44 0.00 0.00 0.00 0.00 
13:08:02 docker0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 13:08:02 ens3 7.40 3.47 5.96 0.58 0.00 0.00 0.00 0.00 13:08:02 lo 27.69 27.69 29.54 29.54 0.00 0.00 0.00 0.00 13:09:01 docker0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 13:09:01 ens3 1.37 1.22 0.25 0.23 0.00 0.00 0.00 0.00 13:09:01 lo 32.64 32.64 25.32 25.32 0.00 0.00 0.00 0.00 13:10:01 docker0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 13:10:01 ens3 1.20 1.08 0.21 0.20 0.00 0.00 0.00 0.00 13:10:01 lo 37.34 37.34 17.34 17.34 0.00 0.00 0.00 0.00 13:11:01 docker0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 13:11:01 ens3 1.92 2.17 0.80 0.72 0.00 0.00 0.00 0.00 13:11:01 lo 5.07 5.07 1.24 1.24 0.00 0.00 0.00 0.00 13:12:01 docker0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 13:12:01 ens3 0.87 0.72 0.20 0.18 0.00 0.00 0.00 0.00 13:12:01 lo 35.81 35.81 31.49 31.49 0.00 0.00 0.00 0.00 13:13:01 docker0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 13:13:01 ens3 0.68 0.22 0.10 0.04 0.00 0.00 0.00 0.00 13:13:01 lo 11.78 11.78 5.86 5.86 0.00 0.00 0.00 0.00 13:14:01 docker0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 13:14:01 ens3 0.97 0.78 0.36 0.29 0.00 0.00 0.00 0.00 13:14:01 lo 15.15 15.15 5.66 5.66 0.00 0.00 0.00 0.00 13:15:01 docker0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 13:15:01 ens3 0.85 0.42 0.14 0.09 0.00 0.00 0.00 0.00 13:15:01 lo 21.18 21.18 7.45 7.45 0.00 0.00 0.00 0.00 13:16:01 docker0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 13:16:01 ens3 1.37 0.48 0.61 0.43 0.00 0.00 0.00 0.00 13:16:01 lo 0.50 0.50 0.04 0.04 0.00 0.00 0.00 0.00 13:17:01 docker0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 13:17:01 ens3 1.75 0.47 0.51 0.28 0.00 0.00 0.00 0.00 13:17:01 lo 0.32 0.32 0.03 0.03 0.00 0.00 0.00 0.00 13:18:01 docker0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 13:18:01 ens3 15.31 14.18 3.88 9.51 0.00 0.00 0.00 0.00 13:18:01 lo 2.35 2.35 0.33 0.33 0.00 0.00 0.00 0.00 13:19:01 docker0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 13:19:01 ens3 0.97 0.90 0.17 0.16 0.00 0.00 0.00 0.00 13:19:01 lo 15.26 15.26 20.40 20.40 0.00 0.00 0.00 0.00 13:20:01 docker0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 13:20:01 ens3 2.10 1.02 0.32 0.20 0.00 0.00 0.00 0.00 13:20:01 lo 24.94 24.94 8.26 8.26 0.00 0.00 0.00 0.00 13:21:01 docker0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 13:21:01 ens3 2.35 1.48 0.83 0.64 0.00 0.00 0.00 0.00 13:21:01 lo 23.53 23.53 7.45 7.45 0.00 0.00 0.00 0.00 13:22:01 docker0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 13:22:01 ens3 1.15 1.10 0.23 0.23 0.00 0.00 0.00 0.00 13:22:01 lo 17.67 17.67 11.21 11.21 0.00 0.00 0.00 0.00 13:23:01 docker0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 13:23:01 ens3 1.63 1.47 0.29 0.24 0.00 0.00 0.00 0.00 13:23:01 lo 42.00 42.00 13.99 13.99 0.00 0.00 0.00 0.00 13:24:01 docker0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 13:24:01 ens3 1.88 1.45 0.50 0.37 0.00 0.00 0.00 0.00 13:24:01 lo 25.88 25.88 11.37 11.37 0.00 0.00 0.00 0.00 13:25:01 docker0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 13:25:01 ens3 2.27 2.70 0.97 0.87 0.00 0.00 0.00 0.00 13:25:01 lo 4.12 4.12 0.56 0.56 0.00 0.00 0.00 0.00 13:26:01 docker0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 13:26:01 ens3 1.45 2.00 0.30 0.32 0.00 0.00 0.00 0.00 13:26:01 lo 15.78 15.78 10.15 10.15 0.00 0.00 0.00 0.00 13:27:01 docker0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 13:27:01 ens3 0.97 0.95 0.19 0.18 0.00 0.00 0.00 0.00 13:27:01 lo 13.40 13.40 4.65 4.65 0.00 0.00 0.00 0.00 13:28:01 docker0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 13:28:01 ens3 1.08 0.93 0.19 0.18 0.00 0.00 0.00 0.00 13:28:01 lo 20.86 20.86 10.10 10.10 0.00 0.00 0.00 0.00 13:29:01 docker0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 13:29:01 ens3 1.10 
1.35 0.21 0.22 0.00 0.00 0.00 0.00 13:29:01 lo 10.96 10.96 3.96 3.96 0.00 0.00 0.00 0.00 13:30:01 docker0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 13:30:01 ens3 1.38 1.38 0.41 0.36 0.00 0.00 0.00 0.00 13:30:01 lo 5.75 5.75 6.29 6.29 0.00 0.00 0.00 0.00 13:31:01 docker0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 13:31:01 ens3 0.60 0.52 0.09 0.09 0.00 0.00 0.00 0.00 13:31:01 lo 7.93 7.93 3.08 3.08 0.00 0.00 0.00 0.00 13:32:01 docker0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 13:32:01 ens3 1.50 1.07 0.31 0.19 0.00 0.00 0.00 0.00 13:32:01 lo 3.83 3.83 0.58 0.58 0.00 0.00 0.00 0.00 13:33:01 docker0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 13:33:01 ens3 1.18 1.20 0.43 0.38 0.00 0.00 0.00 0.00 13:33:01 lo 32.49 32.49 16.31 16.31 0.00 0.00 0.00 0.00 13:34:01 docker0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 13:34:01 ens3 1.12 1.38 0.20 0.21 0.00 0.00 0.00 0.00 13:34:01 lo 18.31 18.31 7.03 7.03 0.00 0.00 0.00 0.00 13:35:01 docker0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 13:35:01 ens3 0.77 0.73 0.16 0.10 0.00 0.00 0.00 0.00 13:35:01 lo 19.93 19.93 19.55 19.55 0.00 0.00 0.00 0.00 13:36:01 docker0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 13:36:01 ens3 0.98 1.08 0.15 0.15 0.00 0.00 0.00 0.00 13:36:01 lo 4.28 4.28 0.81 0.81 0.00 0.00 0.00 0.00 13:37:01 docker0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 13:37:01 ens3 0.95 1.02 0.22 0.21 0.00 0.00 0.00 0.00 13:37:01 lo 45.13 45.13 17.97 17.97 0.00 0.00 0.00 0.00 13:38:01 docker0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 13:38:01 ens3 0.82 0.62 0.14 0.12 0.00 0.00 0.00 0.00 13:38:01 lo 35.08 35.08 9.96 9.96 0.00 0.00 0.00 0.00 13:39:01 docker0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 13:39:01 ens3 0.77 0.65 0.10 0.10 0.00 0.00 0.00 0.00 13:39:01 lo 13.86 13.86 4.00 4.00 0.00 0.00 0.00 0.00 13:40:01 docker0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 13:40:01 ens3 0.88 0.62 0.14 0.11 0.00 0.00 0.00 0.00 13:40:01 lo 31.59 31.59 22.59 22.59 0.00 0.00 0.00 0.00 13:41:01 docker0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 13:41:01 ens3 0.65 0.47 0.16 0.09 0.00 0.00 0.00 0.00 13:41:01 lo 10.78 10.78 6.85 6.85 0.00 0.00 0.00 0.00 13:42:01 docker0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 13:42:01 ens3 1.20 0.75 0.26 0.19 0.00 0.00 0.00 0.00 13:42:01 lo 14.86 14.86 6.67 6.67 0.00 0.00 0.00 0.00 13:43:01 docker0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 13:43:01 ens3 1.18 0.62 0.42 0.30 0.00 0.00 0.00 0.00 13:43:01 lo 22.20 22.20 9.12 9.12 0.00 0.00 0.00 0.00 13:44:01 docker0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 13:44:01 ens3 1.43 0.67 0.41 0.28 0.00 0.00 0.00 0.00 13:44:01 lo 17.83 17.83 8.06 8.06 0.00 0.00 0.00 0.00 13:45:01 docker0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 13:45:01 ens3 1.15 0.80 0.42 0.34 0.00 0.00 0.00 0.00 13:45:01 lo 8.60 8.60 4.21 4.21 0.00 0.00 0.00 0.00 13:46:01 docker0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 13:46:01 ens3 0.88 0.38 0.12 0.07 0.00 0.00 0.00 0.00 13:46:01 lo 18.16 18.16 7.70 7.70 0.00 0.00 0.00 0.00 13:47:01 docker0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 13:47:01 ens3 1.25 0.95 0.44 0.37 0.00 0.00 0.00 0.00 13:47:01 lo 19.55 19.55 6.49 6.49 0.00 0.00 0.00 0.00 13:48:01 docker0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 13:48:01 ens3 1.08 1.18 0.27 0.17 0.00 0.00 0.00 0.00 13:48:01 lo 36.44 36.44 13.52 13.52 0.00 0.00 0.00 0.00 13:49:01 docker0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 13:49:01 ens3 0.48 0.48 0.08 0.07 0.00 0.00 0.00 0.00 13:49:01 lo 40.28 40.28 12.56 12.56 0.00 0.00 0.00 0.00 13:50:01 docker0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 13:50:01 ens3 0.93 0.60 0.13 0.09 0.00 0.00 0.00 0.00 13:50:01 lo 40.06 40.06 12.61 12.61 0.00 0.00 0.00 
0.00 13:51:01 docker0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 13:51:01 ens3 0.65 0.62 0.30 0.25 0.00 0.00 0.00 0.00 13:51:01 lo 23.68 23.68 6.50 6.50 0.00 0.00 0.00 0.00 13:52:01 docker0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 13:52:01 ens3 0.53 0.43 0.12 0.11 0.00 0.00 0.00 0.00 13:52:01 lo 29.58 29.58 9.12 9.12 0.00 0.00 0.00 0.00 13:53:01 docker0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 13:53:01 ens3 0.32 0.10 0.03 0.01 0.00 0.00 0.00 0.00 13:53:01 lo 53.82 53.82 16.01 16.01 0.00 0.00 0.00 0.00 13:54:01 docker0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 13:54:01 ens3 0.48 0.35 0.06 0.05 0.00 0.00 0.00 0.00 13:54:01 lo 41.74 41.74 12.77 12.77 0.00 0.00 0.00 0.00 13:55:01 docker0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 13:55:01 ens3 1.20 0.77 0.14 0.10 0.00 0.00 0.00 0.00 13:55:01 lo 8.05 8.05 15.86 15.86 0.00 0.00 0.00 0.00 13:56:01 docker0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 13:56:01 ens3 1.17 1.12 0.41 0.38 0.00 0.00 0.00 0.00 13:56:01 lo 26.50 26.50 9.76 9.76 0.00 0.00 0.00 0.00 13:57:01 docker0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 13:57:01 ens3 0.72 0.55 0.18 0.16 0.00 0.00 0.00 0.00 13:57:01 lo 35.70 35.70 16.91 16.91 0.00 0.00 0.00 0.00 13:58:01 docker0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 13:58:01 ens3 0.62 0.45 0.10 0.08 0.00 0.00 0.00 0.00 13:58:01 lo 12.43 12.43 5.13 5.13 0.00 0.00 0.00 0.00 13:59:01 docker0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 13:59:01 ens3 0.72 0.60 0.14 0.12 0.00 0.00 0.00 0.00 13:59:01 lo 23.36 23.36 9.64 9.64 0.00 0.00 0.00 0.00 14:00:01 docker0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 14:00:01 ens3 0.55 0.38 0.08 0.07 0.00 0.00 0.00 0.00 14:00:01 lo 32.98 32.98 10.09 10.09 0.00 0.00 0.00 0.00 14:01:01 docker0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 14:01:01 ens3 0.60 0.48 0.11 0.09 0.00 0.00 0.00 0.00 14:01:01 lo 21.90 21.90 10.41 10.41 0.00 0.00 0.00 0.00 14:02:01 docker0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 14:02:01 ens3 0.83 0.62 0.29 0.16 0.00 0.00 0.00 0.00 14:02:01 lo 19.16 19.16 8.03 8.03 0.00 0.00 0.00 0.00 14:03:01 docker0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 14:03:01 ens3 0.87 0.87 0.11 0.11 0.00 0.00 0.00 0.00 14:03:01 lo 2.72 2.72 0.27 0.27 0.00 0.00 0.00 0.00 14:04:01 docker0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 14:04:01 ens3 0.98 0.98 0.22 0.20 0.00 0.00 0.00 0.00 14:04:01 lo 23.93 23.93 22.73 22.73 0.00 0.00 0.00 0.00 14:05:01 docker0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 14:05:01 ens3 0.40 0.32 0.07 0.06 0.00 0.00 0.00 0.00 14:05:01 lo 45.69 45.69 20.37 20.37 0.00 0.00 0.00 0.00 14:06:01 docker0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 14:06:01 ens3 0.72 0.63 0.12 0.12 0.00 0.00 0.00 0.00 14:06:01 lo 12.83 12.83 8.55 8.55 0.00 0.00 0.00 0.00 14:07:01 docker0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 14:07:01 ens3 0.63 0.75 0.17 0.17 0.00 0.00 0.00 0.00 14:07:01 lo 14.51 14.51 6.42 6.42 0.00 0.00 0.00 0.00 14:08:01 docker0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 14:08:01 ens3 0.92 0.98 0.15 0.16 0.00 0.00 0.00 0.00 14:08:01 lo 16.80 16.80 10.51 10.51 0.00 0.00 0.00 0.00 14:09:01 docker0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 14:09:01 ens3 0.43 0.43 0.07 0.07 0.00 0.00 0.00 0.00 14:09:01 lo 11.10 11.10 5.93 5.93 0.00 0.00 0.00 0.00 14:10:01 docker0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 14:10:01 ens3 0.67 0.67 0.10 0.10 0.00 0.00 0.00 0.00 14:10:01 lo 19.78 19.78 9.79 9.79 0.00 0.00 0.00 0.00 14:11:01 docker0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 14:11:01 ens3 0.33 0.38 0.06 0.06 0.00 0.00 0.00 0.00 14:11:01 lo 16.91 16.91 8.60 8.60 0.00 0.00 0.00 0.00 14:12:01 docker0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 14:12:01 ens3 
0.70 0.78 0.17 0.17 0.00 0.00 0.00 0.00 14:12:01 lo 30.38 30.38 10.78 10.78 0.00 0.00 0.00 0.00 14:13:01 docker0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 14:13:01 ens3 1.87 2.22 0.80 0.73 0.00 0.00 0.00 0.00 14:13:01 lo 8.23 8.23 10.14 10.14 0.00 0.00 0.00 0.00 14:14:01 docker0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 14:14:01 ens3 1.58 1.73 0.25 6.83 0.00 0.00 0.00 0.00 14:14:01 lo 42.61 42.61 29.56 29.56 0.00 0.00 0.00 0.00 14:15:01 docker0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 14:15:01 ens3 0.60 0.48 0.11 0.10 0.00 0.00 0.00 0.00 14:15:01 lo 23.95 23.95 8.08 8.08 0.00 0.00 0.00 0.00 14:16:01 docker0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 14:16:01 ens3 0.20 0.10 0.02 0.01 0.00 0.00 0.00 0.00 14:16:01 lo 0.33 0.33 0.02 0.02 0.00 0.00 0.00 0.00 14:17:01 docker0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 14:17:01 ens3 2.55 3.87 0.23 4.79 0.00 0.00 0.00 0.00 14:17:01 lo 0.22 0.22 0.02 0.02 0.00 0.00 0.00 0.00 14:18:01 docker0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 14:18:01 ens3 0.47 0.37 0.06 0.05 0.00 0.00 0.00 0.00 14:18:01 lo 2.07 2.07 0.19 0.19 0.00 0.00 0.00 0.00 14:19:01 docker0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 14:19:01 ens3 0.65 0.58 0.11 0.10 0.00 0.00 0.00 0.00 14:19:01 lo 32.91 32.91 17.51 17.51 0.00 0.00 0.00 0.00 14:20:01 docker0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 14:20:01 ens3 0.77 0.72 0.15 0.14 0.00 0.00 0.00 0.00 14:20:01 lo 11.51 11.51 4.74 4.74 0.00 0.00 0.00 0.00 14:21:01 docker0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 14:21:01 ens3 0.68 0.60 0.13 0.12 0.00 0.00 0.00 0.00 14:21:01 lo 27.88 27.88 10.55 10.55 0.00 0.00 0.00 0.00 14:22:01 docker0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 14:22:01 ens3 1.07 1.13 0.25 0.24 0.00 0.00 0.00 0.00 14:22:01 lo 8.57 8.57 3.98 3.98 0.00 0.00 0.00 0.00 14:23:01 docker0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 14:23:01 ens3 0.37 0.38 0.06 0.06 0.00 0.00 0.00 0.00 14:23:01 lo 36.58 36.58 12.15 12.15 0.00 0.00 0.00 0.00 14:24:01 docker0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 14:24:01 ens3 0.75 0.88 0.39 0.12 0.00 0.00 0.00 0.00 14:24:01 lo 20.93 20.93 7.87 7.87 0.00 0.00 0.00 0.00 14:25:01 docker0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 14:25:01 ens3 1.03 1.35 0.20 0.22 0.00 0.00 0.00 0.00 14:25:01 lo 38.76 38.76 12.25 12.25 0.00 0.00 0.00 0.00 14:26:01 docker0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 14:26:01 ens3 0.90 1.00 0.13 0.13 0.00 0.00 0.00 0.00 14:26:01 lo 38.69 38.69 20.11 20.11 0.00 0.00 0.00 0.00 14:27:01 docker0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 14:27:01 ens3 1.13 1.35 0.27 0.28 0.00 0.00 0.00 0.00 14:27:01 lo 42.26 42.26 15.21 15.21 0.00 0.00 0.00 0.00 14:28:01 docker0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 14:28:01 ens3 0.25 0.23 0.03 0.03 0.00 0.00 0.00 0.00 14:28:01 lo 60.06 60.06 20.61 20.61 0.00 0.00 0.00 0.00 14:29:01 docker0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 14:29:01 ens3 0.50 0.58 0.09 0.09 0.00 0.00 0.00 0.00 14:29:01 lo 73.12 73.12 23.79 23.79 0.00 0.00 0.00 0.00 14:30:01 docker0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 14:30:01 ens3 0.72 0.82 0.31 0.27 0.00 0.00 0.00 0.00 14:30:01 lo 2.87 2.87 1.03 1.03 0.00 0.00 0.00 0.00 14:31:01 docker0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 14:31:01 ens3 0.25 0.13 0.07 0.01 0.00 0.00 0.00 0.00 14:31:01 lo 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 14:32:01 docker0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 14:32:01 ens3 0.90 0.23 0.15 0.08 0.00 0.00 0.00 0.00 14:32:01 lo 0.20 0.20 0.01 0.01 0.00 0.00 0.00 0.00 14:33:01 docker0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 14:33:01 ens3 162.36 126.24 1961.21 19.23 0.00 0.00 0.00 0.00 14:33:01 lo 0.92 0.92 0.09 0.09 
0.00 0.00 0.00 0.00 Average: docker0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 Average: ens3 16.13 11.93 217.35 1.88 0.00 0.00 0.00 0.00 Average: lo 20.45 20.45 9.40 9.40 0.00 0.00 0.00 0.00 ---> sar -P ALL: Linux 5.4.0-190-generic (prd-ubuntu2004-docker-4c-16g-2534) 10/29/24 _x86_64_ (4 CPU) 13:01:27 LINUX RESTART (4 CPU) 13:02:02 CPU %user %nice %system %iowait %steal %idle 13:03:01 all 19.72 12.70 10.00 3.74 0.09 53.75 13:03:01 0 19.58 13.13 12.06 3.32 0.08 51.83 13:03:01 1 35.97 10.28 10.16 3.97 0.10 39.52 13:03:01 2 5.76 15.37 9.42 2.33 0.10 67.02 13:03:01 3 17.49 12.03 8.34 5.33 0.09 56.72 13:04:01 all 61.98 0.00 4.52 4.80 0.09 28.61 13:04:01 0 63.49 0.00 4.36 3.88 0.08 28.19 13:04:01 1 63.31 0.00 4.65 4.37 0.10 27.56 13:04:01 2 63.23 0.00 5.05 6.41 0.10 25.21 13:04:01 3 57.90 0.00 4.01 4.54 0.08 33.47 13:05:01 all 59.65 0.00 2.42 1.74 0.13 36.06 13:05:01 0 53.78 0.00 2.65 1.61 0.12 41.85 13:05:01 1 49.37 0.00 2.37 1.08 0.14 47.05 13:05:01 2 73.76 0.00 2.74 2.31 0.14 21.05 13:05:01 3 61.73 0.00 1.93 1.95 0.14 34.25 13:06:01 all 79.00 0.00 4.39 5.64 0.12 10.84 13:06:01 0 79.20 0.00 5.99 7.11 0.14 7.56 13:06:01 1 81.06 0.00 3.88 3.62 0.10 11.34 13:06:01 2 79.41 0.00 4.17 6.98 0.14 9.30 13:06:01 3 76.36 0.00 3.53 4.85 0.12 15.13 13:07:01 all 85.26 0.00 3.75 6.97 0.15 3.87 13:07:01 0 82.54 0.00 3.17 10.29 0.15 3.85 13:07:01 1 86.34 0.00 3.95 4.46 0.15 5.10 13:07:01 2 83.92 0.00 3.70 8.76 0.15 3.47 13:07:01 3 88.24 0.00 4.19 4.39 0.14 3.05 13:08:02 all 68.60 0.00 2.47 0.48 0.13 28.32 13:08:02 0 67.19 0.00 2.19 0.07 0.13 30.42 13:08:02 1 67.41 0.00 2.32 0.15 0.13 29.98 13:08:02 2 71.70 0.00 3.00 1.54 0.12 23.63 13:08:02 3 68.13 0.00 2.36 0.15 0.12 29.24 13:09:01 all 34.34 0.00 1.27 0.34 0.10 63.95 13:09:01 0 35.29 0.00 1.34 0.51 0.08 62.77 13:09:01 1 32.02 0.00 0.98 0.02 0.08 66.90 13:09:01 2 35.96 0.00 1.28 0.19 0.12 62.46 13:09:01 3 34.13 0.00 1.49 0.63 0.10 63.65 13:10:01 all 42.48 0.00 1.53 0.41 0.10 55.49 13:10:01 0 42.07 0.00 1.73 0.27 0.10 55.83 13:10:01 1 41.79 0.00 1.65 0.08 0.12 56.36 13:10:01 2 43.42 0.00 1.36 0.15 0.08 54.98 13:10:01 3 42.62 0.00 1.36 1.15 0.10 54.77 13:11:01 all 86.53 0.00 2.74 0.25 0.12 10.36 13:11:01 0 83.75 0.00 2.69 0.12 0.12 13.33 13:11:01 1 87.46 0.00 2.85 0.40 0.12 9.17 13:11:01 2 86.74 0.00 2.56 0.23 0.12 10.34 13:11:01 3 88.18 0.00 2.87 0.25 0.13 8.57 13:12:01 all 26.93 0.00 0.99 0.13 0.10 71.85 13:12:01 0 26.62 0.00 0.90 0.30 0.10 72.08 13:12:01 1 27.47 0.00 1.12 0.08 0.12 71.21 13:12:01 2 26.07 0.00 1.09 0.07 0.12 72.66 13:12:01 3 27.55 0.00 0.84 0.08 0.08 71.45 13:13:01 all 4.39 0.00 0.39 0.01 0.09 95.12 13:13:01 0 4.04 0.00 0.28 0.05 0.08 95.54 13:13:01 1 4.98 0.00 0.46 0.00 0.08 94.48 13:13:01 2 4.16 0.00 0.40 0.00 0.08 95.36 13:13:01 3 4.38 0.00 0.42 0.00 0.10 95.11 13:13:01 CPU %user %nice %system %iowait %steal %idle 13:14:01 all 2.66 0.00 0.48 0.01 0.10 96.76 13:14:01 0 2.89 0.00 0.52 0.05 0.10 96.45 13:14:01 1 2.50 0.00 0.42 0.00 0.08 97.00 13:14:01 2 2.77 0.00 0.52 0.00 0.12 96.60 13:14:01 3 2.48 0.00 0.45 0.00 0.08 96.98 13:15:01 all 4.14 0.00 0.43 0.03 0.09 95.31 13:15:01 0 3.54 0.00 0.42 0.13 0.08 95.83 13:15:01 1 4.59 0.00 0.45 0.00 0.10 94.85 13:15:01 2 3.36 0.00 0.43 0.00 0.08 96.12 13:15:01 3 5.08 0.00 0.40 0.00 0.08 94.44 13:16:01 all 0.86 0.00 0.31 0.01 0.10 98.73 13:16:01 0 1.01 0.00 0.45 0.02 0.12 98.41 13:16:01 1 0.72 0.00 0.18 0.00 0.08 99.02 13:16:01 2 0.85 0.00 0.32 0.02 0.10 98.71 13:16:01 3 0.85 0.00 0.28 0.00 0.10 98.77 13:17:01 all 1.18 0.00 0.38 0.03 0.10 98.31 13:17:01 0 0.95 0.00 0.37 0.05 0.12 98.52 13:17:01 1 1.03 0.00 
0.48 0.00 0.10 98.38 13:17:01 2 0.90 0.00 0.27 0.07 0.08 98.68 13:17:01 3 1.83 0.00 0.40 0.00 0.10 97.67 13:18:01 all 26.46 0.00 1.37 0.47 0.12 71.58 13:18:01 0 25.27 0.00 1.40 0.90 0.13 72.29 13:18:01 1 23.43 0.00 0.82 0.00 0.08 75.67 13:18:01 2 23.74 0.00 1.29 0.32 0.12 74.54 13:18:01 3 33.39 0.00 1.97 0.67 0.13 63.84 13:19:01 all 42.03 0.00 1.37 1.13 0.10 55.37 13:19:01 0 43.40 0.00 1.47 0.37 0.10 54.66 13:19:01 1 43.07 0.00 1.75 2.39 0.10 52.69 13:19:01 2 37.91 0.00 1.50 1.70 0.10 58.79 13:19:01 3 43.73 0.00 0.76 0.07 0.10 55.34 13:20:01 all 3.87 0.00 0.24 0.01 0.08 95.81 13:20:01 0 4.46 0.00 0.25 0.02 0.07 95.21 13:20:01 1 3.69 0.00 0.25 0.00 0.08 95.98 13:20:01 2 3.42 0.00 0.22 0.02 0.07 96.28 13:20:01 3 3.92 0.00 0.23 0.00 0.08 95.76 13:21:01 all 2.42 0.00 0.23 0.01 0.09 97.26 13:21:01 0 2.39 0.00 0.23 0.00 0.10 97.27 13:21:01 1 2.68 0.00 0.25 0.00 0.08 96.99 13:21:01 2 2.42 0.00 0.23 0.03 0.10 97.21 13:21:01 3 2.19 0.00 0.18 0.00 0.08 97.54 13:22:01 all 37.87 0.00 1.17 0.31 0.10 60.56 13:22:01 0 38.44 0.00 0.95 0.02 0.10 60.49 13:22:01 1 38.90 0.00 1.40 0.22 0.10 59.38 13:22:01 2 36.76 0.00 1.22 0.02 0.08 61.92 13:22:01 3 37.39 0.00 1.09 0.97 0.10 60.46 13:23:01 all 4.88 0.00 0.32 0.02 0.09 94.69 13:23:01 0 4.89 0.00 0.32 0.02 0.07 94.71 13:23:01 1 4.53 0.00 0.38 0.00 0.10 94.99 13:23:01 2 5.04 0.00 0.34 0.00 0.10 94.52 13:23:01 3 5.08 0.00 0.25 0.05 0.08 94.54 13:24:01 all 31.69 0.00 1.07 0.29 0.09 66.85 13:24:01 0 32.00 0.00 1.25 0.37 0.10 66.28 13:24:01 1 29.62 0.00 0.84 0.02 0.10 69.42 13:24:01 2 32.59 0.00 0.87 0.02 0.08 66.43 13:24:01 3 32.56 0.00 1.32 0.75 0.08 65.28 13:24:01 CPU %user %nice %system %iowait %steal %idle 13:25:01 all 33.15 0.00 1.19 0.14 0.09 65.42 13:25:01 0 33.02 0.00 1.77 0.07 0.08 65.06 13:25:01 1 37.99 0.00 1.31 0.13 0.08 60.49 13:25:01 2 29.90 0.00 0.88 0.02 0.10 69.10 13:25:01 3 31.73 0.00 0.80 0.35 0.10 67.02 13:26:01 all 29.19 0.00 1.03 0.34 0.09 69.34 13:26:01 0 29.34 0.00 0.92 0.47 0.10 69.18 13:26:01 1 27.20 0.00 1.09 0.35 0.08 71.29 13:26:01 2 29.66 0.00 1.01 0.02 0.10 69.21 13:26:01 3 30.57 0.00 1.12 0.54 0.08 67.69 13:27:01 all 47.87 0.00 1.43 0.33 0.09 50.28 13:27:01 0 49.41 0.00 1.51 0.00 0.10 48.99 13:27:01 1 45.11 0.00 1.62 0.02 0.10 53.15 13:27:01 2 46.84 0.00 1.27 0.00 0.10 51.79 13:27:01 3 50.11 0.00 1.31 1.32 0.07 47.20 13:28:01 all 17.53 0.00 0.74 0.26 0.10 81.37 13:28:01 0 17.10 0.00 0.77 0.43 0.08 81.61 13:28:01 1 17.82 0.00 0.90 0.00 0.10 81.18 13:28:01 2 16.79 0.00 0.48 0.40 0.10 82.23 13:28:01 3 18.39 0.00 0.82 0.22 0.10 80.47 13:29:01 all 19.06 0.00 0.79 0.02 0.08 80.04 13:29:01 0 20.90 0.00 0.92 0.02 0.07 78.10 13:29:01 1 19.58 0.00 0.74 0.00 0.10 79.58 13:29:01 2 20.18 0.00 0.59 0.03 0.08 79.12 13:29:01 3 15.58 0.00 0.92 0.03 0.08 83.38 13:30:01 all 29.71 0.00 0.99 0.27 0.10 68.94 13:30:01 0 27.96 0.00 1.14 0.02 0.12 70.77 13:30:01 1 31.29 0.00 0.76 0.47 0.08 67.41 13:30:01 2 30.02 0.00 1.14 0.59 0.08 68.17 13:30:01 3 29.59 0.00 0.91 0.00 0.10 69.40 13:31:01 all 10.36 0.00 0.29 0.25 0.08 89.01 13:31:01 0 8.80 0.00 0.18 0.00 0.05 90.96 13:31:01 1 10.98 0.00 0.21 0.02 0.07 88.72 13:31:01 2 10.98 0.00 0.48 1.00 0.10 87.43 13:31:01 3 10.67 0.00 0.29 0.00 0.10 88.95 13:32:01 all 26.02 0.00 0.86 0.03 0.07 73.01 13:32:01 0 25.09 0.00 1.10 0.05 0.05 73.71 13:32:01 1 28.11 0.00 1.01 0.02 0.07 70.80 13:32:01 2 26.66 0.00 0.77 0.05 0.08 72.44 13:32:01 3 24.23 0.00 0.57 0.02 0.08 75.10 13:33:01 all 42.05 0.00 1.32 0.59 0.10 55.94 13:33:01 0 39.42 0.00 1.34 0.47 0.12 58.66 13:33:01 1 43.01 0.00 1.19 0.02 0.10 55.69 13:33:01 2 43.53 0.00 
1.44 0.60 0.08 54.34 13:33:01 3 42.25 0.00 1.31 1.26 0.08 55.09 13:34:01 all 23.68 0.00 0.88 0.03 0.09 75.33 13:34:01 0 24.37 0.00 0.85 0.08 0.10 74.59 13:34:01 1 22.49 0.00 0.81 0.00 0.08 76.62 13:34:01 2 25.00 0.00 0.97 0.00 0.08 73.94 13:34:01 3 22.84 0.00 0.90 0.02 0.08 76.16 13:35:01 all 19.77 0.00 0.55 0.29 0.10 79.29 13:35:01 0 19.37 0.00 0.37 0.00 0.10 80.16 13:35:01 1 19.25 0.00 0.52 0.07 0.08 80.08 13:35:01 2 20.96 0.00 0.42 0.00 0.10 78.52 13:35:01 3 19.49 0.00 0.89 1.10 0.10 78.42 13:35:01 CPU %user %nice %system %iowait %steal %idle 13:36:01 all 37.43 0.00 1.21 0.02 0.09 61.25 13:36:01 0 35.26 0.00 1.11 0.02 0.08 63.53 13:36:01 1 38.80 0.00 1.40 0.00 0.08 59.71 13:36:01 2 37.18 0.00 1.07 0.03 0.10 61.61 13:36:01 3 38.46 0.00 1.26 0.03 0.10 60.15 13:37:01 all 16.87 0.00 0.47 0.22 0.09 82.35 13:37:01 0 16.25 0.00 0.42 0.00 0.07 83.26 13:37:01 1 16.08 0.00 0.57 0.44 0.10 82.81 13:37:01 2 17.66 0.00 0.37 0.00 0.10 81.87 13:37:01 3 17.48 0.00 0.50 0.45 0.08 81.48 13:38:01 all 3.48 0.00 0.24 0.02 0.08 96.18 13:38:01 0 3.39 0.00 0.18 0.00 0.05 96.37 13:38:01 1 3.55 0.00 0.30 0.00 0.08 96.07 13:38:01 2 3.48 0.00 0.24 0.00 0.08 96.20 13:38:01 3 3.48 0.00 0.25 0.07 0.12 96.08 13:39:01 all 38.30 0.00 1.33 0.03 0.09 60.25 13:39:01 0 36.79 0.00 1.32 0.02 0.08 61.78 13:39:01 1 41.66 0.00 1.59 0.00 0.08 56.67 13:39:01 2 38.90 0.00 1.40 0.02 0.10 59.58 13:39:01 3 35.86 0.00 1.02 0.07 0.08 62.97 13:40:01 all 18.49 0.00 0.53 0.25 0.10 80.62 13:40:01 0 18.68 0.00 0.38 0.00 0.10 80.84 13:40:01 1 18.51 0.00 0.63 0.48 0.12 80.26 13:40:01 2 17.42 0.00 0.48 0.10 0.08 81.91 13:40:01 3 19.37 0.00 0.64 0.43 0.10 79.46 13:41:01 all 2.85 0.00 0.24 0.02 0.08 96.81 13:41:01 0 2.60 0.00 0.22 0.00 0.05 97.13 13:41:01 1 2.35 0.00 0.22 0.03 0.08 97.31 13:41:01 2 3.82 0.00 0.23 0.03 0.08 95.83 13:41:01 3 2.64 0.00 0.30 0.00 0.10 96.96 13:42:01 all 2.19 0.00 0.24 0.15 0.08 97.35 13:42:01 0 2.32 0.00 0.25 0.00 0.07 97.37 13:42:01 1 2.25 0.00 0.18 0.58 0.07 96.91 13:42:01 2 1.93 0.00 0.27 0.00 0.10 97.70 13:42:01 3 2.25 0.00 0.27 0.00 0.07 97.41 13:43:01 all 2.69 0.00 0.32 0.01 0.09 96.88 13:43:01 0 2.86 0.00 0.35 0.00 0.08 96.71 13:43:01 1 2.58 0.00 0.28 0.03 0.08 97.02 13:43:01 2 2.59 0.00 0.27 0.00 0.08 97.06 13:43:01 3 2.74 0.00 0.38 0.02 0.12 96.75 13:44:01 all 2.22 0.00 0.27 0.01 0.09 97.41 13:44:01 0 2.65 0.00 0.30 0.00 0.08 96.97 13:44:01 1 1.78 0.00 0.22 0.03 0.07 97.90 13:44:01 2 2.24 0.00 0.22 0.02 0.10 97.43 13:44:01 3 2.20 0.00 0.33 0.00 0.12 97.34 13:45:01 all 1.24 0.00 0.23 0.00 0.08 98.45 13:45:01 0 0.94 0.00 0.12 0.00 0.07 98.88 13:45:01 1 1.32 0.00 0.23 0.02 0.07 98.37 13:45:01 2 1.47 0.00 0.32 0.00 0.10 98.11 13:45:01 3 1.22 0.00 0.23 0.00 0.08 98.46 13:46:01 all 1.82 0.00 0.24 0.05 0.08 97.80 13:46:01 0 1.46 0.00 0.20 0.00 0.08 98.26 13:46:01 1 1.98 0.00 0.38 0.05 0.08 97.50 13:46:01 2 2.07 0.00 0.28 0.02 0.10 97.53 13:46:01 3 1.77 0.00 0.10 0.13 0.07 97.93 13:46:01 CPU %user %nice %system %iowait %steal %idle 13:47:01 all 30.81 0.00 1.12 0.05 0.09 67.93 13:47:01 0 31.88 0.00 1.37 0.17 0.08 66.50 13:47:01 1 27.71 0.00 0.97 0.03 0.08 71.20 13:47:01 2 33.01 0.00 1.09 0.00 0.08 65.82 13:47:01 3 30.66 0.00 1.05 0.00 0.10 68.19 13:48:01 all 24.67 0.00 0.70 0.33 0.10 74.20 13:48:01 0 23.98 0.00 0.77 0.30 0.08 74.87 13:48:01 1 24.50 0.00 0.80 0.98 0.10 73.61 13:48:01 2 25.13 0.00 0.55 0.02 0.10 74.20 13:48:01 3 25.08 0.00 0.68 0.02 0.10 74.12 13:49:01 all 5.41 0.00 0.35 0.01 0.09 94.14 13:49:01 0 5.05 0.00 0.40 0.02 0.10 94.44 13:49:01 1 5.05 0.00 0.37 0.02 0.08 94.48 13:49:01 2 5.02 0.00 0.32 0.00 
0.08 94.58 13:49:01 3 6.52 0.00 0.33 0.00 0.08 93.07 13:50:01 all 3.59 0.00 0.27 0.02 0.09 96.03 13:50:01 0 3.89 0.00 0.27 0.03 0.08 95.72 13:50:01 1 3.78 0.00 0.22 0.03 0.08 95.89 13:50:01 2 3.42 0.00 0.30 0.00 0.08 96.19 13:50:01 3 3.28 0.00 0.30 0.00 0.10 96.32 13:51:01 all 2.60 0.00 0.25 0.01 0.11 97.03 13:51:01 0 3.52 0.00 0.13 0.03 0.10 96.22 13:51:01 1 2.40 0.00 0.30 0.00 0.12 97.18 13:51:01 2 2.26 0.00 0.25 0.00 0.08 97.41 13:51:01 3 2.19 0.00 0.32 0.02 0.13 97.34 13:52:01 all 2.79 0.00 0.26 0.02 0.07 96.85 13:52:01 0 3.61 0.00 0.15 0.02 0.07 96.16 13:52:01 1 2.35 0.00 0.32 0.02 0.08 97.23 13:52:01 2 2.43 0.00 0.17 0.07 0.07 97.27 13:52:01 3 2.75 0.00 0.40 0.00 0.08 96.77 13:53:01 all 3.79 0.00 0.29 0.02 0.09 95.81 13:53:01 0 3.54 0.00 0.23 0.00 0.08 96.15 13:53:01 1 3.57 0.00 0.40 0.03 0.10 95.90 13:53:01 2 4.03 0.00 0.30 0.03 0.10 95.53 13:53:01 3 4.03 0.00 0.23 0.00 0.08 95.66 13:54:01 all 3.35 0.00 0.33 0.01 0.08 96.22 13:54:01 0 3.06 0.00 0.33 0.00 0.08 96.53 13:54:01 1 2.82 0.00 0.42 0.00 0.07 96.69 13:54:01 2 3.46 0.00 0.23 0.05 0.08 96.17 13:54:01 3 4.07 0.00 0.35 0.00 0.08 95.50 13:55:01 all 55.43 0.00 1.64 0.23 0.10 42.61 13:55:01 0 52.90 0.00 1.50 0.45 0.10 45.05 13:55:01 1 55.35 0.00 1.33 0.02 0.10 43.20 13:55:01 2 59.80 0.00 1.64 0.47 0.08 38.01 13:55:01 3 53.67 0.00 2.08 0.00 0.10 44.15 13:56:01 all 10.10 0.00 0.42 0.02 0.09 89.37 13:56:01 0 10.84 0.00 0.38 0.05 0.10 88.63 13:56:01 1 9.95 0.00 0.55 0.00 0.08 89.42 13:56:01 2 9.20 0.00 0.42 0.03 0.08 90.26 13:56:01 3 10.40 0.00 0.33 0.00 0.10 89.17 13:57:01 all 8.88 0.00 0.38 0.01 0.08 90.64 13:57:01 0 8.83 0.00 0.38 0.03 0.07 90.69 13:57:01 1 9.48 0.00 0.45 0.00 0.10 89.97 13:57:01 2 8.94 0.00 0.33 0.00 0.08 90.64 13:57:01 3 8.28 0.00 0.37 0.00 0.08 91.27 13:57:01 CPU %user %nice %system %iowait %steal %idle 13:58:01 all 3.11 0.00 0.28 0.02 0.07 96.52 13:58:01 0 2.98 0.00 0.38 0.03 0.08 96.52 13:58:01 1 2.59 0.00 0.30 0.00 0.08 97.03 13:58:01 2 2.40 0.00 0.23 0.03 0.05 97.28 13:58:01 3 4.46 0.00 0.20 0.00 0.08 95.26 13:59:01 all 3.69 0.00 0.34 0.01 0.09 95.87 13:59:01 0 2.92 0.00 0.35 0.02 0.08 96.63 13:59:01 1 3.52 0.00 0.35 0.00 0.08 96.05 13:59:01 2 3.27 0.00 0.37 0.02 0.10 96.25 13:59:01 3 5.04 0.00 0.30 0.00 0.08 94.58 14:00:01 all 3.62 0.00 0.29 0.02 0.10 95.98 14:00:01 0 3.07 0.00 0.22 0.03 0.10 96.58 14:00:01 1 3.15 0.00 0.27 0.00 0.08 96.50 14:00:01 2 3.11 0.00 0.33 0.03 0.10 96.42 14:00:01 3 5.11 0.00 0.34 0.00 0.11 94.43 14:01:01 all 2.93 0.00 0.28 0.01 0.09 96.69 14:01:01 0 3.48 0.00 0.27 0.02 0.08 96.16 14:01:01 1 2.52 0.00 0.28 0.00 0.10 97.10 14:01:01 2 2.45 0.00 0.33 0.00 0.10 97.11 14:01:01 3 3.26 0.00 0.25 0.02 0.08 96.39 14:02:01 all 2.25 0.00 0.26 0.02 0.08 97.39 14:02:01 0 3.07 0.00 0.15 0.03 0.08 96.66 14:02:01 1 2.00 0.00 0.28 0.02 0.08 97.62 14:02:01 2 2.12 0.00 0.47 0.02 0.10 97.30 14:02:01 3 1.82 0.00 0.13 0.00 0.07 97.98 14:03:01 all 47.76 0.00 1.62 0.16 0.10 50.36 14:03:01 0 43.66 0.00 1.44 0.33 0.12 54.45 14:03:01 1 50.12 0.00 1.99 0.20 0.10 47.59 14:03:01 2 48.81 0.00 1.57 0.00 0.10 49.51 14:03:01 3 48.43 0.00 1.49 0.12 0.10 49.87 14:04:01 all 14.91 0.00 0.44 0.15 0.09 84.41 14:04:01 0 13.91 0.00 0.52 0.45 0.07 85.05 14:04:01 1 14.76 0.00 0.40 0.15 0.10 84.58 14:04:01 2 16.04 0.00 0.40 0.02 0.10 83.44 14:04:01 3 14.91 0.00 0.43 0.00 0.10 84.55 14:05:01 all 10.54 0.00 0.34 0.01 0.09 89.01 14:05:01 0 9.91 0.00 0.37 0.03 0.10 89.59 14:05:01 1 10.25 0.00 0.28 0.02 0.08 89.37 14:05:01 2 10.68 0.00 0.43 0.00 0.08 88.80 14:05:01 3 11.34 0.00 0.29 0.00 0.08 88.29 14:06:01 all 3.33 0.00 0.17 0.07 
0.09 96.34 14:06:01 0 4.45 0.00 0.18 0.25 0.10 95.01 14:06:01 1 3.54 0.00 0.18 0.03 0.08 96.16 14:06:01 2 2.49 0.00 0.23 0.00 0.10 97.17 14:06:01 3 2.84 0.00 0.07 0.00 0.07 97.02 14:07:01 all 1.92 0.00 0.18 0.00 0.08 97.81 14:07:01 0 2.86 0.00 0.29 0.02 0.12 96.72 14:07:01 1 1.71 0.00 0.15 0.00 0.07 98.08 14:07:01 2 1.47 0.00 0.22 0.00 0.08 98.23 14:07:01 3 1.65 0.00 0.08 0.00 0.05 98.21 14:08:01 all 2.19 0.00 0.16 0.02 0.09 97.53 14:08:01 0 2.67 0.00 0.27 0.05 0.10 96.91 14:08:01 1 3.15 0.00 0.13 0.02 0.07 96.63 14:08:01 2 1.57 0.00 0.15 0.02 0.08 98.18 14:08:01 3 1.37 0.00 0.10 0.00 0.10 98.43 14:08:01 CPU %user %nice %system %iowait %steal %idle 14:09:01 all 1.89 0.00 0.14 0.01 0.08 97.87 14:09:01 0 2.31 0.00 0.20 0.02 0.08 97.39 14:09:01 1 2.39 0.00 0.13 0.00 0.08 97.40 14:09:01 2 1.54 0.00 0.13 0.03 0.08 98.21 14:09:01 3 1.34 0.00 0.10 0.00 0.07 98.50 14:10:01 all 1.76 0.00 0.18 0.01 0.07 97.98 14:10:01 0 1.99 0.00 0.27 0.03 0.08 97.63 14:10:01 1 1.42 0.00 0.15 0.00 0.05 98.38 14:10:01 2 1.78 0.00 0.18 0.02 0.08 97.94 14:10:01 3 1.86 0.00 0.12 0.00 0.05 97.98 14:11:01 all 1.32 0.00 0.16 0.02 0.08 98.43 14:11:01 0 1.34 0.00 0.18 0.03 0.10 98.34 14:11:01 1 1.22 0.00 0.17 0.00 0.08 98.53 14:11:01 2 1.71 0.00 0.17 0.03 0.08 98.01 14:11:01 3 1.01 0.00 0.10 0.00 0.05 98.84 14:12:01 all 2.30 0.00 0.21 0.02 0.07 97.40 14:12:01 0 2.40 0.00 0.28 0.03 0.07 97.22 14:12:01 1 2.41 0.00 0.25 0.00 0.07 97.27 14:12:01 2 2.27 0.00 0.24 0.05 0.08 97.36 14:12:01 3 2.11 0.00 0.08 0.00 0.07 97.73 14:13:01 all 49.51 0.00 1.78 0.76 0.10 47.85 14:13:01 0 48.99 0.00 1.44 0.07 0.10 49.41 14:13:01 1 50.07 0.00 2.06 0.44 0.10 47.34 14:13:01 2 44.19 0.00 1.40 0.38 0.12 53.90 14:13:01 3 54.80 0.00 2.21 2.15 0.10 40.74 14:14:01 all 13.46 0.00 0.52 0.02 0.09 85.91 14:14:01 0 13.00 0.00 0.50 0.00 0.08 86.42 14:14:01 1 12.94 0.00 0.53 0.00 0.08 86.44 14:14:01 2 14.18 0.00 0.60 0.00 0.10 85.13 14:14:01 3 13.73 0.00 0.43 0.10 0.08 85.65 14:15:01 all 3.21 0.00 0.28 0.01 0.08 96.42 14:15:01 0 3.22 0.00 0.23 0.00 0.07 96.48 14:15:01 1 3.30 0.00 0.32 0.00 0.07 96.31 14:15:01 2 3.07 0.00 0.22 0.00 0.10 96.61 14:15:01 3 3.26 0.00 0.33 0.05 0.10 96.26 14:16:01 all 0.52 0.00 0.23 0.02 0.08 99.15 14:16:01 0 0.35 0.00 0.15 0.00 0.05 99.45 14:16:01 1 0.22 0.00 0.13 0.00 0.08 99.57 14:16:01 2 0.42 0.00 0.15 0.00 0.08 99.35 14:16:01 3 1.10 0.00 0.48 0.07 0.12 98.24 14:17:01 all 0.50 0.00 0.22 0.01 0.08 99.20 14:17:01 0 0.35 0.00 0.10 0.00 0.08 99.47 14:17:01 1 0.63 0.00 0.35 0.00 0.10 98.92 14:17:01 2 0.30 0.00 0.12 0.00 0.05 99.53 14:17:01 3 0.70 0.00 0.32 0.03 0.08 98.87 14:18:01 all 20.44 0.00 0.77 0.10 0.09 78.60 14:18:01 0 20.48 0.00 1.12 0.05 0.08 78.27 14:18:01 1 19.27 0.00 0.62 0.15 0.10 79.86 14:18:01 2 21.98 0.00 0.52 0.17 0.08 77.25 14:18:01 3 20.02 0.00 0.83 0.03 0.08 79.03 14:19:01 all 35.04 0.00 1.00 0.25 0.09 63.62 14:19:01 0 36.79 0.00 1.12 0.28 0.10 61.71 14:19:01 1 35.12 0.00 0.80 0.57 0.07 63.44 14:19:01 2 37.41 0.00 1.25 0.17 0.10 61.07 14:19:01 3 30.90 0.00 0.83 0.00 0.08 68.20 14:19:01 CPU %user %nice %system %iowait %steal %idle 14:20:01 all 3.75 0.00 0.24 0.01 0.08 95.92 14:20:01 0 3.25 0.00 0.25 0.00 0.08 96.42 14:20:01 1 3.25 0.00 0.22 0.02 0.08 96.43 14:20:01 2 3.39 0.00 0.28 0.02 0.08 96.23 14:20:01 3 5.09 0.00 0.21 0.00 0.08 94.61 14:21:01 all 3.83 0.00 0.31 0.01 0.10 95.76 14:21:01 0 2.99 0.00 0.32 0.00 0.08 96.61 14:21:01 1 3.98 0.00 0.32 0.03 0.12 95.56 14:21:01 2 3.44 0.00 0.33 0.00 0.10 96.13 14:21:01 3 4.88 0.00 0.26 0.00 0.08 94.77 14:22:01 all 1.27 0.00 0.26 0.02 0.08 98.37 14:22:01 0 1.20 0.00 
0.33 0.00 0.08 98.39 14:22:01 1 1.39 0.00 0.32 0.05 0.08 98.16 14:22:01 2 1.10 0.00 0.22 0.02 0.08 98.58 14:22:01 3 1.39 0.00 0.18 0.00 0.08 98.34 14:23:01 all 3.86 0.00 0.27 0.01 0.09 95.77 14:23:01 0 3.20 0.00 0.38 0.00 0.08 96.33 14:23:01 1 5.48 0.00 0.21 0.02 0.08 94.21 14:23:01 2 3.57 0.00 0.32 0.02 0.10 96.00 14:23:01 3 3.17 0.00 0.18 0.00 0.08 96.56 14:24:01 all 2.44 0.00 0.27 0.01 0.07 97.21 14:24:01 0 1.77 0.00 0.25 0.00 0.08 97.90 14:24:01 1 4.01 0.00 0.26 0.03 0.07 95.63 14:24:01 2 2.14 0.00 0.38 0.02 0.08 97.38 14:24:01 3 1.80 0.00 0.18 0.00 0.07 97.95 14:25:01 all 11.86 0.00 0.69 0.04 0.09 87.32 14:25:01 0 12.27 0.00 0.80 0.00 0.07 86.86 14:25:01 1 12.66 0.00 0.57 0.02 0.08 86.67 14:25:01 2 11.43 0.00 0.72 0.02 0.10 87.74 14:25:01 3 11.08 0.00 0.69 0.13 0.10 88.00 14:26:01 all 52.09 0.00 1.32 0.27 0.10 46.21 14:26:01 0 53.29 0.00 1.45 0.35 0.10 44.81 14:26:01 1 50.32 0.00 1.02 0.18 0.12 48.37 14:26:01 2 52.29 0.00 1.60 0.43 0.10 45.57 14:26:01 3 52.49 0.00 1.21 0.10 0.10 46.10 14:27:01 all 6.86 0.00 0.36 0.02 0.08 92.68 14:27:01 0 7.43 0.00 0.42 0.00 0.10 92.05 14:27:01 1 6.98 0.00 0.27 0.00 0.05 92.71 14:27:01 2 6.80 0.00 0.36 0.05 0.08 92.70 14:27:01 3 6.22 0.00 0.38 0.02 0.10 93.28 14:28:01 all 6.45 0.00 0.35 0.02 0.08 93.10 14:28:01 0 7.58 0.00 0.25 0.00 0.07 92.11 14:28:01 1 5.94 0.00 0.43 0.02 0.08 93.52 14:28:01 2 6.39 0.00 0.47 0.03 0.08 93.03 14:28:01 3 5.89 0.00 0.23 0.02 0.08 93.77 14:29:01 all 5.89 0.00 0.35 0.02 0.09 93.66 14:29:01 0 5.48 0.00 0.27 0.00 0.10 94.16 14:29:01 1 5.86 0.00 0.40 0.02 0.10 93.62 14:29:01 2 5.40 0.00 0.42 0.05 0.10 94.03 14:29:01 3 6.81 0.00 0.30 0.00 0.07 92.83 14:30:01 all 1.20 0.00 0.22 0.00 0.07 98.51 14:30:01 0 0.72 0.00 0.37 0.00 0.10 98.82 14:30:01 1 0.48 0.00 0.08 0.00 0.03 99.40 14:30:01 2 1.02 0.00 0.33 0.02 0.10 98.53 14:30:01 3 2.55 0.00 0.08 0.00 0.05 97.32 14:30:01 CPU %user %nice %system %iowait %steal %idle 14:31:01 all 0.77 0.00 0.25 0.01 0.09 98.88 14:31:01 0 0.28 0.00 0.23 0.00 0.08 99.40 14:31:01 1 0.70 0.00 0.32 0.00 0.08 98.90 14:31:01 2 0.65 0.00 0.30 0.02 0.12 98.92 14:31:01 3 1.46 0.00 0.13 0.03 0.07 98.31 14:32:01 all 0.39 0.00 0.21 0.01 0.08 99.30 14:32:01 0 0.62 0.00 0.28 0.00 0.08 99.02 14:32:01 1 0.30 0.00 0.15 0.02 0.07 99.47 14:32:01 2 0.23 0.00 0.15 0.02 0.07 99.53 14:32:01 3 0.42 0.00 0.27 0.00 0.12 99.19 14:33:01 all 25.35 0.00 1.55 0.84 0.08 72.17 14:33:01 0 26.57 0.00 2.10 0.25 0.07 71.01 14:33:01 1 15.69 0.00 1.07 2.05 0.08 81.10 14:33:01 2 31.90 0.00 1.29 0.84 0.08 65.90 14:33:01 3 27.27 0.00 1.76 0.22 0.08 70.67 Average: all 18.15 0.14 0.87 0.37 0.09 80.38 Average: 0 17.94 0.14 0.92 0.38 0.09 80.53 Average: 1 18.14 0.11 0.87 0.31 0.09 80.47 Average: 2 18.22 0.16 0.86 0.41 0.09 80.25 Average: 3 18.30 0.13 0.83 0.38 0.09 80.27
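
For reference, the post-build data above (the package diff produced by package-listing.sh and the host/sysstat snapshots printed under the '--->' markers) can be approximated with the commands visible in this log. The sketch below is illustrative only: the script name, the WORKSPACE default, and the output file name are assumptions, not the actual LF releng scripts, and it presumes sysstat's data collection is already enabled on the node.

#!/bin/bash
# collect-build-stats.sh -- illustrative sketch, not the actual LF releng tooling.
set -eux

# Assumed: Jenkins exports WORKSPACE; fall back to the workspace seen in this log.
WORKSPACE="${WORKSPACE:-/w/workspace/transportpce-tox-verify-scandium}"
ARCHIVE_DIR="$WORKSPACE/archives"
mkdir -p "$ARCHIVE_DIR"

# Package listing: snapshot the installed packages at the end of the job and,
# when a start-of-job snapshot exists, diff the two and archive all three files
# (mirrors the package-listing.sh trace above).
dpkg -l | grep '^ii' > /tmp/packages_end.txt
if [ -f /tmp/packages_start.txt ]; then
    diff /tmp/packages_start.txt /tmp/packages_end.txt > /tmp/packages_diff.txt || true
    cp -f /tmp/packages_diff.txt /tmp/packages_end.txt /tmp/packages_start.txt "$ARCHIVE_DIR"/
fi

# Host snapshot: the same commands that appear under the '--->' markers above.
# With no interval argument, sar reports from the daily sysstat data file.
{
    echo '---> uname -a:';           uname -a
    echo '---> lscpu:';              lscpu
    echo '---> nproc:';              nproc
    echo '---> df -h:';              df -h
    echo '---> free -m:';            free -m
    echo '---> ip addr:';            ip addr
    echo '---> sar -b -r -n DEV:';   sar -b -r -n DEV
    echo '---> sar -P ALL:';         sar -P ALL
} > "$ARCHIVE_DIR/host-stats.txt"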