Triggered by Gerrit: https://git.opendaylight.org/gerrit/c/transportpce/+/113713
Running as SYSTEM
[EnvInject] - Loading node environment variables.
Building remotely on prd-ubuntu2004-docker-4c-16g-28080 (ubuntu2004-docker-4c-16g) in workspace /w/workspace/transportpce-tox-verify-transportpce-master
[ssh-agent] Looking for ssh-agent implementation...
[ssh-agent] Exec ssh-agent (binary ssh-agent on a remote machine)
$ ssh-agent
SSH_AUTH_SOCK=/tmp/ssh-OwvJ68ta7ve6/agent.12174
SSH_AGENT_PID=12175
[ssh-agent] Started.
Running ssh-add (command line suppressed)
Identity added: /w/workspace/transportpce-tox-verify-transportpce-master@tmp/private_key_8372849620623361258.key (/w/workspace/transportpce-tox-verify-transportpce-master@tmp/private_key_8372849620623361258.key)
[ssh-agent] Using credentials jenkins (jenkins-ssh)
The recommended git tool is: NONE
using credential jenkins-ssh
Wiping out workspace first.
Cloning the remote Git repository
Cloning repository git://devvexx.opendaylight.org/mirror/transportpce
> git init /w/workspace/transportpce-tox-verify-transportpce-master # timeout=10
Fetching upstream changes from git://devvexx.opendaylight.org/mirror/transportpce
> git --version # timeout=10
> git --version # 'git version 2.25.1'
using GIT_SSH to set credentials jenkins-ssh
Verifying host key using known hosts file
You're using 'Known hosts file' strategy to verify ssh host keys, but your known_hosts file does not exist, please go to 'Manage Jenkins' -> 'Security' -> 'Git Host Key Verification Configuration' and configure host key verification.
> git fetch --tags --force --progress -- git://devvexx.opendaylight.org/mirror/transportpce +refs/heads/*:refs/remotes/origin/* # timeout=10
> git config remote.origin.url git://devvexx.opendaylight.org/mirror/transportpce # timeout=10
> git config --add remote.origin.fetch +refs/heads/*:refs/remotes/origin/* # timeout=10
> git config remote.origin.url git://devvexx.opendaylight.org/mirror/transportpce # timeout=10
Fetching upstream changes from git://devvexx.opendaylight.org/mirror/transportpce
using GIT_SSH to set credentials jenkins-ssh
Verifying host key using known hosts file
You're using 'Known hosts file' strategy to verify ssh host keys, but your known_hosts file does not exist, please go to 'Manage Jenkins' -> 'Security' -> 'Git Host Key Verification Configuration' and configure host key verification.
> git fetch --tags --force --progress -- git://devvexx.opendaylight.org/mirror/transportpce refs/changes/13/113713/1 # timeout=10
> git rev-parse b98e56d16f3935af5b1f0dc784d9fa9928d1b357^{commit} # timeout=10
JENKINS-19022: warning: possible memory leak due to Git plugin usage; see: https://plugins.jenkins.io/git/#remove-git-plugin-buildsbybranch-builddata-script
Checking out Revision b98e56d16f3935af5b1f0dc784d9fa9928d1b357 (refs/changes/13/113713/1)
> git config core.sparsecheckout # timeout=10
> git checkout -f b98e56d16f3935af5b1f0dc784d9fa9928d1b357 # timeout=10
Commit message: "Yang file to augment OR topology"
> git rev-parse FETCH_HEAD^{commit} # timeout=10
> git rev-list --no-walk ee77c94d73171d4eb7a8f85abdfd387dc14ffd3a # timeout=10
> git remote # timeout=10
> git submodule init # timeout=10
> git submodule sync # timeout=10
> git config --get remote.origin.url # timeout=10
> git submodule init # timeout=10
> git config -f .gitmodules --get-regexp ^submodule\.(.+)\.url # timeout=10
ERROR: No submodules found.
provisioning config files...
copy managed file [npmrc] to file:/home/jenkins/.npmrc
copy managed file [pipconf] to file:/home/jenkins/.config/pip/pip.conf
[transportpce-tox-verify-transportpce-master] $ /bin/bash /tmp/jenkins185854976806533893.sh
---> python-tools-install.sh
Setup pyenv:
* system (set by /opt/pyenv/version)
* 3.8.13 (set by /opt/pyenv/version)
* 3.9.13 (set by /opt/pyenv/version)
* 3.10.13 (set by /opt/pyenv/version)
* 3.11.7 (set by /opt/pyenv/version)
lf-activate-venv(): INFO: Creating python3 venv at /tmp/venv-9VJz
lf-activate-venv(): INFO: Save venv in file: /tmp/.os_lf_venv
lf-activate-venv(): INFO: Installing: lftools
lf-activate-venv(): INFO: Adding /tmp/venv-9VJz/bin to PATH
Generating Requirements File
Python 3.11.7
pip 24.2 from /tmp/venv-9VJz/lib/python3.11/site-packages/pip (python 3.11)
appdirs==1.4.4 argcomplete==3.5.0 aspy.yaml==1.3.0 attrs==24.2.0 autopage==0.5.2 beautifulsoup4==4.12.3 boto3==1.35.27 botocore==1.35.27 bs4==0.0.2 cachetools==5.5.0 certifi==2024.8.30 cffi==1.17.1 cfgv==3.4.0 chardet==5.2.0 charset-normalizer==3.3.2 click==8.1.7 cliff==4.7.0 cmd2==2.4.3 cryptography==3.3.2 debtcollector==3.0.0 decorator==5.1.1 defusedxml==0.7.1 Deprecated==1.2.14 distlib==0.3.8 dnspython==2.6.1 docker==4.2.2 dogpile.cache==1.3.3 durationpy==0.7 email_validator==2.2.0 filelock==3.16.1 future==1.0.0 gitdb==4.0.11 GitPython==3.1.43 google-auth==2.35.0 httplib2==0.22.0 identify==2.6.1 idna==3.10 importlib-resources==1.5.0 iso8601==2.1.0 Jinja2==3.1.4 jmespath==1.0.1 jsonpatch==1.33 jsonpointer==3.0.0 jsonschema==4.23.0 jsonschema-specifications==2023.12.1 keystoneauth1==5.8.0 kubernetes==31.0.0 lftools==0.37.10 lxml==5.3.0 MarkupSafe==2.1.5 msgpack==1.1.0 multi_key_dict==2.0.3 munch==4.0.0 netaddr==1.3.0 netifaces==0.11.0 niet==1.4.2 nodeenv==1.9.1 oauth2client==4.1.3 oauthlib==3.2.2 openstacksdk==4.0.0 os-client-config==2.1.0 os-service-types==1.7.0 osc-lib==3.1.0 oslo.config==9.6.0 oslo.context==5.6.0 oslo.i18n==6.4.0 oslo.log==6.1.2 oslo.serialization==5.5.0 oslo.utils==7.3.0 packaging==24.1 pbr==6.1.0 platformdirs==4.3.6 prettytable==3.11.0 pyasn1==0.6.1 pyasn1_modules==0.4.1 pycparser==2.22 pygerrit2==2.0.15 PyGithub==2.4.0 PyJWT==2.9.0 PyNaCl==1.5.0 pyparsing==2.4.7 pyperclip==1.9.0 pyrsistent==0.20.0 python-cinderclient==9.6.0 python-dateutil==2.9.0.post0 python-heatclient==4.0.0 python-jenkins==1.8.2 python-keystoneclient==5.5.0 python-magnumclient==4.7.0 python-openstackclient==7.1.2 python-swiftclient==4.6.0 PyYAML==6.0.2 referencing==0.35.1 requests==2.32.3 requests-oauthlib==2.0.0 requestsexceptions==1.4.0 rfc3986==2.0.0 rpds-py==0.20.0 rsa==4.9 ruamel.yaml==0.18.6 ruamel.yaml.clib==0.2.8 s3transfer==0.10.2 simplejson==3.19.3 six==1.16.0 smmap==5.0.1 soupsieve==2.6 stevedore==5.3.0 tabulate==0.9.0 toml==0.10.2 tomlkit==0.13.2 tqdm==4.66.5 typing_extensions==4.12.2 tzdata==2024.2 urllib3==1.26.20 virtualenv==20.26.5 wcwidth==0.2.13 websocket-client==1.8.0 wrapt==1.16.0 xdg==6.0.0 xmltodict==0.13.0 yq==3.4.3
[EnvInject] - Injecting environment variables from a build step.
[EnvInject] - Injecting as environment variables the properties content
PYTHON=python3
[EnvInject] - Variables injected successfully.
[transportpce-tox-verify-transportpce-master] $ /bin/bash -l /tmp/jenkins7496605565756347290.sh ---> tox-install.sh + source /home/jenkins/lf-env.sh + lf-activate-venv --venv-file /tmp/.toxenv tox virtualenv urllib3~=1.26.15 ++ mktemp -d /tmp/venv-XXXX + lf_venv=/tmp/venv-T2os + local venv_file=/tmp/.os_lf_venv + local python=python3 + local options + local set_path=true + local install_args= ++ getopt -o np:v: -l no-path,system-site-packages,python:,venv-file: -n lf-activate-venv -- --venv-file /tmp/.toxenv tox virtualenv urllib3~=1.26.15 + options=' --venv-file '\''/tmp/.toxenv'\'' -- '\''tox'\'' '\''virtualenv'\'' '\''urllib3~=1.26.15'\''' + eval set -- ' --venv-file '\''/tmp/.toxenv'\'' -- '\''tox'\'' '\''virtualenv'\'' '\''urllib3~=1.26.15'\''' ++ set -- --venv-file /tmp/.toxenv -- tox virtualenv urllib3~=1.26.15 + true + case $1 in + venv_file=/tmp/.toxenv + shift 2 + true + case $1 in + shift + break + case $python in + local pkg_list= + [[ -d /opt/pyenv ]] + echo 'Setup pyenv:' Setup pyenv: + export PYENV_ROOT=/opt/pyenv + PYENV_ROOT=/opt/pyenv + export PATH=/opt/pyenv/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/opt/puppetlabs/bin + PATH=/opt/pyenv/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/opt/puppetlabs/bin + pyenv versions system 3.8.13 3.9.13 3.10.13 * 3.11.7 (set by /w/workspace/transportpce-tox-verify-transportpce-master/.python-version) + command -v pyenv ++ pyenv init - --no-rehash + eval 'PATH="$(bash --norc -ec '\''IFS=:; paths=($PATH); for i in ${!paths[@]}; do if [[ ${paths[i]} == "'\'''\''/opt/pyenv/shims'\'''\''" ]]; then unset '\''\'\'''\''paths[i]'\''\'\'''\''; fi; done; echo "${paths[*]}"'\'')" export PATH="/opt/pyenv/shims:${PATH}" export PYENV_SHELL=bash source '\''/opt/pyenv/libexec/../completions/pyenv.bash'\'' pyenv() { local command command="${1:-}" if [ "$#" -gt 0 ]; then shift fi case "$command" in rehash|shell) eval "$(pyenv "sh-$command" "$@")" ;; *) command pyenv "$command" "$@" ;; esac }' +++ bash --norc -ec 'IFS=:; paths=($PATH); for i in ${!paths[@]}; do if [[ ${paths[i]} == "/opt/pyenv/shims" ]]; then unset '\''paths[i]'\''; fi; done; echo "${paths[*]}"' ++ PATH=/opt/pyenv/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/opt/puppetlabs/bin ++ export PATH=/opt/pyenv/shims:/opt/pyenv/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/opt/puppetlabs/bin ++ PATH=/opt/pyenv/shims:/opt/pyenv/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/opt/puppetlabs/bin ++ export PYENV_SHELL=bash ++ PYENV_SHELL=bash ++ source /opt/pyenv/libexec/../completions/pyenv.bash +++ complete -F _pyenv pyenv ++ lf-pyver python3 ++ local py_version_xy=python3 ++ local py_version_xyz= ++ pyenv versions ++ local command ++ command=versions ++ '[' 1 -gt 0 ']' ++ shift ++ case "$command" in ++ command pyenv versions ++ pyenv versions ++ sed 's/^[ *]* //' ++ awk '{ print $1 }' ++ grep -E '^[0-9.]*[0-9]$' ++ [[ ! 
-s /tmp/.pyenv_versions ]] +++ grep '^3' /tmp/.pyenv_versions +++ sort -V +++ tail -n 1 ++ py_version_xyz=3.11.7 ++ [[ -z 3.11.7 ]] ++ echo 3.11.7 ++ return 0 + pyenv local 3.11.7 + local command + command=local + '[' 2 -gt 0 ']' + shift + case "$command" in + command pyenv local 3.11.7 + pyenv local 3.11.7 + for arg in "$@" + case $arg in + pkg_list+='tox ' + for arg in "$@" + case $arg in + pkg_list+='virtualenv ' + for arg in "$@" + case $arg in + pkg_list+='urllib3~=1.26.15 ' + [[ -f /tmp/.toxenv ]] + [[ ! -f /tmp/.toxenv ]] + [[ -n '' ]] + python3 -m venv /tmp/venv-T2os + echo 'lf-activate-venv(): INFO: Creating python3 venv at /tmp/venv-T2os' lf-activate-venv(): INFO: Creating python3 venv at /tmp/venv-T2os + echo /tmp/venv-T2os + echo 'lf-activate-venv(): INFO: Save venv in file: /tmp/.toxenv' lf-activate-venv(): INFO: Save venv in file: /tmp/.toxenv + /tmp/venv-T2os/bin/python3 -m pip install --upgrade --quiet pip virtualenv + [[ -z tox virtualenv urllib3~=1.26.15 ]] + echo 'lf-activate-venv(): INFO: Installing: tox virtualenv urllib3~=1.26.15 ' lf-activate-venv(): INFO: Installing: tox virtualenv urllib3~=1.26.15 + /tmp/venv-T2os/bin/python3 -m pip install --upgrade --quiet --upgrade-strategy eager tox virtualenv urllib3~=1.26.15 + type python3 + true + echo 'lf-activate-venv(): INFO: Adding /tmp/venv-T2os/bin to PATH' lf-activate-venv(): INFO: Adding /tmp/venv-T2os/bin to PATH + PATH=/tmp/venv-T2os/bin:/opt/pyenv/shims:/opt/pyenv/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/opt/puppetlabs/bin + return 0 + python3 --version Python 3.11.7 + python3 -m pip --version pip 24.2 from /tmp/venv-T2os/lib/python3.11/site-packages/pip (python 3.11) + python3 -m pip freeze cachetools==5.5.0 chardet==5.2.0 colorama==0.4.6 distlib==0.3.8 filelock==3.16.1 packaging==24.1 platformdirs==4.3.6 pluggy==1.5.0 pyproject-api==1.8.0 tox==4.20.0 urllib3==1.26.20 virtualenv==20.26.5 [transportpce-tox-verify-transportpce-master] $ /bin/sh -xe /tmp/jenkins8361777922758433048.sh [EnvInject] - Injecting environment variables from a build step. [EnvInject] - Injecting as environment variables the properties content PARALLEL=True [EnvInject] - Variables injected successfully. [transportpce-tox-verify-transportpce-master] $ /bin/bash -l /tmp/jenkins610744046380759036.sh ---> tox-run.sh + PATH=/home/jenkins/.local/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/opt/puppetlabs/bin + ARCHIVE_TOX_DIR=/w/workspace/transportpce-tox-verify-transportpce-master/archives/tox + ARCHIVE_DOC_DIR=/w/workspace/transportpce-tox-verify-transportpce-master/archives/docs + mkdir -p /w/workspace/transportpce-tox-verify-transportpce-master/archives/tox + cd /w/workspace/transportpce-tox-verify-transportpce-master/. 
+ source /home/jenkins/lf-env.sh + lf-activate-venv --venv-file /tmp/.toxenv tox virtualenv urllib3~=1.26.15 ++ mktemp -d /tmp/venv-XXXX + lf_venv=/tmp/venv-CctJ + local venv_file=/tmp/.os_lf_venv + local python=python3 + local options + local set_path=true + local install_args= ++ getopt -o np:v: -l no-path,system-site-packages,python:,venv-file: -n lf-activate-venv -- --venv-file /tmp/.toxenv tox virtualenv urllib3~=1.26.15 + options=' --venv-file '\''/tmp/.toxenv'\'' -- '\''tox'\'' '\''virtualenv'\'' '\''urllib3~=1.26.15'\''' + eval set -- ' --venv-file '\''/tmp/.toxenv'\'' -- '\''tox'\'' '\''virtualenv'\'' '\''urllib3~=1.26.15'\''' ++ set -- --venv-file /tmp/.toxenv -- tox virtualenv urllib3~=1.26.15 + true + case $1 in + venv_file=/tmp/.toxenv + shift 2 + true + case $1 in + shift + break + case $python in + local pkg_list= + [[ -d /opt/pyenv ]] + echo 'Setup pyenv:' Setup pyenv: + export PYENV_ROOT=/opt/pyenv + PYENV_ROOT=/opt/pyenv + export PATH=/opt/pyenv/bin:/home/jenkins/.local/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/opt/puppetlabs/bin + PATH=/opt/pyenv/bin:/home/jenkins/.local/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/opt/puppetlabs/bin + pyenv versions system 3.8.13 3.9.13 3.10.13 * 3.11.7 (set by /w/workspace/transportpce-tox-verify-transportpce-master/.python-version) + command -v pyenv ++ pyenv init - --no-rehash + eval 'PATH="$(bash --norc -ec '\''IFS=:; paths=($PATH); for i in ${!paths[@]}; do if [[ ${paths[i]} == "'\'''\''/opt/pyenv/shims'\'''\''" ]]; then unset '\''\'\'''\''paths[i]'\''\'\'''\''; fi; done; echo "${paths[*]}"'\'')" export PATH="/opt/pyenv/shims:${PATH}" export PYENV_SHELL=bash source '\''/opt/pyenv/libexec/../completions/pyenv.bash'\'' pyenv() { local command command="${1:-}" if [ "$#" -gt 0 ]; then shift fi case "$command" in rehash|shell) eval "$(pyenv "sh-$command" "$@")" ;; *) command pyenv "$command" "$@" ;; esac }' +++ bash --norc -ec 'IFS=:; paths=($PATH); for i in ${!paths[@]}; do if [[ ${paths[i]} == "/opt/pyenv/shims" ]]; then unset '\''paths[i]'\''; fi; done; echo "${paths[*]}"' ++ PATH=/opt/pyenv/bin:/home/jenkins/.local/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/opt/puppetlabs/bin ++ export PATH=/opt/pyenv/shims:/opt/pyenv/bin:/home/jenkins/.local/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/opt/puppetlabs/bin ++ PATH=/opt/pyenv/shims:/opt/pyenv/bin:/home/jenkins/.local/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/opt/puppetlabs/bin ++ export PYENV_SHELL=bash ++ PYENV_SHELL=bash ++ source /opt/pyenv/libexec/../completions/pyenv.bash +++ complete -F _pyenv pyenv ++ lf-pyver python3 ++ local py_version_xy=python3 ++ local py_version_xyz= ++ sed 's/^[ *]* //' ++ awk '{ print $1 }' ++ pyenv versions ++ local command ++ command=versions ++ '[' 1 -gt 0 ']' ++ shift ++ case "$command" in ++ command pyenv versions ++ pyenv versions ++ grep -E '^[0-9.]*[0-9]$' ++ [[ ! 
-s /tmp/.pyenv_versions ]] +++ grep '^3' /tmp/.pyenv_versions +++ sort -V +++ tail -n 1 ++ py_version_xyz=3.11.7 ++ [[ -z 3.11.7 ]] ++ echo 3.11.7 ++ return 0 + pyenv local 3.11.7 + local command + command=local + '[' 2 -gt 0 ']' + shift + case "$command" in + command pyenv local 3.11.7 + pyenv local 3.11.7 + for arg in "$@" + case $arg in + pkg_list+='tox ' + for arg in "$@" + case $arg in + pkg_list+='virtualenv ' + for arg in "$@" + case $arg in + pkg_list+='urllib3~=1.26.15 ' + [[ -f /tmp/.toxenv ]] ++ cat /tmp/.toxenv + lf_venv=/tmp/venv-T2os + echo 'lf-activate-venv(): INFO: Reuse venv:/tmp/venv-T2os from' file:/tmp/.toxenv lf-activate-venv(): INFO: Reuse venv:/tmp/venv-T2os from file:/tmp/.toxenv + /tmp/venv-T2os/bin/python3 -m pip install --upgrade --quiet pip virtualenv + [[ -z tox virtualenv urllib3~=1.26.15 ]] + echo 'lf-activate-venv(): INFO: Installing: tox virtualenv urllib3~=1.26.15 ' lf-activate-venv(): INFO: Installing: tox virtualenv urllib3~=1.26.15 + /tmp/venv-T2os/bin/python3 -m pip install --upgrade --quiet --upgrade-strategy eager tox virtualenv urllib3~=1.26.15 + type python3 + true + echo 'lf-activate-venv(): INFO: Adding /tmp/venv-T2os/bin to PATH' lf-activate-venv(): INFO: Adding /tmp/venv-T2os/bin to PATH + PATH=/tmp/venv-T2os/bin:/opt/pyenv/shims:/opt/pyenv/bin:/home/jenkins/.local/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/opt/puppetlabs/bin + return 0 + [[ -d /opt/pyenv ]] + echo '---> Setting up pyenv' ---> Setting up pyenv + export PYENV_ROOT=/opt/pyenv + PYENV_ROOT=/opt/pyenv + export PATH=/opt/pyenv/bin:/tmp/venv-T2os/bin:/opt/pyenv/shims:/opt/pyenv/bin:/home/jenkins/.local/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/opt/puppetlabs/bin + PATH=/opt/pyenv/bin:/tmp/venv-T2os/bin:/opt/pyenv/shims:/opt/pyenv/bin:/home/jenkins/.local/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/opt/puppetlabs/bin ++ pwd + PYTHONPATH=/w/workspace/transportpce-tox-verify-transportpce-master + export PYTHONPATH + export TOX_TESTENV_PASSENV=PYTHONPATH + TOX_TESTENV_PASSENV=PYTHONPATH + tox --version 4.20.0 from /tmp/venv-T2os/lib/python3.11/site-packages/tox/__init__.py + PARALLEL=True + TOX_OPTIONS_LIST= + [[ -n '' ]] + case ${PARALLEL,,} in + TOX_OPTIONS_LIST=' --parallel auto --parallel-live' + tox --parallel auto --parallel-live + tee -a /w/workspace/transportpce-tox-verify-transportpce-master/archives/tox/tox.log docs: install_deps> python -I -m pip install -r docs/requirements.txt checkbashisms: freeze> python -m pip freeze --all buildcontroller: install_deps> python -I -m pip install 'setuptools>=7.0' -r /w/workspace/transportpce-tox-verify-transportpce-master/tests/requirements.txt -r /w/workspace/transportpce-tox-verify-transportpce-master/tests/test-requirements.txt docs-linkcheck: install_deps> python -I -m pip install -r docs/requirements.txt checkbashisms: pip==24.2,setuptools==75.1.0,wheel==0.44.0 checkbashisms: commands[0] /w/workspace/transportpce-tox-verify-transportpce-master/tests> ./fixCIcentOS8reposMirrors.sh checkbashisms: commands[1] /w/workspace/transportpce-tox-verify-transportpce-master/tests> sh -c 'command checkbashisms>/dev/null || sudo yum install -y devscripts-checkbashisms || sudo yum install -y devscripts-minimal || sudo yum install -y devscripts || sudo yum install -y 
https://archives.fedoraproject.org/pub/archive/fedora/linux/releases/31/Everything/x86_64/os/Packages/d/devscripts-checkbashisms-2.19.6-2.fc31.x86_64.rpm || (echo "checkbashisms command not found - please install it (e.g. sudo apt-get install devscripts | yum install devscripts-minimal )" >&2 && exit 1)'
checkbashisms: commands[2] /w/workspace/transportpce-tox-verify-transportpce-master/tests> find . -not -path '*/\.*' -name '*.sh' -exec checkbashisms -f '{}' +
script ./reflectwarn.sh does not appear to have a #! interpreter line; you may get strange results
checkbashisms: OK ✔ in 3.31 seconds
pre-commit: install_deps> python -I -m pip install pre-commit
pre-commit: freeze> python -m pip freeze --all
pre-commit: cfgv==3.4.0,distlib==0.3.8,filelock==3.16.1,identify==2.6.1,nodeenv==1.9.1,pip==24.2,platformdirs==4.3.6,pre-commit==3.8.0,PyYAML==6.0.2,setuptools==75.1.0,virtualenv==20.26.5,wheel==0.44.0
pre-commit: commands[0] /w/workspace/transportpce-tox-verify-transportpce-master/tests> ./fixCIcentOS8reposMirrors.sh
pre-commit: commands[1] /w/workspace/transportpce-tox-verify-transportpce-master/tests> sh -c 'which cpan || sudo yum install -y perl-CPAN || (echo "cpan command not found - please install it (e.g. sudo apt-get install perl-modules | yum install perl-CPAN )" >&2 && exit 1)'
/usr/bin/cpan
pre-commit: commands[2] /w/workspace/transportpce-tox-verify-transportpce-master/tests> pre-commit run --all-files --show-diff-on-failure
[INFO] Initializing environment for https://github.com/pre-commit/pre-commit-hooks.
[INFO] Initializing environment for https://github.com/jorisroovers/gitlint.
[INFO] Initializing environment for https://github.com/jorisroovers/gitlint:./gitlint-core[trusted-deps].
[INFO] Initializing environment for https://github.com/Lucas-C/pre-commit-hooks.
buildcontroller: freeze> python -m pip freeze --all
buildcontroller: bcrypt==4.2.0,certifi==2024.8.30,cffi==1.17.1,charset-normalizer==3.3.2,cryptography==43.0.1,dict2xml==1.7.6,idna==3.10,iniconfig==2.0.0,lxml==5.3.0,netconf-client==3.1.1,packaging==24.1,paramiko==3.5.0,pip==24.2,pluggy==1.5.0,psutil==6.0.0,pycparser==2.22,PyNaCl==1.5.0,pytest==8.3.3,requests==2.32.3,setuptools==75.1.0,urllib3==2.2.3,wheel==0.44.0
buildcontroller: commands[0] /w/workspace/transportpce-tox-verify-transportpce-master/tests> ./build_controller.sh
+ update-java-alternatives -l
java-1.11.0-openjdk-amd64 1111 /usr/lib/jvm/java-1.11.0-openjdk-amd64
java-1.12.0-openjdk-amd64 1211 /usr/lib/jvm/java-1.12.0-openjdk-amd64
java-1.17.0-openjdk-amd64 1711 /usr/lib/jvm/java-1.17.0-openjdk-amd64
java-1.21.0-openjdk-amd64 2111 /usr/lib/jvm/java-1.21.0-openjdk-amd64
java-1.8.0-openjdk-amd64 1081 /usr/lib/jvm/java-1.8.0-openjdk-amd64
+ sudo update-java-alternatives -s java-1.21.0-openjdk-amd64
[INFO] Initializing environment for https://github.com/pre-commit/mirrors-autopep8.
[INFO] Initializing environment for https://github.com/perltidy/perltidy.
+ + java -version sed -n ;s/.* version "\(.*\)\.\(.*\)\..*".*$/\1/p;
[INFO] Installing environment for https://github.com/pre-commit/pre-commit-hooks.
[INFO] Once installed this environment will be reused.
[INFO] This may take a few minutes...
+ JAVA_VER=21
+ echo 21
21
+ sed -n ;s/javac \(.*\)\.\(.*\)\..*.*$/\1/p;
+ javac -version
+ JAVAC_VER=21
21
ok, java is 21 or newer
+ echo 21
+ [ 21 -ge 21 ]
+ [ 21 -ge 21 ]
+ echo ok, java is 21 or newer
+ wget -nv https://dlcdn.apache.org/maven/maven-3/3.9.8/binaries/apache-maven-3.9.8-bin.tar.gz -P /tmp
2024-09-26 09:00:12 URL:https://dlcdn.apache.org/maven/maven-3/3.9.8/binaries/apache-maven-3.9.8-bin.tar.gz [9083702/9083702] -> "/tmp/apache-maven-3.9.8-bin.tar.gz" [1]
+ sudo mkdir -p /opt
+ sudo tar xf /tmp/apache-maven-3.9.8-bin.tar.gz -C /opt
+ sudo ln -s /opt/apache-maven-3.9.8 /opt/maven
+ sudo ln -s /opt/maven/bin/mvn /usr/bin/mvn
+ mvn --version
Apache Maven 3.9.8 (36645f6c9b5079805ea5009217e36f2cffd34256)
Maven home: /opt/maven
Java version: 21.0.4, vendor: Ubuntu, runtime: /usr/lib/jvm/java-21-openjdk-amd64
Default locale: en, platform encoding: UTF-8
OS name: "linux", version: "5.4.0-190-generic", arch: "amd64", family: "unix"
NOTE: Picked up JDK_JAVA_OPTIONS: --add-opens=java.base/java.io=ALL-UNNAMED --add-opens=java.base/java.lang=ALL-UNNAMED --add-opens=java.base/java.lang.invoke=ALL-UNNAMED --add-opens=java.base/java.lang.reflect=ALL-UNNAMED --add-opens=java.base/java.net=ALL-UNNAMED --add-opens=java.base/java.nio=ALL-UNNAMED --add-opens=java.base/java.nio.charset=ALL-UNNAMED --add-opens=java.base/java.nio.file=ALL-UNNAMED --add-opens=java.base/java.util=ALL-UNNAMED --add-opens=java.base/java.util.jar=ALL-UNNAMED --add-opens=java.base/java.util.stream=ALL-UNNAMED --add-opens=java.base/java.util.zip=ALL-UNNAMED --add-opens java.base/sun.nio.ch=ALL-UNNAMED --add-opens java.base/sun.nio.fs=ALL-UNNAMED -Xlog:disable
[INFO] Installing environment for https://github.com/Lucas-C/pre-commit-hooks.
[INFO] Once installed this environment will be reused.
[INFO] This may take a few minutes...
[INFO] Installing environment for https://github.com/pre-commit/mirrors-autopep8.
[INFO] Once installed this environment will be reused.
[INFO] This may take a few minutes...
[INFO] Installing environment for https://github.com/perltidy/perltidy.
[INFO] Once installed this environment will be reused.
[INFO] This may take a few minutes...
trim trailing whitespace.................................................docs: freeze> python -m pip freeze --all Passed Tabs remover.............................................................Passed autopep8.................................................................docs: alabaster==0.7.16,attrs==24.2.0,babel==2.16.0,blockdiag==3.0.0,certifi==2024.8.30,charset-normalizer==3.3.2,contourpy==1.3.0,cycler==0.12.1,docutils==0.20.1,fonttools==4.54.1,funcparserlib==2.0.0a0,future==1.0.0,idna==3.10,imagesize==1.4.1,Jinja2==3.1.4,jsonschema==3.2.0,kiwisolver==1.4.7,lfdocs-conf==0.9.0,MarkupSafe==2.1.5,matplotlib==3.9.2,numpy==2.1.1,nwdiag==3.0.0,packaging==24.1,pillow==10.4.0,pip==24.2,Pygments==2.18.0,pyparsing==3.1.4,pyrsistent==0.20.0,python-dateutil==2.9.0.post0,PyYAML==6.0.2,requests==2.32.3,requests-file==1.5.1,seqdiag==3.0.0,setuptools==75.1.0,six==1.16.0,snowballstemmer==2.2.0,Sphinx==7.4.7,sphinx-bootstrap-theme==0.8.1,sphinx-data-viewer==0.1.5,sphinx-rtd-theme==2.0.0,sphinx-tabs==3.4.5,sphinxcontrib-applehelp==2.0.0,sphinxcontrib-blockdiag==3.0.0,sphinxcontrib-devhelp==2.0.0,sphinxcontrib-htmlhelp==2.1.0,sphinxcontrib-jquery==4.1,sphinxcontrib-jsmath==1.0.1,sphinxcontrib-needs==0.7.9,sphinxcontrib-nwdiag==2.0.0,sphinxcontrib-plantuml==0.30,sphinxcontrib-qthelp==2.0.0,sphinxcontrib-seqdiag==3.0.0,sphinxcontrib-serializinghtml==2.0.0,sphinxcontrib-swaggerdoc==0.1.7,urllib3==2.2.3,webcolors==24.8.0,wheel==0.44.0 docs: commands[0] /w/workspace/transportpce-tox-verify-transportpce-master/tests> sphinx-build -q -W --keep-going -b html -n -d /w/workspace/transportpce-tox-verify-transportpce-master/.tox/docs/tmp/doctrees ../docs/ /w/workspace/transportpce-tox-verify-transportpce-master/docs/_build/html docs-linkcheck: freeze> python -m pip freeze --all docs-linkcheck: alabaster==0.7.16,attrs==24.2.0,babel==2.16.0,blockdiag==3.0.0,certifi==2024.8.30,charset-normalizer==3.3.2,contourpy==1.3.0,cycler==0.12.1,docutils==0.20.1,fonttools==4.54.1,funcparserlib==2.0.0a0,future==1.0.0,idna==3.10,imagesize==1.4.1,Jinja2==3.1.4,jsonschema==3.2.0,kiwisolver==1.4.7,lfdocs-conf==0.9.0,MarkupSafe==2.1.5,matplotlib==3.9.2,numpy==2.1.1,nwdiag==3.0.0,packaging==24.1,pillow==10.4.0,pip==24.2,Pygments==2.18.0,pyparsing==3.1.4,pyrsistent==0.20.0,python-dateutil==2.9.0.post0,PyYAML==6.0.2,requests==2.32.3,requests-file==1.5.1,seqdiag==3.0.0,setuptools==75.1.0,six==1.16.0,snowballstemmer==2.2.0,Sphinx==7.4.7,sphinx-bootstrap-theme==0.8.1,sphinx-data-viewer==0.1.5,sphinx-rtd-theme==2.0.0,sphinx-tabs==3.4.5,sphinxcontrib-applehelp==2.0.0,sphinxcontrib-blockdiag==3.0.0,sphinxcontrib-devhelp==2.0.0,sphinxcontrib-htmlhelp==2.1.0,sphinxcontrib-jquery==4.1,sphinxcontrib-jsmath==1.0.1,sphinxcontrib-needs==0.7.9,sphinxcontrib-nwdiag==2.0.0,sphinxcontrib-plantuml==0.30,sphinxcontrib-qthelp==2.0.0,sphinxcontrib-seqdiag==3.0.0,sphinxcontrib-serializinghtml==2.0.0,sphinxcontrib-swaggerdoc==0.1.7,urllib3==2.2.3,webcolors==24.8.0,wheel==0.44.0 docs-linkcheck: commands[0] /w/workspace/transportpce-tox-verify-transportpce-master/tests> sphinx-build -q -b linkcheck -d /w/workspace/transportpce-tox-verify-transportpce-master/.tox/docs-linkcheck/tmp/doctrees ../docs/ /w/workspace/transportpce-tox-verify-transportpce-master/docs/_build/linkcheck /w/workspace/transportpce-tox-verify-transportpce-master/.tox/docs-linkcheck/lib/python3.11/site-packages/sphinx/builders/linkcheck.py:86: RemovedInSphinx80Warning: The default value for 'linkcheck_report_timeouts_as_broken' will change to False in Sphinx 8, meaning that request timeouts 
will be reported with a new 'timeout' status, instead of as 'broken'. This is intended to provide more detail as to the failure mode. See https://github.com/sphinx-doc/sphinx/issues/11868 for details.
warnings.warn(deprecation_msg, RemovedInSphinx80Warning, stacklevel=1)
Passed
docs: OK ✔ in 47.09 seconds
pylint: install_deps> python -I -m pip install 'pylint>=2.6.0'
perltidy.................................................................Passed
pre-commit: commands[3] /w/workspace/transportpce-tox-verify-transportpce-master/tests> pre-commit run gitlint-ci --hook-stage manual
[INFO] Installing environment for https://github.com/jorisroovers/gitlint.
[INFO] Once installed this environment will be reused.
[INFO] This may take a few minutes...
docs-linkcheck: OK ✔ in 50.11 seconds
pylint: freeze> python -m pip freeze --all
pylint: astroid==3.3.4,dill==0.3.8,isort==5.13.2,mccabe==0.7.0,pip==24.2,platformdirs==4.3.6,pylint==3.3.1,setuptools==75.1.0,tomlkit==0.13.2,wheel==0.44.0
pylint: commands[0] /w/workspace/transportpce-tox-verify-transportpce-master/tests> find transportpce_tests/ -name '*.py' -exec pylint --fail-under=10 --max-line-length=120 --disable=missing-docstring,import-error --disable=fixme --disable=duplicate-code '--module-rgx=([a-z0-9_]+$)|([0-9.]{1,30}$)' '--method-rgx=(([a-z_][a-zA-Z0-9_]{2,})|(_[a-z0-9_]*)|(__[a-zA-Z][a-zA-Z0-9_]+__))$' '--variable-rgx=[a-zA-Z_][a-zA-Z0-9_]{1,30}$' '{}' +
gitlint..................................................................Passed
------------------------------------
Your code has been rated at 10.00/10
pre-commit: OK ✔ in 53.02 seconds
pylint: OK ✔ in 26.78 seconds
buildcontroller: OK ✔ in 2 minutes 12.36 seconds
testsPCE: install_deps> python -I -m pip install gnpy4tpce==2.4.7 'setuptools>=7.0' -r /w/workspace/transportpce-tox-verify-transportpce-master/tests/requirements.txt -r /w/workspace/transportpce-tox-verify-transportpce-master/tests/test-requirements.txt
build_karaf_tests221: install_deps> python -I -m pip install 'setuptools>=7.0' -r /w/workspace/transportpce-tox-verify-transportpce-master/tests/requirements.txt -r /w/workspace/transportpce-tox-verify-transportpce-master/tests/test-requirements.txt
build_karaf_tests121: install_deps> python -I -m pip install 'setuptools>=7.0' -r /w/workspace/transportpce-tox-verify-transportpce-master/tests/requirements.txt -r /w/workspace/transportpce-tox-verify-transportpce-master/tests/test-requirements.txt
sims: install_deps> python -I -m pip install 'setuptools>=7.0' -r /w/workspace/transportpce-tox-verify-transportpce-master/tests/requirements.txt -r /w/workspace/transportpce-tox-verify-transportpce-master/tests/test-requirements.txt
build_karaf_tests221: freeze> python -m pip freeze --all
sims: freeze> python -m pip freeze --all
build_karaf_tests121: freeze> python -m pip freeze --all
build_karaf_tests221: bcrypt==4.2.0,certifi==2024.8.30,cffi==1.17.1,charset-normalizer==3.3.2,cryptography==43.0.1,dict2xml==1.7.6,idna==3.10,iniconfig==2.0.0,lxml==5.3.0,netconf-client==3.1.1,packaging==24.1,paramiko==3.5.0,pip==24.2,pluggy==1.5.0,psutil==6.0.0,pycparser==2.22,PyNaCl==1.5.0,pytest==8.3.3,requests==2.32.3,setuptools==75.1.0,urllib3==2.2.3,wheel==0.44.0
build_karaf_tests221: commands[0] /w/workspace/transportpce-tox-verify-transportpce-master/tests> ./build_karaf_for_tests.sh
NOTE: Picked up JDK_JAVA_OPTIONS: --add-opens=java.base/java.io=ALL-UNNAMED --add-opens=java.base/java.lang=ALL-UNNAMED --add-opens=java.base/java.lang.invoke=ALL-UNNAMED --add-opens=java.base/java.lang.reflect=ALL-UNNAMED --add-opens=java.base/java.net=ALL-UNNAMED --add-opens=java.base/java.nio=ALL-UNNAMED --add-opens=java.base/java.nio.charset=ALL-UNNAMED --add-opens=java.base/java.nio.file=ALL-UNNAMED --add-opens=java.base/java.util=ALL-UNNAMED --add-opens=java.base/java.util.jar=ALL-UNNAMED --add-opens=java.base/java.util.stream=ALL-UNNAMED --add-opens=java.base/java.util.zip=ALL-UNNAMED --add-opens java.base/sun.nio.ch=ALL-UNNAMED --add-opens java.base/sun.nio.fs=ALL-UNNAMED -Xlog:disable
build_karaf_tests121: bcrypt==4.2.0,certifi==2024.8.30,cffi==1.17.1,charset-normalizer==3.3.2,cryptography==43.0.1,dict2xml==1.7.6,idna==3.10,iniconfig==2.0.0,lxml==5.3.0,netconf-client==3.1.1,packaging==24.1,paramiko==3.5.0,pip==24.2,pluggy==1.5.0,psutil==6.0.0,pycparser==2.22,PyNaCl==1.5.0,pytest==8.3.3,requests==2.32.3,setuptools==75.1.0,urllib3==2.2.3,wheel==0.44.0
build_karaf_tests121: commands[0] /w/workspace/transportpce-tox-verify-transportpce-master/tests> ./build_karaf_for_tests.sh
sims: bcrypt==4.2.0,certifi==2024.8.30,cffi==1.17.1,charset-normalizer==3.3.2,cryptography==43.0.1,dict2xml==1.7.6,idna==3.10,iniconfig==2.0.0,lxml==5.3.0,netconf-client==3.1.1,packaging==24.1,paramiko==3.5.0,pip==24.2,pluggy==1.5.0,psutil==6.0.0,pycparser==2.22,PyNaCl==1.5.0,pytest==8.3.3,requests==2.32.3,setuptools==75.1.0,urllib3==2.2.3,wheel==0.44.0
sims: commands[0] /w/workspace/transportpce-tox-verify-transportpce-master/tests> ./install_lightynode.sh
Using lighynode version 20.1.0.2
Installing lightynode device to ./lightynode/lightynode-openroadm-device directory
NOTE: Picked up JDK_JAVA_OPTIONS: --add-opens=java.base/java.io=ALL-UNNAMED --add-opens=java.base/java.lang=ALL-UNNAMED --add-opens=java.base/java.lang.invoke=ALL-UNNAMED --add-opens=java.base/java.lang.reflect=ALL-UNNAMED --add-opens=java.base/java.net=ALL-UNNAMED --add-opens=java.base/java.nio=ALL-UNNAMED --add-opens=java.base/java.nio.charset=ALL-UNNAMED --add-opens=java.base/java.nio.file=ALL-UNNAMED --add-opens=java.base/java.util=ALL-UNNAMED --add-opens=java.base/java.util.jar=ALL-UNNAMED --add-opens=java.base/java.util.stream=ALL-UNNAMED --add-opens=java.base/java.util.zip=ALL-UNNAMED --add-opens java.base/sun.nio.ch=ALL-UNNAMED --add-opens java.base/sun.nio.fs=ALL-UNNAMED -Xlog:disable
sims: OK ✔ in 13.25 seconds
build_karaf_tests71: install_deps> python -I -m pip install 'setuptools>=7.0' -r /w/workspace/transportpce-tox-verify-transportpce-master/tests/requirements.txt -r /w/workspace/transportpce-tox-verify-transportpce-master/tests/test-requirements.txt
build_karaf_tests71: freeze> python -m pip freeze --all
build_karaf_tests71: bcrypt==4.2.0,certifi==2024.8.30,cffi==1.17.1,charset-normalizer==3.3.2,cryptography==43.0.1,dict2xml==1.7.6,idna==3.10,iniconfig==2.0.0,lxml==5.3.0,netconf-client==3.1.1,packaging==24.1,paramiko==3.5.0,pip==24.2,pluggy==1.5.0,psutil==6.0.0,pycparser==2.22,PyNaCl==1.5.0,pytest==8.3.3,requests==2.32.3,setuptools==75.1.0,urllib3==2.2.3,wheel==0.44.0
build_karaf_tests71: commands[0] /w/workspace/transportpce-tox-verify-transportpce-master/tests> ./build_karaf_for_tests.sh
NOTE: Picked up JDK_JAVA_OPTIONS: --add-opens=java.base/java.io=ALL-UNNAMED --add-opens=java.base/java.lang=ALL-UNNAMED --add-opens=java.base/java.lang.invoke=ALL-UNNAMED --add-opens=java.base/java.lang.reflect=ALL-UNNAMED --add-opens=java.base/java.net=ALL-UNNAMED --add-opens=java.base/java.nio=ALL-UNNAMED --add-opens=java.base/java.nio.charset=ALL-UNNAMED --add-opens=java.base/java.nio.file=ALL-UNNAMED --add-opens=java.base/java.util=ALL-UNNAMED --add-opens=java.base/java.util.jar=ALL-UNNAMED --add-opens=java.base/java.util.stream=ALL-UNNAMED --add-opens=java.base/java.util.zip=ALL-UNNAMED --add-opens java.base/sun.nio.ch=ALL-UNNAMED --add-opens java.base/sun.nio.fs=ALL-UNNAMED -Xlog:disable
build_karaf_tests221: OK ✔ in 1 minute 4.21 seconds
build_karaf_tests_hybrid: install_deps> python -I -m pip install 'setuptools>=7.0' -r /w/workspace/transportpce-tox-verify-transportpce-master/tests/requirements.txt -r /w/workspace/transportpce-tox-verify-transportpce-master/tests/test-requirements.txt
build_karaf_tests121: OK ✔ in 1 minute 5.94 seconds
tests_tapi: install_deps> python -I -m pip install 'setuptools>=7.0' -r /w/workspace/transportpce-tox-verify-transportpce-master/tests/requirements.txt -r /w/workspace/transportpce-tox-verify-transportpce-master/tests/test-requirements.txt
build_karaf_tests_hybrid: freeze> python -m pip freeze --all
build_karaf_tests_hybrid: bcrypt==4.2.0,certifi==2024.8.30,cffi==1.17.1,charset-normalizer==3.3.2,cryptography==43.0.1,dict2xml==1.7.6,idna==3.10,iniconfig==2.0.0,lxml==5.3.0,netconf-client==3.1.1,packaging==24.1,paramiko==3.5.0,pip==24.2,pluggy==1.5.0,psutil==6.0.0,pycparser==2.22,PyNaCl==1.5.0,pytest==8.3.3,requests==2.32.3,setuptools==75.1.0,urllib3==2.2.3,wheel==0.44.0
build_karaf_tests_hybrid: commands[0] /w/workspace/transportpce-tox-verify-transportpce-master/tests> ./build_karaf_for_tests.sh
tests_tapi: freeze> python -m pip freeze --all
NOTE: Picked up JDK_JAVA_OPTIONS: --add-opens=java.base/java.io=ALL-UNNAMED --add-opens=java.base/java.lang=ALL-UNNAMED --add-opens=java.base/java.lang.invoke=ALL-UNNAMED --add-opens=java.base/java.lang.reflect=ALL-UNNAMED --add-opens=java.base/java.net=ALL-UNNAMED --add-opens=java.base/java.nio=ALL-UNNAMED --add-opens=java.base/java.nio.charset=ALL-UNNAMED --add-opens=java.base/java.nio.file=ALL-UNNAMED --add-opens=java.base/java.util=ALL-UNNAMED --add-opens=java.base/java.util.jar=ALL-UNNAMED --add-opens=java.base/java.util.stream=ALL-UNNAMED --add-opens=java.base/java.util.zip=ALL-UNNAMED --add-opens java.base/sun.nio.ch=ALL-UNNAMED --add-opens java.base/sun.nio.fs=ALL-UNNAMED -Xlog:disable
tests_tapi: bcrypt==4.2.0,certifi==2024.8.30,cffi==1.17.1,charset-normalizer==3.3.2,cryptography==43.0.1,dict2xml==1.7.6,idna==3.10,iniconfig==2.0.0,lxml==5.3.0,netconf-client==3.1.1,packaging==24.1,paramiko==3.5.0,pip==24.2,pluggy==1.5.0,psutil==6.0.0,pycparser==2.22,PyNaCl==1.5.0,pytest==8.3.3,requests==2.32.3,setuptools==75.1.0,urllib3==2.2.3,wheel==0.44.0
tests_tapi: commands[0] /w/workspace/transportpce-tox-verify-transportpce-master/tests> ./launch_tests.sh tapi
using environment variables from ./karaf221.env
pytest -q transportpce_tests/tapi/test01_abstracted_topology.py
build_karaf_tests71: OK ✔ in 1 minute 26.33 seconds
testsPCE: freeze> python -m pip freeze --all
testsPCE: bcrypt==4.2.0,certifi==2024.8.30,cffi==1.17.1,charset-normalizer==3.3.2,click==8.1.7,contourpy==1.3.0,cryptography==3.3.2,cycler==0.12.1,dict2xml==1.7.6,Flask==2.1.3,Flask-Injector==0.14.0,fonttools==4.54.1,gnpy4tpce==2.4.7,idna==3.10,iniconfig==2.0.0,injector==0.22.0,itsdangerous==2.2.0,Jinja2==3.1.4,kiwisolver==1.4.7,lxml==5.3.0,MarkupSafe==2.1.5,matplotlib==3.9.2,netconf-client==3.1.1,networkx==2.8.8,numpy==1.26.4,packaging==24.1,pandas==1.5.3,paramiko==3.5.0,pbr==5.11.1,pillow==10.4.0,pip==24.2,pluggy==1.5.0,psutil==6.0.0,pycparser==2.22,PyNaCl==1.5.0,pyparsing==3.1.4,pytest==8.3.3,python-dateutil==2.9.0.post0,pytz==2024.2,requests==2.32.3,scipy==1.14.1,setuptools==50.3.2,six==1.16.0,urllib3==2.2.3,Werkzeug==2.0.3,wheel==0.44.0,xlrd==1.2.0
testsPCE: commands[0] /w/workspace/transportpce-tox-verify-transportpce-master/tests> ./launch_tests.sh pce
pytest -q transportpce_tests/pce/test01_pce.py
.................................................. [100%]
20 passed in 131.45s (0:02:11)
pytest -q transportpce_tests/pce/test02_pce_400G.py
.............. [100%]
9 passed in 40.36s
pytest -q transportpce_tests/pce/test03_gnpy.py
........... [100%]
8 passed in 38.09s
pytest -q transportpce_tests/pce/test04_pce_bug_fix.py
............ [100%]
3 passed in 35.81s
build_karaf_tests_hybrid: OK ✔ in 1 minute 9.46 seconds
testsPCE: OK ✔ in 6 minutes 14.77 seconds
tests121: install_deps> python -I -m pip install 'setuptools>=7.0' -r /w/workspace/transportpce-tox-verify-transportpce-master/tests/requirements.txt -r /w/workspace/transportpce-tox-verify-transportpce-master/tests/test-requirements.txt
tests121: freeze> python -m pip freeze --all
tests121: bcrypt==4.2.0,certifi==2024.8.30,cffi==1.17.1,charset-normalizer==3.3.2,cryptography==43.0.1,dict2xml==1.7.6,idna==3.10,iniconfig==2.0.0,lxml==5.3.0,netconf-client==3.1.1,packaging==24.1,paramiko==3.5.0,pip==24.2,pluggy==1.5.0,psutil==6.0.0,pycparser==2.22,PyNaCl==1.5.0,pytest==8.3.3,requests==2.32.3,setuptools==75.1.0,urllib3==2.2.3,wheel==0.44.0
tests121: commands[0] /w/workspace/transportpce-tox-verify-transportpce-master/tests> ./launch_tests.sh 1.2.1
using environment variables from ./karaf121.env
pytest -q transportpce_tests/1.2.1/test01_portmapping.py ...
[100%]
50 passed in 446.84s (0:07:26)
pytest -q transportpce_tests/tapi/test02_full_topology.py
..FF.FFF.........F......................F [100%]
=================================== FAILURES ===================================
_____________ TransportPCEtesting.test_12_check_openroadm_topology _____________
self =
    def test_12_check_openroadm_topology(self):
        response = test_utils.get_ietf_network_request('openroadm-topology', 'config')
        self.assertEqual(response['status_code'], requests.codes.ok)
>       self.assertEqual(len(response['network'][0]['node']), 13, 'There should be 13 openroadm nodes')
E       AssertionError: 14 != 13 : There should be 13 openroadm nodes
transportpce_tests/tapi/test02_full_topology.py:272: AssertionError
=========================== short test summary info ============================
FAILED transportpce_tests/tapi/test02_full_topology.py::TransportPCEtesting::test_12_check_openroadm_topology
1 failed, 29 passed in 271.18s (0:04:31)
Ftests_tapi: exit 1 (718.59 seconds) /w/workspace/transportpce-tox-verify-transportpce-master/tests> ./launch_tests.sh tapi pid=31077
tests_tapi: FAIL ✖ in 12 minutes 12.6 seconds
tests71: install_deps> python -I -m pip install 'setuptools>=7.0' -r /w/workspace/transportpce-tox-verify-transportpce-master/tests/requirements.txt -r /w/workspace/transportpce-tox-verify-transportpce-master/tests/test-requirements.txt
FFFFtests71: freeze> python -m pip freeze --all
tests71: bcrypt==4.2.0,certifi==2024.8.30,cffi==1.17.1,charset-normalizer==3.3.2,cryptography==43.0.1,dict2xml==1.7.6,idna==3.10,iniconfig==2.0.0,lxml==5.3.0,netconf-client==3.1.1,packaging==24.1,paramiko==3.5.0,pip==24.2,pluggy==1.5.0,psutil==6.0.0,pycparser==2.22,PyNaCl==1.5.0,pytest==8.3.3,requests==2.32.3,setuptools==75.1.0,urllib3==2.2.3,wheel==0.44.0
tests71: commands[0] /w/workspace/transportpce-tox-verify-transportpce-master/tests> ./launch_tests.sh 7.1
using environment variables from ./karaf71.env
pytest -q transportpce_tests/7.1/test01_portmapping.py
FFFFF [100%]
=================================== FAILURES ===================================
_________ TransportPCEPortMappingTesting.test_02_rdm_device_connected __________
self =
    def test_02_rdm_device_connected(self):
        response = test_utils.check_device_connection("ROADMA01")
>       self.assertEqual(response['status_code'], requests.codes.ok)
E       AssertionError: 409 != 200
transportpce_tests/1.2.1/test01_portmapping.py:54: AssertionError
----------------------------- Captured stdout call -----------------------------
execution of test_02_rdm_device_connected
_________ TransportPCEPortMappingTesting.test_03_rdm_portmapping_info __________
self =
    def test_03_rdm_portmapping_info(self):
        response = test_utils.get_portmapping_node_attr("ROADMA01", "node-info", None)
>       self.assertEqual(response['status_code'], requests.codes.ok)
E       AssertionError: 409 != 200
transportpce_tests/1.2.1/test01_portmapping.py:60: AssertionError
----------------------------- Captured stdout call -----------------------------
execution of test_03_rdm_portmapping_info
_____ TransportPCEPortMappingTesting.test_04_rdm_portmapping_DEG1_TTP_TXRX _____
self =
    def test_04_rdm_portmapping_DEG1_TTP_TXRX(self):
        response = test_utils.get_portmapping_node_attr("ROADMA01", "mapping", "DEG1-TTP-TXRX")
>       self.assertEqual(response['status_code'], requests.codes.ok)
E       AssertionError: 409 != 200
transportpce_tests/1.2.1/test01_portmapping.py:73: AssertionError
----------------------------- Captured stdout call -----------------------------
execution of
test_04_rdm_portmapping_DEG1_TTP_TXRX _____ TransportPCEPortMappingTesting.test_05_rdm_portmapping_SRG1_PP7_TXRX _____ self = def test_05_rdm_portmapping_SRG1_PP7_TXRX(self): response = test_utils.get_portmapping_node_attr("ROADMA01", "mapping", "SRG1-PP7-TXRX") > self.assertEqual(response['status_code'], requests.codes.ok) E AssertionError: 409 != 200 transportpce_tests/1.2.1/test01_portmapping.py:82: AssertionError ----------------------------- Captured stdout call ----------------------------- execution of test_05_rdm_portmapping_SRG1_PP7_TXRX _____ TransportPCEPortMappingTesting.test_06_rdm_portmapping_SRG3_PP1_TXRX _____ self = def test_06_rdm_portmapping_SRG3_PP1_TXRX(self): response = test_utils.get_portmapping_node_attr("ROADMA01", "mapping", "SRG3-PP1-TXRX") > self.assertEqual(response['status_code'], requests.codes.ok) E AssertionError: 409 != 200 transportpce_tests/1.2.1/test01_portmapping.py:91: AssertionError ----------------------------- Captured stdout call ----------------------------- execution of test_06_rdm_portmapping_SRG3_PP1_TXRX _______ TransportPCEPortMappingTesting.test_11_xpdr_portmapping_NETWORK2 _______ self = def _new_conn(self) -> socket.socket: """Establish a socket connection and set nodelay settings on it. :return: New socket connection. """ try: > sock = connection.create_connection( (self._dns_host, self.port), self.timeout, source_address=self.source_address, socket_options=self.socket_options, ) ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:199: _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ ../.tox/tests121/lib/python3.11/site-packages/urllib3/util/connection.py:85: in create_connection raise err _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ address = ('localhost', 8182), timeout = 10, source_address = None socket_options = [(6, 1, 1)] def create_connection( address: tuple[str, int], timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, source_address: tuple[str, int] | None = None, socket_options: _TYPE_SOCKET_OPTIONS | None = None, ) -> socket.socket: """Connect to *address* and return the socket object. Convenience function. Connect to *address* (a 2-tuple ``(host, port)``) and return the socket object. Passing the optional *timeout* parameter will set the timeout on the socket instance before attempting to connect. If no *timeout* is supplied, the global default timeout setting returned by :func:`socket.getdefaulttimeout` is used. If *source_address* is set it must be a tuple of (host, port) for the socket to bind as a source address before making the connection. An host of '' or port 0 tells the OS to use the default. """ host, port = address if host.startswith("["): host = host.strip("[]") err = None # Using the value from allowed_gai_family() in the context of getaddrinfo lets # us select whether to work with IPv4 DNS records, IPv6 records, or both. # The original create_connection function always returns all records. family = allowed_gai_family() try: host.encode("idna") except UnicodeError: raise LocationParseError(f"'{host}', label empty or too long") from None for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM): af, socktype, proto, canonname, sa = res sock = None try: sock = socket.socket(af, socktype, proto) # If provided, set socket level options before connecting. 
_set_socket_options(sock, socket_options) if timeout is not _DEFAULT_TIMEOUT: sock.settimeout(timeout) if source_address: sock.bind(source_address) > sock.connect(sa) E ConnectionRefusedError: [Errno 111] Connection refused ../.tox/tests121/lib/python3.11/site-packages/urllib3/util/connection.py:73: ConnectionRefusedError The above exception was the direct cause of the following exception: self = method = 'GET' url = '/rests/data/transportpce-portmapping:network/nodes=XPDRA01/mapping=XPDR1-NETWORK2' body = None headers = {'User-Agent': 'python-requests/2.32.3', 'Accept-Encoding': 'gzip, deflate', 'Accept': 'application/json', 'Connection': 'keep-alive', 'Content-Type': 'application/json', 'Authorization': 'Basic YWRtaW46YWRtaW4='} retries = Retry(total=0, connect=None, read=False, redirect=None, status=None) redirect = False, assert_same_host = False timeout = Timeout(connect=10, read=10, total=None), pool_timeout = None release_conn = False, chunked = False, body_pos = None, preload_content = False decode_content = False, response_kw = {} parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/rests/data/transportpce-portmapping:network/nodes=XPDRA01/mapping=XPDR1-NETWORK2', query=None, fragment=None) destination_scheme = None, conn = None, release_this_conn = True http_tunnel_required = False, err = None, clean_exit = False def urlopen( # type: ignore[override] self, method: str, url: str, body: _TYPE_BODY | None = None, headers: typing.Mapping[str, str] | None = None, retries: Retry | bool | int | None = None, redirect: bool = True, assert_same_host: bool = True, timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, pool_timeout: int | None = None, release_conn: bool | None = None, chunked: bool = False, body_pos: _TYPE_BODY_POSITION | None = None, preload_content: bool = True, decode_content: bool = True, **response_kw: typing.Any, ) -> BaseHTTPResponse: """ Get a connection from the pool and perform an HTTP request. This is the lowest level call for making a request, so you'll need to specify all the raw details. .. note:: More commonly, it's appropriate to use a convenience method such as :meth:`request`. .. note:: `release_conn` will only behave as expected if `preload_content=False` because we want to make `preload_content=False` the default behaviour someday soon without breaking backwards compatibility. :param method: HTTP request method (such as GET, POST, PUT, etc.) :param url: The URL to perform the request on. :param body: Data to send in the request body, either :class:`str`, :class:`bytes`, an iterable of :class:`str`/:class:`bytes`, or a file-like object. :param headers: Dictionary of custom headers to send, such as User-Agent, If-None-Match, etc. If None, pool headers are used. If provided, these headers completely replace any pool-specific headers. :param retries: Configure the number of retries to allow before raising a :class:`~urllib3.exceptions.MaxRetryError` exception. If ``None`` (default) will retry 3 times, see ``Retry.DEFAULT``. Pass a :class:`~urllib3.util.retry.Retry` object for fine-grained control over different types of retries. Pass an integer number to retry connection errors that many times, but no other types of errors. Pass zero to never retry. If ``False``, then retries are disabled and any exception is raised immediately. Also, instead of raising a MaxRetryError on redirects, the redirect response will be returned. :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. 
:param redirect: If True, automatically handle redirects (status codes 301, 302, 303, 307, 308). Each redirect counts as a retry. Disabling retries will disable redirect, too. :param assert_same_host: If ``True``, will make sure that the host of the pool requests is consistent else will raise HostChangedError. When ``False``, you can use the pool on an HTTP proxy and request foreign hosts. :param timeout: If specified, overrides the default timeout for this one request. It may be a float (in seconds) or an instance of :class:`urllib3.util.Timeout`. :param pool_timeout: If set and the pool is set to block=True, then this method will block for ``pool_timeout`` seconds and raise EmptyPoolError if no connection is available within the time period. :param bool preload_content: If True, the response's body will be preloaded into memory. :param bool decode_content: If True, will attempt to decode the body based on the 'content-encoding' header. :param release_conn: If False, then the urlopen call will not release the connection back into the pool once a response is received (but will release if you read the entire contents of the response such as when `preload_content=True`). This is useful if you're not preloading the response's content immediately. You will need to call ``r.release_conn()`` on the response ``r`` to return the connection back into the pool. If None, it takes the value of ``preload_content`` which defaults to ``True``. :param bool chunked: If True, urllib3 will send the body using chunked transfer encoding. Otherwise, urllib3 will send the body using the standard content-length form. Defaults to False. :param int body_pos: Position to seek to in file-like body in the event of a retry or redirect. Typically this won't need to be set because urllib3 will auto-populate the value when needed. """ parsed_url = parse_url(url) destination_scheme = parsed_url.scheme if headers is None: headers = self.headers if not isinstance(retries, Retry): retries = Retry.from_int(retries, redirect=redirect, default=self.retries) if release_conn is None: release_conn = preload_content # Check host if assert_same_host and not self.is_same_host(url): raise HostChangedError(self, url, retries) # Ensure that the URL we're connecting to is properly encoded if url.startswith("/"): url = to_str(_encode_target(url)) else: url = to_str(parsed_url.url) conn = None # Track whether `conn` needs to be released before # returning/raising/recursing. Update this variable if necessary, and # leave `release_conn` constant throughout the function. That way, if # the function recurses, the original value of `release_conn` will be # passed down into the recursive call, and its value will be respected. # # See issue #651 [1] for details. # # [1] release_this_conn = release_conn http_tunnel_required = connection_requires_http_tunnel( self.proxy, self.proxy_config, destination_scheme ) # Merge the proxy headers. Only done when not using HTTP CONNECT. We # have to copy the headers dict so we can safely change it without those # changes being reflected in anyone else's copy. if not http_tunnel_required: headers = headers.copy() # type: ignore[attr-defined] headers.update(self.proxy_headers) # type: ignore[union-attr] # Must keep the exception bound to a separate variable or else Python 3 # complains about UnboundLocalError. err = None # Keep track of whether we cleanly exited the except block. This # ensures we do proper cleanup in finally. clean_exit = False # Rewind body position, if needed. 
Record current position # for future rewinds in the event of a redirect/retry. body_pos = set_file_position(body, body_pos) try: # Request a connection from the queue. timeout_obj = self._get_timeout(timeout) conn = self._get_conn(timeout=pool_timeout) conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] # Is this a closed/new connection that requires CONNECT tunnelling? if self.proxy is not None and http_tunnel_required and conn.is_closed: try: self._prepare_proxy(conn) except (BaseSSLError, OSError, SocketTimeout) as e: self._raise_timeout( err=e, url=self.proxy.url, timeout_value=conn.timeout ) raise # If we're going to release the connection in ``finally:``, then # the response doesn't need to know about the connection. Otherwise # it will also try to release it and we'll have a double-release # mess. response_conn = conn if not release_conn else None # Make the request on the HTTPConnection object > response = self._make_request( conn, method, url, timeout=timeout_obj, body=body, headers=headers, chunked=chunked, retries=retries, response_conn=response_conn, preload_content=preload_content, decode_content=decode_content, **response_kw, ) ../.tox/tests121/lib/python3.11/site-packages/urllib3/connectionpool.py:789: _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ ../.tox/tests121/lib/python3.11/site-packages/urllib3/connectionpool.py:495: in _make_request conn.request( ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:441: in request self.endheaders() /opt/pyenv/versions/3.11.7/lib/python3.11/http/client.py:1289: in endheaders self._send_output(message_body, encode_chunked=encode_chunked) /opt/pyenv/versions/3.11.7/lib/python3.11/http/client.py:1048: in _send_output self.send(msg) /opt/pyenv/versions/3.11.7/lib/python3.11/http/client.py:986: in send self.connect() ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:279: in connect self.sock = self._new_conn() _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ self = def _new_conn(self) -> socket.socket: """Establish a socket connection and set nodelay settings on it. :return: New socket connection. """ try: sock = connection.create_connection( (self._dns_host, self.port), self.timeout, source_address=self.source_address, socket_options=self.socket_options, ) except socket.gaierror as e: raise NameResolutionError(self.host, self, e) from e except SocketTimeout as e: raise ConnectTimeoutError( self, f"Connection to {self.host} timed out. (connect timeout={self.timeout})", ) from e except OSError as e: > raise NewConnectionError( self, f"Failed to establish a new connection: {e}" ) from e E urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:214: NewConnectionError The above exception was the direct cause of the following exception: self = request = , stream = False timeout = Timeout(connect=10, read=10, total=None), verify = True, cert = None proxies = OrderedDict() def send( self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None ): """Sends PreparedRequest object. Returns Response object. :param request: The :class:`PreparedRequest ` being sent. :param stream: (optional) Whether to stream the request content. :param timeout: (optional) How long to wait for the server to send data before giving up, as a float, or a :ref:`(connect timeout, read timeout) ` tuple. 
:type timeout: float or tuple or urllib3 Timeout object :param verify: (optional) Either a boolean, in which case it controls whether we verify the server's TLS certificate, or a string, in which case it must be a path to a CA bundle to use :param cert: (optional) Any user-provided SSL certificate to be trusted. :param proxies: (optional) The proxies dictionary to apply to the request. :rtype: requests.Response """ try: conn = self.get_connection_with_tls_context( request, verify, proxies=proxies, cert=cert ) except LocationValueError as e: raise InvalidURL(e, request=request) self.cert_verify(conn, request.url, verify, cert) url = self.request_url(request, proxies) self.add_headers( request, stream=stream, timeout=timeout, verify=verify, cert=cert, proxies=proxies, ) chunked = not (request.body is None or "Content-Length" in request.headers) if isinstance(timeout, tuple): try: connect, read = timeout timeout = TimeoutSauce(connect=connect, read=read) except ValueError: raise ValueError( f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " f"or a single float to set both timeouts to the same value." ) elif isinstance(timeout, TimeoutSauce): pass else: timeout = TimeoutSauce(connect=timeout, read=timeout) try: > resp = conn.urlopen( method=request.method, url=url, body=request.body, headers=request.headers, redirect=False, assert_same_host=False, preload_content=False, decode_content=False, retries=self.max_retries, timeout=timeout, chunked=chunked, ) ../.tox/tests121/lib/python3.11/site-packages/requests/adapters.py:667: _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ ../.tox/tests121/lib/python3.11/site-packages/urllib3/connectionpool.py:843: in urlopen retries = retries.increment( _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ self = Retry(total=0, connect=None, read=False, redirect=None, status=None) method = 'GET' url = '/rests/data/transportpce-portmapping:network/nodes=XPDRA01/mapping=XPDR1-NETWORK2' response = None error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused') _pool = _stacktrace = def increment( self, method: str | None = None, url: str | None = None, response: BaseHTTPResponse | None = None, error: Exception | None = None, _pool: ConnectionPool | None = None, _stacktrace: TracebackType | None = None, ) -> Self: """Return a new Retry object with incremented retry counters. :param response: A response object, or None, if the server did not return a response. :type response: :class:`~urllib3.response.BaseHTTPResponse` :param Exception error: An error encountered during the request, or None if the response was received successfully. :return: A new ``Retry`` object. """ if self.total is False and error: # Disabled, indicate to re-raise the error. raise reraise(type(error), error, _stacktrace) total = self.total if total is not None: total -= 1 connect = self.connect read = self.read redirect = self.redirect status_count = self.status other = self.other cause = "unknown" status = None redirect_location = None if error and self._is_connection_error(error): # Connect retry? if connect is False: raise reraise(type(error), error, _stacktrace) elif connect is not None: connect -= 1 elif error and self._is_read_error(error): # Read retry? if read is False or method is None or not self._is_method_retryable(method): raise reraise(type(error), error, _stacktrace) elif read is not None: read -= 1 elif error: # Other retry? 
if other is not None: other -= 1 elif response and response.get_redirect_location(): # Redirect retry? if redirect is not None: redirect -= 1 cause = "too many redirects" response_redirect_location = response.get_redirect_location() if response_redirect_location: redirect_location = response_redirect_location status = response.status else: # Incrementing because of a server error like a 500 in # status_forcelist and the given method is in the allowed_methods cause = ResponseError.GENERIC_ERROR if response and response.status: if status_count is not None: status_count -= 1 cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status) status = response.status history = self.history + ( RequestHistory(method, url, error, status, redirect_location), ) new_retry = self.new( total=total, connect=connect, read=read, redirect=redirect, status=status_count, other=other, history=history, ) if new_retry.is_exhausted(): reason = error or ResponseError(cause) > raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type] E urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=8182): Max retries exceeded with url: /rests/data/transportpce-portmapping:network/nodes=XPDRA01/mapping=XPDR1-NETWORK2 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) ../.tox/tests121/lib/python3.11/site-packages/urllib3/util/retry.py:519: MaxRetryError During handling of the above exception, another exception occurred: self = def test_11_xpdr_portmapping_NETWORK2(self): > response = test_utils.get_portmapping_node_attr("XPDRA01", "mapping", "XPDR1-NETWORK2") transportpce_tests/1.2.1/test01_portmapping.py:133: _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ transportpce_tests/common/test_utils.py:470: in get_portmapping_node_attr response = get_request(target_url) transportpce_tests/common/test_utils.py:116: in get_request return requests.request( ../.tox/tests121/lib/python3.11/site-packages/requests/api.py:59: in request return session.request(method=method, url=url, **kwargs) ../.tox/tests121/lib/python3.11/site-packages/requests/sessions.py:589: in request resp = self.send(prep, **send_kwargs) ../.tox/tests121/lib/python3.11/site-packages/requests/sessions.py:703: in send r = adapter.send(request, **kwargs) _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ self = request = , stream = False timeout = Timeout(connect=10, read=10, total=None), verify = True, cert = None proxies = OrderedDict() def send( self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None ): """Sends PreparedRequest object. Returns Response object. :param request: The :class:`PreparedRequest ` being sent. :param stream: (optional) Whether to stream the request content. :param timeout: (optional) How long to wait for the server to send data before giving up, as a float, or a :ref:`(connect timeout, read timeout) ` tuple. :type timeout: float or tuple or urllib3 Timeout object :param verify: (optional) Either a boolean, in which case it controls whether we verify the server's TLS certificate, or a string, in which case it must be a path to a CA bundle to use :param cert: (optional) Any user-provided SSL certificate to be trusted. :param proxies: (optional) The proxies dictionary to apply to the request. 
:rtype: requests.Response """ try: conn = self.get_connection_with_tls_context( request, verify, proxies=proxies, cert=cert ) except LocationValueError as e: raise InvalidURL(e, request=request) self.cert_verify(conn, request.url, verify, cert) url = self.request_url(request, proxies) self.add_headers( request, stream=stream, timeout=timeout, verify=verify, cert=cert, proxies=proxies, ) chunked = not (request.body is None or "Content-Length" in request.headers) if isinstance(timeout, tuple): try: connect, read = timeout timeout = TimeoutSauce(connect=connect, read=read) except ValueError: raise ValueError( f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " f"or a single float to set both timeouts to the same value." ) elif isinstance(timeout, TimeoutSauce): pass else: timeout = TimeoutSauce(connect=timeout, read=timeout) try: resp = conn.urlopen( method=request.method, url=url, body=request.body, headers=request.headers, redirect=False, assert_same_host=False, preload_content=False, decode_content=False, retries=self.max_retries, timeout=timeout, chunked=chunked, ) except (ProtocolError, OSError) as err: raise ConnectionError(err, request=request) except MaxRetryError as e: if isinstance(e.reason, ConnectTimeoutError): # TODO: Remove this in 3.0.0: see #2811 if not isinstance(e.reason, NewConnectionError): raise ConnectTimeout(e, request=request) if isinstance(e.reason, ResponseError): raise RetryError(e, request=request) if isinstance(e.reason, _ProxyError): raise ProxyError(e, request=request) if isinstance(e.reason, _SSLError): # This branch is for urllib3 v1.22 and later. raise SSLError(e, request=request) > raise ConnectionError(e, request=request) E requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=8182): Max retries exceeded with url: /rests/data/transportpce-portmapping:network/nodes=XPDRA01/mapping=XPDR1-NETWORK2 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) ../.tox/tests121/lib/python3.11/site-packages/requests/adapters.py:700: ConnectionError ----------------------------- Captured stdout call ----------------------------- execution of test_11_xpdr_portmapping_NETWORK2
_______ TransportPCEPortMappingTesting.test_12_xpdr_portmapping_CLIENT1 ________ self = def test_12_xpdr_portmapping_CLIENT1(self): > response = test_utils.get_portmapping_node_attr("XPDRA01", "mapping", "XPDR1-CLIENT1") transportpce_tests/1.2.1/test01_portmapping.py:144: _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ transportpce_tests/common/test_utils.py:470: in get_portmapping_node_attr response = get_request(target_url) transportpce_tests/common/test_utils.py:116: in get_request return requests.request( ../.tox/tests121/lib/python3.11/site-packages/requests/api.py:59: in request return session.request(method=method, url=url, **kwargs) ../.tox/tests121/lib/python3.11/site-packages/requests/sessions.py:589: in request resp = self.send(prep, **send_kwargs) ../.tox/tests121/lib/python3.11/site-packages/requests/sessions.py:703: in send r = adapter.send(request, **kwargs) E requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=8182): Max retries exceeded with url: /rests/data/transportpce-portmapping:network/nodes=XPDRA01/mapping=XPDR1-CLIENT1 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) ../.tox/tests121/lib/python3.11/site-packages/requests/adapters.py:700: ConnectionError ----------------------------- Captured stdout call ----------------------------- execution of test_12_xpdr_portmapping_CLIENT1
_______ TransportPCEPortMappingTesting.test_13_xpdr_portmapping_CLIENT2 ________ self = def test_13_xpdr_portmapping_CLIENT2(self): > response = test_utils.get_portmapping_node_attr("XPDRA01", "mapping", "XPDR1-CLIENT2") transportpce_tests/1.2.1/test01_portmapping.py:156: _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ transportpce_tests/common/test_utils.py:470: in get_portmapping_node_attr response = get_request(target_url) transportpce_tests/common/test_utils.py:116: in get_request return requests.request( ../.tox/tests121/lib/python3.11/site-packages/requests/api.py:59: in request return session.request(method=method, url=url, **kwargs) ../.tox/tests121/lib/python3.11/site-packages/requests/sessions.py:589: in request resp = self.send(prep, **send_kwargs) ../.tox/tests121/lib/python3.11/site-packages/requests/sessions.py:703: in send r = adapter.send(request, **kwargs) E requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=8182): Max retries exceeded with url: /rests/data/transportpce-portmapping:network/nodes=XPDRA01/mapping=XPDR1-CLIENT2 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) ../.tox/tests121/lib/python3.11/site-packages/requests/adapters.py:700: ConnectionError ----------------------------- Captured stdout call ----------------------------- execution of test_13_xpdr_portmapping_CLIENT2
_______ TransportPCEPortMappingTesting.test_14_xpdr_portmapping_CLIENT3 ________ self = def _new_conn(self) -> socket.socket: """Establish a socket connection and set nodelay settings on it. :return: New socket connection. """ try: > sock = connection.create_connection( (self._dns_host, self.port), self.timeout, source_address=self.source_address, socket_options=self.socket_options, ) ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:199: _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ ../.tox/tests121/lib/python3.11/site-packages/urllib3/util/connection.py:85: in create_connection raise err _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ address = ('localhost', 8182), timeout = 10, source_address = None socket_options = [(6, 1, 1)] def create_connection( address: tuple[str, int], timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, source_address: tuple[str, int] | None = None, socket_options: _TYPE_SOCKET_OPTIONS | None = None, ) -> socket.socket: """Connect to *address* and return the socket object. Convenience function. Connect to *address* (a 2-tuple ``(host, port)``) and return the socket object. Passing the optional *timeout* parameter will set the timeout on the socket instance before attempting to connect.
If no *timeout* is supplied, the global default timeout setting returned by :func:`socket.getdefaulttimeout` is used. If *source_address* is set it must be a tuple of (host, port) for the socket to bind as a source address before making the connection. An host of '' or port 0 tells the OS to use the default. """ host, port = address if host.startswith("["): host = host.strip("[]") err = None # Using the value from allowed_gai_family() in the context of getaddrinfo lets # us select whether to work with IPv4 DNS records, IPv6 records, or both. # The original create_connection function always returns all records. family = allowed_gai_family() try: host.encode("idna") except UnicodeError: raise LocationParseError(f"'{host}', label empty or too long") from None for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM): af, socktype, proto, canonname, sa = res sock = None try: sock = socket.socket(af, socktype, proto) # If provided, set socket level options before connecting. _set_socket_options(sock, socket_options) if timeout is not _DEFAULT_TIMEOUT: sock.settimeout(timeout) if source_address: sock.bind(source_address) > sock.connect(sa) E ConnectionRefusedError: [Errno 111] Connection refused ../.tox/tests121/lib/python3.11/site-packages/urllib3/util/connection.py:73: ConnectionRefusedError The above exception was the direct cause of the following exception: self = method = 'GET' url = '/rests/data/transportpce-portmapping:network/nodes=XPDRA01/mapping=XPDR1-CLIENT3' body = None headers = {'User-Agent': 'python-requests/2.32.3', 'Accept-Encoding': 'gzip, deflate', 'Accept': 'application/json', 'Connection': 'keep-alive', 'Content-Type': 'application/json', 'Authorization': 'Basic YWRtaW46YWRtaW4='} retries = Retry(total=0, connect=None, read=False, redirect=None, status=None) redirect = False, assert_same_host = False timeout = Timeout(connect=10, read=10, total=None), pool_timeout = None release_conn = False, chunked = False, body_pos = None, preload_content = False decode_content = False, response_kw = {} parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/rests/data/transportpce-portmapping:network/nodes=XPDRA01/mapping=XPDR1-CLIENT3', query=None, fragment=None) destination_scheme = None, conn = None, release_this_conn = True http_tunnel_required = False, err = None, clean_exit = False def urlopen( # type: ignore[override] self, method: str, url: str, body: _TYPE_BODY | None = None, headers: typing.Mapping[str, str] | None = None, retries: Retry | bool | int | None = None, redirect: bool = True, assert_same_host: bool = True, timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, pool_timeout: int | None = None, release_conn: bool | None = None, chunked: bool = False, body_pos: _TYPE_BODY_POSITION | None = None, preload_content: bool = True, decode_content: bool = True, **response_kw: typing.Any, ) -> BaseHTTPResponse: """ Get a connection from the pool and perform an HTTP request. This is the lowest level call for making a request, so you'll need to specify all the raw details. .. note:: More commonly, it's appropriate to use a convenience method such as :meth:`request`. .. note:: `release_conn` will only behave as expected if `preload_content=False` because we want to make `preload_content=False` the default behaviour someday soon without breaking backwards compatibility. :param method: HTTP request method (such as GET, POST, PUT, etc.) :param url: The URL to perform the request on. 
:param body: Data to send in the request body, either :class:`str`, :class:`bytes`, an iterable of :class:`str`/:class:`bytes`, or a file-like object. :param headers: Dictionary of custom headers to send, such as User-Agent, If-None-Match, etc. If None, pool headers are used. If provided, these headers completely replace any pool-specific headers. :param retries: Configure the number of retries to allow before raising a :class:`~urllib3.exceptions.MaxRetryError` exception. If ``None`` (default) will retry 3 times, see ``Retry.DEFAULT``. Pass a :class:`~urllib3.util.retry.Retry` object for fine-grained control over different types of retries. Pass an integer number to retry connection errors that many times, but no other types of errors. Pass zero to never retry. If ``False``, then retries are disabled and any exception is raised immediately. Also, instead of raising a MaxRetryError on redirects, the redirect response will be returned. :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. :param redirect: If True, automatically handle redirects (status codes 301, 302, 303, 307, 308). Each redirect counts as a retry. Disabling retries will disable redirect, too. :param assert_same_host: If ``True``, will make sure that the host of the pool requests is consistent else will raise HostChangedError. When ``False``, you can use the pool on an HTTP proxy and request foreign hosts. :param timeout: If specified, overrides the default timeout for this one request. It may be a float (in seconds) or an instance of :class:`urllib3.util.Timeout`. :param pool_timeout: If set and the pool is set to block=True, then this method will block for ``pool_timeout`` seconds and raise EmptyPoolError if no connection is available within the time period. :param bool preload_content: If True, the response's body will be preloaded into memory. :param bool decode_content: If True, will attempt to decode the body based on the 'content-encoding' header. :param release_conn: If False, then the urlopen call will not release the connection back into the pool once a response is received (but will release if you read the entire contents of the response such as when `preload_content=True`). This is useful if you're not preloading the response's content immediately. You will need to call ``r.release_conn()`` on the response ``r`` to return the connection back into the pool. If None, it takes the value of ``preload_content`` which defaults to ``True``. :param bool chunked: If True, urllib3 will send the body using chunked transfer encoding. Otherwise, urllib3 will send the body using the standard content-length form. Defaults to False. :param int body_pos: Position to seek to in file-like body in the event of a retry or redirect. Typically this won't need to be set because urllib3 will auto-populate the value when needed. """ parsed_url = parse_url(url) destination_scheme = parsed_url.scheme if headers is None: headers = self.headers if not isinstance(retries, Retry): retries = Retry.from_int(retries, redirect=redirect, default=self.retries) if release_conn is None: release_conn = preload_content # Check host if assert_same_host and not self.is_same_host(url): raise HostChangedError(self, url, retries) # Ensure that the URL we're connecting to is properly encoded if url.startswith("/"): url = to_str(_encode_target(url)) else: url = to_str(parsed_url.url) conn = None # Track whether `conn` needs to be released before # returning/raising/recursing. 
Update this variable if necessary, and # leave `release_conn` constant throughout the function. That way, if # the function recurses, the original value of `release_conn` will be # passed down into the recursive call, and its value will be respected. # # See issue #651 [1] for details. # # [1] release_this_conn = release_conn http_tunnel_required = connection_requires_http_tunnel( self.proxy, self.proxy_config, destination_scheme ) # Merge the proxy headers. Only done when not using HTTP CONNECT. We # have to copy the headers dict so we can safely change it without those # changes being reflected in anyone else's copy. if not http_tunnel_required: headers = headers.copy() # type: ignore[attr-defined] headers.update(self.proxy_headers) # type: ignore[union-attr] # Must keep the exception bound to a separate variable or else Python 3 # complains about UnboundLocalError. err = None # Keep track of whether we cleanly exited the except block. This # ensures we do proper cleanup in finally. clean_exit = False # Rewind body position, if needed. Record current position # for future rewinds in the event of a redirect/retry. body_pos = set_file_position(body, body_pos) try: # Request a connection from the queue. timeout_obj = self._get_timeout(timeout) conn = self._get_conn(timeout=pool_timeout) conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] # Is this a closed/new connection that requires CONNECT tunnelling? if self.proxy is not None and http_tunnel_required and conn.is_closed: try: self._prepare_proxy(conn) except (BaseSSLError, OSError, SocketTimeout) as e: self._raise_timeout( err=e, url=self.proxy.url, timeout_value=conn.timeout ) raise # If we're going to release the connection in ``finally:``, then # the response doesn't need to know about the connection. Otherwise # it will also try to release it and we'll have a double-release # mess. response_conn = conn if not release_conn else None # Make the request on the HTTPConnection object > response = self._make_request( conn, method, url, timeout=timeout_obj, body=body, headers=headers, chunked=chunked, retries=retries, response_conn=response_conn, preload_content=preload_content, decode_content=decode_content, **response_kw, ) ../.tox/tests121/lib/python3.11/site-packages/urllib3/connectionpool.py:789: _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ ../.tox/tests121/lib/python3.11/site-packages/urllib3/connectionpool.py:495: in _make_request conn.request( ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:441: in request self.endheaders() /opt/pyenv/versions/3.11.7/lib/python3.11/http/client.py:1289: in endheaders self._send_output(message_body, encode_chunked=encode_chunked) /opt/pyenv/versions/3.11.7/lib/python3.11/http/client.py:1048: in _send_output self.send(msg) /opt/pyenv/versions/3.11.7/lib/python3.11/http/client.py:986: in send self.connect() ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:279: in connect self.sock = self._new_conn() _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ self = def _new_conn(self) -> socket.socket: """Establish a socket connection and set nodelay settings on it. :return: New socket connection. 
""" try: sock = connection.create_connection( (self._dns_host, self.port), self.timeout, source_address=self.source_address, socket_options=self.socket_options, ) except socket.gaierror as e: raise NameResolutionError(self.host, self, e) from e except SocketTimeout as e: raise ConnectTimeoutError( self, f"Connection to {self.host} timed out. (connect timeout={self.timeout})", ) from e except OSError as e: > raise NewConnectionError( self, f"Failed to establish a new connection: {e}" ) from e E urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:214: NewConnectionError The above exception was the direct cause of the following exception: self = request = , stream = False timeout = Timeout(connect=10, read=10, total=None), verify = True, cert = None proxies = OrderedDict() def send( self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None ): """Sends PreparedRequest object. Returns Response object. :param request: The :class:`PreparedRequest ` being sent. :param stream: (optional) Whether to stream the request content. :param timeout: (optional) How long to wait for the server to send data before giving up, as a float, or a :ref:`(connect timeout, read timeout) ` tuple. :type timeout: float or tuple or urllib3 Timeout object :param verify: (optional) Either a boolean, in which case it controls whether we verify the server's TLS certificate, or a string, in which case it must be a path to a CA bundle to use :param cert: (optional) Any user-provided SSL certificate to be trusted. :param proxies: (optional) The proxies dictionary to apply to the request. :rtype: requests.Response """ try: conn = self.get_connection_with_tls_context( request, verify, proxies=proxies, cert=cert ) except LocationValueError as e: raise InvalidURL(e, request=request) self.cert_verify(conn, request.url, verify, cert) url = self.request_url(request, proxies) self.add_headers( request, stream=stream, timeout=timeout, verify=verify, cert=cert, proxies=proxies, ) chunked = not (request.body is None or "Content-Length" in request.headers) if isinstance(timeout, tuple): try: connect, read = timeout timeout = TimeoutSauce(connect=connect, read=read) except ValueError: raise ValueError( f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " f"or a single float to set both timeouts to the same value." 
) elif isinstance(timeout, TimeoutSauce): pass else: timeout = TimeoutSauce(connect=timeout, read=timeout) try: > resp = conn.urlopen( method=request.method, url=url, body=request.body, headers=request.headers, redirect=False, assert_same_host=False, preload_content=False, decode_content=False, retries=self.max_retries, timeout=timeout, chunked=chunked, ) ../.tox/tests121/lib/python3.11/site-packages/requests/adapters.py:667: _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ ../.tox/tests121/lib/python3.11/site-packages/urllib3/connectionpool.py:843: in urlopen retries = retries.increment( _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ self = Retry(total=0, connect=None, read=False, redirect=None, status=None) method = 'GET' url = '/rests/data/transportpce-portmapping:network/nodes=XPDRA01/mapping=XPDR1-CLIENT3' response = None error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused') _pool = _stacktrace = def increment( self, method: str | None = None, url: str | None = None, response: BaseHTTPResponse | None = None, error: Exception | None = None, _pool: ConnectionPool | None = None, _stacktrace: TracebackType | None = None, ) -> Self: """Return a new Retry object with incremented retry counters. :param response: A response object, or None, if the server did not return a response. :type response: :class:`~urllib3.response.BaseHTTPResponse` :param Exception error: An error encountered during the request, or None if the response was received successfully. :return: A new ``Retry`` object. """ if self.total is False and error: # Disabled, indicate to re-raise the error. raise reraise(type(error), error, _stacktrace) total = self.total if total is not None: total -= 1 connect = self.connect read = self.read redirect = self.redirect status_count = self.status other = self.other cause = "unknown" status = None redirect_location = None if error and self._is_connection_error(error): # Connect retry? if connect is False: raise reraise(type(error), error, _stacktrace) elif connect is not None: connect -= 1 elif error and self._is_read_error(error): # Read retry? if read is False or method is None or not self._is_method_retryable(method): raise reraise(type(error), error, _stacktrace) elif read is not None: read -= 1 elif error: # Other retry? if other is not None: other -= 1 elif response and response.get_redirect_location(): # Redirect retry? 
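The Retry.increment code above is what turns a single refused connection into MaxRetryError: the adapter's policy in the log is Retry(total=0, connect=None, read=False, ...), so the first failure already exhausts the counters. A minimal illustration using the public Retry class (calling increment() by hand is only to show the mechanics, not something the tests do):

from urllib3.util.retry import Retry
from urllib3.exceptions import MaxRetryError

retry = Retry(total=0, read=False)   # the policy shown in the traceback
print(retry.is_exhausted())          # False: nothing has failed yet

try:
    # One failed attempt decrements total to -1, which is_exhausted() treats as exhausted,
    # so increment() raises MaxRetryError instead of returning a new Retry object.
    retry.increment(method="GET",
                    url="/rests/data/transportpce-portmapping:network/nodes=XPDRA01",
                    error=OSError(111, "Connection refused"))
except MaxRetryError as exc:
    print("gave up after first attempt:", exc.reason)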
if redirect is not None: redirect -= 1 cause = "too many redirects" response_redirect_location = response.get_redirect_location() if response_redirect_location: redirect_location = response_redirect_location status = response.status else: # Incrementing because of a server error like a 500 in # status_forcelist and the given method is in the allowed_methods cause = ResponseError.GENERIC_ERROR if response and response.status: if status_count is not None: status_count -= 1 cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status) status = response.status history = self.history + ( RequestHistory(method, url, error, status, redirect_location), ) new_retry = self.new( total=total, connect=connect, read=read, redirect=redirect, status=status_count, other=other, history=history, ) if new_retry.is_exhausted(): reason = error or ResponseError(cause) > raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type] E urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=8182): Max retries exceeded with url: /rests/data/transportpce-portmapping:network/nodes=XPDRA01/mapping=XPDR1-CLIENT3 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) ../.tox/tests121/lib/python3.11/site-packages/urllib3/util/retry.py:519: MaxRetryError During handling of the above exception, another exception occurred: self = def test_14_xpdr_portmapping_CLIENT3(self): > response = test_utils.get_portmapping_node_attr("XPDRA01", "mapping", "XPDR1-CLIENT3") transportpce_tests/1.2.1/test01_portmapping.py:168: _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ transportpce_tests/common/test_utils.py:470: in get_portmapping_node_attr response = get_request(target_url) transportpce_tests/common/test_utils.py:116: in get_request return requests.request( ../.tox/tests121/lib/python3.11/site-packages/requests/api.py:59: in request return session.request(method=method, url=url, **kwargs) ../.tox/tests121/lib/python3.11/site-packages/requests/sessions.py:589: in request resp = self.send(prep, **send_kwargs) ../.tox/tests121/lib/python3.11/site-packages/requests/sessions.py:703: in send r = adapter.send(request, **kwargs) _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ self = request = , stream = False timeout = Timeout(connect=10, read=10, total=None), verify = True, cert = None proxies = OrderedDict() def send( self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None ): """Sends PreparedRequest object. Returns Response object. :param request: The :class:`PreparedRequest ` being sent. :param stream: (optional) Whether to stream the request content. :param timeout: (optional) How long to wait for the server to send data before giving up, as a float, or a :ref:`(connect timeout, read timeout) ` tuple. :type timeout: float or tuple or urllib3 Timeout object :param verify: (optional) Either a boolean, in which case it controls whether we verify the server's TLS certificate, or a string, in which case it must be a path to a CA bundle to use :param cert: (optional) Any user-provided SSL certificate to be trusted. :param proxies: (optional) The proxies dictionary to apply to the request. 
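The failing call chain above goes through transportpce_tests/common/test_utils.py (get_portmapping_node_attr -> get_request -> requests.request). A hedged sketch of what such a wrapper boils down to, built only from the parameters visible in this traceback (RESTCONF base URL on localhost:8182, admin/admin Basic auth, JSON headers, 10 s connect/read timeouts); the function name and structure are illustrative, the real helper's URL construction lives in test_utils.py and may differ:

import requests

RESTCONF_BASE = "http://localhost:8182/rests/data"

def get_portmapping_mapping(node_id, mapping_key):
    # Mirrors the GET shown in the traceback; not the actual test_utils implementation.
    url = f"{RESTCONF_BASE}/transportpce-portmapping:network/nodes={node_id}/mapping={mapping_key}"
    return requests.request(
        "GET",
        url,
        headers={"Content-Type": "application/json", "Accept": "application/json"},
        auth=("admin", "admin"),
        timeout=(10, 10),   # becomes Timeout(connect=10, read=10) inside the adapter
    )

try:
    response = get_portmapping_mapping("XPDRA01", "XPDR1-CLIENT3")
    print(response.status_code)
except requests.exceptions.ConnectionError as exc:
    # The exception each of these tests ends with while nothing listens on port 8182.
    print("controller unreachable:", exc)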
:rtype: requests.Response """ try: conn = self.get_connection_with_tls_context( request, verify, proxies=proxies, cert=cert ) except LocationValueError as e: raise InvalidURL(e, request=request) self.cert_verify(conn, request.url, verify, cert) url = self.request_url(request, proxies) self.add_headers( request, stream=stream, timeout=timeout, verify=verify, cert=cert, proxies=proxies, ) chunked = not (request.body is None or "Content-Length" in request.headers) if isinstance(timeout, tuple): try: connect, read = timeout timeout = TimeoutSauce(connect=connect, read=read) except ValueError: raise ValueError( f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " f"or a single float to set both timeouts to the same value." ) elif isinstance(timeout, TimeoutSauce): pass else: timeout = TimeoutSauce(connect=timeout, read=timeout) try: resp = conn.urlopen( method=request.method, url=url, body=request.body, headers=request.headers, redirect=False, assert_same_host=False, preload_content=False, decode_content=False, retries=self.max_retries, timeout=timeout, chunked=chunked, ) except (ProtocolError, OSError) as err: raise ConnectionError(err, request=request) except MaxRetryError as e: if isinstance(e.reason, ConnectTimeoutError): # TODO: Remove this in 3.0.0: see #2811 if not isinstance(e.reason, NewConnectionError): raise ConnectTimeout(e, request=request) if isinstance(e.reason, ResponseError): raise RetryError(e, request=request) if isinstance(e.reason, _ProxyError): raise ProxyError(e, request=request) if isinstance(e.reason, _SSLError): # This branch is for urllib3 v1.22 and later. raise SSLError(e, request=request) > raise ConnectionError(e, request=request) E requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=8182): Max retries exceeded with url: /rests/data/transportpce-portmapping:network/nodes=XPDRA01/mapping=XPDR1-CLIENT3 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) ../.tox/tests121/lib/python3.11/site-packages/requests/adapters.py:700: ConnectionError ----------------------------- Captured stdout call ----------------------------- execution of test_14_xpdr_portmapping_CLIENT3 _______ TransportPCEPortMappingTesting.test_15_xpdr_portmapping_CLIENT4 ________ self = def _new_conn(self) -> socket.socket: """Establish a socket connection and set nodelay settings on it. :return: New socket connection. """ try: > sock = connection.create_connection( (self._dns_host, self.port), self.timeout, source_address=self.source_address, socket_options=self.socket_options, ) ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:199: _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ ../.tox/tests121/lib/python3.11/site-packages/urllib3/util/connection.py:85: in create_connection raise err _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ address = ('localhost', 8182), timeout = 10, source_address = None socket_options = [(6, 1, 1)] def create_connection( address: tuple[str, int], timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, source_address: tuple[str, int] | None = None, socket_options: _TYPE_SOCKET_OPTIONS | None = None, ) -> socket.socket: """Connect to *address* and return the socket object. Convenience function. Connect to *address* (a 2-tuple ``(host, port)``) and return the socket object. Passing the optional *timeout* parameter will set the timeout on the socket instance before attempting to connect. 
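The timeout handling quoted above is why the log prints Timeout(connect=10, read=10, total=None): a (connect, read) tuple from the caller is normalized into a urllib3 Timeout object. A short sketch of the resulting object (values match the log; nothing here is TransportPCE-specific):

from urllib3.util import Timeout

# requests turns a (connect, read) tuple into this object; a single float would set
# both fields to the same value, matching the "else" branch quoted above.
t = Timeout(connect=10, read=10)
print(t.connect_timeout, t.read_timeout, t.total)   # -> 10 10 None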
If no *timeout* is supplied, the global default timeout setting returned by :func:`socket.getdefaulttimeout` is used. If *source_address* is set it must be a tuple of (host, port) for the socket to bind as a source address before making the connection. An host of '' or port 0 tells the OS to use the default. """ host, port = address if host.startswith("["): host = host.strip("[]") err = None # Using the value from allowed_gai_family() in the context of getaddrinfo lets # us select whether to work with IPv4 DNS records, IPv6 records, or both. # The original create_connection function always returns all records. family = allowed_gai_family() try: host.encode("idna") except UnicodeError: raise LocationParseError(f"'{host}', label empty or too long") from None for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM): af, socktype, proto, canonname, sa = res sock = None try: sock = socket.socket(af, socktype, proto) # If provided, set socket level options before connecting. _set_socket_options(sock, socket_options) if timeout is not _DEFAULT_TIMEOUT: sock.settimeout(timeout) if source_address: sock.bind(source_address) > sock.connect(sa) E ConnectionRefusedError: [Errno 111] Connection refused ../.tox/tests121/lib/python3.11/site-packages/urllib3/util/connection.py:73: ConnectionRefusedError The above exception was the direct cause of the following exception: self = method = 'GET' url = '/rests/data/transportpce-portmapping:network/nodes=XPDRA01/mapping=XPDR1-CLIENT4' body = None headers = {'User-Agent': 'python-requests/2.32.3', 'Accept-Encoding': 'gzip, deflate', 'Accept': 'application/json', 'Connection': 'keep-alive', 'Content-Type': 'application/json', 'Authorization': 'Basic YWRtaW46YWRtaW4='} retries = Retry(total=0, connect=None, read=False, redirect=None, status=None) redirect = False, assert_same_host = False timeout = Timeout(connect=10, read=10, total=None), pool_timeout = None release_conn = False, chunked = False, body_pos = None, preload_content = False decode_content = False, response_kw = {} parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/rests/data/transportpce-portmapping:network/nodes=XPDRA01/mapping=XPDR1-CLIENT4', query=None, fragment=None) destination_scheme = None, conn = None, release_this_conn = True http_tunnel_required = False, err = None, clean_exit = False def urlopen( # type: ignore[override] self, method: str, url: str, body: _TYPE_BODY | None = None, headers: typing.Mapping[str, str] | None = None, retries: Retry | bool | int | None = None, redirect: bool = True, assert_same_host: bool = True, timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, pool_timeout: int | None = None, release_conn: bool | None = None, chunked: bool = False, body_pos: _TYPE_BODY_POSITION | None = None, preload_content: bool = True, decode_content: bool = True, **response_kw: typing.Any, ) -> BaseHTTPResponse: """ Get a connection from the pool and perform an HTTP request. This is the lowest level call for making a request, so you'll need to specify all the raw details. .. note:: More commonly, it's appropriate to use a convenience method such as :meth:`request`. .. note:: `release_conn` will only behave as expected if `preload_content=False` because we want to make `preload_content=False` the default behaviour someday soon without breaking backwards compatibility. :param method: HTTP request method (such as GET, POST, PUT, etc.) :param url: The URL to perform the request on. 
:param body: Data to send in the request body, either :class:`str`, :class:`bytes`, an iterable of :class:`str`/:class:`bytes`, or a file-like object. :param headers: Dictionary of custom headers to send, such as User-Agent, If-None-Match, etc. If None, pool headers are used. If provided, these headers completely replace any pool-specific headers. :param retries: Configure the number of retries to allow before raising a :class:`~urllib3.exceptions.MaxRetryError` exception. If ``None`` (default) will retry 3 times, see ``Retry.DEFAULT``. Pass a :class:`~urllib3.util.retry.Retry` object for fine-grained control over different types of retries. Pass an integer number to retry connection errors that many times, but no other types of errors. Pass zero to never retry. If ``False``, then retries are disabled and any exception is raised immediately. Also, instead of raising a MaxRetryError on redirects, the redirect response will be returned. :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. :param redirect: If True, automatically handle redirects (status codes 301, 302, 303, 307, 308). Each redirect counts as a retry. Disabling retries will disable redirect, too. :param assert_same_host: If ``True``, will make sure that the host of the pool requests is consistent else will raise HostChangedError. When ``False``, you can use the pool on an HTTP proxy and request foreign hosts. :param timeout: If specified, overrides the default timeout for this one request. It may be a float (in seconds) or an instance of :class:`urllib3.util.Timeout`. :param pool_timeout: If set and the pool is set to block=True, then this method will block for ``pool_timeout`` seconds and raise EmptyPoolError if no connection is available within the time period. :param bool preload_content: If True, the response's body will be preloaded into memory. :param bool decode_content: If True, will attempt to decode the body based on the 'content-encoding' header. :param release_conn: If False, then the urlopen call will not release the connection back into the pool once a response is received (but will release if you read the entire contents of the response such as when `preload_content=True`). This is useful if you're not preloading the response's content immediately. You will need to call ``r.release_conn()`` on the response ``r`` to return the connection back into the pool. If None, it takes the value of ``preload_content`` which defaults to ``True``. :param bool chunked: If True, urllib3 will send the body using chunked transfer encoding. Otherwise, urllib3 will send the body using the standard content-length form. Defaults to False. :param int body_pos: Position to seek to in file-like body in the event of a retry or redirect. Typically this won't need to be set because urllib3 will auto-populate the value when needed. """ parsed_url = parse_url(url) destination_scheme = parsed_url.scheme if headers is None: headers = self.headers if not isinstance(retries, Retry): retries = Retry.from_int(retries, redirect=redirect, default=self.retries) if release_conn is None: release_conn = preload_content # Check host if assert_same_host and not self.is_same_host(url): raise HostChangedError(self, url, retries) # Ensure that the URL we're connecting to is properly encoded if url.startswith("/"): url = to_str(_encode_target(url)) else: url = to_str(parsed_url.url) conn = None # Track whether `conn` needs to be released before # returning/raising/recursing. 
Update this variable if necessary, and # leave `release_conn` constant throughout the function. That way, if # the function recurses, the original value of `release_conn` will be # passed down into the recursive call, and its value will be respected. # # See issue #651 [1] for details. # # [1] release_this_conn = release_conn http_tunnel_required = connection_requires_http_tunnel( self.proxy, self.proxy_config, destination_scheme ) # Merge the proxy headers. Only done when not using HTTP CONNECT. We # have to copy the headers dict so we can safely change it without those # changes being reflected in anyone else's copy. if not http_tunnel_required: headers = headers.copy() # type: ignore[attr-defined] headers.update(self.proxy_headers) # type: ignore[union-attr] # Must keep the exception bound to a separate variable or else Python 3 # complains about UnboundLocalError. err = None # Keep track of whether we cleanly exited the except block. This # ensures we do proper cleanup in finally. clean_exit = False # Rewind body position, if needed. Record current position # for future rewinds in the event of a redirect/retry. body_pos = set_file_position(body, body_pos) try: # Request a connection from the queue. timeout_obj = self._get_timeout(timeout) conn = self._get_conn(timeout=pool_timeout) conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] # Is this a closed/new connection that requires CONNECT tunnelling? if self.proxy is not None and http_tunnel_required and conn.is_closed: try: self._prepare_proxy(conn) except (BaseSSLError, OSError, SocketTimeout) as e: self._raise_timeout( err=e, url=self.proxy.url, timeout_value=conn.timeout ) raise # If we're going to release the connection in ``finally:``, then # the response doesn't need to know about the connection. Otherwise # it will also try to release it and we'll have a double-release # mess. response_conn = conn if not release_conn else None # Make the request on the HTTPConnection object > response = self._make_request( conn, method, url, timeout=timeout_obj, body=body, headers=headers, chunked=chunked, retries=retries, response_conn=response_conn, preload_content=preload_content, decode_content=decode_content, **response_kw, ) ../.tox/tests121/lib/python3.11/site-packages/urllib3/connectionpool.py:789: _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ ../.tox/tests121/lib/python3.11/site-packages/urllib3/connectionpool.py:495: in _make_request conn.request( ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:441: in request self.endheaders() /opt/pyenv/versions/3.11.7/lib/python3.11/http/client.py:1289: in endheaders self._send_output(message_body, encode_chunked=encode_chunked) /opt/pyenv/versions/3.11.7/lib/python3.11/http/client.py:1048: in _send_output self.send(msg) /opt/pyenv/versions/3.11.7/lib/python3.11/http/client.py:986: in send self.connect() ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:279: in connect self.sock = self._new_conn() _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ self = def _new_conn(self) -> socket.socket: """Establish a socket connection and set nodelay settings on it. :return: New socket connection. 
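The repeated "The above exception was the direct cause of the following exception" banners in this log come from PEP 3134 exception chaining: each layer re-raises with "raise ... from e", exactly as _new_conn does right below. A generic sketch of that pattern (the wrapper class here is hypothetical, not a urllib3 type):

import socket

class TransportConnectError(Exception):
    """Hypothetical wrapper used only to illustrate 'raise ... from e'."""

def connect_or_wrap(host, port, timeout=10):
    try:
        return socket.create_connection((host, port), timeout=timeout)
    except OSError as e:
        # __cause__ is set to the original OSError, which is what produces the
        # "direct cause" section when the outer exception is printed.
        raise TransportConnectError(f"Failed to establish a new connection: {e}") from e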
""" try: sock = connection.create_connection( (self._dns_host, self.port), self.timeout, source_address=self.source_address, socket_options=self.socket_options, ) except socket.gaierror as e: raise NameResolutionError(self.host, self, e) from e except SocketTimeout as e: raise ConnectTimeoutError( self, f"Connection to {self.host} timed out. (connect timeout={self.timeout})", ) from e except OSError as e: > raise NewConnectionError( self, f"Failed to establish a new connection: {e}" ) from e E urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:214: NewConnectionError The above exception was the direct cause of the following exception: self = request = , stream = False timeout = Timeout(connect=10, read=10, total=None), verify = True, cert = None proxies = OrderedDict() def send( self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None ): """Sends PreparedRequest object. Returns Response object. :param request: The :class:`PreparedRequest ` being sent. :param stream: (optional) Whether to stream the request content. :param timeout: (optional) How long to wait for the server to send data before giving up, as a float, or a :ref:`(connect timeout, read timeout) ` tuple. :type timeout: float or tuple or urllib3 Timeout object :param verify: (optional) Either a boolean, in which case it controls whether we verify the server's TLS certificate, or a string, in which case it must be a path to a CA bundle to use :param cert: (optional) Any user-provided SSL certificate to be trusted. :param proxies: (optional) The proxies dictionary to apply to the request. :rtype: requests.Response """ try: conn = self.get_connection_with_tls_context( request, verify, proxies=proxies, cert=cert ) except LocationValueError as e: raise InvalidURL(e, request=request) self.cert_verify(conn, request.url, verify, cert) url = self.request_url(request, proxies) self.add_headers( request, stream=stream, timeout=timeout, verify=verify, cert=cert, proxies=proxies, ) chunked = not (request.body is None or "Content-Length" in request.headers) if isinstance(timeout, tuple): try: connect, read = timeout timeout = TimeoutSauce(connect=connect, read=read) except ValueError: raise ValueError( f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " f"or a single float to set both timeouts to the same value." 
) elif isinstance(timeout, TimeoutSauce): pass else: timeout = TimeoutSauce(connect=timeout, read=timeout) try: > resp = conn.urlopen( method=request.method, url=url, body=request.body, headers=request.headers, redirect=False, assert_same_host=False, preload_content=False, decode_content=False, retries=self.max_retries, timeout=timeout, chunked=chunked, ) ../.tox/tests121/lib/python3.11/site-packages/requests/adapters.py:667: _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ ../.tox/tests121/lib/python3.11/site-packages/urllib3/connectionpool.py:843: in urlopen retries = retries.increment( _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ self = Retry(total=0, connect=None, read=False, redirect=None, status=None) method = 'GET' url = '/rests/data/transportpce-portmapping:network/nodes=XPDRA01/mapping=XPDR1-CLIENT4' response = None error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused') _pool = _stacktrace = def increment( self, method: str | None = None, url: str | None = None, response: BaseHTTPResponse | None = None, error: Exception | None = None, _pool: ConnectionPool | None = None, _stacktrace: TracebackType | None = None, ) -> Self: """Return a new Retry object with incremented retry counters. :param response: A response object, or None, if the server did not return a response. :type response: :class:`~urllib3.response.BaseHTTPResponse` :param Exception error: An error encountered during the request, or None if the response was received successfully. :return: A new ``Retry`` object. """ if self.total is False and error: # Disabled, indicate to re-raise the error. raise reraise(type(error), error, _stacktrace) total = self.total if total is not None: total -= 1 connect = self.connect read = self.read redirect = self.redirect status_count = self.status other = self.other cause = "unknown" status = None redirect_location = None if error and self._is_connection_error(error): # Connect retry? if connect is False: raise reraise(type(error), error, _stacktrace) elif connect is not None: connect -= 1 elif error and self._is_read_error(error): # Read retry? if read is False or method is None or not self._is_method_retryable(method): raise reraise(type(error), error, _stacktrace) elif read is not None: read -= 1 elif error: # Other retry? if other is not None: other -= 1 elif response and response.get_redirect_location(): # Redirect retry? 
if redirect is not None: redirect -= 1 cause = "too many redirects" response_redirect_location = response.get_redirect_location() if response_redirect_location: redirect_location = response_redirect_location status = response.status else: # Incrementing because of a server error like a 500 in # status_forcelist and the given method is in the allowed_methods cause = ResponseError.GENERIC_ERROR if response and response.status: if status_count is not None: status_count -= 1 cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status) status = response.status history = self.history + ( RequestHistory(method, url, error, status, redirect_location), ) new_retry = self.new( total=total, connect=connect, read=read, redirect=redirect, status=status_count, other=other, history=history, ) if new_retry.is_exhausted(): reason = error or ResponseError(cause) > raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type] E urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=8182): Max retries exceeded with url: /rests/data/transportpce-portmapping:network/nodes=XPDRA01/mapping=XPDR1-CLIENT4 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) ../.tox/tests121/lib/python3.11/site-packages/urllib3/util/retry.py:519: MaxRetryError During handling of the above exception, another exception occurred: self = def test_15_xpdr_portmapping_CLIENT4(self): > response = test_utils.get_portmapping_node_attr("XPDRA01", "mapping", "XPDR1-CLIENT4") transportpce_tests/1.2.1/test01_portmapping.py:180: _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ transportpce_tests/common/test_utils.py:470: in get_portmapping_node_attr response = get_request(target_url) transportpce_tests/common/test_utils.py:116: in get_request return requests.request( ../.tox/tests121/lib/python3.11/site-packages/requests/api.py:59: in request return session.request(method=method, url=url, **kwargs) ../.tox/tests121/lib/python3.11/site-packages/requests/sessions.py:589: in request resp = self.send(prep, **send_kwargs) ../.tox/tests121/lib/python3.11/site-packages/requests/sessions.py:703: in send r = adapter.send(request, **kwargs) _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ self = request = , stream = False timeout = Timeout(connect=10, read=10, total=None), verify = True, cert = None proxies = OrderedDict() def send( self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None ): """Sends PreparedRequest object. Returns Response object. :param request: The :class:`PreparedRequest ` being sent. :param stream: (optional) Whether to stream the request content. :param timeout: (optional) How long to wait for the server to send data before giving up, as a float, or a :ref:`(connect timeout, read timeout) ` tuple. :type timeout: float or tuple or urllib3 Timeout object :param verify: (optional) Either a boolean, in which case it controls whether we verify the server's TLS certificate, or a string, in which case it must be a path to a CA bundle to use :param cert: (optional) Any user-provided SSL certificate to be trusted. :param proxies: (optional) The proxies dictionary to apply to the request. 
:rtype: requests.Response """ try: conn = self.get_connection_with_tls_context( request, verify, proxies=proxies, cert=cert ) except LocationValueError as e: raise InvalidURL(e, request=request) self.cert_verify(conn, request.url, verify, cert) url = self.request_url(request, proxies) self.add_headers( request, stream=stream, timeout=timeout, verify=verify, cert=cert, proxies=proxies, ) chunked = not (request.body is None or "Content-Length" in request.headers) if isinstance(timeout, tuple): try: connect, read = timeout timeout = TimeoutSauce(connect=connect, read=read) except ValueError: raise ValueError( f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " f"or a single float to set both timeouts to the same value." ) elif isinstance(timeout, TimeoutSauce): pass else: timeout = TimeoutSauce(connect=timeout, read=timeout) try: resp = conn.urlopen( method=request.method, url=url, body=request.body, headers=request.headers, redirect=False, assert_same_host=False, preload_content=False, decode_content=False, retries=self.max_retries, timeout=timeout, chunked=chunked, ) except (ProtocolError, OSError) as err: raise ConnectionError(err, request=request) except MaxRetryError as e: if isinstance(e.reason, ConnectTimeoutError): # TODO: Remove this in 3.0.0: see #2811 if not isinstance(e.reason, NewConnectionError): raise ConnectTimeout(e, request=request) if isinstance(e.reason, ResponseError): raise RetryError(e, request=request) if isinstance(e.reason, _ProxyError): raise ProxyError(e, request=request) if isinstance(e.reason, _SSLError): # This branch is for urllib3 v1.22 and later. raise SSLError(e, request=request) > raise ConnectionError(e, request=request) E requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=8182): Max retries exceeded with url: /rests/data/transportpce-portmapping:network/nodes=XPDRA01/mapping=XPDR1-CLIENT4 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) ../.tox/tests121/lib/python3.11/site-packages/requests/adapters.py:700: ConnectionError ----------------------------- Captured stdout call ----------------------------- execution of test_15_xpdr_portmapping_CLIENT4 _______ TransportPCEPortMappingTesting.test_16_xpdr_device_disconnection _______ self = def _new_conn(self) -> socket.socket: """Establish a socket connection and set nodelay settings on it. :return: New socket connection. """ try: > sock = connection.create_connection( (self._dns_host, self.port), self.timeout, source_address=self.source_address, socket_options=self.socket_options, ) ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:199: _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ ../.tox/tests121/lib/python3.11/site-packages/urllib3/util/connection.py:85: in create_connection raise err _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ address = ('localhost', 8182), timeout = 10, source_address = None socket_options = [(6, 1, 1)] def create_connection( address: tuple[str, int], timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, source_address: tuple[str, int] | None = None, socket_options: _TYPE_SOCKET_OPTIONS | None = None, ) -> socket.socket: """Connect to *address* and return the socket object. Convenience function. Connect to *address* (a 2-tuple ``(host, port)``) and return the socket object. Passing the optional *timeout* parameter will set the timeout on the socket instance before attempting to connect. 
If no *timeout* is supplied, the global default timeout setting returned by :func:`socket.getdefaulttimeout` is used. If *source_address* is set it must be a tuple of (host, port) for the socket to bind as a source address before making the connection. An host of '' or port 0 tells the OS to use the default. """ host, port = address if host.startswith("["): host = host.strip("[]") err = None # Using the value from allowed_gai_family() in the context of getaddrinfo lets # us select whether to work with IPv4 DNS records, IPv6 records, or both. # The original create_connection function always returns all records. family = allowed_gai_family() try: host.encode("idna") except UnicodeError: raise LocationParseError(f"'{host}', label empty or too long") from None for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM): af, socktype, proto, canonname, sa = res sock = None try: sock = socket.socket(af, socktype, proto) # If provided, set socket level options before connecting. _set_socket_options(sock, socket_options) if timeout is not _DEFAULT_TIMEOUT: sock.settimeout(timeout) if source_address: sock.bind(source_address) > sock.connect(sa) E ConnectionRefusedError: [Errno 111] Connection refused ../.tox/tests121/lib/python3.11/site-packages/urllib3/util/connection.py:73: ConnectionRefusedError The above exception was the direct cause of the following exception: self = method = 'DELETE' url = '/rests/data/network-topology:network-topology/topology=topology-netconf/node=XPDRA01' body = None headers = {'User-Agent': 'python-requests/2.32.3', 'Accept-Encoding': 'gzip, deflate', 'Accept': 'application/json', 'Connection': 'keep-alive', 'Content-Type': 'application/json', 'Content-Length': '0', 'Authorization': 'Basic YWRtaW46YWRtaW4='} retries = Retry(total=0, connect=None, read=False, redirect=None, status=None) redirect = False, assert_same_host = False timeout = Timeout(connect=10, read=10, total=None), pool_timeout = None release_conn = False, chunked = False, body_pos = None, preload_content = False decode_content = False, response_kw = {} parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/rests/data/network-topology:network-topology/topology=topology-netconf/node=XPDRA01', query=None, fragment=None) destination_scheme = None, conn = None, release_this_conn = True http_tunnel_required = False, err = None, clean_exit = False def urlopen( # type: ignore[override] self, method: str, url: str, body: _TYPE_BODY | None = None, headers: typing.Mapping[str, str] | None = None, retries: Retry | bool | int | None = None, redirect: bool = True, assert_same_host: bool = True, timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, pool_timeout: int | None = None, release_conn: bool | None = None, chunked: bool = False, body_pos: _TYPE_BODY_POSITION | None = None, preload_content: bool = True, decode_content: bool = True, **response_kw: typing.Any, ) -> BaseHTTPResponse: """ Get a connection from the pool and perform an HTTP request. This is the lowest level call for making a request, so you'll need to specify all the raw details. .. note:: More commonly, it's appropriate to use a convenience method such as :meth:`request`. .. note:: `release_conn` will only behave as expected if `preload_content=False` because we want to make `preload_content=False` the default behaviour someday soon without breaking backwards compatibility. :param method: HTTP request method (such as GET, POST, PUT, etc.) :param url: The URL to perform the request on. 
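test_16_xpdr_device_disconnection fails on the same refused connection while issuing the DELETE shown above (via test_utils.unmount_device). A hedged sketch of an equivalent request, assembled only from the method, URL and headers visible in the log (the real helper formats the URL from its RESTCONF version table):

import requests

url = ("http://localhost:8182/rests/data/network-topology:network-topology/"
       "topology=topology-netconf/node=XPDRA01")
try:
    response = requests.request(
        "DELETE",
        url,
        headers={"Content-Type": "application/json", "Accept": "application/json"},
        auth=("admin", "admin"),
        timeout=(10, 10),
    )
    print(response.status_code)
except requests.exceptions.ConnectionError as exc:
    print("unmount request failed:", exc)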
:param body: Data to send in the request body, either :class:`str`, :class:`bytes`, an iterable of :class:`str`/:class:`bytes`, or a file-like object. :param headers: Dictionary of custom headers to send, such as User-Agent, If-None-Match, etc. If None, pool headers are used. If provided, these headers completely replace any pool-specific headers. :param retries: Configure the number of retries to allow before raising a :class:`~urllib3.exceptions.MaxRetryError` exception. If ``None`` (default) will retry 3 times, see ``Retry.DEFAULT``. Pass a :class:`~urllib3.util.retry.Retry` object for fine-grained control over different types of retries. Pass an integer number to retry connection errors that many times, but no other types of errors. Pass zero to never retry. If ``False``, then retries are disabled and any exception is raised immediately. Also, instead of raising a MaxRetryError on redirects, the redirect response will be returned. :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. :param redirect: If True, automatically handle redirects (status codes 301, 302, 303, 307, 308). Each redirect counts as a retry. Disabling retries will disable redirect, too. :param assert_same_host: If ``True``, will make sure that the host of the pool requests is consistent else will raise HostChangedError. When ``False``, you can use the pool on an HTTP proxy and request foreign hosts. :param timeout: If specified, overrides the default timeout for this one request. It may be a float (in seconds) or an instance of :class:`urllib3.util.Timeout`. :param pool_timeout: If set and the pool is set to block=True, then this method will block for ``pool_timeout`` seconds and raise EmptyPoolError if no connection is available within the time period. :param bool preload_content: If True, the response's body will be preloaded into memory. :param bool decode_content: If True, will attempt to decode the body based on the 'content-encoding' header. :param release_conn: If False, then the urlopen call will not release the connection back into the pool once a response is received (but will release if you read the entire contents of the response such as when `preload_content=True`). This is useful if you're not preloading the response's content immediately. You will need to call ``r.release_conn()`` on the response ``r`` to return the connection back into the pool. If None, it takes the value of ``preload_content`` which defaults to ``True``. :param bool chunked: If True, urllib3 will send the body using chunked transfer encoding. Otherwise, urllib3 will send the body using the standard content-length form. Defaults to False. :param int body_pos: Position to seek to in file-like body in the event of a retry or redirect. Typically this won't need to be set because urllib3 will auto-populate the value when needed. """ parsed_url = parse_url(url) destination_scheme = parsed_url.scheme if headers is None: headers = self.headers if not isinstance(retries, Retry): retries = Retry.from_int(retries, redirect=redirect, default=self.retries) if release_conn is None: release_conn = preload_content # Check host if assert_same_host and not self.is_same_host(url): raise HostChangedError(self, url, retries) # Ensure that the URL we're connecting to is properly encoded if url.startswith("/"): url = to_str(_encode_target(url)) else: url = to_str(parsed_url.url) conn = None # Track whether `conn` needs to be released before # returning/raising/recursing. 
Update this variable if necessary, and # leave `release_conn` constant throughout the function. That way, if # the function recurses, the original value of `release_conn` will be # passed down into the recursive call, and its value will be respected. # # See issue #651 [1] for details. # # [1] release_this_conn = release_conn http_tunnel_required = connection_requires_http_tunnel( self.proxy, self.proxy_config, destination_scheme ) # Merge the proxy headers. Only done when not using HTTP CONNECT. We # have to copy the headers dict so we can safely change it without those # changes being reflected in anyone else's copy. if not http_tunnel_required: headers = headers.copy() # type: ignore[attr-defined] headers.update(self.proxy_headers) # type: ignore[union-attr] # Must keep the exception bound to a separate variable or else Python 3 # complains about UnboundLocalError. err = None # Keep track of whether we cleanly exited the except block. This # ensures we do proper cleanup in finally. clean_exit = False # Rewind body position, if needed. Record current position # for future rewinds in the event of a redirect/retry. body_pos = set_file_position(body, body_pos) try: # Request a connection from the queue. timeout_obj = self._get_timeout(timeout) conn = self._get_conn(timeout=pool_timeout) conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] # Is this a closed/new connection that requires CONNECT tunnelling? if self.proxy is not None and http_tunnel_required and conn.is_closed: try: self._prepare_proxy(conn) except (BaseSSLError, OSError, SocketTimeout) as e: self._raise_timeout( err=e, url=self.proxy.url, timeout_value=conn.timeout ) raise # If we're going to release the connection in ``finally:``, then # the response doesn't need to know about the connection. Otherwise # it will also try to release it and we'll have a double-release # mess. response_conn = conn if not release_conn else None # Make the request on the HTTPConnection object > response = self._make_request( conn, method, url, timeout=timeout_obj, body=body, headers=headers, chunked=chunked, retries=retries, response_conn=response_conn, preload_content=preload_content, decode_content=decode_content, **response_kw, ) ../.tox/tests121/lib/python3.11/site-packages/urllib3/connectionpool.py:789: _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ ../.tox/tests121/lib/python3.11/site-packages/urllib3/connectionpool.py:495: in _make_request conn.request( ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:441: in request self.endheaders() /opt/pyenv/versions/3.11.7/lib/python3.11/http/client.py:1289: in endheaders self._send_output(message_body, encode_chunked=encode_chunked) /opt/pyenv/versions/3.11.7/lib/python3.11/http/client.py:1048: in _send_output self.send(msg) /opt/pyenv/versions/3.11.7/lib/python3.11/http/client.py:986: in send self.connect() ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:279: in connect self.sock = self._new_conn() _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ self = def _new_conn(self) -> socket.socket: """Establish a socket connection and set nodelay settings on it. :return: New socket connection. 
""" try: sock = connection.create_connection( (self._dns_host, self.port), self.timeout, source_address=self.source_address, socket_options=self.socket_options, ) except socket.gaierror as e: raise NameResolutionError(self.host, self, e) from e except SocketTimeout as e: raise ConnectTimeoutError( self, f"Connection to {self.host} timed out. (connect timeout={self.timeout})", ) from e except OSError as e: > raise NewConnectionError( self, f"Failed to establish a new connection: {e}" ) from e E urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:214: NewConnectionError The above exception was the direct cause of the following exception: self = request = , stream = False timeout = Timeout(connect=10, read=10, total=None), verify = True, cert = None proxies = OrderedDict() def send( self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None ): """Sends PreparedRequest object. Returns Response object. :param request: The :class:`PreparedRequest ` being sent. :param stream: (optional) Whether to stream the request content. :param timeout: (optional) How long to wait for the server to send data before giving up, as a float, or a :ref:`(connect timeout, read timeout) ` tuple. :type timeout: float or tuple or urllib3 Timeout object :param verify: (optional) Either a boolean, in which case it controls whether we verify the server's TLS certificate, or a string, in which case it must be a path to a CA bundle to use :param cert: (optional) Any user-provided SSL certificate to be trusted. :param proxies: (optional) The proxies dictionary to apply to the request. :rtype: requests.Response """ try: conn = self.get_connection_with_tls_context( request, verify, proxies=proxies, cert=cert ) except LocationValueError as e: raise InvalidURL(e, request=request) self.cert_verify(conn, request.url, verify, cert) url = self.request_url(request, proxies) self.add_headers( request, stream=stream, timeout=timeout, verify=verify, cert=cert, proxies=proxies, ) chunked = not (request.body is None or "Content-Length" in request.headers) if isinstance(timeout, tuple): try: connect, read = timeout timeout = TimeoutSauce(connect=connect, read=read) except ValueError: raise ValueError( f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " f"or a single float to set both timeouts to the same value." 
            )
        elif isinstance(timeout, TimeoutSauce):
            pass
        else:
            timeout = TimeoutSauce(connect=timeout, read=timeout)

        try:
>           resp = conn.urlopen(
                method=request.method,
                url=url,
                body=request.body,
                headers=request.headers,
                redirect=False,
                assert_same_host=False,
                preload_content=False,
                decode_content=False,
                retries=self.max_retries,
                timeout=timeout,
                chunked=chunked,
            )

../.tox/tests121/lib/python3.11/site-packages/requests/adapters.py:667:
../.tox/tests121/lib/python3.11/site-packages/urllib3/connectionpool.py:843: in urlopen
    retries = retries.increment(
../.tox/tests121/lib/python3.11/site-packages/urllib3/util/retry.py:519: in increment
    raise MaxRetryError(_pool, url, reason) from reason
    (self = Retry(total=0, connect=None, read=False, redirect=None, status=None),
     method = 'DELETE',
     url = '/rests/data/network-topology:network-topology/topology=topology-netconf/node=XPDRA01',
     response = None,
     error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused'))
E   urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=8182): Max retries exceeded with url: /rests/data/network-topology:network-topology/topology=topology-netconf/node=XPDRA01 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused'))

During handling of the above exception, another exception occurred:

transportpce_tests/1.2.1/test01_portmapping.py:191: in test_16_xpdr_device_disconnection
    response = test_utils.unmount_device("XPDRA01")
transportpce_tests/common/test_utils.py:358: in unmount_device
    response = delete_request(url[RESTCONF_VERSION].format('{}', node))
transportpce_tests/common/test_utils.py:133: in delete_request
    return requests.request(
../.tox/tests121/lib/python3.11/site-packages/requests/api.py:59: in request
    return session.request(method=method, url=url, **kwargs)
../.tox/tests121/lib/python3.11/site-packages/requests/sessions.py:589: in request
    resp = self.send(prep, **send_kwargs)
../.tox/tests121/lib/python3.11/site-packages/requests/sessions.py:703: in send
    r = adapter.send(request, **kwargs)
../.tox/tests121/lib/python3.11/site-packages/requests/adapters.py:700: in send
    raise ConnectionError(e, request=request)
E   requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=8182): Max retries exceeded with url: /rests/data/network-topology:network-topology/topology=topology-netconf/node=XPDRA01 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused'))
----------------------------- Captured stdout call -----------------------------
execution of test_16_xpdr_device_disconnection
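The remaining failures below repeat this exact pattern: the RESTCONF call reaches urllib3 with Retry(total=0, connect=None, read=False, redirect=None, status=None), the TCP connection to localhost:8182 is refused, and the resulting MaxRetryError is re-raised by requests as ConnectionError. A minimal sketch of the DELETE that unmount_device() appears to issue follows; the base URL, credentials and helper name are assumptions reconstructed from the frames above, not the project's actual API.

# Minimal sketch (assumptions): mirrors the DELETE issued via
# transportpce_tests/common/test_utils.py::unmount_device in the traceback
# above. Base URL, credentials and helper name are illustrative only.
import requests

RESTCONF_BASE = "http://localhost:8182/rests/data"   # host/port taken from the log
NODE_URL = (RESTCONF_BASE
            + "/network-topology:network-topology/topology=topology-netconf/node={node}")

def unmount_device_sketch(node: str) -> requests.Response:
    # requests wraps urllib3; with the default HTTPAdapter (max_retries=0) a
    # refused TCP connection is never retried and surfaces immediately as
    # requests.exceptions.ConnectionError wrapping urllib3's MaxRetryError.
    return requests.request(
        "DELETE",
        NODE_URL.format(node=node),
        headers={"Content-Type": "application/json", "Accept": "application/json"},
        auth=("admin", "admin"),   # 'Basic YWRtaW46YWRtaW4=' in the captured headers
        timeout=(10, 10),          # matches Timeout(connect=10, read=10) in the frames
    )

try:
    unmount_device_sketch("XPDRA01")
except requests.exceptions.ConnectionError as exc:
    print(f"controller unreachable: {exc}")   # the situation reported by test_16

Because requests' default HTTPAdapter is created with max_retries=0, the first refused connection already exhausts the retry budget, which is why each of these tests fails immediately rather than retrying.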
_______ TransportPCEPortMappingTesting.test_17_xpdr_device_disconnected ________

../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:199: in _new_conn
    sock = connection.create_connection(
../.tox/tests121/lib/python3.11/site-packages/urllib3/util/connection.py:85: in create_connection
    raise err
../.tox/tests121/lib/python3.11/site-packages/urllib3/util/connection.py:73: in create_connection
    sock.connect(sa)
    (address = ('localhost', 8182), timeout = 10, source_address = None, socket_options = [(6, 1, 1)])
E   ConnectionRefusedError: [Errno 111] Connection refused

The above exception was the direct cause of the following exception:

../.tox/tests121/lib/python3.11/site-packages/urllib3/connectionpool.py:789: in urlopen
    response = self._make_request(
    (method = 'GET',
     url = '/rests/data/network-topology:network-topology/topology=topology-netconf/node=XPDRA01?content=nonconfig',
     retries = Retry(total=0, connect=None, read=False, redirect=None, status=None),
     timeout = Timeout(connect=10, read=10, total=None))
../.tox/tests121/lib/python3.11/site-packages/urllib3/connectionpool.py:495: in _make_request
    conn.request(
../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:441: in request
    self.endheaders()
/opt/pyenv/versions/3.11.7/lib/python3.11/http/client.py:1289: in endheaders
    self._send_output(message_body, encode_chunked=encode_chunked)
/opt/pyenv/versions/3.11.7/lib/python3.11/http/client.py:1048: in _send_output
    self.send(msg)
/opt/pyenv/versions/3.11.7/lib/python3.11/http/client.py:986: in send
    self.connect()
../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:279: in connect
    self.sock = self._new_conn()
../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:214: in _new_conn
    raise NewConnectionError(
E   urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused

The above exception was the direct cause of the following exception:

../.tox/tests121/lib/python3.11/site-packages/requests/adapters.py:667: in send
    resp = conn.urlopen(
../.tox/tests121/lib/python3.11/site-packages/urllib3/connectionpool.py:843: in urlopen
    retries = retries.increment(
../.tox/tests121/lib/python3.11/site-packages/urllib3/util/retry.py:519: in increment
    raise MaxRetryError(_pool, url, reason) from reason
E   urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=8182): Max retries exceeded with url: /rests/data/network-topology:network-topology/topology=topology-netconf/node=XPDRA01?content=nonconfig (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused'))

During handling of the above exception, another exception occurred:

transportpce_tests/1.2.1/test01_portmapping.py:195: in test_17_xpdr_device_disconnected
    response = test_utils.check_device_connection("XPDRA01")
transportpce_tests/common/test_utils.py:369: in check_device_connection
    response = get_request(url[RESTCONF_VERSION].format('{}', node))
transportpce_tests/common/test_utils.py:116: in get_request
    return requests.request(
../.tox/tests121/lib/python3.11/site-packages/requests/api.py:59: in request
    return session.request(method=method, url=url, **kwargs)
../.tox/tests121/lib/python3.11/site-packages/requests/sessions.py:589: in request
    resp = self.send(prep, **send_kwargs)
../.tox/tests121/lib/python3.11/site-packages/requests/sessions.py:703: in send
    r = adapter.send(request, **kwargs)
../.tox/tests121/lib/python3.11/site-packages/requests/adapters.py:700: in send
    raise ConnectionError(e, request=request)
E   requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=8182): Max retries exceeded with url: /rests/data/network-topology:network-topology/topology=topology-netconf/node=XPDRA01?content=nonconfig (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused'))
----------------------------- Captured stdout call -----------------------------
execution of test_17_xpdr_device_disconnected
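test_17 polls the operational (nonconfig) view of the netconf topology to confirm the device entry is gone. The sketch below models that check and shows how a controller that is down (a ConnectionError, as in this run) can be told apart from a device that has simply been unmounted (an HTTP status returned by RESTCONF); the helper name and return shape are illustrative assumptions, not the suite's API.

# Minimal sketch (assumptions): models the GET with ?content=nonconfig seen in
# the check_device_connection frames above; names and return shape are invented.
import requests

TOPO_NODE_URL = ("http://localhost:8182/rests/data/network-topology:network-topology"
                 "/topology=topology-netconf/node={node}?content=nonconfig")

def check_device_connection_sketch(node: str) -> dict:
    try:
        response = requests.get(
            TOPO_NODE_URL.format(node=node),
            headers={"Accept": "application/json"},
            auth=("admin", "admin"),
            timeout=(10, 10),
        )
    except requests.exceptions.ConnectionError:
        # Nothing is listening on localhost:8182 (the test_17 failure mode):
        # there is no HTTP status at all, only a refused TCP connection.
        return {"controller_reachable": False, "status": None}
    # The controller answered; the status code then indicates whether the node
    # entry is still present in the operational datastore.
    return {"controller_reachable": True, "status": response.status_code}

print(check_device_connection_sketch("XPDRA01"))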
_______ TransportPCEPortMappingTesting.test_18_xpdr_device_not_connected _______

../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:199: in _new_conn
    sock = connection.create_connection(
../.tox/tests121/lib/python3.11/site-packages/urllib3/util/connection.py:85: in create_connection
    raise err
../.tox/tests121/lib/python3.11/site-packages/urllib3/util/connection.py:73: in create_connection
    sock.connect(sa)
    (address = ('localhost', 8182), timeout = 10, source_address = None, socket_options = [(6, 1, 1)])
E   ConnectionRefusedError: [Errno 111] Connection refused

The above exception was the direct cause of the following exception:

../.tox/tests121/lib/python3.11/site-packages/urllib3/connectionpool.py:789: in urlopen
    response = self._make_request(
    (method = 'GET',
     url = '/rests/data/transportpce-portmapping:network/nodes=XPDRA01/node-info',
     retries = Retry(total=0, connect=None, read=False, redirect=None, status=None),
     timeout = Timeout(connect=10, read=10, total=None))
../.tox/tests121/lib/python3.11/site-packages/urllib3/connectionpool.py:495: in _make_request
    conn.request(
../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:441: in request
    self.endheaders()
/opt/pyenv/versions/3.11.7/lib/python3.11/http/client.py:1289: in endheaders
    self._send_output(message_body, encode_chunked=encode_chunked)
/opt/pyenv/versions/3.11.7/lib/python3.11/http/client.py:1048: in _send_output
    self.send(msg)
/opt/pyenv/versions/3.11.7/lib/python3.11/http/client.py:986: in send
    self.connect()
../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:279: in connect
    self.sock = self._new_conn()
../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:214: in _new_conn
    raise NewConnectionError(
E   urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused

The above exception was the direct cause of the following exception:

../.tox/tests121/lib/python3.11/site-packages/requests/adapters.py:667: in send
    resp = conn.urlopen(
../.tox/tests121/lib/python3.11/site-packages/urllib3/connectionpool.py:843: in urlopen
    retries = retries.increment(
../.tox/tests121/lib/python3.11/site-packages/urllib3/util/retry.py:519: in increment
    raise MaxRetryError(_pool, url, reason) from reason
E   urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=8182): Max retries exceeded with url: /rests/data/transportpce-portmapping:network/nodes=XPDRA01/node-info (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused'))

During handling of the above exception, another exception occurred:

transportpce_tests/1.2.1/test01_portmapping.py:203: in test_18_xpdr_device_not_connected
    response = test_utils.get_portmapping_node_attr("XPDRA01", "node-info", None)
transportpce_tests/common/test_utils.py:470: in get_portmapping_node_attr
    response = get_request(target_url)
transportpce_tests/common/test_utils.py:116: in get_request
    return requests.request(
../.tox/tests121/lib/python3.11/site-packages/requests/api.py:59: in request
    return session.request(method=method, url=url, **kwargs)
../.tox/tests121/lib/python3.11/site-packages/requests/sessions.py:589: in request
    resp = self.send(prep, **send_kwargs)
../.tox/tests121/lib/python3.11/site-packages/requests/sessions.py:703: in send
    r = adapter.send(request, **kwargs)
../.tox/tests121/lib/python3.11/site-packages/requests/adapters.py:700: in send
    raise ConnectionError(e, request=request)
E   requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=8182): Max retries exceeded with url: /rests/data/transportpce-portmapping:network/nodes=XPDRA01/node-info (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused'))
----------------------------- Captured stdout call -----------------------------
execution of test_18_xpdr_device_not_connected
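Every frame in these tracebacks carries Retry(total=0, connect=None, read=False, redirect=None, status=None), which is what requests builds from its default HTTPAdapter(max_retries=0). The sketch below, with illustrative values only, shows how a session could instead be mounted with a small urllib3 retry budget so that transient connection refusals are retried with backoff rather than failing on the first attempt.

# Minimal sketch (assumptions): retry counts, backoff and allowed methods are
# illustrative values, not the test suite's configuration.
import requests
from requests.adapters import HTTPAdapter
from urllib3.util.retry import Retry

retry = Retry(
    total=3,                           # overall budget (the failing runs use total=0)
    connect=3,                         # retry refused/unreachable connections
    backoff_factor=0.5,                # exponential backoff between attempts
    allowed_methods=["GET", "DELETE"],
)

session = requests.Session()
session.mount("http://", HTTPAdapter(max_retries=retry))

# With total=0 the first ConnectionRefusedError becomes NewConnectionError,
# Retry.increment() is immediately exhausted and raises MaxRetryError, and
# requests re-raises it as ConnectionError -- the chain repeated above.

This would not fix the underlying problem in this run, where nothing is listening on localhost:8182 at all, but it explains where the Retry(total=0, ...) objects in the tracebacks come from.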
_______ TransportPCEPortMappingTesting.test_19_rdm_device_disconnection ________

../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:199: in _new_conn
    sock = connection.create_connection(
../.tox/tests121/lib/python3.11/site-packages/urllib3/util/connection.py:85: in create_connection
    raise err
../.tox/tests121/lib/python3.11/site-packages/urllib3/util/connection.py:73: in create_connection
    sock.connect(sa)
    (address = ('localhost', 8182), timeout = 10, source_address = None, socket_options = [(6, 1, 1)])
E   ConnectionRefusedError: [Errno 111] Connection refused

The above exception was the direct cause of the following exception:

../.tox/tests121/lib/python3.11/site-packages/urllib3/connectionpool.py:789: in urlopen
    response = self._make_request(
    (method = 'DELETE',
     url = '/rests/data/network-topology:network-topology/topology=topology-netconf/node=ROADMA01',
     retries = Retry(total=0, connect=None, read=False, redirect=None, status=None),
     timeout = Timeout(connect=10, read=10, total=None))
../.tox/tests121/lib/python3.11/site-packages/urllib3/connectionpool.py:495: in _make_request
    conn.request(
../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:441: in request
    self.endheaders()
/opt/pyenv/versions/3.11.7/lib/python3.11/http/client.py:1289: in endheaders
    self._send_output(message_body, encode_chunked=encode_chunked)
/opt/pyenv/versions/3.11.7/lib/python3.11/http/client.py:1048: in _send_output
    self.send(msg)
/opt/pyenv/versions/3.11.7/lib/python3.11/http/client.py:986: in send
    self.connect()
../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:279: in connect
    self.sock = self._new_conn()
../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:214: in _new_conn
    raise NewConnectionError(
E   urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused

The above exception was the direct cause of the following exception:

../.tox/tests121/lib/python3.11/site-packages/requests/adapters.py:667: in send
    resp = conn.urlopen(
../.tox/tests121/lib/python3.11/site-packages/urllib3/connectionpool.py:843: in urlopen
    retries = retries.increment(

self = Retry(total=0, connect=None, read=False, redirect=None, status=None)
method = 'DELETE'
url = '/rests/data/network-topology:network-topology/topology=topology-netconf/node=ROADMA01'
response = None
error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')

    def increment(
        self,
        method: str | None = None,
        url: str | None = None,
        response: BaseHTTPResponse | None = None,
        error: Exception | None = None,
        _pool: ConnectionPool | None = None,
        _stacktrace: TracebackType | None = None,
    ) -> Self:
        """Return a new Retry object with incremented retry counters.

        :param response: A response object, or None, if the server did not
            return a response.
        :type response: :class:`~urllib3.response.BaseHTTPResponse`
        :param Exception error: An error encountered during the request, or
            None if the response was received successfully.

        :return: A new ``Retry`` object.
        """
        if self.total is False and error:
            # Disabled, indicate to re-raise the error.
            raise reraise(type(error), error, _stacktrace)

        total = self.total
        if total is not None:
            total -= 1

        connect = self.connect
        read = self.read
        redirect = self.redirect
        status_count = self.status
        other = self.other
        cause = "unknown"
        status = None
        redirect_location = None

        if error and self._is_connection_error(error):
            # Connect retry?
            if connect is False:
                raise reraise(type(error), error, _stacktrace)
            elif connect is not None:
                connect -= 1

        elif error and self._is_read_error(error):
            # Read retry?
            if read is False or method is None or not self._is_method_retryable(method):
                raise reraise(type(error), error, _stacktrace)
            elif read is not None:
                read -= 1

        elif error:
            # Other retry?
            if other is not None:
                other -= 1

        elif response and response.get_redirect_location():
            # Redirect retry?
if redirect is not None: redirect -= 1 cause = "too many redirects" response_redirect_location = response.get_redirect_location() if response_redirect_location: redirect_location = response_redirect_location status = response.status else: # Incrementing because of a server error like a 500 in # status_forcelist and the given method is in the allowed_methods cause = ResponseError.GENERIC_ERROR if response and response.status: if status_count is not None: status_count -= 1 cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status) status = response.status history = self.history + ( RequestHistory(method, url, error, status, redirect_location), ) new_retry = self.new( total=total, connect=connect, read=read, redirect=redirect, status=status_count, other=other, history=history, ) if new_retry.is_exhausted(): reason = error or ResponseError(cause) > raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type] E urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=8182): Max retries exceeded with url: /rests/data/network-topology:network-topology/topology=topology-netconf/node=ROADMA01 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) ../.tox/tests121/lib/python3.11/site-packages/urllib3/util/retry.py:519: MaxRetryError During handling of the above exception, another exception occurred: self = def test_19_rdm_device_disconnection(self): > response = test_utils.unmount_device("ROADMA01") transportpce_tests/1.2.1/test01_portmapping.py:211: _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ transportpce_tests/common/test_utils.py:358: in unmount_device response = delete_request(url[RESTCONF_VERSION].format('{}', node)) transportpce_tests/common/test_utils.py:133: in delete_request return requests.request( ../.tox/tests121/lib/python3.11/site-packages/requests/api.py:59: in request return session.request(method=method, url=url, **kwargs) ../.tox/tests121/lib/python3.11/site-packages/requests/sessions.py:589: in request resp = self.send(prep, **send_kwargs) ../.tox/tests121/lib/python3.11/site-packages/requests/sessions.py:703: in send r = adapter.send(request, **kwargs) _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ self = request = , stream = False timeout = Timeout(connect=10, read=10, total=None), verify = True, cert = None proxies = OrderedDict() def send( self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None ): """Sends PreparedRequest object. Returns Response object. :param request: The :class:`PreparedRequest ` being sent. :param stream: (optional) Whether to stream the request content. :param timeout: (optional) How long to wait for the server to send data before giving up, as a float, or a :ref:`(connect timeout, read timeout) ` tuple. :type timeout: float or tuple or urllib3 Timeout object :param verify: (optional) Either a boolean, in which case it controls whether we verify the server's TLS certificate, or a string, in which case it must be a path to a CA bundle to use :param cert: (optional) Any user-provided SSL certificate to be trusted. :param proxies: (optional) The proxies dictionary to apply to the request. 
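# The Retry object shown in this trace -- Retry(total=0, connect=None, read=False, ...) --
# is what requests builds for its default max_retries=0: a single attempt with no retry
# budget, so the first refused connection makes increment() raise MaxRetryError straight
# away. A minimal sketch (illustrative only; the numbers and the mounted prefix are
# assumptions, not taken from test_utils) of giving a session a small connect-retry
# budget instead:
import requests
from requests.adapters import HTTPAdapter
from urllib3.util.retry import Retry

session = requests.Session()
retry = Retry(total=3, connect=3, backoff_factor=0.5)  # retry refused connections a few times
session.mount("http://", HTTPAdapter(max_retries=retry))
# A session configured like this would retry ECONNREFUSED with back-off before
# finally surfacing requests.exceptions.ConnectionError.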
:rtype: requests.Response """ try: conn = self.get_connection_with_tls_context( request, verify, proxies=proxies, cert=cert ) except LocationValueError as e: raise InvalidURL(e, request=request) self.cert_verify(conn, request.url, verify, cert) url = self.request_url(request, proxies) self.add_headers( request, stream=stream, timeout=timeout, verify=verify, cert=cert, proxies=proxies, ) chunked = not (request.body is None or "Content-Length" in request.headers) if isinstance(timeout, tuple): try: connect, read = timeout timeout = TimeoutSauce(connect=connect, read=read) except ValueError: raise ValueError( f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " f"or a single float to set both timeouts to the same value." ) elif isinstance(timeout, TimeoutSauce): pass else: timeout = TimeoutSauce(connect=timeout, read=timeout) try: resp = conn.urlopen( method=request.method, url=url, body=request.body, headers=request.headers, redirect=False, assert_same_host=False, preload_content=False, decode_content=False, retries=self.max_retries, timeout=timeout, chunked=chunked, ) except (ProtocolError, OSError) as err: raise ConnectionError(err, request=request) except MaxRetryError as e: if isinstance(e.reason, ConnectTimeoutError): # TODO: Remove this in 3.0.0: see #2811 if not isinstance(e.reason, NewConnectionError): raise ConnectTimeout(e, request=request) if isinstance(e.reason, ResponseError): raise RetryError(e, request=request) if isinstance(e.reason, _ProxyError): raise ProxyError(e, request=request) if isinstance(e.reason, _SSLError): # This branch is for urllib3 v1.22 and later. raise SSLError(e, request=request) > raise ConnectionError(e, request=request) E requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=8182): Max retries exceeded with url: /rests/data/network-topology:network-topology/topology=topology-netconf/node=ROADMA01 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) ../.tox/tests121/lib/python3.11/site-packages/requests/adapters.py:700: ConnectionError ----------------------------- Captured stdout call ----------------------------- execution of test_19_rdm_device_disconnection ________ TransportPCEPortMappingTesting.test_20_rdm_device_disconnected ________ self = def _new_conn(self) -> socket.socket: """Establish a socket connection and set nodelay settings on it. :return: New socket connection. """ try: > sock = connection.create_connection( (self._dns_host, self.port), self.timeout, source_address=self.source_address, socket_options=self.socket_options, ) ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:199: _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ ../.tox/tests121/lib/python3.11/site-packages/urllib3/util/connection.py:85: in create_connection raise err _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ address = ('localhost', 8182), timeout = 10, source_address = None socket_options = [(6, 1, 1)] def create_connection( address: tuple[str, int], timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, source_address: tuple[str, int] | None = None, socket_options: _TYPE_SOCKET_OPTIONS | None = None, ) -> socket.socket: """Connect to *address* and return the socket object. Convenience function. Connect to *address* (a 2-tuple ``(host, port)``) and return the socket object. Passing the optional *timeout* parameter will set the timeout on the socket instance before attempting to connect. 
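# test_19 failed inside test_utils.unmount_device("ROADMA01"), which sends a RESTCONF
# DELETE to the controller. The sketch below reconstructs an equivalent request from
# the URL, headers and timeout captured in the trace; it is an approximation for
# illustration, not the repository's actual helper code.
import requests

RESTCONF_NODE_URL = ("http://localhost:8182/rests/data/network-topology:network-topology/"
                     "topology=topology-netconf/node={}")

def unmount_device_sketch(node: str) -> requests.Response:
    # The captured Authorization header (Basic YWRtaW46YWRtaW4=) decodes to admin:admin.
    return requests.delete(
        RESTCONF_NODE_URL.format(node),
        headers={"Content-Type": "application/json", "Accept": "application/json"},
        auth=("admin", "admin"),
        timeout=(10, 10),  # matches Timeout(connect=10, read=10) in the trace
    )

# With nothing listening on localhost:8182 this raises
# requests.exceptions.ConnectionError, which is exactly the failure reported
# for test_19 above.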
If no *timeout* is supplied, the global default timeout setting returned by :func:`socket.getdefaulttimeout` is used. If *source_address* is set it must be a tuple of (host, port) for the socket to bind as a source address before making the connection. An host of '' or port 0 tells the OS to use the default. """ host, port = address if host.startswith("["): host = host.strip("[]") err = None # Using the value from allowed_gai_family() in the context of getaddrinfo lets # us select whether to work with IPv4 DNS records, IPv6 records, or both. # The original create_connection function always returns all records. family = allowed_gai_family() try: host.encode("idna") except UnicodeError: raise LocationParseError(f"'{host}', label empty or too long") from None for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM): af, socktype, proto, canonname, sa = res sock = None try: sock = socket.socket(af, socktype, proto) # If provided, set socket level options before connecting. _set_socket_options(sock, socket_options) if timeout is not _DEFAULT_TIMEOUT: sock.settimeout(timeout) if source_address: sock.bind(source_address) > sock.connect(sa) E ConnectionRefusedError: [Errno 111] Connection refused ../.tox/tests121/lib/python3.11/site-packages/urllib3/util/connection.py:73: ConnectionRefusedError The above exception was the direct cause of the following exception: self = method = 'GET' url = '/rests/data/network-topology:network-topology/topology=topology-netconf/node=ROADMA01?content=nonconfig' body = None headers = {'User-Agent': 'python-requests/2.32.3', 'Accept-Encoding': 'gzip, deflate', 'Accept': 'application/json', 'Connection': 'keep-alive', 'Content-Type': 'application/json', 'Authorization': 'Basic YWRtaW46YWRtaW4='} retries = Retry(total=0, connect=None, read=False, redirect=None, status=None) redirect = False, assert_same_host = False timeout = Timeout(connect=10, read=10, total=None), pool_timeout = None release_conn = False, chunked = False, body_pos = None, preload_content = False decode_content = False, response_kw = {} parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/rests/data/network-topology:network-topology/topology=topology-netconf/node=ROADMA01', query='content=nonconfig', fragment=None) destination_scheme = None, conn = None, release_this_conn = True http_tunnel_required = False, err = None, clean_exit = False def urlopen( # type: ignore[override] self, method: str, url: str, body: _TYPE_BODY | None = None, headers: typing.Mapping[str, str] | None = None, retries: Retry | bool | int | None = None, redirect: bool = True, assert_same_host: bool = True, timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, pool_timeout: int | None = None, release_conn: bool | None = None, chunked: bool = False, body_pos: _TYPE_BODY_POSITION | None = None, preload_content: bool = True, decode_content: bool = True, **response_kw: typing.Any, ) -> BaseHTTPResponse: """ Get a connection from the pool and perform an HTTP request. This is the lowest level call for making a request, so you'll need to specify all the raw details. .. note:: More commonly, it's appropriate to use a convenience method such as :meth:`request`. .. note:: `release_conn` will only behave as expected if `preload_content=False` because we want to make `preload_content=False` the default behaviour someday soon without breaking backwards compatibility. :param method: HTTP request method (such as GET, POST, PUT, etc.) :param url: The URL to perform the request on. 
:param body: Data to send in the request body, either :class:`str`, :class:`bytes`, an iterable of :class:`str`/:class:`bytes`, or a file-like object. :param headers: Dictionary of custom headers to send, such as User-Agent, If-None-Match, etc. If None, pool headers are used. If provided, these headers completely replace any pool-specific headers. :param retries: Configure the number of retries to allow before raising a :class:`~urllib3.exceptions.MaxRetryError` exception. If ``None`` (default) will retry 3 times, see ``Retry.DEFAULT``. Pass a :class:`~urllib3.util.retry.Retry` object for fine-grained control over different types of retries. Pass an integer number to retry connection errors that many times, but no other types of errors. Pass zero to never retry. If ``False``, then retries are disabled and any exception is raised immediately. Also, instead of raising a MaxRetryError on redirects, the redirect response will be returned. :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. :param redirect: If True, automatically handle redirects (status codes 301, 302, 303, 307, 308). Each redirect counts as a retry. Disabling retries will disable redirect, too. :param assert_same_host: If ``True``, will make sure that the host of the pool requests is consistent else will raise HostChangedError. When ``False``, you can use the pool on an HTTP proxy and request foreign hosts. :param timeout: If specified, overrides the default timeout for this one request. It may be a float (in seconds) or an instance of :class:`urllib3.util.Timeout`. :param pool_timeout: If set and the pool is set to block=True, then this method will block for ``pool_timeout`` seconds and raise EmptyPoolError if no connection is available within the time period. :param bool preload_content: If True, the response's body will be preloaded into memory. :param bool decode_content: If True, will attempt to decode the body based on the 'content-encoding' header. :param release_conn: If False, then the urlopen call will not release the connection back into the pool once a response is received (but will release if you read the entire contents of the response such as when `preload_content=True`). This is useful if you're not preloading the response's content immediately. You will need to call ``r.release_conn()`` on the response ``r`` to return the connection back into the pool. If None, it takes the value of ``preload_content`` which defaults to ``True``. :param bool chunked: If True, urllib3 will send the body using chunked transfer encoding. Otherwise, urllib3 will send the body using the standard content-length form. Defaults to False. :param int body_pos: Position to seek to in file-like body in the event of a retry or redirect. Typically this won't need to be set because urllib3 will auto-populate the value when needed. """ parsed_url = parse_url(url) destination_scheme = parsed_url.scheme if headers is None: headers = self.headers if not isinstance(retries, Retry): retries = Retry.from_int(retries, redirect=redirect, default=self.retries) if release_conn is None: release_conn = preload_content # Check host if assert_same_host and not self.is_same_host(url): raise HostChangedError(self, url, retries) # Ensure that the URL we're connecting to is properly encoded if url.startswith("/"): url = to_str(_encode_target(url)) else: url = to_str(parsed_url.url) conn = None # Track whether `conn` needs to be released before # returning/raising/recursing. 
Update this variable if necessary, and # leave `release_conn` constant throughout the function. That way, if # the function recurses, the original value of `release_conn` will be # passed down into the recursive call, and its value will be respected. # # See issue #651 [1] for details. # # [1] release_this_conn = release_conn http_tunnel_required = connection_requires_http_tunnel( self.proxy, self.proxy_config, destination_scheme ) # Merge the proxy headers. Only done when not using HTTP CONNECT. We # have to copy the headers dict so we can safely change it without those # changes being reflected in anyone else's copy. if not http_tunnel_required: headers = headers.copy() # type: ignore[attr-defined] headers.update(self.proxy_headers) # type: ignore[union-attr] # Must keep the exception bound to a separate variable or else Python 3 # complains about UnboundLocalError. err = None # Keep track of whether we cleanly exited the except block. This # ensures we do proper cleanup in finally. clean_exit = False # Rewind body position, if needed. Record current position # for future rewinds in the event of a redirect/retry. body_pos = set_file_position(body, body_pos) try: # Request a connection from the queue. timeout_obj = self._get_timeout(timeout) conn = self._get_conn(timeout=pool_timeout) conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] # Is this a closed/new connection that requires CONNECT tunnelling? if self.proxy is not None and http_tunnel_required and conn.is_closed: try: self._prepare_proxy(conn) except (BaseSSLError, OSError, SocketTimeout) as e: self._raise_timeout( err=e, url=self.proxy.url, timeout_value=conn.timeout ) raise # If we're going to release the connection in ``finally:``, then # the response doesn't need to know about the connection. Otherwise # it will also try to release it and we'll have a double-release # mess. response_conn = conn if not release_conn else None # Make the request on the HTTPConnection object > response = self._make_request( conn, method, url, timeout=timeout_obj, body=body, headers=headers, chunked=chunked, retries=retries, response_conn=response_conn, preload_content=preload_content, decode_content=decode_content, **response_kw, ) ../.tox/tests121/lib/python3.11/site-packages/urllib3/connectionpool.py:789: _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ ../.tox/tests121/lib/python3.11/site-packages/urllib3/connectionpool.py:495: in _make_request conn.request( ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:441: in request self.endheaders() /opt/pyenv/versions/3.11.7/lib/python3.11/http/client.py:1289: in endheaders self._send_output(message_body, encode_chunked=encode_chunked) /opt/pyenv/versions/3.11.7/lib/python3.11/http/client.py:1048: in _send_output self.send(msg) /opt/pyenv/versions/3.11.7/lib/python3.11/http/client.py:986: in send self.connect() ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:279: in connect self.sock = self._new_conn() _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ self = def _new_conn(self) -> socket.socket: """Establish a socket connection and set nodelay settings on it. :return: New socket connection. 
""" try: sock = connection.create_connection( (self._dns_host, self.port), self.timeout, source_address=self.source_address, socket_options=self.socket_options, ) except socket.gaierror as e: raise NameResolutionError(self.host, self, e) from e except SocketTimeout as e: raise ConnectTimeoutError( self, f"Connection to {self.host} timed out. (connect timeout={self.timeout})", ) from e except OSError as e: > raise NewConnectionError( self, f"Failed to establish a new connection: {e}" ) from e E urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:214: NewConnectionError The above exception was the direct cause of the following exception: self = request = , stream = False timeout = Timeout(connect=10, read=10, total=None), verify = True, cert = None proxies = OrderedDict() def send( self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None ): """Sends PreparedRequest object. Returns Response object. :param request: The :class:`PreparedRequest ` being sent. :param stream: (optional) Whether to stream the request content. :param timeout: (optional) How long to wait for the server to send data before giving up, as a float, or a :ref:`(connect timeout, read timeout) ` tuple. :type timeout: float or tuple or urllib3 Timeout object :param verify: (optional) Either a boolean, in which case it controls whether we verify the server's TLS certificate, or a string, in which case it must be a path to a CA bundle to use :param cert: (optional) Any user-provided SSL certificate to be trusted. :param proxies: (optional) The proxies dictionary to apply to the request. :rtype: requests.Response """ try: conn = self.get_connection_with_tls_context( request, verify, proxies=proxies, cert=cert ) except LocationValueError as e: raise InvalidURL(e, request=request) self.cert_verify(conn, request.url, verify, cert) url = self.request_url(request, proxies) self.add_headers( request, stream=stream, timeout=timeout, verify=verify, cert=cert, proxies=proxies, ) chunked = not (request.body is None or "Content-Length" in request.headers) if isinstance(timeout, tuple): try: connect, read = timeout timeout = TimeoutSauce(connect=connect, read=read) except ValueError: raise ValueError( f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " f"or a single float to set both timeouts to the same value." 
) elif isinstance(timeout, TimeoutSauce): pass else: timeout = TimeoutSauce(connect=timeout, read=timeout) try: > resp = conn.urlopen( method=request.method, url=url, body=request.body, headers=request.headers, redirect=False, assert_same_host=False, preload_content=False, decode_content=False, retries=self.max_retries, timeout=timeout, chunked=chunked, ) ../.tox/tests121/lib/python3.11/site-packages/requests/adapters.py:667: _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ ../.tox/tests121/lib/python3.11/site-packages/urllib3/connectionpool.py:843: in urlopen retries = retries.increment( _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ self = Retry(total=0, connect=None, read=False, redirect=None, status=None) method = 'GET' url = '/rests/data/network-topology:network-topology/topology=topology-netconf/node=ROADMA01?content=nonconfig' response = None error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused') _pool = _stacktrace = def increment( self, method: str | None = None, url: str | None = None, response: BaseHTTPResponse | None = None, error: Exception | None = None, _pool: ConnectionPool | None = None, _stacktrace: TracebackType | None = None, ) -> Self: """Return a new Retry object with incremented retry counters. :param response: A response object, or None, if the server did not return a response. :type response: :class:`~urllib3.response.BaseHTTPResponse` :param Exception error: An error encountered during the request, or None if the response was received successfully. :return: A new ``Retry`` object. """ if self.total is False and error: # Disabled, indicate to re-raise the error. raise reraise(type(error), error, _stacktrace) total = self.total if total is not None: total -= 1 connect = self.connect read = self.read redirect = self.redirect status_count = self.status other = self.other cause = "unknown" status = None redirect_location = None if error and self._is_connection_error(error): # Connect retry? if connect is False: raise reraise(type(error), error, _stacktrace) elif connect is not None: connect -= 1 elif error and self._is_read_error(error): # Read retry? if read is False or method is None or not self._is_method_retryable(method): raise reraise(type(error), error, _stacktrace) elif read is not None: read -= 1 elif error: # Other retry? if other is not None: other -= 1 elif response and response.get_redirect_location(): # Redirect retry? 
if redirect is not None: redirect -= 1 cause = "too many redirects" response_redirect_location = response.get_redirect_location() if response_redirect_location: redirect_location = response_redirect_location status = response.status else: # Incrementing because of a server error like a 500 in # status_forcelist and the given method is in the allowed_methods cause = ResponseError.GENERIC_ERROR if response and response.status: if status_count is not None: status_count -= 1 cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status) status = response.status history = self.history + ( RequestHistory(method, url, error, status, redirect_location), ) new_retry = self.new( total=total, connect=connect, read=read, redirect=redirect, status=status_count, other=other, history=history, ) if new_retry.is_exhausted(): reason = error or ResponseError(cause) > raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type] E urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=8182): Max retries exceeded with url: /rests/data/network-topology:network-topology/topology=topology-netconf/node=ROADMA01?content=nonconfig (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) ../.tox/tests121/lib/python3.11/site-packages/urllib3/util/retry.py:519: MaxRetryError During handling of the above exception, another exception occurred: self = def test_20_rdm_device_disconnected(self): > response = test_utils.check_device_connection("ROADMA01") transportpce_tests/1.2.1/test01_portmapping.py:215: _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ transportpce_tests/common/test_utils.py:369: in check_device_connection response = get_request(url[RESTCONF_VERSION].format('{}', node)) transportpce_tests/common/test_utils.py:116: in get_request return requests.request( ../.tox/tests121/lib/python3.11/site-packages/requests/api.py:59: in request return session.request(method=method, url=url, **kwargs) ../.tox/tests121/lib/python3.11/site-packages/requests/sessions.py:589: in request resp = self.send(prep, **send_kwargs) ../.tox/tests121/lib/python3.11/site-packages/requests/sessions.py:703: in send r = adapter.send(request, **kwargs) _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ self = request = , stream = False timeout = Timeout(connect=10, read=10, total=None), verify = True, cert = None proxies = OrderedDict() def send( self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None ): """Sends PreparedRequest object. Returns Response object. :param request: The :class:`PreparedRequest ` being sent. :param stream: (optional) Whether to stream the request content. :param timeout: (optional) How long to wait for the server to send data before giving up, as a float, or a :ref:`(connect timeout, read timeout) ` tuple. :type timeout: float or tuple or urllib3 Timeout object :param verify: (optional) Either a boolean, in which case it controls whether we verify the server's TLS certificate, or a string, in which case it must be a path to a CA bundle to use :param cert: (optional) Any user-provided SSL certificate to be trusted. :param proxies: (optional) The proxies dictionary to apply to the request. 
:rtype: requests.Response """ try: conn = self.get_connection_with_tls_context( request, verify, proxies=proxies, cert=cert ) except LocationValueError as e: raise InvalidURL(e, request=request) self.cert_verify(conn, request.url, verify, cert) url = self.request_url(request, proxies) self.add_headers( request, stream=stream, timeout=timeout, verify=verify, cert=cert, proxies=proxies, ) chunked = not (request.body is None or "Content-Length" in request.headers) if isinstance(timeout, tuple): try: connect, read = timeout timeout = TimeoutSauce(connect=connect, read=read) except ValueError: raise ValueError( f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " f"or a single float to set both timeouts to the same value." ) elif isinstance(timeout, TimeoutSauce): pass else: timeout = TimeoutSauce(connect=timeout, read=timeout) try: resp = conn.urlopen( method=request.method, url=url, body=request.body, headers=request.headers, redirect=False, assert_same_host=False, preload_content=False, decode_content=False, retries=self.max_retries, timeout=timeout, chunked=chunked, ) except (ProtocolError, OSError) as err: raise ConnectionError(err, request=request) except MaxRetryError as e: if isinstance(e.reason, ConnectTimeoutError): # TODO: Remove this in 3.0.0: see #2811 if not isinstance(e.reason, NewConnectionError): raise ConnectTimeout(e, request=request) if isinstance(e.reason, ResponseError): raise RetryError(e, request=request) if isinstance(e.reason, _ProxyError): raise ProxyError(e, request=request) if isinstance(e.reason, _SSLError): # This branch is for urllib3 v1.22 and later. raise SSLError(e, request=request) > raise ConnectionError(e, request=request) E requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=8182): Max retries exceeded with url: /rests/data/network-topology:network-topology/topology=topology-netconf/node=ROADMA01?content=nonconfig (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) ../.tox/tests121/lib/python3.11/site-packages/requests/adapters.py:700: ConnectionError ----------------------------- Captured stdout call ----------------------------- execution of test_20_rdm_device_disconnected _______ TransportPCEPortMappingTesting.test_21_rdm_device_not_connected ________ self = def _new_conn(self) -> socket.socket: """Establish a socket connection and set nodelay settings on it. :return: New socket connection. """ try: > sock = connection.create_connection( (self._dns_host, self.port), self.timeout, source_address=self.source_address, socket_options=self.socket_options, ) ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:199: _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ ../.tox/tests121/lib/python3.11/site-packages/urllib3/util/connection.py:85: in create_connection raise err _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ address = ('localhost', 8182), timeout = 10, source_address = None socket_options = [(6, 1, 1)] def create_connection( address: tuple[str, int], timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, source_address: tuple[str, int] | None = None, socket_options: _TYPE_SOCKET_OPTIONS | None = None, ) -> socket.socket: """Connect to *address* and return the socket object. Convenience function. Connect to *address* (a 2-tuple ``(host, port)``) and return the socket object. Passing the optional *timeout* parameter will set the timeout on the socket instance before attempting to connect. 
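# The adapter code above maps urllib3's MaxRetryError onto requests-level exceptions,
# so a refused connection surfaces to the tests as requests.exceptions.ConnectionError.
# A hypothetical wrapper (not part of test_utils) showing how a caller can tell
# "controller unreachable" apart from an HTTP error status when checking a device,
# using the same URL as test_20:
import requests

def device_connection_status(node: str) -> str:
    url = ("http://localhost:8182/rests/data/network-topology:network-topology/"
           f"topology=topology-netconf/node={node}?content=nonconfig")
    try:
        response = requests.get(url, auth=("admin", "admin"),
                                headers={"Accept": "application/json"},
                                timeout=(10, 10))
    except requests.exceptions.ConnectionError:
        return "controller unreachable"   # the branch hit throughout this run
    except requests.exceptions.Timeout:
        return "controller not answering"
    return f"HTTP {response.status_code}"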
If no *timeout* is supplied, the global default timeout setting returned by :func:`socket.getdefaulttimeout` is used. If *source_address* is set it must be a tuple of (host, port) for the socket to bind as a source address before making the connection. An host of '' or port 0 tells the OS to use the default. """ host, port = address if host.startswith("["): host = host.strip("[]") err = None # Using the value from allowed_gai_family() in the context of getaddrinfo lets # us select whether to work with IPv4 DNS records, IPv6 records, or both. # The original create_connection function always returns all records. family = allowed_gai_family() try: host.encode("idna") except UnicodeError: raise LocationParseError(f"'{host}', label empty or too long") from None for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM): af, socktype, proto, canonname, sa = res sock = None try: sock = socket.socket(af, socktype, proto) # If provided, set socket level options before connecting. _set_socket_options(sock, socket_options) if timeout is not _DEFAULT_TIMEOUT: sock.settimeout(timeout) if source_address: sock.bind(source_address) > sock.connect(sa) E ConnectionRefusedError: [Errno 111] Connection refused ../.tox/tests121/lib/python3.11/site-packages/urllib3/util/connection.py:73: ConnectionRefusedError The above exception was the direct cause of the following exception: self = method = 'GET' url = '/rests/data/transportpce-portmapping:network/nodes=ROADMA01/node-info' body = None headers = {'User-Agent': 'python-requests/2.32.3', 'Accept-Encoding': 'gzip, deflate', 'Accept': 'application/json', 'Connection': 'keep-alive', 'Content-Type': 'application/json', 'Authorization': 'Basic YWRtaW46YWRtaW4='} retries = Retry(total=0, connect=None, read=False, redirect=None, status=None) redirect = False, assert_same_host = False timeout = Timeout(connect=10, read=10, total=None), pool_timeout = None release_conn = False, chunked = False, body_pos = None, preload_content = False decode_content = False, response_kw = {} parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/rests/data/transportpce-portmapping:network/nodes=ROADMA01/node-info', query=None, fragment=None) destination_scheme = None, conn = None, release_this_conn = True http_tunnel_required = False, err = None, clean_exit = False def urlopen( # type: ignore[override] self, method: str, url: str, body: _TYPE_BODY | None = None, headers: typing.Mapping[str, str] | None = None, retries: Retry | bool | int | None = None, redirect: bool = True, assert_same_host: bool = True, timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, pool_timeout: int | None = None, release_conn: bool | None = None, chunked: bool = False, body_pos: _TYPE_BODY_POSITION | None = None, preload_content: bool = True, decode_content: bool = True, **response_kw: typing.Any, ) -> BaseHTTPResponse: """ Get a connection from the pool and perform an HTTP request. This is the lowest level call for making a request, so you'll need to specify all the raw details. .. note:: More commonly, it's appropriate to use a convenience method such as :meth:`request`. .. note:: `release_conn` will only behave as expected if `preload_content=False` because we want to make `preload_content=False` the default behaviour someday soon without breaking backwards compatibility. :param method: HTTP request method (such as GET, POST, PUT, etc.) :param url: The URL to perform the request on. 
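# Every failure in this run bottoms out in socket.connect() getting ECONNREFUSED from
# ('localhost', 8182), as shown by create_connection() above. A quick probe, independent
# of requests/urllib3, that reproduces the same low-level check (illustrative only):
import socket

def restconf_port_open(host: str = "localhost", port: int = 8182, timeout: float = 5.0) -> bool:
    try:
        # socket.create_connection is the same call urllib3 uses internally.
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:  # ConnectionRefusedError and socket.timeout are both OSError subclasses
        return False

# restconf_port_open() returning False here would explain the run of
# "Connection refused" failures above without involving the HTTP stack at all.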
:param body: Data to send in the request body, either :class:`str`, :class:`bytes`, an iterable of :class:`str`/:class:`bytes`, or a file-like object. :param headers: Dictionary of custom headers to send, such as User-Agent, If-None-Match, etc. If None, pool headers are used. If provided, these headers completely replace any pool-specific headers. :param retries: Configure the number of retries to allow before raising a :class:`~urllib3.exceptions.MaxRetryError` exception. If ``None`` (default) will retry 3 times, see ``Retry.DEFAULT``. Pass a :class:`~urllib3.util.retry.Retry` object for fine-grained control over different types of retries. Pass an integer number to retry connection errors that many times, but no other types of errors. Pass zero to never retry. If ``False``, then retries are disabled and any exception is raised immediately. Also, instead of raising a MaxRetryError on redirects, the redirect response will be returned. :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. :param redirect: If True, automatically handle redirects (status codes 301, 302, 303, 307, 308). Each redirect counts as a retry. Disabling retries will disable redirect, too. :param assert_same_host: If ``True``, will make sure that the host of the pool requests is consistent else will raise HostChangedError. When ``False``, you can use the pool on an HTTP proxy and request foreign hosts. :param timeout: If specified, overrides the default timeout for this one request. It may be a float (in seconds) or an instance of :class:`urllib3.util.Timeout`. :param pool_timeout: If set and the pool is set to block=True, then this method will block for ``pool_timeout`` seconds and raise EmptyPoolError if no connection is available within the time period. :param bool preload_content: If True, the response's body will be preloaded into memory. :param bool decode_content: If True, will attempt to decode the body based on the 'content-encoding' header. :param release_conn: If False, then the urlopen call will not release the connection back into the pool once a response is received (but will release if you read the entire contents of the response such as when `preload_content=True`). This is useful if you're not preloading the response's content immediately. You will need to call ``r.release_conn()`` on the response ``r`` to return the connection back into the pool. If None, it takes the value of ``preload_content`` which defaults to ``True``. :param bool chunked: If True, urllib3 will send the body using chunked transfer encoding. Otherwise, urllib3 will send the body using the standard content-length form. Defaults to False. :param int body_pos: Position to seek to in file-like body in the event of a retry or redirect. Typically this won't need to be set because urllib3 will auto-populate the value when needed. """ parsed_url = parse_url(url) destination_scheme = parsed_url.scheme if headers is None: headers = self.headers if not isinstance(retries, Retry): retries = Retry.from_int(retries, redirect=redirect, default=self.retries) if release_conn is None: release_conn = preload_content # Check host if assert_same_host and not self.is_same_host(url): raise HostChangedError(self, url, retries) # Ensure that the URL we're connecting to is properly encoded if url.startswith("/"): url = to_str(_encode_target(url)) else: url = to_str(parsed_url.url) conn = None # Track whether `conn` needs to be released before # returning/raising/recursing. 
Update this variable if necessary, and # leave `release_conn` constant throughout the function. That way, if # the function recurses, the original value of `release_conn` will be # passed down into the recursive call, and its value will be respected. # # See issue #651 [1] for details. # # [1] release_this_conn = release_conn http_tunnel_required = connection_requires_http_tunnel( self.proxy, self.proxy_config, destination_scheme ) # Merge the proxy headers. Only done when not using HTTP CONNECT. We # have to copy the headers dict so we can safely change it without those # changes being reflected in anyone else's copy. if not http_tunnel_required: headers = headers.copy() # type: ignore[attr-defined] headers.update(self.proxy_headers) # type: ignore[union-attr] # Must keep the exception bound to a separate variable or else Python 3 # complains about UnboundLocalError. err = None # Keep track of whether we cleanly exited the except block. This # ensures we do proper cleanup in finally. clean_exit = False # Rewind body position, if needed. Record current position # for future rewinds in the event of a redirect/retry. body_pos = set_file_position(body, body_pos) try: # Request a connection from the queue. timeout_obj = self._get_timeout(timeout) conn = self._get_conn(timeout=pool_timeout) conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] # Is this a closed/new connection that requires CONNECT tunnelling? if self.proxy is not None and http_tunnel_required and conn.is_closed: try: self._prepare_proxy(conn) except (BaseSSLError, OSError, SocketTimeout) as e: self._raise_timeout( err=e, url=self.proxy.url, timeout_value=conn.timeout ) raise # If we're going to release the connection in ``finally:``, then # the response doesn't need to know about the connection. Otherwise # it will also try to release it and we'll have a double-release # mess. response_conn = conn if not release_conn else None # Make the request on the HTTPConnection object > response = self._make_request( conn, method, url, timeout=timeout_obj, body=body, headers=headers, chunked=chunked, retries=retries, response_conn=response_conn, preload_content=preload_content, decode_content=decode_content, **response_kw, ) ../.tox/tests121/lib/python3.11/site-packages/urllib3/connectionpool.py:789: _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ ../.tox/tests121/lib/python3.11/site-packages/urllib3/connectionpool.py:495: in _make_request conn.request( ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:441: in request self.endheaders() /opt/pyenv/versions/3.11.7/lib/python3.11/http/client.py:1289: in endheaders self._send_output(message_body, encode_chunked=encode_chunked) /opt/pyenv/versions/3.11.7/lib/python3.11/http/client.py:1048: in _send_output self.send(msg) /opt/pyenv/versions/3.11.7/lib/python3.11/http/client.py:986: in send self.connect() ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:279: in connect self.sock = self._new_conn() _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ self = def _new_conn(self) -> socket.socket: """Establish a socket connection and set nodelay settings on it. :return: New socket connection. 
""" try: sock = connection.create_connection( (self._dns_host, self.port), self.timeout, source_address=self.source_address, socket_options=self.socket_options, ) except socket.gaierror as e: raise NameResolutionError(self.host, self, e) from e except SocketTimeout as e: raise ConnectTimeoutError( self, f"Connection to {self.host} timed out. (connect timeout={self.timeout})", ) from e except OSError as e: > raise NewConnectionError( self, f"Failed to establish a new connection: {e}" ) from e E urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused ../.tox/tests121/lib/python3.11/site-packages/urllib3/connection.py:214: NewConnectionError The above exception was the direct cause of the following exception: self = request = , stream = False timeout = Timeout(connect=10, read=10, total=None), verify = True, cert = None proxies = OrderedDict() def send( self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None ): """Sends PreparedRequest object. Returns Response object. :param request: The :class:`PreparedRequest ` being sent. :param stream: (optional) Whether to stream the request content. :param timeout: (optional) How long to wait for the server to send data before giving up, as a float, or a :ref:`(connect timeout, read timeout) ` tuple. :type timeout: float or tuple or urllib3 Timeout object :param verify: (optional) Either a boolean, in which case it controls whether we verify the server's TLS certificate, or a string, in which case it must be a path to a CA bundle to use :param cert: (optional) Any user-provided SSL certificate to be trusted. :param proxies: (optional) The proxies dictionary to apply to the request. :rtype: requests.Response """ try: conn = self.get_connection_with_tls_context( request, verify, proxies=proxies, cert=cert ) except LocationValueError as e: raise InvalidURL(e, request=request) self.cert_verify(conn, request.url, verify, cert) url = self.request_url(request, proxies) self.add_headers( request, stream=stream, timeout=timeout, verify=verify, cert=cert, proxies=proxies, ) chunked = not (request.body is None or "Content-Length" in request.headers) if isinstance(timeout, tuple): try: connect, read = timeout timeout = TimeoutSauce(connect=connect, read=read) except ValueError: raise ValueError( f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " f"or a single float to set both timeouts to the same value." 
) elif isinstance(timeout, TimeoutSauce): pass else: timeout = TimeoutSauce(connect=timeout, read=timeout) try: > resp = conn.urlopen( method=request.method, url=url, body=request.body, headers=request.headers, redirect=False, assert_same_host=False, preload_content=False, decode_content=False, retries=self.max_retries, timeout=timeout, chunked=chunked, ) ../.tox/tests121/lib/python3.11/site-packages/requests/adapters.py:667: _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ ../.tox/tests121/lib/python3.11/site-packages/urllib3/connectionpool.py:843: in urlopen retries = retries.increment( _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ self = Retry(total=0, connect=None, read=False, redirect=None, status=None) method = 'GET' url = '/rests/data/transportpce-portmapping:network/nodes=ROADMA01/node-info' response = None error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused') _pool = _stacktrace = def increment( self, method: str | None = None, url: str | None = None, response: BaseHTTPResponse | None = None, error: Exception | None = None, _pool: ConnectionPool | None = None, _stacktrace: TracebackType | None = None, ) -> Self: """Return a new Retry object with incremented retry counters. :param response: A response object, or None, if the server did not return a response. :type response: :class:`~urllib3.response.BaseHTTPResponse` :param Exception error: An error encountered during the request, or None if the response was received successfully. :return: A new ``Retry`` object. """ if self.total is False and error: # Disabled, indicate to re-raise the error. raise reraise(type(error), error, _stacktrace) total = self.total if total is not None: total -= 1 connect = self.connect read = self.read redirect = self.redirect status_count = self.status other = self.other cause = "unknown" status = None redirect_location = None if error and self._is_connection_error(error): # Connect retry? if connect is False: raise reraise(type(error), error, _stacktrace) elif connect is not None: connect -= 1 elif error and self._is_read_error(error): # Read retry? if read is False or method is None or not self._is_method_retryable(method): raise reraise(type(error), error, _stacktrace) elif read is not None: read -= 1 elif error: # Other retry? if other is not None: other -= 1 elif response and response.get_redirect_location(): # Redirect retry? 
if redirect is not None: redirect -= 1 cause = "too many redirects" response_redirect_location = response.get_redirect_location() if response_redirect_location: redirect_location = response_redirect_location status = response.status else: # Incrementing because of a server error like a 500 in # status_forcelist and the given method is in the allowed_methods cause = ResponseError.GENERIC_ERROR if response and response.status: if status_count is not None: status_count -= 1 cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status) status = response.status history = self.history + ( RequestHistory(method, url, error, status, redirect_location), ) new_retry = self.new( total=total, connect=connect, read=read, redirect=redirect, status=status_count, other=other, history=history, ) if new_retry.is_exhausted(): reason = error or ResponseError(cause) > raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type] E urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=8182): Max retries exceeded with url: /rests/data/transportpce-portmapping:network/nodes=ROADMA01/node-info (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) ../.tox/tests121/lib/python3.11/site-packages/urllib3/util/retry.py:519: MaxRetryError During handling of the above exception, another exception occurred: self = def test_21_rdm_device_not_connected(self): > response = test_utils.get_portmapping_node_attr("ROADMA01", "node-info", None) transportpce_tests/1.2.1/test01_portmapping.py:223: _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ transportpce_tests/common/test_utils.py:470: in get_portmapping_node_attr response = get_request(target_url) transportpce_tests/common/test_utils.py:116: in get_request return requests.request( ../.tox/tests121/lib/python3.11/site-packages/requests/api.py:59: in request return session.request(method=method, url=url, **kwargs) ../.tox/tests121/lib/python3.11/site-packages/requests/sessions.py:589: in request resp = self.send(prep, **send_kwargs) ../.tox/tests121/lib/python3.11/site-packages/requests/sessions.py:703: in send r = adapter.send(request, **kwargs) _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ self = request = , stream = False timeout = Timeout(connect=10, read=10, total=None), verify = True, cert = None proxies = OrderedDict() def send( self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None ): """Sends PreparedRequest object. Returns Response object. :param request: The :class:`PreparedRequest ` being sent. :param stream: (optional) Whether to stream the request content. :param timeout: (optional) How long to wait for the server to send data before giving up, as a float, or a :ref:`(connect timeout, read timeout) ` tuple. :type timeout: float or tuple or urllib3 Timeout object :param verify: (optional) Either a boolean, in which case it controls whether we verify the server's TLS certificate, or a string, in which case it must be a path to a CA bundle to use :param cert: (optional) Any user-provided SSL certificate to be trusted. :param proxies: (optional) The proxies dictionary to apply to the request. 
:rtype: requests.Response """ try: conn = self.get_connection_with_tls_context( request, verify, proxies=proxies, cert=cert ) except LocationValueError as e: raise InvalidURL(e, request=request) self.cert_verify(conn, request.url, verify, cert) url = self.request_url(request, proxies) self.add_headers( request, stream=stream, timeout=timeout, verify=verify, cert=cert, proxies=proxies, ) chunked = not (request.body is None or "Content-Length" in request.headers) if isinstance(timeout, tuple): try: connect, read = timeout timeout = TimeoutSauce(connect=connect, read=read) except ValueError: raise ValueError( f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " f"or a single float to set both timeouts to the same value." ) elif isinstance(timeout, TimeoutSauce): pass else: timeout = TimeoutSauce(connect=timeout, read=timeout) try: resp = conn.urlopen( method=request.method, url=url, body=request.body, headers=request.headers, redirect=False, assert_same_host=False, preload_content=False, decode_content=False, retries=self.max_retries, timeout=timeout, chunked=chunked, ) except (ProtocolError, OSError) as err: raise ConnectionError(err, request=request) except MaxRetryError as e: if isinstance(e.reason, ConnectTimeoutError): # TODO: Remove this in 3.0.0: see #2811 if not isinstance(e.reason, NewConnectionError): raise ConnectTimeout(e, request=request) if isinstance(e.reason, ResponseError): raise RetryError(e, request=request) if isinstance(e.reason, _ProxyError): raise ProxyError(e, request=request) if isinstance(e.reason, _SSLError): # This branch is for urllib3 v1.22 and later. raise SSLError(e, request=request) > raise ConnectionError(e, request=request) E requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=8182): Max retries exceeded with url: /rests/data/transportpce-portmapping:network/nodes=ROADMA01/node-info (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) ../.tox/tests121/lib/python3.11/site-packages/requests/adapters.py:700: ConnectionError ----------------------------- Captured stdout call ----------------------------- execution of test_21_rdm_device_not_connected --------------------------- Captured stdout teardown --------------------------- all processes killed =========================== short test summary info ============================ FAILED transportpce_tests/1.2.1/test01_portmapping.py::TransportPCEPortMappingTesting::test_02_rdm_device_connected FAILED transportpce_tests/1.2.1/test01_portmapping.py::TransportPCEPortMappingTesting::test_03_rdm_portmapping_info FAILED transportpce_tests/1.2.1/test01_portmapping.py::TransportPCEPortMappingTesting::test_04_rdm_portmapping_DEG1_TTP_TXRX FAILED transportpce_tests/1.2.1/test01_portmapping.py::TransportPCEPortMappingTesting::test_05_rdm_portmapping_SRG1_PP7_TXRX FAILED transportpce_tests/1.2.1/test01_portmapping.py::TransportPCEPortMappingTesting::test_06_rdm_portmapping_SRG3_PP1_TXRX FAILED transportpce_tests/1.2.1/test01_portmapping.py::TransportPCEPortMappingTesting::test_11_xpdr_portmapping_NETWORK2 FAILED transportpce_tests/1.2.1/test01_portmapping.py::TransportPCEPortMappingTesting::test_12_xpdr_portmapping_CLIENT1 FAILED transportpce_tests/1.2.1/test01_portmapping.py::TransportPCEPortMappingTesting::test_13_xpdr_portmapping_CLIENT2 FAILED transportpce_tests/1.2.1/test01_portmapping.py::TransportPCEPortMappingTesting::test_14_xpdr_portmapping_CLIENT3 FAILED 
transportpce_tests/1.2.1/test01_portmapping.py::TransportPCEPortMappingTesting::test_15_xpdr_portmapping_CLIENT4 FAILED transportpce_tests/1.2.1/test01_portmapping.py::TransportPCEPortMappingTesting::test_16_xpdr_device_disconnection FAILED transportpce_tests/1.2.1/test01_portmapping.py::TransportPCEPortMappingTesting::test_17_xpdr_device_disconnected FAILED transportpce_tests/1.2.1/test01_portmapping.py::TransportPCEPortMappingTesting::test_18_xpdr_device_not_connected FAILED transportpce_tests/1.2.1/test01_portmapping.py::TransportPCEPortMappingTesting::test_19_rdm_device_disconnection FAILED transportpce_tests/1.2.1/test01_portmapping.py::TransportPCEPortMappingTesting::test_20_rdm_device_disconnected FAILED transportpce_tests/1.2.1/test01_portmapping.py::TransportPCEPortMappingTesting::test_21_rdm_device_not_connected 16 failed, 5 passed in 430.97s (0:07:10) tests121: exit 1 (431.22 seconds) /w/workspace/transportpce-tox-verify-transportpce-master/tests> ./launch_tests.sh 1.2.1 pid=35546 ............ [100%] 12 passed in 43.41s pytest -q transportpce_tests/7.1/test02_otn_renderer.py .............................................................. [100%] 62 passed in 154.68s (0:02:34) pytest -q transportpce_tests/7.1/test03_renderer_or_modes.py ................................................ [100%] 48 passed in 136.04s (0:02:16) pytest -q transportpce_tests/7.1/test04_renderer_regen_mode.py ...................... [100%] 22 passed in 73.30s (0:01:13) tests121: FAIL ✖ in 7 minutes 17.19 seconds tests71: OK ✔ in 6 minutes 54.02 seconds tests221: install_deps> python -I -m pip install 'setuptools>=7.0' -r /w/workspace/transportpce-tox-verify-transportpce-master/tests/requirements.txt -r /w/workspace/transportpce-tox-verify-transportpce-master/tests/test-requirements.txt tests221: freeze> python -m pip freeze --all tests221: bcrypt==4.2.0,certifi==2024.8.30,cffi==1.17.1,charset-normalizer==3.3.2,cryptography==43.0.1,dict2xml==1.7.6,idna==3.10,iniconfig==2.0.0,lxml==5.3.0,netconf-client==3.1.1,packaging==24.1,paramiko==3.5.0,pip==24.2,pluggy==1.5.0,psutil==6.0.0,pycparser==2.22,PyNaCl==1.5.0,pytest==8.3.3,requests==2.32.3,setuptools==75.1.0,urllib3==2.2.3,wheel==0.44.0 tests221: commands[0] /w/workspace/transportpce-tox-verify-transportpce-master/tests> ./launch_tests.sh 2.2.1 using environment variables from ./karaf221.env pytest -q transportpce_tests/2.2.1/test01_portmapping.py ................................... [100%] 35 passed in 75.72s (0:01:15) pytest -q transportpce_tests/2.2.1/test02_topo_portmapping.py ...... [100%] 6 passed in 45.27s pytest -q transportpce_tests/2.2.1/test03_topology.py ............................................ [100%] 44 passed in 136.86s (0:02:16) pytest -q transportpce_tests/2.2.1/test04_otn_topology.py ............ [100%] 12 passed in 58.29s pytest -q transportpce_tests/2.2.1/test05_flex_grid.py ................ [100%] 16 passed in 113.83s (0:01:53) pytest -q transportpce_tests/2.2.1/test06_renderer_service_path_nominal.py ............................... [100%] 31 passed in 34.66s pytest -q transportpce_tests/2.2.1/test07_otn_renderer.py .......................... [100%] 26 passed in 89.87s (0:01:29) pytest -q transportpce_tests/2.2.1/test08_otn_sh_renderer.py ...................... [100%] 22 passed in 99.39s (0:01:39) pytest -q transportpce_tests/2.2.1/test09_olm.py ........................................ 
[100%]
40 passed in 183.57s (0:03:03)
pytest -q transportpce_tests/2.2.1/test11_otn_end2end.py
........................................................................ [ 74%]
......................... [100%]
97 passed in 492.99s (0:08:12)
pytest -q transportpce_tests/2.2.1/test12_end2end.py
...................................................... [100%]
54 passed in 626.09s (0:10:26)
pytest -q transportpce_tests/2.2.1/test14_otn_switch_end2end.py
........................................................................ [ 71%]
............................. [100%]
101 passed in 489.93s (0:08:09)
pytest -q transportpce_tests/2.2.1/test15_otn_end2end_with_intermediate_switch.py
........................................................................ [ 67%]
................................... [100%]
107 passed in 600.28s (0:10:00)
tests221: OK ✔ in 50 minutes 56.25 seconds
tests_hybrid: install_deps> python -I -m pip install 'setuptools>=7.0' -r /w/workspace/transportpce-tox-verify-transportpce-master/tests/requirements.txt -r /w/workspace/transportpce-tox-verify-transportpce-master/tests/test-requirements.txt
tests_hybrid: freeze> python -m pip freeze --all
tests_hybrid: bcrypt==4.2.0,certifi==2024.8.30,cffi==1.17.1,charset-normalizer==3.3.2,cryptography==43.0.1,dict2xml==1.7.6,idna==3.10,iniconfig==2.0.0,lxml==5.3.0,netconf-client==3.1.1,packaging==24.1,paramiko==3.5.0,pip==24.2,pluggy==1.5.0,psutil==6.0.0,pycparser==2.22,PyNaCl==1.5.0,pytest==8.3.3,requests==2.32.3,setuptools==75.1.0,urllib3==2.2.3,wheel==0.44.0
tests_hybrid: commands[0] /w/workspace/transportpce-tox-verify-transportpce-master/tests> ./launch_tests.sh hybrid
using environment variables from ./karaf121.env
pytest -q transportpce_tests/hybrid/test01_device_change_notifications.py
................................................... [100%]
51 passed in 150.08s (0:02:30)
pytest -q transportpce_tests/hybrid/test02_B100G_end2end.py
........................................................................ [ 66%]
..................................... [100%]
109 passed in 426.42s (0:07:06)
pytest -q transportpce_tests/hybrid/test03_autonomous_reroute.py
.....................................................
[100%]
53 passed in 617.38s (0:10:17)
tests_hybrid: OK ✔ in 20 minutes 0.71 seconds
buildlighty: install_deps> python -I -m pip install 'setuptools>=7.0' -r /w/workspace/transportpce-tox-verify-transportpce-master/tests/requirements.txt -r /w/workspace/transportpce-tox-verify-transportpce-master/tests/test-requirements.txt
buildlighty: freeze> python -m pip freeze --all
buildlighty: bcrypt==4.2.0,certifi==2024.8.30,cffi==1.17.1,charset-normalizer==3.3.2,cryptography==43.0.1,dict2xml==1.7.6,idna==3.10,iniconfig==2.0.0,lxml==5.3.0,netconf-client==3.1.1,packaging==24.1,paramiko==3.5.0,pip==24.2,pluggy==1.5.0,psutil==6.0.0,pycparser==2.22,PyNaCl==1.5.0,pytest==8.3.3,requests==2.32.3,setuptools==75.1.0,urllib3==2.2.3,wheel==0.44.0
buildlighty: commands[0] /w/workspace/transportpce-tox-verify-transportpce-master/lighty> ./build.sh
NOTE: Picked up JDK_JAVA_OPTIONS: --add-opens=java.base/java.lang=ALL-UNNAMED --add-opens=java.base/java.nio=ALL-UNNAMED
[ERROR] COMPILATION ERROR :
[ERROR] /w/workspace/transportpce-tox-verify-transportpce-master/lighty/src/main/java/io/lighty/controllers/tpce/utils/TPCEUtils.java:[17,42] cannot find symbol
  symbol:   class YangModuleInfo
  location: package org.opendaylight.yangtools.binding
[ERROR] /w/workspace/transportpce-tox-verify-transportpce-master/lighty/src/main/java/io/lighty/controllers/tpce/utils/TPCEUtils.java:[21,30] cannot find symbol
  symbol:   class YangModuleInfo
  location: class io.lighty.controllers.tpce.utils.TPCEUtils
[ERROR] /w/workspace/transportpce-tox-verify-transportpce-master/lighty/src/main/java/io/lighty/controllers/tpce/utils/TPCEUtils.java:[343,30] cannot find symbol
  symbol:   class YangModuleInfo
  location: class io.lighty.controllers.tpce.utils.TPCEUtils
[ERROR] /w/workspace/transportpce-tox-verify-transportpce-master/lighty/src/main/java/io/lighty/controllers/tpce/utils/TPCEUtils.java:[350,23] cannot find symbol
  symbol:   class YangModuleInfo
  location: class io.lighty.controllers.tpce.utils.TPCEUtils
[ERROR] Failed to execute goal org.apache.maven.plugins:maven-compiler-plugin:3.13.0:compile (default-compile) on project tpce: Compilation failure: Compilation failure:
[ERROR] /w/workspace/transportpce-tox-verify-transportpce-master/lighty/src/main/java/io/lighty/controllers/tpce/utils/TPCEUtils.java:[17,42] cannot find symbol
[ERROR]   symbol:   class YangModuleInfo
[ERROR]   location: package org.opendaylight.yangtools.binding
[ERROR] /w/workspace/transportpce-tox-verify-transportpce-master/lighty/src/main/java/io/lighty/controllers/tpce/utils/TPCEUtils.java:[21,30] cannot find symbol
[ERROR]   symbol:   class YangModuleInfo
[ERROR]   location: class io.lighty.controllers.tpce.utils.TPCEUtils
[ERROR] /w/workspace/transportpce-tox-verify-transportpce-master/lighty/src/main/java/io/lighty/controllers/tpce/utils/TPCEUtils.java:[343,30] cannot find symbol
[ERROR]   symbol:   class YangModuleInfo
[ERROR]   location: class io.lighty.controllers.tpce.utils.TPCEUtils
[ERROR] /w/workspace/transportpce-tox-verify-transportpce-master/lighty/src/main/java/io/lighty/controllers/tpce/utils/TPCEUtils.java:[350,23] cannot find symbol
[ERROR]   symbol:   class YangModuleInfo
[ERROR]   location: class io.lighty.controllers.tpce.utils.TPCEUtils
[ERROR] -> [Help 1]
[ERROR]
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR]
[ERROR] For more information about the errors and possible solutions, please read the following articles:
[ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/MojoFailureException
unzip: cannot find or open target/tpce-bin.zip, target/tpce-bin.zip.zip or target/tpce-bin.zip.ZIP.
buildlighty: exit 9 (16.42 seconds) /w/workspace/transportpce-tox-verify-transportpce-master/lighty> ./build.sh pid=60295
buildlighty: command failed but is marked ignore outcome so handling it as success
buildcontroller: OK (132.36=setup[10.53]+cmd[121.83] seconds)
testsPCE: OK (374.77=setup[126.99]+cmd[247.79] seconds)
sims: OK (13.25=setup[9.88]+cmd[3.37] seconds)
build_karaf_tests121: OK (65.94=setup[9.85]+cmd[56.09] seconds)
tests121: FAIL code 1 (437.18=setup[5.97]+cmd[431.22] seconds)
build_karaf_tests221: OK (64.21=setup[9.60]+cmd[54.62] seconds)
tests_tapi: FAIL code 1 (732.60=setup[14.01]+cmd[718.59] seconds)
tests221: OK (3056.25=setup[6.51]+cmd[3049.74] seconds)
build_karaf_tests71: OK (86.33=setup[22.63]+cmd[63.70] seconds)
tests71: OK (414.02=setup[5.68]+cmd[408.35] seconds)
build_karaf_tests_hybrid: OK (69.46=setup[15.24]+cmd[54.21] seconds)
tests_hybrid: OK (1200.71=setup[6.16]+cmd[1194.56] seconds)
buildlighty: OK (22.07=setup[5.65]+cmd[16.42] seconds)
docs: OK (47.09=setup[42.71]+cmd[4.38] seconds)
docs-linkcheck: OK (50.10=setup[46.27]+cmd[3.84] seconds)
checkbashisms: OK (3.31=setup[2.01]+cmd[0.23,0.15,0.92] seconds)
pre-commit: OK (53.02=setup[4.35]+cmd[0.01,0.01,40.50,8.15] seconds)
pylint: OK (26.78=setup[5.11]+cmd[21.67] seconds)
evaluation failed :( (5624.04 seconds)
+ tox_status=255
+ echo '---> Completed tox runs'
---> Completed tox runs
+ for i in .tox/*/log
++ echo .tox/build_karaf_tests121/log
++ awk -F/ '{print $2}'
+ tox_env=build_karaf_tests121
+ cp -r .tox/build_karaf_tests121/log /w/workspace/transportpce-tox-verify-transportpce-master/archives/tox/build_karaf_tests121
+ for i in .tox/*/log
++ echo .tox/build_karaf_tests221/log
++ awk -F/ '{print $2}'
+ tox_env=build_karaf_tests221
+ cp -r .tox/build_karaf_tests221/log /w/workspace/transportpce-tox-verify-transportpce-master/archives/tox/build_karaf_tests221
+ for i in .tox/*/log
++ echo .tox/build_karaf_tests71/log
++ awk -F/ '{print $2}'
+ tox_env=build_karaf_tests71
+ cp -r .tox/build_karaf_tests71/log /w/workspace/transportpce-tox-verify-transportpce-master/archives/tox/build_karaf_tests71
+ for i in .tox/*/log
++ echo .tox/build_karaf_tests_hybrid/log
++ awk -F/ '{print $2}'
+ tox_env=build_karaf_tests_hybrid
+ cp -r .tox/build_karaf_tests_hybrid/log /w/workspace/transportpce-tox-verify-transportpce-master/archives/tox/build_karaf_tests_hybrid
+ for i in .tox/*/log
++ echo .tox/buildcontroller/log
++ awk -F/ '{print $2}'
+ tox_env=buildcontroller
+ cp -r .tox/buildcontroller/log /w/workspace/transportpce-tox-verify-transportpce-master/archives/tox/buildcontroller
+ for i in .tox/*/log
++ awk -F/ '{print $2}'
++ echo .tox/buildlighty/log
+ tox_env=buildlighty
+ cp -r .tox/buildlighty/log /w/workspace/transportpce-tox-verify-transportpce-master/archives/tox/buildlighty
+ for i in .tox/*/log
++ echo .tox/checkbashisms/log
++ awk -F/ '{print $2}'
+ tox_env=checkbashisms
+ cp -r .tox/checkbashisms/log /w/workspace/transportpce-tox-verify-transportpce-master/archives/tox/checkbashisms
+ for i in .tox/*/log
++ echo .tox/docs-linkcheck/log
++ awk -F/ '{print $2}'
+ tox_env=docs-linkcheck
+ cp -r .tox/docs-linkcheck/log /w/workspace/transportpce-tox-verify-transportpce-master/archives/tox/docs-linkcheck
+ for i in .tox/*/log
++ echo .tox/docs/log
++ awk -F/ '{print $2}'
+ tox_env=docs
+ cp -r .tox/docs/log /w/workspace/transportpce-tox-verify-transportpce-master/archives/tox/docs
+ for i in .tox/*/log
++ echo .tox/pre-commit/log
++ awk -F/ '{print $2}'
+ tox_env=pre-commit
+ cp -r .tox/pre-commit/log /w/workspace/transportpce-tox-verify-transportpce-master/archives/tox/pre-commit
+ for i in .tox/*/log
++ echo .tox/pylint/log
++ awk -F/ '{print $2}'
+ tox_env=pylint
+ cp -r .tox/pylint/log /w/workspace/transportpce-tox-verify-transportpce-master/archives/tox/pylint
+ for i in .tox/*/log
++ echo .tox/sims/log
++ awk -F/ '{print $2}'
+ tox_env=sims
+ cp -r .tox/sims/log /w/workspace/transportpce-tox-verify-transportpce-master/archives/tox/sims
+ for i in .tox/*/log
++ echo .tox/tests121/log
++ awk -F/ '{print $2}'
+ tox_env=tests121
+ cp -r .tox/tests121/log /w/workspace/transportpce-tox-verify-transportpce-master/archives/tox/tests121
+ for i in .tox/*/log
++ echo .tox/tests221/log
++ awk -F/ '{print $2}'
+ tox_env=tests221
+ cp -r .tox/tests221/log /w/workspace/transportpce-tox-verify-transportpce-master/archives/tox/tests221
+ for i in .tox/*/log
++ echo .tox/tests71/log
++ awk -F/ '{print $2}'
+ tox_env=tests71
+ cp -r .tox/tests71/log /w/workspace/transportpce-tox-verify-transportpce-master/archives/tox/tests71
+ for i in .tox/*/log
++ awk -F/ '{print $2}'
++ echo .tox/testsPCE/log
+ tox_env=testsPCE
+ cp -r .tox/testsPCE/log /w/workspace/transportpce-tox-verify-transportpce-master/archives/tox/testsPCE
+ for i in .tox/*/log
++ echo .tox/tests_hybrid/log
++ awk -F/ '{print $2}'
+ tox_env=tests_hybrid
+ cp -r .tox/tests_hybrid/log /w/workspace/transportpce-tox-verify-transportpce-master/archives/tox/tests_hybrid
+ for i in .tox/*/log
++ echo .tox/tests_tapi/log
++ awk -F/ '{print $2}'
+ tox_env=tests_tapi
+ cp -r .tox/tests_tapi/log /w/workspace/transportpce-tox-verify-transportpce-master/archives/tox/tests_tapi
+ DOC_DIR=docs/_build/html
+ [[ -d docs/_build/html ]]
+ echo '---> Archiving generated docs'
---> Archiving generated docs
+ mv docs/_build/html /w/workspace/transportpce-tox-verify-transportpce-master/archives/docs
+ echo '---> tox-run.sh ends'
---> tox-run.sh ends
+ test 255 -eq 0
+ exit 255
++ '[' 1 = 1 ']'
++ '[' -x /usr/bin/clear_console ']'
++ /usr/bin/clear_console -q
Build step 'Execute shell' marked build as failure
$ ssh-agent -k
unset SSH_AUTH_SOCK;
unset SSH_AGENT_PID;
echo Agent pid 12175 killed;
[ssh-agent] Stopped.
[PostBuildScript] - [INFO] Executing post build scripts.
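The tests121 failures above all reduce to the same symptom: requests raised ConnectionError because nothing was listening on localhost:8182 when test01_portmapping queried RESTCONF. The sketch below is illustrative only and is not part of the test suite; the URL is taken from the error message, while the credentials, retry count and delay are assumptions. It also uses the (connect, read) timeout tuple that HTTPAdapter.send() unpacks into TimeoutSauce in the adapters.py excerpt above.

# Hypothetical helper, not part of the TransportPCE test suite: it only
# illustrates the requests code path shown in the adapters.py excerpt, where a
# refused TCP connection to localhost:8182 surfaces as
# requests.exceptions.ConnectionError once max_retries is exhausted.
import time

import requests

# Assumption: RESTCONF listens on the port reported in the error message;
# the credentials are placeholders and may differ in the real environment.
URL = ("http://localhost:8182/rests/data/"
       "transportpce-portmapping:network/nodes=ROADMA01/node-info")
AUTH = ("admin", "admin")


def wait_for_restconf(url, attempts=10, delay=3.0):
    """Return True once the endpoint answers, False if it never comes up."""
    for _ in range(attempts):
        try:
            # A (connect, read) tuple is exactly what HTTPAdapter.send()
            # turns into TimeoutSauce(connect=..., read=...) above.
            resp = requests.get(url, auth=AUTH, timeout=(2.0, 10.0))
            return resp.status_code < 500
        except requests.exceptions.ConnectionError:
            time.sleep(delay)   # same exception the failed tests hit
    return False


if __name__ == "__main__":
    print("RESTCONF reachable:", wait_for_restconf(URL))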
[transportpce-tox-verify-transportpce-master] $ /bin/bash /tmp/jenkins15556480802321636024.sh
---> sysstat.sh
[transportpce-tox-verify-transportpce-master] $ /bin/bash /tmp/jenkins18175788406101670306.sh
---> package-listing.sh
++ facter osfamily
++ tr '[:upper:]' '[:lower:]'
+ OS_FAMILY=debian
+ workspace=/w/workspace/transportpce-tox-verify-transportpce-master
+ START_PACKAGES=/tmp/packages_start.txt
+ END_PACKAGES=/tmp/packages_end.txt
+ DIFF_PACKAGES=/tmp/packages_diff.txt
+ PACKAGES=/tmp/packages_start.txt
+ '[' /w/workspace/transportpce-tox-verify-transportpce-master ']'
+ PACKAGES=/tmp/packages_end.txt
+ case "${OS_FAMILY}" in
+ dpkg -l
+ grep '^ii'
+ '[' -f /tmp/packages_start.txt ']'
+ '[' -f /tmp/packages_end.txt ']'
+ diff /tmp/packages_start.txt /tmp/packages_end.txt
+ '[' /w/workspace/transportpce-tox-verify-transportpce-master ']'
+ mkdir -p /w/workspace/transportpce-tox-verify-transportpce-master/archives/
+ cp -f /tmp/packages_diff.txt /tmp/packages_end.txt /tmp/packages_start.txt /w/workspace/transportpce-tox-verify-transportpce-master/archives/
[transportpce-tox-verify-transportpce-master] $ /bin/bash /tmp/jenkins8349848452000837309.sh
---> capture-instance-metadata.sh
Setup pyenv:
  system
  3.8.13
  3.9.13
  3.10.13
* 3.11.7 (set by /w/workspace/transportpce-tox-verify-transportpce-master/.python-version)
lf-activate-venv(): INFO: Reuse venv:/tmp/venv-9VJz from file:/tmp/.os_lf_venv
lf-activate-venv(): INFO: Installing: lftools
lf-activate-venv(): INFO: Adding /tmp/venv-9VJz/bin to PATH
INFO: Running in OpenStack, capturing instance metadata
[transportpce-tox-verify-transportpce-master] $ /bin/bash /tmp/jenkins5724714492598377822.sh
provisioning config files...
Could not find credentials [logs] for transportpce-tox-verify-transportpce-master #2024
copy managed file [jenkins-log-archives-settings] to file:/w/workspace/transportpce-tox-verify-transportpce-master@tmp/config9180197096935895971tmp
Regular expression run condition: Expression=[^.*logs-s3.*], Label=[odl-logs-s3-cloudfront-index]
Run condition [Regular expression match] enabling perform for step [Provide Configuration files]
provisioning config files...
copy managed file [jenkins-s3-log-ship] to file:/home/jenkins/.aws/credentials
[EnvInject] - Injecting environment variables from a build step.
[EnvInject] - Injecting as environment variables the properties content
SERVER_ID=logs
[EnvInject] - Variables injected successfully.
[transportpce-tox-verify-transportpce-master] $ /bin/bash /tmp/jenkins4711473621463534540.sh
---> create-netrc.sh
WARN: Log server credential not found.
[transportpce-tox-verify-transportpce-master] $ /bin/bash /tmp/jenkins13916053668457419015.sh
---> python-tools-install.sh
Setup pyenv:
  system
  3.8.13
  3.9.13
  3.10.13
* 3.11.7 (set by /w/workspace/transportpce-tox-verify-transportpce-master/.python-version)
lf-activate-venv(): INFO: Reuse venv:/tmp/venv-9VJz from file:/tmp/.os_lf_venv
lf-activate-venv(): INFO: Installing: lftools
lf-activate-venv(): INFO: Adding /tmp/venv-9VJz/bin to PATH
[transportpce-tox-verify-transportpce-master] $ /bin/bash /tmp/jenkins17304583257840013102.sh
---> sudo-logs.sh
Archiving 'sudo' log..
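For context, the package-listing.sh trace above amounts to snapshotting dpkg -l (filtered to 'ii' lines) before and after the build and diffing the two snapshots into /tmp/packages_diff.txt. A rough, illustrative Python equivalent is sketched below; the /tmp paths come from the trace, everything else is an assumption rather than the actual script.

# Rough Python equivalent of the package-listing.sh steps traced above; file
# names are taken from the trace, the logic is illustrative only.
import subprocess
from pathlib import Path

START = Path("/tmp/packages_start.txt")
END = Path("/tmp/packages_end.txt")
DIFF = Path("/tmp/packages_diff.txt")


def installed_packages():
    """Installed package lines, i.e. what 'dpkg -l | grep ^ii' would return."""
    out = subprocess.run(["dpkg", "-l"], capture_output=True, text=True, check=True)
    return {line for line in out.stdout.splitlines() if line.startswith("ii")}


def snapshot(path):
    """Write the current package list to the given snapshot file."""
    path.write_text("\n".join(sorted(installed_packages())) + "\n")


def write_diff():
    """Record packages added or removed between the start and end snapshots."""
    start = set(START.read_text().splitlines())
    end = set(END.read_text().splitlines())
    lines = ["> " + pkg for pkg in sorted(end - start)]    # added during the run
    lines += ["< " + pkg for pkg in sorted(start - end)]   # removed during the run
    DIFF.write_text(("\n".join(lines) + "\n") if lines else "")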
[transportpce-tox-verify-transportpce-master] $ /bin/bash /tmp/jenkins9528334566678044068.sh
---> job-cost.sh
Setup pyenv:
  system
  3.8.13
  3.9.13
  3.10.13
* 3.11.7 (set by /w/workspace/transportpce-tox-verify-transportpce-master/.python-version)
lf-activate-venv(): INFO: Reuse venv:/tmp/venv-9VJz from file:/tmp/.os_lf_venv
lf-activate-venv(): INFO: Installing: zipp==1.1.0 python-openstackclient urllib3~=1.26.15
lf-activate-venv(): INFO: Adding /tmp/venv-9VJz/bin to PATH
INFO: No Stack...
INFO: Retrieving Pricing Info for: v3-standard-4
INFO: Archiving Costs
[transportpce-tox-verify-transportpce-master] $ /bin/bash -l /tmp/jenkins16829533375943724258.sh
---> logs-deploy.sh
Setup pyenv:
  system
  3.8.13
  3.9.13
  3.10.13
* 3.11.7 (set by /w/workspace/transportpce-tox-verify-transportpce-master/.python-version)
lf-activate-venv(): INFO: Reuse venv:/tmp/venv-9VJz from file:/tmp/.os_lf_venv
lf-activate-venv(): INFO: Installing: lftools
lf-activate-venv(): INFO: Adding /tmp/venv-9VJz/bin to PATH
WARNING: Nexus logging server not set
INFO: S3 path logs/releng/vex-yul-odl-jenkins-1/transportpce-tox-verify-transportpce-master/2024/
INFO: archiving logs to S3

---> uname -a:
Linux prd-ubuntu2004-docker-4c-16g-28080 5.4.0-190-generic #210-Ubuntu SMP Fri Jul 5 17:03:38 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux

---> lscpu:
Architecture:                       x86_64
CPU op-mode(s):                     32-bit, 64-bit
Byte Order:                         Little Endian
Address sizes:                      40 bits physical, 48 bits virtual
CPU(s):                             4
On-line CPU(s) list:                0-3
Thread(s) per core:                 1
Core(s) per socket:                 1
Socket(s):                          4
NUMA node(s):                       1
Vendor ID:                          AuthenticAMD
CPU family:                         23
Model:                              49
Model name:                         AMD EPYC-Rome Processor
Stepping:                           0
CPU MHz:                            2799.998
BogoMIPS:                           5599.99
Virtualization:                     AMD-V
Hypervisor vendor:                  KVM
Virtualization type:                full
L1d cache:                          128 KiB
L1i cache:                          128 KiB
L2 cache:                           2 MiB
L3 cache:                           64 MiB
NUMA node0 CPU(s):                  0-3
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit:        Not affected
Vulnerability L1tf:                 Not affected
Vulnerability Mds:                  Not affected
Vulnerability Meltdown:             Not affected
Vulnerability Mmio stale data:      Not affected
Vulnerability Retbleed:             Vulnerable
Vulnerability Spec store bypass:    Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1:           Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2:           Mitigation; Retpolines; IBPB conditional; IBRS_FW; STIBP disabled; RSB filling; PBRSB-eIBRS Not affected; BHI Not affected
Vulnerability Srbds:                Not affected
Vulnerability Tsx async abort:      Not affected
Flags:                              fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm rep_good nopl cpuid extd_apicid tsc_known_freq pni pclmulqdq ssse3 fma cx16 sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm cmp_legacy svm cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw topoext perfctr_core ssbd ibrs ibpb stibp vmmcall fsgsbase tsc_adjust bmi1 avx2 smep bmi2 rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 clzero xsaveerptr wbnoinvd arat npt nrip_save umip rdpid arch_capabilities

---> nproc:
4

---> df -h:
Filesystem      Size  Used Avail Use% Mounted on
udev            7.8G     0  7.8G   0% /dev
tmpfs           1.6G  1.1M  1.6G   1% /run
/dev/vda1        78G   17G   62G  21% /
tmpfs           7.9G     0  7.9G   0% /dev/shm
tmpfs           5.0M     0  5.0M   0% /run/lock
tmpfs           7.9G     0  7.9G   0% /sys/fs/cgroup
/dev/loop0       62M   62M     0 100% /snap/core20/1405
/dev/loop1       68M   68M     0 100% /snap/lxd/22753
/dev/vda15      105M  6.1M   99M   6% /boot/efi
tmpfs           1.6G     0  1.6G   0% /run/user/1001
/dev/loop3       39M   39M     0 100% /snap/snapd/21759
/dev/loop4       64M   64M     0 100% /snap/core20/2379
/dev/loop5       92M   92M     0 100% /snap/lxd/29619

---> free -m:
              total        used        free      shared  buff/cache   available
Mem:          15997         657        6934           0        8404       15000
Swap:          1023           1        1022

---> ip addr:
1: lo: mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: ens3: mtu 1458 qdisc mq state UP group default qlen 1000
    link/ether fa:16:3e:9e:91:9e brd ff:ff:ff:ff:ff:ff
    inet 10.30.171.68/23 brd 10.30.171.255 scope global dynamic ens3
       valid_lft 80581sec preferred_lft 80581sec
    inet6 fe80::f816:3eff:fe9e:919e/64 scope link
       valid_lft forever preferred_lft forever
3: docker0: mtu 1458 qdisc noqueue state DOWN group default
    link/ether 02:42:a3:dc:36:f5 brd ff:ff:ff:ff:ff:ff
    inet 10.250.0.254/24 brd 10.250.0.255 scope global docker0
       valid_lft forever preferred_lft forever

---> sar -b -r -n DEV:
Linux 5.4.0-190-generic (prd-ubuntu2004-docker-4c-16g-28080)  09/26/24  _x86_64_  (4 CPU)
08:57:36     LINUX RESTART  (4 CPU)

08:58:01        tps      rtps      wtps      dtps   bread/s   bwrtn/s   bdscd/s
[per-minute samples 08:59:01-10:34:01]
Average:      32.82      2.42     30.40      0.00    213.33   4648.37      0.00

08:58:01    kbmemfree   kbavail kbmemused  %memused kbbuffers  kbcached  kbcommit   %commit  kbactive   kbinact   kbdirty
[per-minute samples 08:59:01-10:34:01]
Average:      4085702  11715154   4228420     25.81    250098   7238029   5130784     29.44   6348054   5282547     33606

08:58:01        IFACE   rxpck/s   txpck/s    rxkB/s    txkB/s   rxcmp/s   txcmp/s  rxmcst/s   %ifutil
[per-minute samples 08:59:01-10:34:01 for lo, docker0 and ens3]
Average:           lo     19.34     19.34      8.90      8.90      0.00      0.00      0.00      0.00
Average:      docker0      0.00      0.00      0.00      0.00      0.00      0.00      0.00      0.00
Average:         ens3     19.35     14.38    227.31      2.88      0.00      0.00      0.00      0.00

---> sar -P ALL:
Linux 5.4.0-190-generic (prd-ubuntu2004-docker-4c-16g-28080)  09/26/24  _x86_64_  (4 CPU)
08:57:36     LINUX RESTART  (4 CPU)

08:58:01        CPU     %user     %nice   %system   %iowait    %steal     %idle
[per-minute samples 08:59:01-09:52:01 for CPU all, 0, 1, 2, 3]
3.82 0.00 0.28 0.08 0.08 95.73 09:52:01 2 3.79 0.00 0.32 0.00 0.07 95.82 09:52:01 3 3.98 0.00 0.42 0.00 0.12 95.48 09:53:01 all 0.40 0.00 0.18 0.01 0.08 99.33 09:53:01 0 0.45 0.00 0.17 0.00 0.08 99.30 09:53:01 1 0.40 0.00 0.13 0.05 0.07 99.35 09:53:01 2 0.25 0.00 0.10 0.00 0.07 99.58 09:53:01 3 0.50 0.00 0.30 0.00 0.10 99.10 09:53:01 CPU %user %nice %system %iowait %steal %idle 09:54:01 all 0.41 0.00 0.18 0.01 0.08 99.32 09:54:01 0 0.20 0.00 0.13 0.00 0.08 99.58 09:54:01 1 0.50 0.00 0.20 0.03 0.08 99.18 09:54:01 2 0.28 0.00 0.22 0.00 0.08 99.42 09:54:01 3 0.63 0.00 0.18 0.00 0.08 99.10 09:55:01 all 0.39 0.00 0.21 0.01 0.07 99.31 09:55:01 0 0.42 0.00 0.15 0.00 0.07 99.36 09:55:01 1 0.48 0.00 0.32 0.05 0.10 99.05 09:55:01 2 0.23 0.00 0.15 0.00 0.05 99.57 09:55:01 3 0.42 0.00 0.23 0.00 0.07 99.28 09:56:01 all 48.59 0.00 1.61 0.25 0.11 49.42 09:56:01 0 42.94 0.00 1.37 0.48 0.10 55.11 09:56:01 1 50.56 0.00 1.87 0.52 0.12 46.94 09:56:01 2 50.38 0.00 1.70 0.00 0.12 47.80 09:56:01 3 50.52 0.00 1.52 0.02 0.12 47.83 09:57:01 all 14.70 0.00 0.43 0.01 0.08 84.77 09:57:01 0 14.20 0.00 0.47 0.00 0.08 85.25 09:57:01 1 14.14 0.00 0.48 0.02 0.08 85.27 09:57:01 2 15.08 0.00 0.38 0.02 0.08 84.44 09:57:01 3 15.39 0.00 0.40 0.00 0.08 84.12 09:58:01 all 9.15 0.00 0.40 0.02 0.09 90.34 09:58:01 0 8.94 0.00 0.43 0.00 0.08 90.54 09:58:01 1 8.87 0.00 0.33 0.05 0.08 90.66 09:58:01 2 8.82 0.00 0.53 0.02 0.10 90.53 09:58:01 3 9.97 0.00 0.28 0.02 0.10 89.64 09:59:01 all 5.52 0.00 0.30 0.01 0.07 94.10 09:59:01 0 5.69 0.00 0.27 0.00 0.07 93.97 09:59:01 1 5.09 0.00 0.33 0.02 0.07 94.49 09:59:01 2 5.81 0.00 0.28 0.03 0.07 93.80 09:59:01 3 5.48 0.00 0.33 0.00 0.07 94.12 10:00:01 all 3.49 0.00 0.32 0.01 0.07 96.10 10:00:01 0 3.77 0.00 0.31 0.00 0.07 95.85 10:00:01 1 3.17 0.00 0.30 0.03 0.08 96.41 10:00:01 2 3.04 0.00 0.37 0.02 0.08 96.50 10:00:01 3 3.97 0.00 0.30 0.00 0.07 95.67 10:01:01 all 3.62 0.00 0.32 0.02 0.08 95.96 10:01:01 0 2.55 0.00 0.27 0.00 0.08 97.09 10:01:01 1 3.72 0.00 0.32 0.00 0.08 95.88 10:01:01 2 3.27 0.00 0.40 0.07 0.08 96.17 10:01:01 3 4.91 0.00 0.28 0.00 0.08 94.73 10:02:01 all 2.50 0.00 0.29 0.02 0.07 97.11 10:02:01 0 1.95 0.00 0.27 0.00 0.07 97.72 10:02:01 1 1.92 0.00 0.33 0.00 0.08 97.66 10:02:01 2 2.15 0.00 0.32 0.10 0.08 97.34 10:02:01 3 3.96 0.00 0.25 0.00 0.07 95.73 10:03:01 all 2.99 0.00 0.31 0.01 0.10 96.59 10:03:01 0 2.59 0.00 0.27 0.00 0.08 97.06 10:03:01 1 3.36 0.00 0.25 0.02 0.10 96.27 10:03:01 2 2.77 0.00 0.27 0.03 0.10 96.82 10:03:01 3 3.23 0.00 0.45 0.00 0.10 96.23 10:04:01 all 41.39 0.00 1.51 0.08 0.10 56.92 10:04:01 0 43.84 0.00 1.66 0.05 0.10 54.36 10:04:01 1 39.07 0.00 1.60 0.02 0.08 59.23 10:04:01 2 39.58 0.00 1.53 0.08 0.12 58.70 10:04:01 3 43.05 0.00 1.27 0.17 0.08 55.42 10:04:01 CPU %user %nice %system %iowait %steal %idle 10:05:01 all 23.37 0.00 0.81 0.33 0.09 75.40 10:05:01 0 22.71 0.00 0.59 0.00 0.08 76.62 10:05:01 1 22.67 0.00 0.59 0.02 0.10 76.63 10:05:01 2 24.39 0.00 0.88 1.30 0.08 73.34 10:05:01 3 23.72 0.00 1.19 0.00 0.08 75.01 10:06:01 all 10.46 0.00 0.33 0.01 0.08 89.12 10:06:01 0 9.72 0.00 0.35 0.00 0.08 89.84 10:06:01 1 10.71 0.00 0.23 0.02 0.08 88.95 10:06:01 2 10.37 0.00 0.40 0.02 0.08 89.13 10:06:01 3 11.04 0.00 0.34 0.00 0.08 88.54 10:07:01 all 2.70 0.00 0.14 0.02 0.08 97.07 10:07:01 0 3.27 0.00 0.10 0.00 0.07 96.57 10:07:01 1 2.51 0.00 0.08 0.00 0.07 97.34 10:07:01 2 2.87 0.00 0.22 0.07 0.08 96.77 10:07:01 3 2.15 0.00 0.17 0.00 0.08 97.60 10:08:01 all 3.38 0.00 0.21 0.01 0.08 96.33 10:08:01 0 4.02 0.00 0.28 0.00 0.07 95.63 10:08:01 1 3.07 0.00 0.12 0.02 0.08 96.71 
10:08:01 2 3.05 0.00 0.18 0.02 0.07 96.69 10:08:01 3 3.36 0.00 0.25 0.02 0.08 96.29 10:09:01 all 1.89 0.00 0.17 0.02 0.08 97.84 10:09:01 0 2.61 0.00 0.17 0.03 0.08 97.11 10:09:01 1 1.41 0.00 0.17 0.03 0.08 98.31 10:09:01 2 1.79 0.00 0.08 0.02 0.07 98.05 10:09:01 3 1.76 0.00 0.27 0.00 0.07 97.91 10:10:01 all 1.45 0.00 0.14 0.02 0.07 98.32 10:10:01 0 2.83 0.00 0.12 0.03 0.03 96.99 10:10:01 1 1.02 0.00 0.12 0.03 0.08 98.74 10:10:01 2 1.02 0.00 0.27 0.00 0.10 98.61 10:10:01 3 0.90 0.00 0.05 0.00 0.07 98.98 10:11:01 all 2.69 0.00 0.22 0.01 0.08 97.00 10:11:01 0 2.87 0.00 0.15 0.02 0.08 96.88 10:11:01 1 3.06 0.00 0.17 0.02 0.10 96.66 10:11:01 2 2.30 0.00 0.27 0.02 0.07 97.35 10:11:01 3 2.53 0.00 0.28 0.00 0.08 97.10 10:12:01 all 1.29 0.00 0.14 0.01 0.07 98.48 10:12:01 0 1.27 0.00 0.15 0.02 0.07 98.50 10:12:01 1 0.94 0.00 0.17 0.00 0.07 98.83 10:12:01 2 0.69 0.00 0.08 0.03 0.07 99.13 10:12:01 3 2.28 0.00 0.15 0.00 0.08 97.49 10:13:01 all 3.37 0.00 0.21 0.02 0.10 96.31 10:13:01 0 3.03 0.00 0.25 0.05 0.12 96.55 10:13:01 1 3.20 0.00 0.22 0.00 0.10 96.49 10:13:01 2 3.70 0.00 0.13 0.02 0.07 96.09 10:13:01 3 3.54 0.00 0.25 0.00 0.10 96.11 10:14:01 all 39.64 0.00 1.53 0.49 0.09 58.25 10:14:01 0 39.17 0.00 1.59 0.60 0.08 58.55 10:14:01 1 33.11 0.00 1.62 1.13 0.08 64.05 10:14:01 2 42.11 0.00 1.54 0.10 0.10 56.15 10:14:01 3 44.21 0.00 1.36 0.13 0.08 54.21 10:15:01 all 23.41 0.00 0.66 0.65 0.08 75.19 10:15:01 0 22.94 0.00 0.65 0.43 0.07 75.91 10:15:01 1 21.61 0.00 0.83 2.13 0.10 75.32 10:15:01 2 23.81 0.00 0.53 0.03 0.07 75.56 10:15:01 3 25.30 0.00 0.64 0.00 0.08 73.98 10:15:01 CPU %user %nice %system %iowait %steal %idle 10:16:01 all 8.88 0.00 0.68 0.03 0.08 90.33 10:16:01 0 9.03 0.00 0.68 0.00 0.10 90.19 10:16:01 1 8.26 0.00 0.62 0.05 0.07 91.01 10:16:01 2 9.05 0.00 0.80 0.05 0.08 90.02 10:16:01 3 9.19 0.00 0.60 0.02 0.08 90.11 10:17:01 all 48.07 0.00 1.33 0.34 0.10 50.15 10:17:01 0 50.10 0.00 1.34 0.02 0.10 48.44 10:17:01 1 42.00 0.00 1.27 1.35 0.12 55.26 10:17:01 2 50.46 0.00 1.47 0.00 0.10 47.97 10:17:01 3 49.74 0.00 1.24 0.00 0.10 48.92 10:18:01 all 2.76 0.00 0.18 0.03 0.08 96.95 10:18:01 0 2.95 0.00 0.15 0.00 0.08 96.81 10:18:01 1 3.18 0.00 0.25 0.13 0.08 96.35 10:18:01 2 2.33 0.00 0.08 0.00 0.08 97.50 10:18:01 3 2.56 0.00 0.22 0.00 0.08 97.14 10:19:01 all 2.82 0.00 0.21 0.01 0.09 96.87 10:19:01 0 2.29 0.00 0.15 0.00 0.08 97.48 10:19:01 1 4.11 0.00 0.25 0.03 0.10 95.51 10:19:01 2 2.77 0.00 0.22 0.00 0.10 96.91 10:19:01 3 2.12 0.00 0.22 0.00 0.07 97.60 10:20:01 all 1.87 0.00 0.15 0.01 0.08 97.89 10:20:01 0 2.21 0.00 0.13 0.00 0.07 97.59 10:20:01 1 1.75 0.00 0.17 0.05 0.10 97.94 10:20:01 2 1.85 0.00 0.12 0.00 0.08 97.95 10:20:01 3 1.69 0.00 0.17 0.00 0.07 98.08 10:21:01 all 2.55 0.00 0.19 0.01 0.08 97.17 10:21:01 0 2.48 0.00 0.20 0.00 0.07 97.25 10:21:01 1 2.59 0.00 0.25 0.05 0.07 97.04 10:21:01 2 2.41 0.00 0.15 0.00 0.08 97.35 10:21:01 3 2.73 0.00 0.17 0.00 0.08 97.02 10:22:01 all 3.15 0.00 0.21 0.01 0.09 96.54 10:22:01 0 2.81 0.00 0.22 0.00 0.07 96.91 10:22:01 1 2.96 0.00 0.25 0.02 0.12 96.65 10:22:01 2 3.81 0.00 0.18 0.02 0.08 95.90 10:22:01 3 3.02 0.00 0.18 0.00 0.10 96.69 10:23:01 all 3.28 0.00 0.20 0.04 0.09 96.39 10:23:01 0 2.73 0.00 0.23 0.00 0.10 96.94 10:23:01 1 3.30 0.00 0.18 0.07 0.08 96.36 10:23:01 2 3.25 0.00 0.15 0.07 0.08 96.45 10:23:01 3 3.83 0.00 0.25 0.02 0.08 95.82 10:24:01 all 55.26 0.00 1.79 0.27 0.10 42.57 10:24:01 0 59.35 0.00 1.55 0.38 0.12 38.59 10:24:01 1 53.31 0.00 1.91 0.07 0.08 44.63 10:24:01 2 57.01 0.00 1.94 0.05 0.10 40.90 10:24:01 3 51.37 0.00 1.76 0.59 0.12 46.17 
10:25:01 all 8.10 0.00 0.39 0.04 0.08 91.38 10:25:01 0 8.23 0.00 0.30 0.00 0.07 91.40 10:25:01 1 8.21 0.00 0.33 0.00 0.10 91.36 10:25:01 2 8.21 0.00 0.58 0.00 0.10 91.11 10:25:01 3 7.76 0.00 0.35 0.15 0.07 91.67 10:26:01 all 10.15 0.00 0.52 0.03 0.10 89.21 10:26:01 0 9.88 0.00 0.50 0.00 0.10 89.53 10:26:01 1 10.36 0.00 0.53 0.00 0.10 89.01 10:26:01 2 9.80 0.00 0.53 0.00 0.08 89.58 10:26:01 3 10.55 0.00 0.50 0.12 0.10 88.74 10:26:01 CPU %user %nice %system %iowait %steal %idle 10:27:01 all 5.28 0.00 0.42 0.01 0.08 94.21 10:27:01 0 5.37 0.00 0.32 0.00 0.08 94.23 10:27:01 1 5.26 0.00 0.47 0.00 0.08 94.19 10:27:01 2 5.43 0.00 0.42 0.00 0.08 94.07 10:27:01 3 5.05 0.00 0.47 0.05 0.08 94.35 10:28:01 all 0.61 0.00 0.19 0.02 0.08 99.11 10:28:01 0 0.87 0.00 0.17 0.03 0.08 98.85 10:28:01 1 0.32 0.00 0.08 0.00 0.05 99.55 10:28:01 2 0.60 0.00 0.18 0.02 0.08 99.12 10:28:01 3 0.63 0.00 0.32 0.03 0.10 98.91 10:29:01 all 0.94 0.00 0.18 0.01 0.09 98.78 10:29:01 0 2.48 0.00 0.21 0.00 0.08 97.22 10:29:01 1 0.43 0.00 0.20 0.00 0.12 99.25 10:29:01 2 0.28 0.00 0.15 0.00 0.07 99.50 10:29:01 3 0.53 0.00 0.15 0.05 0.10 99.17 10:30:01 all 0.88 0.00 0.21 0.01 0.08 98.82 10:30:01 0 2.36 0.00 0.31 0.00 0.10 97.23 10:30:01 1 0.38 0.00 0.20 0.00 0.07 99.35 10:30:01 2 0.35 0.00 0.18 0.00 0.10 99.37 10:30:01 3 0.42 0.00 0.13 0.03 0.07 99.35 10:31:01 all 1.05 0.00 0.22 0.01 0.09 98.64 10:31:01 0 0.67 0.00 0.22 0.00 0.10 99.02 10:31:01 1 0.60 0.00 0.28 0.00 0.08 99.03 10:31:01 2 2.38 0.00 0.20 0.00 0.08 97.34 10:31:01 3 0.52 0.00 0.17 0.03 0.08 99.20 10:32:01 all 0.79 0.00 0.25 0.01 0.08 98.87 10:32:01 0 0.22 0.00 0.15 0.00 0.07 99.57 10:32:01 1 0.30 0.00 0.18 0.00 0.07 99.45 10:32:01 2 2.06 0.00 0.33 0.00 0.10 97.51 10:32:01 3 0.57 0.00 0.33 0.05 0.08 98.97 10:33:01 all 0.41 0.00 0.21 0.01 0.09 99.28 10:33:01 0 0.42 0.00 0.23 0.00 0.08 99.27 10:33:01 1 0.33 0.00 0.10 0.00 0.08 99.48 10:33:01 2 0.64 0.00 0.28 0.00 0.10 98.98 10:33:01 3 0.27 0.00 0.22 0.05 0.08 99.38 10:34:01 all 22.60 0.00 1.50 0.56 0.08 75.26 10:34:01 0 17.14 0.00 0.97 0.84 0.07 80.99 10:34:01 1 16.57 0.00 1.69 0.22 0.07 81.45 10:34:01 2 31.74 0.00 1.72 0.17 0.10 66.27 10:34:01 3 24.96 0.00 1.62 1.00 0.08 72.33 Average: all 17.65 0.17 0.94 1.29 0.09 79.85 Average: 0 17.80 0.20 0.93 0.89 0.09 80.09 Average: 1 17.75 0.14 0.98 1.48 0.09 79.57 Average: 2 17.56 0.19 0.95 1.34 0.09 79.87 Average: 3 17.50 0.17 0.92 1.44 0.09 79.88
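Note (not part of the Jenkins job output): a minimal Python sketch showing how sar-style CPU lines like the samples above could be parsed to recompute per-CPU averages, assuming the field layout "time CPU %user %nice %system %iowait %steal %idle"; the function name and sample data are illustrative only.

    # Illustrative sketch, not part of this build; assumes 'sar -u -P ALL'-style rows.
    from collections import defaultdict

    def summarize_sar_idle(lines):
        """Return {cpu: mean %idle} from sar-style CPU utilization lines."""
        totals = defaultdict(float)
        counts = defaultdict(int)
        for line in lines:
            parts = line.split()
            # Skip repeated headers ('... CPU %user ...') and the trailing 'Average:' rows.
            if len(parts) != 8 or parts[1] == "CPU" or parts[0] == "Average:":
                continue
            cpu = parts[1]          # 'all', '0', '1', ...
            idle = float(parts[7])  # %idle is the last column
            totals[cpu] += idle
            counts[cpu] += 1
        return {cpu: totals[cpu] / counts[cpu] for cpu in totals}

    sample = [
        "09:21:01 all 4.92 0.00 0.36 0.01 0.10 94.61",
        "09:22:01 all 33.02 0.00 1.32 0.34 0.10 65.22",
    ]
    print(summarize_sar_idle(sample))  # {'all': 79.915}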